Safeguarding Content Quality Against AI “Slop”

By Debra Kahn
August 29, 2025

These days we are still privileged to be able to roll our eyes at fakery created by generative AI. Think of the blurred hands and misaligned clothes in the Princess of Wales’ infamous 2024 Mother’s Day family photo. More recent and brazen examples include the fake citations in some lawyers’ court filings and even in the first version of the U.S. government’s 2025 MAHA (Make America Healthy Again) report.

But we won’t have that easy eye-roll privilege for long.

Recent iterations of generative AI (GenAI) models, such as ChatGPT, Claude, and Google’s Gemini, offer even more sophisticated reasoning and huge context windows, thousands of times the size of the original ChatGPT release’s. Generally, the longer the context window, the better the model can perform, according to quiq.com.

Omnipresent AI now has the reach, and the model power, to compound inaccurate information (and misinformation) a thousand-fold. This endangers the whole concept of truth in modern society, warns my colleague Noz Urbina.

Given this capability, what are reasonable steps an individual, an organization, and the content profession can take to guard against even the subtlest “AI slop”?

Understanding Key Terms

Let’s start with the key terms that describe the dangers of unmonitored generative AI content.

AI slop originally referred to “low-effort, poor-quality, mass-produced AI-generated content,” according to libryo.com. It encompasses those obviously error-filled efforts like the photo I mentioned in the first paragraph. But it can also refer to the “buzzword salad” that leaves your readers scratching their heads.

The libryo.com authors provide this example of buzzword-filled slop:

“Embarking on a journey through the dynamic landscape of AI, it’s vital to delve into the vibrant tapestry of its capabilities. Arguably, the most pivotal advancements come from comprehensive solutions that seamlessly elevate user experience.”

AI hallucination refers to content that is incorrect or simply made up. The latter type of hallucination is generally called a confabulation, which occurs when AI gives “inconsistent wrong answers,” according to TIME magazine author Billy Perrigo. Confabulations can happen when an AI model supplies an answer even when it can’t find one, simply to satisfy and complete the requested task. False journal and case-law citations are examples of confabulations.

AI model collapse is the compounding of all these errors and more. According to The Register’s Steven J. Vaughan-Nichols, AI model collapse occurs when an AI model becomes “poisoned” with its own distortion of reality. Vaughan-Nichols writes, “This occurs because errors compound across successive model generations, leading to distorted data distributions and ‘irreversible defects’ in performance.” He identifies three causes:

  • Error accumulation or “drift”
  • Loss of rare (“tail”) training data
  • Feedback loops that reinforce narrow patterns

Vaughan-Nichols’ warning is a dire one: if the trend isn’t reversed, generative AI models might one day become totally useless. My colleague Noz Urbina extends this warning to the entirety of digitized human knowledge on his website Truth Collapse.
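
To see why those feedback loops are so corrosive, consider a toy simulation (a deliberately simplified illustration of my own, not Vaughan-Nichols’ analysis or any vendor’s pipeline). Each “generation” fits a simple statistical model to a small sample drawn from the previous generation’s model, the equivalent of training on your own output, and the distribution’s tails tend to thin out:

    import random
    import statistics

    # Toy illustration of model collapse: each generation fits a normal
    # distribution to a small sample drawn from the previous generation's
    # model; that is, it "trains" on purely synthetic data. Sampling
    # error compounds, and the fitted spread (the distribution's tails)
    # tends to shrink across generations.

    random.seed(7)
    mu, sigma = 0.0, 1.0  # generation 0 is fit to the "real" data
    for gen in range(1, 51):
        synthetic = [random.gauss(mu, sigma) for _ in range(20)]
        mu = statistics.fmean(synthetic)     # refit on the model's own output
        sigma = statistics.stdev(synthetic)
        if gen % 10 == 0:
            print(f"generation {gen}: mean={mu:+.3f}, stdev={sigma:.3f}")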

Let Us Be Wary

The reality is that the use of generative AI models has become a popular shortcut for completing all sorts of tasks, including content creation and revision. We are pressured by the media, our peers, and even our bosses to put generative AI to effective use so that we can get to market faster, engage more potential customers, and beat the competition. What, then, should constitute good use? I have some thoughts.

What Individuals (You) Can Do 

To start, decide whether you even want to explore the AI landscape for its potential uses. Some creatives prefer to stand aloof, and that is OK, too. If you decide to dip a toe in, I suggest the following. (Some sentences were generated by ChatGPT.)

1) Educate Yourself About Generative AI and the Available Tools

Understanding GenAI is foundational. Models like ChatGPT, Perplexity, and DALL·E generate content based on patterns in data—not genuine understanding. They can produce impressive outputs but also fabricate information or perpetuate biases.

Stay informed about how these tools work, their strengths, and their limitations. Resources like MIT Technology Review or the AI Literacy Project can be valuable starting points.

Understand the differences among GenAI tools. Not all AI tools are created equal: some are designed for conversational tasks, others for image generation, coding assistance, or data analysis. The AI Critique website provides a recent comparison of the most popular AI agents; scroll down on that page to read the comparative analysis.

Even within the same category, tools can vary in their outputs and reliability. AI leaderboards have emerged to compare large language models; AlpacaEval, for example, compares how well they follow instructions.

2) Use GenAI Tools Purposefully

GenAI can be a helpful partner when used with intention, not as a substitute for your own thinking. Before you use a tool, be clear on your goal: are you brainstorming, structuring, refining, or ideating? As professional guidelines from UMU note, “It’s not about cutting corners. It’s about making your content work smarter across every channel” (blog.umu.com).

Writing coach Allison K. Williams puts it plainly: “AI is a tool… dependent on the human user,” and its output is most valuable when treated as “a smarmy first draft” that gets rewritten with human insight and voice (Brevity Blog, 2025). When you use GenAI purposefully, it enhances your process without eroding your credibility.

3) Self-Regulate Ethical Use of GenAI Content

In the absence of universal guidelines, personal ethics become paramount. Reflect on questions such as:
  • When is it appropriate to use AI-generated content?
  • How do I ensure the accuracy and integrity of such content?
  • Am I transparent about the use of AI in my work?
Developing a personal code of ethics can guide your responsible AI usage. Professor Kevin Hartman’s 2024 blog post provides some starting points.

4) Label GenAI Content Appropriately

Transparency fosters trust. If AI has played a significant role in creating content, disclose it. Simple statements like “This content was generated with the assistance of AI” can suffice. Such labeling helps audiences assess the content’s origin and apply appropriate scrutiny.
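
If your publishing flow is scripted, the label can even be applied automatically. Here is a minimal sketch in Python; the helper function and the disclosure wording are my own illustration, not an industry standard:

    # A minimal sketch of automatic AI-disclosure labeling. The wording
    # and this helper are illustrative, not an industry standard.
    DISCLOSURE = "This content was generated with the assistance of AI."

    def label_if_ai_assisted(content: str, ai_assisted: bool) -> str:
        """Append a disclosure footer to AI-assisted content before publishing."""
        return f"{content}\n\n{DISCLOSURE}" if ai_assisted else content

    print(label_if_ai_assisted("Draft paragraph...", ai_assisted=True))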

5) Be a Responsible and Responsive Consumer

As consumers, we must critically evaluate the content we encounter. Be vigilant for signs of AI-generated misinformation, copyright infringement, or bias. If something seems off, investigate further before accepting or sharing it.

Be mindful of the sheer amount of energy you use when you engage with an AI agent. A recent McKinsey study projects that, by 2030, data centers worldwide will require $6.7 trillion in investment to keep pace with the demand for AI processing loads. That represents more than a threefold increase in AI capacity over the next five years, with data centers’ share of demand on the electrical grid growing to 8 percent (up from approximately 1 percent) over the next 15 years.

Engaging ethically with AI agents and critically examining the content they serve helps maintain the integrity of our information ecosystems.

What Organizations Can Do 

For those of us who work with teams of creatives, consider working with your organization’s leadership to develop policies, guidelines, and infrastructure to help ensure AI is used ethically, appropriately, and securely. (Some sentences were generated with ChatGPT.)

1) Create and Enforce Policies About Use and Labeling

Organizations should establish a policy and clear guidelines on generative AI usage before employees begin using AI tools regularly. Policies should address the following:
  • Acceptable use cases for AI-generated content
  • Requirements for human oversight and review
  • Standards for transparency and labeling
  • Guardrails against the misuse of AI agents and related tools
The policy should be reviewed and updated as new questions and concerns emerge.

2) Develop a GenAI Infrastructure

Implementing GenAI responsibly requires the right infrastructure to ensure consistency, policy compliance, and security. Some key elements include (with thanks to my colleague Scott Abel for some of these ideas):

  • Prompt management tools to help create, manage, repurpose, nest, localize, and augment prompts
  • Internally bounded generative AI tools tailored to the organization’s specific needs
  • A private library of content on which to train your Large Language Models (LLMs)
  • Retrieval-Augmented Generation (RAG) structures for grounded outputs (see the sketch below)
  • A componentized content management system (CCMS) and workable content architecture (including templates)
  • Style-checking and accuracy-checking tools
  • A content strategy

Not all elements of an infrastructure need to be in place to start. Engage IT, tool, and content experts to create a plan. For some suggested prompt formats, read my blog post AI Prompting for Bloggers: My Trial-and-Error Discoveries.
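
To make one of these elements concrete, here is a minimal Python sketch of the RAG pattern. The three-item “library,” the word-overlap retriever, and the prompt wording are all stand-ins of my own; a production system would use your private content library, a vector index, and an approved model endpoint.

    # A minimal sketch of Retrieval-Augmented Generation (RAG): retrieve
    # passages from a private, trusted library and place them in the
    # prompt so the model answers from approved content instead of
    # guessing. The tiny library and word-overlap retriever are stand-ins.

    LIBRARY = [
        "Our style guide requires sentence-case headings.",
        "Product X supports single sign-on via SAML 2.0.",
        "All AI-assisted content must carry a disclosure label.",
    ]

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Rank library passages by word overlap with the query (toy retriever)."""
        q_words = set(query.lower().split())
        ranked = sorted(
            LIBRARY,
            key=lambda p: len(q_words & set(p.lower().split())),
            reverse=True,
        )
        return ranked[:k]

    def build_grounded_prompt(query: str) -> str:
        """Assemble a prompt that instructs the model to stay within sources."""
        context = "\n".join(f"- {p}" for p in retrieve(query))
        return (
            "Answer using ONLY the sources below. If they do not contain "
            f"the answer, say so.\n\nSources:\n{context}\n\nQuestion: {query}"
        )

    print(build_grounded_prompt("Does Product X support single sign-on?"))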

3) Enforce a Content Strategy

An organization-wide content strategy should bring some sanity to your content-generating efforts. This strategy should maintain usable user profiles, define workflows for AI-assisted content creation, establish review processes (including archival processes), and ensure adherence to business goals and standards, including consistency in voice and tone. It might also contain ontologies and/or knowledge graphs, as well as taxonomies. Most importantly, your content strategy should dovetail with the organization’s AI policies.
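
As a deliberately tiny illustration of what a taxonomy can look like as data, here is a sketch with invented categories; a real taxonomy would live in your CCMS alongside your other content models:

    # A toy content taxonomy expressed as data; the categories are
    # invented for illustration, and a real taxonomy would live in
    # your CCMS.
    TAXONOMY: dict[str, list[str]] = {
        "Product Docs": ["Installation", "Configuration", "Troubleshooting"],
        "Marketing": ["Blog Posts", "Case Studies", "Landing Pages"],
        "Support": ["FAQs", "Known Issues"],
    }

    def tags_for(category: str) -> list[str]:
        """Return the controlled tags a writer (or an AI prompt) may use."""
        return TAXONOMY.get(category, [])

    print(tags_for("Support"))  # ['FAQs', 'Known Issues']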

Integrating generative-AI considerations into your content strategy helps ensure ongoing coherence and accountability. To explore how generative AI may impact content strategy, listen to the recent Coffee and Content session with Michael Andrews, How Artificial Intelligence is Impacting Content Strategy.

4) Include Quality Checks in Development and Release Processes

Your organization’s AI-generated content should undergo rigorous quality assurance before release. Your processes should include the following:

  • Fact-checking for accuracy
  • Reviewing for bias or inappropriate content
  • Checking for alignment with content strategy and brand voice
  • Editing for alignment with organizational style guides and standards

To ensure consistency, your organizational content standards should be documented and available to all who create content. For a check of content accuracy, download my content accuracy checklist. For a more encompassing set of quality checks, review Lizzie Bruce’s free AI content quality checklist. Incorporate these checks into your standard workflows to maintain content integrity.
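
Style checking, in particular, is easy to automate in a release pipeline. Here is a toy Python sketch that flags “buzzword salad” phrases of the kind quoted earlier in this article; the phrase list is illustrative and would, in practice, come from your documented style guide:

    import re

    # A toy automated style check: flag "buzzword salad" phrases before
    # a human reviewer sees the draft. The phrase list is illustrative;
    # a real one would come from your documented style guide.

    BUZZWORDS = [
        "dynamic landscape",
        "vibrant tapestry",
        "delve into",
        "seamlessly elevate",
    ]

    def style_check(text: str) -> list[str]:
        """Return the flagged phrases found in the draft (case-insensitive)."""
        return [
            phrase for phrase in BUZZWORDS
            if re.search(re.escape(phrase), text, flags=re.IGNORECASE)
        ]

    draft = "Embarking on a journey through the dynamic landscape of AI..."
    print(style_check(draft))  # ['dynamic landscape']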

5) Institute Continuous Improvement

AI tools and best practices evolve rapidly. Regularly assess and update your AI infrastructure, policies, and training programs. Solicit feedback from users and stakeholders to identify areas for enhancement. A commitment to continuous improvement ensures your organization adapts effectively to the changing AI landscape.

What the Profession Can Do 

Content professionals have a duty to advocate for the responsible use of AI for content generation, revision, and management. Below are my suggestions for ways in which the content profession can leverage its collective power. (Some sentences were generated by ChatGPT.)

1) Learn the Pitfalls of AI and Best Practices

Professional bodies should promote education on generative AI’s limitations and ethical considerations. Workshops, webinars, and resources can equip communicators with the knowledge to use generative AI responsibly. Understanding AI’s pitfalls can help prevent misuse and guard the quality of content in our systems.

2) Demand Access to Generative AI Model Scores

Individuals and groups can advocate for transparency from AI developers regarding model performance metrics, such as accuracy, bias, and reliability. For additional information on performance scores for LLMs, review quiq.com’s article, How to Evaluate Generated Text and Model Performance. Access to these scores enables informed decisions about tool selection and usage.

3) Support Development of Open-Source Tools

Advocacy groups can support the development and adoption of open-source AI tools. These tools allow for greater transparency, customization, and community-driven improvements, fostering ethical and effective AI integration.

4) Develop a Code of Ethics for Generative AI Content

Professional bodies should build a code of ethics that provides a shared framework for responsible AI usage. Such a code should address issues like transparency, accountability, and the preservation of human oversight.

5) Advocate for Thoughtful Legislation

Individuals and groups can engage with policymakers to develop legislation that balances innovation with ethical considerations. Laws should promote transparency, protect against misuse, and ensure equitable access to AI technologies.

Final Thoughts 

The rise of generative AI presents both opportunities and challenges for communicators. By taking proactive steps at the individual, organizational, and professional levels, we can harness AI’s benefits while safeguarding the quality and integrity of content. Let’s commit to thoughtful, ethical, and informed AI integration in our communication practices.

Debra Kahn, CPTC, MA, PMP, is an AI-aware content and project leader. She has 20 years of experience in product and content development and management, including with Oracle Corporation. In 2013, Debra founded DK Consulting of Colorado to assist organizations in planning, developing, and maintaining high-quality, high-performing content. Her clients include small and large organizations as well as fellow consultants.

Debra holds an MA in English and has taught communication skills at two universities in Colorado. Her involvement in professional organizations includes writing and speaking about product project management, content strategy, and GenAI. Her most popular topics focus on the intersection of project management and content management.

Find Debra Here:
debra@dkconsultingcolorado.com
LinkedIn: www.linkedin.com/in/debkahn
Twitter/X: @dkconsultco
