By Debra Kahn
August 29, 2025
We are privileged these days to still be able to roll our eyes at fakery created by generative AI. Think of the blurred hands and misaligned clothes in the Princess of Wales’ infamous 2024 Mother’s Day family photo. More recent and brazen examples include the fake citations found in lawyers’ court filings and even in the first version of the U.S. government’s 2025 MAHA (Make America Healthy Again) report.
But we won’t have that easy eye-roll privilege for long.
The recent iterations of generative AI (GenAI) models, such as ChatGPT, Claude, and Google’s Gemini, include more sophisticated reasoning and huge context windows, hundreds of times the size of the original ChatGPT’s. Generally, the longer the context window, the better a model performs, according to quiq.com.
The omnipresence of AI, now coupled with this model power, can compound inaccurate information (and misinformation) a thousand-fold. This endangers the whole concept of truth in modern society, warns my colleague Noz Urbina.
Given this capability, what are reasonable steps an individual, an organization, and the content profession can take to guard against even the subtlest “AI slop”?
Understanding Key Terms
Let’s start by reviewing the key terms that describe the dangers of unmonitored generative AI content.
AI slop originally referred to “low-effort, poor-quality, mass-produced AI-generated content,” according to libryo.com. It encompasses those obviously error-filled efforts like the photo I mentioned in the first paragraph. But it can also refer to the “buzzword salad” that leaves your readers scratching their heads.
The libryo.com authors provide this example of buzzword-filled slop:
“Embarking on a journey through the dynamic landscape of AI, it’s vital to delve into the vibrant tapestry of its capabilities. Arguably, the most pivotal advancements come from comprehensive solutions that seamlessly elevate user experience.”
AI hallucination refers to content that is incorrect or simply made up. The latter type of hallucination is generally referred to as a confabulation. It occurs when AI gives “inconsistent wrong answers,” according to TIME magazine author Billy Perrigo. Confabulations can happen when an AI model supplies an answer even when it can’t find one, simply to satisfy and complete the requested task. False journal and case-law citations are examples of confabulations.
AI model collapse is the compounding of all these errors and more. According to The Register’s Steven J. Vaughan-Nichols, AI model collapse occurs when an AI model becomes “poisoned” with its own distortion of reality. Vaughan-Nichols writes, “This occurs because errors compound across successive model generations, leading to distorted data distributions and ‘irreversible defects’ in performance.” He identifies three causes:
- Error accumulation or “drift”
- Loss of “tail” (rare) training data
- Feedback loops that reinforce narrow patterns
Vaughan-Nichols’ warning is a dire one: If the trend isn’t reversed, generative AI models might one day become totally useless. My colleague Noz Urbina echoes this warning for the entirety of digitized human knowledge on his website Truth Collapse.
Let Us Be Wary
What Individuals (You) Can Do
1) Educate Yourself About Generative AI and the Available Tools
Understanding GenAI is foundational. Tools like ChatGPT, Perplexity, and DALL·E generate content based on patterns in data, not genuine understanding. They can produce impressive outputs but also fabricate information or perpetuate biases.
Stay informed about how these tools work, their strengths, and their limitations. Resources like MIT Technology Review or the AI Literacy Project can be valuable starting points.
Understand the differences among GenAI tools. Not all AI tools are created equal. Some are designed for conversational tasks, others for image generation, coding assistance, or data analysis. The AI Critique website provides a recent comparison of the most popular AI agents; scroll down on that page to read the comparative analysis.
Even within the same category, tools can vary in their outputs and reliability. AI leaderboards have emerged this year to compare various large language models (LLMs). For example, AlpacaEval compares how well they follow instructions.
2) Use GenAI Tools Purposefully
GenAI can be a helpful partner when used with intention, not as a substitute for your own thinking. Before you use a tool, be clear on your goal: are you brainstorming, structuring, refining, or ideating? As professional guidelines from UMU note, “It’s not about cutting corners. It’s about making your content work smarter across every channel” (blog.umu.com).
Writing coach Allison K. Williams puts it plainly, “AI is a tool… dependent on the human user” and its output is most valuable when treated as “a smarmy first draft” that gets rewritten with human insight and voice (Brevity Blog, 2025). When you use GenAI purposefully, it enhances your process without eroding your credibility.
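For instance, a purposeful brainstorming prompt might look like this (a hypothetical example; adapt the specifics to your own project):

    Goal: Brainstorm, don't draft. Suggest ten working titles for a blog post
    about labeling AI-generated content for enterprise readers.
    Constraints: under 60 characters each; plain language; no buzzwords.
    I will select and rewrite the strongest candidates myself.

Stating the goal, the constraints, and your own role up front keeps the tool in the assistant’s seat.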
3) Self-Regulate Ethical Use of GenAI Content
As you work, ask yourself:
- When is it appropriate to use AI-generated content?
- How do I ensure the accuracy and integrity of such content?
- Am I transparent about the use of AI in my work?
4) Label GenAI Content Appropriately
5) Be a Responsible and Responsive Consumer
As consumers, we must critically evaluate the content we encounter. Be vigilant for signs of AI-generated misinformation, copyright infringement, or bias. If something seems off, investigate further before accepting or sharing it.
Be mindful of the sheer amount of energy you use when you engage with an AI agent. A recent McKinsey study projects that, by 2030, data centers worldwide will require $6.7 trillion in investment to keep pace with AI processing loads. That represents more than a threefold increase in AI capacity over the next five years, with data centers’ share of demand on the electrical grid growing to 8 percent (up from approximately 1 percent) over the next 15 years.
Engaging ethically with AI agents and critically examining the content they serve helps maintain the integrity of our information ecosystems.
What Organizations Can Do
1) Create and Enforce Policies About Use and Labeling
- Acceptable use cases for AI-generated content
- Requirements for human oversight and review
- Standards for transparency and labeling
- Guardrails against the misuse of AI agents and related tools
2) Develop a GenAI Infrastructure
Implementing GenAI responsibly requires the right infrastructure to ensure consistency, policy compliance, and security. Some key elements include (with thanks to my colleague Scott Abel for some of these ideas):
- Prompt management tools to help create, manage, repurpose, nest, localize, and augment prompts
- Internally bounded generative AI tools tailored to the organization’s specific needs
- A private library of content on which to train your Large Language Models (LLMs)
- Retrieval-Augmented Generation (RAG) structures for grounded outputs (see the sketch below)
- A componentized content management system (CCMS) and workable content architecture (including templates)
- Style-checking and accuracy-checking tools
- A content strategy
Not all elements of an infrastructure need to be in place to start. Engage IT, tool, and content experts to create a plan. For some suggested prompt formats, read my blog post AI Prompting for Bloggers: My Trial-and-Error Discoveries.
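To make the RAG element concrete, here is a minimal sketch in Python. The library entries, the naive keyword-overlap retriever, and the prompt format are all illustrative stand-ins; a production system would use an embedding index and an internally bounded model endpoint.

    # Minimal RAG sketch: ground generated answers in a private content library.
    # The library entries and prompt format are hypothetical placeholders.
    PRIVATE_LIBRARY = [
        {"id": "kb-101", "text": "Product X supports SSO via SAML 2.0 and OIDC."},
        {"id": "kb-205", "text": "Release notes require review by a human editor."},
    ]

    def retrieve(query: str, k: int = 2) -> list[dict]:
        """Rank library entries by naive keyword overlap with the query."""
        words = set(query.lower().split())
        scored = [(len(words & set(doc["text"].lower().split())), doc)
                  for doc in PRIVATE_LIBRARY]
        return [doc for score, doc in sorted(scored, key=lambda s: -s[0]) if score][:k]

    def build_grounded_prompt(query: str) -> str:
        """Attach retrieved passages so the model answers only from approved content."""
        context = "\n".join(f"[{d['id']}] {d['text']}" for d in retrieve(query))
        return ("Answer using ONLY the sources below and cite each [id] you use.\n"
                f"Sources:\n{context}\n\nQuestion: {query}")

    print(build_grounded_prompt("Does Product X support SSO?"))

The grounding step is the point: the model is instructed to answer only from approved sources and to cite them, which makes confabulations far easier to catch in review.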
3) Enforce a Content Strategy
An organization-wide content strategy should bring some sanity to your content-generating efforts. This strategy should maintain usable user profiles, define workflows for AI-assisted content creation, establish review processes (including archival processes), and ensure adherence to business goals and standards, including consistency in voice and tone. It might also contain ontologies and/or knowledge graphs, as well as taxonomies. Most importantly, your content strategy should dovetail with the organization’s AI policies.
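As a tiny illustration of the difference (the terms below are made up), a taxonomy is a hierarchy of categories, while a knowledge graph records typed relationships between concepts:

    # Illustrative only: a taxonomy is hierarchical...
    taxonomy = {
        "Content": {
            "Product Docs": ["Release Notes", "API Reference"],
            "Marketing": ["Blog Posts", "Case Studies"],
        }
    }

    # ...while a knowledge graph captures (subject, predicate, object) relationships.
    knowledge_graph = [
        ("Release Notes", "describes", "Product X"),
        ("API Reference", "written-for", "Developers"),
    ]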
Integrating generative-AI considerations into your content strategy helps ensure ongoing coherence and accountability. To explore how generative AI may impact content strategy, listen to the recent Coffee and Content session with Michael Andrews, How Artificial Intelligence is Impacting Content Strategy.
4) Include Quality Checks in Development and Release Processes
Your organization’s AI-generated content should undergo rigorous quality assurance before release. Your processes should include the following:
- Fact-checking for accuracy
- Reviewing for bias or inappropriate content
- Checking for alignment with content strategy and brand voice
- Editing for alignment with organizational style guides and standards
To ensure consistency, your organizational content standards should be documented and available to all who create content. For a check of content accuracy, download my content accuracy checklist. For a more encompassing set of quality checks, review Lizzie Bruce’s free AI content quality checklist. Incorporate these checks into your standard workflows to maintain content integrity.
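Some of these checks can be partially automated before human review. The Python sketch below is a hypothetical triage script: the buzzword list borrows phrases from the slop example quoted earlier, and APPROVED_CITATIONS stands in for your organization’s verified reference list. It flags suspects for an editor; it does not replace fact-checking.

    import re

    # Hypothetical pre-review triage for AI-assisted drafts.
    BUZZWORDS = ["dynamic landscape", "vibrant tapestry", "delve into",
                 "seamlessly elevate", "embarking on a journey"]
    APPROVED_CITATIONS = {"Smith 2023", "Doe 2024"}  # stand-in reference list

    def triage(draft: str) -> list[str]:
        """Flag buzzword salad and (Author Year) citations not on the approved list."""
        flags = [f"Buzzword salad: '{p}'" for p in BUZZWORDS if p in draft.lower()]
        for cite in re.findall(r"\(([A-Z][a-z]+ \d{4})\)", draft):
            if cite not in APPROVED_CITATIONS:
                flags.append(f"Unverified citation: ({cite})")
        return flags

    draft = "Embarking on a journey through the dynamic landscape of AI (Jones 2025)."
    for flag in triage(draft):
        print(flag)

Anything flagged goes back to a human editor; automated checks narrow the search, but people make the call.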
5) Institute Continuous Improvement
What the Profession Can Do
1) Learn the Pitfalls of AI and Best Practices
2) Advocate for Access to Generative AI Model Scores
3) Support Development of Open-Source Tools
4) Develop a Code of Ethics for Generative AI Content
5) Support Thoughtful Legislation
Final Thoughts
Debra Kahn, CPTC, MA, PMP, is an AI-aware content and project leader. She has 20 years of experience in product and content development and management, including with Oracle Corporation. In 2013, Debra founded DK Consulting of Colorado to assist organizations in planning, developing, and maintaining high-quality, high-performing content. Her clients include small and large organizations as well as fellow consultants.
Debra holds an MA in English and has taught communication skills at two universities in Colorado. Her involvement in professional organizations includes writing and speaking about product project management, content strategy, and GenAI. Her most popular topics focus on the intersection of project management and content management.
Find Debra Here:
debra@dkconsultingcolorado.com
LinkedIn: www.linkedin.com/in/debkahn
Twitter/X: @dkconsultco