By Markus Kopko, PgMP®, PMP®, PMI-CPMAI™
Your project team is already using AI. The question is whether you know about it.
According to a 2025 Cybernews survey, 59% of employees use AI tools their employers have not approved. IBM’s 2025 Cost of a Data Breach report found that 20% of organizations experienced a breach linked to shadow AI, adding $200,000 to the average cost per incident. And MIT’s Project NANDA revealed a telling disconnect: while only 40% of companies have official AI subscriptions, employees at over 90% of organizations actively use personal AI tools for work.
For project managers, this is not an abstract IT security issue. It is a governance challenge at the core of how teams plan, communicate, and deliver. Every risk register drafted with an unapproved AI tool, every stakeholder update generated without data oversight creates a governance gap that policies alone cannot close.
The Policy Illusion
Most organizations respond to shadow AI the same way they responded to shadow IT: they write a policy. And most of those policies fail to change behavior.
The data supports this. Only 15% of organizations have updated their Acceptable Use Policies to include AI-specific guidelines. Even where policies exist, 50% of employees report that their organization’s AI use guidelines are unclear. And the compliance gap grows wider with every new tool: Netskope tracked over 1,550 distinct generative AI SaaS applications by mid-2025, up from 317 just months earlier.
Policies fail when they operate at the wrong altitude. An enterprise-wide AI policy sets boundaries, but it does not tell a project manager how to handle a team member who uses ChatGPT to draft a risk response plan, or a program coordinator who feeds sensitive stakeholder data into an unapproved summarization tool. That gap between organizational policy and daily project practice is where governance breaks down.
Why This Is a Project Management Problem
AI governance in project teams is not an IT responsibility delegated downward. It is a governance responsibility that sits with the project manager.
The PMBOK® Guide, Eighth Edition, makes this connection explicit. Governance is now one of seven performance domains, elevated from a principle-level reference in the previous edition to a dedicated area of practice. For project managers dealing with AI adoption, that elevation matters. It signals that governance is not a background activity. It is a core function of project delivery.
The principle of leading accountably means managing AI adoption with the same diligence applied to budget, scope, and schedule. The project manager remains accountable for quality, accuracy, and confidentiality regardless of whether a human or an AI system produced the output. The Risk performance domain requires treating unvetted AI use as a risk that needs identification, assessment, and response. And governance structures need to be adapted to the team's specific AI maturity and context rather than imposed as blanket rules that teams will work around.
The PMBOK® Guide, Eighth Edition, also addresses AI explicitly as a critical topic for the first time in the standard’s history. This is not coincidental. It reflects the reality that AI is already embedded in how project teams work and that governance frameworks need to account for it.
There is a critical dimension that runs beneath all of this and is easy to overlook: data quality. An approved AI tool with poor data is no less risky than an unapproved tool with good data. If your project schedules, resource allocations, or risk registers are inconsistent or outdated, any AI model that ingests them will produce outputs that look confident but are flawed. Bad data causes damage twice: once in what the model learns and once in the decisions that follow. AI governance is not only about which tools are permitted. It is equally about whether the data feeding those tools is trustworthy.
As a member of the Core Development Team for the PMI Standard on AI in Project, Program, and Portfolio Management, I can confirm: the governance principles outlined here are foundational to what the standard will formalize. Project teams do not need to wait for publication. The principles already exist. They need to be applied.
A Practical Framework: Team-Level AI Governance
Effective AI governance at the team level requires three components: visibility, boundaries, and accountability. None of these require enterprise-wide transformation. They require a project manager who treats AI adoption as a governance topic.
A word of caution: lightweight does not mean optional. Good data, consistent processes, and clear accountability will not emerge organically from informal guidelines. The framework below is designed to be simple to implement, but it needs to be treated as binding. Governance that exists on paper but is not enforced in practice is no governance at all.
- Visibility: Know What Your Team Uses
Start with a simple inventory. Ask your team: Which AI tools are you using for project work? What data are you feeding into them? What outputs are you incorporating into deliverables? This is not a compliance audit. It is a governance conversation. Most team members use AI because it saves them time. They are not trying to circumvent rules. The goal is to understand current usage patterns, not to punish the people behind them.
Document the results in a lightweight AI tool register as part of your project governance documentation. Include the tool name, its purpose, the type of data it processes, and whether it has organizational approval.
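In practice, a register entry can be a few labeled lines in your existing governance documentation. The entry below is purely illustrative; the tool, data classification, and approval status are assumptions chosen to show the format, not a recommendation.

Tool: ChatGPT (personal account)
Purpose: Drafting meeting agendas and first-pass status report wording
Data processed: Non-confidential, non-personal project information only
Organizational approval: Not approved; use limited to public or non-sensitive content pending review

Four fields are usually enough. The value of the register comes from keeping it current, not from its level of detail.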
- Boundaries: Define What Is Acceptable
Once you have visibility, set boundaries specific to your project context. A project handling public data has different AI governance needs than a program managing sensitive financial information.
Define three categories for your team. Green: AI use cases that are approved and encouraged (e.g., drafting meeting agendas, summarizing public information, generating first-draft templates). Yellow: use cases that require review before proceeding (e.g., AI-assisted risk analysis, stakeholder sentiment analysis with internal data). Red: use cases that are not permitted (e.g., feeding confidential project data into public AI tools, using AI-generated outputs in contractual deliverables without human review).
Keep the boundaries simple and specific. A three-tier classification that fits on one page will be followed. A 40-page policy will not.
- Accountability: Human Review Is Not Optional
Every AI-generated output that enters a project deliverable requires human review. This is the human-in-the-loop principle applied to daily project work.
Assign clear accountability for AI outputs. If a team member uses AI to draft a status report, that team member owns the accuracy of the content. If a project manager uses AI to generate a risk assessment, the project manager validates every identified risk before it enters the register. AI-generated content is a draft. It is never a deliverable.
Build this into your team’s working agreements. Add one line to your definition of done: AI-assisted outputs have been reviewed and validated by the responsible team member. This creates accountability without adding bureaucratic overhead.
From Governance to Advantage
The governance framework above is not a constraint on productivity. It is the precondition for it.
The teams that get AI governance right will not be the ones that block AI adoption. They will be the ones that channel it. MIT’s Project NANDA found that shadow AI often delivers better ROI than formal AI initiatives, because employees adopt tools that solve real problems. Multiple industry studies on generative AI adoption consistently report productivity gains of 40 to 60 minutes per day for knowledge workers, even without formal organizational support.
You do not need a perfect setup to start. Even teams at low AI maturity benefit from governance. The process of establishing visibility, boundaries, and accountability creates the foundations (consistent data, clear processes, engaged leadership) that make advanced AI adoption possible later. You do not need to be “AI-ready” to govern AI. Governing AI is how you become ready.
The job of the project manager is not to block productivity gains from AI. It is to govern them. Visibility turns unknown risk into managed risk. Boundaries turn ad-hoc usage into structured adoption. Accountability turns AI-generated drafts into validated deliverables. AI governance at the team level is about applying the same accountability principles that define good project management to a new class of tools that are already part of how your team works. The policies will follow. The governance starts with you.
Key Takeaways
- Policies alone do not change behavior. Team-level governance requires visibility, boundaries, and accountability.
- Data quality matters as much as tool approval. An approved tool with bad data produces flawed outputs.
- The PMBOK® Guide, Eighth Edition, elevates governance to a dedicated performance domain and addresses AI as a critical topic for the first time.
- The PMBOK® 8 principle of leading accountably and the Risk performance domain apply directly to AI governance at the team level.
- Use a three-tier classification (green, yellow, red) to define acceptable AI use for your project context.
- Every AI-generated output that enters a deliverable requires human review. AI produces drafts, not deliverables.
- Governing AI is how you build readiness. You do not need a perfect setup to start.
This article is part of a series leading up to the IIL webcast “5 Steps to Integrate AI into Your PPM Practices: A Tactical Blueprint” on June 24, 2026. Register at: https://www.iil.com/your-ai-advantage-practice-habit-strategy/
Markus Kopko, PgMP
Coach, Speaker & Trusted Guide for Human-Centered PM Excellence
Markus Kopko is a seasoned expert in project, program, and portfolio management with over two decades of experience in shaping strategic transformation across industries. As Principal Consultant, founder of "MP4PM – Method Power for Project Management" (www.mp4pm.club), and content creator, he has supported countless professionals on their journey toward PMI certification (e.g., PMP, PgMP) and practical excellence in applying global standards (e.g., the PMBOK® Guide and ITIL) in their daily work.
A trusted advisor and international speaker, Markus served on the PMI Review Team for the PMBOK® Guide – 7th Edition, contributes to the Core Development Team of the upcoming PMI Standard on AI in Project, Program, and Portfolio Management, and regularly publishes thought leadership content on integrating modern methodologies with real-world delivery.
Markus specializes in strategic program management, lifecycle governance, stakeholder alignment, and benefits realization. He is widely recognized for translating complex frameworks into actionable practices, helping organizations align execution with strategic intent – especially in AI-driven environments.
He holds certifications including PMP® and PgMP®, and is a Certified AI Transformation Lead (C-AITL by USAII). Markus shares his expertise through global PMI communities, keynote contributions, and coaching, always with one core principle: "Lead with empathy. Empower with trust. Show up human, every single day."