The PM’s AI Readiness Checklist: 5 Questions Before Your First AI Integration

By Markus Kopko, PgMP®, PMP®, PMI-CPMAI™
April 30, 2026

Most project managers are not opposed to AI; they are unsure where to start. The tools are everywhere, vendor pitches are constant, and leadership expects results. But no tool alone can close the gap between “we should use AI” and “here is how we are using AI.”

That gap is readiness. Not technology readiness. Organizational, process, and governance readiness. Info-Tech Research Group’s 2024 PPM transformation framework identifies three layers of readiness for AI integration: PPM readiness, IT readiness, and organizational readiness. Without all three, AI investments produce pilots that stall and tools that go unused.

The five questions in this article are a diagnostic. They are designed to surface the specific blockers that prevent AI from delivering value in your project environment. Answer them honestly, and you will know whether your team is ready to integrate AI, or what needs to change first.

Question 1: Is Your Project Data Consistent Enough for AI to Use?

AI patterns like forecasting, anomaly detection, and classification (see Article 4 in this series) depend on structured, consistent data. A forecasting model trained on schedule data where some teams report in hours, others in days, and a third group updates sporadically will produce predictions that look precise but mean nothing.

The readiness test: Can you pull a standardized schedule variance report across all active projects in your portfolio right now, with consistent units, current data, and no manual reconciliation? If the answer is no, your AI readiness problem is a data governance problem. The governance framework from Article 1 in this series, AI Governance for Project Teams: 3 Reasons Why Policies Alone Are Not Enough, addresses this.
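
To make the consistency problem concrete, here is a minimal sketch of the kind of normalization a portfolio needs before any forecasting model sees its data. The report structure, project names, and the 8-hour working day are illustrative assumptions, not a reference to any specific PPM tool.

```python
# Hypothetical schedule variance reports; note the units vary by team.
reports = [
    {"project": "A", "variance": -16, "unit": "hours"},
    {"project": "B", "variance": -3,  "unit": "days"},
    {"project": "C", "variance": 40,  "unit": "hours"},
]

HOURS_PER_DAY = 8  # assumed standard; this must be a governed convention, not a guess


def to_hours(report):
    """Convert one variance report to hours so the portfolio is comparable."""
    if report["unit"] == "hours":
        return report["variance"]
    if report["unit"] == "days":
        return report["variance"] * HOURS_PER_DAY
    # Surface ungoverned units loudly instead of silently skipping them.
    raise ValueError(f"Ungoverned unit: {report['unit']}")


normalized = {r["project"]: to_hours(r) for r in reports}
print(normalized)  # {'A': -16, 'B': -24, 'C': 40}
```

The point is not the code; it is that someone has to decide and enforce the conversion rule. If that rule lives in one analyst’s spreadsheet rather than in governance, the AI readiness gap remains.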

This is not a technology limitation. It is a process discipline issue. Info-Tech’s readiness scale runs from Ad Hoc (nothing in place) through Initial, Maturing, and Exponential IT Ready. Most organizations that struggle with AI adoption score Ad Hoc or Initial on data governance.

The PMBOK® Guide, Eighth Edition, includes a dedicated Governance performance domain for this reason. Without governance structures that enforce data standards, AI produces confident outputs from unreliable inputs. AI does not fix inconsistent data. It amplifies it.

Question 2: Do You Have Defined Processes That AI Can Augment?

AI augments processes. It does not create them. An NLP (natural language processing) agent that drafts meeting summaries is useful when your team has a consistent meeting structure with defined outputs. If meetings have no agenda, no defined decisions, and no follow-up protocol, AI summarization produces a polished record of dysfunction.

The readiness test: Pick your three most common recurring PM processes (status reporting, risk review, change control). Are they documented? Are they followed consistently across projects? If not, automating them with AI means automating inconsistency at speed.

The PMBOK® Guide, Eighth Edition, reinforces this through its focus on value: process improvements matter only when they produce outcomes worth the effort. Automating a broken process produces broken outcomes faster. Start with the process. Then add the AI.

Question 3: Does Your Team Know Which AI Pattern Fits Which Task?

PMI’s research on AI readiness found that only about 20% of project managers report extensive or good practical AI skills, while 49% have little to no experience with AI in a project context (PMI, Shaping the Future of Project Management with AI, 2023). This skills gap is not about prompting technique. It is about pattern literacy.

The readiness test: Can your team distinguish between forecasting and classification? Do they know when to apply optimization versus recommendation? Article 4 covers the 7 AI Patterns Every Project Professional Should Know in detail. Once your team can identify the right pattern, the CRISP framework (Context, Role, Instruction, Scope, and Parameters) from Article 3, Prompting for Project Managers: How to Get Useful AI Output for Real PM Work, provides the prompting structure to get useful output from it. If your team cannot map a project problem to the right AI pattern, they will default to the only pattern they know (NLP) and miss opportunities to use other patterns that deliver more value.
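
As a small illustration of the CRISP structure (Context, Role, Instruction, Scope, Parameters) named above, the helper below assembles a prompt from the five elements. This is a sketch of the idea, not Article 3’s official template; all example values are invented.

```python
def crisp_prompt(context, role, instruction, scope, parameters):
    """Assemble a prompt along the CRISP structure: Context, Role,
    Instruction, Scope, Parameters."""
    return "\n".join([
        f"Context: {context}",
        f"Role: {role}",
        f"Instruction: {instruction}",
        f"Scope: {scope}",
        f"Parameters: {parameters}",
    ])


# Hypothetical example for a status-reporting task.
prompt = crisp_prompt(
    context="Weekly status data for a 12-project portfolio",
    role="You are a PMO analyst",
    instruction="Summarize schedule variance trends",
    scope="Active projects only; exclude closed items",
    parameters="Max 150 words, bullet points, flag variances over 10%",
)
print(prompt)
```

The value of a structure like this is that it forces the pattern question first: a summarization instruction points to NLP, while “predict next month’s variance” would point to forecasting and a different tool entirely.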

AI readiness here means structured learning on how AI works, where each pattern applies, and how to evaluate outputs critically. IIL’s Generative AI for Project Management course provides exactly this kind of hands-on training.

Question 4: Do You Have a Human-in-the-Loop Protocol?

Article 2 in this series, Human-in-the-Loop Is Not Optional: Why AI in PM Needs Human Judgment, makes the case. Human-in-the-loop means three specific things: (1) verification before action, (2) accountability that stays with people, and (3) active quality monitoring over time. AI-generated outputs require structured review before they become project decisions. The question is whether your team has operationalized that principle.

The readiness test: If an AI tool generates a risk assessment for your project today, what happens next? Who reviews it? Against what criteria? What is the escalation path if the assessment is wrong? If the answers are “it depends” or “whoever is available,” you do not have a protocol. You have a hope.
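
A protocol can be small and still be real. The sketch below is one hypothetical way to operationalize the three elements from Article 2: a named reviewer, explicit criteria, an escalation path, and a log that supports quality monitoring. Every name and criterion here is illustrative.

```python
from dataclasses import dataclass, field


@dataclass
class ReviewProtocol:
    """Minimal human-in-the-loop record: who verifies an AI output,
    against which criteria, and where it escalates if checks fail."""
    reviewer: str                              # a named person, not "whoever is available"
    criteria: list = field(default_factory=list)
    escalation_path: str = "PMO lead"
    log: list = field(default_factory=list)    # audit trail for quality monitoring


    def review(self, output_id: str, checks: dict) -> str:
        """Verification before action: approve only if every criterion passes."""
        passed = all(checks.get(c, False) for c in self.criteria)
        decision = "approved" if passed else f"escalated to {self.escalation_path}"
        self.log.append((output_id, self.reviewer, decision))
        return decision


protocol = ReviewProtocol(
    reviewer="Risk owner",
    criteria=["sources cited", "probabilities plausible", "owners assigned"],
)
print(protocol.review(
    "risk-assessment-0427",
    {"sources cited": True, "probabilities plausible": True, "owners assigned": False},
))  # escalated to PMO lead
```

Whether this lives in code, a checklist, or a workflow tool matters less than the fact that the answers to “who reviews it, against what, and what happens if it fails” are written down before the AI output arrives.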

Article 5 in this series, AI Agents in Program Management: From Coordination Tool to Decision Partner, extends this to AI agents, where the stakes are higher. Agents that act autonomously require defined decision boundaries, accountability assignments, and audit trails. If your team does not have a review protocol for basic AI-generated text, it is not ready for autonomous agents.

Question 5: Does Leadership Support AI Adoption With Resources, Not Just Rhetoric?

McKinsey’s “The State of AI” report reveals that high-performing AI organizations share one characteristic: senior leaders actively drive AI adoption. They allocate budget, assign accountability, and role-model AI use. In organizations where leaders do not actively support AI, project teams turn to unsanctioned AI tools and platforms, creating the shadow AI problem described in Article 1, AI Governance for Project Teams: 3 Reasons Why Policies Alone Are Not Enough.

The readiness test: Has your organization allocated specific budget for AI tools, training, and governance in your PM function? Is there a named sponsor accountable for AI integration outcomes? If AI adoption depends on individual project managers finding free tools and teaching themselves, the organization is not ready. It is delegating readiness to the people least positioned to create it.

Info-Tech’s framework distinguishes five levels of organizational mandate, from “window shopping” to “transforming.” Most PMOs fall in the “researching” or “going slow” categories. That is not a failure. It is a starting point. But it requires honest acknowledgment that AI integration is a planned initiative, not a side project.

Scoring Your Readiness

If you answered “yes” to all five questions, you are ready to pilot. Follow the approach from Article 5, AI Agents in Program Management: From Coordination Tool to Decision Partner: one agent, one high-friction task, one defined scope, and an audit trail.

If you answered “yes” to three or four questions, you have a foundation, but gaps that will limit AI’s value. Address the gaps before scaling. Most often, the blockers are data consistency (Question 1) and process maturity (Question 2).

If you answered “yes” to fewer than three questions, AI adoption is premature. The investment will produce tools that go unused and leadership frustration that sets back future AI initiatives. Focus on the foundational work first. Consistent data, well-documented processes, and trained teams improve project outcomes, with or without AI.
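
The scoring rubric above reduces to a simple mapping from the count of “yes” answers to a recommendation. A minimal sketch, with the verdict wording paraphrased from this article:

```python
def readiness_verdict(yes_count: int) -> str:
    """Map the number of 'yes' answers (0-5) to the article's guidance."""
    if not 0 <= yes_count <= 5:
        raise ValueError("Expected between 0 and 5 'yes' answers")
    if yes_count == 5:
        return "Ready to pilot: one agent, one high-friction task, defined scope, audit trail"
    if yes_count >= 3:
        return "Foundation in place: close the gaps (often data and process) before scaling"
    return "Premature: build data consistency, documented processes, and skills first"


print(readiness_verdict(4))
```

The boundaries matter less than the honesty of the inputs: a generous “yes” on Question 1 or 2 moves the score without moving the organization.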

Key Takeaways

  • AI readiness is not about technology. It is about data consistency, process maturity, pattern literacy, oversight protocols, and leadership commitment.
  • Most AI adoption failures trace back to readiness gaps, rather than tool limitations. If your data is inconsistent and your processes are undocumented, AI amplifies the problem.
  • The five readiness questions in this article serve as a diagnostic to identify specific blockers in AI adoption. A “no” is not a failure—it’s a clear, prioritized action item.

This article is part of a series leading up to the IIL webcast “5 Steps to Integrate AI into Your PPM Practices: A Tactical Blueprint” on June 24, 2026. Register at: https://www.iil.com/your-ai-advantage-practice-habit-strategy/

Markus Kopko, PgMP

Markus Kopko is a strategic project and AI transformation expert with over 25 years of experience in project, program, and portfolio management. He contributes to the Core Development Team of the PMI Standard on AI in Project, Program, and Portfolio Management and served on the PMI Review Team for the PMBOK® Guide, 7th Edition. Markus holds PMP®, PgMP®, and PMI-CPMAI™ certifications and is a trainer and content creator for IIL. He is a lead instructor for IIL’s course, Generative AI for Project Management.

PMP®, PgMP®, and PMBOK® are registered marks of the Project Management Institute, Inc. PMI-CPMAI™ is a trademark of the Project Management Institute, Inc.
