Governing AI in the Boardroom: What Directors Need to Know in 2026
By Leah C. Jochim | Board Director, Technology, Data & Governance Oversight
Boards are being asked to govern something most directors don't fully understand, at a pace that outstrips most governance frameworks, with consequences that are material to mission, reputation, and legal liability.
That's not a criticism of boards. It's an accurate description of the situation. AI governance has moved from a technology committee concern to a full-board responsibility in the span of 18 months. And most boards are not yet equipped to discharge that responsibility effectively.
This piece is for directors who want to close that gap — not by becoming AI experts, but by asking the right questions.
Why This Is Now a Board-Level Issue
Three developments have elevated AI governance to the board agenda.
Regulatory exposure. The EU AI Act, emerging US state-level AI regulations, and sector-specific guidance from financial regulators and healthcare authorities have created a compliance landscape that boards are accountable for navigating. Organizations caught unprepared face not just regulatory penalties but reputational damage that is difficult to repair.
Material risk. AI failures — biased outputs, data breaches, model hallucinations in high-stakes decisions, vendor lock-in — are now material risks that belong in the board's risk oversight framework alongside cybersecurity, financial risk, and operational risk.
Competitive and mission implications. For organizations that are deploying AI, the board has a fiduciary responsibility to ensure that AI investments are generating the expected returns. For organizations that aren't, the board has a responsibility to understand the competitive and mission implications of that choice.
The Six Questions Every Board Should Be Asking
Question 1: What AI systems are we operating, and what are they being used for?
This sounds basic. In my experience, most boards don't have a complete answer. AI systems are often deployed at the business unit level without board visibility. The first governance step is establishing the inventory: what AI systems are in use, what decisions they're influencing, and the risk profile of each system.
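An inventory doesn't need specialized tooling to start; a structured register is enough. A minimal sketch in Python, assuming illustrative fields and risk tiers (the record names, tiers, and example systems here are hypothetical, not a standard):

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # internal productivity tools
    MEDIUM = "medium"  # customer-facing, but decisions are reversible
    HIGH = "high"      # decisions affecting rights, health, or finances

@dataclass
class AISystemRecord:
    name: str                  # e.g. "Claims triage model"
    business_unit: str         # where the system is deployed
    vendor: str                # third-party provider, or "in-house"
    decisions_influenced: str  # what the system's outputs feed into
    risk_tier: RiskTier        # drives review cadence and board visibility

# The register is just a list of records the board can query.
register = [
    AISystemRecord("Claims triage model", "Operations", "in-house",
                   "prioritizes insurance claims for review", RiskTier.HIGH),
    AISystemRecord("Meeting summarizer", "All units", "VendorCo",
                   "internal note-taking only", RiskTier.LOW),
]

# Surface high-risk systems for board-level reporting.
high_risk = [r.name for r in register if r.risk_tier is RiskTier.HIGH]
print(high_risk)  # ['Claims triage model']
```

The point is less the code than the discipline: every deployed system gets a record, an owner, and a risk tier before it influences a decision.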
Question 2: What is our AI governance framework, and who owns it?
AI governance requires clear ownership and a defined framework. Who is accountable for AI risk at the executive level? What policies govern AI use case approval? What oversight mechanisms exist for AI systems that influence high-stakes decisions? What is the escalation path when an AI system fails?
Boards that can't answer these questions have a governance gap that needs to be closed.
Question 3: How are we managing AI-related data privacy and security risks?
AI systems are data-intensive. The data privacy and security risks associated with AI — training data that includes personal information, inference data that reveals sensitive patterns, model outputs that can be reverse-engineered — are distinct from traditional data security risks and require specific governance attention.
The questions I ask in board contexts: What data is being used to train or fine-tune AI systems? What third-party AI vendors have access to our data? What are the contractual protections for data used in AI systems? What is our incident response plan for an AI-related data breach?
Question 4: How are we ensuring AI outputs are accurate, fair, and auditable?
AI systems can produce outputs that are confidently wrong, systematically biased, or difficult to audit. In high-stakes environments — healthcare decisions, financial recommendations, hiring and promotion decisions — the consequences of these failures are significant.
Boards should be asking: What quality controls exist for AI outputs in high-stakes decisions? How are we detecting and addressing bias in AI systems? What audit trail exists for AI-assisted decisions? Who is accountable when an AI system produces a harmful output?
Question 5: What is our AI vendor accountability framework?
Most organizations are deploying AI through third-party vendors. The governance question is not just whether the vendor's technology works — it's whether the organization has the contractual protections, the performance monitoring, and the exit strategy required to manage vendor risk.
The vendor accountability questions I focus on: What SLAs govern AI system performance? What transparency do we have into model changes that could affect our use cases? What are the data portability provisions if we need to change vendors? What is our contingency plan if a key AI vendor fails or is acquired?
Question 6: How are we measuring AI ROI, and is it worth the investment?
AI investment is significant and growing. Boards have a fiduciary responsibility to ensure that investment is generating returns — not just in terms of technology performance, but in terms of business outcomes.
The measurement framework I use with boards: What business outcomes was this AI investment supposed to drive? What are the leading indicators that predict those outcomes? What is the actual ROI to date, and how does it compare to the investment case? What is the trajectory — are we on track to achieve the projected returns?
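The comparison against the investment case is simple arithmetic once the benefit is actually measured. A sketch with made-up figures (the numbers are illustrative, not benchmarks):

```python
def ai_roi(benefit_to_date: float, cost_to_date: float) -> float:
    """Return ROI as a fraction: (benefit - cost) / cost."""
    if cost_to_date <= 0:
        raise ValueError("cost must be positive")
    return (benefit_to_date - cost_to_date) / cost_to_date

# Hypothetical figures: $1.2M in measured benefit against $1.0M invested.
actual = ai_roi(1_200_000, 1_000_000)  # 0.20, i.e. 20%
projected = 0.35                       # from the original investment case

print(f"Actual ROI to date: {actual:.0%}")
print(f"Gap to investment case: {projected - actual:.0%}")
```

The hard part is never the formula; it's insisting that "benefit" is a measured business outcome rather than a technology-performance proxy.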
A Note for Directors Who Feel Behind
If you're a board director who feels underprepared for AI governance, you're not alone. The pace of AI development has outstripped the governance frameworks of most organizations. The answer is not to become an AI expert — it's to ask better questions, build the right governance infrastructure, and ensure the organization has the executive leadership to manage AI risk effectively.
The six questions above are a starting point. They won't make you an AI expert. But they will make you a more effective director in an AI-enabled world.
Leah C. Jochim serves as Executive Board Director — Technology, Data & Governance Oversight for Tri Delta Fraternity International and advises PE/VC-backed organizations on AI governance and technology strategy. She is currently accepting 2–3 board director and strategic advisory engagements for Q2/Q3 2026. Connect at linkedin.com/in/leahac.
#AIGovernance #BoardDirector #CorporateGovernance #AIRisk #TechnologyGovernance #BoardLeadership
