[Webinar] Key Considerations for Developing Effective AI Governance Frameworks
July 08, 2025
June 12, 2025 | Presented by Christopher Risher (Senior Director of Consulting, 1Path & Ideal Integrations), Luke McOmie (VP of Offensive Security, 1Path & Ideal Integrations), and Qasim Ijaz (Director of Cybersecurity, Aveanna Healthcare)
Introduction: The Reality of AI in the Workplace
The reality is simple: your employees are already using AI, whether you've sanctioned it or not. From Apple Intelligence to Gemini, AI is embedded in the tools your workforce uses daily. The question isn't whether AI will enter your organization; it's how you'll govern it effectively.
Chapter 1: When Good Intentions Go Wrong - Real-World AI Failures
The Samsung Code Review Incident
One of the most telling examples comes from Samsung, where employees began using ChatGPT for code reviews with the best of intentions. The mundane nature of code reviews made AI seem like a perfect efficiency booster. However, employees copy-pasted sensitive source code directly into ChatGPT without any redactions or data sanitization. This resulted in three separate incidents that became international news, highlighting how intellectual property can be inadvertently exposed through well-meaning AI usage.
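The preventive step is mundane but effective: sanitize before you share. As a minimal sketch of the idea (the patterns and function name below are illustrative assumptions, not anything Samsung actually used, and a real deployment would rely on a vetted secrets scanner rather than ad-hoc regexes):

```python
import re

# Illustrative patterns only; production redaction should use a
# dedicated secrets-detection tool, not hand-rolled regexes.
REDACTION_PATTERNS = [
    # Assignments like api_key = "...", password = '...'
    (re.compile(r"(?i)(api[_-]?key|secret|password)\s*=\s*['\"][^'\"]+['\"]"),
     r"\1 = '[REDACTED]'"),
    # IPv4 addresses
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[REDACTED_IP]"),
]

def sanitize_for_llm(source_code: str) -> str:
    """Strip obvious secrets before code leaves the organization."""
    for pattern, replacement in REDACTION_PATTERNS:
        source_code = pattern.sub(replacement, source_code)
    return source_code

snippet = 'db_password = "hunter2"  # connects to 10.0.0.12'
print(sanitize_for_llm(snippet))
# db_password = '[REDACTED]'  # connects to [REDACTED_IP]
```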
The Pen Testing Disaster
Luke McOmie shared a sobering example from the offensive security world, where an AI-powered penetration testing tool discovered a zero-day vulnerability in a client's archaic tape backup system. While finding vulnerabilities is the goal of pen testing, the AI took it a step further—it executed a delete command that wiped 7 terabytes of the customer's backup data. A human tester would likely have avoided such a destructive command, but the AI lacked the contextual judgment to understand that "delete all history" might not be the best course of action.
The PowerShell Script Gone Wrong
Organizations rushing to implement AI solutions often face similar issues. In one case, an AI-generated PowerShell script intended to manage Active Directory ended up disabling unintended users, demonstrating how speed and efficiency gains can quickly turn into operational disasters without proper oversight.
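One hedge against this failure mode is a human approval gate: AI-generated commands are screened for destructive verbs and held for sign-off before anything runs. A minimal sketch (the keyword list and function names are illustrative assumptions, not from the webinar):

```python
# Commands containing destructive verbs are held for human approval.
DESTRUCTIVE_KEYWORDS = ("delete", "remove", "disable", "format", "drop")

def requires_human_review(command: str) -> bool:
    """Flag commands that touch anything destructive."""
    lowered = command.lower()
    return any(keyword in lowered for keyword in DESTRUCTIVE_KEYWORDS)

def run_ai_generated(command: str) -> None:
    if requires_human_review(command):
        answer = input(f"AI wants to run: {command!r}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            print("Blocked pending human review.")
            return
    print(f"Executing: {command}")  # real execution would go here

run_ai_generated("Disable-ADAccount -Identity jsmith")
```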
Chapter 2: Building Foundational AI Governance Principles
Understanding Your Data Before Protecting It
Qasim Ijaz emphasized that AI governance isn't fundamentally different from other technology governance—it requires the same foundational principles we've always used. The first step is understanding what data you have before you can protect it. Organizations need to assess what risks AI platforms create for sensitive data and intellectual property.
Addressing Bias and Trust
AI systems can perpetuate and amplify biases present in their training data. This is particularly critical for decision-making processes like hiring, where biased AI could lead to discriminatory practices. Banks have already faced billions in penalties for AI bias in loan processing decisions.
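The webinar doesn't prescribe a specific test, but one common screen is the four-fifths rule from the EEOC's Uniform Guidelines: a protected group's selection rate should be at least 80% of the most-favored group's. A rough sketch with made-up numbers:

```python
# Disparate impact screen using the "four-fifths rule". All figures
# below are fabricated for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rates = {
    "group_a": selection_rate(45, 100),
    "group_b": selection_rate(28, 100),
}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "OK" if ratio >= 0.8 else "REVIEW: possible disparate impact"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```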
Building trust through transparency and accountability isn't just good practice; it's essential for employee adoption and regulatory compliance.
The Historical Context
Luke McOmie provided valuable historical perspective, noting that governance frameworks have evolved with each major technological shift. From the database adoption of the 1980s to cloud migration resistance just a decade ago, businesses initially resist new technologies before they become indispensable. AI is following the same pattern, and organizations that proactively develop governance frameworks will be better positioned for success.
Chapter 3: Technical Controls and Security Implementation
Data Classification: The Foundation You Can't Skip
Many organizations have treated data classification as a checkbox exercise, broadly categorizing everything as "private" and moving on. AI implementation is forcing companies to take data classification seriously. Effective AI governance requires granular data sensitivity labels that AI models can respect and enforce.
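What "labels AI models can respect" might look like in practice: a sketch of a per-tool sensitivity ceiling, where each AI tool may only receive data at or below its clearance. Tool names and thresholds here are hypothetical:

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Hypothetical per-tool ceilings: the highest label each tool may receive.
TOOL_CEILING = {
    "public_chatbot": Sensitivity.PUBLIC,
    "tenant_copilot": Sensitivity.CONFIDENTIAL,
}

def may_send(tool: str, label: Sensitivity) -> bool:
    """Allow data only if its label is at or below the tool's ceiling."""
    return label <= TOOL_CEILING.get(tool, Sensitivity.PUBLIC)

print(may_send("public_chatbot", Sensitivity.CONFIDENTIAL))  # False
print(may_send("tenant_copilot", Sensitivity.CONFIDENTIAL))  # True
```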
The Vendor Risk Assessment Imperative
Every AI tool should undergo the same rigorous vendor risk assessment process as any other business application. This includes the following (a checklist sketch follows the list):
- Third-party risk assessments of AI vendors
- Penetration testing of AI applications
- Documentation of cybersecurity principles from AI providers
- Clear risk acceptance or transfer strategies
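One way to make that checklist enforceable is to treat it as a gate: if any item is unsatisfied, the tool doesn't ship. A minimal sketch (field names are illustrative, not a standard):

```python
from dataclasses import dataclass, fields

@dataclass
class AIVendorAssessment:
    # One boolean per checklist item above; names are illustrative.
    third_party_risk_reviewed: bool
    pen_test_completed: bool
    security_docs_on_file: bool
    risk_treatment_decided: bool  # accepted, transferred, or mitigated

def approved(assessment: AIVendorAssessment) -> bool:
    """A tool is cleared only when every checklist item is satisfied."""
    return all(getattr(assessment, f.name) for f in fields(assessment))

candidate = AIVendorAssessment(True, True, False, True)
print(approved(candidate))  # False: security documentation still missing
```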
Policy Enforcement with True Consequences
Having an acceptable use policy isn't enough; organizations need enforceable controls with real consequences. Data Loss Prevention (DLP) policies can work with AI systems, particularly tools like Microsoft Copilot, to respect sensitivity labels and access permissions.
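As an illustration of the idea (not Microsoft's actual DLP engine), an outbound filter might scan prompts for known sensitive patterns before anything reaches an AI tool:

```python
import re

# Illustrative DLP-style patterns; production systems would rely on a
# managed DLP service rather than hand-rolled regexes.
BLOCK_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def dlp_verdict(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_rule_names) for an outbound AI prompt."""
    hits = [name for name, pattern in BLOCK_PATTERNS.items()
            if pattern.search(prompt)]
    return (not hits, hits)

allowed, hits = dlp_verdict("Summarize claim for SSN 123-45-6789")
print(allowed, hits)  # False ['ssn']
```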
Chapter 4: Organizational Structure and Continuous Adaptation
The AI Steering Committee
Qasim Ijaz stressed the importance of establishing cross-functional AI steering committees that include:
- Developers and product engineers
- HR representatives (for AI-enabled HR tools)
- Security and compliance teams
- Business stakeholders
This committee should inventory all AI capabilities across the organization, evaluate their benefits, and determine which should be enabled or disabled.
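A simple way to start that inventory is a structured register the committee reviews. The sketch below uses hypothetical entries; a real register would live in your GRC tooling:

```python
from dataclasses import dataclass

@dataclass
class AICapability:
    name: str
    owner: str     # accountable business unit
    purpose: str
    enabled: bool  # steering committee decision

# Illustrative inventory entries, not a real organization's register.
inventory = [
    AICapability("Copilot for email drafting", "Sales", "productivity", True),
    AICapability("Resume screening model", "HR", "hiring triage", False),
]

for item in inventory:
    status = "enabled" if item.enabled else "disabled pending review"
    print(f"{item.name} ({item.owner}): {status}")
```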
Future-Proofing Your Framework
Luke McOmie recommended keeping governance frameworks broad and flexible rather than getting lost in specific tools or controls. Think in terms of general expectations for tool usage rather than prescriptive requirements for specific AI brands or versions.
AI governance should be treated as a living document with two components:
- Core principles (fairness, accountability, transparency) that remain constant
- Technical controls and tools that are reviewed quarterly or annually
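One way to operationalize that split is policy-as-data: the principles stay fixed while each technical control carries its own review cadence. A sketch with illustrative field names and cadences (these specifics are assumptions, not a standard):

```python
from datetime import date

# Policy-as-data sketch of the two-tier structure described above.
governance = {
    "core_principles": ["fairness", "accountability", "transparency"],
    "technical_controls": [
        {"control": "DLP on AI prompts", "review_cadence_days": 90},
        {"control": "Approved-tool allowlist", "review_cadence_days": 90},
        {"control": "Model access permissions", "review_cadence_days": 365},
    ],
}

def controls_due(last_reviewed: dict[str, date], today: date) -> list[str]:
    """List controls whose review window has lapsed."""
    due = []
    for item in governance["technical_controls"]:
        reviewed = last_reviewed.get(item["control"], date.min)
        if (today - reviewed).days >= item["review_cadence_days"]:
            due.append(item["control"])
    return due

print(controls_due({"DLP on AI prompts": date(2025, 6, 1)}, date(2025, 7, 8)))
# ['Approved-tool allowlist', 'Model access permissions']
```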
The Emerging Role of AI Architects
Organizations should prepare for new roles like AI architects and AI engineers, similar to how security specialists became essential. These professionals will bridge the gap between technical implementation and governance requirements.
Key Takeaways
- Accept the Reality: Your employees are already using AI. Provide them with organization-sanctioned, governed AI platforms rather than driving them to shadow IT solutions.
- Start with Fundamentals: Apply existing risk management principles to AI. You don't need entirely new frameworks—adapt your current vendor risk, data classification, and policy enforcement processes.
- Implement Cross-Functional Governance: Establish an AI steering committee with representatives from all affected departments.
- Invest in Data Classification: Proper data sensitivity labeling is crucial for effective AI governance and technical controls.
- Focus on Training: Employees need education on safe AI usage, bias identification, and hallucination recognition.
- Keep Frameworks Flexible: Write governance policies that can adapt to rapidly evolving AI technology without becoming obsolete.
- Enforce Policies with Technical Controls: Use DLP policies, content filtering, and access controls to support your governance framework.
- Plan for Specialized Roles: Consider hiring or developing AI-focused positions to manage governance and implementation.
For more information about implementing AI governance frameworks, technical controls, or training programs, reach out to our team. We're here to help you navigate the complex landscape of AI governance while maximizing the benefits of this transformative technology.