AI Discovery & Vendor Risk
You cannot govern what you cannot see. Full visibility into AI usage across your organization, both sanctioned and shadow, is the foundation of every effective governance program.
Shadow AI Detection
Your employees are already using AI. The question is whether you know about it. According to research from ISACA and UpGuard, 59% of employees use AI tools their organization has not approved. Second Talent reported a 156% increase in shadow AI usage between 2023 and 2025. These are not projections or worst case estimates. This is happening inside your organization right now, and every day without visibility increases your exposure to data leaks, compliance violations, and regulatory penalties.
Shadow AI spreads the same way shadow IT did a decade ago, only faster. An engineer pastes proprietary code into ChatGPT to debug a production issue. A marketing analyst uploads customer segmentation data to an AI writing tool. A finance team member feeds quarterly projections into a summarization service. Each individual action feels harmless. Collectively, they represent an unmonitored data pipeline flowing straight out of your organization.
Our discovery methodology combines multiple detection techniques to build a complete picture. Network traffic analysis identifies connections to known AI service endpoints. SaaS spend analysis uncovers AI subscriptions buried in department budgets and expense reports. Endpoint monitoring reveals locally installed AI applications and browser extensions that employees adopted on their own. Together, these approaches catch what any single method would miss.
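The network traffic analysis step can be sketched as a simple log-matching pass. The domain list, log format, and field layout below are illustrative assumptions, not a complete detection ruleset; a production deployment would work against your actual proxy or DNS logs and a maintained endpoint feed.

```python
# Minimal sketch: flag proxy log entries whose destination matches a
# known AI service domain. Domain list is illustrative, not exhaustive.
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "api.anthropic.com",
}

def flag_ai_traffic(log_lines):
    """Return (user, domain) pairs for requests to known AI endpoints.

    Each log line is assumed to be 'timestamp user domain', a
    simplified stand-in for a real proxy or DNS log format.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed lines
        _, user, domain = parts
        # Match the domain itself or any subdomain of it.
        if any(domain == d or domain.endswith("." + d) for d in KNOWN_AI_DOMAINS):
            hits.append((user, domain))
    return hits

logs = [
    "2025-06-01T09:14 jsmith api.openai.com",
    "2025-06-01T09:15 akhan intranet.example.com",
    "2025-06-01T09:16 jsmith claude.ai",
]
print(flag_ai_traffic(logs))  # [('jsmith', 'api.openai.com'), ('jsmith', 'claude.ai')]
```

In practice this signal is cross-referenced with the spend and endpoint data, since browser-based tools and bundled AI features often never touch a recognizable API domain.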
Once identified, each shadow AI tool receives a risk classification score based on data access patterns, authentication requirements, data retention policies, and regulatory implications. That classification drives practical decisions. Some tools warrant formal adoption with proper controls. Others need immediate restriction. Many fall somewhere in between, where guardrails and training solve the problem without blocking productivity.
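A weighted score across the four criteria named above, banded into an action category, captures the shape of this classification. The weights, 1-to-5 rating scale, and thresholds here are illustrative assumptions, not a prescribed standard; they would be calibrated to your risk appetite.

```python
# Hedged sketch of risk classification: score each discovered tool
# 1 (low risk) to 5 (high risk) on four criteria, weight, and band
# into an action. Weights and thresholds are example assumptions.
WEIGHTS = {
    "data_access": 0.35,
    "authentication": 0.20,
    "retention": 0.25,
    "regulatory": 0.20,
}

def classify(scores):
    """scores: dict mapping each criterion to a 1-5 risk rating."""
    total = sum(scores[c] * w for c, w in WEIGHTS.items())
    if total >= 4.0:
        return total, "restrict"   # block pending review
    if total >= 2.5:
        return total, "guardrail"  # allow with controls and training
    return total, "adopt"          # formalize with standard controls

score, action = classify(
    {"data_access": 5, "authentication": 4, "retention": 4, "regulatory": 3}
)
print(round(score, 2), action)  # 4.15 restrict
```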
We also survey employees to understand why they turned to unsanctioned tools in the first place. Often the root cause is a gap in approved tooling or a workflow friction that AI solved faster than IT could. Understanding motivation helps you build policies people actually follow rather than work around.
Key Deliverables
- Shadow AI discovery scan and full inventory report
- Risk classification scoring for all discovered AI tools
- Policy recommendations mapped to each shadow AI category
- Employee AI usage survey with behavioral analysis
- Remediation roadmap with prioritized action items
AI Asset Inventory
A standard IT asset inventory tracks hardware, software licenses, and network endpoints. AI systems demand something fundamentally different. A language model embedded in your customer service platform behaves nothing like a traditional SaaS application. It learns, adapts, and produces outputs that vary based on training data and prompt context. Your inventory needs to reflect that complexity.
Each AI system in your catalog should document its model purpose, data sources it accesses, risk exposure level, integrations with other systems, and whether a human reviews its outputs before they reach end users. That last point, the human in the loop (HITL) expectation, matters more than most organizations realize. An AI system generating internal summaries carries a very different risk profile than one making customer facing decisions without human review. The inventory captures this distinction explicitly.
Beyond the initial catalog, AI assets require ongoing tracking that traditional inventories were never designed to handle. Model versions change. Retraining events alter behavior. Performance metrics drift over time as the data landscape shifts. Think of it like maintaining a fleet of vehicles where the engines periodically rebuild themselves. You need to know when a previously low risk system crosses into higher risk territory due to expanded data access, new integrations, or degraded accuracy.
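The drift scenarios above can be expressed as simple reassessment triggers: compare an asset's current state to its last-reviewed snapshot and flag any material change. The dictionary shape and accuracy threshold below are illustrative assumptions.

```python
# Sketch of reassessment triggers: flag an asset for review when it
# gains data access, adds integrations, loses accuracy, or changes
# model version. The 5-point accuracy threshold is an assumption.
ACCURACY_DROP_THRESHOLD = 0.05

def needs_reassessment(previous, current):
    """Return the list of trigger reasons; empty means no review needed."""
    reasons = []
    if set(current["data_sources"]) - set(previous["data_sources"]):
        reasons.append("expanded data access")
    if set(current["integrations"]) - set(previous["integrations"]):
        reasons.append("new integration")
    if previous["accuracy"] - current["accuracy"] >= ACCURACY_DROP_THRESHOLD:
        reasons.append("degraded accuracy")
    if current["model_version"] != previous["model_version"]:
        reasons.append("model version change")
    return reasons

prev = {"data_sources": ["crm"], "integrations": ["helpdesk"],
        "accuracy": 0.92, "model_version": "1.3"}
curr = {"data_sources": ["crm", "billing"], "integrations": ["helpdesk"],
        "accuracy": 0.85, "model_version": "1.4"}
print(needs_reassessment(prev, curr))
# ['expanded data access', 'degraded accuracy', 'model version change']
```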
We build your inventory framework with maintenance procedures that fit your existing change management processes. The goal is a living system your team will actually keep current, not a document that becomes outdated the week after delivery. Every entry includes clear ownership assignments, scheduled review dates, and triggers that prompt reassessment when material changes occur.
Key Deliverables
- AI asset inventory framework and classification templates
- Complete catalog of all organizational AI systems
- Risk exposure scoring with HITL assessment for each asset
- Inventory maintenance procedures and review triggers
- Integration mapping across all connected systems
Third Party AI Vendor Assessment
Traditional third party risk management (TPRM) programs evaluate vendors on financial stability, data security controls, business continuity, and compliance certifications. These factors still matter, but they miss an entire category of risk that AI vendors introduce. Your existing TPRM questionnaire almost certainly does not ask about training data provenance, model extraction attack surfaces, or prompt injection vulnerabilities. It should.
AI vendors carry risk factors that require dedicated assessment criteria. Data poisoning risk evaluates whether a vendor's training pipeline is vulnerable to adversarial inputs that corrupt model behavior. Model extraction risk assesses whether competitors or bad actors could replicate the vendor's model through carefully crafted queries. Prompt injection surface measures how susceptible the system is to inputs designed to override its instructions or leak system prompts. Each of these represents a failure mode that traditional security assessments never anticipated.
Training data provenance is another critical dimension. Where did the vendor source its training data? Does it include your proprietary information? Can the vendor demonstrate lawful acquisition and proper licensing? These questions have real legal consequences, especially as courts and regulators establish precedent around AI training data rights. Getting answers before you sign a contract saves you from unpleasant discoveries after.
Data retention and deletion capabilities deserve particular scrutiny. When you terminate a vendor relationship, can they actually remove your data from their models? In many cases, data used for fine tuning becomes inseparable from the model itself. You need to understand these limitations during procurement, not during an exit.
Our vendor assessment framework produces a risk scoring matrix that lets you compare AI vendors on consistent, objective criteria. We also develop contractual requirement templates that address AI specific obligations including model change notification, performance guarantees, data handling restrictions, and incident response procedures for AI failures. Ongoing vendor monitoring rounds out the program with scheduled reassessments and event driven reviews triggered by vendor incidents, regulatory changes, or shifts in how your organization uses the vendor's AI capabilities.
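The scoring matrix reduces to a weighted sum over AI specific criteria, letting vendors be ranked on a common scale. The criteria names, weights, and 1-to-5 ratings below are example assumptions, not the framework's actual rubric.

```python
# Illustrative scoring matrix: vendors rated 1 (weak) to 5 (strong)
# on AI-specific criteria, combined with weights into one comparable
# score per vendor. Criteria and weights are example assumptions.
CRITERIA = {
    "training_data_provenance": 0.30,
    "prompt_injection_surface": 0.25,
    "data_deletion_capability": 0.25,
    "model_change_notification": 0.20,
}

def score_vendor(ratings):
    """Weighted sum of a vendor's criterion ratings."""
    return sum(ratings[c] * w for c, w in CRITERIA.items())

vendors = {
    "vendor_a": {"training_data_provenance": 4, "prompt_injection_surface": 3,
                 "data_deletion_capability": 5, "model_change_notification": 2},
    "vendor_b": {"training_data_provenance": 2, "prompt_injection_surface": 4,
                 "data_deletion_capability": 3, "model_change_notification": 4},
}

# Rank vendors from strongest to weakest overall posture.
ranked = sorted(vendors, key=lambda v: score_vendor(vendors[v]), reverse=True)
for v in ranked:
    print(v, round(score_vendor(vendors[v]), 2))
```

The same matrix, rerun at each scheduled or event driven reassessment, makes vendor drift visible over time rather than only at contract renewal.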
Key Deliverables
- AI specific vendor assessment questionnaire
- Vendor risk scoring matrix with weighted criteria
- Third party AI risk register
- Contractual requirements template for AI vendors
- Ongoing vendor monitoring and reassessment program