NOXVERI Service

AI Security — adopting artificial intelligence without unmanaged risk

AI adoption is accelerating faster than the security and governance frameworks around it. Most organisations have already deployed AI tools — many without a clear picture of what data goes in, what comes out, who is accountable, or what happens when something goes wrong.

AI Security AI Risk LLM Security AI Governance Model Risk AI Act Prompt Injection Data Security

AI is not just an IT problem

Organisations that treat AI security as a subset of IT security miss most of the risk. The threat surface of AI systems extends into governance, liability, supplier relationships and operational processes — areas where the security function does not traditionally operate, and where the consequences of getting it wrong are not always reversible.

Risk 1

Model risk

AI models — particularly large language models — produce outputs that are probabilistic, not deterministic. Hallucinations, factual errors and biased outputs are inherent properties, not bugs to be patched. When AI outputs inform decisions, the organisation needs to understand the error characteristics of the specific model, in the specific use case, at the specific confidence thresholds in use. Model risk extends to training data poisoning — adversarial manipulation of training data to introduce systematic errors or backdoors into deployed models.

Risk 2

Data security and information leakage

What data does the organisation feed into AI systems? What does the AI vendor do with that data — how is it stored, is it used for model training, what are the data retention terms? What information can be extracted from the model through normal interaction? Most AI deployment agreements are signed at business unit level, without security or legal review of the data processing implications. The data that goes into an AI system should be treated as disclosed to the vendor and potentially to the model itself.

Risk 3

AI vendor dependencies

AI infrastructure — model providers, API platforms, vector databases, orchestration frameworks — creates a new category of critical third-party dependency. Lock-in is often deeper than with traditional software: switching model providers requires re-evaluation, re-testing and potentially retraining. Service continuity, SLA terms and the vendor's own security posture are material risks for any organisation with production AI dependencies. DORA's ICT third-party risk requirements apply to AI infrastructure providers where they are critical to the financial entity's operations.

Risk 4

Governance and accountability

When an AI system makes or influences a decision — a credit decision, a medical triage recommendation, a hiring screen — who is accountable for that decision? In most organisations today, the honest answer is that no one has thought this through. The EU AI Act creates legal obligations for organisations deploying AI in high-risk use cases, including requirements for human oversight, documentation and conformity assessment. Accountability gaps that are tolerable today become regulatory exposure under the Act.

Risk 5

Abuse scenarios

Prompt injection — manipulating an AI system's behaviour through crafted inputs — is a class of attack with no perfect defence and growing exploitation in production systems. Beyond prompt injection: AI-powered social engineering (highly targeted, highly convincing phishing and impersonation at scale), data exfiltration through LLM interactions, and adversarial attacks designed to manipulate AI-driven security tools themselves. These are not theoretical — they are being used in practice against organisations with deployed AI systems.

Practical controls, not theoretical frameworks

NOXVERI's AI security work is grounded in the specific systems the organisation is actually using, the specific risks those systems create given the organisation's context, and the specific controls that are feasible to implement. The output is actionable, not aspirational.

Where NOXVERI fits in: AI security sits at the intersection of security, governance and emerging technology. NOXVERI brings security expertise and a pragmatic governance perspective — not AI engineering. The focus is on the risk the AI systems create for the organisation and the controls that reduce that risk to an acceptable level.

The regulatory framework is already here

AI is no longer a regulatory grey area in Europe. The EU AI Act entered into force in August 2024 and is being phased in, with the highest-risk provisions applying earliest. Organisations that have not mapped their AI use cases against the Act's risk classification are behind the curve — and the curve is moving fast.

EU AI Act

Risk-based classification and obligations

The AI Act classifies AI systems by risk: unacceptable risk (prohibited), high risk (strict obligations), limited risk (transparency requirements), minimal risk (no specific obligations). High-risk AI systems — including AI used in employment decisions, credit scoring, critical infrastructure management, law enforcement and essential services — require conformity assessment, registration, human oversight, documentation and ongoing monitoring.

General-purpose AI models (GPAIs), including large language models, face transparency and documentation requirements. The most capable GPAIs face additional systemic risk obligations. Providers and deployers have distinct obligations — deploying a general-purpose AI model in a high-risk use case creates obligations for the deployer, not just the model provider.

NIS2 & ENISA

AI systems as critical infrastructure

NIS2 covers AI systems that form part of critical infrastructure or essential services — either as the primary system or as a component that services depend on. Organisations that have integrated AI into operations covered by NIS2 obligations should assess whether those AI components are in scope for NIS2 risk management measures.

ENISA has published guidance on AI security and threat landscapes for AI systems. The guidance covers adversarial machine learning, data poisoning, model theft and other AI-specific threat categories — providing the threat intelligence foundation for a structured AI risk assessment.

Organisations deploying AI — with or without a formal programme

The most common starting point is an organisation that has already deployed AI tools — ChatGPT, Copilot, custom LLM integrations — without a governance framework to match. The risk didn't wait for the framework. The work is catching up.

01

Organisations deploying AI or LLMs in operations

Any organisation using AI systems that touch customer data, influence operational decisions or integrate with production systems. The engagement maps the risk created by current deployments and identifies the controls needed to reduce that risk to an acceptable level — before a security incident or regulatory review forces the question.

02

Organisations with ChatGPT, Copilot or similar tools in use without formal oversight

Where employees are using AI tools — with or without official approval — but no governance framework exists. What data are they uploading? What are the terms of the AI service? What sensitive information might be accessible through the AI? The assessment starts by answering these questions, then builds the controls and policies needed to manage the risk going forward.

03

Boards that need to understand AI risk

Management bodies facing AI Act obligations, investor questions about AI risk, or simply the recognition that AI is now a material business risk that belongs on the board agenda alongside cyber and operational risk. NOXVERI provides a board-level view of the AI risk landscape, the organisation's current exposure and the governance changes needed — in a format suited to governance decision-making, not technical review.

Let's talk about your AI risk picture

Send a brief description of how your organisation is using AI — what tools, what use cases, what concerns. NOXVERI will come back with an honest assessment of the risk and how the engagement can help address it. No commitment, no templated proposal.

Schedule a conversation