
AI legislation is no longer just a legal topic; it creates concrete security and governance requirements that must be built into how Irish businesses design, deploy, and secure AI systems and the sensitive data around them. The EU AI Act sits alongside GDPR, NIS2 and existing cyber rules, and together they expect you not only to control models but to harden your data, your cloud tenants, and your people against misuse and breaches.
Why AI law cares about security
- The EU AI Act requires high‑risk AI systems to achieve an appropriate level of robustness, accuracy, and cybersecurity, including resilience against attacks such as data poisoning, model evasion and confidentiality attacks.
- These requirements build on existing obligations under GDPR and national cyber law, meaning poor security around your AI stack (data lakes, tenants, SaaS tools) can be both a regulatory and operational failure.
Tenant isolation and data architecture
- Multi‑tenant isolation is critical where your AI or data platforms serve multiple business units, clients, or environments; it ensures one tenant’s data cannot be accessed or manipulated by another through strong logical separation, access control, and per‑tenant encryption.
- Best practice includes: separate schemas or databases per tenant, strict IAM roles, network isolation (VLANs/VNETs), container or workspace isolation for AI workloads, and unique encryption keys for each tenant to reduce blast radius if a breach occurs.
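To make the per‑tenant key point concrete, here is a minimal sketch in Python using the `cryptography` package. The class and names are illustrative assumptions, not a reference architecture; in production the keys would live in an HSM‑backed vault rather than in process memory.

```python
# Minimal sketch: one symmetric key per tenant, so data encrypted for
# tenant A can never be decrypted with tenant B's key (illustrative only;
# production keys belong in an HSM-backed vault, not process memory).
from cryptography.fernet import Fernet

class TenantVault:
    def __init__(self):
        self._keys: dict[str, bytes] = {}

    def _fernet_for(self, tenant_id: str) -> Fernet:
        if tenant_id not in self._keys:
            self._keys[tenant_id] = Fernet.generate_key()
        return Fernet(self._keys[tenant_id])

    def encrypt(self, tenant_id: str, plaintext: bytes) -> bytes:
        return self._fernet_for(tenant_id).encrypt(plaintext)

    def decrypt(self, tenant_id: str, token: bytes) -> bytes:
        # Using the wrong tenant's key raises InvalidToken: that failure
        # mode is the reduced blast radius described above.
        return self._fernet_for(tenant_id).decrypt(token)

vault = TenantVault()
blob = vault.encrypt("tenant-a", b"client records")
assert vault.decrypt("tenant-a", blob) == b"client records"
# vault.decrypt("tenant-b", blob)  # would raise InvalidToken
```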
Labelling, encryption and “breach-ready” design
- Under AI and broader data‑protection expectations, training and inference data, prompts, and outputs that contain confidential or personal information should be sensitivity‑labelled (e.g. Public / Internal / Confidential / Restricted) and automatically encrypted at rest and in transit.
- From a resilience perspective, planning to “get the data back” after a breach is almost pointless; the priority is to ensure that whatever is exfiltrated is encrypted with strong keys under tightly governed key management and, ideally, segmented by tenant and by classification so the impact of compromise is minimal.
DLP, Purview and stopping “innocent” exfiltration
- Data Loss Prevention integrated with classification, such as Microsoft Purview DLP, can detect sensitivity labels or patterns in content (e.g. bank details, IDs) and automatically block or warn when users try to email confidential files externally or upload them to unsanctioned cloud storage.
- Real‑world tests show that Purview DLP can prevent exfiltration when rules state that highly sensitive documents cannot be sent to external domains or uploaded to unapproved storage, with client‑side and browser‑extension controls plus insider‑risk analytics to score risky behaviour over time.
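As a rough sketch of the pattern side of such rules (this is not Purview’s rule engine; the regexes, labels, and domain list are simplified assumptions), a label‑and‑pattern evaluation might look like this:

```python
# Minimal sketch of label/pattern-based DLP evaluation (not Purview's
# engine; the patterns and actions are simplified assumptions).
import re

# Crude detectors for sensitive content, e.g. Irish IBANs and PPS numbers.
PATTERNS = {
    "iban": re.compile(r"\bIE\d{2}[A-Z]{4}\d{14}\b"),
    "pps_number": re.compile(r"\b\d{7}[A-W]{1,2}\b"),
}

INTERNAL_DOMAINS = {"example.ie"}  # hypothetical sanctioned domain

def evaluate_send(content: str, label: str, recipient: str) -> str:
    """Return 'block', 'warn', or 'allow' for an outbound message."""
    external = recipient.split("@")[-1] not in INTERNAL_DOMAINS
    hits = [name for name, rx in PATTERNS.items() if rx.search(content)]
    if external and (label in {"Confidential", "Restricted"} or hits):
        return "block"          # confidential data leaving the tenant
    if external and label == "Internal":
        return "warn"           # policy tip: justify or cancel
    return "allow"

print(evaluate_send("IBAN IE29AIBK93115212345678", "Internal", "a@gmail.com"))
# -> block (pattern hit on an external send)
```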
Auditing, governance, testing and training
- The AI Act expects risk management, monitoring, and incident reporting for high‑risk systems, which in practice means continuous logging, auditing of access and data flows, regular adversarial testing of AI models, and alignment with established security frameworks.
- Regular security testing—red‑teaming, blue‑team monitoring, penetration testing of AI pipelines, and targeted review of tenant isolation and DLP controls—helps validate that governance is real and that controls actually resist the attacks regulators are worried about.
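One cheap way to keep that validation real is to turn red‑team findings into regression tests. The sketch below assumes the evaluate_send function from the DLP sketch above has been saved as a hypothetical dlp.py module:

```python
# Sketch: red-team findings as regression tests (pytest style), assuming
# the evaluate_send sketch above lives in a hypothetical dlp.py module.
from dlp import evaluate_send

def test_iban_blocked_on_external_send():
    # A payload the red team actually exfiltrated once should stay blocked.
    verdict = evaluate_send("IBAN IE29AIBK93115212345678",
                            "Internal", "attacker@gmail.com")
    assert verdict == "block"

def test_restricted_label_blocked_even_without_pattern_hit():
    # Label alone should be enough; no pattern match required.
    assert evaluate_send("board strategy notes",
                         "Restricted", "exec@gmail.com") == "block"
```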
Practical steps for Irish businesses
- Map and classify
- Inventory all AI use cases and data flows: where data is ingested, processed, stored, and exported (including SaaS and third‑party models).
- Define and enforce a simple but strict sensitivity labelling scheme (e.g. Public / Internal / Confidential / Restricted) and bind encryption, DLP and retention policies to those labels by default.
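One way to make that binding explicit is a small policy table keyed by label, as in this sketch (the field values are illustrative defaults, not recommendations):

```python
# Sketch: bind encryption, DLP and retention defaults to each label,
# so a document's label alone determines its baseline controls.
from dataclasses import dataclass
from enum import Enum

class Label(Enum):
    PUBLIC = "Public"
    INTERNAL = "Internal"
    CONFIDENTIAL = "Confidential"
    RESTRICTED = "Restricted"

@dataclass(frozen=True)
class LabelPolicy:
    encrypt_at_rest: bool
    dlp_action: str        # what DLP does on external egress
    retention_days: int    # illustrative retention period

POLICIES = {
    Label.PUBLIC:       LabelPolicy(False, "allow", 365),
    Label.INTERNAL:     LabelPolicy(True,  "warn", 730),
    Label.CONFIDENTIAL: LabelPolicy(True,  "block", 2555),
    Label.RESTRICTED:   LabelPolicy(True,  "block_and_alert", 2555),
}

def controls_for(label: Label) -> LabelPolicy:
    return POLICIES[label]  # unlabelled content should default-deny elsewhere
```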
- Harden tenants and infrastructure
- Review tenant architecture for your cloud and AI platforms: implement strong tenant isolation (separate workspaces, schemas, keys, and admin roles for different clients or business units).
- Enforce encryption at rest and in transit, using per‑tenant keys where feasible, HSM‑backed key vaults, strict key‑rotation, and tight access controls around key‑management operations.
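For the rotation point specifically, the `cryptography` package’s MultiFernet shows the shape of the mechanism; this sketch generates keys in memory purely for illustration, where a real deployment would fetch them from an HSM‑backed vault:

```python
# Sketch: key rotation for one tenant using MultiFernet, which decrypts
# with any listed key but always encrypts with the newest one.
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet.generate_key()   # in practice: fetched from an HSM-backed vault
ciphertext = Fernet(old_key).encrypt(b"tenant data")

new_key = Fernet.generate_key()
ring = MultiFernet([Fernet(new_key), Fernet(old_key)])  # newest key first

# rotate() re-encrypts existing data under the newest key, so the old
# key can be retired once all stored blobs have been rotated.
rotated = ring.rotate(ciphertext)
assert Fernet(new_key).decrypt(rotated) == b"tenant data"
```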
- Deploy DLP and exfiltration controls
- Implement Purview (or equivalent) DLP policies that explicitly block or require justification/approval for sending labelled “Confidential/Restricted” data to personal email or external domains, and for uploading such data to non‑approved storage or AI tools.
- Enable endpoint and browser‑level DLP agents/extensions so that even if executives try to drag‑and‑drop sensitive files into webmail or consumer cloud services, rules still trigger, log, and where appropriate block the action.
- Strengthen governance, logging and audits
- Establish an AI and data governance forum (risk, legal, security, business) to own AI risk registers, approve high‑risk use cases, and sign off on model deployments and third‑party AI procurement.
- Turn on detailed logging for access, model use, data movement and policy overrides; schedule regular audits that review DLP incidents, privileged‑user behaviour, and any cross‑tenant access anomalies, with board‑level reporting where necessary.
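The logging itself can be simple; a structured, append‑only event stream like the sketch below (the field names are assumptions) makes later audits of overrides and cross‑tenant access far easier than free‑text logs:

```python
# Sketch: structured audit events for data access and policy overrides,
# written as JSON lines so they are easy to query during audits.
import json, logging
from datetime import datetime, timezone

audit = logging.getLogger("audit")
audit.addHandler(logging.FileHandler("audit.jsonl"))
audit.setLevel(logging.INFO)

def log_event(actor: str, tenant: str, action: str, **detail) -> None:
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "tenant": tenant,
        "action": action,      # e.g. "dlp_override", "cross_tenant_read"
        **detail,
    }))

log_event("exec@example.ie", "tenant-a", "dlp_override",
          label="Confidential", justification="board pack for auditors")
```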
- Test continuously: red vs blue
- Include AI systems and data‑flows in penetration tests and red‑team exercises, focusing on tenant‑escape scenarios, prompt injection into business processes, and attempts to bypass DLP or exploit mis‑labelling.
- Use blue‑team monitoring to tune detections for unusual data‑access volumes, strange export patterns by executives, or repeated policy‑tip dismissals, feeding these into insider‑risk processes.
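Detection tuning can start from a very simple baseline; this sketch flags users whose daily export volume sits far above their own history (the three‑sigma threshold and variance floor are assumptions to tune, not standards):

```python
# Sketch: flag unusual export volumes per user against their own history
# using a simple mean + 3-sigma threshold (tune before relying on it).
from statistics import mean, stdev

def is_anomalous(history_mb: list[float], today_mb: float) -> bool:
    if len(history_mb) < 5:          # too little history to baseline
        return False
    mu, sigma = mean(history_mb), stdev(history_mb)
    return today_mb > mu + 3 * max(sigma, 1.0)  # floor avoids zero-variance noise

# A user who normally exports ~10 MB/day suddenly exporting 900 MB:
print(is_anomalous([8, 12, 9, 11, 10, 9], 900))  # -> True
```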
- Educate executives and staff
- Deliver targeted security‑and‑AI awareness training for executives and power users, explaining that sending sensitive docs to personal email, uploading strategy decks to public AI tools, or “just bypassing” a DLP warning is a direct policy violation and may breach EU law.
- Require explicit sign‑off on acceptable‑use and AI‑use policies, and back this up with simulated scenarios and just‑in‑time prompts when users encounter DLP blocks, so culture, not just tooling, reinforces secure behaviour.
Sources
- https://www.bsigroup.com/en-IE/insights-and-media/insights/blogs/the-eu-ai-act-and-its-interactions-with-cybersecurity-legislation/
- https://publications.matheson.com/cyber-bulletin-may-2024/the-ai-act-key-takeaways-for-cybersecurity-compliance
- https://iuslaboris.com/insights/cyber-security-obligations-under-the-eu-ai-act/
- https://ones.com/blog/multi-tenancy-isolation-cloud-security-saas-best-practices/
- https://securityboulevard.com/2025/12/tenant-isolation-in-multi-tenant-systems-architecture-identity-and-security/
- https://blog.nviso.eu/2024/12/18/microsoft-purview-evading-data-loss-prevention-policies/
- https://www.syskit.com/blog/advanced-data-loss-prevention-dlp-purview/
- https://www.morganlewis.com/pubs/2024/07/eu-ai-act-us-nist-target-cyberattacks-on-ai-systems-guidance-and-reporting-obligations
- https://artificialintelligenceact.eu/recital/76/