Location
Ottawa, Canada - Hybrid
Other Canadian locations - Remote
About the role
The AI Security Engineer is a hands-on security specialist responsible for designing, implementing, and operating security controls for AI-enabled systems across Corporate IT and the Kinaxis Maestro SaaS ecosystem.
You will serve as a recognized subject matter expert in AI security, defining AI security design considerations, risk assessment recommendations, and control implementations. In this role, you will act as an escalation point for complex AI security events and misuse scenarios, lead AI threat modeling and adversarial testing efforts, and drive measurable improvements in AI detection, prevention, and monitoring capabilities.
You will also contribute to the maturation of AI security frameworks, governance, and operational practices, and provide technical mentorship to teams building and operating AI-enabled solutions.
Vacancy Status
This is an existing job vacancy
What you will do
Security Architecture & Control Design
- Design and implement end-to-end security guardrails across the AI lifecycle, including data ingestion, training, evaluation, deployment, and runtime monitoring.
- Develop secure-by-default patterns for AI-enabled applications.
- Implement controls for agentic workflows, including tool permissioning, action constraints, auditability, and blast-radius reduction.
- Define and enforce secure configuration baselines for AI services such as cloud AI platforms, model gateways, vector databases, and model runtimes.
AI Threat Modeling & Risk Assessment
- Lead AI security design reviews and conduct threat modeling and risk assessments for AI-enabled systems.
- Identify AI-specific risks and translate findings into prioritized mitigation plans, updated standards, and actionable engineering guidance.
- Monitor emerging AI threats, vulnerabilities, and research, incorporating relevant insights into security practices, documentation, and team enablement.
Red Teaming & Adversarial Testing
- Plan and execute targeted adversarial testing against AI-enabled applications and workflows.
- Develop repeatable test cases to evaluate resistance to misuse, data leakage, and unsafe output.
- Partner with internal offensive security teams and external assessors to validate resilience before launch and during major changes.
AI Security Tooling, Telemetry & Monitoring
- Evaluate, deploy, and operate AI security controls.
- Define logging and telemetry requirements for AI-enabled systems.
- Ensure AI security events are integrated into centralized monitoring and response workflows.
AI Security Governance & Collaboration
- Serve as a subject-matter expert for AI security, advising product and engineering teams on secure design choices and risk trade-offs.
- Contribute to the evolution of AI security standards and governance practices.
- Collaborate with engineering leaders to embed AI security requirements into CI/CD and MLOps pipelines, aligned with secure SDLC practices.
- Serve as an escalation point for complex AI security investigations and abuse scenarios.
What we are looking for
Primary Skills and Qualifications
- Bachelor's degree in Information Security, Computer Science, Engineering, or equivalent practical experience.
- 6-8 years of experience in security engineering, application security, cloud security, or security architecture, including hands-on work securing production systems.
- Strong understanding of secure software development practices and modern cloud platforms.
- Demonstrated experience securing production AI-enabled systems.
- Excellent written and verbal communication skills, with the ability to clearly articulate complex technical information.
- Strong analytical, communication, and prioritization skills in fast-moving environments.
- Continuous learning.
- Certifications:
  - Preferred: CISSP
  - Desirable: CAISP, CSSLP, SABSA, Cloud Provider Security Certifications, NIST AI RMF Training, or ISO/IEC 42001 Lead Implementer
Role Specific Skills and Experience
- Deep understanding of LLMs, agents, RAG pipelines, model serving, and MLOps
- Strong grasp of AI-specific threats (prompt injection, jailbreaks, model inversion, poisoning, data leakage)
- Experience deploying AI security defenses (LLM firewalls, policy engines, input/output validation, DLP, monitoring)
- Experience building secure-by-design patterns and defense-in-depth for AI systems
- Ability to define telemetry, logging, and detection strategies for AI systems
- Ability to design and implement security controls across AI tools, platforms, and delivery pipelines
- Hands-on experience performing AI/ML threat modeling
- Ability to translate AI risks into actionable controls and engineering requirements
- Experience testing AI systems against adversarial attacks and abuse scenarios
#Senior; #LI-EM1