Role: Cyber AI Engineer - 4 positions open with AT&T
Mode of Interview (MOI): Skype
Visa: No OPT
Focus areas: secure AI system design and implementation; threat modeling for AI/LLM use cases; adversarial/abuse testing; data privacy and secure-by-default controls; IAM/secrets management; secure APIs/integrations; monitoring and incident response; vulnerability management; security automation; collaboration with ML/engineering to embed governance and guardrails into production AI.
Role Overview
We are seeking a Cyber AI Engineer to design, build, and secure production-grade AI systems. This role focuses on embedding security, privacy, and governance into AI/LLM platforms through secure-by-default architectures, adversarial testing, and continuous monitoring, working closely with ML, platform, and product engineering teams.
Key Responsibilities
- Design and implement secure AI/LLM systems with privacy-first and secure-by-default controls
- Perform threat modeling for AI/LLM use cases, including data poisoning, prompt injection, model abuse, and supply-chain risks
- Conduct adversarial and abuse testing to identify edge cases and misuse scenarios
- Implement data privacy, access controls, and governance guardrails across AI pipelines
- Build and manage IAM, secrets management, and secure key handling for AI services
- Design and secure APIs, integrations, and model endpoints
- Implement monitoring, logging, and incident response for AI systems
- Drive vulnerability management and remediation for AI platforms and dependencies
- Automate security controls, testing, and compliance checks
- Collaborate with ML, engineering, and product teams to embed security, governance, and release readiness into production AI
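To illustrate the kind of guardrail work the responsibilities above describe, here is a minimal, hypothetical sketch of a pre-screening check that flags common prompt-injection phrasings before a request reaches a model. All names are illustrative; a production system would layer this with model-based classifiers, output filtering, and logging, not rely on regex heuristics alone.

```python
import re

# Hypothetical denylist of common prompt-injection phrasings.
# Illustrative only; real guardrails combine multiple detection layers.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"reveal (your )?(system prompt|hidden instructions)",
]

def screen_prompt(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A flagged input could then be blocked, logged for incident response, or routed to additional review, tying the detection step into the monitoring and IR responsibilities listed above.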
Required Qualifications
- Experience securing AI/ML or LLM-based systems in production environments
- Strong background in application security, cloud security, and API security
- Hands-on experience with threat modeling and adversarial testing
- Knowledge of IAM, secrets management, encryption, and data privacy controls
- Familiarity with security monitoring, incident response, and automation
- Ability to work cross-functionally with ML and engineering teams
Nice to Have
- Experience with LLM platforms, prompt safety, and AI governance frameworks
- Knowledge of model evaluation, red-teaming, or AI abuse prevention
- Background in DevSecOps or security automation