ANT3005

MLSecOps

Duration of training: 5 days


course description

This course covers the theory and practice of modern threats, attacks, defenses, and auditing for artificial intelligence and machine learning systems. Students learn to model threats; to implement and detect attacks (adversarial examples, data poisoning, supply-chain compromise, privacy leakage); and to apply tools for building secure ML pipelines and for monitoring and auditing machine learning models. Particular attention is given to hands-on labs with open-source (OSS) frameworks.

course audience

Cybersecurity professionals who want to master the specifics of protecting ML/AI systems, as well as developers and architects of AI solutions who build ML pipelines with security requirements in mind.

prerequisites

You should have working knowledge of Python and practical experience with ML/AI frameworks and tooling (scikit-learn, PyTorch, TensorFlow, Jupyter).

how the training works

online course

The online course consists of group classes with an instructor via video conferencing, plus homework and an exam.

for corporate clients

Training for corporate clients includes online and self-study courses, as well as additional services corporate clients require: organizing training plans for client departments, assessing training effectiveness, and more.

course teacher

course program

• Risks and specifics of AI systems: differences from classical information security.
• Classification of threats (adversarial ML, data poisoning, model stealing, privacy leaks, etc.).
• Typical attackers, motivations and attack scenarios.
• Threat models for the ML/AI pipeline.
• Brief analysis of incidents (case studies).
• Regulatory requirements (NIST, ISO/IEC 27001, 27017, IEEE) for AI.
• Adversarial attacks: theory and generation methods (FGSM, PGD, Carlini-Wagner, etc.).
• Attack categories (white-box/black-box, targeted/untargeted).
• Attacked objects: classification, segmentation, OCR, sound, NLP.
• Protection mechanisms: adversarial training, input preprocessing, hardened architectures.
• Methods for assessing model robustness (robustness metrics).
• Data poisoning: types of attacks (label flipping, backdoor, clean-label).
• Attack mechanisms and stages of implementation.
• Supply-chain hazards: substitution of models, repositories, and libraries.
• Examples of attacks via PyPI and Hugging Face.
• Protection mechanisms: content tracking, source code verification, sandboxes.
• Current standards and best practices (MLSecOps, DevSecOps).
• Leaks through trained models: membership inference, model inversion.
• Risks: reconstruction attacks, extraction attacks.
• Security measures: differential privacy, federated learning, homomorphic encryption.
• Attack detection: practical methods for identifying leaks.
• Regulatory requirements: GDPR, emerging EAEU/Russian regulation, NIST AI RMF.
• Comprehensive security architecture for ML/AI systems.
• Building a secure pipeline (CI/CD, MLOps, Model Registry, Monitoring).
• Audit of protection quality and monitoring of models.
• Incident response for AI/ML.
• Examples of corporate security architecture (Google TFX, Kubeflow, Microsoft).
• Brief overview of audit scenarios.
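To give a flavor of the adversarial-attacks module, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) in PyTorch (one of the frameworks listed in the prerequisites). The linear model and random inputs are toy placeholders, not course materials; FGSM takes a single step of size epsilon in the sign of the input gradient of the loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """One-step FGSM: perturb x by epsilon in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # A single step in the gradient-sign direction locally maximizes the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

torch.manual_seed(0)
model = nn.Linear(4, 3)      # placeholder classifier for illustration
x = torch.randn(2, 4)        # two benign inputs
y = torch.tensor([0, 1])     # their true labels
x_adv = fgsm_attack(model, x, y, epsilon=0.1)
# The L-infinity norm of the perturbation is bounded by epsilon.
print((x_adv - x).abs().max().item())
```

The same bounded-perturbation property is what robustness metrics in the course measure: how much model accuracy degrades under attacks constrained to a given epsilon.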

course purchase options

individual

Cost — $1,380.00

Group online classes

Unlimited access to all materials

Live webinars with teachers

Homework

Exam with certificate

To confirm course dates, fill out the form.

SUBMIT YOUR APPLICATION

* By clicking “Send”, you agree to the Terms of Service and Privacy Policy

corporate

Cost — from $1,380.00

To obtain the final cost and confirm course dates, please fill out the form.

SUBMIT YOUR APPLICATION

* By clicking “Send”, you agree to the Terms of Service and Privacy Policy