Session: End-to-end secure ML development

We are seeing a rapid increase in the number of AI-powered applications. At the same time, AI software is repeating the security mistakes of traditional software, but on an accelerated timeline and with higher risks.

In this tutorial, we aim to show how AI applications can be developed securely: starting with the datasets and software dependencies, building a secure software supply chain, and accepting into production only models with clear, untampered provenance (not only SLSA provenance, but also analysis of the models' capabilities to eliminate future risks). For example, we want to be able to trace a bad inference in production back to a potentially poisoned input in the training dataset. We will also show how to reduce the cost of retraining models in the event of an ML framework compromise by analyzing the blast radius and retraining only the impacted models.
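
As a minimal sketch of the kind of admission check we have in mind (the provenance schema and file layout here are hypothetical, loosely modeled on in-toto/SLSA attestation statements), a deployment gate might hash a model artifact and compare it against an already signature-verified provenance record before serving the model:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a model artifact in chunks so large files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def admit_model(model_path: Path, provenance_path: Path) -> bool:
    """Admit a model into production only if its hash matches its provenance."""
    # Assumed: the provenance statement's signature was already verified
    # (e.g., with Sigstore) before this check runs.
    provenance = json.loads(provenance_path.read_text())
    return sha256_of(model_path) == provenance["subject"]["sha256"]
```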

To achieve this, we need an AI/ML control plane and an AI/ML package manager, both of which we will introduce during the tutorial.
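
To illustrate the blast-radius analysis mentioned above, here is a minimal sketch (with a hypothetical data model; a real control plane would record richer metadata at training time) of a dependency graph from frameworks and datasets to the models built from them, so that a compromise yields a targeted retraining set instead of a full rebuild:

```python
from collections import defaultdict

# Hypothetical edges: input artifact -> models built from it.
dependents: dict[str, set[str]] = defaultdict(set)

def record_training(model: str, inputs: list[str]) -> None:
    """Record that `model` was trained from the given input artifacts."""
    for artifact in inputs:
        dependents[artifact].add(model)

def blast_radius(compromised: str) -> set[str]:
    """Return every model transitively affected by a compromised artifact."""
    affected, frontier = set(), [compromised]
    while frontier:
        for model in dependents.get(frontier.pop(), ()):
            if model not in affected:
                affected.add(model)
                # Models can feed other models (e.g., distillation), so recurse.
                frontier.append(model)
    return affected

record_training("classifier-v1", ["pytorch==2.1.0", "dataset-a"])
record_training("distilled-v1", ["classifier-v1"])
print(blast_radius("pytorch==2.1.0"))  # {'classifier-v1', 'distilled-v1'}
```

Only the models in the returned set need retraining once the compromised framework version is replaced.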

Presenters: