Session: 2 for 1: Critical Conversation: Consuming Open Source Software Securely / Ready Player 2: designing and securing multi-user AI environments
Katherine Druckman: Critical Conversation: Consuming Open Source Software Securely
With the number of available open source projects, including single-maintainer projects, growing exponentially, evaluating and safely consuming open source software has never been more critical or more challenging. Join Katherine Druckman, Open Source Security Evangelist at Intel, to unpack the basics of secure open source consumption.
This talk will explore the fundamentals of evaluating open source projects against maintenance best practices and overall health, and cover the significance of CVEs and how they are addressed within open source projects. We will highlight the roles of project maturity and governance, documented expectations for code contributions, and clearly outlined bug-reporting processes, and how these factors together build confidence in the integrity of our software.
Finally, we’ll touch on the use of tools to help harden the development process and initiatives from the broader open source security community, like the OpenSSF and its projects, that aim to make secure open source software consumption ubiquitous.
Objectives:
- Share best practices for all experience levels.
- Educate about security fundamentals.
- Define challenges in consuming open source software and managing dependencies.
- Demonstrate how to evaluate projects based on active maintenance, patch cycles, and vulnerability management (see the sketch after this list).
- Address security basics from the consumption perspective.
- Connect solutions and tools to pervasive risks.
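As a minimal illustration of the kind of evaluation the session covers, the sketch below queries the OpenSSF Scorecard public API for a project's health checks. The endpoint format, the example repository, and the response fields used are assumptions for illustration and are not drawn from the session itself; consult the Scorecard documentation before relying on them.

```python
# Minimal sketch (assumptions noted above): fetch OpenSSF Scorecard results
# for a project and print the per-check scores that signal active maintenance,
# patch cycles, and vulnerability management.
import json
import urllib.request

# Hypothetical example project; substitute any github.com/<owner>/<repo> you consume.
repo = "github.com/ossf/scorecard"
url = f"https://api.securityscorecards.dev/projects/{repo}"

with urllib.request.urlopen(url, timeout=10) as response:
    data = json.load(response)

# Aggregate score plus individual checks such as Maintained, Vulnerabilities,
# and Code-Review, as returned in the JSON payload.
print(f"{repo}: overall score {data.get('score')}")
for check in data.get("checks", []):
    print(f"  {check.get('name')}: {check.get('score')} - {check.get('reason')}")
```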
Andrew Zigler: Ready Player 2: designing and securing multi-user AI environments
The integration of AI into multi-user environments presents unique security challenges and opportunities. For over a decade, technologists around the world have been marching to the drumbeat of “open source is eating software,” but now the beat is changing: AI is eating everything. The rapid integration of AI across our everyday lives is reshaping the digital landscape. However, this new world introduces complex security questions that demand our immediate attention.
This presentation will examine the intricate security terrain of multi-user AI environments, focusing on AI-driven ChatOps as an example. We will dissect the unique security challenges posed by multi-user AI systems where multiple individuals interact with the same AI context, potentially exposing sensitive information or socially engineering one another.
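To make the shared-context risk concrete, here is a hedged sketch of one possible mitigation, not taken from the session: tagging each message in a ChatOps channel with a visibility level and filtering the shared history per requesting user before it reaches the model. The data model, names, and policy are illustrative assumptions rather than a reference design.

```python
# Illustrative sketch only: scope a shared ChatOps history per requesting user
# before building the prompt, so one user's sensitive messages are not replayed
# to another user who shares the same AI context.
from dataclasses import dataclass, field

@dataclass
class Message:
    author: str
    text: str
    visibility: str = "channel"   # "channel" (shared) or "private" (author-only)

@dataclass
class SharedChannel:
    history: list[Message] = field(default_factory=list)

    def post(self, author: str, text: str, visibility: str = "channel") -> None:
        self.history.append(Message(author, text, visibility))

    def context_for(self, requesting_user: str) -> list[str]:
        """Return only the messages this user is allowed to expose to the model."""
        return [
            f"{m.author}: {m.text}"
            for m in self.history
            if m.visibility == "channel" or m.author == requesting_user
        ]

channel = SharedChannel()
channel.post("alice", "Deploy finished for release 1.4.")
channel.post("alice", "Staging DB password is hunter2.", visibility="private")
channel.post("bob", "Summarize today's deploy, please.")

# Bob's prompt omits Alice's private message, so the model cannot leak it to him.
prompt = "\n".join(channel.context_for("bob"))
print(prompt)
```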
This talk is designed for security-minded practitioners, but even non-technical participants will gain insight into the landscape of multi-user AI, potential vulnerabilities introduced by interactions with those systems, and strategies for maintaining data integrity and privacy in shared AI environments.