DEF CON 26 AI VILLAGE — Shankar and Kumar — Towards a Framework to Quantitatively Assess AI Safety

Date: 28.11.2018. Author: CISOCLUB. Category: Information security podcasts and videos

While papers on adversarial machine learning pile up on arXiv and companies commit to AI safety, what would a system that assesses the safety of an ML system look like in practice? Compare an ML system to a bridge under construction. Engineers, together with regulatory authorities, routinely and comprehensively assess the safety of the structure to attest to the bridge's reliability and its ability to function under duress before it opens to the public. Can we, as security data scientists, provide similar guarantees for ML systems? This talk lays out the challenges and open questions in creating a framework to quantitatively assess the safety of ML systems. The opportunities, once such a framework is put into effect, are plentiful: for a start, we can build trust with the population at large that ML systems aren't simply brittle, but rather come in varying, quantifiable degrees of safety.
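To make the idea of a "quantifiable degree of safety" concrete, here is a minimal sketch (not from the talk itself): one candidate metric is a model's accuracy under a bounded worst-case input perturbation. For a linear classifier, the worst-case L-infinity attack with budget `eps` has a closed form, which keeps the example self-contained; all names and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 5 dimensions.
n, d = 200, 5
X = np.vstack([rng.normal(-1.0, 1.0, (n, d)), rng.normal(1.0, 1.0, (n, d))])
y = np.hstack([-np.ones(n), np.ones(n)])

# A fixed linear model w·x (assume it was trained elsewhere).
w = np.ones(d)

def robust_accuracy(w, X, y, eps):
    """Accuracy when each input is pushed eps against its true label.

    For a linear model, the worst-case L-infinity perturbation of
    budget eps shifts each coordinate by eps * sign(w), in the
    direction that lowers the correct class's score.
    """
    X_adv = X - eps * y[:, None] * np.sign(w)[None, :]
    return float(np.mean(np.sign(X_adv @ w) == y))

clean = robust_accuracy(w, X, y, eps=0.0)
robust = robust_accuracy(w, X, y, eps=0.5)
print(f"clean accuracy: {clean:.2f}")
print(f"robust accuracy at eps=0.5: {robust:.2f}")
```

Sweeping `eps` and reporting the resulting accuracy curve gives exactly the kind of graded, auditable safety statement the talk calls for, analogous to a bridge's rated load, rather than a binary "safe/unsafe" label.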

