DEF CON 26 AI VILLAGE — Shankar and Kumar — Towards a Framework to Quantitatively Assess AI Safety

Date: 28.11.2018. Author: CISO CLUB. Category: Information security podcasts and videos

While papers on adversarial machine learning pile up on arXiv and companies commit to AI safety, what would a system that assesses the safety of an ML system look like in practice? Compare an ML system to a bridge under construction. Engineers, together with regulatory authorities, routinely and comprehensively assess the safety of the structure to attest to the bridge's reliability and its ability to function under duress before opening it to the public. Can we, as security data scientists, provide similar guarantees for ML systems? This talk lays out the challenges and open questions in creating a framework to quantitatively assess the safety of ML systems. The opportunities, once such a framework is put into effect, are plentiful: for a start, we can build trust with the population at large that ML systems aren't simply brittle, but rather come in varying, quantifiable degrees of safety.
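As a rough illustration of what a "quantifiable degree of safety" might look like in code (this is a hypothetical sketch, not the speakers' framework), one could report a model's accuracy under a bounded FGSM-style adversarial perturbation alongside its clean accuracy. The toy data, the linear model, and the "safety margin" ratio below are all illustrative assumptions.

```python
# Hypothetical sketch: quantify robustness of a toy linear classifier as
# accuracy under an L-infinity, epsilon-bounded FGSM-style perturbation.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two overlapping Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 1.0, size=(200, 2)),
               rng.normal(+1.0, 1.0, size=(200, 2))])
y = np.hstack([np.zeros(200), np.ones(200)])

# Fit logistic regression with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(y = 1)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def accuracy(X, y, w, b):
    return np.mean(((X @ w + b) > 0).astype(float) == y)

def fgsm_accuracy(X, y, w, b, eps):
    """Accuracy after an FGSM-style perturbation of L-inf size eps.

    For a linear-logistic model the input gradient of the loss is
    (p - y) * w, so its sign is -sign(w) for y = 1 and +sign(w) for y = 0;
    each point gets pushed toward the decision boundary.
    """
    sign = np.where(y[:, None] == 1, -1.0, 1.0)
    X_adv = X + eps * sign * np.sign(w)[None, :]
    return accuracy(X_adv, y, w, b)

clean = accuracy(X, y, w, b)
for eps in (0.1, 0.3, 0.5):
    robust = fgsm_accuracy(X, y, w, b, eps)
    print(f"eps={eps:.1f}  clean={clean:.3f}  robust={robust:.3f}  "
          f"safety margin={robust / clean:.3f}")
```

The point of the sketch is only that robustness can be expressed as a number per perturbation budget, which is the kind of quantity a certification-style framework could report, much like load ratings for a bridge.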
