DEF CON 26 AI VILLAGE — Kang Li — Beyond Adversarial Learning: Security Risks in AI Implementations

Date: 28.11.2018. Author: CISO CLUB. Categories: Information security podcasts and videos

A year after we discovered and reported a number of CVEs related to deep learning frameworks, many security and AI researchers have started to pay more attention to the software security of AI systems. Unfortunately, many deep learning developers are still unaware of the risks buried in AI software implementations. For example, by inspecting a set of newly developed AI applications, such as image classification and voice recognition, we found that they make strong assumptions about the input formats used for training and classification. Attackers can easily manipulate classification and recognition results without putting any effort into adversarial learning. In fact, the potential danger introduced by software bugs and a lack of input validation is far more severe than a weakness in a deep learning model. This talk shows threat examples that produce various attack effects, from evading classification, to data leakage, and even to whole-system compromise. We hope that by demonstrating such threats and risks, we can draw developers' attention to software implementations and call for a collaborative community effort to improve the software security of deep learning frameworks and AI applications.
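The class of bug the abstract describes — an application trusting the input format it was trained on, with no validation of what an attacker actually supplies — can be illustrated with a minimal sketch. Everything below is hypothetical for illustration (a made-up toy image format and invented function names, not code from the talk or from any real framework): a loader that trusts header-declared dimensions versus one that checks them against the payload.

```python
import struct

# Toy image format (invented for illustration): an 8-byte header of two
# little-endian uint32 fields declaring width and height, followed by raw
# grayscale pixel bytes. Real parsers for PNG/BMP/WAV make analogous
# header-vs-payload consistency decisions.

def load_image_unsafe(blob: bytes):
    """Trusts the header blindly -- the strong input-format assumption
    the talk warns about. A lying header silently yields malformed rows."""
    width, height = struct.unpack_from("<II", blob, 0)
    pixels = blob[8:]
    # Reshapes without checking that len(pixels) == width * height.
    return [list(pixels[r * width:(r + 1) * width]) for r in range(height)]

def load_image_checked(blob: bytes):
    """Validates declared dimensions against the actual payload size
    before handing anything to the rest of the pipeline."""
    if len(blob) < 8:
        raise ValueError("truncated header")
    width, height = struct.unpack_from("<II", blob, 0)
    pixels = blob[8:]
    if width == 0 or height == 0 or len(pixels) != width * height:
        raise ValueError("header/payload size mismatch")
    return [list(pixels[r * width:(r + 1) * width]) for r in range(height)]

# A malicious blob: the header claims a 4x4 image (16 pixels) but the
# payload carries only 4 bytes.
evil = struct.pack("<II", 4, 4) + bytes([255, 255, 255, 255])

rows = load_image_unsafe(evil)   # no error: rows 1-3 come back empty
checked_rejected = False
try:
    load_image_checked(evil)
except ValueError:
    checked_rejected = True      # the validated loader rejects the input
```

In a memory-unsafe language the same lying header is not merely a malformed tensor but an out-of-bounds read or write, which is how such parsing bugs escalate from misclassification to data leakage or full compromise.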


