Guaranteeing AI Robustness against Deception

DARPA is funding research into defenses against attacks on AI systems.

The growing sophistication and ubiquity of ML components in advanced systems dramatically increases capabilities, but as a byproduct it also increases opportunities for new, potentially unidentified vulnerabilities. The acceleration in ML attack capabilities has prompted an arms race: as defenses are developed to address new attack strategies and vulnerabilities, improved attack methodologies capable of bypassing those defenses are created. The field is increasingly pessimistic, sensing that developing effective ML defenses may prove significantly more difficult than designing new attacks, leaving advanced systems vulnerable and exposed. Further, the lack of a comprehensive theoretical understanding of ML vulnerabilities in the “adversarial examples” field leaves significant exploitable blind spots in advanced systems and limits efforts to develop effective defenses.
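
For concreteness, below is a minimal sketch of the kind of adversarial-example attack the paragraph refers to, using the well-known Fast Gradient Sign Method in PyTorch. The `model`, `x`, `y`, and `eps` names are illustrative placeholders, not anything specific to the GARD program.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Craft an adversarial example with the Fast Gradient Sign Method.

    Nudges the input `x` in the direction that increases the model's loss,
    bounded by `eps` per pixel, so the change is barely perceptible to a
    human but can flip the classifier's prediction.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss w.r.t. the true label
    loss.backward()
    # Step in the sign of the input gradient, then clamp to a valid image range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Defenses tuned to one specific perturbation like this are routinely bypassed by stronger, adaptive attacks, which is the arms-race dynamic the research aims to break.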
