Description
We are still learning how to cope with adversarial machine learning. Security researchers are used to perusing code for vulnerabilities. Now they must learn to find security holes in AI systems composed of ...
Summary
- If you’ve been following news about artificial intelligence, you’ve probably seen (or heard of) modified images of pandas, turtles, and stop signs that look ordinary to the human eye but cause AI systems to behave erratically (a minimal sketch of how such adversarial perturbations are generated follows this list).
- But despite the growing body of research on adversarial machine learning, there has been little measurable progress in defending real-world applications against adversarial attacks.
- Security researchers are used to perusing code for vulnerabilities; now they must also learn to find security holes in AI systems.
- Since GitHub has no procedure to detect and block adversarial vulnerabilities, a malicious actor could easily use such tools to publish deep learning models with hidden backdoors, then exploit those backdoors after thousands of developers integrate the models into their applications (a data-poisoning sketch of how such a backdoor is planted follows this list).
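To give a concrete sense of the perturbations mentioned above, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one common way adversarial examples are generated. The tiny linear model, random input, label, and epsilon budget are all illustrative placeholders, not code from the article:

```python
# Minimal FGSM (Fast Gradient Sign Method) sketch in PyTorch.
# The model and input are stand-ins; any differentiable classifier
# admits the same one-line signed-gradient attack.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: 3x32x32 image -> 10 classes (hypothetical).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32)   # benign input
label = torch.tensor([3])          # its true class
epsilon = 8 / 255                  # perturbation budget (assumed)

image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# One signed-gradient step: increases the loss while keeping every
# pixel within +/- epsilon of the original, so the change stays
# nearly invisible to a human.
adv_image = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adv_image).argmax(dim=1).item())
```

The same gradient step carries over to real classifiers such as the panda and stop-sign examples; only the model and data loading change.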
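To make the backdoor scenario concrete, below is a hypothetical sketch of the data-poisoning step an attacker could perform before publishing a model: a small fraction of training images receive a fixed trigger patch and are relabeled to an attacker-chosen class, so the trained model behaves normally until the trigger appears at inference time. The `poison` function, trigger shape, poisoning rate, and synthetic data are all assumptions for illustration:

```python
# Hypothetical data-poisoning backdoor sketch (not from the article).
import torch

def poison(images, labels, rate=0.05, target_class=0):
    """Stamp a 3x3 white patch in the corner of `rate` of the images
    and relabel them to `target_class` (all names are illustrative)."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(rate * len(images))
    idx = torch.randperm(len(images))[:n_poison]
    images[idx, :, :3, :3] = 1.0   # the trigger patch
    labels[idx] = target_class     # attacker-chosen label
    return images, labels

# Synthetic stand-in dataset: 100 RGB 32x32 images, 10 classes.
x = torch.rand(100, 3, 32, 32)
y = torch.randint(0, 10, (100,))
x_poisoned, y_poisoned = poison(x, y)
```

A model trained on `x_poisoned` learns to associate the patch with `target_class`, which is exactly the kind of hidden behavior a hosting platform without adversarial screening would not catch.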