In the adversarial machine learning literature, most research has focused on methods for fooling and defending image classifiers. White-box attacks on these image classifiers generate adversarial examples by backpropagating the gradient of a differentiable classification loss function to the input. This paper instead proposes to generate adversarial examples for machine learning models that solve structured prediction problems. To do so, the authors propose a loss function (called Houdini) that acts as a surrogate for the actual task loss of the model we are interested in breaking. However, the overall finding that fooling these models is possible is not totally surprising. The paper is interesting to the community in that it provides empirical lessons in fooling a set of real-world tasks (ASR, human pose estimation, semantic segmentation) different from image classification, which has already been well studied.
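To make the gradient-based attack idea concrete, below is a minimal sketch of a one-step white-box attack in the style of the Fast Gradient Sign Method, written in PyTorch. This is an illustration of the general approach described above, not the paper's Houdini procedure; the `model`, `loss_fn`, and `epsilon` names are illustrative assumptions. Houdini's contribution is essentially to replace `loss_fn` with a differentiable surrogate of the non-decomposable task loss (e.g., word error rate for ASR).

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.01):
    """One-step white-box attack: backpropagate a differentiable loss
    to the input and perturb in the sign of the gradient.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Move the input in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.detach()
```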