Manifold attack

Machine Learning in general, and Deep Learning in particular, has gained much interest over the past decade and has achieved significant performance improvements on many Computer Vision and Natural Language Processing tasks. To deal with databases that contain only a small number of training samples, or with models that have a large number of parameters, regularization is indispensable.

In this paper, we enforce the preservation of the manifold of the original data (manifold learning) in the latent representation by using a “manifold attack”. The latter is inspired by adversarial learning: finding virtual points that most distort the manifold preservation, then using these points as supplementary samples to train the model. We show that our regularization approach improves both the accuracy rate and the robustness to adversarial examples.
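To make the idea concrete, below is a minimal sketch of a manifold-attack training step. It assumes a PyTorch encoder `f`, a classifier head `clf`, and a simple pairwise-distance formulation of the manifold-preservation loss; the loss form, the attack schedule, and the hyperparameters (`steps`, `step_size`, `lam`) are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def manifold_loss(x, z):
    # Penalize mismatch between pairwise distances in input space
    # and pairwise distances in the latent representation.
    dx = torch.cdist(x, x)
    dz = torch.cdist(z, z)
    return ((dx - dz) ** 2).mean()

def manifold_attack(f, x, steps=5, step_size=0.01):
    # Start the virtual points at the real samples and ascend the
    # manifold-preservation loss, i.e. find points that most distort it.
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = manifold_loss(x_adv, f(x_adv))
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + step_size * grad.sign()).detach().requires_grad_(True)
    return x_adv.detach()

def train_step(f, clf, opt, x, y, lam=0.1):
    # Supervised loss on real data + manifold regularization on the
    # attacked (virtual) points, used as supplementary samples.
    x_adv = manifold_attack(f, x)
    z, z_adv = f(x), f(x_adv)
    loss = torch.nn.functional.cross_entropy(clf(z), y) \
           + lam * manifold_loss(x_adv, z_adv)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The key design choice this sketch illustrates is the adversarial-style inner loop: the virtual points are optimized to maximize the same manifold-preservation term that the outer training step then minimizes.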
