The detector-network approach uses a neural network to directly predict whether a given sample is an adversarial example, i.e., it turns adversarial-example detection into a binary classification problem trained end to end (Metzen et al., 2017). The method augments the original neural network (the classifier) with a detector subnetwork whose task is to decide whether a sample comes from the real data. The detector outputs a scalar in the range [0, 1], interpreted as the probability that the input is an adversarial example. The design of the detector depends on the specific dataset; the architecture used is typically a convolutional neural network.
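As a rough illustration of how such a detector can be attached, here is a minimal PyTorch sketch (not the exact architecture from Metzen et al., 2017): a small convolutional head reads an intermediate feature map of an existing classifier and outputs the probability that the input is adversarial. The `backbone`, its `extract_features` helper, and all layer sizes are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class DetectorHead(nn.Module):
    """Binary detector subnetwork attached to an intermediate feature map."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling over H x W
            nn.Flatten(),
            nn.Linear(64, 1),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Probability that the input sample is an adversarial example.
        return torch.sigmoid(self.net(features)).squeeze(1)

# Usage sketch (backbone and extract_features are hypothetical):
# features = backbone.extract_features(x)          # shape (N, C, H, W)
# detector = DetectorHead(in_channels=features.shape[1])
# p_adv = detector(features)                       # shape (N,), values in (0, 1)
# loss = nn.functional.binary_cross_entropy(p_adv, is_adv_labels.float())
```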
Adversarial Attack and Defense Classics, Part 2: Explaining and harnessing adversarial examples
This paper proposes a classic method for generating adversarial examples, the FGSM algorithm. FGSM (Goodfellow et al., 2014) is an untargeted attack: it computes a perturbation on top of the original sample so that the perturbed sample is classified into a wrong class. For convenience, define the perturbed sample as x' = x + r, where x is the original sample and r is the perturbation. Clearly, for well-separated classes, as long as ||r||_∞ < ε (with ε sufficiently small), we expect the classifier to assign x and x' to the same class.
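Concretely, FGSM chooses r = ε · sign(∇_x J(θ, x, y)), a single signed-gradient step that maximally increases the loss within the L∞ ball. Below is a minimal PyTorch sketch under the assumption of a differentiable classifier `model`, cross-entropy loss, and inputs normalized to [0, 1]; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """Untargeted FGSM: x' = x + epsilon * sign(grad_x J(theta, x, y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss w.r.t. the true label y
    loss.backward()
    # One signed-gradient step; the perturbation r satisfies ||r||_inf = epsilon.
    x_adv = x + epsilon * x.grad.sign()
    # Assumes inputs live in [0, 1]; clip back into the valid range.
    return x_adv.clamp(0.0, 1.0).detach()
```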
Adversarial Attack and Defense Classics, Part 1: Intriguing properties of neural networks
Deep neural networks are highly expressive models that have achieved great success in speech recognition and computer vision. Although this high expressiveness is the reason for their success, it also leads them to learn uninterpretable solutions that can have counter-intuitive properties. This work reports two such properties:
(1) First, individual high-level units are indistinguishable from random linear combinations of high-level units. This suggests that in the higher layers of a neural network, semantic information is carried by the activation space as a whole rather than by individual units.
(2) Second, the work finds that the input-output mapping learned by a deep neural network is to a large extent discontinuous; in other words, the representation learned by the network is not continuous, which is why an imperceptibly small perturbation of an input can change the network's prediction.
GAN Papers Organized by Category - ICLR 2018
Download link: https://pan.baidu.com/s/1U2NgEzbNxCAinZNp2cvOPQ
Extraction code: 4oi0
GAN Papers Organized by Category - ICML 2018
Download link: https://pan.baidu.com/s/1xtT0K-2N88nebX6ocnhLnQ
Extraction code: vnmp
GAN Papers Organized by Category - CVPR 2018
Download link: https://pan.baidu.com/s/1bCamc3JeAe5Au4WH9hVGFg
Extraction code: ut33
Proceedings of the 2018 International Conferences on Artificial Intelligence
AAAI (AAAI Conference on Artificial Intelligence): https://aaai.org/Library/AAAI/aaai18contents.php
CVPR (IEEE Conference on Computer Vision and Pattern Recognition): http://openaccess.thecvf.com/CVPR2018.py
ICCV (International Conference on Computer Vision): https://waset.org/Publications
ICML (International Conference on Machine Learning): https://icml.cc/Conferences/2018/Schedule?type=Poster
IJCAI (International Joint Conference on Artificial Intelligence): https://www.ijcai.org/proceedings/2018/
NeurIPS (Annual Conference on Neural Information Processing Systems): https://papers.nips.cc/book/advances-in-neural-information-processing-systems-31-2018
ACL (Annual Meeting of the Association for Computational Linguistics): https://acl2018.org/programme/papers/
ECCV (European Conference on Computer Vision): http://openaccess.thecvf.com/ECCV2018.py
COLING (International Conference on Computational Linguistics): http://coling2018.org/coling-2018-accepted-papers/
UAI (International Conference on Uncertainty in Artificial Intelligence): http://www.auai.org/uai2018/accepted.php#
Latent Alignment and Variational Attention
Abstract: Neural attention has become central to many state-of-the-art models in natural language processing and related domains. Attention networks are an easy-to-train and effective method for softly simulating alignment; however, the approach does not marginalize over latent alignments in a probabilistic sense. This property makes it difficult to compare attention to other alignment approaches, to compose it with probabilistic models, and to perform posterior inference conditioned on observed data. A related latent approach, hard attention, fixes these issues, but is generally harder to train and less accurate. This work considers variational attention networks, alternatives to soft and hard attention for learning latent variable alignment models, with tighter approximation bounds based on amortized variational inference. We further propose methods for reducing the variance of gradients to make these approaches computationally feasible. Experiments show that for machine translation and visual question answering, inefficient exact latent variable models outperform standard neural attention, but these gains go away when using hard attention based training. On the other hand, variational attention retains most of the performance gain but with training speed comparable to neural attention.
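For readers less familiar with the terminology, the soft/hard distinction above can be summarized in a short sketch (not the paper's code): soft attention takes a deterministic expectation of the values under the alignment weights, while hard attention samples a single latent alignment from the same categorical distribution, which is what makes its gradients harder to estimate. Shapes and names below are assumptions.

```python
import torch

def soft_attention(query, keys, values):
    """Soft attention: expectation of the values under the alignment distribution."""
    scores = keys @ query                   # (T,) alignment scores
    weights = torch.softmax(scores, dim=0)  # categorical alignment distribution
    return weights @ values                 # deterministic weighted average

def hard_attention(query, keys, values):
    """Hard attention: sample one latent alignment from the same distribution."""
    scores = keys @ query
    dist = torch.distributions.Categorical(logits=scores)
    idx = dist.sample()                     # discrete latent alignment
    # log_prob is what score-function (REINFORCE-style) gradient estimators need.
    return values[idx], dist.log_prob(idx)

# Example shapes: keys and values of shape (T, d), query of shape (d,).
```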
Implicit Autoencoders
Abstract: In this paper, we describe the “implicit autoencoder” (IAE), a generative autoencoder in which both the generative path and the recognition path are parametrized by implicit distributions. We use two generative adversarial networks to define the reconstruction and the regularization cost functions of the implicit autoencoder, and derive the learning rules based on maximum-likelihood learning. Using implicit distributions allows us to learn more expressive posterior and conditional likelihood distributions for the autoencoder. Learning an expressive conditional likelihood distribution enables the latent code to only capture the abstract and high-level information of the data, while the remaining information is captured by the implicit conditional likelihood distribution. For example, we show that implicit autoencoders can disentangle the global and local information, and perform deterministic or stochastic reconstructions of the images. We further show that implicit autoencoders can disentangle discrete underlying factors of variation from the continuous factors in an unsupervised fashion, and perform clustering and semi-supervised learning.
Wasserstein Auto-Encoders
Abstract: We propose the Wasserstein Auto-Encoder (WAE)—a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE) [1]. This regularizer encourages the encoded training distribution to match the prior. We compare our algorithm with several other techniques and show that it is a generalization of adversarial auto-encoders (AAE) [2]. Our experiments show that WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality, as measured by the FID score.
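As a rough sketch of the penalized objective described above, one common instantiation (WAE-MMD, which uses a maximum mean discrepancy penalty rather than an adversarial one, and is not necessarily the exact formulation in the paper) combines a reconstruction cost with a term pushing the encoded codes toward a standard Gaussian prior. The kernel bandwidth `sigma` and weight `lam` are assumed hyperparameters.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased MMD^2 estimate with an RBF kernel between two batches of codes."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def wae_mmd_loss(x, encoder, decoder, lam=10.0):
    """Reconstruction cost plus a penalty matching the encoded distribution to the prior."""
    z = encoder(x)                      # codes of the training batch
    x_rec = decoder(z)
    recon = ((x - x_rec) ** 2).mean()   # squared-error reconstruction cost
    z_prior = torch.randn_like(z)       # samples from the standard Gaussian prior
    return recon + lam * rbf_mmd2(z, z_prior)
```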