This paper proposes DIFEX to address the insufficiency of invariant features in domain generalization, dividing them into internally-invariant and mutually-invariant features. Through a deep analysis, it combines Fourier phase information with cross-domain knowledge to improve model performance and robustness. Experiments show strong results on multiple datasets, and the method is easy to extend.
1 Title
Domain-invariant Feature Exploration for Domain Generalization (Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, Xing Xie) [TMLR, 07/2022]
2 Conclusion
This paper focuses on domain-generalized representation learning. To solve the problem that the invariant features obtained by existing representation learning are not sufficient, it divides invariant features, for the first time, into internally-invariant and mutually-invariant features, and proposes an improved DG (domain generalization) method: DIFEX. Experiments are carried out on image and time-series data in both multi-source and single-source scenarios, and the results show that DIFEX obtains better results by fully exploiting features, while remaining applicable across scenarios and easy to extend.
3 Good Sentence
1. The popularity and effectiveness of domain-invariant learning naturally motivate us to seek the rationale behind this kind of approach: what are the domain-invariant features and how to further improve its performance on DG? (The motivation of this research)
2. In this paper, we take a deep analysis of the domain-invariant features. Specifically, we argue that domain-invariant features should be originating from both internal and mutual sides: the internally-invariant features capture the intrinsic semantic knowledge of the data while the mutually-invariant features learn the cross-domain transferable knowledge. (The principle of the method proposed by this paper)
3. As discussed early, Fourier phase features alone are insufficient to obtain enough discriminative features for classification. Thus, we explore the mutually-invariant features by leveraging the cross-domain knowledge contained in multiple training domains. (How this method improves the learning of invariant features)
4. This demonstrates the great performance of our approach in these datasets. Moreover, we see that alignments across domains and exploiting characteristics of data own can both bring remarkable improvements. (The experimental results show the improvement brought by DIFEX)
This paper focuses on domain-generalized representation learning. To address the problem that invariant features obtained by existing representation learning are not sufficient, it divides invariant features, for the first time, into internally-invariant and mutually-invariant features, and proposes an improved DG (domain generalization) method: DIFEX.
For fairness, the last-layer features are split into two halves: one half learns internally-invariant features, the other half learns mutually-invariant features.
For feature diversity, a regularization term is proposed to make the two kinds of features differ as much as possible.
1. Learning internally-invariant features through a simple distillation framework: although this adds some training cost, it keeps prediction end-to-end and avoids unnecessary FFT computation at inference. A teacher network is first trained as a classifier on Fourier phase information, so that it captures the classification-relevant part of that information. After training, the teacher is assumed to hold this classification-relevant Fourier phase knowledge, so during student training the student can use these teacher features as a reference for learning internally-invariant features. One half of the features thus learns internally-invariant features by distillation, while the other half learns mutually-invariant features by cross-domain alignment; a regularization term pushes the two sets of features to be as inconsistent as possible, yielding richer features and a more robust model.
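The three-part objective described above can be sketched in NumPy. This is a minimal illustration under loose assumptions: the function names, the simple mean-matching surrogate for cross-domain alignment, and the squared-error losses are all hypothetical stand-ins, not the authors' actual implementation (which operates inside deep networks).

```python
import numpy as np

def fourier_phase(x):
    """Fourier phase of a 2-D image; in DIFEX the teacher network is
    trained on phase information, which is commonly regarded as carrying
    semantic structure while the amplitude carries style/domain statistics."""
    return np.angle(np.fft.fft2(x))

def difex_style_loss(feat, teacher_feat, feats_by_domain, lam1=1.0, lam2=1.0):
    """Sketch of the three-part objective.

    feat            : (n, d) student features; the first d/2 columns play the
                      role of internally-invariant features, the rest the
                      mutually-invariant ones.
    teacher_feat    : (n, d/2) teacher (Fourier-phase) features to distill.
    feats_by_domain : list of (n_i, d/2) mutually-invariant features, one per domain.
    """
    d = feat.shape[1] // 2
    internal, mutual = feat[:, :d], feat[:, d:]

    # 1) distillation: pull the internal half toward the teacher's phase features
    l_distill = np.mean((internal - teacher_feat) ** 2)

    # 2) alignment: a toy mean-matching surrogate for cross-domain alignment
    means = [f.mean(axis=0) for f in feats_by_domain]
    l_align = np.mean([np.sum((m - means[0]) ** 2) for m in means[1:]])

    # 3) exploration regularizer: encourage the two halves to differ
    #    (maximize their distance, hence the minus sign)
    l_div = -np.mean((internal - mutual) ** 2)

    return l_distill + lam1 * l_align + lam2 * l_div
```

In the paper's setting these terms are weighted and minimized jointly while training the student network; the teacher is frozen after its own training phase.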
1. Domain: a set of data that follows the same distribution.
2. The problem studied by Domain Adaptation: we are given a training set, which may consist of one or more domains, and a testing set whose domain differs from that of the training set. The goal is to use the labeled training data to train a model that also works well, with good performance, on the testing data.
In classical machine learning, when the data distributions of the source domain and the target domain differ, a model trained on the source data usually does not transfer well to the target.
A graph similarity for deep learning
An Unsupervised Information-Theoretic Perceptual Quality Metric
Self-Supervised MultiModal Versatile Networks
Benchmarking Deep Inverse Models over time, and the Neural-Adjoint method
Off-Policy Evaluation and Learning
Online Multitask Learning with Long-Term Memory
Fewer is More: A Deep Graph Metric Learning Perspective Using Fewer Proxies
Adaptive Graph Convolutional Recurrent Network for Traffic Forecasting
On Reward-Free Reinforcement Learning with Linear Function A.
The characteristics of Unsupervised Image-to-Image Translation Networks (UNIT) are as follows:
● Two distinct domains
● Unpaired training data: an image in one domain has no corresponding image in the other domain
● Share the same latent space z
● Domain-invariant feature
This amounts to a VAE + GAN.
The goal is to find what the two domains share.
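The shared-latent-space idea can be illustrated with two toy encoders whose final projection is shared. This is a hypothetical NumPy sketch: the dimensions, weights, and names are made up for illustration and are not UNIT's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only.
D_IN, D_HID, D_Z = 16, 8, 4

# Domain-specific first layers: one per domain.
W1_a = rng.normal(size=(D_IN, D_HID))
W1_b = rng.normal(size=(D_IN, D_HID))
# Shared last layer: both encoders project into the SAME latent space z,
# which is what makes the latent code domain-invariant in the UNIT idea.
W_shared = rng.normal(size=(D_HID, D_Z))

def encode(x, W_domain):
    h = np.tanh(x @ W_domain)  # domain-specific processing
    return h @ W_shared        # shared projection into z

x_a = rng.normal(size=(1, D_IN))  # image from domain A (as a flat vector)
x_b = rng.normal(size=(1, D_IN))  # unpaired image from domain B
z_a, z_b = encode(x_a, W1_a), encode(x_b, W1_b)
# z_a and z_b now live in the same shared latent space.
```

Weight sharing in the last encoder layers (and first decoder layers) is the mechanism by which UNIT enforces a common latent space for unpaired images.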
Error bound in Domain Adaptation
The error bound consists of three terms: the error on the source domain + the distance between the source and target domains + the error of an ideal model (the discrepancy, on the same data, between the ideal classifiers of the source and target domains).
Typical DA methods treat the third term as a constant and mainly optimize the first two (shrinking the distance between source and target), i.e., they learn a domain-invariant feature representation.
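In the standard notation of Ben-David et al., the three-term bound reads (a textbook form reproduced here for reference, not quoted from this paper):

```latex
\epsilon_T(h) \;\le\;
\underbrace{\epsilon_S(h)}_{\text{source error}}
\;+\;
\underbrace{\tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}\!\left(\mathcal{D}_S, \mathcal{D}_T\right)}_{\text{source--target distance}}
\;+\;
\underbrace{\lambda}_{\text{ideal joint error}},
\qquad
\lambda \;=\; \min_{h \in \mathcal{H}} \big[\, \epsilon_S(h) + \epsilon_T(h) \,\big].
```

Treating \(\lambda\) as a constant, DA methods minimize \(\epsilon_S(h)\) and the divergence term, which is exactly the domain-invariant feature learning described above.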
Because of domain shift, an object-detection model that performs well on source-domain data gives unsatisfactory results on the target domain. In this situation, domain-adaptation methods have been combined with object-detection algorithms to improve the cross-domain performance of detectors, and the combination of the two lines of work continues to be optimized and refined.
Shortcomings of previous work
Main contributions of the paper