2024 Fall Semester
2024.12.17
- 郑金鹏:CLIPCleaner: Cleaning Noisy Labels with CLIP[paper][slides]
2024.12.10
- 郑腾鑫陵:Efficient Test-Time Adaptation of Vision-Language Models & WATT: Weight Average Test-Time Adaptation of CLIP [paper1][paper2][slide]
The first paper uses CLIP as the backbone without tuning CLIP's parameters; it designs positive and negative queues as a cache model, reducing time overhead (a minimal code sketch follows this date's entries). The second paper also uses CLIP as the backbone, adjusts CLIP's parameters based on the text template, and examines the impact of different text templates.
- 卢昕怡:Class Balanced Adaptive Pseudo Labeling for Federated Semi-Supervised Learning [paper][slides]
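To make the positive/negative cache idea from the first paper above concrete, here is a minimal sketch of a training-free feature cache for CLIP test-time adaptation. The entropy-based replacement rule and the names `capacity`, `beta`, and `alpha` are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch of a training-free cache for CLIP test-time adaptation.
# All hyperparameters and the replacement policy are illustrative assumptions.
import math
import torch
import torch.nn.functional as F

class FeatureCache:
    """Keeps the lowest-entropy (most confident) image features per pseudo-class."""
    def __init__(self, num_classes: int, capacity: int = 3):
        self.capacity = capacity
        self.items = {c: [] for c in range(num_classes)}  # class -> [(entropy, feature)]

    def add(self, feat: torch.Tensor, pseudo_label: int, entropy: float):
        queue = self.items[pseudo_label]
        queue.append((entropy, feat))
        queue.sort(key=lambda pair: pair[0])  # most confident entries first
        del queue[self.capacity:]             # evict the least confident overflow

    def logits(self, feat: torch.Tensor, num_classes: int, beta: float = 5.0):
        """Affinity-weighted votes from cached features (zeros while the cache is empty)."""
        out = torch.zeros(num_classes)
        for c, queue in self.items.items():
            for _, cached in queue:
                affinity = float(feat @ cached)           # assumes L2-normalized features
                out[c] += math.exp(beta * (affinity - 1.0))
        return out

def predict(image_feat, text_feats, cache, alpha=2.0):
    """Zero-shot CLIP logits plus a residual read from the cache; CLIP stays frozen."""
    zero_shot = 100.0 * image_feat @ text_feats.T         # usual CLIP logit scaling
    probs = F.softmax(zero_shot, dim=-1)
    entropy = float(-(probs * probs.clamp_min(1e-8).log()).sum())
    cache.add(image_feat, int(probs.argmax()), entropy)   # update cache with this sample
    return zero_shot + alpha * cache.logits(image_feat, text_feats.shape[0])
```

Since the cache only adds an affinity-weighted residual to the zero-shot logits, no CLIP parameter is ever updated, which is where the time savings come from.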
2024.12.3
- 赵世佶:Decorate the Newcomers: Visual Domain Prompt for Continual Test Time Adaptation & Dataset Condensation with Gradient Matching [paper1] [paper2] [slide]
The first paper addresses the continual TTA setting. It constructs a rectangular prompt directly at the image pixel level as the learnable parameters, combines it with the input image, and feeds the result into a frozen model to obtain the output. The authors design two kinds of prompts: one is trained with a teacher-student network and updated via cross-entropy to learn domain knowledge; the other adds a regularization term that restricts updates of domain-sensitive parameters so that domain-generalizable knowledge is preserved (a minimal sketch of the pixel-level prompt idea follows below). The second paper is a classic work on dataset distillation, achieving condensation via curriculum gradient matching.
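Below is a minimal sketch of the pixel-level visual prompt idea: a learnable patch is pasted onto the input of a frozen backbone and is the only thing that trains. The patch size, paste location, and optimizer settings are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of a pixel-level visual domain prompt; all settings are illustrative.
import torch
import torch.nn as nn

class VisualDomainPrompt(nn.Module):
    def __init__(self, patch_size: int = 32, image_size: int = 224):
        super().__init__()
        # Learnable rectangular patch in pixel space, initialized to zero.
        self.prompt = nn.Parameter(torch.zeros(3, patch_size, patch_size))
        self.top = (image_size - patch_size) // 2  # paste location (illustrative)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.clone()  # avoid modifying the caller's tensor
        p = self.prompt
        x[:, :, self.top:self.top + p.shape[1], self.top:self.top + p.shape[2]] += p
        return x

# Usage: freeze the backbone, optimize only the prompt with the adaptation loss.
# backbone = ...; for q in backbone.parameters(): q.requires_grad_(False)
# prompt = VisualDomainPrompt()
# optimizer = torch.optim.SGD(prompt.parameters(), lr=1e-2)
# logits = backbone(prompt(images))
```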
2024.11.26
- 陶子健: Towards Calibrated Multi-label Deep Neural Networks[paper][slides]
- 郑金鹏:GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection[paper][slides]
2024.11.19
- 卢昕怡:Towards Unbiased Training in Federated Open-world Semi-supervised Learning [paper] [slide]
- 郑腾鑫陵:A Versatile Framework for Continual Test-Time Domain Adaptation: Balancing Discriminability and Generalizability [paper][slide]
2024.11.12
- 赵世佶:Self-supervised Learning for Large-scale Item Recommendations & Deep Interest Network for Click-Through Rate Prediction [paper1] [paper2] [slide]
A brief overview of the standard recommender-system pipeline: retrieval -> coarse ranking -> fine ranking -> re-ranking, followed by one classic model each for retrieval and fine ranking. Google extends the two-tower retrieval model with contrastive learning to enlarge the sample pool and tackle long-tail representations; Alibaba designs DIN, which models user behavior sequences with attention (a minimal sketch follows this date's entries).
- 郑宇祥:Learning Equi-angular Representations for Online Continual Learning[paper][slides]
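To illustrate the DIN mechanism mentioned above, here is a minimal sketch of the activation unit that weights each historical behavior by its relevance to the candidate item and sum-pools the result into a user interest vector. The MLP width and the exact input features are illustrative assumptions, not Alibaba's implementation.

```python
# Hedged sketch of DIN-style attention pooling; dimensions and MLP are illustrative.
import torch
import torch.nn as nn

class ActivationUnit(nn.Module):
    """Scores each (behavior, candidate) pair from the pair plus their interactions."""
    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4 * dim, 36), nn.ReLU(), nn.Linear(36, 1))

    def forward(self, behaviors: torch.Tensor, candidate: torch.Tensor) -> torch.Tensor:
        # behaviors: (B, T, D); candidate: (B, D)
        cand = candidate.unsqueeze(1).expand_as(behaviors)
        feats = torch.cat([behaviors, cand, behaviors - cand, behaviors * cand], dim=-1)
        return self.mlp(feats).squeeze(-1)  # (B, T) relevance scores

class DINPooling(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.att = ActivationUnit(dim)

    def forward(self, behaviors, candidate, mask=None):
        w = self.att(behaviors, candidate)  # DIN keeps raw scores (no softmax)
        if mask is not None:
            w = w.masked_fill(~mask, 0.0)   # zero out padded history positions
        return (w.unsqueeze(-1) * behaviors).sum(dim=1)  # (B, D) interest vector
```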
2024.11.6
- 郑金鹏:AnomalyGPT: Detecting Industrial Anomalies Using Large Vision-Language Models[paper][slides]
2024.10.29
- 卢昕怡:Robust Semi-Supervised Learning by Wisely Leveraging Open-Set Data [paper] [slide]
- 郑宇祥:Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss[paper][slides]
2024.10.21
- 赵世佶:EcoTTA: Memory-Efficient Continual Test-time Adaptation via Self-distilled Regularization [paper] [slide]
Presents EcoTTA, a paper on reducing memory consumption during continual TTA so that continual adaptation becomes feasible on edge devices. Building on a line of work including TinyTL and EATA, it partitions the source model's encoder into blocks and inserts a dedicated meta network (one BN layer plus a residual connection with one Conv) between blocks, greatly shrinking the memory occupied by activations. The loss combines classic entropy minimization with a self-distillation regularization term that prevents forgetting (a minimal sketch follows this date's entries).
- 陶子健:Pi-DUAL: Using privileged information to distinguish clean from noisy labels [paper] [slide]
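As a rough rendering of the EcoTTA design described above, here is a minimal sketch of a meta network attached to a frozen encoder block plus the combined adaptation loss. The 1x1 conv, the MSE form of the distillation term, and the weight `lam` are assumptions based on my reading, not the official code.

```python
# Hedged sketch of an EcoTTA-style meta network and loss; details are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaBlock(nn.Module):
    """Trainable BN + Conv residual branch attached to a frozen encoder block's output."""
    def __init__(self, channels: int):
        super().__init__()
        self.branch = nn.Sequential(
            nn.BatchNorm2d(channels),
            nn.Conv2d(channels, channels, kernel_size=1),
        )

    def forward(self, frozen_out: torch.Tensor) -> torch.Tensor:
        return frozen_out + self.branch(frozen_out)  # residual connection

def adaptation_loss(logits, adapted_feats, frozen_feats, lam: float = 0.5):
    """Entropy minimization + self-distillation regularization (illustrative weighting)."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    # Keep each meta block's output close to its frozen block's output to avoid forgetting.
    distill = sum(F.mse_loss(a, f.detach()) for a, f in zip(adapted_feats, frozen_feats))
    return entropy + lam * distill
```

Because only the small meta networks require gradients, the activations of the frozen blocks need not be stored for backprop, which is where the memory savings come from.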
2024.10.14
- 郑腾鑫陵:Continual Test-Time Domain Adaptation [paper] [slide]
2024.9.23
- 郑金鹏:DeiT-LT: Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets [paper] [slide]
- 陶子健:Category-Prompt Refined Feature Learning for Long-Tailed Multi-Label Image Classification [paper] [slide]
2024.9.9
- 卢昕怡: FedCorr: Multi-Stage Federated Learning for Label Noise Correction [paper] [slide]
- 郑腾鑫陵:Continual-MAE: Adaptive Distribution Masked Autoencoders for Continual Test-Time Adaptation [paper] [slide]
2024 Spring Semester
Writing Reference Materials