Accepted Papers

Spotlight Presentations

  1. BoxeR: Box-Attention for 2D and 3D Transformers. Duy-Kien Nguyen (University of Amsterdam); Jihong Ju (TomTom Global Content B.V.); Olaf Booij (TomTom); Martin R. Oswald (ETH Zurich); Cees Snoek (University of Amsterdam) [paper] [poster] [video] [supplementary]

  2. Depth Estimation with Simplified Transformer. John Yang (NVIDIA); Le An (NVIDIA); Anurag Dixit (NVIDIA); Jinkyu Koo (NVIDIA); Su Inn Park (NVIDIA) [paper] [poster] [video]

  3. GroupViT: Semantic Segmentation Emerges from Text Supervision. Jiarui Xu (University of California San Diego); Shalini De Mello (NVIDIA Research); Sifei Liu (NVIDIA); Wonmin Byeon (NVIDIA Research); Thomas Breuel (NVIDIA); Jan Kautz (NVIDIA); Xiaolong Wang (UCSD) [paper] [poster] [video] [supplementary]

  4. GradViT: Gradient Inversion of Vision Transformers. Ali Hatamizadeh (NVIDIA Corporation); Hongxu Yin (NVIDIA); Holger R Roth (NVIDIA); Wenqi Li (NVIDIA); Jan Kautz (NVIDIA); Daguang Xu (NVIDIA Corporation); Pavlo Molchanov (NVIDIA) [paper] [poster] [video] [supplementary]

  5. Visual Attention Emerges from Recurrent Sparse Reconstruction. Baifeng Shi (UC Berkeley); Yale Song (Microsoft Research); Neel Joshi (Microsoft Research); Trevor Darrell (UC Berkeley); Xin Wang (Microsoft Research) [paper] [poster] [video] [supplementary]

  6. M2F3D: Mask2Former for 3D Instance Segmentation. Jonas Schult (RWTH Aachen University); Alexander Hermans (RWTH Aachen University); Francis Engelmann (ETH AI Center); Siyu Tang (ETH Zurich); Otmar Hilliges (ETH Zurich); Bastian Leibe (RWTH Aachen University) [paper] [poster] [video] [supplementary]

  7. Where are my Neighbors? Exploiting Patches Relations in Self-Supervised Vision Transformer. Guglielmo Camporese (Padova University); Elena Izzo (University of Padua); Lamberto Ballan (University of Padova) [paper] [poster] [video]

  8. NEAT: Neural Attention Fields for End-to-End Autonomous Driving. Kashyap Chitta (MPI-IS and University of Tuebingen); Aditya Prakash (University of Illinois Urbana-Champaign); Andreas Geiger (University of Tuebingen) [paper] [poster] [video] [supplementary]

  9. MC-SSL: Towards Multi-Concept Self-Supervised Learning. Sara Atito (University of Surrey); Muhammad Awais (University of Surrey); Josef Kittler (University of Surrey, UK) [paper] [poster] [video]

  10. Scaling Vision Transformers to Gigapixel Images via Hierarchical Self-Supervised Learning. Richard J Chen (Harvard Medical School); Chengkuan Chen (Columbia University); Yicong Li (Tsinghua-Berkeley Shenzhen Institute, Tsinghua University); Tiffany Chen (Pathology, Brigham and Women's Hospital, Harvard Medical School); Andrew D Trister (Bill & Melinda Gates Foundation); Rahul G. Krishnan (University of Toronto); Faisal Mahmood (Pathology, Brigham and Women's Hospital, Harvard Medical School) [paper] [poster] [video] [supplementary]

  11. Enabling Faster Vision Transformers via Soft Token Pruning. Zhenglun Kong (Northeastern University); Peiyan Dong (Northeastern University); Xiaolong Ma (Northeastern University); Xin Meng (Peking University); Mengshu Sun (Northeastern University); Wei Niu (William & Mary); Xuan Shen (Northeastern University); Bin Ren (William & Mary); Peng Zhang (Tsinghua University); Minghai Qin (Western Digital Research); Hao Tang (ETH Zurich); Yanzhi Wang (Northeastern University) [paper] [poster] [video]

  12. Learned Queries for Efficient Local Attention. Moab Arar (Tel Aviv University); Ariel Shamir (The Interdisciplinary Center); Amit H Bermano (Tel-Aviv University) [paper] [poster] [video] [code]

  13. Adversarial Token Attacks on Vision Transformers. Ameya Joshi (New York University); Gauri Jagatap (New York University); Chinmay Hegde (New York University) [paper] [poster] [video]

  14. Helix4D: Online Semantic Segmentation of LiDAR Sequences. Romain Loiseau (École des ponts ParisTech); Mathieu Aubry (École des ponts ParisTech); Loïc Landrieu (IGN) [paper] [poster] [video] [supplementary]

  15. Is Large-scale Pre-training always Necessary for Vision Transformers? Alaaeldin M El-Nouby (Facebook AI Research); Gautier Izacard (École Normale Supérieure); Hugo Touvron (Facebook AI Research); Ivan Laptev (INRIA Paris); Herve Jegou (Facebook AI Research); Edouard Grave (Facebook AI Research) [paper] [poster] [video] [supplementary]

  16. Self-Supervised Pre-training of Vision Transformers for Dense Prediction Tasks. Jaonary Rabarisoa (CEA); Valentin Belissen (CEA LIST); Florian Chabot (CEA); Quoc-Cuong Pham (CEA) [paper] [poster] [video]

  17. VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training. Zhan Tong (Nanjing University); Yibing Song (Tencent); Jue Wang (Tencent AI Lab); Limin Wang (Nanjing University) [paper] [poster] [code] [video]

Poster Presentations

  1. Differentiable Soft-Masked Attention. Ali Athar (RWTH Aachen); Jonathon Luiten (RWTH Aachen University); Alexander Hermans (RWTH Aachen University); Deva Ramanan (Carnegie Mellon University); Bastian Leibe (RWTH Aachen University) [paper] [poster] [supplementary]

  2. TubeR: Tubelet Transformer for Video Action Detection. Jiaojiao Zhao (University of Amsterdam); Yanyi Zhang (Rutgers University); Xinyu Li (ByteDance); Hao Chen (Amazon); Bing Shuai (Amazon); Mingze Xu (Amazon); Chunhui Liu (Amazon); Kaustav Kundu (Amazon); Yuanjun Xiong (Amazon); Davide Modolo (Amazon); Ivan Marsic (Rutgers University); Cees Snoek (University of Amsterdam); Joseph Tighe (Amazon) [paper] [poster] [supplementary]

  3. DETR++: Taming Your Multi-Scale Detection Transformer. Chi Zhang (University of California, Los Angeles); Lijuan Liu (Google); Xiaoxue Zang (Google); Frederick Liu (Google Inc.); Hao Zhang (Google); Xinying Song (Google); Jindong Chen (Google) [paper] [poster]

  4. PyramidTNT: Improved Transformer-in-Transformer Baselines with Pyramid Architecture. Kai Han (Huawei Noah’s Ark Lab); Jianyuan Guo (Noah’s Ark Lab, Huawei Technologies); Yehui Tang (Peking University); Yunhe Wang (Huawei Technologies) [paper] [poster] [supplementary]

  5. ReMixer: Object-aware Mixing Layer for Vision Transformers. Hyunwoo Kang (KAIST); Sangwoo Mo (KAIST); Jinwoo Shin (KAIST) [paper] [poster] [supplementary]

  6. X-ViT: High Performance Linear Vision Transformer without Softmax. Jeong-Geun Song (Kakao Enterprise); Heung-Chang Lee (Kakao Enterprise) [paper] [poster]

  7. Learning Co-segmentation by Segment Swapping for Retrieval and Discovery. Xi Shen (École des Ponts ParisTech); Alexei A Efros (UC Berkeley); Armand Joulin (Facebook AI Research); Mathieu Aubry (École des Ponts ParisTech) [paper] [poster] [supplementary]

  8. Cerberus Transformer: Joint Semantic, Affordance and Attribute Parsing. Xiaoxue Chen (Tsinghua University); Hao Zhao (Intel Labs China) [paper] [poster] [supplementary]

  9. Adapting Multi-Input Multi-Output schemes to Vision Transformers. Rémy Sun (Institut des systèmes intelligents et robotiques); Clément Masson (Thales Land and Air Systems); Nicolas Thome (CNAM, Paris); Matthieu Cord (Sorbonne University) [paper] [poster]

  10. Towards Weakly-Supervised Text Spotting using a Multi-Task Transformer. Yair Kittenplon (Tel-Aviv University); Inbal Lavi (Amazon); Sharon Fogel (Tel Aviv University); Yarin Bar (Amazon); R. Manmatha (Amazon); Pietro Perona (Amazon Web Services (AWS))

  11. DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion. Arthur Douillard (Heuritech / Sorbonne University); Alexandre Rame (Heuritech); Guillaume Couairon (Facebook AI Research); Matthieu Cord (Sorbonne University) [paper] [poster] [supplementary]

  12. Multi-Scale Hybrid Vision Transformer and Sinkhorn Tokenizer for Sewer Defect Classification. Joakim Bruslund Haurum (Aalborg University); Meysam Madadi (CVC, UAB); Andreas Møgelmose (Aalborg University); Sergio Escalera (Computer Vision Center (UAB) & University of Barcelona); Thomas B. Moeslund (Aalborg University) [paper] [poster]

  13. TubeFormer-DeepLab: Video Mask Transformer. Dahun Kim (KAIST); Jun Xie (Google); Huiyu Wang (JHU); Siyuan Qiao (Google); Qihang Yu (Johns Hopkins University); Hong-Seok Kim (Google); Hartwig Adam (Google); Liang-Chieh Chen (Google Inc.)

  14. Dynamic Query Selection for Fast Visual Perceiver. Corentin Dancette (LIP6); Matthieu Cord (Sorbonne Université, Valeo) [paper] [poster]

  15. Surface Analysis with Vision Transformers. Simon Dahan (King's College London); Logan ZJ Williams (King's College London); Abdulah Fawaz (King's College London); Daniel Rueckert (Imperial College London); Emma C Robinson (King's College) [paper] [poster] [supplementary]

  16. MTL-TransMODS: Cascaded Multi-Task Learning for Moving Object Detection and Segmentation with Unified Transformers. Eslam Bakr (Valeo); Ahmad El Sallab (AITAR) [paper] [poster] [supplementary]

  17. A Dual-Attentive Approach to Style-Based Image Captioning Using a CNN-Transformer Model. Azel O Daniel (University of the West Indies); Phaedra S Mohammed (The University of the West Indies) [paper] [poster]

  18. Space-Time Video Super-Resolution Using Deformable Attention Network. Hai Wang (Tsinghua University); Xiaoyu Xiang (Meta Platforms Inc.); Yapeng Tian (University of Rochester); Wenming Yang (Tsinghua University); Qingmin Liao (Tsinghua University) [paper] [poster] [supplementary]

  19. Towards Self-Supervised Pre-Training of 3DETR for Label-Efficient 3D Object Detection. Rishabh Jain (University of Freiburg); Narunas Vaskevicius (Robert Bosch GmbH); Thomas Brox (University of Freiburg) [paper] [poster]

  20. CLIP4IDC: CLIP for Image Difference Captioning. Zixin Guo (Aalto university); Tzu-Jui Wang (Aalto University); Jorma Laaksonen (Aalto University) [paper] [poster]

  21. Simpler is Better: off-the-shelf Continual Learning through Pretrained Backbones. Francesco Pelosin (Ca' Foscari University of Venice) [paper] [poster] [code]

  22. VIMPAC: Video Pre-Training via Masked Token Prediction and Contrastive Learning. Hao Tan (UNC, Chapel Hill); Jie Lei (UNC Chapel Hill); Mohit Bansal (University of North Carolina at Chapel Hill); Thomas Wolf (Hugging Face) [paper] [poster]

  23. A comparative study between vision transformers and CNNs in digital pathology. Luca Deininger (Roche); Bernhard Stimpel (Roche); Anil Yuce (Roche); Samaneh Abbasi Sureshjani (Roche); Simon Till Schönenberger (ASUS); Paolo Ocampo (Genentech); Konstanty Korski (Roche); Fabien Gaire (Roche) [paper] [poster]

  24. Recurrent Transformer Variational Autoencoders for Multi-Action Motion Synthesis. Rania Briq (University of Bonn); Chuhang Zou (Amazon); Leonid Pishchulin (Amazon Go); Christopher Broaddus (Amazon); Jurgen Gall (University of Bonn) [paper] [poster]

  25. Scaling Novel Object Detection with Weakly Supervised Detection Transformers. Tyler M LaBonte (Georgia Institute of Technology); Yale Song (Microsoft Research); Xin Wang (Microsoft Research); Vibhav Vineet (Microsoft); Neel Joshi (Microsoft Research) [paper] [poster]

  26. Pre-training image-language transformers for open-vocabulary tasks. AJ Piergiovanni (Google); Weicheng Kuo (Google); Anelia Angelova (Google) [paper] [poster]

  27. Efficient Adaptive Image-Language Learning for Visual Question Answering. AJ Piergiovanni (Google); Weicheng Kuo (Google); Anelia Angelova (Google) [paper] [poster]

  28. PatchRot: A Self-Supervised Technique for Training Vision Transformers. Sachin Chhabra (Arizona State University); Prabal Bijoy Dutta (Arizona State University); Hemanth Venkateswara (Arizona State University); Baoxin Li (Arizona State University) [paper] [poster]

  29. Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4. Arturo Deza (MIT); William Berrios (MIT) [paper] [poster] [code]

  30. Fair Comparison between Efficient Attentions. Jiuk Hong (Kyungpook National University); Chaehyeon Lee (Kyungpook National University); Soyoun Bang (Kyungpook National University); Heechul Jung (Kyungpook National University) [paper] [poster] [code]