Computer vision has made unprecedented progress in recent years. As in Natural Language Processing (NLP), one trend seems clear: the size of neural networks will keep growing, and so will the capabilities of these models. However, as the size and complexity of these models continue to expand, adapting them to novel tasks and domains presents significant challenges that differ from those faced in the NLP community.
The goal of this workshop is to explore and discuss ways of dealing with the new reality of ever larger models in computer vision. The sheer parameter counts and training dataset sizes mean that these models often cannot be trained in academia, and some models might not even fit on large GPUs for inference. These developments bring not only new challenges for computer vision researchers and practitioners but also many novel opportunities. In this workshop, we aim to bring together researchers from academia and industry to discuss topics of increasing importance for the vision community.
The workshop, BigMAC: Big Model Adaptation for Computer Vision, will cover topics related to how large pretrained models can be used effectively:
- Prompting methods and techniques for vision models
- New methodologies for fine-tuning pretrained models
- Leveraging multi-modal weak-supervision techniques
- Scaling and fine-tuning with self-supervision
- Robust fine-tuning of large general-purpose pretrained models
- Quantization and efficient inference
This first iteration of the BigMAC event does not have a call for papers; one might be added in future iterations. The workshop is organized as a half-day event with oral talks from invited speakers.