Modular and Parameter-Efficient Fine-Tuning for NLP Models
University of Zurich, AND Building (Room 2-02), Andreasstrasse 15, Zurich, Switzerland

Summary: State-of-the-art NLP language models perform best when fine-tuned, even on small datasets, but due to their increasing size, fine-tuning and downstream usage have become extremely compute-intensive. Being able to efficiently and effectively fine-tune the largest pre-trained models is thus key to reaping the benefits of the latest advances in NLP. In …
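As a rough illustration of the kind of parameter-efficient technique this line of work covers, below is a minimal bottleneck-adapter sketch in PyTorch (in the style of Houlsby et al., 2019). The class name, dimensions, and placement are illustrative assumptions, not details taken from the talk itself.

```python
# Minimal sketch of a bottleneck adapter: a small trainable module
# inserted into an otherwise frozen pre-trained Transformer layer.
# All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 48):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # project down
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # project back up
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection preserves the pre-trained representation;
        # only the adapter's small number of parameters is updated.
        return hidden_states + self.up(self.act(self.down(hidden_states)))
```

During fine-tuning, the backbone's parameters stay frozen and only the adapter weights receive gradients, so the per-task trainable footprint is a small fraction of the full model.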