Modular and Parameter-Efficient Fine-Tuning for NLP Models
University of Zurich, Andreasstrasse 15, Zurich (Room 2-02)

Summary: State-of-the-art language models in NLP perform best when fine-tuned, even on small datasets, but because of their increasing size, fine-tuning and downstream usage have become extremely compute-intensive. Being able …