I’m a second-year PhD student at UKP at the Technical University of Darmstadt. My research focuses on multilingual, multi-modal, multi-task machine learning. Recently I have been working on adapters, which avoid fine-tuning all parameters of a pretrained model: instead, we introduce a small number of task-specific parameters while keeping the underlying pretrained language model fixed. In this context we have recently published two papers. In AdapterFusion we propose a new approach to transfer learning that combines knowledge from multiple tasks in a non-destructive manner using adapters. In MAD-X we propose a new adapter-based framework for adapting multilingual models to low-resource languages and to languages not covered in their training data.
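To give a flavor of the idea, here is a minimal sketch of a bottleneck adapter layer in PyTorch. It is an illustrative toy, not the published implementation: the class name, sizes, and the choice of ReLU are my own assumptions; the key points are the small down/up projections and the residual connection, with the surrounding pretrained model kept frozen.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Illustrative bottleneck adapter: down-project, nonlinearity,
    up-project, plus a residual connection. Only these few parameters
    are trained; the pretrained model around it stays frozen."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.activation = nn.ReLU()  # assumed nonlinearity for this sketch
        self.up = nn.Linear(bottleneck_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual add keeps the pretrained representation intact,
        # so an untrained adapter only mildly perturbs the model.
        return hidden_states + self.up(self.activation(self.down(hidden_states)))

adapter = BottleneckAdapter(hidden_size=768, bottleneck_size=64)
x = torch.randn(2, 5, 768)          # (batch, sequence, hidden)
out = adapter(x)
print(out.shape)                    # same shape as the input
num_params = sum(p.numel() for p in adapter.parameters())
print(num_params)                   # far fewer than the full model's parameters
```

In practice one such module is inserted into each transformer layer, the pretrained weights are frozen (`requires_grad_(False)`), and only the adapter parameters are updated per task.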