[Collection] Papers, Recipes, Toolkits and Interesting Posts
Last Updated: 2020-12-10
Zhihu: https://www.zhihu.com/people/liuchengwei/posts
Tsinghua University Speech Group Wiki: http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/Weekly_meeting
NeurIPS2020 SAS workshop: https://neurips-sas-2020.github.io/#papers
Machine Translation Reading List: https://github.com/THUNLP-MT/MT-Reading-List
Posts
1. Exploring the Few-Shot Learning Dilemma in NLP [Chinese]
2. Exploring Siamese Neural Networks: Stop Your Gradients! [Chinese]
3. Deeper Encoder + Shallower Decoder = Faster Autoregressive Models [Chinese]
Toolkits
1. PyTorch Lightning Bolts [GitHub]
- Pretrained SOTA Deep Learning models, callbacks and more for research and production with PyTorch Lightning and PyTorch
- Paper: A Framework For Contrastive Self-Supervised Learning And Designing A New Approach
2. S3PRL Speech Toolkit [GitHub]
- Self-supervised pre-training and representation learning (S3PRL) of Mockingjay, TERA, A-ALBERT, APC, and more to come. With easy-to-use standard downstream evaluation scripts including phone classification, speaker recognition, and ASR. (All in Pytorch!)
3. WavAugment [GitHub]
- WavAugment performs data augmentation on audio data. The audio data is represented as pytorch tensors
- Augmentations include: pitch randomization, reverberation, additive noise, time dropout (temporal masking), band reject, clipping
- Data Augmenting Contrastive Learning of Speech Representations in the Time Domain, E. Kharitonov, M. Rivière, G. Synnaeve, L. Wolf, P.-E. Mazaré, M. Douze, E. Dupoux. [arxiv]
4. Sequence-to-Sequence G2P Toolkit [GitHub]
- The tool does Grapheme-to-Phoneme (G2P) conversion using transformer model from tensor2tensor toolkit based on TensorFlow
- Lukasz Kaiser. “Accelerating Deep Learning Research with the Tensor2Tensor Library.” In Google Research Blog, 2017
5. Abkhazia [GitHub]
- Online documentation https://docs.cognitive-ml.fr/abkhazia
- The Abkhazia project makes it easy to obtain simple baselines for supervised ASR (using Kaldi) and ABX tasks (using ABXpy) on the large corpora of speech recordings typically used in speech engineering, linguistics or cognitive science research
Datasets
1. Libri-Adapt: a New Speech Dataset for Unsupervised Domain Adaptation [ICASSP2020] [GitHub]
- 7200 hours of English speech recorded on mobile- and embedded-scale microphones across 72 different domains (6 microphones x 3 accents x 4 environments x the 100-hour LibriSpeech corpus), sampled at 16 kHz
2. MLS: A Large-Scale Multilingual Dataset for Speech Research [INTERSPEECH2020] (Newer arxiv version) [Recipe/Pretrained Models] [OpenSLR]
- Multilingual LibriSpeech (MLS) dataset: derived from LibriVox, 50.5K hours, 8 languages (44.5K hours of English and a total of about 6K hours for other languages)
- Languages covered: English, German, Dutch, French, Spanish, Italian, Portuguese, Polish
End-to-End ASR
1. Listen Attentively, and Spell Once: Whole Sentence Generation via a Non-Autoregressive Architecture for Low-Latency Speech Recognition [INTERSPEECH2020]
- Introduce non-autoregressive ASR system LASO (Listen Attentively, and Spell Once)
- CER 6.4% (SOTA autoregressive Transformer model 6.7%) on AISHELL-1
- Average inference latency of 21 ms, about 1/50 that of the autoregressive Transformer model
2. Independent Language Modeling Architecture for End-To-End ASR [ICASSP2020]
- Introduce independent language modeling subnet to leverage external text data
- Existing method: Replace encoding with an all-zero vector and freeze the encoder
3. Conformer: Convolution-augmented Transformer for Speech Recognition [INTERSPEECH2020] [Paper Reading Slide]
- Combine CNNs and Transformers to model both local and global dependencies in a parameter-efficient way
- LibriSpeech WER of 2.1/4.3 without using LM and 1.9/3.9 with an external LM on test clean/other, 2.7/6.3 with a 10M-parameter small model
4. Exploring Transformers for Large-Scale Speech Recognition [INTERSPEECH2020]
- Depth-scale model initialization to accelerate convergence
- Pre-LayerNorm instead of Post-LayerNorm to accelerate convergence
- Chunk-based Transformer-XL for streaming ASR (low computation + GPU memory-saving)
5. Distilling the Knowledge of BERT for Sequence-to-Sequence ASR [INTERSPEECH2020] [GitHub]
- Use BERT to generate soft labels by masking and predicting target words for the training of seq2seq ASR
- Concatenate multiple utterances together to a fixed size for BERT prediction to make pre-training and distillation consistent and improve WER
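As a concrete illustration of item 5 above, here is a minimal sketch (not the authors' code) of generating BERT soft labels: mask each target position in turn and take the masked-LM distribution as the soft target. It assumes the HuggingFace transformers library; bert-base-uncased is only a placeholder checkpoint.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

# Placeholder checkpoint; the paper distills from a BERT trained on the target-domain text.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def soft_labels(transcript: str, temperature: float = 1.0) -> torch.Tensor:
    """Return (num_words, vocab) soft targets: mask each position and predict it with BERT."""
    input_ids = tokenizer(transcript, return_tensors="pt")["input_ids"][0]
    targets = []
    for pos in range(1, input_ids.size(0) - 1):          # skip [CLS] / [SEP]
        masked = input_ids.clone()
        masked[pos] = tokenizer.mask_token_id            # mask one target word
        with torch.no_grad():
            logits = bert(input_ids=masked.unsqueeze(0)).logits[0, pos]
        targets.append(torch.softmax(logits / temperature, dim=-1))
    return torch.stack(targets)                          # soft targets for the seq2seq ASR loss
```

In the paper, multiple utterances are concatenated to a fixed length before prediction; the per-utterance loop above is only the simplest version.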
Multilingual
A. ASR
1. Massively Multilingual Adversarial Speech Recognition [NAACL-HLT2019]
- Analyze the relative importance of similarity between the target and pre-training languages along the dimensions of phonetics, phonology, language family, geographical location, and orthography
- Investigate 2 additional objectives for hybrid CTC/Attention architecture: phoneme CTC and language-adversarial during pre-training
2. Learning Robust and Multilingual Speech Representations [Findings of EMNLP2020]
3. Language-agnostic Multilingual Modeling [ICASSP2020]
- Propose a language-agnostic multilingual ASR system by transforming all languages to one writing system through a many-to-one transliteration transducer
- Obtain 10% relative WER reduction on 4 Indic languages
4. End-to-End Multilingual Speech Recognition System with Language Supervision Training [IEICETrans.2020]
- Propose a Language Masks Estimation method to constrain the output distribution
5. Towards Language-Universal Mandarin-English Speech Recognition [INTERSPEECH2019]
- Propose to combine two monolingual models to build a bilingual model
6. Multilingual Speech Recognition With A Single End-To-End Model [ICASSP2018]
- Investigate the joint, LID multi-task learning, and language-embedding-conditioned settings on 9 Indian languages
7. Towards Language-Universal End-to-End Speech Recognition [ICASSP2018]
- Explore language-specific/universal output layer
- Propose language-specific gating units in hidden layers
8. Large-Scale End-to-End Multilingual Speech Recognition and Language Identification with Multi-Task Learning [INTERSPEECH2020] [Slide] [Recipe]
- Present LID-42: very-large-scale Transformer-based hybrid CTC/Attention models based on subwords / characters for 42-lingual ASR, average CER: 27.8/27.2
- Language-independent architecture with shared vocabulary including language IDs for joint language identification, LID accuracy: 93.5/94.0
- Relative improvements of 28.1% in WER by transfer learning to low-resource languages
9. Leveraging Language ID in Multilingual End-to-End Speech Recognition [ASRU2019]
10. Investigating End-to-end Speech Recognition for Mandarin-English Code-switching [ICASSP2019]
- Introduce MTL where at each time step, the model predicts both the modeling unit and the language ID
11. Meta Learning for End-to-End Low-Resource Speech Recognition [ICASSP2020]
- Apply model-agnostic meta-learning (MAML) to pre-train a CTC multilingual model and transfer to low-resource languages with language-specific head
12. Language Adaptive Multilingual CTC Speech Recognition
13. Multilingual Speech Recognition with Corpus Relatedness Sampling
14. Adversarial Multilingual Training for Low-Resource Speech Recognition [ICASSP2018]
15. Bytes are All You Need: End-to-End Multilingual Speech Recognition and Synthesis with Bytes [ICASSP2019]
- Propose Audio-to-Byte (A2B) and Byte-to-Audio (B2A) models for multilingual ASR and TTS
- ASR model: LAS, TTS model: Tacotron 2. Input/Output layer: 256 possible byte values.
16. An Investigation of Deep Neural Networks for Multilingual Speech Recognition Training and Adaptation
17. Bootstrap an End-to-End ASR System by Multilingual Training, Transfer Learning, Text-to-Text Mapping and Synthetic Audio [Arxiv2020]
- Demonstrate that post-ASR text-to-text mapping and synthetic TTS data can be effectively combined with approaches such as multilingual training and transfer learning to improve a simulated Italian ASR bootstrapping scenario
18. Adversarial Meta Sampling for Multilingual Low-Resource Speech Recognition [AAAI2021] [GitHub]
- Propose an Adversarial Meta-Sampling (AMS) approach for multilingual meta-learning ASR (MML-ASR) and multilingual transfer learning ASR (MTL-ASR)
B. TTS
1. Phonological Features for 0-shot Multilingual Speech Synthesis [INTERSPEECH2020]
- Utilize binary phonological features
- Mapping tables for phoneme to IPA, as well as an IPA-PF lookup dictionary are available at https://github.com/papercup-open-source/phonological-features
C. Speech Translation
1. Effectively Pretraining a Speech Translation Decoder with Machine Translation Data [EMNLP2020]
- Propose to use an adversarial discriminator to train NMT and ASR systems simultaneously by aligning their encodings in the same latent space, so that both the pre-trained ASR encoder and the NMT decoder can be used to improve AST
- 1.5 BLEU improvements on En-De and En-Fr compared with conventional pretraining methods (ASR encoder only / ASR encoder + NMT decoder pre-trained separately)
D. Analysis
1. Automatically Identifying Language Family from Acoustic Examples in Low Resource Scenarios [Arxiv2020]
- Train a LID model on the Wilderness dataset and analyze the learned embeddings by comparing with classical language family findings (Ethnologue, Glottolog, Wikipedia)
- Show that languages grouped by learned embeddings perform better than distance-based or phoneme-based approaches on zero-shot TTS
Curriculum Learning
1. When Do Curricula Work? [ICLR2021 (submitted)]
- Investigate the implicit curricula resulting from architectural and optimization bias and find that samples are learned in a highly consistent order
- Conduct extensive experiments over thousands of orderings spanning three kinds of learning (curriculum, anti-curriculum, and random-curriculum) and find that the benefit is entirely due to the dynamic training set size rather than the order of examples
- Curriculum (but not anti-curriculum or random ordering) can indeed improve performance with a limited training time budget or in the presence of noisy data
Semi-Supervised Learning
1. Deep Contextualized Acoustic Representations For Semi-Supervised Speech Recognition
2. Semi-supervised Development of ASR Systems for Multilingual Code-switched Speech in Under-resourced Languages
3. Semi-Supervised End-to-End Speech Recognition [INTERSPEECH2018] [GitHub]
4. Self-Training for End-to-End Speech Recognition [ICASSP2020] [Recipe]
- Train a base AM on limited paired data and an LM on large-scale unpaired text data, then use beam search to generate pseudo-labels for unlabelled speech data
- Filtering (a minimal sketch follows this list): a) remove hypotheses with an n-gram repeated more than c times (looping); b) remove hypotheses with an EOS probability below a threshold or with no EOS generated (early stopping); c) use the length-normalized log likelihood as the confidence score for quality ranking: $\text{ConfidenceScore}(\hat{Y}_i)=\frac{\log P_{AM}(\hat{Y}_i|X_i)}{|\hat{Y}_i|}$
- Propose sample ensemble: Combine pseudo samples generated by M models and average their loss during optimization
5. End-to-end ASR: from Supervised to Semi-Supervised Learning with Modern Architectures [SAS Workshop@ICML2020] [Recipe]
- Train an AM + LM on LibriSpeech (960 hours) to generate pseudo-labels for LibriVox (53.8k hours); as shown in the paper, pseudo-labeling only 10k hours already yields promising results
- E2E AMs are implicit LMs, thus with enough unlabeled audio, decoding with an external LM doesn’t improve performance
- On Librispeech: achieve 2.27/4.8 WER on test-clean/other sets without LM, 2.09/4.11 with LM
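As referenced in item 4 above, here is a minimal sketch of the looping and confidence filters. The names are illustrative; hypotheses is assumed to be a list of (tokens, log_prob_am) pairs produced by beam-search decoding of unlabeled audio.

```python
from collections import Counter

def has_looping_ngram(tokens, n=4, c=2):
    """Filter a): reject a hypothesis if any n-gram repeats more than c times."""
    ngrams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return any(count > c for count in ngrams.values())

def confidence(log_prob_am, tokens):
    """Filter c): length-normalized log likelihood, log P_AM(Y|X) / |Y|."""
    return log_prob_am / max(len(tokens), 1)

def filter_pseudo_labels(hypotheses, threshold=-1.0):
    """Keep only hypotheses passing the looping and confidence filters.
    (Filter b), the EOS-probability check, needs per-token scores and is omitted here.)"""
    kept = []
    for tokens, log_prob_am in hypotheses:
        if has_looping_ngram(tokens):
            continue
        if confidence(log_prob_am, tokens) < threshold:
            continue
        kept.append((tokens, log_prob_am))
    return kept
```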
Self-Supervised Learning
1. Unsupervised Pretraining Transfers Well Across Languages [ICASSP2020] [GitHub]
- Investigate CPC for cross-lingual tasks and evaluate the linear separability of the learned phoneme representations (Common Voice / LibriSpeech phoneme classification, ZeroSpeech2017)
- Introduce two modifications to improve CPC: a) replace batch normalization with channel-wise normalization to avoid information leakage and stabilize training; b) replace the linear classifier with a 1-layer Transformer to make the future-prediction target more reasonable
- PER of modified CPC pretrained on LS-360 is comparable to the supervised model pretrained on LS-100
2. wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations [NeurIPS2020] [Paper Reading Slide]
- Self-supervised learning by masking + contrastive loss + quantized representations
- On Librispeech: achieve 1.8/3.3 WER on the clean/other test sets, 4.8/8.2 WER on 10 minutes of labeled data and 53k hours of unlabeled data. All experiments are with external LM
3. Unsupervised Cross-lingual Representation Learning for Speech Recognition [Arxiv2020]
- Investigate self-supervised pre-training (wav2vec 2.0) for multilingual / cross-lingual ASR
- Publish XLSR-53, a large-scale multilingual wav2vec 2.0 pre-trained on 56K-hour combined corpora of Common Voice (38 languages) + BABEL (14 languages) + Multilingual LibriSpeech (MLS) (8 languages)
4. Unsupervised Learning of Disentangled Representations for Speech with Neural Variational Inference Models [MIT PhD Dissertation]
5. A Further Study of Unsupervised Pre-training for Transformer Based Speech Recognition
6. Leveraging Text Data Using Hybrid Transformer-LSTM Based End-to-End ASR in Transfer Learning
7. Self-training and Pre-training are Complementary for Speech Recognition [Arxiv2020]
- Combine self-training (paper 5 in semi-supervised learning section) and unsupervised pre-training (wav2vec 2.0) to achieve new SOTA
- Use pre-trained wav2vec 2.0 for pseudo-labelling, then fine-tune wav2vec 2.0 (CTC) or train randomly-initialized Transformer model (S2S) as final model
- On Librispeech: achieve 1.8/3.3 (CTC) or 1.5/3.1 (S2S) WER on the clean/other test sets, 3.0/5.2 (CTC) or 3.1/5.4 (S2S) WER on 10 minutes of labeled data and 53k hours of unlabeled data. All experiments are with external LM
8. Vector-Quantized Autoregressive Predictive Coding [INTERSPEECH2020] [GitHub]
- Introduce one or more vector quantization layers to the APC model to explicitly control the amount of encoded information
- Probing tasks show that the APC model prefers to retain speaker information over phonetic information when the capacity is limited
- When the phonetic information is present, the learned VQ codes correspond well with English phones
9. Speech-XLNet: Unsupervised Acoustic Model Pretraining for Self-Attention Networks [INTERSPEECH2020]
- Apply XLNet (see item 13 in the NLP section) to ASR tasks (without the segment recurrence mechanism)
10. Mockingjay: Unsupervised Speech Representation Learning with Deep Bidirectional Transformer Encoders [ICASSP2020] [GitHub]
- Apply BERT to speech with the proposed consecutive masking (mask C consecutive frames to 0; see the masking sketch after this list) and evaluate on phoneme classification, sentiment classification, and speaker recognition
11. TERA: Self-Supervised Learning of Transformer Encoder Representation for Speech [TASLP2020 (submitted)] [S3PRL]
- Propose TERA (Transformer Encoder Representations from Alteration)
- Three auxiliary L1 reconstruction objectives (multi-target): time (time masking, as in Mockingjay), channel (frequency masking), and magnitude (additive Gaussian noise) alterations
- Comprehensive analysis on its application to ASR (representation / fine-tune) / phoneme classification / speaker classification
12. Unsupervised Pre-training of Bidirectional Speech Encoders via Masked Reconstruction [ICASSP2020]
- Inspired by BERT, pre-train bidirectional RNNs via a masked reconstruction loss to improve ASR
- Inspired by SpecAugment, mask segments of sufficient width in both time and frequency
13. Improving Transformer-based Speech Recognition Using Unsupervised Pre-training [Arxiv2019]
- Almost the same as paper 9, except that it is evaluated on the HKUST and AISHELL Mandarin ASR datasets
14. A Simple Framework for Contrastive Learning of Visual Representations [ICML2020] [GitHub]
- Propose the SimCLR framework: input image $x$ -> two data augmentations give $x_i$, $x_j$ -> same feature extractor -> $h_i$, $h_j$ -> same two-layer MLP projection head -> $z_i$, $z_j$ -> contrastive loss (positive pairs are $(i, j)$ and $(j, i)$; negative samples are the other $2(N-1)$ augmented examples)
- Important factors: composition of multiple data augmentation operations, larger batch sizes and longer training, the normalized temperature-scaled cross-entropy (NT-Xent) loss (a sketch follows this list), and a nonlinear transformation between the representation and the contrastive loss
15. Speech SimCLR: Combining Contrastive and Reconstruction Objective for Self-supervised Speech Representation Learning [Arxiv2020] [GitHub]
- Data augmentation: random pitch shift, speed perturbation, room reverberation and additive noise to the waveform; time and frequency masking to the spectrogram implemented with WavAugment toolkit
- Final loss = reconstruction loss (TERA) + contrastive loss (SimCLR)
16. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning [Arxiv2020]
- Propose BYOL, which trains an online network to predict a momentum (EMA) target network's representation of another augmented view of the same image, without using negative pairs
17. The Zero Resource Speech Benchmark 2021: Metrics and Baselines for Unsupervised Spoken Language Modeling [NeurIPS2020 SAS]
18. Multi-Format Contrastive Learning of Audio Representations [NeurIPS2020 SAS]
19. Similarity Analysis of Self-Supervised Speech Representations [NeurIPS2020 SAS]
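For item 10 (Mockingjay), a rough sketch of consecutive-frame masking, not the official S3PRL implementation; the span length C and the mask ratio are treated as hyperparameters here.

```python
import torch

def consecutive_mask(features: torch.Tensor, mask_ratio: float = 0.15, C: int = 7):
    """features: (T, D) acoustic frames. Zero out spans of C consecutive frames."""
    T = features.size(0)
    num_spans = max(1, int(T * mask_ratio / C))
    mask = torch.zeros(T, dtype=torch.bool)
    for _ in range(num_spans):
        start = torch.randint(0, max(T - C, 1), (1,)).item()
        mask[start:start + C] = True
    masked = features.clone()
    masked[mask] = 0.0            # mask consecutive frames to zero
    return masked, mask           # the mask selects frames for the reconstruction loss
```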
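For item 14 (SimCLR), a compact sketch of the NT-Xent loss, assuming z1 and z2 are the (N, d) projection outputs of the two augmented views of a batch.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5):
    """NT-Xent: normalized temperature-scaled cross-entropy over a batch of pairs."""
    N = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), cosine geometry
    sim = z @ z.t() / temperature                        # pairwise similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # the positive of example i is its other augmented view at index i +/- N
    targets = torch.cat([torch.arange(N, 2 * N), torch.arange(0, N)])
    return F.cross_entropy(sim, targets)
```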
Transfer Learning / Domain Adaptation
1. Co-Tuning for Transfer Learning [NeurIPS2020] [GitHub]
- Propose Co-Tuning to fully transfer pre-trained models by utilizing pretraining-task-specific parameters
- Loss of Co-Tuning: $\text{CE}(\text{prediction of target head}, y_t) + \lambda \cdot \text{CE}(\text{prediction of pretrained head}, p(y_s|y_t))$
- Propose two category relationship learning approaches to translate target labels into probabilistic source labels (a loss sketch follows this list)
2. Self-training For Few-shot Transfer Across Extreme Task Differences [ICLR2021 (submitted)]
- Propose “Self Training to Adapt Representations To Unseen Problems (STARTUP)”
- Three stages to learn representations: train a teacher model on the base dataset -> construct a softly-labeled set on the target unlabeled set -> train a student model
- Loss for training student model: CE loss on base dataset + KL divergence on softly-labeled set + self-supervised loss on unlabeled set (SimCLR in this paper)
3. Adaptation Algorithms for Speech Recognition: An Overview [IEEE OJSP2021 (submitted)]
4. Unsupervised Domain Adaptation for Speech Recognition via Uncertainty Driven Self-Training [ICASSP2021 (submitted)]
- Propose Dropout-based Uncertainty-driven Self-Training (DUST) by filtering uncertain predictions from pseudo-labelled samples
- Confidence is computed from the edit distance between model outputs without dropout and model outputs with dropout under different random seeds; the maximum (worst-case) distance decides whether a pseudo-label is kept (see the filtering sketch after this list)
5. Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning [ICLR2021 (submitted)]
6. Domain Adaptation Using Class Similarity for Robust Speech Recognition [INTERSPEECH2020]
- Propose a two-stage adaptation method: a) train the source model and compute mean soft labels for every class over source samples; b) use the soft labels as a regularization term when training the target model on target-domain data
- Experiment on a) accent adaptation (Common Voice English) and b) noise adaptation (CHiME-3); the proposed method is more robust and performs even better on the latter task
7. UNKs Everywhere: Adapting Multilingual Language Models to New Scripts [Arxiv2020]
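For item 1 (Co-Tuning), a sketch of the combined objective with illustrative tensor names; it is not the reference implementation.

```python
import torch.nn.functional as F

def co_tuning_loss(target_logits, y_t, source_logits, p_ys_given_yt, lam=1.0):
    """
    target_logits:  (B, C_target) predictions of the new target-task head
    y_t:            (B,) target labels (LongTensor)
    source_logits:  (B, C_source) predictions of the retained pre-trained head
    p_ys_given_yt:  (B, C_source) probabilistic source labels p(y_s | y_t)
    """
    ce_target = F.cross_entropy(target_logits, y_t)
    # soft cross-entropy against the translated source-label distribution
    ce_source = -(p_ys_given_yt * F.log_softmax(source_logits, dim=1)).sum(dim=1).mean()
    return ce_target + lam * ce_source
```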
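For item 4 (DUST), a sketch of the dropout-based uncertainty filter; the plain Levenshtein distance and the threshold tau are illustrative choices.

```python
def edit_distance(a, b):
    """Plain Levenshtein distance between two token sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def is_confident(ref_hyp, dropout_hyps, tau=0.3):
    """ref_hyp: tokens decoded without dropout; dropout_hyps: decodes under dropout seeds.
    Keep the utterance only if the worst normalized distance stays below tau."""
    worst = max(edit_distance(ref_hyp, h) / max(len(ref_hyp), 1) for h in dropout_hyps)
    return worst <= tau
```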
NLP (NMT, PLM)
1. Cross-lingual Language Model Pretraining [NeurIPS2019]
- Investigate the cross-lingual language model (XLM) pretrained with Causal LM (classic LM), Masked LM (unsupervised), and Translation LM (supervised) objectives
2. Unicoder: A Universal Language Encoder by Pre-training with Multiple Cross-lingual Tasks [EMNLP2019]
- Based on the 2 tasks (MLM, TLM) in XLM, further introduce 3 new cross-lingual pretraining tasks: Cross-lingual Word Recovery (attention-based), Cross-lingual Paraphrase Classification (similar to Next Sentence Prediction but predicting whether meanings match), and Cross-lingual MLM (code-switched sentences)
3. Learning Deep Transformer Models for Machine Translation [ACL2019]
- Study the effects of Pre-Norm and Post-Norm in deep Transformers; the gradients of Post-Norm pose a higher risk of vanishing or exploding
- Propose Dynamic Linear Combination of Layers (DLCL) to memorize the features extracted from all preceding layers
- Successfully train the deepest Transformer encoder (30 layers) with a 6-layer decoder for NMT
4. Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond [TACL2019]
5. A Study of Cross-Lingual Ability and Language-specific Information in Multilingual BERT
6. Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model
7. Are All Languages Created Equal in Multilingual BERT?
8. Multilingual Neural Machine Translation with Language Clustering
9. Multilingual Unsupervised NMT using Shared Encoder and Language-Specific Decoders
10. Multilingual NMT with a Language-Independent Attention Bridge https://github.com/Helsinki-NLP/OpenNMT-py/tree/att-brg
11. Cross-lingual Spoken Language Understanding with Regularized Representation Alignment
12. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context [ACL2019] [GitHub]
- Propose Transformer-XL (extra long) to capture longer-term dependency and resolve the context fragmentation problem
- Introduce segment-level recurrence mechanism to reuse previous segments as context
- Introduce relative positional encodings for the proposed recurrence mechanism
13. XLNet: Generalized Autoregressive Pretraining for Language Understanding [NeurIPS2019] [GitHub] [Mask Analysis]
- Introduce a permutation language modeling (PLM) pre-training objective
- Two-stream self-attention + partial prediction (only predict the last tokens in a factorization order)
- Integrate Transformer-XL (relative positional encoding + segment recurrence mechanism)
Multi-Modal
1. Speech to Text Adaptation: Towards an Efficient Cross-Modal Distillation
- Knowledge distillation from BERT to an ASR-pretrained module for Spoken Language Understanding
Deep Learning Architecture
1. Pay Less Attention with Lightweight and Dynamic Convolutions [ICLR2019]
- Depth-wise convolution independently performed on every channel
- Lightweight convolution (depth-wise + weight sharing + GLU)
- Dynamic convolution: compute weight based on input X
2. Dynamic Convolution: Attention over Convolution Kernels [CVPR2020]
- Aggregate multiple convolution kernels dynamically based on their attentions
- Tricks: constrain the attention weights to sum to 1, and keep the attention near-uniform in early epochs (softmax with a large temperature); a toy sketch follows this list
3. Rethinking the Value of Transformer Components [COLING2020]
- In the decoder, self-attention is the least important and the FFN the most important component; higher encoder-attention (closer to the output layer) matters more than lower encoder-attention
- In the encoder, the lower components (self-attention, FFN) are more important
- Two methods to improve Transformer NMT: a) Prune unimportant components and retrain the model. b) Rewind unimportant components and finetune the model
4. Understanding the Difficulty of Training Transformers [EMNLP2020]
- Strong dependencies of Post-Norm amplify fluctuations brought by parameter changes and destabilize the training
- The loose reliance on residual branches in Pre-Norm generally limits the algorithm's potential and often produces models inferior to Post-Norm
- Propose Admin, an adaptive model initialization method to stabilize the early stage of training
5. MADE: Masked Autoencoder for Distribution Estimation [ICML2015]
- Modify the autoencoder to make it autoregressive by using masks to change the hidden-layer connectivity structure (a mask-construction sketch follows this list)
6. Lite Transformer
7. Object-Centric Learning with Slot Attention
8. Deep Learning Recommendation Model for Personalization and Recommendation Systems
9. Multi-Head Attention: Collaborate Instead of Concatenate
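For item 2, a toy 1-D sketch of attention over convolution kernels (the paper uses 2-D convolutions for vision); the module name, shapes, and the global-average-pooled attention input are simplifications.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv1d(nn.Module):
    """Aggregate K convolution kernels with input-dependent attention weights."""
    def __init__(self, channels: int, kernel_size: int = 3, num_kernels: int = 4):
        super().__init__()
        self.kernel_size = kernel_size
        self.kernels = nn.Parameter(
            0.02 * torch.randn(num_kernels, channels, channels, kernel_size))
        self.attn = nn.Linear(channels, num_kernels)

    def forward(self, x: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
        # x: (B, C, T). Softmax makes the attention sum to 1; a large temperature
        # in early epochs keeps it near-uniform, as described above.
        B, C, T = x.shape
        weights = F.softmax(self.attn(x.mean(dim=-1)) / temperature, dim=-1)   # (B, K)
        agg = torch.einsum("bk,koif->boif", weights, self.kernels)             # (B, C, C, ks)
        out = F.conv1d(x.reshape(1, B * C, T),
                       agg.reshape(B * C, C, self.kernel_size),
                       padding=self.kernel_size // 2, groups=B)                # per-example conv
        return out.reshape(B, C, T)
```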
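For item 5 (MADE), a small sketch of the degree-based connectivity masks for a single hidden layer; D is the input dimension and H the hidden width.

```python
import numpy as np

def made_masks(D: int, H: int, seed: int = 0):
    """Return (hidden x input) and (output x hidden) binary masks for one hidden layer."""
    rng = np.random.default_rng(seed)
    m_in = np.arange(1, D + 1)                  # input degrees 1..D
    m_hid = rng.integers(1, D, size=H)          # hidden degrees sampled in [1, D-1]
    # hidden unit k may see input d only if m_hid[k] >= d
    mask_in_to_hid = (m_hid[:, None] >= m_in[None, :]).astype(np.float32)   # (H, D)
    # output d may see hidden unit k only if d > m_hid[k] (strict, to stay autoregressive)
    mask_hid_to_out = (m_in[:, None] > m_hid[None, :]).astype(np.float32)   # (D, H)
    return mask_in_to_hid, mask_hid_to_out      # multiplied elementwise with the weights
```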
Meta-Learning
1. On First-Order Meta-Learning Algorithms
2. Reptile: a Scalable Metalearning Algorithm
3. Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML [ICLR2020]
- Question: is the effectiveness of MAML due to the meta-initialization being primed for rapid learning (large, efficient changes in the representations) or due to feature reuse, with the meta-initialization already containing high quality features? Answer: feature reuse is the dominant factor
- Propose Almost No Inner Loop (ANIL) algorithm, a competitive simplification of MAML by removing the inner loop for all but the (task-specific) head of the underlying neural network
- Propose the No Inner Loop (NIL) algorithm, which classifies the test sample based on cosine similarities of penultimate-layer representations with the k labelled examples (support set); see the sketch after this list
4. Convergence of Meta-Learning with Task-Specific Adaptation over Partial Parameters
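For item 3, one simple way to realize the NIL idea (illustrative, not necessarily the paper's exact weighting): score each class by similarity-weighted votes from the support set.

```python
import torch
import torch.nn.functional as F

def nil_predict(query_feat, support_feats, support_labels, num_classes):
    """query_feat: (d,); support_feats: (k, d); support_labels: (k,) LongTensor."""
    sims = F.cosine_similarity(query_feat.unsqueeze(0), support_feats, dim=1)  # (k,)
    # similarity-weighted votes per class, no gradient-based inner loop
    scores = torch.zeros(num_classes).index_add_(0, support_labels, sims)
    return scores.argmax().item()
```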
Others
1. Scaling Hidden Markov Language Models [EMNLP2020]
2. Revisit Knowledge Distillation: A Teacher-Free Framework
3. PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization
4. AdapterFusion: Non-Destructive Task Composition for Transfer Learning
5. Improving Massively Multilingual Neural Machine Translation and Zero-Shot Translation
6. Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog
7. Multi-Source Domain Adaptation with Mixture of Experts
8. Meta-Learning for Few-Shot NMT Adaptation
9. Simple, Scalable Adaptation for Neural Machine Translation
10. Parameter-Efficient Transfer Learning for NLP
11. Improving Target-side Lexical Transfer in Multilingual Neural Machine Translation
12. Large Memory Layers with Product Keys
13. An Analysis of Massively Multilingual Neural Machine Translation for Low-Resource Languages
14. Large Product Key Memory for Pretrained Language Models
15. Contextual Parameter Generation for Universal Neural Machine Translation
16. Multilingual Speech Recognition with Self-Attention Structured Parameterization
17. Experience Grounds Language [EMNLP2020]
- Propose the notion of a World Scope (WS) as a lens to audit progress in NLP
- Five levels of WS: Corpus, Internet, Perception, Embodiment, Social
18. Monolingual Adapters for Zero-Shot Neural Machine Translation [EMNLP2020]
- Propose language-specific adapter layers, which are more parameter-efficient ($2n$ adapters, $O(n)$) than bilingual ones ($n\cdot(n-1)$ adapters, $O(n^2)$) and enable combining any encoder adapter with any decoder adapter
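A worked example of the parameter-count comparison in item 18, for a hypothetical setup with $n = 10$ languages:

```latex
\begin{align*}
\text{monolingual adapters:} \quad & 2n = 2 \times 10 = 20
  && \text{(one encoder and one decoder adapter per language)} \\
\text{bilingual adapters:}   \quad & n(n-1) = 10 \times 9 = 90
  && \text{(one adapter per translation direction)}
\end{align*}
```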