Mixed Precision Training
Massively Multilingual Neural Machine Translation
Adversarial Representation Learning for Robust Privacy Preservation in Audio
CE-Net: Context Encoder Network for 2D Medical Image Segmentation
Res2Net: A New Multi-scale Backbone Architecture
BitTrain: Sparse Bitmap Compression for Memory-Efficient Training on the Edge
Adversarial NLI: A New Benchmark for Natural Language Understanding
Visual Relationship Detection with Language Priors
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Robust Optimization for Multilingual Translation with Imbalanced Data
K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters
Neural Natural Language Inference Models Enhanced with External Knowledge
Deep Directed Generative Autoencoders
Whitening Sentence Representations for Better Semantics and Faster Retrieval
TextBoxes++: A Single-Shot Oriented Scene Text Detector
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
Deep Fragment Embeddings for Bidirectional Image Sentence Mapping
Ecological Consequences of Trophic Cascades: A Global Perspective
Making Pre-trained Language Models Better Few-shot Learners