
DeepMind, Microsoft, Allen AI & UW Researchers Convert Pretrained Transformers into RNNs, Lowering Memory Cost While Retaining High Accuracy

Read more at syncedreview.com

A research team from the University of Washington, Microsoft, DeepMind and the Allen Institute for AI has developed a method for converting pretrained transformers into efficient RNNs. The Transformer-to-RNN (T2R) approach swaps the transformer's softmax attention for a linear-complexity recurrent alternative and then finetunes the model, so text can be generated with a fixed-size recurrent state instead of an attention cache that grows with sequence length. This speeds up generation and reduces memory cost while retaining high accuracy.
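The article does not include code, but the general idea of rolling attention out as a recurrence can be sketched briefly. The snippet below is a minimal illustration, not the paper's implementation: the ReLU feature map is a placeholder (T2R learns a small MLP feature map during finetuning), and the function names are invented for this example. It shows how kernelized (linear) attention can be computed token by token with a constant-size state, which is why generation memory stops growing with sequence length.

```python
import numpy as np

def feature_map(x):
    # Placeholder elementwise ReLU feature map; T2R instead learns a small
    # MLP feature map during finetuning (this choice is an assumption here).
    return np.maximum(x, 0.0)

def rnn_style_attention(queries, keys, values):
    """Causal linear attention computed step by step with constant-size state.

    queries, keys: (seq_len, d_k); values: (seq_len, d_v).
    Rather than caching all past keys and values (as softmax attention does),
    we keep two running sums, so memory does not grow with sequence length.
    """
    d_k, d_v = keys.shape[1], values.shape[1]
    S = np.zeros((d_k, d_v))   # running sum of phi(k) v^T
    z = np.zeros(d_k)          # running sum of phi(k)
    outputs = []
    for q, k, v in zip(queries, keys, values):
        phi_k = feature_map(k)
        S += np.outer(phi_k, v)
        z += phi_k
        phi_q = feature_map(q)
        out = phi_q @ S / (phi_q @ z + 1e-6)  # normalized attention output
        outputs.append(out)
    return np.stack(outputs)

# Example: a 10-token sequence with 16-dim queries/keys and 32-dim values.
rng = np.random.default_rng(0)
q = rng.standard_normal((10, 16))
k = rng.standard_normal((10, 16))
v = rng.standard_normal((10, 32))
print(rnn_style_attention(q, k, v).shape)  # (10, 32)
```

Softmax attention, by contrast, must keep every past key and value in memory during generation, so its per-step cost grows with the generated length.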