GPT-3: Language Models are Few-Shot Learners, by @notTomBrown et al.
— hardmaru (@hardmaru) May 29, 2020
“We train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting.” https://t.co/qhcHPSH22I pic.twitter.com/ng1Dc6aFg3
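The "few-shot setting" mentioned in the abstract means the model is given a handful of demonstrations directly in its prompt and completes the final query with no gradient updates. A minimal sketch of how such a prompt is assembled is below; the translation task and the `build_few_shot_prompt` helper are hypothetical illustrations, not from the paper.

```python
# Hypothetical sketch of few-shot prompting: demonstrations are
# concatenated into the context, followed by the unanswered query.
def build_few_shot_prompt(examples, query):
    """Format (input, output) demonstrations, then the query to complete."""
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {query}\nA:")  # model would continue from here
    return "\n\n".join(lines)

examples = [
    ("Translate 'chat' to English.", "cat"),
    ("Translate 'chien' to English.", "dog"),
]
prompt = build_few_shot_prompt(examples, "Translate 'oiseau' to English.")
print(prompt)
```

The model conditions on this entire string; the demonstrations implicitly specify the task, which is what distinguishes few-shot prompting from fine-tuning.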
from Twitter https://twitter.com/hardmaru
May 28, 2020 at 06:53PM
via IFTTT