New research from Meta AI reduces the latency of existing Vision Transformer models with no additional training. ToMe (Token Merging) combines similar tokens, reducing computation without losing information. The result is a 2-3x speedup for state-of-the-art models with minimal accuracy loss. Read more ⬇️
— Meta AI (@MetaAI) Feb 13, 2023
from Twitter https://twitter.com/MetaAI
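The core idea described in the tweet, merging the most similar token pairs and averaging them, can be sketched as follows. This is a simplified illustration, not Meta AI's implementation: the function name `merge_tokens`, the alternating split into two sets, and the plain (unweighted) averaging are assumptions for the sketch; the published ToMe method uses a bipartite soft matching inside each transformer block and tracks merged-token sizes.

```python
import numpy as np

def merge_tokens(tokens: np.ndarray, r: int) -> np.ndarray:
    """Merge the r most similar token pairs by averaging.

    tokens: (N, d) array of token embeddings; returns (N - r, d).
    A rough sketch of token merging, not the exact ToMe algorithm.
    """
    # Split tokens into two alternating sets A and B.
    a, b = tokens[0::2], tokens[1::2].copy()
    # Cosine similarity between every A token and every B token.
    an = a / np.linalg.norm(a, axis=1, keepdims=True)
    bn = b / np.linalg.norm(b, axis=1, keepdims=True)
    sim = an @ bn.T                # (|A|, |B|) similarity matrix
    best = sim.max(axis=1)         # best match score for each A token
    dst = sim.argmax(axis=1)       # index of that best match in B
    # Merge the r A tokens with the highest match scores into B,
    # keep the rest of A unchanged.
    order = np.argsort(-best)
    merged_idx, keep_idx = order[:r], order[r:]
    for i in merged_idx:
        b[dst[i]] = (b[dst[i]] + a[i]) / 2  # average the merged pair
    return np.concatenate([a[keep_idx], b], axis=0)
```

Because r tokens are removed per call, applying this at every transformer block shrinks the sequence progressively, which is where the claimed speedup comes from: attention cost scales with the square of the token count.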