twitter-roberta-base-sentiment-latest
by cardiffnlp
---
language: en
widget:
- text: "Covid cases are increasing fast!"
datasets:
- tweet_eval
license: cc-by-4.0
---
About
This is a RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021, and fine-tuned for sentiment analysis with the TweetEval benchmark. The original Twitter-based RoBERTa model can be found here and the original reference paper is TweetEval. This model is suitable for English.
- Reference Paper: TimeLMs paper.
- Git Repo: TimeLMs official repository.
Labels: 0 -> Negative, 1 -> Neutral, 2 -> Positive
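For context, a minimal usage sketch (not part of the original card) is shown below. It assumes the transformers library with a PyTorch or TensorFlow backend, reuses the widget text from the metadata above, and masks user handles and URLs, the convention commonly used with the TimeLMs Twitter models.

```python
# Minimal sketch: score a tweet with the Hugging Face sentiment-analysis pipeline.
from transformers import pipeline

model_id = "cardiffnlp/twitter-roberta-base-sentiment-latest"
sentiment = pipeline("sentiment-analysis", model=model_id, tokenizer=model_id)

def preprocess(text: str) -> str:
    """Mask user handles and links before scoring (TimeLMs convention)."""
    tokens = []
    for t in text.split(" "):
        t = "@user" if t.startswith("@") and len(t) > 1 else t
        t = "http" if t.startswith("http") else t
        tokens.append(t)
    return " ".join(tokens)

print(sentiment(preprocess("Covid cases are increasing fast!")))
# e.g. [{'label': 'negative', 'score': ...}]
```

The pipeline returns the highest-scoring class from the label mapping above; in recent transformers versions, passing top_k=None returns scores for all three classes.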
Limitations & Considerations
- Benchmark scores may vary based on evaluation methodology and hardware configuration.
- VRAM requirements are estimates; actual usage depends on quantization and batch size.
- FNI scores are relative rankings and may change as new models are added.
- Data source: https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest (fetched 2025-12-19, adapter version 3.2.0)