🧠
Model

Xlm Roberta Large Ineq Binary V6

by poltextlab (hf-model--poltextlab--xlm-roberta-large-ineq-binary-v6)

Nexus Index: 40.8 (Top 100%)
Downloads (30 days): 435 · License: CC-BY-4.0 · Params: not listed · Context length: not listed
Model Information Summary

Entity Passport
Registry ID: hf-model--poltextlab--xlm-roberta-large-ineq-binary-v6
License: CC-BY-4.0
Provider: huggingface
📜

Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__poltextlab__xlm_roberta_large_ineq_binary_v6,
  author = {poltextlab},
  title = {Xlm Roberta Large Ineq Binary V6 Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/poltextlab/xlm-roberta-large-ineq-binary-v6}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
poltextlab. (2026). Xlm Roberta Large Ineq Binary V6 [Model]. Free2AITools. https://huggingface.co/poltextlab/xlm-roberta-large-ineq-binary-v6

🔬 Technical Deep Dive


Quick Commands

🤗 HF Download
huggingface-cli download poltextlab/xlm-roberta-large-ineq-binary-v6
📦 Install Lib
pip install -U transformers

âš–ī¸ Nexus Index V2.0

40.8
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 19
Recency (R) 96
Quality (Q) 65

💬 Index Insight

FNI V2.0 for Xlm Roberta Large Ineq Binary V6: Semantic (S:50), Authority (A:0), Popularity (P:19), Recency (R:96), Quality (Q:65).

---

xlm-roberta-large-ineq-binary-v6

Model description

An xlm-roberta-large model fine-tuned on English-translated, sentence-segmented parliamentary speeches from the V4 countries (Czechia, Hungary, Poland, and Slovakia). The model uses the following codebook:

Label | Description
(0) Not inequality related | If the text in question does not relate to individual-level economic inequality
(1) Inequality related | If the text in question relates to individual-level economic inequality

How to use the model

python
from transformers import AutoTokenizer, pipeline

# The fine-tuned checkpoint reuses the base xlm-roberta-large tokenizer
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
    model="poltextlab/xlm-roberta-large-ineq-binary-v6",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
    token=""  # your Hugging Face access token, if the repository requires one
)

text = ""  # the sentence to classify
pipe(text)
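The pipeline returns label names from the model config rather than the codebook wording. A minimal sketch of mapping predictions back onto the codebook above; the raw label names ("LABEL_0"/"LABEL_1") are an assumption here, so confirm them against pipe.model.config.id2label on the downloaded checkpoint:

```python
# Map raw pipeline labels onto the codebook above.
# "LABEL_0"/"LABEL_1" are assumed names; verify via pipe.model.config.id2label.
CODEBOOK = {
    "LABEL_0": "Not inequality related",
    "LABEL_1": "Inequality related",
}

def decode_prediction(prediction):
    """Turn one pipeline output dict into a (codebook label, score) pair."""
    return CODEBOOK[prediction["label"]], prediction["score"]

# Mocked output shaped like a transformers text-classification result:
print(decode_prediction({"label": "LABEL_1", "score": 0.97}))
# ('Inequality related', 0.97)
```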

Classification Report

Overall Performance:

  • Accuracy: N/A
  • Macro Avg: Precision: 0.82, Recall: 0.82, F1-score: 0.82
  • Weighted Avg: Precision: 0.82, Recall: 0.82, F1-score: 0.82

Per-Class Metrics:

Label | Precision | Recall | F1-score | Support
(0) Not inequality related | 0.82 | 0.82 | 0.82 | 51
(1) Inequality related | 0.82 | 0.82 | 0.82 | 51
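As a sanity check, the reported averages can be recomputed from the per-class rows; with equal support (51/51) the macro and weighted averages necessarily coincide. A minimal sketch using only the numbers from the table above:

```python
# Per-class metrics copied from the table above
per_class = [
    {"precision": 0.82, "recall": 0.82, "f1": 0.82, "support": 51},  # (0) Not inequality related
    {"precision": 0.82, "recall": 0.82, "f1": 0.82, "support": 51},  # (1) Inequality related
]

def macro_avg(rows, metric):
    # Unweighted mean over classes
    return sum(r[metric] for r in rows) / len(rows)

def weighted_avg(rows, metric):
    # Mean weighted by class support
    total = sum(r["support"] for r in rows)
    return sum(r[metric] * r["support"] for r in rows) / total

print(round(macro_avg(per_class, "f1"), 2))     # 0.82
print(round(weighted_avg(per_class, "f1"), 2))  # 0.82
```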

Inference platform

This model is used by the CAP Babel Machine, a free, open-source natural language processing tool designed to simplify and speed up comparative research projects.

Cooperation

Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the CAP Babel Machine.

Debugging and issues

This architecture uses the SentencePiece tokenizer. If you run the model with a transformers version earlier than 4.27, you need to install the sentencepiece package manually.

âš ī¸ Incomplete Data

Some information about this model is not available. Use with Caution - Verify details from the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠ License: CC-BY-4.0; verify attribution requirements before commercial use.

Social Proof

HuggingFace Hub
435 Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.


đŸ›Ąī¸ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id: hf-model--poltextlab--xlm-roberta-large-ineq-binary-v6
slug: poltextlab--xlm-roberta-large-ineq-binary-v6
source: huggingface
author: poltextlab
license: CC-BY-4.0
tags: transformers, safetensors, xlm-roberta, text-classification, pytorch, en, base_model:facebookai/xlm-roberta-large, base_model:finetune:facebookai/xlm-roberta-large, license:cc-by-4.0, model-index, endpoints_compatible, region:us

âš™ī¸ Technical Specs

architecture
null
params billions
null
context length
null
pipeline tag
text-classification

📊 Engagement & Metrics

downloads: 435
stars: 0
forks: 0

Data indexed from public sources. Updated daily.