XLM-RoBERTa Large Italian ParlSpeech CAP v3
| Entity Passport | |
|---|---|
| Registry ID | hf-model--poltextlab--xlm-roberta-large-italian-parlspeech-cap-v3 |
| License | CC-BY-4.0 |
| Provider | huggingface |
Cite this model
Academic & Research Attribution

```bibtex
@misc{hf_model__poltextlab__xlm_roberta_large_italian_parlspeech_cap_v3,
  author = {poltextlab},
  title = {Xlm Roberta Large Italian Parlspeech Cap V3 Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/poltextlab/xlm-roberta-large-italian-parlspeech-cap-v3}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
```
Quick Commands
```bash
huggingface-cli download poltextlab/xlm-roberta-large-italian-parlspeech-cap-v3
pip install -U transformers
```

Nexus Index V2.0
Index Insight
FNI V2.0 for XLM-RoBERTa Large Italian ParlSpeech CAP v3: Semantic (S:50), Authority (A:0), Popularity (P:29), Recency (R:96), Quality (Q:50).
Technical Deep Dive
xlm-roberta-large-italian-parlspeech-cap-v3
Model description
An xlm-roberta-large model fine-tuned on Italian training data containing parliamentary speeches (oral questions, interpellations, bill debates, other plenary speeches, and urgent questions) labeled with major topic codes from the Comparative Agendas Project.
We follow the master codebook of the Comparative Agendas Project, and all of our models use the same major topic codes.
How to use the model
```python
from transformers import AutoTokenizer, pipeline

# The model uses the tokenizer of its base model, xlm-roberta-large.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

pipe = pipeline(
    model="poltextlab/xlm-roberta-large-italian-parlspeech-cap-v3",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
    token="",  # insert your Hugging Face access token here (the model is gated)
)

text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."

pipe(text)
```
The translation table from the model's output labels to CAP codes is the following:
```python
CAP_NUM_DICT = {
    0: 1,
    1: 2,
    2: 3,
    3: 4,
    4: 5,
    5: 6,
    6: 7,
    7: 8,
    8: 9,
    9: 10,
    10: 12,
    11: 13,
    12: 14,
    13: 15,
    14: 16,
    15: 17,
    16: 18,
    17: 19,
    18: 20,
    19: 21,
    20: 23,
    21: 999,
}
```
We include a 999 label because our models are fine-tuned on training data that contains the label 'None' in addition to the 21 CAP major policy topic codes, indicating that the given text contains no relevant policy content. We use the label 999 for these cases.
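As a sketch of how the table can be applied in practice, assuming the pipeline returns default labels of the form `LABEL_<n>` (if the model's config defines human-readable label names, adapt the parsing accordingly):

```python
# Sketch: translate a pipeline prediction into a CAP major topic code.
# Assumes default "LABEL_<n>" output labels; adjust if the model defines
# custom id2label names.
result = pipe(text)[0]                # e.g. {'label': 'LABEL_10', 'score': 0.93}
model_label = int(result["label"].split("_")[-1])
cap_code = CAP_NUM_DICT[model_label]  # e.g. 10 -> CAP major topic 12
print(cap_code, result["score"])
```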
Gated access
Because the model is gated, you must pass the `token` parameter when loading it. With earlier versions of the Transformers package, you may need to use the `use_auth_token` parameter instead.
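A minimal sketch of both variants, with `YOUR_HF_TOKEN` as a placeholder for your own access token:

```python
from transformers import AutoModelForSequenceClassification

# Recent Transformers releases accept the `token` parameter:
model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-italian-parlspeech-cap-v3",
    token="YOUR_HF_TOKEN",  # placeholder
)

# Older releases used `use_auth_token` instead:
# model = AutoModelForSequenceClassification.from_pretrained(
#     "poltextlab/xlm-roberta-large-italian-parlspeech-cap-v3",
#     use_auth_token="YOUR_HF_TOKEN",
# )
```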
Model performance
The model was evaluated on a held-out test set of 873 examples.
Model accuracy is 0.66.
| label | precision | recall | f1-score | support |
|---|---|---|---|---|
| 0 | 0.51 | 0.61 | 0.56 | 64 |
| 1 | 0.62 | 0.38 | 0.47 | 21 |
| 2 | 0.81 | 0.77 | 0.79 | 61 |
| 3 | 0.77 | 0.86 | 0.81 | 35 |
| 4 | 0.64 | 0.58 | 0.61 | 50 |
| 5 | 0.84 | 0.78 | 0.81 | 49 |
| 6 | 0.54 | 0.45 | 0.49 | 31 |
| 7 | 0.79 | 0.83 | 0.81 | 23 |
| 8 | 0.45 | 0.75 | 0.57 | 20 |
| 9 | 0.78 | 0.83 | 0.80 | 82 |
| 10 | 0.66 | 0.79 | 0.72 | 122 |
| 11 | 0.42 | 0.40 | 0.41 | 20 |
| 12 | 0.67 | 0.53 | 0.59 | 15 |
| 13 | 0.52 | 0.52 | 0.52 | 62 |
| 14 | 0.67 | 0.48 | 0.56 | 25 |
| 15 | 0.67 | 0.50 | 0.57 | 16 |
| 16 | 0.00 | 0.00 | 0.00 | 4 |
| 17 | 0.52 | 0.47 | 0.49 | 36 |
| 18 | 0.55 | 0.56 | 0.56 | 75 |
| 19 | 0.78 | 0.39 | 0.52 | 18 |
| 20 | 0.00 | 0.00 | 0.00 | 5 |
| 21 | 0.95 | 1.00 | 0.98 | 39 |
| accuracy | | | 0.66 | 873 |
| macro avg | 0.60 | 0.57 | 0.57 | 873 |
| weighted avg | 0.66 | 0.66 | 0.65 | 873 |
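The card does not state how the report was generated; tables of this shape are typically produced with scikit-learn's `classification_report`. A minimal sketch, assuming `y_true` and `y_pred` hold the integer labels (0-21) for the test set:

```python
from sklearn.metrics import classification_report

# y_true / y_pred: placeholders for the gold and predicted integer labels.
print(classification_report(y_true, y_pred, digits=2))
```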
Fine-tuning procedure
This model was fine-tuned with the following key hyperparameters:
- Number of Training Epochs: 10
- Batch Size: 8
- Learning Rate: 5e-06
- Early Stopping: enabled with a patience of 2 epochs
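The full training script is not published on the card; as a rough sketch, these hyperparameters map onto the Hugging Face `Trainer` API roughly as follows (dataset preparation is omitted, `train_dataset` and `eval_dataset` are placeholders, and the argument `eval_strategy` was named `evaluation_strategy` in older Transformers releases):

```python
from transformers import (
    AutoModelForSequenceClassification,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-large",
    num_labels=22,  # 21 CAP major topics + the 'None' (999) label
)

args = TrainingArguments(
    output_dir="xlm-roberta-large-italian-parlspeech-cap-v3",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    learning_rate=5e-6,
    eval_strategy="epoch",        # evaluate once per epoch
    save_strategy="epoch",
    load_best_model_at_end=True,  # required for early stopping
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # placeholder: tokenized training split
    eval_dataset=eval_dataset,    # placeholder: tokenized validation split
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()
```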
Inference platform
This model is used by the CAP Babel Machine, an open-source, free natural language processing tool designed to simplify and speed up projects in comparative research.
Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (from any domain or language), sent to poltextlab{at}poltextlab{dot}com or submitted via the CAP Babel Machine.
Reference
Sebลk, M., Mรกtรฉ, ร., Ring, O., Kovรกcs, V., & Lehoczki, R. (2024). Leveraging Open Large Language Models for Multilingual Policy Topic Classification: The Babel Machine Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434
Debugging and issues
This architecture uses the sentencepiece tokenizer. To use the model with Transformers versions earlier than 4.27, you need to install sentencepiece manually (`pip install sentencepiece`).
If you encounter a `RuntimeError` when loading the model with the `from_pretrained()` method, passing `ignore_mismatched_sizes=True` should resolve the issue.
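For example (a minimal sketch; the flag tolerates size mismatches between the checkpoint and the instantiated classification head):

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-italian-parlspeech-cap-v3",
    ignore_mismatched_sizes=True,
    token="YOUR_HF_TOKEN",  # placeholder: the model is gated
)
```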
Incomplete Data
Some information about this model is not available. Use with caution and verify details from the original source before relying on this data.
Limitations & Considerations
- Benchmark scores may vary based on evaluation methodology and hardware configuration.
- VRAM requirements are estimates; actual usage depends on quantization and batch size.
- FNI scores are relative rankings and may change as new models are added.
- License: CC-BY-4.0. Verify licensing terms before commercial use.
AI Summary: Based on Hugging Face metadata. Not a recommendation.
Model Transparency Report
Technical metadata sourced from upstream repositories.
Identity & Source
- id: hf-model--poltextlab--xlm-roberta-large-italian-parlspeech-cap-v3
- slug: poltextlab--xlm-roberta-large-italian-parlspeech-cap-v3
- source: huggingface
- author: poltextlab
- license: CC-BY-4.0
- tags: transformers, pytorch, xlm-roberta, text-classification, it, license:cc-by-4.0, endpoints_compatible, region:us
Technical Specs
- architecture: null
- params billions: null
- context length: null
- pipeline tag: text-classification
Engagement & Metrics
- downloads: 1,395
- stars: 0
- forks: 0
Data indexed from public sources. Updated daily.