@misc{poltextlab_xlm_roberta_large_judiciary_cap_v3,
  author = {poltextlab},
  title = {xlm-roberta-large-judiciary-cap-v3},
  year = {2026},
  howpublished = {\url{https://huggingface.co/poltextlab/xlm-roberta-large-judiciary-cap-v3}}
}

APA Style

poltextlab. (2026). xlm-roberta-large-judiciary-cap-v3 [Model]. Hugging Face. https://huggingface.co/poltextlab/xlm-roberta-large-judiciary-cap-v3
An xlm-roberta-large model fine-tuned on multilingual training data containing judiciary-domain texts labelled with major topic codes from the Comparative Agendas Project (CAP).
We follow the master codebook of the Comparative Agendas Project, and all of our models use the same major topic codes.
How to use the model

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
    model="poltextlab/xlm-roberta-large-judiciary-cap-v3",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
    token="",  # the model is gated; supply your Hugging Face access token here
)

text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```
The translation table from the model results to CAP codes is the following:

We have included a 999 label because our models are fine-tuned on training data that, in addition to the 21 CAP major policy topic codes, contains the label 'None', indicating that the given text has no relevant policy content. We use the label 999 for these cases.
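As a post-processing sketch, the 'None' label can be translated to the 999 code with a small helper. Note the label strings assumed here ('None' and stringified CAP topic numbers) are an illustration of the model's output format, not confirmed by this card; check the model's `id2label` config for the actual labels.

```python
def to_cap_code(label: str) -> int:
    """Map a predicted label string to a CAP major topic code.

    Assumes labels are either 'None' (no relevant policy content) or a
    numeric CAP topic code as a string -- this format is an assumption.
    """
    if label == "None":
        return 999  # the card's code for text with no policy content
    return int(label)
```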
Gated access
Because access to the model is gated, you must pass the token parameter when loading it. With earlier versions of the Transformers package, you may need to use the use_auth_token parameter instead.
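A hedged sketch of supporting both parameter names, keyed on the installed Transformers version. The 4.34 cutover used below is an assumption for illustration; check your installed release to see which keyword it accepts.

```python
def auth_kwargs(transformers_version: str, hf_token: str) -> dict:
    """Return the authentication kwarg for a given transformers version.

    Newer releases accept `token`; older ones only `use_auth_token`.
    The exact cutover version (4.34 here) is an assumption.
    """
    major, minor = (int(part) for part in transformers_version.split(".")[:2])
    if (major, minor) >= (4, 34):
        return {"token": hf_token}
    return {"use_auth_token": hf_token}

# Usage: pipeline(..., **auth_kwargs(transformers.__version__, "<your_token>"))
```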
Model performance
The model was evaluated on a test set of 1833 examples (10% of the available data).
Model accuracy is 0.77.
| label | precision | recall | f1-score | support |
|---|---|---|---|---|
| 0 | 0.39 | 0.26 | 0.32 | 34 |
| 1 | 0.82 | 0.80 | 0.81 | 296 |
| 2 | 0.64 | 0.62 | 0.63 | 34 |
| 3 | 0.70 | 0.78 | 0.74 | 9 |
| 4 | 0.87 | 0.77 | 0.82 | 171 |
| 5 | 0.48 | 0.41 | 0.44 | 29 |
| 6 | 0.68 | 0.65 | 0.67 | 20 |
| 7 | 0.79 | 0.88 | 0.83 | 56 |
| 8 | 0.65 | 0.79 | 0.71 | 33 |
| 9 | 0.68 | 0.80 | 0.74 | 81 |
| 10 | 0.88 | 0.83 | 0.85 | 489 |
| 11 | 0.71 | 0.79 | 0.75 | 28 |
| 12 | 0.67 | 0.89 | 0.76 | 9 |
| 13 | 0.77 | 0.82 | 0.79 | 251 |
| 14 | 0.59 | 0.86 | 0.70 | 37 |
| 15 | 0.65 | 0.62 | 0.64 | 24 |
| 16 | 0.64 | 0.43 | 0.51 | 21 |
| 17 | 1.00 | 0.14 | 0.25 | 7 |
| 18 | 0.59 | 0.67 | 0.63 | 139 |
| 19 | 0.77 | 0.86 | 0.81 | 63 |
| 20 | 0.00 | 0.00 | 0.00 | 2 |
| 21 | 0.00 | 0.00 | 0.00 | 0 |
| macro avg | 0.64 | 0.62 | 0.61 | 1833 |
| weighted avg | 0.78 | 0.77 | 0.77 | 1833 |
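For reference, the macro average is the unweighted mean of the per-class scores, while the weighted average weights each class by its support. That is why the weighted F1 (0.77) exceeds the macro F1 (0.61) here: the largest classes (e.g. labels 10 and 1) score well. A minimal illustration with made-up numbers, not values from this model:

```python
def macro_avg(scores):
    """Unweighted mean over classes."""
    return sum(scores) / len(scores)

def weighted_avg(scores, supports):
    """Mean over classes, weighted by support (number of true examples)."""
    total = sum(supports)
    return sum(s * n for s, n in zip(scores, supports)) / total

# Two toy classes: one large and well classified, one tiny and poorly classified.
f1 = [0.90, 0.10]
support = [99, 1]
macro_avg(f1)              # close to 0.5
weighted_avg(f1, support)  # close to 0.89, dominated by the large class
```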
Inference platform
This model is used by the CAP Babel Machine, an open-source and free natural language processing tool designed to simplify and speed up comparative research projects.
Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain or language), sent to poltextlab{at}poltextlab{dot}com or submitted via the CAP Babel Machine.
Debugging and issues
This architecture uses the sentencepiece tokenizer. To run the model with Transformers versions earlier than 4.27, you need to install sentencepiece manually.
If you encounter a RuntimeError when loading the model with the from_pretrained() method, passing ignore_mismatched_sizes=True should resolve the issue.