Dataset

Mt5 Small Prompted Germanquad 1

by philschmid
Nexus Index
25.6 Top 100%
S: Semantic 50
A: Authority 0
P: Popularity 0
R: Recency 100
Q: Quality 23
Tech Context
Vital Performance
0 DL / 30D
0.0%
Data Integrity 25.6 FNI Score
Size: —
Rows: —
Format: Parquet
Tokens: —
Dataset Information Summary
Entity Passport
Registry ID hf-model--philschmid--mt5-small-prompted-germanquad-1
Provider huggingface

Cite this dataset

Academic & Research Attribution

BibTeX
@misc{hf_model__philschmid__mt5_small_prompted_germanquad_1,
  author = {philschmid},
  title = {Mt5 Small Prompted Germanquad 1 Dataset},
  year = {2026},
  howpublished = {\url{https://free2aitools.com/dataset/hf-model--philschmid--mt5-small-prompted-germanquad-1}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
philschmid. (2026). Mt5 Small Prompted Germanquad 1 [Dataset]. Free2AITools. https://free2aitools.com/dataset/hf-model--philschmid--mt5-small-prompted-germanquad-1

🔬 Technical Deep Dive

Full Specifications [+]

⚖️ Nexus Index V2.0


💬 Index Insight

FNI V2.0 for Mt5 Small Prompted Germanquad 1: Semantic (S:50), Authority (A:0), Popularity (P:0), Recency (R:100), Quality (Q:23).

Free2AITools Nexus Index

Verification Authority

Unbiased Data Node Refresh: VFS Live

đŸ‘ī¸ Data Preview


Row-level preview not available for this dataset.

Schema structure is shown in the Field Logic panel when available.

🧬 Field Logic


Schema not yet indexed for this dataset.

Dataset Specification

mt5-small-prompted-germanquad-1

This model is a fine-tuned version of google/mt5-small on the philschmid/prompted-germanquad dataset, a prompted dataset built with the BigScience PromptSource library. The dataset is a copy of GermanQuAD with a SQuAD-style prompt template applied and translated to German.
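The prompting step can be sketched as follows. The exact template wording is an assumption here (real PromptSource templates are Jinja2 files in the bigscience-workshop/promptsource repository); the sketch only shows how a GermanQuAD-style QA example is turned into a (prompt, target) text pair for seq2seq fine-tuning:

```python
def apply_template(example: dict) -> tuple[str, str]:
    """Turn a QA example into a (prompt, target) pair for seq2seq training.
    The German instruction text is illustrative, not the actual template."""
    prompt = (
        "Beantworte die folgende Frage anhand des Kontexts.\n"
        f"Kontext: {example['context']}\n"
        f"Frage: {example['question']}"
    )
    # GermanQuAD stores answers SQuAD-style: a dict with a list of answer texts
    target = example["answers"]["text"][0]
    return prompt, target

example = {
    "context": "Berlin ist die Hauptstadt von Deutschland.",
    "question": "Was ist die Hauptstadt von Deutschland?",
    "answers": {"text": ["Berlin"]},
}
prompt, target = apply_template(example)
```

The prompt becomes the encoder input and the target the decoder output, which is what lets an encoder-decoder model like mT5 be trained on the task T0-style.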

This is a first test of whether mT5 models can be fine-tuned to solve tasks similar to BigScience's T0, but for the German language.

It achieves the following results on the evaluation set:

  • Loss: 1.6835
  • Rouge1: 27.7309
  • Rouge2: 18.7311
  • RougeL: 27.4704
  • RougeLsum: 27.4818
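For intuition on what the Rouge1 number measures, here is a minimal sketch of ROUGE-1 as unigram-overlap F1 on whitespace tokens. The reported scores come from a proper ROUGE implementation (with tokenization and stemming rules this sketch omits), so treat it as illustration only:

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Simplified ROUGE-1: F1 over unigram overlap of whitespace tokens."""
    pred, ref = prediction.split(), reference.split()
    # Clipped overlap: each reference token counts at most as often as it appears
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge1_f1("die Hauptstadt ist Berlin", "Berlin")` gives 0.4 (precision 1/4, recall 1/1). ROUGE-2 and ROUGE-L follow the same precision/recall/F1 pattern over bigrams and longest common subsequences, respectively.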

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5.6e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 7
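The `linear` scheduler with 500 warmup steps can be sketched in plain Python. This assumes the usual behavior of a linear warmup/decay schedule (ramp to the base rate over the warmup steps, then decay linearly to zero at the final step); the total step count of 122472 is taken from the results table below (7 epochs × 17496 steps):

```python
BASE_LR = 5.6e-05       # learning_rate from the hyperparameters
WARMUP_STEPS = 500      # lr_scheduler_warmup_steps
TOTAL_STEPS = 122472    # 7 epochs x 17496 steps/epoch, per the results table

def linear_schedule_lr(step: int) -> float:
    """Learning rate at a given optimizer step: linear warmup to BASE_LR,
    then linear decay to 0 at TOTAL_STEPS."""
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS
    remaining = TOTAL_STEPS - step
    return BASE_LR * max(0.0, remaining / (TOTAL_STEPS - WARMUP_STEPS))
```

So the rate is 0 at step 0, peaks at 5.6e-05 at step 500, and reaches 0 again at the last step of epoch 7.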

Training results

Training Loss   Epoch   Step     Validation Loss   Rouge1    Rouge2    RougeL    RougeLsum
3.3795          1.0     17496    2.0693            15.8652    9.2569   15.6237   15.6142
2.3582          2.0     34992    1.9057            21.9348   14.0057   21.6769   21.6825
2.1809          3.0     52488    1.8143            24.3401   16.0354   24.0862   24.0914
2.0721          4.0     69984    1.7563            25.8672   17.2442   25.5854   25.6051
2.0004          5.0     87480    1.7152            27.0275   18.0548   26.7561   26.7685
1.9531          6.0     104976   1.6939            27.4702   18.5156   27.2027   27.2107
1.9218          7.0     122472   1.6835            27.7309   18.7311   27.4704   27.4818
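The step counts in the table are consistent with the hyperparameters above. Assuming one optimizer step per batch (no gradient accumulation is listed), a back-of-envelope check recovers the approximate training-set size:

```python
steps_per_epoch = 17496   # step delta per epoch in the results table
train_batch_size = 8      # from the training hyperparameters
num_epochs = 7

# With one optimizer step per batch, the prompted training set holds
# roughly steps_per_epoch * batch_size examples:
approx_train_examples = steps_per_epoch * train_batch_size  # 139968
total_steps = steps_per_epoch * num_epochs                  # 122472
```

That total of 122472 matches the final step in the table, so the per-epoch step counts and the batch size line up.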

Framework versions

  • Transformers 4.14.1
  • Pytorch 1.10.1+cu102
  • Datasets 1.16.1
  • Tokenizers 0.10.3
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

🛡️ Dataset Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id
hf-model--philschmid--mt5-small-prompted-germanquad-1
slug
philschmid--mt5-small-prompted-germanquad-1
source
huggingface
author
philschmid
license
tags

⚙️ Technical Specs

architecture
null
params billions
null
context length
null
pipeline tag

📊 Engagement & Metrics

downloads
0
stars
0
forks
0

Data indexed from public sources. Updated daily.