Whisper Medium Swedish

by marinone94 · hf-model--marinone94--whisper-medium-swedish
📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__marinone94__whisper_medium_swedish,
  author = {marinone94},
  title = {Whisper Medium Swedish},
  year = {2026},
  howpublished = {\url{https://free2aitools.com/dataset/hf-model--marinone94--whisper-medium-swedish}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
marinone94. (2026). Whisper Medium Swedish [Model]. Free2AITools. https://free2aitools.com/dataset/hf-model--marinone94--whisper-medium-swedish

đŸ”Ŧ Technical Deep Dive


âš–ī¸ Nexus Index V2.0

25.6
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 0
Recency (R) 100
Quality (Q) 23



đŸ‘ī¸ Data Preview


Row-level preview not available for this model.

Schema structure is shown in the Field Logic panel when available.

đŸ§Ŧ Field Logic


Schema not yet indexed for this model.

Model Specification

Whisper Medium Swedish

This model is a fine-tuned version of Whisper Medium Nordic, trained on the mozilla-foundation/common_voice_11_0 (train+validation), babelbox/babelbox_voice (NST SV, train split), and google/fleurs (sv_se, train+validation+test) datasets. It achieves the following results on the evaluation set:

  • eval_loss: 0.2483
  • eval_wer: 9.8914
  • eval_runtime: 2924.8709
  • eval_samples_per_second: 1.733
  • eval_steps_per_second: 0.108
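
The eval_wer figure is a word error rate; the standard Hugging Face Whisper fine-tuning scripts report WER on a 0-100 scale, so this reads as roughly 9.9%. A minimal inference sketch follows. It is not taken from the card: the Hub id marinone94/whisper-medium-swedish is inferred from the registry id on this page, and the audio filename is a placeholder.

# Minimal sketch: transcribing Swedish speech with this checkpoint via the
# transformers ASR pipeline. Verify the Hub id against the source repository.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="marinone94/whisper-medium-swedish",  # assumed Hub id
    chunk_length_s=30,  # Whisper decodes audio in 30-second windows
)

# Any local audio file; the pipeline loads it and resamples to 16 kHz mono.
result = asr("sample_swedish.wav")  # placeholder filename
print(result["text"])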

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
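
Although this section of the card is blank, the source corpora are named in the summary above. Below is a hedged sketch of loading that mix with 🤗 Datasets: the split strings are copied from the card, the config names "sv-SE" and "sv_se" are the standard Swedish configs for Common Voice 11 and FLEURS, and the babelbox_voice arguments are assumptions to verify upstream.

from datasets import load_dataset

# Gated dataset: accept its terms on the Hub and authenticate first.
cv = load_dataset(
    "mozilla-foundation/common_voice_11_0",
    "sv-SE",
    split="train+validation",
    use_auth_token=True,  # kwarg used by the Datasets 2.7 line pinned below
)
fleurs = load_dataset("google/fleurs", "sv_se", split="train+validation+test")
nst = load_dataset("babelbox/babelbox_voice", split="train")  # NST SV, assumed args

# Before mixing, the corpora need harmonizing: cast audio to 16 kHz and map
# each transcription field onto a single text column.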

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 250
  • training_steps: 5000
  • mixed_precision_training: Native AMP
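
The card does not include the training script, but these values map directly onto transformers' Seq2SeqTrainingArguments. A hedged reconstruction follows; output_dir and anything not listed above are assumptions.

from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-swedish",  # assumption, not stated on the card
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,      # Adam betas and epsilon as listed
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=250,
    max_steps=5000,
    fp16=True,           # "Native AMP" mixed-precision training
)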

Framework versions

  • Transformers 4.26.0.dev0
  • Pytorch 1.13.1+cu117
  • Datasets 2.7.1.dev0
  • Tokenizers 0.13.2
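
The .dev0 entries indicate source installs, so exact matches may not exist on PyPI; nearby stable releases are usually close enough. A quick check of a local environment against the listed versions:

import datasets
import tokenizers
import torch
import transformers

print(transformers.__version__)  # card: 4.26.0.dev0
print(torch.__version__)         # card: 1.13.1+cu117
print(datasets.__version__)      # card: 2.7.1.dev0
print(tokenizers.__version__)    # card: 0.13.2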

WandB run

https://wandb.ai/pn-aa/whisper/runs/z2lzjx4x?workspace=user-emilio_marinone


đŸ›Ąī¸ Dataset Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id: hf-model--marinone94--whisper-medium-swedish
slug: marinone94--whisper-medium-swedish
source: huggingface
author: marinone94
license: (not listed)
tags: (not listed)

âš™ī¸ Technical Specs

architecture
null
params billions
null
context length
null
pipeline tag

📊 Engagement & Metrics

downloads: 0
stars: 0
forks: 0

Data indexed from public sources. Updated daily.