🧠 Model

Wav2vec2 Large Waxal Keyword Spotting

by galsenai
Nexus Index (FNI, audited): 24.1 (Top 100%)
Semantic (S): 50 | Authority (A): 0 | Popularity (P): 1 | Recency (R): 10 | Quality (Q): 50
Downloads, last 30 days: 7 (0.0%)
Params: - (Tiny) | Context: -
License: Apache-2.0 (commercial use permitted)
Model Information Summary
Entity Passport
Registry ID hf-model--galsenai--wav2vec2-large-waxal-keyword-spotting
License Apache-2.0
Provider huggingface
📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__galsenai__wav2vec2_large_waxal_keyword_spotting,
  author = {galsenai},
  title = {Wav2vec2 Large Waxal Keyword Spotting Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/galsenai/wav2vec2-large-waxal-keyword-spotting}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
galsenai. (2026). Wav2vec2 Large Waxal Keyword Spotting [Model]. Free2AITools. https://huggingface.co/galsenai/wav2vec2-large-waxal-keyword-spotting

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🤗 HF Download
huggingface-cli download galsenai/wav2vec2-large-waxal-keyword-spotting
📦 Install Lib
pip install -U transformers
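If the CLI is unavailable, the same snapshot can be fetched from Python. A minimal sketch, assuming the `huggingface_hub` package (installed alongside `transformers`); the `fetch_model` helper name is ours, not from the card:

```python
REPO_ID = "galsenai/wav2vec2-large-waxal-keyword-spotting"

def fetch_model(cache_dir=None):
    """Download the full model repository and return the local directory path."""
    # Imported lazily so the snippet can be read without the package installed.
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id=REPO_ID, cache_dir=cache_dir)
```

Calling `fetch_model()` triggers the actual download into the Hugging Face cache.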

âš–ī¸ Nexus Index V2.0

24.1
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 1
Recency (R) 10
Quality (Q) 50

đŸ’Ŧ Index Insight

FNI V2.0 for Wav2vec2 Large Waxal Keyword Spotting: Semantic (S:50), Authority (A:0), Popularity (P:1), Recency (R:10), Quality (Q:50).

Free2AITools Nexus Index

Verification Authority

Unbiased Data Node Refresh: VFS Live
---

wav2vec2-large

This model is a fine-tuned version of facebook/wav2vec2-large on the galsenai/waxal_dataset dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3413
  • Accuracy: 0.9443
  • Precision: 0.9780
  • F1: 0.9604

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 12
  • eval_batch_size: 12
  • seed: 0
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 48
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 32.0

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | Precision | F1     |
|---------------|-------|-------|-----------------|----------|-----------|--------|
| 4.6314        | 1.01  | 500   | 4.9165          | 0.0205   | 0.0028    | 0.0049 |
| 3.7739        | 2.02  | 1000  | 4.4491          | 0.0356   | 0.0750    | 0.0252 |
| 2.5035        | 3.04  | 1500  | 4.1429          | 0.1129   | 0.2672    | 0.1114 |
| 1.5633        | 4.05  | 2000  | 3.1973          | 0.3676   | 0.6598    | 0.3830 |
| 1.0538        | 5.06  | 2500  | 2.5479          | 0.5889   | 0.8417    | 0.6557 |
| 0.7422        | 6.07  | 3000  | 1.4494          | 0.7825   | 0.8921    | 0.8194 |
| 0.5762        | 7.08  | 3500  | 1.3168          | 0.7726   | 0.9277    | 0.8267 |
| 0.46          | 8.1   | 4000  | 0.8783          | 0.8564   | 0.9532    | 0.8982 |
| 0.4007        | 9.11  | 4500  | 0.7524          | 0.8738   | 0.9637    | 0.9137 |
| 0.3374        | 10.12 | 5000  | 0.6386          | 0.8852   | 0.9678    | 0.9221 |
| 0.3108        | 11.13 | 5500  | 0.5049          | 0.9106   | 0.9681    | 0.9373 |
| 0.2735        | 12.15 | 6000  | 0.6097          | 0.8905   | 0.9624    | 0.9226 |
| 0.2716        | 13.16 | 6500  | 0.4543          | 0.9000   | 0.9569    | 0.9206 |
| 0.2484        | 14.17 | 7000  | 0.3965          | 0.9272   | 0.9742    | 0.9489 |
| 0.228         | 15.18 | 7500  | 0.6807          | 0.8856   | 0.9777    | 0.9257 |
| 0.2307        | 16.19 | 8000  | 0.5219          | 0.9174   | 0.9802    | 0.9464 |
| 0.2169        | 17.21 | 8500  | 0.4630          | 0.9121   | 0.9677    | 0.9338 |
| 0.1997        | 18.22 | 9000  | 0.5152          | 0.9128   | 0.9740    | 0.9398 |
| 0.1921        | 19.23 | 9500  | 0.5105          | 0.9144   | 0.9867    | 0.9476 |
| 0.1825        | 20.24 | 10000 | 0.6302          | 0.9053   | 0.9832    | 0.9407 |
| 0.1786        | 21.25 | 10500 | 0.4602          | 0.9272   | 0.9813    | 0.9524 |
| 0.1671        | 22.27 | 11000 | 0.5443          | 0.9147   | 0.9794    | 0.9444 |
| 0.1623        | 23.28 | 11500 | 0.3413          | 0.9443   | 0.9780    | 0.9604 |
| 0.1595        | 24.29 | 12000 | 0.4478          | 0.9288   | 0.9813    | 0.9531 |
| 0.151         | 25.3  | 12500 | 0.4178          | 0.9360   | 0.9818    | 0.9571 |
| 0.1472        | 26.32 | 13000 | 0.4154          | 0.9356   | 0.9833    | 0.9578 |
| 0.1473        | 27.33 | 13500 | 0.4549          | 0.9318   | 0.9837    | 0.9561 |
| 0.131         | 28.34 | 14000 | 0.3574          | 0.9424   | 0.9845    | 0.9621 |
| 0.134         | 29.35 | 14500 | 0.4475          | 0.9333   | 0.9840    | 0.9568 |
| 0.1282        | 30.36 | 15000 | 0.4012          | 0.9382   | 0.9837    | 0.9591 |
| 0.1307        | 31.38 | 15500 | 0.3552          | 0.9428   | 0.9847    | 0.9624 |
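The headline eval results correspond to the row with the lowest validation loss (epoch 23.28). A small check of that, which also recovers the recall the card omits from the reported precision and F1 via F1 = 2PR/(P+R); the row subset is ours, copied from the table:

```python
# (epoch, validation_loss, accuracy, f1) for selected rows of the table above.
rows = [
    (22.27, 0.5443, 0.9147, 0.9444),
    (23.28, 0.3413, 0.9443, 0.9604),
    (28.34, 0.3574, 0.9424, 0.9621),
    (31.38, 0.3552, 0.9428, 0.9624),
]
best = min(rows, key=lambda r: r[1])  # row with lowest validation loss

# Recall implied by the best checkpoint's precision/F1, from F1 = 2PR / (P + R):
precision, f1 = 0.9780, 0.9604
recall = f1 * precision / (2 * precision - f1)  # ~0.9434
```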

Framework versions

  • Transformers 4.27.0.dev0
  • Pytorch 1.11.0+cu113
  • Datasets 2.9.1.dev0
  • Tokenizers 0.13.2

âš ī¸ Incomplete Data

Some information about this model is not available. Use with Caution - Verify details from the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • â€ĸ Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • â€ĸ VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • â€ĸ FNI scores are relative rankings and may change as new models are added.
  • ⚠ License Unknown: Verify licensing terms before commercial use.

Social Proof

HuggingFace Hub
7 Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.


🛡️ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id: hf-model--galsenai--wav2vec2-large-waxal-keyword-spotting
slug: galsenai--wav2vec2-large-waxal-keyword-spotting
source: huggingface
author: galsenai
license: Apache-2.0
tags: transformers, pytorch, wav2vec2, audio-classification, generated_from_trainer, license:apache-2.0, endpoints_compatible, region:us

⚙️ Technical Specs

architecture: null
params billions: null
context length: null
pipeline tag: audio-classification

📊 Engagement & Metrics

downloads: 7
stars: 0
forks: 0

Data indexed from public sources. Updated daily.