🧠 Model

Clinicalbert Mimic Phi Ner

by racheltong (hf-model--racheltong--clinicalbert-mimic-phi-ner)
Tech Context

Vital Performance

  • Downloads (30 days): 363 (trend: 0.0%)
  • FNI Score: 40.8 (audited)
  • Params: Tiny (exact count not reported)
  • Context length: not reported
  • License: MIT (commercial use permitted)
Model Information Summary
Entity Passport
  • Registry ID: hf-model--racheltong--clinicalbert-mimic-phi-ner
  • License: MIT
  • Provider: huggingface
📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__racheltong__clinicalbert_mimic_phi_ner,
  author = {racheltong},
  title = {Clinicalbert Mimic Phi Ner Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/racheltong/clinicalbert-mimic-phi-ner}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
racheltong. (2026). Clinicalbert Mimic Phi Ner [Model]. Free2AITools. https://huggingface.co/racheltong/clinicalbert-mimic-phi-ner

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🤗 HF Download
huggingface-cli download racheltong/clinicalbert-mimic-phi-ner
📦 Install Lib
pip install -U transformers
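
🐍 Python Usage
A minimal usage sketch, assuming the checkpoint exposes a standard token-classification head; the example note is hypothetical and the label names printed depend on the checkpoint's config, which the card does not document:

from transformers import pipeline

# Load the fine-tuned ClinicalBERT PHI tagger from the Hub.
# aggregation_strategy="simple" merges word-piece tokens into entity spans.
ner = pipeline(
    "token-classification",
    model="racheltong/clinicalbert-mimic-phi-ner",
    aggregation_strategy="simple",
)

# Hypothetical clinical note snippet, for illustration only.
text = "Patient John Smith, MRN 1234567, was seen on 03/14/2024."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))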

âš–ī¸ Nexus Index V2.0

40.8
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 17
Recency (R) 94
Quality (Q) 65

💬 Index Insight

FNI V2.0 for Clinicalbert Mimic Phi Ner: Semantic (S:50), Authority (A:0), Popularity (P:17), Recency (R:94), Quality (Q:65).

---


ClinicalBERT-mimic-phi-ner

This model is a fine-tuned version of emilyalsentzer/Bio_ClinicalBERT on an unspecified dataset (the source card lists the dataset as None). It achieves the following results on the evaluation set (a short note on macro vs. weighted F1 follows the list):

  • Loss: 0.0017
  • F1 Macro: 0.9441
  • F1 Weighted: 0.9441
  • Precision: 0.9140
  • Recall: 0.9763
  • F1 Name: 0.94
  • F1 Location: 0.91
  • F1 Phone: 0.93
  • F1 Date: 0.84
  • F1 MRN: 0.96
  • F1 Account: 0.97
  • F1 Age Over 89: 0.98
  • F1 Device ID: 0.99
  • F1 SSN: 1.0
  • F1 URL: 1.0
  • F1 Email: 0.99
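
Macro F1 averages the per-entity F1 scores equally, while weighted F1 weights each entity type by its support. A toy illustration with scikit-learn (hypothetical labels; this is not the author's evaluation code):

from sklearn.metrics import f1_score

# Toy token-level gold and predicted labels (hypothetical).
y_true = ["NAME", "NAME", "O", "DATE", "MRN", "O", "O", "DATE"]
y_pred = ["NAME", "O", "O", "DATE", "MRN", "O", "O", "DATE"]

# Macro F1 treats every class equally; weighted F1 scales each class by its frequency.
print(f1_score(y_true, y_pred, average="macro"))
print(f1_score(y_true, y_pred, average="weighted"))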

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 0.1
  • num_epochs: 2
  • mixed_precision_training: Native AMP
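
For reproduction, here is a minimal sketch of how these settings map onto transformers TrainingArguments. It is not the author's training script, and the warmup value of 0.1 is interpreted as a ratio rather than a step count (an assumption):

from transformers import TrainingArguments

# Sketch of the reported hyperparameters as TrainingArguments.
args = TrainingArguments(
    output_dir="clinicalbert-mimic-phi-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,  # effective train batch size: 32
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,               # reported as lr_scheduler_warmup_steps: 0.1
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                      # "Native AMP" mixed precision
    seed=42,
)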

Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Weighted | Precision | Recall | F1 Name | F1 Location | F1 Phone | F1 Date | F1 MRN | F1 Account | F1 Age Over 89 | F1 Device ID | F1 SSN | F1 URL | F1 Email |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.4470 | 0.1774 | 300 | 0.0868 | 0.3948 | 0.3948 | 0.2935 | 0.6032 | 0.46 | 0.33 | 0.41 | 0.04 | 0.46 | 0.4 | 0.08 | 0.58 | 0.32 | 0.0 | 0.31 |
| 0.0508 | 0.3547 | 600 | 0.0112 | 0.7449 | 0.7449 | 0.6654 | 0.8461 | 0.82 | 0.57 | 0.8 | 0.21 | 0.64 | 0.85 | 0.04 | 0.89 | 0.86 | 0.56 | 0.95 |
| 0.0302 | 0.5321 | 900 | 0.0131 | 0.8389 | 0.8389 | 0.7652 | 0.9284 | 0.88 | 0.72 | 0.86 | 0.27 | 0.59 | 0.98 | 0.84 | 0.9 | 0.92 | 0.93 | 0.99 |
| 0.0244 | 0.7094 | 1200 | 0.0046 | 0.8816 | 0.8816 | 0.8212 | 0.9517 | 0.9 | 0.81 | 0.81 | 0.48 | 0.75 | 0.97 | 0.95 | 0.97 | 0.98 | 1.0 | 1.0 |
| 0.0187 | 0.8868 | 1500 | 0.0030 | 0.9160 | 0.9160 | 0.8713 | 0.9656 | 0.93 | 0.82 | 0.87 | 0.52 | 0.89 | 0.95 | 0.96 | 0.96 | 1.0 | 1.0 | 1.0 |
| 0.0055 | 1.0638 | 1800 | 0.0030 | 0.9343 | 0.9343 | 0.8979 | 0.9737 | 0.94 | 0.89 | 0.9 | 0.57 | 0.92 | 0.97 | 0.98 | 0.99 | 1.0 | 1.0 | 1.0 |
| 0.0037 | 1.2412 | 2100 | 0.0027 | 0.9306 | 0.9306 | 0.8944 | 0.9697 | 0.93 | 0.89 | 0.9 | 0.74 | 0.92 | 0.98 | 0.98 | 0.99 | 1.0 | 1.0 | 1.0 |
| 0.0117 | 1.4186 | 2400 | 0.0025 | 0.9338 | 0.9338 | 0.8988 | 0.9716 | 0.94 | 0.88 | 0.89 | 0.8 | 0.94 | 0.97 | 0.98 | 0.99 | 1.0 | 1.0 | 0.99 |
| 0.0066 | 1.5959 | 2700 | 0.0020 | 0.9454 | 0.9454 | 0.9159 | 0.9769 | 0.95 | 0.9 | 0.93 | 0.83 | 0.96 | 0.98 | 0.99 | 0.99 | 1.0 | 1.0 | 0.99 |
| 0.0043 | 1.7733 | 3000 | 0.0018 | 0.9433 | 0.9433 | 0.9124 | 0.9763 | 0.94 | 0.9 | 0.93 | 0.82 | 0.96 | 0.97 | 0.99 | 0.99 | 1.0 | 1.0 | 0.99 |
| 0.0030 | 1.9506 | 3300 | 0.0017 | 0.9441 | 0.9441 | 0.9140 | 0.9763 | 0.94 | 0.91 | 0.93 | 0.84 | 0.96 | 0.97 | 0.98 | 0.99 | 1.0 | 1.0 | 0.99 |

Framework versions

  • Transformers 5.0.0
  • PyTorch 2.10.0+cu128
  • Datasets 4.0.0
  • Tokenizers 0.22.2

âš ī¸ Incomplete Data

Some information about this model is not available. Use with Caution - Verify details from the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠ License: listed as MIT in the upstream metadata; verify licensing terms before commercial use.

Social Proof

HuggingFace Hub
363 Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.


đŸ›Ąī¸ Model Transparency Report

Technical metadata sourced from upstream repositories.
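
These fields can be pulled straight from the Hub; a minimal sketch using huggingface_hub (attribute names follow the current ModelInfo schema and may differ across library versions):

from huggingface_hub import model_info

# Fetch the upstream model metadata that this report mirrors.
info = model_info("racheltong/clinicalbert-mimic-phi-ner")
print(info.id, info.pipeline_tag, info.downloads, info.likes)
print(info.tags)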

Open Metadata

🆔 Identity & Source

  • id: hf-model--racheltong--clinicalbert-mimic-phi-ner
  • slug: racheltong--clinicalbert-mimic-phi-ner
  • source: huggingface
  • author: racheltong
  • license: MIT
  • tags: transformers, safetensors, bert, token-classification, generated_from_trainer, base_model:emilyalsentzer/bio_clinicalbert, license:mit, endpoints_compatible, region:us

âš™ī¸ Technical Specs

architecture
null
params billions
null
context length
null
pipeline tag
token-classification

📊 Engagement & Metrics

  • downloads: 363
  • stars: 0
  • forks: 0

Data indexed from public sources. Updated daily.