Model

WeCheck

by nightdessert
📜

Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__nightdessert__wecheck,
  author = {nightdessert},
  title = {WeCheck Model},
  year = {2026},
  howpublished = {\url{https://huggingface.co/nightdessert/wecheck}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
nightdessert. (2026). WeCheck [Model]. Free2AITools. https://huggingface.co/nightdessert/wecheck

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🤗 HF Download
huggingface-cli download nightdessert/wecheck
📦 Install Lib
pip install -U transformers
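
If you prefer the Python API to the CLI, the same snapshot can be fetched programmatically with the huggingface_hub library (a minimal sketch; huggingface_hub ships as a dependency of transformers):

python
# Programmatic equivalent of the CLI download above
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="nightdessert/wecheck")
print(local_dir)  # local cache directory containing the model files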

âš–ī¸ Nexus Index V2.0

24.8
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 4
Recency (R) 12
Quality (Q) 50

💬 Index Insight

FNI V2.0 for WeCheck: Semantic (S:50), Authority (A:0), Popularity (P:4), Recency (R:12), Quality (Q:50).

Free2AITools Nexus Index

Verification Authority

Unbiased Data Node Refresh: VFS Live
---

Technical Deep Dive

Factual Consistency Evaluator/Metric from the ACL 2023 paper:

WeCheck: Strong Factual Consistency Checker via Weakly Supervised Learning

Open-sourced code: https://github.com/nightdessert/WeCheck

Model description

WeCheck is a factual consistency metric trained from weakly annotated samples.

This WeCheck checkpoint can be used to check factual consistency in the following three generation tasks:

Text Summarization / Knowledge-grounded Dialogue Generation / Paraphrase

This WeCheck checkpoint is trained on the following three weak labelers:

*[QAFactEval](https://github.com/salesforce/QAFactEval)* / *[SummaC](https://github.com/tingofurro/summac)* / *[NLI warmup](https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli)*
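
For intuition only, the sketch below shows the kind of weak signal a single labeler produces, using the NLI warmup checkpoint linked above to score entailment between a premise and a hypothesis. It is an illustration, not the paper's actual training or label-aggregation pipeline, and the entailment index is looked up from the model config rather than assumed.

python
# Illustration of one weak labeler's signal; NOT the WeCheck training pipeline.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

nli_name = "MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli"
nli_tokenizer = AutoTokenizer.from_pretrained(nli_name)
nli_model = AutoModelForSequenceClassification.from_pretrained(nli_name)

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was not good."

enc = nli_tokenizer(premise, hypothesis, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(nli_model(**enc).logits, dim=-1)[0]

# Find the "entailment" class from the model config instead of hardcoding its index
ent_idx = {label.lower(): idx for idx, label in nli_model.config.id2label.items()}["entailment"]
weak_signal = probs[ent_idx].item()  # entailment probability, usable as a weak consistency signal
print(weak_signal)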

How to use the model

python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "nightdessert/WeCheck"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # move the model to the same device as the inputs
premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."  # Input for Summarization / Dialogue / Paraphrase
hypothesis = "The movie was not good."  # Output for Summarization / Dialogue / Paraphrase
inputs = tokenizer(premise, hypothesis, truncation="only_first", max_length=512, return_tensors="pt")
with torch.no_grad():
    output = model(inputs["input_ids"].to(device))["logits"][:, 0]
prediction = torch.sigmoid(output).tolist()  # consistency probability in [0, 1]
print(prediction)  # [0.884]

or apply it to a batch of samples:

python
premise = ["I first thought that I liked the movie, but upon second thought it was actually disappointing."] * 3  # Input list for Summarization / Dialogue / Paraphrase
hypothesis = ["The movie was not good."] * 3  # Output list for Summarization / Dialogue / Paraphrase
batch = tokenizer(premise, hypothesis, padding=True, truncation="only_first",
                  max_length=512, return_tensors="pt").to(device)
with torch.no_grad():
    output = model(**batch)["logits"][:, 0]  # pass the attention mask too, so padding is ignored
prediction = torch.sigmoid(output).tolist()
print(prediction)  # [0.884, 0.884, 0.884]
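
For larger evaluations, the calls above can be wrapped in a small batched helper. This is a sketch, not part of the released code: wecheck_score and its batch_size argument are hypothetical names, and the function reuses the tokenizer, model, and device objects created earlier.

python
from typing import List

def wecheck_score(premises: List[str], hypotheses: List[str], batch_size: int = 16) -> List[float]:
    """Hypothetical helper: consistency score for each (premise, hypothesis) pair."""
    scores: List[float] = []
    model.eval()
    for start in range(0, len(premises), batch_size):
        enc = tokenizer(
            premises[start:start + batch_size],
            hypotheses[start:start + batch_size],
            padding=True,
            truncation="only_first",
            max_length=512,
            return_tensors="pt",
        ).to(device)
        with torch.no_grad():
            logits = model(**enc)["logits"][:, 0]
        scores.extend(torch.sigmoid(logits).tolist())
    return scores

# e.g. wecheck_score([premise] * 3, [hypothesis] * 3) -> [0.884, 0.884, 0.884]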

license: openrail
pipeline_tag: text-classification
language:

  • en

tags:

  • Factual Consistency
  • Natural Language Inference
  • Factual Consistency Evaluation

âš ī¸ Incomplete Data

Some information about this model is not available. Use with Caution - Verify details from the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠ License Unknown: Verify licensing terms before commercial use.

Social Proof

HuggingFace Hub
35 Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology 📚 Knowledge Base ℹ️ Verify with original source

đŸ›Ąī¸ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id: hf-model--nightdessert--wecheck
slug: nightdessert--wecheck
source: huggingface
author: nightdessert
license:
tags: transformers, pytorch, deberta-v2, text-classification, text-generation, arxiv:2212.10057, endpoints_compatible, region:us

âš™ī¸ Technical Specs

architecture
null
params billions
null
context length
null
pipeline tag
text-generation

📊 Engagement & Metrics

downloads: 35
stars: 0
forks: 0

Data indexed from public sources. Updated daily.