🧠 text2vec-large-chinese

by ganymedenil Model ID: hf-model--ganymedenil--text2vec-large-chinese
FNI 0.1
Top 69%

"Based on the derivative model of https://huggingface.co/shibing624/text2vec-base-chinese, replace MacBERT with LERT, and keep other training conditions unchangedใ€‚ News 2024-06-25 text2vec-large-chinese onnxruntime version. Talk to me: https://twitter.com/GanymedeNil..."

Audited 0.1 FNI Score
Tiny 0.33B Params
4k Context
4.2K Downloads
8GB GPU · ~2GB Est. VRAM

⚡ Quick Commands

🦙 Ollama Run
ollama run text2vec-large-chinese
🤗 HF Download
huggingface-cli download ganymedenil/text2vec-large-chinese
📦 Install Lib
pip install -U transformers
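Once the model is downloaded, the post-processing for this kind of BERT-based text2vec model is conventional: mean-pool the token vectors (masking out padding), then compare sentences by cosine similarity. A minimal NumPy sketch of that pooling step, using dummy token embeddings in place of real model output (the shapes and hidden size 8 here are illustrative; the real model's hidden size is 1024):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token vectors over the sequence axis, ignoring padding positions."""
    mask = attention_mask[..., None].astype(token_embeddings.dtype)
    summed = (token_embeddings * mask).sum(axis=1)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)  # avoid divide-by-zero
    return summed / counts

def cosine_similarity(a, b):
    """Standard cosine similarity between two sentence vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Dummy batch: 2 sentences, 4 tokens each, hidden size 8.
emb = np.random.default_rng(0).normal(size=(2, 4, 8))
mask = np.array([[1, 1, 1, 0], [1, 1, 0, 0]])  # 1 = real token, 0 = padding
pooled = mean_pool(emb, mask)
print(pooled.shape)  # (2, 8)
print(cosine_similarity(pooled[0], pooled[1]))
```

In practice `emb` and `mask` would come from the model's `last_hidden_state` and the tokenizer's `attention_mask`; the pooling and similarity math is unchanged.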
📊 Engineering Specs

⚡ Hardware

Parameters
0.33B
Architecture
BertModel
Context Length
4K
Model Size
2.4GB

🧠 Lifecycle

Library
-
Precision
float16
Tokenizer
-

🌐 Identity

Source
HuggingFace
License
Apache-2.0 (per model tags)
💾 Est. VRAM Benchmark

~1.5GB


* Technical estimate for FP16/Q4 weights; excludes OS overhead and long-context batching. For technical reference only.

📈 Interest Trend


* Real-time activity index across HuggingFace, GitHub and Research citations.

No similar models found.

🔬 Technical Deep Dive


🖥️ Hardware Compatibility

Multi-Tier Validation Matrix

• 🎮 RTX 3060 / 4060 Ti (Entry, 8GB VRAM): Compatible
• 🎮 RTX 4070 Super (Mid, 12GB VRAM): Compatible
• 💻 RTX 4080 / Mac M3 (High, 16GB VRAM): Compatible
• 🚀 RTX 3090 / 4090 (Pro, 24GB VRAM): Compatible
• 🏗️ RTX 6000 Ada (Workstation, 48GB VRAM): Compatible
• 🏭 A100 / H100 (Datacenter, 80GB VRAM): Compatible
ℹ️ Pro Tip: Compatibility is estimated for 4-bit quantization (Q4). High-precision (FP16) weights or ultra-long context windows will significantly increase VRAM requirements.
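The Pro Tip above can be made concrete: weight memory scales linearly with bits per parameter. A rough back-of-envelope sketch (assumption: 1e9 parameters at 8 bits each is ~1GB; activations and KV cache are ignored) for this 0.33B model:

```python
def weight_gb(params_billions, bits_per_param):
    # 1e9 params at 8 bits/param is ~1GB, so GB ≈ billions * bits / 8.
    return params_billions * bits_per_param / 8

print(weight_gb(0.33, 16))  # ~0.66 GB of weights at FP16
print(weight_gb(0.33, 4))   # ~0.165 GB of weights at Q4
```

Either figure fits comfortably in every tier listed above, which is why all rows in the matrix read "Compatible".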

README

This model is a derivative of https://huggingface.co/shibing624/text2vec-base-chinese: MacBERT is replaced with LERT, with all other training conditions unchanged.

News

2024-06-25: Released an onnxruntime version of text2vec-large-chinese.

Talk to me: https://twitter.com/GanymedeNil


📝 Limitations & Considerations

  • Benchmark scores may vary based on evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠ License: tagged apache-2.0 on Hugging Face; verify licensing terms before commercial use.
  • Source: HuggingFace
📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__ganymedenil__text2vec_large_chinese,
  author = {ganymedenil},
  title = {text2vec-large-chinese},
  year = {2026},
  howpublished = {\url{https://huggingface.co/ganymedenil/text2vec-large-chinese}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
ganymedenil. (2026). text2vec-large-chinese [Model]. Free2AITools. https://huggingface.co/ganymedenil/text2vec-large-chinese
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

🛡️ Model Transparency Report

Verified data manifest for traceability and transparency.

100% Data Disclosure Active

🆔 Identity & Source

id
hf-model--ganymedenil--text2vec-large-chinese
author
ganymedenil
tags
transformers, pytorch, safetensors, bert, feature-extraction, text2vec, sentence-similarity, zh, license:apache-2.0, text-embeddings-inference, endpoints_compatible, deploy:azure, region:us

⚙️ Technical Specs

architecture
BertModel
params billions
0.33
context length
4,096
vram gb
1.5
vram is estimated
true
vram formula
VRAM ≈ (params × 0.75) + 0.8GB (KV) + 0.5GB (OS)
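The formula above can be checked directly. A small sketch reproducing the card's estimate (the 0.75 GB-per-billion-params coefficient, the 0.8GB KV allowance, and the 0.5GB OS reserve are the card's own constants, not measured values):

```python
def estimate_vram_gb(params_billions, gb_per_billion=0.75, kv_gb=0.8, os_gb=0.5):
    # Mirrors the card's formula: VRAM ≈ (params × 0.75) + 0.8GB (KV) + 0.5GB (OS).
    return params_billions * gb_per_billion + kv_gb + os_gb

print(round(estimate_vram_gb(0.33), 2))  # 1.55, which the card rounds to ~1.5GB
```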

📊 Engagement & Metrics

likes
760
downloads
4,152

Free2AITools Constitutional Data Pipeline: Curated disclosure mode active. (V15.x Standard)