🧠 Model

OpenVLA-7B-OFT Fine-Tuned on LIBERO-Spatial

by moojink · Registry ID: hf-model--moojink--openvla-7b-oft-finetuned-libero-spatial
Tech Context & Vital Performance
• Parameters: 7B
• Context length: 4,096 tokens
• Downloads (last 30 days): 14.1K
• FNI score (audited): 38.4
• Est. VRAM: ~7GB (fits an 8GB GPU)
• License: MIT (commercial use permitted)
Model Information Summary
Entity Passport
Registry ID: hf-model--moojink--openvla-7b-oft-finetuned-libero-spatial
License: MIT
Provider: huggingface
💾 Compute Threshold

~6.5GB VRAM


* Static estimate, assuming 4-bit quantization.
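
In the repo's own tooling, 4-bit loading corresponds to the load_in_4bit switch on the evaluation config used in the Quick Start further down. A minimal sketch (field names taken from that example; all other fields are assumed to keep their repo defaults):

python
# Hypothetical sketch: requesting 4-bit quantized loading through the
# repo's GenerateConfig. Field names come from the Quick Start below;
# the remaining fields are assumed to keep their defaults.
from experiments.robot.libero.run_libero_eval import GenerateConfig

cfg = GenerateConfig(
    pretrained_checkpoint="moojink/openvla-7b-oft-finetuned-libero-spatial",
    load_in_4bit=True,    # 4-bit weights -> the ~6.5GB static estimate above
    load_in_8bit=False,
)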

📜 Cite this model

Academic & Research Attribution

BibTeX
@misc{hf_model__moojink__openvla_7b_oft_finetuned_libero_spatial,
  author = {moojink},
  title = {OpenVLA-7B-OFT Fine-Tuned on LIBERO-Spatial},
  year = {2026},
  howpublished = {\url{https://huggingface.co/moojink/openvla-7b-oft-finetuned-libero-spatial}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
moojink. (2026). OpenVLA-7B-OFT Fine-Tuned on LIBERO-Spatial [Model]. Free2AITools. https://huggingface.co/moojink/openvla-7b-oft-finetuned-libero-spatial

🔬 Technical Deep Dive

Full Specifications

Quick Commands

🦙 Ollama Run (only valid if a matching model is published to the Ollama registry)
ollama run openvla-7b-oft-finetuned-libero-spatial
🤗 HF Download
huggingface-cli download moojink/openvla-7b-oft-finetuned-libero-spatial
📦 Install Lib
pip install -U transformers
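
Beyond the download, the repository tags (transformers, custom_code) suggest the checkpoint can also be loaded through the standard transformers custom-code path. A minimal sketch, assuming it follows the base OpenVLA loading convention; note that the OFT action head and proprio projector are not loaded this way, so the repo utilities in the Quick Start below remain the documented route for rollouts:

python
# Hedged sketch: plain transformers loading via the custom_code path.
# This loads only the backbone VLA; the OFT action head and proprio
# projector (see the Quick Start) are assumed to need the repo utilities.
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor

MODEL_ID = "moojink/openvla-7b-oft-finetuned-libero-spatial"
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # see the VRAM notes above for quantized loads
    trust_remote_code=True,
).to("cuda:0")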

âš–ī¸ Nexus Index V2.0

38.4
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 42
Recency (R) 53
Quality (Q) 65

💬 Index Insight

FNI V2.0 for OpenVLA-7B-OFT Fine-Tuned on LIBERO-Spatial: Semantic (S: 50), Authority (A: 0), Popularity (P: 42), Recency (R: 53), Quality (Q: 65).

Free2AITools Nexus Index

Verification Authority

---


Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success

This repository contains the OpenVLA-OFT checkpoint for LIBERO-Spatial, as described in Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success. OpenVLA-OFT significantly improves upon the base OpenVLA model by incorporating optimized fine-tuning techniques.

Project Page: https://openvla-oft.github.io/

Code: https://github.com/openvla-oft/openvla-oft

See here for other OpenVLA-OFT checkpoints: https://huggingface.co/moojink?search_models=oft

Quick Start

This example demonstrates generating an action chunk using a pretrained OpenVLA-OFT checkpoint. Ensure you have set up the conda environment as described in the GitHub README.

python
import pickle
from experiments.robot.libero.run_libero_eval import GenerateConfig
from experiments.robot.openvla_utils import get_action_head, get_processor, get_proprio_projector, get_vla, get_vla_action
from prismatic.vla.constants import NUM_ACTIONS_CHUNK, PROPRIO_DIM

# Instantiate config (see class GenerateConfig in experiments/robot/libero/run_libero_eval.py for definitions)
cfg = GenerateConfig(
    pretrained_checkpoint = "moojink/openvla-7b-oft-finetuned-libero-spatial",
    use_l1_regression = True,
    use_diffusion = False,
    use_film = False,
    num_images_in_input = 2,
    use_proprio = True,
    load_in_8bit = False,
    load_in_4bit = False,
    center_crop = True,
    num_open_loop_steps = NUM_ACTIONS_CHUNK,
    unnorm_key = "libero_spatial_no_noops",
)

# Load OpenVLA-OFT policy and inputs processor
vla = get_vla(cfg)
processor = get_processor(cfg)

# Load MLP action head to generate continuous actions (via L1 regression)
action_head = get_action_head(cfg, llm_dim=vla.llm_dim)

# Load proprio projector to map proprio to language embedding space
proprio_projector = get_proprio_projector(cfg, llm_dim=vla.llm_dim, proprio_dim=PROPRIO_DIM)

# Load sample observation:
#   observation (dict): {
#     "full_image": primary third-person image,
#     "wrist_image": wrist-mounted camera image,
#     "state": robot proprioceptive state,
#     "task_description": task description,
#   }
with open("experiments/robot/libero/sample_libero_spatial_observation.pkl", "rb") as file:
    observation = pickle.load(file)

# Generate robot action chunk (sequence of future actions)
actions = get_vla_action(cfg, vla, processor, observation, observation["task_description"], action_head, proprio_projector)
print("Generated action chunk:")
for act in actions:
    print(act)
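
Each call returns a chunk of NUM_ACTIONS_CHUNK future actions meant to be executed open-loop before the policy is queried again (the config's num_open_loop_steps is set accordingly). A sketch of how a rollout loop might consume the chunks; env is a hypothetical LIBERO-style environment stand-in, not part of this repository's API:

python
# Hypothetical rollout loop reusing the objects built in the Quick Start.
# `env.step(...)` and `env.get_observation()` are assumed stand-ins for a
# LIBERO-style simulator interface; max_steps is an assumed episode budget.
max_steps = 200
for _ in range(max_steps // NUM_ACTIONS_CHUNK):
    actions = get_vla_action(cfg, vla, processor, observation,
                             observation["task_description"],
                             action_head, proprio_projector)
    for act in actions:                  # execute the whole chunk open-loop
        env.step(act)
    observation = env.get_observation()  # refresh inputs before re-planning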

Citation

bibtex
@article{kim2025fine,
  title={Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success},
  author={Kim, Moo Jin and Finn, Chelsea and Liang, Percy},
  journal={arXiv preprint arXiv:2502.19645},
  year={2025}
}

âš ī¸ Incomplete Data

Some information about this model is unavailable. Use with caution and verify details against the original source before relying on this data.

View Original Source →

📝 Limitations & Considerations

  • Benchmark scores may vary with evaluation methodology and hardware configuration.
  • VRAM requirements are estimates; actual usage depends on quantization and batch size.
  • FNI scores are relative rankings and may change as new models are added.
  • ⚠ License: listed as MIT here; verify the licensing terms upstream before commercial use.

Social Proof

HuggingFace Hub
14.1K Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

📊 FNI Methodology · 📚 Knowledge Base · ℹ️ Verify with original source

🛡️ Model Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

id: hf-model--moojink--openvla-7b-oft-finetuned-libero-spatial
slug: moojink--openvla-7b-oft-finetuned-libero-spatial
source: huggingface
author: moojink
license: MIT
tags: transformers, safetensors, openvla, feature-extraction, robotics, custom_code, arxiv:2502.19645, license:mit, region:us

⚙️ Technical Specs

architecture: null
params billions: 7
context length: 4,096
pipeline tag: robotics
vram gb: 6.5
vram is estimated: true
vram formula: VRAM ≈ (params in billions × 0.75 GB) + 0.8 GB (KV cache) + 0.5 GB (OS overhead)
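
Plugging the 7B parameter count into the formula reproduces the 6.5GB figure above; a quick check using only values shown on this page:

python
# Worked check of the card's static VRAM formula (values from this page).
params_b = 7                              # parameter count, in billions
vram_gb = params_b * 0.75 + 0.8 + 0.5     # weights + KV cache + OS overhead
print(f"{vram_gb:.2f} GB")                # 6.55 GB -> reported as ~6.5 GB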

📊 Engagement & Metrics

downloads: 14,130
stars: 0
forks: 0

Data indexed from public sources. Updated daily.