# Pig Detection & Tracking (YOLO + Simple IoU Tracker)

This repository contains scripts to train a YOLOv8 detector for pigs and run a simple video tracker that assigns per-animal IDs.

## Location

- Project root: `C:/Users/Administrator/Desktop/xinhaochuli`
- Virtual environment python: `C:/Users/Administrator/Desktop/xinhaochuli/.venv/Scripts/python.exe`
- Trained weights (example): `runs/detect/train/weights/best.pt`
## Important notes about your environment

- You're on Windows (PowerShell). Use the virtualenv python above when running commands to avoid path/launcher issues.
- If you see launcher errors like "Unable to create process" when running `pip`, run `python -m pip ...` with the venv python to avoid broken/encoded paths.
## Files of interest

- `train_yolo.py` — trains YOLO (Ultralytics) using `swine_dataset/data.yaml`.
- `track_pigs.py` — runs detection on a video and performs tracking. Includes NMS, an IoU+Hungarian tracker, and simple filters to reduce wall/edge false positives.
- `swine_dataset/data.yaml` — dataset config (train/val paths, `nc`, `names`).
- `.vscode/launch.json` — VS Code Run configuration "Run Track Pigs" (ready to run with current default args).
## Quick commands (PowerShell)

Note: replace the python path with the venv python shown above, or run with the venv activated.

Activate venv (optional, interactive):

```powershell
.\.venv\Scripts\Activate.ps1
```

Run training (recommended in a terminal where you can see logs):

```powershell
C:/Users/Administrator/Desktop/xinhaochuli/.venv/Scripts/python.exe train_yolo.py --epochs 50 --batch 8 --imgsz 640
```

Validate the trained model (numeric mAP/precision/recall):

```powershell
C:/Users/Administrator/Desktop/xinhaochuli/.venv/Scripts/python.exe -c "from ultralytics import YOLO; YOLO('runs/detect/train/weights/best.pt').val(data='swine_dataset/data.yaml', imgsz=640)"
```

Run full video tracking (uses current defaults set in `track_pigs.py`):

```powershell
C:/Users/Administrator/Desktop/xinhaochuli/.venv/Scripts/python.exe track_pigs.py --weights runs/detect/train/weights/best.pt --source pig_video.mp4 --output tracked_output_filtered2.mp4 --conf 0.25 --nms 0.45 --min-area 500 --edge-margin 8 --display
```
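For readers extending the script, the tracking flags above map onto a standard argparse interface. The following is only a sketch reconstructed from the flags and defaults listed in this README; the actual declarations in `track_pigs.py` may differ:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical reconstruction of the CLI documented in this README;
    # flag names and defaults are taken from the command above.
    p = argparse.ArgumentParser(description="Detect and track pigs in a video")
    p.add_argument("--weights", default="runs/detect/train/weights/best.pt")
    p.add_argument("--source", required=True, help="input video path")
    p.add_argument("--output", default="tracked_output.mp4")
    p.add_argument("--conf", type=float, default=0.25,
                   help="detection confidence threshold")
    p.add_argument("--nms", type=float, default=0.45,
                   help="IoU threshold for non-maximum suppression")
    p.add_argument("--min-area", type=float, default=500,
                   help="minimum bbox area to keep")
    p.add_argument("--edge-margin", type=int, default=8,
                   help="border width (px) treated as the frame edge")
    p.add_argument("--display", action="store_true",
                   help="show annotated frames while processing")
    return p

args = build_parser().parse_args(["--source", "pig_video.mp4", "--conf", "0.3"])
```

Note that argparse exposes `--min-area` as `args.min_area` and `--edge-margin` as `args.edge_margin`.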
## Desktop GUI (one-click app)

If you'd like a simple desktop application to open a video, run detection + tracking, and preview/save the results, use the included PyQt GUI `app_gui.py`.

Prerequisites — install PyQt5 in the project venv:

```powershell
C:/.../.venv/Scripts/python.exe -m pip install PyQt5
```

Or install all required packages from `requirements.txt`:

```powershell
C:/.../.venv/Scripts/python.exe -m pip install -r requirements.txt
```

Run the GUI (from the project root):

```powershell
C:/Users/Administrator/Desktop/xinhaochuli/.venv/Scripts/python.exe app_gui.py
```
Notes:
- The default weights path is `runs/detect/train/weights/best.pt` (change it in the GUI if needed).
- You can select a video file, tune confidence/NMS/min-area/edge-margin, start/pause/stop processing, and optionally save the processed output video.
- If PyQt5 is not installed, install it as shown above.
- The GUI has been localized into Simplified Chinese; all interface elements (buttons, prompts) are in Chinese.
## Packaging as a Windows executable

You can package the GUI into a single exe with PyInstaller:

```powershell
C:/.../.venv/Scripts/python.exe -m pip install pyinstaller
C:/.../.venv/Scripts/python.exe -m pyinstaller --onefile --windowed app_gui.py
```

PyInstaller may need extra `--add-data` entries for the ultralytics models or other hidden imports; test the exe after building and inspect the generated `dist` folder.
## Run using the VS Code Run button

- Open the project in VS Code.
- Make sure the interpreter is set to the venv: `C:/Users/Administrator/Desktop/xinhaochuli/.venv/Scripts/python.exe` (bottom-right).
- Open Run & Debug (Ctrl+Shift+D), select "Run Track Pigs", then press the green Run button (or F5).
## Default tracking parameters (in track_pigs.py)

- `--conf` (float): detection confidence threshold (default 0.25). Increase it to reduce false positives.
- `--nms` (float): IoU threshold for non-maximum suppression (default 0.45).
- `--min-area` (float): minimum bbox area to keep (default 500). Raising it removes small/wall boxes.
- `--edge-margin` (int): pixels near the border considered "edge" (default 8). Small boxes at edges are removed unless they are large.
- Tracker internals: IoU+Hungarian assignment, with `iou_threshold` defaulting to 0.3 and `max_missing` to 90 (in code). Both can be tuned in `track_pigs.py`.
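To make the matching step concrete, here is a minimal sketch of IoU+Hungarian assignment (not the repository's actual code): build a cost matrix of 1 - IoU between live tracks and new detections, solve it with scipy's Hungarian solver, and reject pairs whose IoU falls below `iou_threshold`:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    # a, b: boxes as [x1, y1, x2, y2]
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match(tracks, detections, iou_threshold=0.3):
    # Hungarian assignment minimizing total (1 - IoU); pairs whose IoU
    # falls below iou_threshold are treated as unmatched.
    if not tracks or not detections:
        return []
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols)
            if 1.0 - cost[r, c] >= iou_threshold]
```

Unmatched tracks would then have a miss counter incremented and be dropped once it exceeds `max_missing`; unmatched detections start new tracks.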
## Why you saw duplicate boxes and ID changes

- Duplicate/overlapping boxes: the detector sometimes produces multiple overlapping predictions; NMS reduces these. Reflections and wall markings can also be mistaken for pigs; the `--min-area` and `--edge-margin` filters help.
- The same pig receiving a new ID after a while: the tracker previously used a short `max_missing` and centroid matching, which is fragile. The repository now uses an IoU+Hungarian tracker and an increased `max_missing` to tolerate short occlusions. Still, long occlusions or complete disappearance and reappearance may create a new ID.
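The NMS and size/edge filtering described above can be sketched as follows (illustrative only, assuming xyxy pixel boxes; the thresholds mirror the CLI defaults, and the "unless big" cutoff is a guessed value, not the script's actual one):

```python
def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.45):
    # Greedy NMS: keep the highest-scoring box, drop boxes overlapping it.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep

def passes_filters(box, frame_w, frame_h, min_area=500, edge_margin=8,
                   big_area=2000):  # big_area is an assumed cutoff
    x1, y1, x2, y2 = box
    area = (x2 - x1) * (y2 - y1)
    if area < min_area:
        return False  # too small: likely a wall marking or noise
    near_edge = (x1 < edge_margin or y1 < edge_margin or
                 x2 > frame_w - edge_margin or y2 > frame_h - edge_margin)
    # Small boxes touching the border are dropped; large ones are kept.
    return not (near_edge and area < big_area)
```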
## Tips to further improve stability

- Train longer (more epochs) and/or use a larger model (yolov8s/m) for better detections.
- Add more varied training images (different angles, lighting, walls) and label false positives to reduce wall detections.
- Use appearance-based re-ID (color histograms or DeepSORT) to keep IDs across long occlusions — this is more work but much more robust.
- If you see many small false boxes, increase `--min-area` or `--conf`.
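As a sketch of the quick color-histogram re-ID option (numpy-only; a real implementation would likely work in HSV space and handle lighting changes, and everything here, including the 0.7 acceptance threshold, is an illustrative assumption):

```python
import numpy as np

def color_histogram(crop, bins=8):
    # crop: HxWx3 uint8 array of pixels inside a pig's bounding box.
    # Concatenated per-channel histograms, L1-normalized.
    hists = [np.histogram(crop[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(np.float64)
    return h / (h.sum() + 1e-9)

def similarity(h1, h2):
    # Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint.
    return float(np.minimum(h1, h2).sum())

def same_animal(h1, h2, threshold=0.7):
    # Accept a lost-track / new-detection pair only if appearance agrees.
    return similarity(h1, h2) > threshold
```

Storing one such histogram per track lets the tracker re-attach an old ID when a pig reappears after a long occlusion, instead of always minting a new one.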
## Troubleshooting

- "weights not found": locate `best.pt` with `dir runs\detect\**\weights\best.pt` or search in Explorer. Update `--weights` or copy the file to the path used by the launch config.
- `pip` launcher errors or garbled paths: use `C:/.../python.exe -m pip install ...` to avoid launcher problems.
- If the Run button gives a deprecation warning about the Python debug type, the workspace already includes a fixed `.vscode/launch.json` using `debugpy`.
## What I changed in the repo for you

- Repaired `swine_dataset/data.yaml` to use valid paths.
- Implemented the IoU+Hungarian tracker plus NMS and small/edge filtering in `track_pigs.py`.
- Added `.vscode/launch.json` so you can press Run in VS Code with the current defaults.
## Next steps you can ask me to do

- Tune the tracker parameters (I can change `min-area`, `nms`, `iou_threshold`, `max_missing`).
- Add appearance re-ID (a quick color-histogram option or DeepSORT integration).
- Create a Windows double-click script (`.ps1` or `.bat`) to run tracking without opening VS Code.

If you want me to make any of the above changes or run the full tracking again with adjusted params, tell me which option and I will update the code and run it.
## Chinese README (translated to English)

The following was the Chinese version of the README, provided for Chinese readers and covering the same content (environment, run commands, troubleshooting, and suggestions):

### Project overview

This project trains a pig detection model (YOLOv8) and tracks and numbers the pigs detected in a video.

### Key paths and environment

- Project root: `C:/Users/Administrator/Desktop/xinhaochuli`
- Virtual environment Python: `C:/Users/Administrator/Desktop/xinhaochuli/.venv/Scripts/python.exe`
- Example trained weights: `runs/detect/train/weights/best.pt`

### Quick commands (PowerShell)

Activate the virtual environment (optional):

```powershell
.\.venv\Scripts\Activate.ps1
```

Train the model:

```powershell
C:/Users/Administrator/Desktop/xinhaochuli/.venv/Scripts/python.exe train_yolo.py --epochs 50 --batch 8 --imgsz 640
```

Validate the model (reports mAP and other metrics):

```powershell
C:/Users/Administrator/Desktop/xinhaochuli/.venv/Scripts/python.exe -c "from ultralytics import YOLO; YOLO('runs/detect/train/weights/best.pt').val(data='swine_dataset/data.yaml', imgsz=640)"
```

Run video tracking (current default parameters):

```powershell
C:/Users/Administrator/Desktop/xinhaochuli/.venv/Scripts/python.exe track_pigs.py --weights runs/detect/train/weights/best.pt --source pig_video.mp4 --output tracked_output_filtered2.mp4 --conf 0.3 --nms 0.5 --min-area 500 --edge-margin 8 --display
```

### Parameter notes and suggestions

- `--conf`: detection confidence threshold; increase it to reduce false detections.
- `--nms`: IoU threshold for NMS.
- `--min-area`: filters out detection boxes that are too small (walls or noise).
- `--edge-margin`: optionally filters small boxes near the image border.

### Common issues and fixes

- Duplicate boxes: deduplicate with NMS and raise `--min-area` to filter small boxes.
- ID changes: IoU+Hungarian matching is now used and `max_missing` has been extended, but long occlusions may still cause re-numbering; consider adding appearance-based re-ID.

If you need me to tune parameters further or integrate a more advanced tracking method (such as DeepSORT), let me know.
End of README