Product Snapshot

LLaMA Factory

Users are hitting multiple training-infrastructure issues when fine-tuning large language and vision-language models: distributed training errors (FSDP2, Ray, NCCL), data-processing bugs, and model-specific compatibility problems with Qwen3.5 and Gemma4. These issues prevent reliable training on multi-GPU setups and on specialized hardware (Apple Silicon MPS, Ascend NPUs), and require fixes to resource allocation, parameter offloading, and model loading to keep fine-tuning workflows stable.
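Several of the reported offloading and multi-GPU failures come down to distributed-training configuration. As a minimal sketch, the YAML below follows the style of LLaMA Factory's example training configs; the exact key names and the FSDP-offload lines are assumptions for illustration, not verified against the current release:

```yaml
### hypothetical sketch of an SFT config with FSDP-style CPU offload
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft
do_train: true
finetuning_type: lora
dataset: alpaca_en_demo
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
# Parameter sharding/offload is typically driven by an accelerate/FSDP
# config file rather than these keys; treat the lines below as illustrative.
# fsdp: full_shard
# fsdp_offload_params: true
```

A config like this would typically be launched with `llamafactory-cli train <config>.yaml`, or via `accelerate launch` when a separate FSDP config file supplies the sharding and offload settings.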

Issues analyzed: 43
Included in ranking: 41
Demand clusters: 1
Updated: 2026-04-06
Top Demand

Distributed Training Infrastructure and Model Compatibility Fixes

Score: 7.2

Rising Demand

Distributed Training Infrastructure and Model Compatibility Fixes

1.3x

Dominant Categories

Performance

LLM Fine-tuning

Priority Map

Most Important Current Demands

  1.

    Distributed Training Infrastructure and Model Compatibility Fixes

    Performance

    Users are hitting multiple training-infrastructure issues when fine-tuning large language and vision-language models: distributed training errors (FSDP2, Ray, NCCL), data-processing bugs, and model-specific compatibility problems with Qwen3.5 and Gemma4. These issues prevent reliable training on multi-GPU setups and on specialized hardware (Apple Silicon MPS, Ascend NPUs), and require fixes to resource allocation, parameter offloading, and model loading to keep fine-tuning workflows stable.

    41 issues · Score: 7.2