Product Snapshot

OpenCode

Users want improved configuration support for various AI model providers including OpenRouter, Ollama, Kimi K2, GCP Anthropic, and Azure OpenAI. They also need fixes for issues with environment variables, endpoint configuration, and provider selection loops that prevent smooth integration with these providers.

Issues analyzed: 62
Ranked: 59
Demand clusters: 10
Updated: 2026-04-03
Top Demand

Model Provider Configuration and Integration Fixes

Score: 0.3

Rising Demands

Dominant Category

Integration

AI Coding Assistant

Priority Map

Current Top Priorities

  1. Model Provider Configuration and Integration Fixes

    Integration

    Users want improved configuration support for various AI model providers including OpenRouter, Ollama, Kimi K2, GCP Anthropic, and Azure OpenAI. They also need fixes for issues with environment variables, endpoint configuration, and provider selection loops that prevent smooth integration with these providers.

    14 issues · score 0.3
  2. Improve Installation, Configuration, and CLI Flexibility

    Configuration

    Users are experiencing installation and setup issues across different platforms (npm, WSL/Linux) while also seeking more flexibility in how they configure and interact with the CLI. Key requests include adding alternative installation methods, fixing 'agent coder not found' errors, improving model configuration options, and enhancing input handling for special characters and multiline content.

    12 issues · score 0.1
  3. Provider Integration, Model Support, and Tool Improvements

    Integration

    Users want expanded support for AI providers (GitHub Models, Kimi k2, Ollama, Copilot) along with fixes for tool calling, shell timeouts, and install scripts. These improvements address integration gaps and configuration limitations that prevent smooth operation across different environments and use cases.

    13 issues · score 0.1
  4. Expand AI Provider Support and Error Handling

    Integration

    Users are requesting expanded support for new AI models and tools (Claude WebSearch, Sonnet 4, Opus 4 in Vertex AI) while also asking for better error handling and robustness (MCP timeouts, token limit handling, Ollama validation fixes). These needs focus on improving integrations with various AI providers and preventing application failures during provider interactions.

    6 issues · score 0.1
  5. Ollama Local LLM Integration Support

    Integration

    Users want official support for using Ollama as a local LLM provider, enabling privacy-focused and cost-effective AI capabilities. They also need graceful handling when Ollama models don't support tool/function calling, rather than experiencing errors or unexpected behavior.

    2 issues · score 0.1
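Several of the clusters above center on pointing OpenCode at a custom provider endpoint (Ollama, Azure OpenAI, OpenRouter). As a rough illustration of what that configuration looks like, here is a minimal sketch of an `opencode.json` fragment wiring up a local Ollama instance through an OpenAI-compatible endpoint. Key names and values are assumptions drawn from common OpenAI-compatible provider conventions and should be verified against the current OpenCode documentation; the model name and base URL are placeholders.

```json
{
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "llama3.1": {
          "name": "Llama 3.1"
        }
      }
    }
  }
}
```

A working setup of this shape would address the last cluster directly, though tool/function calling still depends on the selected Ollama model supporting it, which is the graceful-degradation gap those two issues describe.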