84 issues: Fix 1M context features (WebSearch, subagents) to work on Max plan without requiring extra usage toggle; Fix continue command to respect workflow phases and resume from the correct step rather than skipping ahead; Fix disproportionate session budget consumption for CLI shell executions and sub-tasks
Top need: Miscellaneous issues spanning multiple areas
Rising need: Miscellaneous issues spanning multiple areas
Dominant category: Other
Users are experiencing critical stability issues with long-lived CLI sessions, including data loss on exit, memory leaks causing system freezes, orphaned child processes, and crashes when handling long chat histories. These issues undermine trust in the tool for important work, as users risk losing progress and context when sessions become unstable.
Top need: CLI stability, data loss prevention, and session reliability
Rising need: CLI stability, data loss prevention, and session reliability
Dominant category: Developer Experience
Users want granular control over AI assistant behavior per project/mode, including per-mode auto-approve settings, MCP server access controls, and custom model/provider handling. This includes fixing issues with custom model names, temperature defaults, and AWS Bedrock caching, while adding new features like external skill repository imports and MCP tool allowlisting.
Top need: Enhanced mode configuration and custom provider flexibility
Rising need: —
Dominant category: Configuration
Users are reporting multiple bugs affecting the command-line and terminal user interface experience, including platform-specific issues (particularly on Windows), UI state synchronization problems when switching between modes, tool execution hangs and parsing failures, and display/rendering issues in terminal environments. These issues collectively degrade the reliability and usability of the CLI/TUI for daily development workflows.
Top need: CLI/TUI Bug Fixes and Cross-Platform Stability
Rising need: CLI/TUI Bug Fixes and Cross-Platform Stability
Dominant category: Developer Experience
Users are requesting better error handling across various failure scenarios including missing configurations, network failures, invalid data, and dependency issues. Additionally, there are requests to improve CLI argument validation, add non-interactive single-pass execution modes for automation, and fix cross-platform compatibility issues with Windows glob patterns.
Top need: Error handling and CLI robustness improvements
Rising need: Handle unencodable Unicode characters in Windows console
Dominant category: Developer Experience
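The rising need above, unencodable Unicode characters in the Windows console, typically surfaces as a `UnicodeEncodeError` when output hits a legacy code page such as cp1252. A minimal, hedged sketch of one common mitigation; the `console_safe` helper name is illustrative, not from any specific CLI:

```python
import sys

def console_safe(text, encoding=None):
    """Round-trip text through the console encoding, replacing any
    character the code page cannot represent with '?'."""
    enc = encoding or sys.stdout.encoding or "utf-8"
    return text.encode(enc, errors="replace").decode(enc)

# A legacy Windows code page such as cp1252 cannot encode U+2713 (check mark):
print(console_safe("done \u2713", encoding="cp1252"))  # prints "done ?"
```

On Python 3.7+, `sys.stdout.reconfigure(errors="replace")` achieves a similar effect globally without wrapping individual writes.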
Users are experiencing bugs across multiple IDE features including language tooling, UI interactions, and AI integrations. These issues range from crashes and incorrect behavior to performance degradation, all of which erode the overall development experience. Addressing these bugs would significantly improve the reliability and usability of the IDE.
Top need: IDE Bug Fixes and Stability Improvements
Rising need: IDE Bug Fixes and Stability Improvements
Dominant category: Developer Experience
Users want improved inference performance, memory efficiency, and platform compatibility for AI models, particularly Gemma 4. Issues include slow inference and hanging on Apple Silicon M4 and GB10 platforms, memory constraints on low-end devices, and Flash Attention hangs at large context sizes. Additionally, users need consistent API behavior across OpenAI-compatible and Anthropic endpoints, reliable streaming responses, and proper handling of model-specific features like thinking mode.
Top need: Performance Optimization and Model Efficiency
Rising need: Performance Optimization and Model Efficiency
Dominant category: Performance
Users are reporting various bugs affecting workflow stability, node integrations, and core functionality. Issues range from MCP SDK parsing errors and integration node crashes (WooCommerce, Dropbox, CoinGecko) to workflow loading failures and intermittent credential timeouts. These bugs cause crashes, infinite loading states, and incorrect API behavior, degrading the reliability of workflow execution.
Top need: Bug Fixes Across Nodes and Core Workflow
Rising need: Bug Fixes Across Nodes and Core Workflow
Dominant category: Reliability
Users are experiencing multiple reliability issues with the CLI, including indefinite hangs when the network is blocked, processing errors, and broken external editor integration. They also need improved session management features such as resume prompts, optimized session listing with JSON output, and better preservation of chat history. These issues affect developer productivity and trust in the tool's reliability.
Top need: CLI Stability, Session Management, and UI Improvements
Rising need: CLI Stability, Session Management, and UI Improvements
Dominant category: Developer Experience
Users are encountering multiple installation failures and compatibility issues across Windows, Apple Silicon, ARM64, and various GPU architectures (Pascal, Blackwell, Jetson Orin). These include PyTorch installation failures in restricted network environments, environment conflicts with Anaconda/Google Colab, missing GPU support for specialized hardware, and platform-specific issues like Windows ACL inheritance and CUDA library path conflicts. Users need resilient installation options with configurable sources and better fault tolerance.
Top need: Cross-Platform Installation and GPU Compatibility
Rising need: Cross-Platform Installation and GPU Compatibility
Dominant category: Developer Experience
Users are requesting various improvements to the editor experience, including better viewing controls like zoom and mini-maps, fixes to table manipulation in both editor and edgeless modes (context menus, nested lists, column resizing, markdown export), and clarifications to UI labels. These improvements address both functional bugs and user experience concerns to make the application more intuitive.
Top need: Editor UI refinements and table editing fixes
Rising need: —
Dominant category: UI/UX
Users are experiencing multiple crashes and stability issues with the Vulkan backend, particularly when running Gemma 4 models, multimodal vision pipelines, and handling large prompts. These issues include assertion failures, invalid pointer crashes, memory leaks, and incorrect memory estimation that prevent reliable inference. Additionally, users need better documentation for multimodal model configuration and expanded audio modality support.
Top need: Vulkan backend stability and Gemma 4 model fixes
Rising need: Vulkan backend stability and Gemma 4 model fixes
Dominant category: Reliability
Users are experiencing multiple training infrastructure issues when fine-tuning large language and vision-language models, including distributed training errors (FSDP2, Ray, NCCL), data processing bugs, and model-specific compatibility problems with Qwen3.5 and Gemma 4. These issues prevent reliable training across multi-GPU setups and specialized hardware (Apple Silicon MPS, Ascend NPUs), requiring fixes to resource allocation, parameter offloading, and model loading to ensure stable fine-tuning workflows.
Top need: Distributed Training Infrastructure and Model Compatibility Fixes
Rising need: Distributed Training Infrastructure and Model Compatibility Fixes
Dominant category: Performance
Users are requesting a comprehensive set of enhancements including cryptographic audit trails and governance controls for secure agent execution, new tool integrations (AIGEN, MAXIA AI Marketplace, Umarise Bitcoin API), multi-stage pipeline workflows with RAG verification, and various bug fixes for LLM providers and existing tools. These features would expand platform security, ecosystem breadth, and workflow flexibility.
Top need: Security, Tool Integrations, and Platform Extensions
Rising need: Security, Tool Integrations, and Platform Extensions
Dominant category: Platform Support
Users are requesting various workflow management enhancements including improved UI interactions (node dropdown filtering, preset colors, parameter locking, tab restoration) and fixes for critical performance issues (WebP/VAE decode speed, VRAM management, memory exhaustion, search lag). They also want bugs fixed for workflow persistence and auto-save to ensure their work is reliably saved.
Top need: Workflow UI/UX and Performance Improvements
Rising need: Workflow UI/UX and Performance Improvements
Dominant category: UI/UX
Users need fixes for tracing accuracy issues including recursive trace loops, phantom parent observations, and span flushing problems. They also want improved integrations with tools like Kiro CLI and LangChain, plus UI refinements for time display, markdown rendering, and internationalization support. These improvements address core observability reliability and developer experience gaps in the Langfuse platform.
Top need: Tracing reliability, integrations, and UI polish
Rising need: Tracing reliability, integrations, and UI polish
Dominant category: Observability & Integrations
Users are requesting improvements to workflow operations including better HITL (Human-in-the-Loop) node functionality with dropdown field support and action value exposure, workflow template examples for creative writing and LLM pipelines, and critical reliability fixes for runtime state persistence, crash recovery, and initialization. Additionally, various UI and bug fixes are needed for workflow execution, code node state, and application context handling.
Top need: Workflow Runtime Stability and HITL Enhancements
Rising need: Workflow Runtime Stability and HITL Enhancements
Dominant category: Developer Experience
Users are reporting various bugs across API providers (StepFun 404 errors), extensions (Filesystem loading failures, summon frontmatter warnings), and UI components (macOS menu alignment, code block rendering). Additionally, they want new CLI capabilities including model management commands, markdown rendering, and platform-specific configuration paths. These improvements enhance reliability and developer experience across the Desktop and CLI interfaces.
Top need: Bug fixes and CLI feature improvements
Rising need: Bug fixes and CLI feature improvements
Dominant category: Developer Experience
Users are reporting multiple bugs affecting various LLM provider integrations (Bedrock, Vertex AI, Azure, Modal, Predibase, Together AI) including streaming inconsistencies, crashes, and incorrect parameter handling. Additionally, users want improved observability through built-in latency profiling and want to address security vulnerabilities in dependencies. These issues collectively affect the reliability and accuracy of the proxy when serving requests across different cloud providers and model types.
Top need: LLM Provider Integration Fixes and Performance Observability
Rising need: LLM Provider Integration Fixes and Performance Observability
Dominant category: Integration
Users are reporting various bugs and requesting enhancements across deployment, integration, and core platform functionality. Issues include missing Docker configuration files, broken workflow editor interactions, incomplete external connector syncing, and gaps in API/SDK configuration exposure. Addressing these would improve system reliability, deployment flexibility, and user experience across multiple areas.
Top need: Bug fixes and feature improvements across platform components
Rising need: Bug fixes and feature improvements across platform components
Dominant category: Platform Support
Users are requesting various improvements including fixes to chat message handling and UI consistency, model configuration options for better control, and new analytics capabilities to track usage patterns and metrics across the platform. These enhancements aim to improve both the developer experience when working with AI models and provide better visibility into system usage.
Top need: Chat UX Fixes, Model Configuration, and Analytics Features
Rising need: Chat UX Fixes, Model Configuration, and Analytics Features
Dominant category: Platform Support
Users are experiencing various reliability issues with GitHub Copilot Chat, including sudden unresponsiveness, hangs, and freezes during operation. Additionally, there are compatibility problems between different models (especially Codex) and the /chat/completions API endpoint, as well as issues with VS Code Insiders compatibility and MCP server functionality. These problems prevent users from having a stable, productive AI-assisted development experience.
Top need: Fix Copilot Chat reliability and model compatibility issues
Rising need: —
Dominant category: Reliability
This cluster addresses multiple reliability issues across the CLI, agent runtime, and plugin system. Users need fixes for npm installation hoisting, network connectivity on macOS, spawn timeouts, plugin registration visibility, and video generation callbacks, along with new CLI capabilities for interactive model parameter prompts and auth profile migration. These improvements ensure more predictable behavior when developing and deploying agent-based workflows.
Top need: Bug fixes and CLI enhancements for agents and plugins
Rising need: Bug fixes and CLI enhancements for agents and plugins
Dominant category: Developer Experience
Users are experiencing multiple API integration issues including authentication failures with Anthropic and OpenAI keys, proxy connection problems, and custom headers not being passed correctly. These issues prevent proper communication with model providers and affect both the Continue proxy and direct API connections. Fixing these will ensure reliable API access and proper error handling for edge cases like context length limits.
Top need: API Integration and Authentication Fixes
Rising need: Document .continue/configs/ directory support
Dominant category: Documentation
Users are reporting critical issues with Mixture of Experts (MoE) model performance including significant decode throughput regressions, quantization-related accuracy problems with new models like Gemma 4 and Qwen3, and CUDA/ROCm backend stability issues causing crashes and hangs. These fixes are essential for running large-scale MoE deployments reliably and efficiently.
Top need: MoE Performance, Quantization, and Backend Stability Fixes
Rising need: MoE Performance, Quantization, and Backend Stability Fixes
Dominant category: Performance
Users want improved configuration options to control platform behavior (such as accepting 'none' values, respecting mime types allowlists, and disabling summarization), fixes to API integration issues (including excessive retries, response API usage, and recursion limits), and authentication improvements for OAuth/MCP servers and token exchange. These changes address gaps in configurability, API correctness, and security that impact both developer experience and end-user functionality.
Top need: Configuration flexibility, API fixes, and authentication improvements
Rising need: Configuration flexibility, API fixes, and authentication improvements
Dominant category: Configuration
20 issues: Fix auth resolve endpoint to handle missing organization/user data gracefully instead of returning 500 error; Add password visibility toggle to sign-in page password field; Add organization-level default chatflow configuration support via OrganizationConfig
Top need: Miscellaneous issues spanning multiple areas
Rising need: —
Dominant category: Other
Users are requesting new LLM provider integrations (Avian, Moonshot AI, Kimi Code) and enhanced agent capabilities including multi-agent swarm orchestration, persistent memory, output verification gates, and security-focused permission controls. They also want tooling improvements like runtime precondition gating for tool dependencies and bug detection during development.
Top need: Add LLM Provider Integrations and Agent Capabilities
Rising need: Add LLM Provider Integrations and Agent Capabilities
Dominant category: Integration
Users are requesting comprehensive security features to protect multi-agent systems from various threats including prompt injection attacks, credential leakage, agent spoofing, and malicious MCP servers. These include OS-level sandboxing, cryptographic authentication, identity verification, trust verification for delegation chains, and standardized guardrail protocols for policy enforcement and audit logging.
Top need: Security and Trust Framework for Multi-Agent Systems
Rising need: Security and Trust Framework for Multi-Agent Systems
Dominant category: Security
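One of the requested protections above, cryptographic authentication to prevent agent spoofing, can be illustrated with message authentication codes. A minimal sketch, assuming a pre-shared key between agents; the function and agent names are hypothetical, not from any specific framework:

```python
import hmac
import hashlib

def sign_message(key: bytes, sender: str, payload: bytes) -> str:
    # Bind the sender identity into the MAC so a payload signed by one
    # agent cannot be replayed under another agent's name.
    mac = hmac.new(key, sender.encode() + b"\x00" + payload, hashlib.sha256)
    return mac.hexdigest()

def verify_message(key: bytes, sender: str, payload: bytes, signature: str) -> bool:
    expected = sign_message(key, sender, payload)
    # compare_digest avoids leaking timing information during comparison.
    return hmac.compare_digest(expected, signature)

sig = sign_message(b"shared-secret", "planner-agent", b"delegate: run tests")
assert verify_message(b"shared-secret", "planner-agent", b"delegate: run tests", sig)
# The same signature fails under a different claimed sender identity:
assert not verify_message(b"shared-secret", "impostor-agent", b"delegate: run tests", sig)
```

A production design would layer key distribution and delegation-chain verification on top of this primitive, which is where most of the requested trust-framework work lies.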
Users are reporting multiple issues with browser automation reliability across platforms, including initialization timeouts, permission failures, process isolation conflicts, and coordinate calculation errors on high-DPI displays. Additionally, there are requests to enhance security through sensitive data redaction, JWT-based trust verification, and MCP tool schema compatibility improvements for the Claude API.
Top need: Cross-Platform Browser Automation Stability and Security
Rising need: Cross-Platform Browser Automation Stability and Security
Dominant category: Reliability
This cluster covers multiple areas of improvement: adding policy-based security measures for agent delegation and dependency age verification, fixing bugs in messaging and UI components, and enhancing observability and configuration options for developers. Users want better trust verification, fewer redundant messages, and improved developer tooling.
Top need: Security hardening and UX polish fixes
Rising need: Security hardening and UX polish fixes
Dominant category: Reliability
Users are encountering multiple installation-related issues including missing module imports, failed dependency installations for core packages like PyTorch and CLIP, and startup errors. Additionally, security vulnerabilities in the dependency cloning process pose supply chain risks. These problems prevent users from successfully setting up the development environment.
Top need: Installation failures and dependency fixes
Rising need: —
Dominant category: Developer Experience
Users want the AutoGPT platform to be more reliable and robust through better error handling and graceful degradation when external services fail or unexpected inputs occur. This includes fixing bugs in block execution, JSON parsing, and UI rendering, as well as adding verification layers and cost estimation to make agents more production-ready.
Top need: Improve Agent Reliability and Error Handling
Rising need: Improve Agent Reliability and Error Handling
Dominant category: Reliability
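The graceful-degradation pattern requested above can be sketched as a retry wrapper that falls back to a safe value instead of crashing when an external service stays down. This is a generic illustration, not AutoGPT's actual implementation; all names are hypothetical:

```python
import time

def call_with_fallback(fn, fallback, retries=3, base_delay=0.01):
    """Retry a flaky external call with exponential backoff; degrade to a
    fallback value instead of raising when the service stays unavailable."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                return fallback
            time.sleep(base_delay * (2 ** attempt))

# Simulate a service that fails twice, then recovers:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service unavailable")
    return "ok"

print(call_with_fallback(flaky, fallback="degraded"))  # prints "ok" on the third attempt
```

The same wrapper returns `"degraded"` when every attempt fails, letting a downstream block render a partial result rather than aborting the whole agent run.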
Users are requesting advanced AI-powered development tools including better knowledge routing for code understanding, multi-agent collaboration, hallucination detection, and improved code generation reliability. They also want enhanced CLI tooling for git management, technology detection, and workflow automation to improve overall developer productivity.
Top need: AI Development Assistance and CLI Tooling
Rising need: AI Development Assistance and CLI Tooling
Dominant category: Developer Experience
This cluster groups three distinct user requests for the Tabby AI code completion tool: enhancing completion functionality with branched suggestions for different user actions, keeping documentation current with updated model recommendations, and resolving a connection issue in the Eclipse plugin. Each request addresses a different aspect of the product ecosystem.
Top need: Product Improvements and Bug Fixes
Rising need: —
Dominant category: Developer Experience
Users are requesting fixes for compatibility issues with PyTorch 2.6, new GPU architectures, and ONNX export, along with resolution of audio generation bugs (truncation, incomplete output). Additionally, they want enhanced language support through native Spanish phonetic handling and a plugin integration system for extensibility.
Top need: Compatibility fixes, bug resolution, and extensibility
Rising need: —
Dominant category: Developer Experience
Users want support for additional AI models including Gemini, Claude, GPT, and Grok series, indicating the current model selection is limited. Additionally, they want more control over model updates in Docker deployments, suggesting the current update mechanism lacks flexibility for their deployment needs.
Top need: Expand AI Model Support and Management
Rising need: Expand AI Model Support and Management
Dominant category: Integration
Users want improved configuration support for various AI model providers including OpenRouter, Ollama, Kimi K2, GCP Anthropic, and Azure OpenAI. They also need fixes for issues with environment variables, endpoint configuration, and provider selection loops that prevent smooth integration with these providers.
Top need: Model Provider Configuration and Integration Fixes
Rising need: —
Dominant category: Integration