# Qwen Agent MCP Server

Full IDE capabilities for the Qwen model: it works like native Cursor/Continue, not a dumb chatbot.
## Features

### ✅ File Operations

- Read files (with line-range support)
- Write/create files
- Edit files (replace strings)

### ✅ Terminal Execution

- Run shell commands
- Git operations
- npm/pip/any CLI tool
- Custom working directory & timeout

### ✅ Code Search & Analysis

- Grep/ripgrep search
- Regex support
- File pattern filtering
- Error detection (Python, JS/TS)

### ✅ Python Execution

- Execute code in an isolated scope
- Great for calculations and data processing

### ✅ Directory Operations

- List files/folders
- Recursive tree view

### ✅ Streaming Support

- SSE (Server-Sent Events)
- Real-time responses
- Multi-step agent reasoning
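The SSE streaming mode can be exercised with nothing but the Python standard library. The sketch below assumes the server emits OpenAI-style `data: {...}` chunks on the default port; adjust it to match whatever `server.py` actually sends:

```python
# Sketch: consume the server's SSE stream with only the standard library.
import json
import urllib.request

def parse_sse_line(line: str):
    """Return the text delta from one OpenAI-style SSE line, else None."""
    line = line.strip()
    if not line.startswith("data: ") or line == "data: [DONE]":
        return None
    chunk = json.loads(line[len("data: "):])
    return chunk["choices"][0]["delta"].get("content")

def stream_chat(prompt: str, base_url: str = "http://localhost:8000"):
    payload = json.dumps({
        "model": "zdolny/qwen3-coder58k-tools:latest",
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # HTTPResponse iterates line by line
            delta = parse_sse_line(raw.decode("utf-8"))
            if delta:
                print(delta, end="", flush=True)
```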
## Setup

### Quick Start

```bash
# Run the setup script
cd ~/agent58k
chmod +x setup.sh
./setup.sh
```

### Manual Setup

```bash
# Create the directory
mkdir -p ~/agent58k
cd ~/agent58k

# Create a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Make sure Ollama is running
ollama serve

# Pull the model (if not already downloaded)
ollama pull zdolny/qwen3-coder58k-tools:latest
```
## Usage

### Start the Server

```bash
cd ~/agent58k
source venv/bin/activate
python server.py
```

The server will start on `http://localhost:8000`.
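You can confirm the server is reachable from Python before wiring up an IDE; a minimal sketch, assuming the default port and the `/health` endpoint listed below:

```python
# Minimal reachability check for the agent server (default port 8000).
import urllib.request
import urllib.error

def check_health(base_url: str = "http://localhost:8000") -> bool:
    """Return True if the /health endpoint responds with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("server up" if check_health() else "server not reachable")
```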
### Configure Your IDE

#### For Continue.dev

Edit `~/.continue/config.json`:

```json
{
  "models": [
    {
      "title": "Qwen Agent",
      "provider": "openai",
      "model": "zdolny/qwen3-coder58k-tools:latest",
      "apiBase": "http://localhost:8000/v1",
      "apiKey": "not-needed"
    }
  ]
}
```
#### For Cursor

- Go to Settings → Models
- Add a Custom Model
- Set:
  - API Base: `http://localhost:8000/v1`
  - Model: `zdolny/qwen3-coder58k-tools:latest`
  - API Key: leave blank or use any value
#### For Other OpenAI-Compatible IDEs

Use these settings:

- Base URL: `http://localhost:8000/v1`
- Model: `zdolny/qwen3-coder58k-tools:latest`
- API Key: not required; use any value
## Available Tools

The agent has access to these tools:

- `FileReadTool` - Read file contents with line ranges
- `FileWriteTool` - Create/overwrite files
- `FileEditTool` - Precise string replacement in files
- `TerminalTool` - Execute shell commands
- `GrepSearchTool` - Search the workspace with regex support
- `ListDirectoryTool` - List directory contents
- `PythonExecuteTool` - Execute Python code
- `GetErrorsTool` - Check for syntax/lint errors
## Example Requests

### File Operations

```
Read the main.py file, lines 50-100
Create a new file at src/utils/helper.py with a function to parse JSON
In config.py, replace the old database URL with postgresql://localhost/newdb
```

### Terminal & Git

```
Run git status and show me what files changed
Install the requests library using pip
Run the tests in the tests/ directory
```

### Code Search

```
Find all TODO comments in Python files
Search for the function "calculate_total" in the codebase
```

### Analysis

```
Check if there are any syntax errors in src/app.py
List all files in the project directory
```
## API Endpoints

- `GET /` - Server info and capabilities
- `GET /v1/models` - List available models
- `POST /v1/chat/completions` - Chat endpoint (OpenAI-compatible)
- `GET /health` - Health check
- `GET /docs` - Interactive API documentation (Swagger UI)
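The chat endpoint can also be driven from a script, not just an IDE. A sketch using only the standard library; the model name and port mirror the configuration shown in this README:

```python
# Sketch: call the OpenAI-compatible chat endpoint from a plain script.
import json
import urllib.request

def build_chat_request(prompt: str) -> dict:
    """Assemble an OpenAI-style chat completion payload."""
    return {
        "model": "zdolny/qwen3-coder58k-tools:latest",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def send_chat(prompt: str, base_url: str = "http://localhost:8000") -> str:
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: first choice's message content
    return body["choices"][0]["message"]["content"]
```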
## Configuration

Edit `LLM_CFG` in `server.py` to customize the model backend:

```python
LLM_CFG = {
    "model": "zdolny/qwen3-coder58k-tools:latest",
    "model_server": "http://127.0.0.1:11434/v1",  # Ollama's OpenAI-compatible endpoint
    "api_key": "your-api-key-here",
}
```
## Troubleshooting

### Server won't start

- Check that port 8000 is available: `lsof -i :8000`
- Make sure the virtual environment is activated
- Verify all dependencies are installed: `pip list`

### Ollama connection issues

- Ensure Ollama is running: `ollama serve`
- Test the API: `curl http://localhost:11434/api/tags`
- Check that the model is pulled: `ollama list`

### IDE not connecting

- Verify the server is running: `curl http://localhost:8000/health`
- Check that the IDE configuration matches the base URL
- Look at the server logs for connection attempts

### Agent not using tools

- Check the server logs for tool execution
- Make sure requests are clear and actionable
- Try being more explicit: "Use the file read tool to read config.py"
## Advanced Usage

### Custom Tools

Add your own tools by creating a new `BaseTool` subclass:

```python
import json

class MyCustomTool(BaseTool):
    description = "What this tool does"
    parameters = [{
        'name': 'param1',
        'type': 'string',
        'description': 'Parameter description',
        'required': True,
    }]

    def call(self, params: str, **kwargs) -> str:
        # Tool arguments arrive as a JSON-encoded string
        args = json.loads(params)
        # Your tool logic here
        return "Tool result"

# Register the tool so the agent can use it
TOOLS.append(MyCustomTool())
```
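A quick way to sanity-check a new tool is to call it directly with JSON-encoded arguments, the same way the agent does. The sketch below uses a stand-in `BaseTool` so it runs outside `server.py`; in the real server you would subclass the actual `BaseTool`:

```python
import json

# Stand-in for the server's BaseTool so this example runs on its own;
# inside server.py, subclass the real BaseTool instead.
class BaseTool:
    description = ""
    parameters = []

class EchoTool(BaseTool):
    description = "Echo a message back (toy example)"
    parameters = [{
        'name': 'message',
        'type': 'string',
        'description': 'Text to echo',
        'required': True,
    }]

    def call(self, params: str, **kwargs) -> str:
        args = json.loads(params)
        return args['message']

# Invoke it the way the agent would: params is a JSON string
result = EchoTool().call(json.dumps({'message': 'hello'}))
print(result)  # hello
```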
### Workspace Context

The agent works best when you:

- Provide clear file paths
- Mention specific functions/classes
- State explicitly what you want done
- Give context about your project structure

### Multi-step Workflows

The agent can handle complex workflows, for example:

1. Read the config.py file
2. Search for all files that import it
3. Update those files to use the new config format
4. Run the tests to verify everything works
## Performance Tips

- Use line ranges when reading large files
- Be specific with search queries to reduce noise
- Use file patterns to limit search scope
- Set timeouts for long-running terminal commands

## Security Notes

- The agent can execute ANY shell command
- It can read/write ANY file the server user has access to
- DO NOT expose this server to the internet
- Use it only on localhost/trusted networks
- Consider running it in a container for isolation
## License

MIT - use at your own risk.

## Credits

Built on: