# Qwen Agent MCP Server
Full IDE capabilities for the Qwen model: it works like a native Cursor/Continue agent, not a plain chatbot.
## Features
**File Operations**
- Read files (with line range support)
- Write/create files
- Edit files (replace strings)
**Terminal Execution**
- Run shell commands
- Git operations
- npm/pip/any CLI tool
- Custom working directory & timeout
**Code Search & Analysis**
- Grep/ripgrep search
- Regex support
- File pattern filtering
- Error detection (Python, JS/TS)
**Python Execution**
- Execute code in isolated scope
- Great for calculations and data processing
**Directory Operations**
- List files/folders
- Recursive tree view
**Streaming Support**
- SSE (Server-Sent Events)
- Real-time responses
- Multi-step agent reasoning
## Setup
### Quick Start
```bash
# Run the setup script
cd ~/agent58k
chmod +x setup.sh
./setup.sh
```
### Manual Setup
```bash
# Create directory
mkdir -p ~/agent58k
cd ~/agent58k
# Create virtual environment
python3 -m venv venv
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt
# Make sure Ollama is running
ollama serve
# Pull the model (if not already downloaded)
ollama pull zdolny/qwen3-coder58k-tools:latest
```
## Usage
### Start the Server
```bash
cd ~/agent58k
source venv/bin/activate
python server.py
```
The server starts on `http://localhost:8000`.
### Configure Your IDE
#### For Continue.dev
Edit `~/.continue/config.json`:
```json
{
  "models": [
    {
      "title": "Qwen Agent",
      "provider": "openai",
      "model": "zdolny/qwen3-coder58k-tools:latest",
      "apiBase": "http://localhost:8000/v1",
      "apiKey": "not-needed"
    }
  ]
}
```
#### For Cursor
1. Go to Settings → Models
2. Add Custom Model
3. Set:
- API Base: `http://localhost:8000/v1`
- Model: `zdolny/qwen3-coder58k-tools:latest`
- API Key: (leave blank or any value)
#### For Other OpenAI-Compatible IDEs
Use these settings:
- **Base URL**: `http://localhost:8000/v1`
- **Model**: `zdolny/qwen3-coder58k-tools:latest`
- **API Key**: (not required, use any value)
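Because the server exposes a standard OpenAI-compatible chat endpoint, any client can talk to it by POSTing a regular chat-completion payload. The sketch below only builds and serializes such a payload; sending it assumes the server from this README is running on `http://localhost:8000`.

```python
import json

# Build a standard OpenAI-style chat completion request body.
# The model name matches the one used throughout this README.
payload = {
    "model": "zdolny/qwen3-coder58k-tools:latest",
    "messages": [
        {"role": "user", "content": "Read the main.py file, lines 50-100"}
    ],
    "stream": False,
}

# Serialize for sending to POST /v1/chat/completions
body = json.dumps(payload)
print(body)
```

You can send this body with `curl -d @- http://localhost:8000/v1/chat/completions` or any HTTP client once the server is up.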
## Available Tools
The agent has access to these tools:
1. **FileReadTool** - Read file contents with line ranges
2. **FileWriteTool** - Create/overwrite files
3. **FileEditTool** - Precise string replacement in files
4. **TerminalTool** - Execute shell commands
5. **GrepSearchTool** - Search workspace with regex support
6. **ListDirectoryTool** - List directory contents
7. **PythonExecuteTool** - Execute Python code
8. **GetErrorsTool** - Check for syntax/lint errors
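To make the search tool's behavior concrete, here is a minimal sketch of what a grep-style workspace search does: walk a directory tree, apply a regex per line, and return file/line hits. This is illustrative only; the server's actual `GrepSearchTool` may differ in details such as ripgrep usage or output format.

```python
import re
import tempfile
from pathlib import Path

def grep_search(root: str, pattern: str, file_glob: str = "*.py"):
    """Return (path, line_number, line) tuples matching a regex.

    Illustrative stand-in for a grep-style search tool, not the
    server's real implementation.
    """
    regex = re.compile(pattern)
    hits = []
    for path in Path(root).rglob(file_glob):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if regex.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits

# Demo on a throwaway workspace
with tempfile.TemporaryDirectory() as root:
    (Path(root) / "app.py").write_text("x = 1\n# TODO: fix this\n")
    matches = grep_search(root, r"TODO")
```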
## Example Requests
### File Operations
```
Read the main.py file, lines 50-100
```
```
Create a new file at src/utils/helper.py with function to parse JSON
```
```
In config.py, replace the old database URL with postgresql://localhost/newdb
```
### Terminal & Git
```
Run git status and show me what files changed
```
```
Install the requests library using pip
```
```
Run the tests in the tests/ directory
```
### Code Search
```
Find all TODO comments in Python files
```
```
Search for the function "calculate_total" in the codebase
```
### Analysis
```
Check if there are any syntax errors in src/app.py
```
```
List all files in the project directory
```
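The syntax-error request above can be understood through a minimal sketch: Python's own `ast` module will reject invalid source and report the line. The server's `GetErrorsTool` may additionally run linters, so treat this only as the core idea.

```python
import ast

def check_python_syntax(source: str):
    """Return None if the source parses, else a short error description.

    Minimal sketch of a syntax check; not the server's exact tool.
    """
    try:
        ast.parse(source)
        return None
    except SyntaxError as exc:
        return f"line {exc.lineno}: {exc.msg}"

ok = check_python_syntax("def f():\n    return 1\n")
bad = check_python_syntax("def f(:\n")
```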
## API Endpoints
- `GET /` - Server info and capabilities
- `GET /v1/models` - List available models
- `POST /v1/chat/completions` - Chat endpoint (OpenAI compatible)
- `GET /health` - Health check
- `GET /docs` - Interactive API documentation (Swagger UI)
## Configuration
Edit the `LLM_CFG` in `server.py` to customize:
```python
LLM_CFG = {
    "model": "zdolny/qwen3-coder58k-tools:latest",
    "model_server": "http://127.0.0.1:11434/v1",
    "api_key": "your-api-key-here",
}
```
## Troubleshooting
### Server won't start
- Check if port 8000 is available: `lsof -i :8000`
- Make sure virtual environment is activated
- Verify all dependencies are installed: `pip list`
### Ollama connection issues
- Ensure Ollama is running: `ollama serve`
- Test the API: `curl http://localhost:11434/api/tags`
- Check the model is pulled: `ollama list`
### IDE not connecting
- Verify server is running: `curl http://localhost:8000/health`
- Check IDE configuration matches the base URL
- Look at server logs for connection attempts
### Agent not using tools
- Check server logs for tool execution
- Make sure requests are clear and actionable
- Try being more explicit: "Use the file read tool to read config.py"
## Advanced Usage
### Custom Tools
Add your own tools by creating a new `BaseTool` subclass:
```python
import json

class MyCustomTool(BaseTool):
    # `name` must be unique among registered tools
    name = 'my_custom_tool'
    description = "What this tool does"
    parameters = [{
        'name': 'param1',
        'type': 'string',
        'description': 'Parameter description',
        'required': True
    }]

    def call(self, params: str, **kwargs) -> str:
        args = json.loads(params)
        # Your tool logic here
        return "Tool result"

# Add to TOOLS list
TOOLS.append(MyCustomTool())
```
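The tool contract is worth seeing end to end: `call` receives its arguments as a JSON string and returns a string. The sketch below uses a stub base class purely so it runs standalone; in the server you would subclass the real `BaseTool` as shown above, and `EchoTool` is a hypothetical example, not a shipped tool.

```python
import json

class BaseTool:  # stand-in for the real base class, for illustration only
    pass

class EchoTool(BaseTool):
    description = "Echo a message back (hypothetical example tool)"
    parameters = [{
        'name': 'message',
        'type': 'string',
        'description': 'Text to echo',
        'required': True
    }]

    def call(self, params: str, **kwargs) -> str:
        # Arguments arrive as a JSON string; the result is a plain string.
        args = json.loads(params)
        return f"echo: {args['message']}"

result = EchoTool().call(json.dumps({"message": "hello"}))
```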
### Workspace Context
The agent works best when you:
- Provide clear file paths
- Mention specific functions or classes
- State explicitly what you want done
- Give context about your project structure
### Multi-step Workflows
The agent can handle complex workflows:
```
1. Read the config.py file
2. Search for all files that import it
3. Update those files to use the new config format
4. Run the tests to verify everything works
```
## Performance Tips
1. **Use line ranges** when reading large files
2. **Be specific** with search queries to reduce noise
3. **Use file patterns** to limit search scope
4. **Set timeouts** for long-running terminal commands
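Tip 4 matters because a hung command would otherwise stall the whole agent turn. A minimal sketch of the pattern, using Python's `subprocess` timeout (the command and limit here are placeholders, not the server's defaults):

```python
import subprocess

# Run a deliberately slow command with a short timeout: the process is
# killed and TimeoutExpired is raised instead of blocking indefinitely.
try:
    subprocess.run(["sleep", "5"], timeout=0.2, capture_output=True)
    timed_out = False
except subprocess.TimeoutExpired:
    timed_out = True
```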
## Security Notes
- The agent can execute **ANY** shell command
- It can read/write **ANY** file the server user has access to
- **DO NOT** expose this server to the internet
- **Use only** on localhost/trusted networks
- Consider running in a container for isolation
## License
MIT - Use at your own risk
## Credits
Built on:
- [Qwen Agent](https://github.com/QwenLM/Qwen-Agent)
- [FastAPI](https://fastapi.tiangolo.com/)
- [Ollama](https://ollama.ai/)