agent58k/QUICKSTART.md (new file, 122 lines)
@@ -0,0 +1,122 @@
# Quick Start Guide

## Installation (30 seconds)

```bash
cd ~/agent58k
chmod +x setup.sh
./setup.sh
```

## Start Server

```bash
cd ~/agent58k
source venv/bin/activate
python server.py
```

## Configure IDE

### Continue.dev

Copy the config:

```bash
cp continue-config.json ~/.continue/config.json
```

### Cursor

Settings → Models → Add Custom:

- Base URL: `http://localhost:8000/v1`
- Model: `zdolny/qwen3-coder58k-tools:latest`

## Test It Works

```bash
chmod +x test_server.sh
./test_server.sh
```
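Prefer to poke the API by hand? The server speaks the OpenAI chat-completions format, so any HTTP client works. A minimal request body (a sketch; POST it to `http://localhost:8000/v1/chat/completions`):

```python
import json

# Minimal OpenAI-style request body; "stream": True switches the server to SSE.
payload = {
    "model": "zdolny/qwen3-coder58k-tools:latest",
    "messages": [{"role": "user", "content": "List files in the current directory"}],
    "stream": False,
}
print(json.dumps(payload, indent=2))
```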
## Example Prompts

**Instead of:**

❌ "Can you help me read the config file?"

**Use:**

✅ "Read config.py and show me the database settings"
✅ "Search the codebase for all files importing requests"
✅ "Create a new file utils/parser.py with a JSON parser"
✅ "Run the tests in the tests/ directory"
✅ "Find all TODO comments in Python files"

## What Makes This Different

Your original code had:

- ❌ Only 2 basic tools (calculate, python)
- ❌ Broken tool extraction (regex parsing)
- ❌ No streaming support
- ❌ No file operations
- ❌ No terminal access
- ❌ Single-step execution only

This version has:

- ✅ 8 comprehensive tools (file ops, terminal, search, etc.)
- ✅ Proper Qwen Agent tool integration
- ✅ Full streaming support
- ✅ Multi-step agent reasoning
- ✅ Works like native Cursor/Continue
- ✅ Production-ready error handling

## Auto-Start on Boot (Optional)

```bash
# Edit the service file and replace %YOUR_USERNAME% and %HOME%
sudo cp qwen-agent.service /etc/systemd/system/
sudo systemctl enable qwen-agent
sudo systemctl start qwen-agent
```

## Troubleshooting

**"ModuleNotFoundError: No module named 'qwen_agent'"**
→ Activate the venv: `source venv/bin/activate`

**"Connection refused to localhost:8000"**
→ Start the server: `python server.py`

**"Ollama API error"**
→ Start Ollama: `ollama serve`
→ Pull the model: `ollama pull zdolny/qwen3-coder58k-tools:latest`

**Agent not using tools**
→ Be explicit: "Use the file read tool to..."
→ Check the server logs for errors

## What Was Fixed

1. **Tool System**: Implemented proper `BaseTool` classes that Qwen Agent understands
2. **Streaming**: Added SSE support with proper chunk formatting
3. **Response Handling**: Properly extracts content from agent responses
4. **Multi-step**: The agent can now chain multiple tool calls
5. **Error Handling**: Comprehensive try/except with detailed error messages
6. **IDE Integration**: OpenAI-compatible API that works with Continue/Cursor
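The streaming fix above boils down to emitting OpenAI-style SSE chunks. A sketch of one chunk with illustrative values (the exact fields the server emits may differ):

```python
import json

# One SSE event: a "data: " prefix, a JSON delta, and a blank-line terminator.
delta = {
    "id": "chatcmpl-0",  # illustrative id
    "object": "chat.completion.chunk",
    "choices": [{"index": 0, "delta": {"content": "Hello"}}],
}
chunk = f"data: {json.dumps(delta)}\n\n"
print(chunk, end="")
```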
## Files Created

- `server.py` - Main server (400+ lines with 8 tools)
- `requirements.txt` - Python dependencies
- `setup.sh` - One-command installation
- `test_server.sh` - Verifies everything works
- `continue-config.json` - IDE configuration
- `qwen-agent.service` - Systemd service
- `README.md` - Full documentation
- `QUICKSTART.md` - This file

## Next Steps

1. Run setup: `./setup.sh`
2. Start the server: `python server.py`
3. Configure your IDE (copy `continue-config.json`)
4. Test with: "List files in the current directory"
5. Try complex tasks: "Read all Python files, find bugs, fix them"

Enjoy your fully capable AI coding assistant! 🚀
agent58k/README.md (new file, 294 lines)
@@ -0,0 +1,294 @@
# Qwen Agent MCP Server

Full IDE capabilities for the Qwen model - works like native Cursor/Continue, not a dumb chatbot.

## Features

✅ **File Operations**
- Read files (with line range support)
- Write/create files
- Edit files (replace strings)

✅ **Terminal Execution**
- Run shell commands
- Git operations
- npm/pip/any CLI tool
- Custom working directory & timeout

✅ **Code Search & Analysis**
- Grep/ripgrep search
- Regex support
- File pattern filtering
- Error detection (Python, JS/TS)

✅ **Python Execution**
- Execute code in an isolated scope
- Great for calculations and data processing

✅ **Directory Operations**
- List files/folders
- Recursive tree view

✅ **Streaming Support**
- SSE (Server-Sent Events)
- Real-time responses
- Multi-step agent reasoning

## Setup

### Quick Start

```bash
# Run the setup script
cd ~/agent58k
chmod +x setup.sh
./setup.sh
```

### Manual Setup

```bash
# Create directory
mkdir -p ~/agent58k
cd ~/agent58k

# Create virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Make sure Ollama is running
ollama serve

# Pull the model (if not already downloaded)
ollama pull zdolny/qwen3-coder58k-tools:latest
```

## Usage

### Start the Server

```bash
cd ~/agent58k
source venv/bin/activate
python server.py
```

The server starts on `http://localhost:8000`.

### Configure Your IDE

#### For Continue.dev

Edit `~/.continue/config.json`:

```json
{
  "models": [
    {
      "title": "Qwen Agent",
      "provider": "openai",
      "model": "zdolny/qwen3-coder58k-tools:latest",
      "apiBase": "http://localhost:8000/v1",
      "apiKey": "not-needed"
    }
  ]
}
```

#### For Cursor

1. Go to Settings → Models
2. Add Custom Model
3. Set:
   - API Base: `http://localhost:8000/v1`
   - Model: `zdolny/qwen3-coder58k-tools:latest`
   - API Key: (leave blank or any value)

#### For Other OpenAI-Compatible IDEs

Use these settings:
- **Base URL**: `http://localhost:8000/v1`
- **Model**: `zdolny/qwen3-coder58k-tools:latest`
- **API Key**: (not required, use any value)

## Available Tools

The agent has access to these tools:

1. **FileReadTool** - Read file contents with line ranges
2. **FileWriteTool** - Create/overwrite files
3. **FileEditTool** - Precise string replacement in files
4. **TerminalTool** - Execute shell commands
5. **GrepSearchTool** - Search the workspace with regex support
6. **ListDirectoryTool** - List directory contents
7. **PythonExecuteTool** - Execute Python code
8. **GetErrorsTool** - Check for syntax/lint errors
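All eight tools follow the same calling convention (see `server.py`): `call()` receives its arguments as a JSON string and returns a plain string. A self-contained sketch of that round trip, using a hypothetical `file_read_call` stand-in rather than the real class:

```python
import json
import os
import tempfile

# Stand-in for FileReadTool.call: JSON-string params in, string result out.
def file_read_call(params: str) -> str:
    args = json.loads(params)
    with open(args["file_path"], "r", encoding="utf-8") as f:
        return f.read()

# Round trip: the agent serializes arguments, the tool parses and answers.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("hello")
result = file_read_call(json.dumps({"file_path": tmp.name}))
os.unlink(tmp.name)
print(result)  # -> hello
```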
## Example Requests

### File Operations

```
Read the main.py file, lines 50-100
```

```
Create a new file at src/utils/helper.py with a function to parse JSON
```

```
In config.py, replace the old database URL with postgresql://localhost/newdb
```

### Terminal & Git

```
Run git status and show me what files changed
```

```
Install the requests library using pip
```

```
Run the tests in the tests/ directory
```

### Code Search

```
Find all TODO comments in Python files
```

```
Search for the function "calculate_total" in the codebase
```

### Analysis

```
Check if there are any syntax errors in src/app.py
```

```
List all files in the project directory
```

## API Endpoints

- `GET /` - Server info and capabilities
- `GET /v1/models` - List available models
- `POST /v1/chat/completions` - Chat endpoint (OpenAI-compatible)
- `GET /health` - Health check
- `GET /docs` - Interactive API documentation (Swagger UI)
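When scripting against `/v1/chat/completions`, the non-streaming response follows the usual OpenAI envelope (mirroring the `ChatResponse` model in `server.py`). A sketch with illustrative values, showing where the reply text lives:

```python
import json

# Illustrative response body; real ids, timestamps, and usage counts will differ.
body = json.dumps({
    "id": "chatcmpl-0",
    "object": "chat.completion",
    "created": 1700000000,
    "model": "zdolny/qwen3-coder58k-tools:latest",
    "choices": [{"index": 0,
                 "message": {"role": "assistant", "content": "Done."},
                 "finish_reason": "stop"}],
    "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
})
reply = json.loads(body)["choices"][0]["message"]["content"]
print(reply)  # -> Done.
```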
## Configuration

Edit `LLM_CFG` in `server.py` to customize:

```python
LLM_CFG = {
    "model": "zdolny/qwen3-coder58k-tools:latest",
    "model_server": "http://127.0.0.1:11434/v1",
    "api_key": "your-api-key-here",
}
```

## Troubleshooting

### Server won't start

- Check if port 8000 is available: `lsof -i :8000`
- Make sure the virtual environment is activated
- Verify all dependencies are installed: `pip list`

### Ollama connection issues

- Ensure Ollama is running: `ollama serve`
- Test the API: `curl http://localhost:11434/api/tags`
- Check the model is pulled: `ollama list`

### IDE not connecting

- Verify the server is running: `curl http://localhost:8000/health`
- Check the IDE configuration matches the base URL
- Look at the server logs for connection attempts

### Agent not using tools

- Check the server logs for tool execution
- Make sure requests are clear and actionable
- Try being more explicit: "Use the file read tool to read config.py"

## Advanced Usage

### Custom Tools

Add your own tools by creating a new `BaseTool` subclass:

```python
class MyCustomTool(BaseTool):
    name = 'my_custom_tool'  # unique tool name
    description = "What this tool does"
    parameters = [{
        'name': 'param1',
        'type': 'string',
        'description': 'Parameter description',
        'required': True
    }]

    def call(self, params: str, **kwargs) -> str:
        args = json.loads(params)
        # Your tool logic here
        return "Tool result"

# Add to the TOOLS list
TOOLS.append(MyCustomTool())
```

### Workspace Context

The agent works best when you:
- Provide clear file paths
- Mention specific functions/classes
- Are explicit about what you want done
- Give context about your project structure

### Multi-step Workflows

The agent can handle complex workflows:

```
1. Read the config.py file
2. Search for all files that import it
3. Update those files to use the new config format
4. Run the tests to verify everything works
```

## Performance Tips

1. **Use line ranges** when reading large files
2. **Be specific** with search queries to reduce noise
3. **Use file patterns** to limit search scope
4. **Set timeouts** for long-running terminal commands

## Security Notes

- The agent can execute **ANY** shell command
- It can read/write **ANY** file the server user has access to
- **DO NOT** expose this server to the internet
- **Use only** on localhost/trusted networks
- Consider running it in a container for isolation

## License

MIT - Use at your own risk

## Credits

Built on:
- [Qwen Agent](https://github.com/QwenLM/Qwen-Agent)
- [FastAPI](https://fastapi.tiangolo.com/)
- [Ollama](https://ollama.ai/)
agent58k/continue-config.json (new file, 51 lines)
@@ -0,0 +1,51 @@
{
  "models": [
    {
      "title": "Qwen Agent (Local)",
      "provider": "openai",
      "model": "zdolny/qwen3-coder58k-tools:latest",
      "apiBase": "http://localhost:8000/v1",
      "apiKey": "not-needed",
      "capabilities": {
        "edit": true,
        "apply": true,
        "chat": true
      }
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen Agent",
    "provider": "openai",
    "model": "zdolny/qwen3-coder58k-tools:latest",
    "apiBase": "http://localhost:8000/v1"
  },
  "embeddingsProvider": {
    "provider": "openai",
    "model": "nomic-embed-text",
    "apiBase": "http://localhost:11434/v1"
  },
  "customCommands": [
    {
      "name": "explain",
      "description": "Explain this code",
      "prompt": "Explain this code in detail:\n\n{{{ input }}}"
    },
    {
      "name": "fix",
      "description": "Fix any errors in this code",
      "prompt": "Find and fix any errors in this code:\n\n{{{ input }}}"
    },
    {
      "name": "optimize",
      "description": "Optimize this code",
      "prompt": "Optimize this code for better performance and readability:\n\n{{{ input }}}"
    },
    {
      "name": "test",
      "description": "Generate tests",
      "prompt": "Generate comprehensive unit tests for this code:\n\n{{{ input }}}"
    }
  ],
  "allowAnonymousTelemetry": false,
  "systemMessage": "You are an expert AI coding assistant with full IDE capabilities. You have access to file operations, terminal commands, code search, and more. When asked to perform tasks:\n\n1. Use tools proactively - don't just suggest, actually do it\n2. Read files before editing them to understand context\n3. Make precise edits using the file edit tool\n4. Run commands to verify changes work\n5. Search the codebase when you need to understand dependencies\n6. Execute code to test solutions\n\nBe direct, efficient, and take action. You're not just a chatbot - you're a full development assistant."
}
agent58k/qwen-agent.service (new file, 19 lines)
@@ -0,0 +1,19 @@
[Unit]
Description=Qwen Agent MCP Server
After=network.target

[Service]
Type=simple
User=%YOUR_USERNAME%
WorkingDirectory=%HOME%/agent58k
Environment="PATH=%HOME%/agent58k/venv/bin:/usr/local/bin:/usr/bin:/bin"
ExecStart=%HOME%/agent58k/venv/bin/python server.py
Restart=always
RestartSec=10

# Logging
StandardOutput=append:/var/log/qwen-agent.log
StandardError=append:/var/log/qwen-agent-error.log

[Install]
WantedBy=multi-user.target
agent58k/requirements.txt (new file, 6 lines)
@@ -0,0 +1,6 @@
fastapi>=0.104.0
uvicorn[standard]>=0.24.0
pydantic>=2.0.0
qwen-agent>=0.0.6
httpx>=0.25.0
python-multipart>=0.0.6
agent58k/server.py (new file, 597 lines)
@@ -0,0 +1,597 @@
#!/usr/bin/env python3
"""
Qwen Agent MCP Server - Full IDE capabilities
Provides file operations, terminal, code analysis, search, and more
"""

from fastapi import FastAPI, HTTPException
from fastapi.responses import StreamingResponse
from pydantic import BaseModel, Field
from typing import List, Optional, Literal, Dict, Any, AsyncGenerator
import time, uuid, json, subprocess, traceback, os, pathlib, re, asyncio

from qwen_agent.agents import Assistant
from qwen_agent.tools.base import BaseTool, register_tool

# =============================================================================
# LLM Configuration
# =============================================================================
LLM_CFG = {
    "model": "zdolny/qwen3-coder58k-tools:latest",
    "model_server": "http://127.0.0.1:11434/v1",
    "api_key": "9cf447669f9b080bcd4ec44ed93488b1ba32319b4a34288a52d549cd2bfddec7",
}

# =============================================================================
# Tool Implementations - Full IDE Capabilities
# =============================================================================

class FileReadTool(BaseTool):
    """Read contents of a file with line range support"""
    name = 'file_read'  # unique tool name (qwen-agent tools need one)
    description = "Read file contents. Supports line ranges for large files."
    parameters = [{
        'name': 'file_path',
        'type': 'string',
        'description': 'Absolute path to the file',
        'required': True
    }, {
        'name': 'start_line',
        'type': 'integer',
        'description': 'Starting line number (1-indexed, optional)',
        'required': False
    }, {
        'name': 'end_line',
        'type': 'integer',
        'description': 'Ending line number (1-indexed, optional)',
        'required': False
    }]

    def call(self, params: str, **kwargs) -> str:
        try:
            args = json.loads(params) if isinstance(params, str) else params
            file_path = args['file_path']
            start_line = args.get('start_line')
            end_line = args.get('end_line')

            with open(file_path, 'r', encoding='utf-8') as f:
                if start_line is not None or end_line is not None:
                    lines = f.readlines()
                    start = (start_line - 1) if start_line else 0
                    end = end_line if end_line else len(lines)
                    content = ''.join(lines[start:end])
                else:
                    content = f.read()

            return f"File: {file_path}\n{'='*60}\n{content}"
        except Exception as e:
            return f"Error reading file: {str(e)}\n{traceback.format_exc()}"
class FileWriteTool(BaseTool):
    """Write or create a file with content"""
    name = 'file_write'  # unique tool name
    description = "Create or overwrite a file with new content"
    parameters = [{
        'name': 'file_path',
        'type': 'string',
        'description': 'Absolute path to the file',
        'required': True
    }, {
        'name': 'content',
        'type': 'string',
        'description': 'Content to write to the file',
        'required': True
    }]

    def call(self, params: str, **kwargs) -> str:
        try:
            args = json.loads(params) if isinstance(params, str) else params
            file_path = args['file_path']
            content = args['content']

            # Create the parent directory if it doesn't exist
            # (guard against a bare filename, where dirname is empty)
            parent = os.path.dirname(file_path)
            if parent:
                os.makedirs(parent, exist_ok=True)

            with open(file_path, 'w', encoding='utf-8') as f:
                f.write(content)

            return f"Successfully wrote {len(content)} characters to {file_path}"
        except Exception as e:
            return f"Error writing file: {str(e)}\n{traceback.format_exc()}"


class FileEditTool(BaseTool):
    """Replace string in file - for precise edits"""
    name = 'file_edit'  # unique tool name
    description = "Replace exact string match in a file with new content"
    parameters = [{
        'name': 'file_path',
        'type': 'string',
        'description': 'Absolute path to the file',
        'required': True
    }, {
        'name': 'old_string',
        'type': 'string',
        'description': 'Exact string to find and replace (include context)',
        'required': True
    }, {
        'name': 'new_string',
        'type': 'string',
        'description': 'New string to replace with',
        'required': True
    }]

    def call(self, params: str, **kwargs) -> str:
        try:
            args = json.loads(params) if isinstance(params, str) else params
            file_path = args['file_path']
            old_string = args['old_string']
            new_string = args['new_string']

            with open(file_path, 'r', encoding='utf-8') as f:
                content = f.read()

            if old_string not in content:
                return f"Error: old_string not found in {file_path}"

            count = content.count(old_string)
            if count > 1:
                return f"Error: old_string appears {count} times. Be more specific."

            new_content = content.replace(old_string, new_string, 1)

            with open(file_path, 'w', encoding='utf-8') as f:
                f.write(new_content)

            return f"Successfully replaced string in {file_path}"
        except Exception as e:
            return f"Error editing file: {str(e)}\n{traceback.format_exc()}"
class TerminalTool(BaseTool):
    """Execute shell commands"""
    name = 'terminal'  # unique tool name
    description = "Run shell commands in terminal. Use for git, npm, python scripts, etc."
    parameters = [{
        'name': 'command',
        'type': 'string',
        'description': 'Shell command to execute',
        'required': True
    }, {
        'name': 'working_dir',
        'type': 'string',
        'description': 'Working directory (optional)',
        'required': False
    }, {
        'name': 'timeout',
        'type': 'integer',
        'description': 'Timeout in seconds (default 30)',
        'required': False
    }]

    def call(self, params: str, **kwargs) -> str:
        try:
            args = json.loads(params) if isinstance(params, str) else params
            command = args['command']
            working_dir = args.get('working_dir', os.getcwd())
            timeout = args.get('timeout', 30)

            result = subprocess.run(
                command,
                shell=True,
                cwd=working_dir,
                capture_output=True,
                text=True,
                timeout=timeout
            )

            output = f"Command: {command}\n"
            output += f"Exit Code: {result.returncode}\n"
            output += f"{'='*60}\n"
            if result.stdout:
                output += f"STDOUT:\n{result.stdout}\n"
            if result.stderr:
                output += f"STDERR:\n{result.stderr}\n"

            return output
        except subprocess.TimeoutExpired:
            return f"Error: Command timed out after {timeout}s"
        except Exception as e:
            return f"Error executing command: {str(e)}\n{traceback.format_exc()}"
class GrepSearchTool(BaseTool):
    """Search for text/regex in workspace files"""
    name = 'grep_search'  # unique tool name
    description = "Search for text or regex patterns in files"
    parameters = [{
        'name': 'query',
        'type': 'string',
        'description': 'Text or regex pattern to search for',
        'required': True
    }, {
        'name': 'path',
        'type': 'string',
        'description': 'Directory to search in (default: current)',
        'required': False
    }, {
        'name': 'is_regex',
        'type': 'boolean',
        'description': 'Whether query is regex (default: false)',
        'required': False
    }, {
        'name': 'file_pattern',
        'type': 'string',
        'description': 'File pattern to match (e.g., "*.py")',
        'required': False
    }]

    def call(self, params: str, **kwargs) -> str:
        try:
            args = json.loads(params) if isinstance(params, str) else params
            query = args['query']
            search_path = args.get('path', os.getcwd())
            is_regex = args.get('is_regex', False)
            file_pattern = args.get('file_pattern', '*')

            # Use ripgrep if available, otherwise fall back to grep
            if subprocess.run(['which', 'rg'], capture_output=True).returncode == 0:
                cmd = ['rg', '--line-number', '--color', 'never']
                if not is_regex:
                    cmd.append('--fixed-strings')
                if file_pattern != '*':
                    cmd.extend(['--glob', file_pattern])
                cmd.extend([query, search_path])
            else:
                cmd = ['grep', '-rn']
                if not is_regex:
                    cmd.append('-F')
                if file_pattern != '*':
                    cmd.extend(['--include', file_pattern])
                cmd.extend([query, search_path])

            result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)

            if result.returncode == 0:
                # Drop empty lines so the count is accurate; limit to 100 results
                lines = [line for line in result.stdout.splitlines() if line][:100]
                return f"Found {len(lines)} matches:\n{chr(10).join(lines)}"
            elif result.returncode == 1:
                return "No matches found"
            else:
                return f"Search error: {result.stderr}"
        except Exception as e:
            return f"Error searching: {str(e)}"
class ListDirectoryTool(BaseTool):
    """List files and directories"""
    name = 'list_directory'  # unique tool name
    description = "List contents of a directory"
    parameters = [{
        'name': 'path',
        'type': 'string',
        'description': 'Directory path to list',
        'required': True
    }, {
        'name': 'recursive',
        'type': 'boolean',
        'description': 'List recursively (default: false)',
        'required': False
    }]

    def call(self, params: str, **kwargs) -> str:
        try:
            args = json.loads(params) if isinstance(params, str) else params
            path = args['path']
            recursive = args.get('recursive', False)

            if recursive:
                result = []
                for root, dirs, files in os.walk(path):
                    level = root.replace(path, '').count(os.sep)
                    indent = ' ' * 2 * level
                    result.append(f'{indent}{os.path.basename(root)}/')
                    subindent = ' ' * 2 * (level + 1)
                    for file in files:
                        result.append(f'{subindent}{file}')
                return '\n'.join(result[:200])  # Limit output
            else:
                items = os.listdir(path)
                result = []
                for item in sorted(items):
                    full_path = os.path.join(path, item)
                    if os.path.isdir(full_path):
                        result.append(f"{item}/")
                    else:
                        size = os.path.getsize(full_path)
                        result.append(f"{item} ({size} bytes)")
                return '\n'.join(result)
        except Exception as e:
            return f"Error listing directory: {str(e)}"
class PythonExecuteTool(BaseTool):
    """Execute Python code in isolated scope"""
    name = 'python_execute'  # unique tool name
    description = "Execute Python code and return result. Use for calculations, data processing, etc."
    parameters = [{
        'name': 'code',
        'type': 'string',
        'description': 'Python code to execute',
        'required': True
    }]

    def call(self, params: str, **kwargs) -> str:
        try:
            args = json.loads(params) if isinstance(params, str) else params
            code = args['code']

            # Create isolated scope
            scope = {'__builtins__': __builtins__}

            # Execute code
            exec(code, scope)

            # Try to get result
            if 'result' in scope:
                return str(scope['result'])
            elif scope:
                # Return last non-builtin value
                values = [v for k, v in scope.items() if not k.startswith('_')]
                if values:
                    return str(values[-1])

            return "Code executed successfully (no return value)"
        except Exception as e:
            return f"Python execution error:\n{traceback.format_exc()}"
class GetErrorsTool(BaseTool):
    """Get syntax/lint errors in workspace"""
    name = 'get_errors'  # unique tool name
    description = "Check for errors in code files (uses common linters)"
    parameters = [{
        'name': 'file_path',
        'type': 'string',
        'description': 'File to check for errors (optional, checks all if not provided)',
        'required': False
    }]

    def call(self, params: str, **kwargs) -> str:
        try:
            args = json.loads(params) if isinstance(params, str) else params
            file_path = args.get('file_path')

            if file_path:
                # Check specific file
                ext = os.path.splitext(file_path)[1]
                if ext == '.py':
                    result = subprocess.run(['python', '-m', 'py_compile', file_path],
                                            capture_output=True, text=True)
                    if result.returncode != 0:
                        return f"Python errors in {file_path}:\n{result.stderr}"
                    return f"No syntax errors in {file_path}"
                elif ext in ['.js', '.ts', '.jsx', '.tsx']:
                    # Try eslint
                    result = subprocess.run(['npx', 'eslint', file_path],
                                            capture_output=True, text=True)
                    return result.stdout or "No errors found"
                else:
                    return f"No linter available for {ext} files"
            else:
                return "Specify a file_path to check for errors"
        except Exception as e:
            return f"Error checking for errors: {str(e)}"
# =============================================================================
# Register All Tools
# =============================================================================
TOOLS = [
    FileReadTool(),
    FileWriteTool(),
    FileEditTool(),
    TerminalTool(),
    GrepSearchTool(),
    ListDirectoryTool(),
    PythonExecuteTool(),
    GetErrorsTool(),
]

# Create agent with all tools
agent = Assistant(llm=LLM_CFG, function_list=TOOLS)


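Every tool registered above decodes its arguments the same way: `call` may receive either a raw JSON string or an already-parsed dict, so each tool begins with the same normalization line. A standalone sketch of that convention (`decode_params` is a hypothetical name for illustration):

```python
import json
from typing import Any, Dict, Union


def decode_params(params: Union[str, Dict[str, Any]]) -> Dict[str, Any]:
    """Normalize tool-call params to a dict, mirroring the tools' shared pattern."""
    return json.loads(params) if isinstance(params, str) else params


# Both call styles yield the same dict:
print(decode_params('{"file_path": "config.py"}'))  # → {'file_path': 'config.py'}
print(decode_params({"file_path": "config.py"}))    # → {'file_path': 'config.py'}
```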
# =============================================================================
# FastAPI Server
# =============================================================================
app = FastAPI(title="Qwen Agent MCP Server", version="1.0.0")


class ChatMessage(BaseModel):
    role: Literal["system", "user", "assistant", "function"]
    content: str
    name: Optional[str] = None


class ChatRequest(BaseModel):
    model: str
    messages: List[ChatMessage]
    stream: Optional[bool] = False
    temperature: Optional[float] = 0.7
    max_tokens: Optional[int] = None


class ChatResponse(BaseModel):
    id: str
    object: str = "chat.completion"
    created: int
    model: str
    choices: List[Dict[str, Any]]
    usage: Dict[str, int]


@app.get("/")
async def root():
    return {
        "name": "Qwen Agent MCP Server",
        "version": "1.0.0",
        "capabilities": {
            "file_operations": ["read", "write", "edit"],
            "terminal": True,
            "search": True,
            "code_execution": True,
            "streaming": True
        },
        "tools": [tool.__class__.__name__ for tool in TOOLS]
    }


@app.get("/v1/models")
async def list_models():
    return {
        "object": "list",
        "data": [{
            "id": LLM_CFG["model"],
            "object": "model",
            "created": int(time.time()),
            "owned_by": "qwen-agent"
        }]
    }


async def stream_agent_response(messages: List[Dict], session_id: str) -> AsyncGenerator[str, None]:
    """Stream agent responses as SSE"""

    def make_chunk(delta: Dict[str, Any], finish_reason: Optional[str]) -> Dict[str, Any]:
        """Build an OpenAI-style chat.completion.chunk payload."""
        return {
            "id": f"agent-{session_id}",
            "object": "chat.completion.chunk",
            "created": int(time.time()),
            "model": LLM_CFG["model"],
            "choices": [{
                "index": 0,
                "delta": delta,
                "finish_reason": finish_reason
            }]
        }

    try:
        accumulated_content = []

        for response in agent.run(messages=messages, session_id=session_id):
            # Handle the different response types qwen_agent can yield
            if isinstance(response, list):
                for item in response:
                    if isinstance(item, dict) and 'content' in item:
                        content = item['content']
                        accumulated_content.append(content)
                        # Stream the delta
                        chunk = make_chunk({"content": content}, None)
                        yield f"data: {json.dumps(chunk)}\n\n"
            elif isinstance(response, dict) and 'content' in response:
                content = response['content']
                accumulated_content.append(content)
                chunk = make_chunk({"content": content}, None)
                yield f"data: {json.dumps(chunk)}\n\n"

        # Send the final chunk
        yield f"data: {json.dumps(make_chunk({}, 'stop'))}\n\n"
        yield "data: [DONE]\n\n"

    except Exception as e:
        print(f"Error in agent stream: {str(e)}\n{traceback.format_exc()}")
        error_chunk = make_chunk({"content": f"\n\nError: {str(e)}"}, "error")
        yield f"data: {json.dumps(error_chunk)}\n\n"
        yield "data: [DONE]\n\n"


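On the wire, `stream_agent_response` emits `data: {json}\n\n` frames terminated by a `data: [DONE]` sentinel. A hypothetical client-side sketch that reassembles the content deltas from such a stream (the `parse_sse_content` helper is illustrative, not part of the server):

```python
import json
from typing import List


def parse_sse_content(stream: str) -> str:
    """Join the delta contents from an OpenAI-style SSE stream."""
    parts: List[str] = []
    for line in stream.splitlines():
        if not line.startswith("data: "):
            continue  # skip blank separator lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)


sample = (
    'data: {"choices": [{"index": 0, "delta": {"content": "Hel"}, "finish_reason": null}]}\n\n'
    'data: {"choices": [{"index": 0, "delta": {"content": "lo"}, "finish_reason": null}]}\n\n'
    'data: {"choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]}\n\n'
    'data: [DONE]\n\n'
)
print(parse_sse_content(sample))  # → Hello
```

A real client would read the HTTP body incrementally rather than from a string, but the framing logic is the same.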
@app.post("/v1/chat/completions")
async def chat_completions(req: ChatRequest):
    """Main chat endpoint with streaming support"""
    try:
        session_id = str(uuid.uuid4())
        messages = [m.model_dump() for m in req.messages]

        # Handle streaming
        if req.stream:
            return StreamingResponse(
                stream_agent_response(messages, session_id),
                media_type="text/event-stream"
            )

        # Non-streaming response
        final_content = []

        for response in agent.run(messages=messages, session_id=session_id):
            if isinstance(response, list):
                for item in response:
                    if isinstance(item, dict) and 'content' in item:
                        final_content.append(item['content'])
                    elif isinstance(item, str):
                        final_content.append(item)
            elif isinstance(response, dict) and 'content' in response:
                final_content.append(response['content'])
            elif isinstance(response, str):
                final_content.append(response)

        # Combine all content
        combined_content = '\n'.join(str(c) for c in final_content if c)

        return {
            "id": f"agent-{session_id}",
            "object": "chat.completion",
            "created": int(time.time()),
            "model": req.model,
            "choices": [{
                "index": 0,
                "message": {
                    "role": "assistant",
                    "content": combined_content or "No response generated"
                },
                "finish_reason": "stop"
            }],
            "usage": {
                "prompt_tokens": 0,
                "completion_tokens": 0,
                "total_tokens": 0
            }
        }

    except Exception as e:
        print(f"Error in chat completion: {str(e)}")
        print(traceback.format_exc())
        raise HTTPException(status_code=500, detail=str(e))


@app.get("/health")
async def health():
    return {"status": "healthy", "agent": "ready"}


if __name__ == "__main__":
    import uvicorn
    print("🚀 Starting Qwen Agent MCP Server...")
    print(f"📦 Model: {LLM_CFG['model']}")
    print(f"🔧 Tools: {len(TOOLS)}")
    print("🌐 Server: http://localhost:8000")
    print("📚 Docs: http://localhost:8000/docs")
    uvicorn.run(app, host="0.0.0.0", port=8000)
63
agent58k/setup.sh
Normal file
@@ -0,0 +1,63 @@
#!/bin/bash
# Setup script for Qwen Agent MCP Server

set -e

echo "🚀 Setting up Qwen Agent MCP Server..."

# Create directory if it doesn't exist
mkdir -p ~/agent58k
cd ~/agent58k

# Check Python version
if ! command -v python3 &> /dev/null; then
    echo "❌ Python 3 is required but not installed"
    exit 1
fi

PYTHON_VERSION=$(python3 --version | cut -d' ' -f2 | cut -d'.' -f1,2)
echo "✅ Found Python $PYTHON_VERSION"

# Create virtual environment
if [ ! -d "venv" ]; then
    echo "📦 Creating virtual environment..."
    python3 -m venv venv
fi

# Activate virtual environment
source venv/bin/activate

# Upgrade pip
echo "⬆️ Upgrading pip..."
pip install --upgrade pip

# Install requirements
echo "📥 Installing dependencies..."
pip install -r requirements.txt

# Check if Ollama is running
echo "🔍 Checking Ollama..."
if ! curl -s http://localhost:11434/api/tags > /dev/null 2>&1; then
    echo "⚠️ Warning: Ollama doesn't seem to be running on localhost:11434"
    echo "   Start it with: ollama serve"
else
    echo "✅ Ollama is running"

    # Check if the model is available
    if ! curl -s http://localhost:11434/api/tags | grep -q "zdolny/qwen3-coder58k-tools"; then
        echo "📥 Pulling Qwen model (this may take a while)..."
        ollama pull zdolny/qwen3-coder58k-tools:latest
    else
        echo "✅ Qwen model is available"
    fi
fi

echo ""
echo "✨ Setup complete!"
echo ""
echo "To start the server:"
echo "  cd ~/agent58k"
echo "  source venv/bin/activate"
echo "  python server.py"
echo ""
echo "Then configure your IDE to use: http://localhost:8000/v1/chat/completions"
70
agent58k/test_server.sh
Normal file
@@ -0,0 +1,70 @@
#!/bin/bash
# Test script to verify the Qwen Agent MCP Server is working
# Requires curl and jq on PATH

echo "🧪 Testing Qwen Agent MCP Server..."

# Check if the server is running
if ! curl -s http://localhost:8000/health > /dev/null 2>&1; then
    echo "❌ Server is not running on localhost:8000"
    echo "   Start it with: python server.py"
    exit 1
fi

echo "✅ Server is running"

# Test health endpoint
echo ""
echo "📊 Health Check:"
curl -s http://localhost:8000/health | jq '.'

# Test root endpoint
echo ""
echo "📊 Capabilities:"
curl -s http://localhost:8000/ | jq '.'

# Test models endpoint
echo ""
echo "📊 Available Models:"
curl -s http://localhost:8000/v1/models | jq '.'

# Test chat completion (non-streaming)
echo ""
echo "📊 Testing Chat Completion (Simple):"
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "zdolny/qwen3-coder58k-tools:latest",
    "messages": [
      {"role": "user", "content": "What is 2+2? Just give the answer."}
    ],
    "stream": false
  }' | jq '.choices[0].message.content'

# Test with tool usage
echo ""
echo "📊 Testing Chat with Tools (List Directory):"
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "zdolny/qwen3-coder58k-tools:latest",
    "messages": [
      {"role": "user", "content": "List the files in the current directory using the list directory tool"}
    ],
    "stream": false
  }' | jq '.choices[0].message.content'

# Test streaming
echo ""
echo "📊 Testing Streaming:"
curl -s -N http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "zdolny/qwen3-coder58k-tools:latest",
    "messages": [
      {"role": "user", "content": "Count from 1 to 3"}
    ],
    "stream": true
  }' | head -n 20

echo ""
echo "✅ All tests completed!"