# Quick Start Guide

## Installation (30 seconds)

```bash
cd ~/agent58k
chmod +x setup.sh
./setup.sh
```
## Start the Server

```bash
cd ~/agent58k
source venv/bin/activate
python server.py
```
## Configure Your IDE

### Continue.dev

Copy the config:

```bash
cp continue-config.json ~/.continue/config.json
```
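If you want to inspect or hand-edit the config before copying it, it presumably looks something like the sketch below (field names follow Continue's `config.json` format; the exact contents of `continue-config.json` may differ):

```json
{
  "models": [
    {
      "title": "Qwen3 Coder (local agent)",
      "provider": "openai",
      "model": "zdolny/qwen3-coder58k-tools:latest",
      "apiBase": "http://localhost:8000/v1",
      "apiKey": "none"
    }
  ]
}
```

The `apiBase` points Continue at the local server instead of OpenAI; the `apiKey` value is a placeholder, since the local server is assumed not to check it.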
### Cursor

Settings → Models → Add Custom:

- Base URL: `http://localhost:8000/v1`
- Model: `zdolny/qwen3-coder58k-tools:latest`
## Test That It Works

```bash
chmod +x test_server.sh
./test_server.sh
```
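If you'd rather poke the server by hand, it speaks the OpenAI chat-completions wire format, so a minimal request body looks like this (payload shape per the OpenAI API; the model name matches the Ollama tag above):

```python
import json

# Minimal OpenAI-style chat completion request body; the server's
# /v1/chat/completions endpoint is assumed to accept this shape,
# since it is what Continue and Cursor send.
payload = {
    "model": "zdolny/qwen3-coder58k-tools:latest",
    "messages": [
        {"role": "user", "content": "List files in the current directory"}
    ],
    "stream": False,
}

body = json.dumps(payload)
print(body)
# Send it with e.g.:
#   curl -X POST http://localhost:8000/v1/chat/completions \
#        -H "Content-Type: application/json" -d @- <<< "$body"
```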
## Example Prompts

Instead of:

- ❌ "Can you help me read the config file?"

Use:

- ✅ "Read config.py and show me the database settings"
- ✅ "Search the codebase for all files importing requests"
- ✅ "Create a new file utils/parser.py with a JSON parser"
- ✅ "Run the tests in the tests/ directory"
- ✅ "Find all TODO comments in Python files"
## What Makes This Different
Your original code had:
- ❌ Only 2 basic tools (calculate, python)
- ❌ Broken tool extraction (regex parsing)
- ❌ No streaming support
- ❌ No file operations
- ❌ No terminal access
- ❌ Single-step execution only
This version has:
- ✅ 8 comprehensive tools (file ops, terminal, search, etc.)
- ✅ Proper Qwen Agent tool integration
- ✅ Full streaming support
- ✅ Multi-step agent reasoning
- ✅ Works like native Cursor/Continue
- ✅ Production-ready error handling
## Auto-Start on Boot (Optional)

```bash
# Edit the service file and replace %YOUR_USERNAME% and %HOME%
sudo cp qwen-agent.service /etc/systemd/system/
sudo systemctl enable qwen-agent
sudo systemctl start qwen-agent
```
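For reference, a unit file like `qwen-agent.service` typically looks something like this (the `%YOUR_USERNAME%` and `%HOME%` placeholders are the ones you replace, as noted above; the shipped file may differ in details):

```ini
[Unit]
Description=Qwen Agent server
After=network.target

[Service]
User=%YOUR_USERNAME%
WorkingDirectory=%HOME%/agent58k
ExecStart=%HOME%/agent58k/venv/bin/python server.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Pointing `ExecStart` at the venv's Python interpreter avoids needing to `source venv/bin/activate` inside the service.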
## Troubleshooting

**"ModuleNotFoundError: No module named 'qwen_agent'"**
→ Activate the venv: `source venv/bin/activate`

**"Connection refused to localhost:8000"**
→ Start the server: `python server.py`

**"Ollama API error"**
→ Start Ollama: `ollama serve`
→ Pull the model: `ollama pull zdolny/qwen3-coder58k-tools:latest`

**Agent not using tools**
→ Be explicit: "Use the file read tool to..."
→ Check the server logs for errors
## What Was Fixed
- **Tool System:** Implemented proper `BaseTool` classes that Qwen Agent understands
- **Streaming:** Added SSE support with proper chunk formatting
- **Response Handling:** Properly extracts content from agent responses
- **Multi-step:** The agent can now chain multiple tool calls
- **Error Handling:** Comprehensive try/except blocks with detailed error messages
- **IDE Integration:** OpenAI-compatible API that works with Continue/Cursor
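To illustrate the streaming point: an OpenAI-compatible SSE stream sends each delta as a `data:` line carrying a `chat.completion.chunk` JSON object, terminated by `data: [DONE]`. A minimal sketch of that formatting (field names follow the OpenAI streaming format; the helper name and details are illustrative, and server.py's actual implementation may differ):

```python
import json
import time

def sse_chunk(text, model="zdolny/qwen3-coder58k-tools:latest", done=False):
    """Format one piece of agent output as an OpenAI-style SSE event."""
    if done:
        # The terminator the OpenAI streaming clients wait for.
        return "data: [DONE]\n\n"
    chunk = {
        "id": "chatcmpl-local",
        "object": "chat.completion.chunk",
        "created": int(time.time()),
        "model": model,
        "choices": [
            {"index": 0, "delta": {"content": text}, "finish_reason": None}
        ],
    }
    # Each SSE event is "data: <json>" followed by a blank line.
    return f"data: {json.dumps(chunk)}\n\n"

# Example: stream two tokens, then the terminator.
events = [sse_chunk("Hello"), sse_chunk(" world"), sse_chunk("", done=True)]
```

Clients like Continue concatenate the `delta.content` fields as chunks arrive, which is why each event carries only the new text rather than the full response so far.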
## Files Created

- `server.py` - Main server (400+ lines with 8 tools)
- `requirements.txt` - Python dependencies
- `setup.sh` - One-command installation
- `test_server.sh` - Verify everything works
- `continue-config.json` - IDE configuration
- `qwen-agent.service` - Systemd service
- `README.md` - Full documentation
- `QUICKSTART.md` - This file
## Next Steps

1. Run setup: `./setup.sh`
2. Start the server: `python server.py`
3. Configure your IDE (copy `continue-config.json`)
4. Test with: "List files in the current directory"
5. Try complex tasks: "Read all Python files, find bugs, fix them"
Enjoy your fully-capable AI coding assistant! 🚀