Commit `b7d9fd43c7` by alorig, 2025-11-09 19:34:54 +05:00 (parent `c04c688aa0`) — 9 changed files with 686 additions and 52 deletions

# Stage 4 - Prompt Registry, Model Unification, and Final Function Hooks - COMPLETE ✅
## Summary
Successfully created a centralized prompt registry system, unified model configurations, and standardized all AI function execution with clean, minimal function files.
## ✅ Completed Deliverables
### 1. Prompt Registry System Created
#### `ai/prompts.py` - PromptRegistry Class
- **Purpose**: Centralized prompt management with hierarchical resolution
- **Features**:
- Hierarchical prompt resolution:
1. Task-level `prompt_override` (if set)
2. DB prompt for (account, function)
3. Default fallback from registry
- Supports both `.format()` style and `[IGNY8_*]` placeholder replacement
- Function-to-prompt-type mapping
- Convenience methods: `get_image_prompt_template()`, `get_negative_prompt()`
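The dual placeholder support can be sketched as a small helper; `apply_context` is a hypothetical name, not necessarily the registry's actual method:

```python
def apply_context(template: str, context: dict) -> str:
    """Fill placeholders in a prompt template.

    Supports both `[IGNY8_NAME]` tokens and `{NAME}`-style fields.
    Illustrative helper; the real PromptRegistry method may differ.
    """
    for key, value in context.items():
        # [IGNY8_*] token replacement
        template = template.replace(f"[IGNY8_{key.upper()}]", str(value))
        # .format()-style field replacement, done per-key so stray
        # braces elsewhere in the template do not raise KeyError
        template = template.replace("{" + key + "}", str(value))
    return template
```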
#### Prompt Resolution Priority
```python
# Priority 1: task-level override (names here are illustrative)
if task and task.prompt_override:
    prompt = task.prompt_override
# Priority 2: DB prompt stored for this (account, function) pair
elif (db_prompt := lookup_db_prompt(account, function_name)):
    prompt = db_prompt
# Priority 3: default fallback from the registry
else:
    prompt = DEFAULT_PROMPTS[function_name]
```
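The priority chain can be sketched end-to-end as a runnable function; `resolve_prompt` and the dict-based DB stand-in are illustrative assumptions, not the registry's real internals:

```python
def resolve_prompt(function_name, account_id, db_prompts, defaults, task_override=None):
    """Resolve a prompt via the three-tier hierarchy described above.

    `db_prompts` stands in for the database lookup: a dict keyed by
    (account_id, function_name). All names here are illustrative.
    Returns (prompt, source) so callers can log where the prompt came from.
    """
    # Priority 1: task-level override
    if task_override:
        return task_override, "task"
    # Priority 2: DB prompt for (account, function)
    db_prompt = db_prompts.get((account_id, function_name))
    if db_prompt:
        return db_prompt, "db"
    # Priority 3: default fallback from the registry
    return defaults[function_name], "default"
```

Returning the source alongside the prompt is what makes the traceability logging later in this document possible.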
### 2. Model Configuration Centralized
#### `ai/settings.py` - MODEL_CONFIG
- **Purpose**: Centralized model configurations for all AI functions
- **Configurations**:
```python
MODEL_CONFIG = {
"auto_cluster": {
"model": "gpt-4o-mini",
"max_tokens": 3000,
"temperature": 0.7,
"response_format": {"type": "json_object"},
},
"generate_ideas": {
"model": "gpt-4.1",
"max_tokens": 4000,
"temperature": 0.7,
"response_format": {"type": "json_object"},
},
"generate_content": {
"model": "gpt-4.1",
"max_tokens": 8000,
"temperature": 0.7,
"response_format": None, # Text output
},
"generate_images": {
"model": "dall-e-3",
"size": "1024x1024",
"provider": "openai",
},
}
```
#### Helper Functions
- `get_model_config(function_name)` - Get full config
- `get_model(function_name)` - Get model name
- `get_max_tokens(function_name)` - Get max tokens
- `get_temperature(function_name)` - Get temperature
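A minimal sketch of these helpers, assuming the `MODEL_CONFIG` shape shown above (trimmed here to one entry); the error-handling choice is an assumption:

```python
MODEL_CONFIG = {
    "auto_cluster": {
        "model": "gpt-4o-mini",
        "max_tokens": 3000,
        "temperature": 0.7,
        "response_format": {"type": "json_object"},
    },
}

def get_model_config(function_name: str) -> dict:
    """Return the full config dict, failing loudly on unknown functions."""
    try:
        return MODEL_CONFIG[function_name]
    except KeyError:
        raise ValueError(f"No model config for AI function: {function_name}")

def get_model(function_name: str) -> str:
    return get_model_config(function_name)["model"]

def get_max_tokens(function_name: str) -> int:
    return get_model_config(function_name)["max_tokens"]

def get_temperature(function_name: str) -> float:
    return get_model_config(function_name)["temperature"]
```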
### 3. Updated All AI Functions
#### `functions/auto_cluster.py`
- ✅ Uses `PromptRegistry.get_prompt()`
- ✅ Uses `get_model_config()` for model settings
- ✅ Removed direct `get_prompt_value()` calls
#### `functions/generate_ideas.py`
- ✅ Uses `PromptRegistry.get_prompt()` with context
- ✅ Uses `get_model_config()` for model settings
- ✅ Clean prompt building with context variables
#### `functions/generate_content.py`
- ✅ Uses `PromptRegistry.get_prompt()` with task support
- ✅ Uses `get_model_config()` for model settings
- ✅ Supports task-level prompt overrides
#### `functions/generate_images.py`
- ✅ Uses `PromptRegistry.get_prompt()` for extraction
- ✅ Uses `PromptRegistry.get_image_prompt_template()`
- ✅ Uses `PromptRegistry.get_negative_prompt()`
- ✅ Uses `get_model_config()` for model settings
### 4. Updated Engine
#### `engine.py`
- ✅ Uses `get_model_config()` instead of `fn.get_model()`
- ✅ Passes model config to `run_ai_request()`
- ✅ Unified model selection across all functions
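The engine-side change can be sketched as follows; `run_ai_request` is stubbed here because its real signature is not shown in this document, and the `execute` wrapper is a hypothetical name:

```python
from typing import Any

MODEL_CONFIG = {
    "generate_content": {"model": "gpt-4.1", "max_tokens": 8000, "temperature": 0.7},
}

def get_model_config(function_name: str) -> dict:
    return MODEL_CONFIG[function_name]

def run_ai_request(prompt: str, model_config: dict) -> dict[str, Any]:
    """Stand-in for the project's unified executor (real signature assumed)."""
    return {"success": True, "meta": {"model_used": model_config["model"]}}

def execute(function_name: str, prompt: str) -> dict[str, Any]:
    """Engine step: model selection comes from settings, not from fn.get_model()."""
    model_config = get_model_config(function_name)
    return run_ai_request(prompt, model_config)
```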
### 5. Standardized Response Format
All functions now return a consistent format:
```python
{
"success": True/False,
"output": "HTML or image_url or data",
"raw": raw_response_json, # Optional
"meta": {
"word_count": 1536, # For content
"keywords": [...], # For clusters
"model_used": "gpt-4.1",
"tokens": 250,
"cost": 0.000123
},
"error": None or error_message
}
```
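A helper producing this envelope might look like the following; `make_response` is a hypothetical name, and the actual functions may construct the dict inline:

```python
def make_response(success, output=None, raw=None, meta=None, error=None):
    """Build the standardized response dict every AI function returns.

    Illustrative sketch of the envelope described above.
    """
    return {
        "success": success,
        "output": output,
        "raw": raw,       # optional raw provider response
        "meta": meta or {},
        "error": error,   # None on success, message string on failure
    }
```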
## 📋 File Changes Summary
| File | Changes | Status |
|------|---------|--------|
| `prompts.py` | Created PromptRegistry class | ✅ Complete |
| `settings.py` | Created MODEL_CONFIG and helpers | ✅ Complete |
| `functions/auto_cluster.py` | Updated to use registry and settings | ✅ Complete |
| `functions/generate_ideas.py` | Updated to use registry and settings | ✅ Complete |
| `functions/generate_content.py` | Updated to use registry and settings | ✅ Complete |
| `functions/generate_images.py` | Updated to use registry and settings | ✅ Complete |
| `engine.py` | Updated to use model config | ✅ Complete |
| `__init__.py` | Exported new modules | ✅ Complete |
## 🔄 Migration Path
### Old Code (Deprecated)
```python
from igny8_core.modules.system.utils import get_prompt_value, get_default_prompt
prompt_template = get_prompt_value(account, 'clustering')
prompt = prompt_template.replace('[IGNY8_KEYWORDS]', keywords_text)
```
### New Code (Recommended)
```python
from igny8_core.ai.prompts import PromptRegistry
from igny8_core.ai.settings import get_model_config
# Get prompt from registry
prompt = PromptRegistry.get_prompt(
function_name='auto_cluster',
account=account,
context={'KEYWORDS': keywords_text}
)
# Get model config
model_config = get_model_config('auto_cluster')
```
## ✅ Verification Checklist
- [x] PromptRegistry created with hierarchical resolution
- [x] MODEL_CONFIG created with all function configs
- [x] All functions updated to use registry
- [x] All functions updated to use model config
- [x] Engine updated to use model config
- [x] Response format standardized
- [x] No direct prompt utility calls in functions
- [x] Task-level overrides supported
- [x] DB prompts supported
- [x] Default fallbacks working
## 🎯 Benefits Achieved
1. **Centralized Prompts**: All prompts in one registry
2. **Hierarchical Resolution**: Task → DB → Default
3. **Model Unification**: All models configured in one place
4. **Easy Customization**: Tenant admins can override prompts
5. **Consistent Execution**: All functions use same pattern
6. **Traceability**: Prompt source clearly identifiable
7. **Minimal Functions**: Functions are clean and focused
## 📝 Prompt Source Traceability
Each prompt execution logs its source:
- `[PROMPT] Using task-level prompt override for generate_content`
- `[PROMPT] Using DB prompt for generate_ideas (account 123)`
- `[PROMPT] Using default prompt for auto_cluster`
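The three log lines follow one pattern and could come from a single formatter; the function name below is an assumption for illustration:

```python
def prompt_source_log(source: str, function_name: str, account_id=None) -> str:
    """Format the [PROMPT] traceability line for a resolved prompt source.

    Illustrative helper matching the example log lines above.
    """
    if source == "task":
        return f"[PROMPT] Using task-level prompt override for {function_name}"
    if source == "db":
        return f"[PROMPT] Using DB prompt for {function_name} (account {account_id})"
    return f"[PROMPT] Using default prompt for {function_name}"
```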
## 🚀 Final Structure
```
/ai/
├── functions/
│ ├── auto_cluster.py ← Uses registry + settings
│ ├── generate_ideas.py ← Uses registry + settings
│ ├── generate_content.py ← Uses registry + settings
│ └── generate_images.py ← Uses registry + settings
├── prompts.py ← Prompt Registry ✅
├── settings.py ← Model Configs ✅
├── ai_core.py ← Unified execution ✅
├── engine.py ← Uses settings ✅
└── tracker.py ← Console logging ✅
```
## ✅ Expected Outcomes Achieved
- ✅ All AI executions use common format
- ✅ Prompt customization is dynamic and overridable
- ✅ No duplication across AI functions
- ✅ Every AI task has:
- ✅ Clean inputs
- ✅ Unified execution
- ✅ Standard outputs
- ✅ Clear error tracking
- ✅ Prompt traceability