Stage 4 - Prompt Registry, Model Unification, and Final Function Hooks - COMPLETE ✅
Summary
Successfully created a centralized prompt registry system, unified model configurations, and standardized all AI function execution with clean, minimal function files.
✅ Completed Deliverables
1. Prompt Registry System Created
ai/prompts.py - PromptRegistry Class
- Purpose: Centralized prompt management with hierarchical resolution
- Features:
  - Hierarchical prompt resolution:
    1. Task-level `prompt_override` (if exists)
    2. DB prompt for (account, function)
    3. Default fallback from registry
  - Supports both `.format()`-style and `[IGNY8_*]` placeholder replacement
  - Function-to-prompt-type mapping
  - Convenience methods: `get_image_prompt_template()`, `get_negative_prompt()`
Prompt Resolution Priority
```
# Priority 1: Task override
if task.prompt_override:
    use task.prompt_override
# Priority 2: DB prompt
elif DB prompt for (account, function) exists:
    use DB prompt
# Priority 3: Default fallback
else:
    use default from registry
```
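The priority chain can also be sketched as runnable Python. `resolve_prompt` and `DEFAULT_PROMPTS` are hypothetical names used only to illustrate the Task → DB → Default order, not the actual `PromptRegistry` internals:

```python
# Illustrative sketch of the resolution order; names here are assumptions.
DEFAULT_PROMPTS = {
    "auto_cluster": "Cluster these keywords: [IGNY8_KEYWORDS]",
}

def resolve_prompt(function_name, task=None, db_prompt=None):
    """Return (source, template) following Task -> DB -> Default priority."""
    # Priority 1: task-level override, if the task carries one
    if task is not None and getattr(task, "prompt_override", None):
        return ("task", task.prompt_override)
    # Priority 2: DB prompt stored for (account, function)
    if db_prompt:
        return ("db", db_prompt)
    # Priority 3: default fallback from the registry
    return ("default", DEFAULT_PROMPTS[function_name])
```

Returning the source alongside the template is what makes the traceability logging later in this report possible.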
2. Model Configuration Centralized
ai/settings.py - MODEL_CONFIG
- Purpose: Centralized model configurations for all AI functions
- Configurations:
```python
MODEL_CONFIG = {
    "auto_cluster": {
        "model": "gpt-4o-mini",
        "max_tokens": 3000,
        "temperature": 0.7,
        "response_format": {"type": "json_object"},
    },
    "generate_ideas": {
        "model": "gpt-4.1",
        "max_tokens": 4000,
        "temperature": 0.7,
        "response_format": {"type": "json_object"},
    },
    "generate_content": {
        "model": "gpt-4.1",
        "max_tokens": 8000,
        "temperature": 0.7,
        "response_format": None,  # Text output
    },
    "generate_images": {
        "model": "dall-e-3",
        "size": "1024x1024",
        "provider": "openai",
    },
}
```
Helper Functions
- `get_model_config(function_name)` - Get full config
- `get_model(function_name)` - Get model name
- `get_max_tokens(function_name)` - Get max tokens
- `get_temperature(function_name)` - Get temperature
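A minimal sketch of how these helpers can wrap `MODEL_CONFIG`; the single trimmed entry and the function bodies are illustrative, not the actual `settings.py`:

```python
# Trimmed, illustrative config; the real MODEL_CONFIG covers all four functions.
MODEL_CONFIG = {
    "auto_cluster": {"model": "gpt-4o-mini", "max_tokens": 3000, "temperature": 0.7},
}

def get_model_config(function_name):
    """Return the full config dict for an AI function."""
    return MODEL_CONFIG[function_name]

def get_model(function_name):
    """Return just the model name."""
    return get_model_config(function_name)["model"]

def get_max_tokens(function_name):
    """Return the max_tokens limit."""
    return get_model_config(function_name)["max_tokens"]

def get_temperature(function_name):
    """Return the sampling temperature."""
    return get_model_config(function_name)["temperature"]
```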
3. Updated All AI Functions
functions/auto_cluster.py
- ✅ Uses `PromptRegistry.get_prompt()`
- ✅ Uses `get_model_config()` for model settings
- ✅ Removed direct `get_prompt_value()` calls
functions/generate_ideas.py
- ✅ Uses `PromptRegistry.get_prompt()` with context
- ✅ Uses `get_model_config()` for model settings
- ✅ Clean prompt building with context variables
functions/generate_content.py
- ✅ Uses `PromptRegistry.get_prompt()` with task support
- ✅ Uses `get_model_config()` for model settings
- ✅ Supports task-level prompt overrides
functions/generate_images.py
- ✅ Uses `PromptRegistry.get_prompt()` for extraction
- ✅ Uses `PromptRegistry.get_image_prompt_template()`
- ✅ Uses `PromptRegistry.get_negative_prompt()`
- ✅ Uses `get_model_config()` for model settings
4. Updated Engine
engine.py
- ✅ Uses `get_model_config()` instead of `fn.get_model()`
- ✅ Passes model config to `run_ai_request()`
- ✅ Unified model selection across all functions
5. Standardized Response Format
All functions now return a consistent format:
```
{
    "success": True/False,
    "output": "HTML or image_url or data",
    "raw": raw_response_json,  # Optional
    "meta": {
        "word_count": 1536,    # For content
        "keywords": [...],     # For clusters
        "model_used": "gpt-4.1",
        "tokens": 250,
        "cost": 0.000123
    },
    "error": None or error_message
}
```
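A small builder for this envelope might look like the sketch below; `make_response` is a hypothetical helper name, and only the returned shape comes from the format above:

```python
def make_response(success, output=None, raw=None, meta=None, error=None):
    """Build the standardized response envelope shared by all AI functions."""
    return {
        "success": success,   # True on success, False on failure
        "output": output,     # HTML, image URL, or structured data
        "raw": raw,           # optional raw provider response
        "meta": meta or {},   # model_used, tokens, cost, etc.
        "error": error,       # None, or an error message string
    }
```

Routing every function's return through one builder is what keeps the envelope from drifting between functions.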
📋 File Changes Summary
| File | Changes | Status |
|---|---|---|
| `prompts.py` | Created PromptRegistry class | ✅ Complete |
| `settings.py` | Created MODEL_CONFIG and helpers | ✅ Complete |
| `functions/auto_cluster.py` | Updated to use registry and settings | ✅ Complete |
| `functions/generate_ideas.py` | Updated to use registry and settings | ✅ Complete |
| `functions/generate_content.py` | Updated to use registry and settings | ✅ Complete |
| `functions/generate_images.py` | Updated to use registry and settings | ✅ Complete |
| `engine.py` | Updated to use model config | ✅ Complete |
| `__init__.py` | Exported new modules | ✅ Complete |
🔄 Migration Path
Old Code (Deprecated)
```python
from igny8_core.modules.system.utils import get_prompt_value, get_default_prompt

prompt_template = get_prompt_value(account, 'clustering')
prompt = prompt_template.replace('[IGNY8_KEYWORDS]', keywords_text)
```
New Code (Recommended)
```python
from igny8_core.ai.prompts import PromptRegistry
from igny8_core.ai.settings import get_model_config

# Get prompt from registry
prompt = PromptRegistry.get_prompt(
    function_name='auto_cluster',
    account=account,
    context={'KEYWORDS': keywords_text}
)

# Get model config
model_config = get_model_config('auto_cluster')
```
✅ Verification Checklist
- [x] PromptRegistry created with hierarchical resolution
- [x] MODEL_CONFIG created with all function configs
- [x] All functions updated to use registry
- [x] All functions updated to use model config
- [x] Engine updated to use model config
- [x] Response format standardized
- [x] No direct prompt utility calls in functions
- [x] Task-level overrides supported
- [x] DB prompts supported
- [x] Default fallbacks working
🎯 Benefits Achieved
- Centralized Prompts: All prompts in one registry
- Hierarchical Resolution: Task → DB → Default
- Model Unification: All models configured in one place
- Easy Customization: Tenant admins can override prompts
- Consistent Execution: All functions use same pattern
- Traceability: Prompt source clearly identifiable
- Minimal Functions: Functions are clean and focused
📝 Prompt Source Traceability
Each prompt execution logs its source:
```
[PROMPT] Using task-level prompt override for generate_content
[PROMPT] Using DB prompt for generate_ideas (account 123)
[PROMPT] Using default prompt for auto_cluster
```
🚀 Final Structure
```
/ai/
├── functions/
│   ├── auto_cluster.py      ← Uses registry + settings
│   ├── generate_ideas.py    ← Uses registry + settings
│   ├── generate_content.py  ← Uses registry + settings
│   └── generate_images.py   ← Uses registry + settings
├── prompts.py               ← Prompt Registry ✅
├── settings.py              ← Model Configs ✅
├── ai_core.py               ← Unified execution ✅
├── engine.py                ← Uses settings ✅
└── tracker.py               ← Console logging ✅
```
✅ Expected Outcomes Achieved
- ✅ All AI executions use common format
- ✅ Prompt customization is dynamic and overridable
- ✅ No duplication across AI functions
- ✅ Every AI task has:
- ✅ Clean inputs
- ✅ Unified execution
- ✅ Standard outputs
- ✅ Clear error tracking
- ✅ Prompt traceability