# IGNY8 AI Framework Documentation

Version: 1.0
Last Updated: 2025-01-XX
Purpose: Complete documentation of the unified AI framework architecture.

## Overview
The IGNY8 AI Framework provides a unified, consistent architecture for all AI functions. It eliminates code duplication, standardizes progress tracking, and provides a single interface for all AI operations.
### Key Benefits
- 90% Code Reduction: Functions are now ~100 lines instead of ~600
- Consistent UX: All functions use the same progress modal and tracking
- Unified Logging: Single `AITaskLog` table for all AI operations
- Easy Extension: Add new functions by creating one class
- Better Debugging: Detailed step-by-step tracking for all operations
## Architecture

### Directory Structure
```
igny8_core/ai/
├── __init__.py      # Auto-registers all functions
├── apps.py          # Django app configuration
├── admin.py         # Admin interface for AITaskLog
├── base.py          # BaseAIFunction abstract class
├── engine.py        # AIEngine orchestrator
├── processor.py     # AIProcessor wrapper
├── registry.py      # Function registry
├── tracker.py       # StepTracker, ProgressTracker, CostTracker
├── tasks.py         # Unified Celery task entrypoint
├── types.py         # Shared dataclasses
├── models.py        # AITaskLog model
└── functions/       # Function implementations
    ├── __init__.py
    └── auto_cluster.py  # Auto cluster function
```
## Core Components

### 1. BaseAIFunction
Abstract base class that all AI functions inherit from.
Methods to implement:
- `get_name()`: Return function name
- `prepare()`: Load and prepare data
- `build_prompt()`: Build AI prompt
- `parse_response()`: Parse AI response
- `save_output()`: Save results to database
Optional overrides:
- `validate()`: Custom validation
- `get_max_items()`: Set item limit
- `get_model()`: Specify AI model
- `get_metadata()`: Function metadata
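The required and optional methods above can be sketched as an abstract base class. This is an illustrative sketch, not the framework's actual `base.py`: the signatures are inferred from the examples later in this document, and the defaults (item limit, model name) are assumptions.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class BaseAIFunction(ABC):
    """Sketch of the abstract contract described above."""

    @abstractmethod
    def get_name(self) -> str:
        """Return the unique function name used by the registry."""

    @abstractmethod
    def prepare(self, payload: Dict, account=None) -> Dict:
        """Load and prepare the data needed to build the prompt."""

    @abstractmethod
    def build_prompt(self, data: Dict, account=None) -> str:
        """Build the AI prompt from prepared data."""

    @abstractmethod
    def parse_response(self, response: str, step_tracker=None) -> List[Dict]:
        """Parse the raw AI response into structured records."""

    @abstractmethod
    def save_output(self, parsed, original_data, account, progress_tracker) -> Dict:
        """Persist parsed results and return a summary dict."""

    # Optional overrides with assumed defaults
    def validate(self, payload: Dict) -> None:
        """Custom validation; no-op by default."""

    def get_max_items(self) -> int:
        return 50  # assumed default item limit

    def get_model(self) -> str:
        return "gpt-4o-mini"  # assumed default; override per function

    def get_metadata(self) -> Dict[str, Any]:
        return {"name": self.get_name()}
```

A concrete function only needs to fill in the five abstract methods; everything else (phases, progress, logging) is handled by the engine.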
### 2. AIEngine
Central orchestrator that manages the execution pipeline.
Phases:
- INIT (0-10%): Validation & setup
- PREP (10-25%): Data loading & prompt building
- AI_CALL (25-60%): API call to provider
- PARSE (60-80%): Response parsing
- SAVE (80-95%): Database operations
- DONE (95-100%): Finalization
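The phase bands above imply a simple mapping from per-phase progress to the overall percentage. A minimal sketch of how a tracker might compute it (the helper name and clamping behavior are assumptions, not `tracker.py`'s actual code):

```python
# Phase name -> (start %, end %) bands, as listed above
PHASES = {
    "INIT":    (0, 10),
    "PREP":    (10, 25),
    "AI_CALL": (25, 60),
    "PARSE":   (60, 80),
    "SAVE":    (80, 95),
    "DONE":    (95, 100),
}


def overall_percentage(phase: str, fraction: float) -> int:
    """Map progress within a phase (0.0-1.0) onto the overall 0-100 scale."""
    start, end = PHASES[phase]
    fraction = min(max(fraction, 0.0), 1.0)  # clamp out-of-range input
    return int(start + (end - start) * fraction)
```

For example, being halfway through the AI call reports as 42% overall, so the progress bar never jumps backwards between phases.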
### 3. Function Registry
Dynamic function discovery system.
Usage:

```python
from igny8_core.ai.registry import register_function, get_function

# Register function
register_function('auto_cluster', AutoClusterFunction)

# Get function
fn = get_function('auto_cluster')
```
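Internally, a registry like this can be as small as a module-level dict. A sketch of one possible implementation (the duplicate-registration guard and error messages are assumptions):

```python
from typing import Dict

# Module-level mapping of function name -> function class
_REGISTRY: Dict[str, type] = {}


def register_function(name: str, cls: type) -> None:
    """Register an AI function class under a unique name."""
    if name in _REGISTRY:
        raise ValueError(f"Function '{name}' is already registered")
    _REGISTRY[name] = cls


def get_function(name: str) -> type:
    """Look up a registered function class by name."""
    try:
        return _REGISTRY[name]
    except KeyError:
        raise KeyError(f"Unknown AI function: '{name}'") from None
```

Because registration happens in `ai/__init__.py`, importing the app is enough for every function to become discoverable.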
### 4. Unified Celery Task

Single entrypoint for all AI functions.

Endpoint: `run_ai_task(function_name, payload, account_id)`
Example:

```python
from igny8_core.ai.tasks import run_ai_task

task = run_ai_task.delay(
    function_name='auto_cluster',
    payload={'ids': [1, 2, 3], 'sector_id': 1},
    account_id=1
)
```
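Stripped of the Celery decorator, the task body reduces to a thin dispatcher over the registry. A sketch of that dispatch logic; `run_ai_function`, the `registry` parameter, and `ai_call` are illustrative stand-ins (in the real framework the body lives inside `run_ai_task` and AIEngine makes the provider call):

```python
def run_ai_function(function_name, payload, account=None,
                    registry=None, ai_call=None):
    """Walk the execution pipeline for one AI function (sketch only).

    In the framework this runs inside the Celery task run_ai_task, so
    callers invoke it via .delay(...); ai_call stands in for the
    provider API request the engine makes during the AI_CALL phase.
    """
    registry = registry or {}
    fn_cls = registry.get(function_name)
    if fn_cls is None:
        return {'status': 'error',
                'error': f'Unknown function: {function_name}'}
    fn = fn_cls()
    data = fn.prepare(payload, account)                     # PREP
    prompt = fn.build_prompt(data, account)                 # PREP
    response = ai_call(prompt)                              # AI_CALL
    parsed = fn.parse_response(response)                    # PARSE
    summary = fn.save_output(parsed, data, account, None)   # SAVE
    return {'status': 'success', 'result': summary}
```

Every function class flows through the same five steps, which is what makes the phase percentages and log entries comparable across functions.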
## Function Implementation Example

### Auto Cluster Function
```python
from typing import Dict, List

from igny8_core.ai.base import BaseAIFunction


class AutoClusterFunction(BaseAIFunction):
    def get_name(self) -> str:
        return 'auto_cluster'

    def get_max_items(self) -> int:
        return 20

    def prepare(self, payload: dict, account=None) -> Dict:
        # Load keywords
        ids = payload.get('ids', [])
        keywords = Keywords.objects.filter(id__in=ids)
        return {'keywords': keywords, ...}

    def build_prompt(self, data: Dict, account=None) -> str:
        # Build clustering prompt
        return prompt_template.replace('[IGNY8_KEYWORDS]', keywords_text)

    def parse_response(self, response: str, step_tracker=None) -> List[Dict]:
        # Parse AI response
        return clusters

    def save_output(self, parsed, original_data, account, progress_tracker) -> Dict:
        # Save clusters to database
        return {'clusters_created': 5, 'keywords_updated': 20}
```
## API Endpoint Example
Before (Old): ~300 lines
After (New): ~50 lines
```python
@action(detail=False, methods=['post'], url_path='auto_cluster')
def auto_cluster(self, request):
    from igny8_core.ai.tasks import run_ai_task

    account = getattr(request, 'account', None)
    account_id = account.id if account else None

    payload = {
        'ids': request.data.get('ids', []),
        'sector_id': request.data.get('sector_id')
    }

    task = run_ai_task.delay(
        function_name='auto_cluster',
        payload=payload,
        account_id=account_id
    )

    return Response({
        'success': True,
        'task_id': str(task.id),
        'message': 'Clustering started'
    })
```
## Progress Tracking

### Unified Progress Endpoint

URL: `/api/v1/system/settings/task_progress/<task_id>/`
Response:

```json
{
  "state": "PROGRESS",
  "meta": {
    "phase": "AI_CALL",
    "percentage": 45,
    "message": "Analyzing keyword relationships...",
    "request_steps": [...],
    "response_steps": [...],
    "cost": 0.000123,
    "tokens": 1500
  }
}
```
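Shaping Celery's task state and meta into this response can be done with a small helper. A sketch using the field names from the example above (the helper itself is an assumption; the real view would wrap Celery's `AsyncResult` for the given task ID):

```python
def format_task_progress(state: str, meta: dict) -> dict:
    """Shape a task's state/meta dict into the progress response above."""
    return {
        "state": state,
        "meta": {
            "phase": meta.get("phase", "INIT"),
            "percentage": meta.get("percentage", 0),
            "message": meta.get("message", ""),
            "request_steps": meta.get("request_steps", []),
            "response_steps": meta.get("response_steps", []),
            "cost": meta.get("cost", 0.0),
            "tokens": meta.get("tokens", 0),
        },
    }
```

Defaulting every field means the frontend can render the modal the moment the task is created, before the first progress update arrives.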
### Frontend Integration
All AI functions use the same progress modal:
- Single `useProgressModal` hook
- Unified progress endpoint
- Consistent phase labels
- Step-by-step logs
## Database Logging

### AITaskLog Model
Unified logging table for all AI operations.
Fields:
- `task_id`: Celery task ID
- `function_name`: Function name
- `account`: Account (required)
- `phase`: Current phase
- `status`: success/error/pending
- `cost`: API cost
- `tokens`: Token usage
- `request_steps`: Request step logs
- `response_steps`: Response step logs
- `error`: Error message (if any)
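For illustration, the field shape can be written as a plain dataclass. The real `AITaskLog` is a Django model (with `account` as a foreign key), so this is only a record-shape sketch:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class AITaskLogRecord:
    """Field shape of the AITaskLog model (illustrative, not the ORM class)."""
    task_id: str                 # Celery task ID
    function_name: str           # registered function name
    account_id: int              # account is required
    phase: str = "INIT"          # current phase
    status: str = "pending"      # success / error / pending
    cost: float = 0.0            # API cost
    tokens: int = 0              # token usage
    request_steps: List[dict] = field(default_factory=list)
    response_steps: List[dict] = field(default_factory=list)
    error: Optional[str] = None  # error message, if any
```

One row per task means a single admin page can answer "what did this AI call cost, and where did it fail?" for every function in the system.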
## Migration Guide

### Migrating Existing Functions
1. Create a function class inheriting `BaseAIFunction`
2. Implement the required methods
3. Register the function in `ai/__init__.py`
4. Update the API endpoint to use `run_ai_task`
5. Test and remove the old code
### Example Migration
Old code:

```python
@action(...)
def auto_cluster(self, request):
    # 300 lines of code
```

New code:

```python
@action(...)
def auto_cluster(self, request):
    # 20 lines using the framework
```
## Summary
The AI Framework provides:
- Unified Architecture: Single framework for all AI functions
- Code Reduction: 90% less code per function
- Consistent UX: Same progress modal for all functions
- Better Debugging: Detailed step tracking
- Easy Extension: Add functions quickly
- Unified Logging: Single log table
- Cost Tracking: Automatic cost calculation
This architecture ensures maintainability, consistency, and extensibility while dramatically reducing code duplication.