# IGNY8 AI Elements Reference Table

Generated by `extract_ai_elements.py` analysis.

## 🧠 AI Core Functions
| Function Name | Category | Type | File | Line | Uses AIProcessor | Celery | Progress | Steps | Prompt Source | Model Source |
|---|---|---|---|---|---|---|---|---|---|---|
| `_auto_cluster_keywords_core` | cluster | core_function | backend/igny8_core/modules/planner/tasks.py | 26 | ✅ | ❌ | ✅ | ✅ | Database (get_prompt_value) | AIProcessor.default_model |
| `_generate_single_idea_core` | ideas | core_function | backend/igny8_core/modules/planner/tasks.py | 1047 | ✅ | ❌ | ✅ | ✅ | Database (get_prompt_value) | AIProcessor.default_model |
| `auto_generate_content_task` | content | celery_task | backend/igny8_core/modules/writer/tasks.py | 27 | ✅ | ✅ | ✅ | ❌ | Database (get_prompt_value) | AIProcessor.default_model |
| `auto_generate_images_task` | image | celery_task | backend/igny8_core/modules/writer/tasks.py | 741 | ✅ | ✅ | ✅ | ❌ | Database (get_prompt_value) | AIProcessor.default_model |
| `AutoClusterFunction` | cluster | class | backend/igny8_core/ai/functions/auto_cluster.py | 14 | ✅ | ❌ | ✅ | ✅ | Database (get_prompt_value) | Function.get_model() |
| `cluster_keywords` | cluster | method | backend/igny8_core/utils/ai_processor.py | 1080 | ✅ | ❌ | ✅ | ✅ | Inline/Hardcoded | AIProcessor.default_model |
| `generate_ideas` | ideas | method | backend/igny8_core/utils/ai_processor.py | 1280 | ✅ | ❌ | ✅ | ✅ | Inline/Hardcoded | AIProcessor.default_model |
| `generate_content` | content | method | backend/igny8_core/utils/ai_processor.py | 446 | ✅ | ❌ | ❌ | ❌ | Inline/Hardcoded | AIProcessor.default_model |
| `generate_image` | image | method | backend/igny8_core/utils/ai_processor.py | 656 | ✅ | ❌ | ❌ | ❌ | Inline/Hardcoded | Parameter or default |
| `run_ai_task` | unified | celery_task | backend/igny8_core/ai/tasks.py | 13 | ❌ | ✅ | ✅ | ✅ | Via function | Via function |
| `execute` | unified | method | backend/igny8_core/ai/engine.py | 26 | ✅ | ❌ | ✅ | ✅ | Via function | Via function |
## 🧱 Prompt Sources

| Prompt Type | Source | File | Retrieval Method |
|---|---|---|---|
| clustering | Hardcoded in `get_default_prompt()` | backend/igny8_core/modules/system/utils.py | `get_prompt_value()` → AIPrompt model or default |
| ideas | Hardcoded in `get_default_prompt()` | backend/igny8_core/modules/system/utils.py | `get_prompt_value()` → AIPrompt model or default |
| content_generation | Hardcoded in `get_default_prompt()` | backend/igny8_core/modules/system/utils.py | `get_prompt_value()` → AIPrompt model or default |
| image_prompt_extraction | Hardcoded in `get_default_prompt()` | backend/igny8_core/modules/system/utils.py | `get_prompt_value()` → AIPrompt model or default |
| image_prompt_template | Hardcoded in `get_default_prompt()` | backend/igny8_core/modules/system/utils.py | `get_prompt_value()` → AIPrompt model or default |
| negative_prompt | Hardcoded in `get_default_prompt()` | backend/igny8_core/modules/system/utils.py | `get_prompt_value()` → AIPrompt model or default |
Prompt Storage:
- Database Model: `AIPrompt` in `backend/igny8_core/modules/system/models.py`
- Table: `igny8_ai_prompts`
- Fields: `prompt_type`, `prompt_value`, `default_prompt`, `account` (FK)
- Unique Constraint: `(account, prompt_type)`
Prompt Retrieval Flow:
- `get_prompt_value(account, prompt_type)` in `modules/system/utils.py:108`
- Tries: `AIPrompt.objects.get(account=account, prompt_type=prompt_type, is_active=True)`
- Falls back to: `get_default_prompt(prompt_type)` if not found
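The fallback chain above can be sketched in plain Python. This is a hypothetical illustration, not the real implementation: a dict stands in for the `AIPrompt` table, and `DEFAULT_PROMPTS` and the function bodies are made up for the example.

```python
# Hypothetical sketch of the get_prompt_value() fallback chain.
# A plain dict stands in for the AIPrompt table (unique on (account, prompt_type)).
DEFAULT_PROMPTS = {
    'clustering': 'Group these keywords into topical clusters: {keywords}',
}

def get_default_prompt(prompt_type):
    """Hardcoded default, as in modules/system/utils.py."""
    return DEFAULT_PROMPTS[prompt_type]

def get_prompt_value(prompt_store, account_id, prompt_type):
    """Return the account's active prompt, else fall back to the default."""
    # Real code does: AIPrompt.objects.get(account=..., prompt_type=..., is_active=True)
    row = prompt_store.get((account_id, prompt_type))
    if row and row.get('is_active'):
        return row['prompt_value']
    return get_default_prompt(prompt_type)
```

The key point is that a missing row and an inactive row both fall through to the hardcoded default.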
## 🧾 Model Configuration

| Model Name | Source | File | Selection Method |
|---|---|---|---|
| gpt-4.1 | `MODEL_RATES` constant | backend/igny8_core/utils/ai_processor.py | `AIProcessor._get_model()` → IntegrationSettings or Django settings |
| gpt-4o-mini | `MODEL_RATES` constant | backend/igny8_core/utils/ai_processor.py | `AIProcessor._get_model()` → IntegrationSettings or Django settings |
| gpt-4o | `MODEL_RATES` constant | backend/igny8_core/utils/ai_processor.py | `AIProcessor._get_model()` → IntegrationSettings or Django settings |
| dall-e-3 | `IMAGE_MODEL_RATES` constant | backend/igny8_core/utils/ai_processor.py | Parameter or default in `generate_image()` |
| dall-e-2 | `IMAGE_MODEL_RATES` constant | backend/igny8_core/utils/ai_processor.py | Parameter or default in `generate_image()` |
Model Selection Flow:
- `AIProcessor.__init__(account)` in `utils/ai_processor.py:54` calls `_get_model('openai', account)` in `utils/ai_processor.py:98`
- Tries: `IntegrationSettings.objects.filter(integration_type='openai', account=account, is_active=True).first().config.get('model')`
- Validates the model is in the `MODEL_RATES` dict
- Falls back to: `settings.DEFAULT_AI_MODEL` (default: `'gpt-4.1'`)
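The selection order reduces to "use the per-account model if it is known, else the default". A minimal sketch, with placeholder rate entries standing in for the real `MODEL_RATES` and a constant standing in for `settings.DEFAULT_AI_MODEL`:

```python
# Illustrative sketch of AIProcessor._get_model()'s selection order.
# The MODEL_RATES entries here are placeholders, not the real pricing dict.
MODEL_RATES = {'gpt-4.1': {}, 'gpt-4o': {}, 'gpt-4o-mini': {}}
DEFAULT_AI_MODEL = 'gpt-4.1'  # stands in for settings.DEFAULT_AI_MODEL

def get_model(integration_config):
    """Use the per-account model if MODEL_RATES knows it, else the default."""
    model = (integration_config or {}).get('model')
    if model in MODEL_RATES:
        return model
    return DEFAULT_AI_MODEL
```

Note that an unrecognized model name is silently replaced by the default rather than raising, which matches the "validates model is in MODEL_RATES, falls back" behavior described above.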
Model Storage:
- Database Model: `IntegrationSettings` in `backend/igny8_core/modules/system/models.py`
- Table: `igny8_integration_settings`
- Fields: `integration_type`, `config` (JSONField), `account` (FK)
- Config Structure: `{"apiKey": "...", "model": "gpt-4.1", "enabled": true}`
## ⚠️ Validation & Limits

| Function | Validation Checks | Limit Checks |
|---|---|---|
| `_auto_cluster_keywords_core` | Has validate() call, keywords exist check | Credit check, plan limits (daily_cluster_limit, max_clusters) |
| `AutoClusterFunction.validate()` | Base validation (ids array, max_items), keywords exist | Plan limits (daily_cluster_limit, max_clusters) |
| `auto_generate_content_task` | Task existence, account validation | Credit check (via CreditService) |
| `auto_generate_images_task` | Task existence, account validation | Credit check (via CreditService) |
| `generate_image` | Model validation (VALID_OPENAI_IMAGE_MODELS), size validation (VALID_SIZES_BY_MODEL) | None |
| `AIProcessor._get_model()` | Model in MODEL_RATES validation | None |
Validation Details:
- Plan Limits (in `AutoClusterFunction.validate()`):
  - `plan.daily_cluster_limit`: max clusters per day
  - `plan.max_clusters`: total max clusters
  - Checked in `backend/igny8_core/ai/functions/auto_cluster.py:59-79`
- Credit Checks:
  - `CreditService.check_credits(account, required_credits)` in `modules/billing/services.py:16`
  - Used before AI operations
- Model Validation:
  - OpenAI images: only `dall-e-3` and `dall-e-2` are valid (lines 704-708 in `ai_processor.py`)
  - Size validation per model (lines 719-724 in `ai_processor.py`)
- Input Validation:
  - Base validation in `BaseAIFunction.validate()` checks for the `ids` array and the `max_items` limit
  - `AutoClusterFunction.get_max_items()` returns 20 (max keywords per cluster)
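The combination of input checks and plan-limit checks above can be sketched as one function. The function name, error strings, and the `plan` dict shape are invented for illustration; only the limit names and the max of 20 items come from the doc:

```python
def validate_cluster_request(ids, plan, clusters_today, clusters_total, max_items=20):
    """Hypothetical sketch of the input + plan-limit checks described above.

    `plan` is a dict with 'daily_cluster_limit' and 'max_clusters' keys,
    standing in for the real plan object.
    """
    errors = []
    # Base validation: 'ids' array present and within max_items (20 for clustering)
    if not ids:
        errors.append('ids array is required')
    elif len(ids) > max_items:
        errors.append(f'at most {max_items} items per request')
    # Plan limits, as checked in AutoClusterFunction.validate()
    if clusters_today >= plan['daily_cluster_limit']:
        errors.append('daily cluster limit reached')
    if clusters_total >= plan['max_clusters']:
        errors.append('max clusters limit reached')
    return errors
```

Returning a list of errors (rather than raising on the first failure) lets the caller surface all violations at once in the debug panel.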
## 🔁 Retry & Error Handling

| Component | Retry Logic | Error Handling | Fallback |
|---|---|---|---|
| `run_ai_task` | max_retries=3 (Celery decorator) | Exception caught, task state updated to FAILURE | None |
| `auto_generate_content_task` | max_retries=3 (Celery decorator) | Try/except blocks, error logging | None |
| `_call_openai` | None (single attempt) | HTTP error handling, JSON parse errors, timeout (60s) | Returns error dict |
| `_get_api_key` | None | Exception caught, logs warning | Falls back to Django settings (OPENAI_API_KEY, RUNWARE_API_KEY) |
| `_get_model` | None | Exception caught, logs warning | Falls back to Django settings (DEFAULT_AI_MODEL) |
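Celery's `max_retries=3` means one initial attempt plus up to three retries, i.e. the task body runs at most four times. A pure-Python stand-in (no Celery dependency; the helper name is invented) that reproduces that attempt count:

```python
def run_with_retries(fn, max_retries=3):
    """Pure-Python stand-in for Celery's max_retries=3 behavior:
    one initial attempt plus up to `max_retries` retries (4 runs total)."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted: propagate, like a task ending in FAILURE
```

In the real tasks the retry is driven by the Celery decorator and `self.retry()`, and the final failure updates the task state to FAILURE instead of just raising.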
## 🪵 AI Debug Steps

| Function | Request Steps | Response Steps | Step Tracking Method |
|---|---|---|---|
| `_auto_cluster_keywords_core` | ✅ (manual list) | ✅ (manual list) | Manual `request_steps.append()` and `response_steps.append()` |
| `AutoClusterFunction` | ✅ (via StepTracker) | ✅ (via StepTracker) | `StepTracker.add_request_step()` and `add_response_step()` |
| `run_ai_task` | ✅ (via engine) | ✅ (via engine) | Extracted from `engine.execute()` result |
| `AIEngine.execute` | ✅ (via StepTracker) | ✅ (via StepTracker) | StepTracker instance tracks all steps |
| `auto_generate_content_task` | ❌ | ❌ | No step tracking (legacy) |
| `auto_generate_images_task` | ❌ | ❌ | No step tracking (legacy) |
Step Tracking Implementation:
- New Framework (AIEngine):
  - Uses the `StepTracker` class in `backend/igny8_core/ai/tracker.py`
  - Steps added at each phase: INIT, PREP, AI_CALL, PARSE, SAVE, DONE
  - Steps stored in `request_steps` and `response_steps` lists
  - Returned in the result dict and logged to the `AITaskLog` model
- Legacy Functions:
  - Manual step tracking with lists
  - Steps added to the `meta` dict for Celery task progress
  - Extracted in `integration_views.py:task_progress()` for the frontend
- Step Structure:

```python
{
    'stepNumber': int,
    'stepName': str,       # INIT, PREP, AI_CALL, PARSE, SAVE, DONE
    'functionName': str,
    'status': str,         # 'success' | 'error' | 'pending'
    'message': str,
    'error': str,          # optional
    'duration': int,       # milliseconds, optional
}
```
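A minimal sketch of the tracker pattern follows. Only the method names (`add_request_step`, `add_response_step`) and the step fields come from this doc; the class body and signatures are assumptions, not the real `StepTracker` in `ai/tracker.py`:

```python
class StepTracker:
    """Illustrative sketch of the step-tracking pattern described above."""

    def __init__(self, function_name):
        self.function_name = function_name
        self.request_steps = []
        self.response_steps = []

    def _add(self, steps, name, message, status):
        # Steps are numbered per-list, matching the 'stepNumber' field above.
        steps.append({
            'stepNumber': len(steps) + 1,
            'stepName': name,        # INIT, PREP, AI_CALL, PARSE, SAVE, DONE
            'functionName': self.function_name,
            'status': status,        # 'success' | 'error' | 'pending'
            'message': message,
        })

    def add_request_step(self, name, message, status='success'):
        self._add(self.request_steps, name, message, status)

    def add_response_step(self, name, message, status='success'):
        self._add(self.response_steps, name, message, status)
```

Both lists can then be dropped into the result dict or the Celery task meta unchanged, which is what lets the legacy manual-list style and the new framework share the same frontend debug panel.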
## 📦 Request/Response Structuring

| Function | Request Format | Response Format | JSON Mode | Parsing Method |
|---|---|---|---|---|
| `_call_openai` | OpenAI API format: `{'model': str, 'messages': [...], 'temperature': float, 'max_tokens': int, 'response_format': dict}` | `{'content': str, 'input_tokens': int, 'output_tokens': int, 'total_tokens': int, 'model': str, 'cost': float, 'error': str, 'api_id': str}` | ✅ (if `response_format={'type': 'json_object'}`) | `_extract_json_from_response()` |
| `cluster_keywords` | Prompt string with keywords | JSON with `clusters` array | ✅ (auto-enabled for json_models) | `_extract_json_from_response()` then extract `clusters` |
| `generate_ideas` | Prompt string with clusters | JSON with `ideas` array | ✅ (auto-enabled for json_models) | `_extract_json_from_response()` then extract `ideas` |
| `generate_image` (OpenAI) | `{'prompt': str, 'model': str, 'n': int, 'size': str}` | `{'url': str, 'revised_prompt': str, 'cost': float}` | N/A | Direct JSON response |
| `generate_image` (Runware) | Array format with `imageInference` tasks | `{'url': str, 'cost': float}` | N/A | Extract from nested response structure |
JSON Mode Auto-Enable:
- Models: `['gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo-preview']`
- Auto-enabled in `AIProcessor.call()` if `response_format` is not specified
- Location: `backend/igny8_core/ai/processor.py:40-42`

JSON Extraction:
- Primary: direct `json.loads()` on the response
- Fallback: `_extract_json_from_response()` handles:
  - Markdown code fences (```` ```json ... ``` ````)
  - Multiline JSON
  - Partial JSON extraction
- Location: `backend/igny8_core/utils/ai_processor.py:334-440`
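The fallback chain (direct parse, then code-fence contents, then partial extraction) can be sketched as below. This is an assumed implementation of the described behavior, not the real ~100-line function at `ai_processor.py:334-440`:

```python
import json
import re

def extract_json_from_response(text):
    """Sketch of the fallback chain: direct parse -> markdown fence -> brace scan."""
    # 1. Primary: the whole response is valid JSON.
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # 2. Fallback: JSON wrapped in a markdown code fence (```json ... ```).
    fence = re.search(r'```(?:json)?\s*(.*?)```', text, re.DOTALL)
    if fence:
        try:
            return json.loads(fence.group(1))
        except json.JSONDecodeError:
            pass
    # 3. Partial extraction: take the outermost {...} span from surrounding prose.
    start, end = text.find('{'), text.rfind('}')
    if start != -1 and end > start:
        return json.loads(text[start:end + 1])
    raise ValueError('no JSON object found in response')
```

The brace scan is deliberately crude; it handles the common "Here is your JSON: {...}" case but would still fail on multiple independent objects in one response.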
## 📍 Paths & Constants

| Constant | Value | File | Usage |
|---|---|---|---|
| `OPENAI_API_KEY` | Django setting | backend/igny8_core/utils/ai_processor.py:93 | Fallback API key |
| `RUNWARE_API_KEY` | Django setting | backend/igny8_core/utils/ai_processor.py:95 | Fallback API key |
| `DEFAULT_AI_MODEL` | Django setting (default: `'gpt-4.1'`) | backend/igny8_core/utils/ai_processor.py:121 | Fallback model |
| OpenAI API URL | `'https://api.openai.com/v1/chat/completions'` | backend/igny8_core/utils/ai_processor.py:163 | Text generation endpoint |
| OpenAI Images URL | `'https://api.openai.com/v1/images/generations'` | backend/igny8_core/utils/ai_processor.py:735 | Image generation endpoint |
| Runware API URL | `'https://api.runware.ai/v1'` | backend/igny8_core/utils/ai_processor.py:844 | Runware image generation |
| `MODEL_RATES` | Dict with pricing per 1M tokens | backend/igny8_core/utils/ai_processor.py:19 | Cost calculation |
| `IMAGE_MODEL_RATES` | Dict with pricing per image | backend/igny8_core/utils/ai_processor.py:26 | Image cost calculation |
| `VALID_OPENAI_IMAGE_MODELS` | `{'dall-e-3', 'dall-e-2'}` | backend/igny8_core/utils/ai_processor.py:34 | Model validation |
| `VALID_SIZES_BY_MODEL` | Dict mapping models to valid sizes | backend/igny8_core/utils/ai_processor.py:41 | Size validation |
## 💰 Cost Tracking

| Component | Cost Calculation | Token Tracking | Storage |
|---|---|---|---|
| `_call_openai` | Calculated from `MODEL_RATES` based on input/output tokens | ✅ (input_tokens, output_tokens, total_tokens) | Returned in result dict |
| `generate_image` (OpenAI) | `IMAGE_MODEL_RATES[model] * n` | N/A | Returned in result dict |
| `generate_image` (Runware) | `0.036 * n` (hardcoded) | N/A | Returned in result dict |
| `CostTracker` | Aggregates costs from multiple operations | ✅ (total_tokens) | In-memory during execution |
| `AITaskLog` | Stored in `cost` field (DecimalField) | ✅ (stored in `tokens` field) | Database table `igny8_ai_task_logs` |
| `CreditUsageLog` | Stored in `cost_usd` field | ✅ (tokens_input, tokens_output) | Database table (billing module) |
Cost Calculation Formula:

```python
# Text generation
input_cost = (input_tokens / 1_000_000) * MODEL_RATES[model]['input']
output_cost = (output_tokens / 1_000_000) * MODEL_RATES[model]['output']
total_cost = input_cost + output_cost

# Image generation
cost = IMAGE_MODEL_RATES[model] * n_images
```
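A worked example of the formulas above. The rate values here are placeholders chosen for easy arithmetic, not the real `MODEL_RATES`/`IMAGE_MODEL_RATES` pricing:

```python
# Placeholder rates (USD per 1M tokens / USD per image) for illustration only.
MODEL_RATES = {'gpt-4.1': {'input': 2.00, 'output': 8.00}}
IMAGE_MODEL_RATES = {'dall-e-3': 0.04}

input_tokens, output_tokens = 1_500, 500
input_cost = (input_tokens / 1_000_000) * MODEL_RATES['gpt-4.1']['input']     # 0.003
output_cost = (output_tokens / 1_000_000) * MODEL_RATES['gpt-4.1']['output']  # 0.004
total_cost = input_cost + output_cost                                         # 0.007

image_cost = IMAGE_MODEL_RATES['dall-e-3'] * 2  # two images -> 0.08
```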
## 📊 Progress Tracking

| Function | Progress Method | Phase Tracking | Percentage Mapping |
|---|---|---|---|
| `_auto_cluster_keywords_core` | `progress_callback()` function | Manual phase strings | Manual percentage |
| `auto_generate_content_task` | `self.update_state()` (Celery) | Manual phase strings | Manual percentage |
| `AIEngine.execute` | `ProgressTracker.update()` | Automatic (INIT, PREP, AI_CALL, PARSE, SAVE, DONE) | Automatic: INIT (0-10%), PREP (10-25%), AI_CALL (25-70%), PARSE (70-85%), SAVE (85-98%), DONE (98-100%) |
| `run_ai_task` | Via AIEngine | Via AIEngine | Via AIEngine |
Progress Tracker:
- Class: `ProgressTracker` in `backend/igny8_core/ai/tracker.py:77`
- Updates Celery task state via `task.update_state()`
- Tracks: phase, percentage, message, current, total, meta
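The automatic percentage mapping can be sketched as linear interpolation within each phase's band. The band boundaries come from the Percentage Mapping column above; the function itself is an assumed illustration, not the real `ProgressTracker.update()`:

```python
# Phase -> (start%, end%) bands, as listed in the Percentage Mapping column.
PHASE_RANGES = {
    'INIT': (0, 10), 'PREP': (10, 25), 'AI_CALL': (25, 70),
    'PARSE': (70, 85), 'SAVE': (85, 98), 'DONE': (98, 100),
}

def phase_progress(phase, current, total):
    """Overall percentage: interpolate (current/total) within the phase's band."""
    lo, hi = PHASE_RANGES[phase]
    if total <= 0:
        return float(lo)
    return lo + (hi - lo) * min(current, total) / total
```

This keeps the overall percentage monotonic across phases even when each phase reports its own `current`/`total` counters.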
## 🗄️ Database Logging

| Component | Log Table | Fields Logged | When Logged |
|---|---|---|---|
| `AIEngine.execute` | `AITaskLog` | task_id, function_name, phase, message, status, duration, cost, tokens, request_steps, response_steps, error, payload, result | After execution (success or error) |
| Credit usage | `CreditUsageLog` | account, operation_type, credits_used, cost_usd, model_used, tokens_input, tokens_output | After successful save operation |

AITaskLog Model:
- Table: `igny8_ai_task_logs`
- Location: `backend/igny8_core/ai/models.py:8`
- Fields: all execution details, including steps, costs, tokens, and errors
## 🔄 Celery Integration

| Task | Entrypoint | Task ID | State Updates | Error Handling |
|---|---|---|---|---|
| `run_ai_task` | backend/igny8_core/ai/tasks.py:13 | `self.request.id` | Via ProgressTracker | Updates state to FAILURE, raises exception |
| `auto_generate_content_task` | backend/igny8_core/modules/writer/tasks.py:27 | `self.request.id` | Manual `self.update_state()` | Try/except, logs error |
| `auto_generate_images_task` | backend/igny8_core/modules/writer/tasks.py:741 | `self.request.id` | Manual `self.update_state()` | Try/except, logs error |

Task Progress Endpoint:
- Route: `/api/v1/system/settings/task_progress/{task_id}/`
- Handler: `IntegrationSettingsViewSet.task_progress()` in `modules/system/integration_views.py:936`
- Extracts: `request_steps` and `response_steps` from task meta
- Returns: progress data to the frontend for the debug panel
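The extraction step can be sketched as a small pure function over the Celery task meta. The exact payload shape returned by `task_progress()` is not specified in this doc, so the keys and defaults below are assumptions:

```python
def extract_debug_steps(task_meta):
    """Assumed sketch of what task_progress() pulls from Celery task meta
    for the frontend debug panel; field names follow the step structure above."""
    meta = task_meta or {}
    return {
        'request_steps': meta.get('request_steps', []),
        'response_steps': meta.get('response_steps', []),
        'phase': meta.get('phase'),
        'percentage': meta.get('percentage', 0),
    }
```

Defaulting to empty lists keeps the debug panel rendering cleanly for legacy tasks that never populate step data.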
## Summary

Key Findings:

- Two AI Systems Coexist:
  - Legacy: direct functions in `modules/planner/tasks.py` and `modules/writer/tasks.py`
  - New framework: `AIEngine` + `BaseAIFunction` classes in the `ai/` directory
- Unified Entrypoint:
  - `run_ai_task()` in `ai/tasks.py` is the unified Celery entrypoint
  - Uses `AIEngine` to execute any registered AI function
- Prompt Management:
  - All prompts stored in the `AIPrompt` model (database)
  - Fallback to hardcoded defaults in `get_default_prompt()`
  - Retrieved via `get_prompt_value(account, prompt_type)`
- Model Selection:
  - Per-account via `IntegrationSettings.config['model']`
  - Falls back to the Django `DEFAULT_AI_MODEL` setting
  - Validated against the `MODEL_RATES` dict
- Step Tracking:
  - New framework uses the `StepTracker` class
  - Legacy functions use manual lists
  - Both stored in Celery task meta and the `AITaskLog` model
- Cost Tracking:
  - Calculated from `MODEL_RATES` and `IMAGE_MODEL_RATES`
  - Logged to `AITaskLog` and `CreditUsageLog`
  - Tracked via `CostTracker` during execution