- Added primary_keyword, secondary_keywords, tags, and categories fields to Tasks model
- Updated generate_content function to handle full JSON response with all SEO fields
- Improved progress bar animation: smooth 1% increments every 300ms
- Enhanced step detection for content generation vs clustering vs ideas
- Fixed progress modal to show correct messages for each function type
- Added comprehensive logging to Keywords and Tasks pages for AI functions
- Fixed error handling to show meaningful error messages instead of generic failures
Stage 2 - AI Execution & Logging Layer - COMPLETE ✅
Summary
Successfully created a centralized, consistent, and traceable execution layer for all AI requests, with a unified request handler and clean console-based logging.
✅ Completed Deliverables
1. Centralized Execution in ai_core.py
run_ai_request() Method
- Purpose: Single entry point for all AI text generation requests
- Features:
  - Step-by-step console logging with `print()` statements
  - Standardized request payload construction
  - Error handling with detailed logging
  - Token counting and cost calculation
  - Rate limit detection and logging
  - Timeout handling
  - JSON mode auto-enablement for supported models
Console Logging Format
```
[AI][function_name] Step 1: Preparing request...
[AI][function_name] Step 2: Using model: gpt-4o
[AI][function_name] Step 3: Auto-enabled JSON mode for gpt-4o
[AI][function_name] Step 4: Prompt length: 1234 characters
[AI][function_name] Step 5: Request payload prepared (model=gpt-4o, max_tokens=4000, temp=0.7)
[AI][function_name] Step 6: Sending request to OpenAI API...
[AI][function_name] Step 7: Received response in 2.34s (status=200)
[AI][function_name] Step 8: Received 150 tokens (input: 50, output: 100)
[AI][function_name] Step 9: Content length: 450 characters
[AI][function_name] Step 10: Cost calculated: $0.000123
[AI][function_name][Success] Request completed successfully
```
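Every line above shares the `[AI][function_name]` prefix, so a single helper could produce the whole format. A minimal sketch, assuming a hypothetical `ai_log` helper (not the actual implementation):

```python
def ai_log(function_name: str, message: str, level: str = "") -> str:
    """Format and print a console line like [AI][fn] ... or [AI][fn][Success] ..."""
    tag = f"[{level}]" if level else ""
    line = f"[AI][{function_name}]{tag} {message}"
    print(line)
    return line

# The caller supplies the step numbering and message text
ai_log("generate_ideas", "Step 1: Preparing request...")
ai_log("generate_ideas", "Step 2: Using model: gpt-4o")
ai_log("generate_ideas", "Request completed successfully", level="Success")
```

Returning the formatted line as well as printing it keeps the helper easy to unit-test.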
Error Logging Format
```
[AI][function_name][Error] OpenAI Rate Limit - waiting 60s
[AI][function_name][Error] HTTP 429 error: Rate limit exceeded (Rate limit - retry after 60s)
[AI][function_name][Error] Request timeout (60s exceeded)
[AI][function_name][Error] Failed to parse JSON response: ...
```
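The mapping from failure modes to these messages can be sketched as a pure function. The name `format_error` and its signature are assumptions for illustration, not the real code:

```python
def format_error(function_name: str, status: int = None, retry_after: int = None,
                 timed_out: bool = False, timeout_s: int = 60) -> str:
    """Map common failure modes onto the [AI][fn][Error] message format."""
    prefix = f"[AI][{function_name}][Error]"
    if timed_out:
        return f"{prefix} Request timeout ({timeout_s}s exceeded)"
    if status == 429:
        return (f"{prefix} HTTP 429 error: Rate limit exceeded "
                f"(Rate limit - retry after {retry_after}s)")
    return f"{prefix} HTTP {status} error"

print(format_error("generate_ideas", status=429, retry_after=60))
```

Keeping message construction in one place means every function file emits identical error strings, which makes grepping logs reliable.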
2. Image Generation with Logging
generate_image() Method
- Purpose: Centralized image generation with console logging
- Features:
- Supports OpenAI DALL-E and Runware
- Model and size validation
- Step-by-step console logging
- Error handling with detailed messages
- Cost calculation
Console Logging Format
```
[AI][generate_images] Step 1: Preparing image generation request...
[AI][generate_images] Provider: OpenAI
[AI][generate_images] Step 2: Using model: dall-e-3, size: 1024x1024
[AI][generate_images] Step 3: Sending request to OpenAI Images API...
[AI][generate_images] Step 4: Received response in 5.67s (status=200)
[AI][generate_images] Step 5: Image generated successfully
[AI][generate_images] Step 6: Cost: $0.0400
[AI][generate_images][Success] Image generation completed
```
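The cost line could come from a simple per-model/size price lookup. A hypothetical sketch; the `IMAGE_PRICES` values here are illustrative assumptions and should be verified against current provider pricing:

```python
# Hypothetical price table (USD per image); real pricing varies by
# model, size, and quality tier.
IMAGE_PRICES = {
    ("dall-e-3", "1024x1024"): 0.040,
    ("dall-e-3", "1792x1024"): 0.080,
    ("dall-e-2", "1024x1024"): 0.020,
}

def image_cost(model: str, size: str) -> float:
    """Look up the per-image cost; raise for unsupported model/size pairs."""
    try:
        return IMAGE_PRICES[(model, size)]
    except KeyError:
        raise ValueError(f"Unsupported model/size: {model} {size}")

print(f"Cost: ${image_cost('dall-e-3', '1024x1024'):.4f}")
```

A lookup that raises on unknown pairs doubles as the model/size validation step mentioned above.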
3. Updated All Function Files
functions/auto_cluster.py
- ✅ Uses `AICore.extract_json()` for JSON parsing
- ✅ Engine calls `run_ai_request()` (via engine.py)
functions/generate_ideas.py
- ✅ Updated `generate_ideas_core()` to use `run_ai_request()`
- ✅ Console logging enabled with function name
functions/generate_content.py
- ✅ Updated `generate_content_core()` to use `run_ai_request()`
- ✅ Console logging enabled with function name
functions/generate_images.py
- ✅ Updated to use `run_ai_request()` for prompt extraction
- ✅ Updated to use `generate_image()` with logging
- ✅ Console logging enabled
4. Updated Engine
engine.py
- ✅ Updated to use `run_ai_request()` instead of `call_openai()`
- ✅ Passes function name for logging context
- ✅ Maintains backward compatibility
5. Deprecated Old Code
processor.py
- ✅ Marked as DEPRECATED
- ✅ Redirects all calls to `AICore`
- ✅ Kept for backward compatibility only
- ✅ All methods now use `AICore` internally
6. Edge Case Handling
Implemented in `run_ai_request()`:
- ✅ API Key Validation: Logs error if not configured
- ✅ Prompt Length: Logs character count
- ✅ Rate Limits: Detects and logs retry-after time
- ✅ Timeouts: Handles 60s timeout with clear error
- ✅ JSON Parsing Errors: Logs decode errors with context
- ✅ Empty Responses: Validates content exists
- ✅ Token Overflow: Max tokens enforced
- ✅ Model Validation: Auto-selects JSON mode for supported models
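The pre- and post-request checks above can be sketched together in one helper. `validate_and_parse` is illustrative only, not the real `run_ai_request()` internals:

```python
import json

def validate_and_parse(api_key, prompt, raw_response, function_name="fn"):
    """Apply edge-case checks around an AI request: key configured,
    prompt length logged, non-empty response, valid JSON."""
    if not api_key:
        raise ValueError(f"[AI][{function_name}][Error] API key not configured")
    print(f"[AI][{function_name}] Prompt length: {len(prompt)} characters")
    if not raw_response or not raw_response.strip():
        raise ValueError(f"[AI][{function_name}][Error] Empty response from API")
    try:
        return json.loads(raw_response)
    except json.JSONDecodeError as exc:
        raise ValueError(
            f"[AI][{function_name}][Error] Failed to parse JSON response: {exc}")

data = validate_and_parse("sk-test", "Write ideas", '{"ideas": []}')
```

Raising `ValueError` with the already-formatted log string keeps the error message and the console output identical.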
7. Standardized Request Schema
OpenAI Request Payload
```python
{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": prompt}],
    "temperature": 0.7,
    "max_tokens": 4000,
    "response_format": {"type": "json_object"}  # Auto-enabled for supported models
}
```
All Functions Use Same Logic:
- Model selection (account default or override)
- JSON mode auto-enablement
- Token limits
- Temperature settings
- Error handling
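The shared payload construction, including JSON mode auto-enablement, can be sketched as follows. The `JSON_MODE_MODELS` set is an assumption for illustration; the real list of supported models may differ:

```python
# Models assumed to support response_format={"type": "json_object"};
# an illustrative list, not an authoritative one.
JSON_MODE_MODELS = {"gpt-4o", "gpt-4o-mini", "gpt-4-turbo"}

def build_payload(prompt, model="gpt-4o", temperature=0.7, max_tokens=4000):
    """Construct the standardized chat payload used by every function."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    if model in JSON_MODE_MODELS:  # JSON mode auto-enablement
        payload["response_format"] = {"type": "json_object"}
    return payload

payload = build_payload("Generate 10 blog post ideas")
```

Because every function goes through the same builder, overriding the model or temperature in one place changes behavior everywhere consistently.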
8. Test Script Created
ai/tests/test_run.py
- ✅ Test script for all AI functions
- ✅ Tests `run_ai_request()` directly
- ✅ Tests JSON extraction
- ✅ Placeholder tests for all functions
- ✅ Can be run standalone to verify logging
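The JSON-extraction path under test might look like this sketch; it is not the actual `AICore.extract_json()` implementation, just one common approach to pulling JSON out of a model response that wraps it in prose:

```python
import json

def extract_json(text: str):
    """Extract the first JSON object from a model response that may
    surround it with explanatory prose (sketch only)."""
    start = text.find("{")
    end = text.rfind("}")
    if start == -1 or end == -1 or end < start:
        raise ValueError("No JSON object found in response")
    return json.loads(text[start:end + 1])

print(extract_json('Sure! Here are the ideas: {"ideas": ["a", "b"]}'))
```

A standalone test script can assert on this kind of pure function without any network access, which is what makes the logging layer easy to verify locally.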
📋 File Changes Summary
| File | Changes | Status |
|---|---|---|
| `ai_core.py` | Complete rewrite with `run_ai_request()` and console logging | ✅ Complete |
| `engine.py` | Updated to use `run_ai_request()` | ✅ Complete |
| `processor.py` | Marked deprecated, redirects to `AICore` | ✅ Complete |
| `functions/auto_cluster.py` | Uses `AICore` methods | ✅ Complete |
| `functions/generate_ideas.py` | Uses `run_ai_request()` | ✅ Complete |
| `functions/generate_content.py` | Uses `run_ai_request()` | ✅ Complete |
| `functions/generate_images.py` | Uses `run_ai_request()` and `generate_image()` | ✅ Complete |
| `tests/test_run.py` | Test script created | ✅ Complete |
🔄 Migration Path
Old Code (Deprecated)
```python
from igny8_core.utils.ai_processor import AIProcessor

processor = AIProcessor(account=account)
result = processor._call_openai(prompt, model=model)
```
New Code (Recommended)
```python
from igny8_core.ai.ai_core import AICore

ai_core = AICore(account=account)
result = ai_core.run_ai_request(
    prompt=prompt,
    model=model,
    function_name='my_function'
)
```
✅ Verification Checklist
- `run_ai_request()` created with console logging
- All function files updated to use `run_ai_request()`
- Engine updated to use `run_ai_request()`
- Edge cases handled with logging
- Request schema standardized
- Test script created
- No linting errors
- Backward compatibility maintained
🎯 Benefits Achieved
- Centralized Execution: All AI requests go through one method
- Consistent Logging: Every request logs steps to console
- Better Debugging: Clear step-by-step visibility
- Error Handling: Comprehensive error detection and logging
- Reduced Duplication: No scattered AI call logic
- Easy Testing: Single point to test/mock
- Future Ready: Easy to add retry logic, backoff, etc.
📝 Console Output Example
When running any AI function, you'll see:
```
[AI][generate_ideas] Step 1: Preparing request...
[AI][generate_ideas] Step 2: Using model: gpt-4o
[AI][generate_ideas] Step 3: Auto-enabled JSON mode for gpt-4o
[AI][generate_ideas] Step 4: Prompt length: 2345 characters
[AI][generate_ideas] Step 5: Request payload prepared (model=gpt-4o, max_tokens=4000, temp=0.7)
[AI][generate_ideas] Step 6: Sending request to OpenAI API...
[AI][generate_ideas] Step 7: Received response in 3.45s (status=200)
[AI][generate_ideas] Step 8: Received 250 tokens (input: 100, output: 150)
[AI][generate_ideas] Step 9: Content length: 600 characters
[AI][generate_ideas] Step 10: Cost calculated: $0.000250
[AI][generate_ideas][Success] Request completed successfully
```
🚀 Next Steps (Future Stages)
- Stage 3: Simplify logging (optional - console logging already implemented)
- Stage 4: Clean up legacy code (remove old processor completely)
- Future: Add retry logic, exponential backoff, request queuing
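Retry with exponential backoff could later be layered on without touching call sites. A minimal sketch, assuming a hypothetical `with_retries` helper (the injectable `sleep` parameter exists so tests can run without real delays):

```python
import time

def with_retries(request_fn, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call request_fn, retrying on any exception with exponential
    backoff (base_delay, 2*base_delay, 4*base_delay, ...)."""
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            sleep(base_delay * (2 ** attempt))

# Usage sketch: wrap the centralized call without changing its signature
# result = with_retries(lambda: ai_core.run_ai_request(
#     prompt=prompt, model=model, function_name='my_function'))
```

Since all requests already flow through `run_ai_request()`, wrapping that single entry point would add retries to every function at once.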