Automation Runs Detail View - Implementation Log
Phase 1: Backend API Enhancement ✅
Implementation Date: January 13, 2025
Status: COMPLETED
Time Spent: ~2 hours
File Modified: /backend/igny8_core/business/automation/views.py
Summary of Changes
1. New Imports Added
from django.db.models import Count, Sum, Avg, F
from datetime import timedelta
# Business model imports
from igny8_core.business.keywords.models import Keywords
from igny8_core.business.clusters.models import Clusters
from igny8_core.business.content_ideas.models import ContentIdeas
from igny8_core.business.tasks.models import Tasks
from igny8_core.business.content.models import Content
from igny8_core.business.images.models import Images
2. Helper Methods Implemented
_calculate_run_number(site, run)
- Purpose: Calculate sequential run number for a site
- Logic: Counts all runs with started_at <= current_run.started_at
- Returns: Integer run number (e.g., 1, 2, 3...)
- Usage: Generates human-readable run titles like "mysite.com #42"
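The count-based numbering can be sketched as a pure function (the real helper would run the same comparison as a Django queryset `.count()` filtered by site; the function name and signature here are illustrative):

```python
from datetime import datetime

def calculate_run_number(run_start_times, current_started_at):
    """Run number = count of the site's runs started at or before this one.

    `run_start_times` stands in for the site's runs; in the view this would
    be a queryset filter, not an in-memory list.
    """
    return sum(1 for t in run_start_times if t <= current_started_at)

starts = [datetime(2026, 1, 10), datetime(2026, 1, 11), datetime(2026, 1, 13)]
calculate_run_number(starts, datetime(2026, 1, 13))  # third run for this site
```

Because the comparison is inclusive, the current run counts itself, so the first run gets number 1.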
_calculate_historical_averages(site, completed_runs)
- Purpose: Analyze historical performance from last 10 completed runs
- Minimum Required: 3 completed runs (returns defaults if insufficient)
- Returns Object with:
- stages: Array of 7 stage averages (avg_credits, avg_items_created, avg_output_ratio)
- avg_total_credits: Average total credits per run
- avg_duration_seconds: Average run duration
- avg_credits_per_item: Overall credit efficiency
- total_runs_analyzed: Count of runs in sample
- has_sufficient_data: Boolean flag
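A minimal sketch of the averaging logic, assuming each completed run is a dict (newest first) with the fields named in the history response; the helper name and list-based input are illustrative, not the view's actual signature:

```python
def historical_averages(completed_runs, min_runs=3, sample_size=10):
    """Average credits, duration, and per-item cost over recent runs."""
    recent = completed_runs[:sample_size]  # last N completed runs
    if len(recent) < min_runs:
        # Fall back to defaults when the sample is too small to be reliable
        return {"has_sufficient_data": False, "total_runs_analyzed": len(recent)}
    n = len(recent)
    total_credits = sum(r["total_credits_used"] for r in recent)
    total_items = sum(r["items_created"] for r in recent)
    return {
        "avg_total_credits": total_credits / n,
        "avg_duration_seconds": sum(r["duration_seconds"] for r in recent) / n,
        "avg_credits_per_item": total_credits / total_items if total_items else 0,
        "total_runs_analyzed": n,
        "has_sufficient_data": True,
    }
```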
_calculate_predictive_analysis(site, historical_averages)
- Purpose: Estimate costs and outputs for next automation run
- Data Sources:
- Queries pending items in each stage (keywords, clusters, ideas, tasks, content, images)
- Uses historical averages for per-item cost estimation
- Returns:
- stages: Array of 7 stage predictions (pending_items, estimated_credits, estimated_output)
- totals: Aggregated totals with 20% safety buffer recommendation
- confidence: High/Medium/Low based on historical data availability
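The totals aggregation with the 20% buffer can be sketched as follows (field names mirror the response shape; rounding the buffer up is an assumption, chosen so the recommendation never understates cost, and it matches the sample response where 569 estimated credits yield a 114-credit buffer):

```python
import math

def predictive_totals(stage_predictions, buffer_pct=0.20):
    """Aggregate per-stage predictions and recommend a safety buffer."""
    total_credits = sum(s["estimated_credits"] for s in stage_predictions)
    return {
        "total_pending_items": sum(s["pending_items"] for s in stage_predictions),
        "total_estimated_credits": total_credits,
        "total_estimated_output": sum(s["estimated_output"] for s in stage_predictions),
        # 20% buffer, rounded up so users are not under-provisioned
        "recommended_buffer_credits": math.ceil(total_credits * buffer_pct),
    }
```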
_get_attention_items(site)
- Purpose: Count items needing attention
- Returns:
- skipped_ideas: Content ideas in "skipped" status
- failed_content: Content with failed generation
- failed_images: Images with failed generation
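Conceptually this is three status-filtered counts, one per model. A list-based sketch (the real helper would issue `.filter(status=...).count()` querysets against Keywords/Content/Images; signature and status strings here are illustrative):

```python
def attention_items(idea_statuses, content_statuses, image_statuses):
    """Count items in states that need user attention."""
    return {
        "skipped_ideas": idea_statuses.count("skipped"),
        "failed_content": content_statuses.count("failed"),
        "failed_images": image_statuses.count("failed"),
    }
```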
3. API Endpoints
3.1 overview_stats (NEW)
Route: GET /api/v1/automation/overview_stats/
Response Structure:
{
"run_statistics": {
"total_runs": 42,
"completed_runs": 38,
"failed_runs": 2,
"running_runs": 1,
"total_credits_used": 24680,
"total_credits_last_30_days": 8420,
"avg_credits_per_run": 587,
"avg_duration_last_7_days_seconds": 2280
},
"predictive_analysis": {
"stages": [
{
"stage_number": 1,
"stage_name": "Keyword Clustering",
"pending_items": 150,
"estimated_credits": 45,
"estimated_output": 12
},
// ... stages 2-7
],
"totals": {
"total_pending_items": 413,
"total_estimated_credits": 569,
"total_estimated_output": 218,
"recommended_buffer_credits": 114
},
"confidence": "high"
},
"attention_items": {
"skipped_ideas": 5,
"failed_content": 2,
"failed_images": 1
},
"historical_averages": {
"avg_total_credits": 587,
"avg_duration_seconds": 2400,
"avg_credits_per_item": 2.69,
"total_runs_analyzed": 10,
"has_sufficient_data": true,
"stages": [/* stage averages */]
}
}
Use Cases:
- Display on overview page dashboard
- Show predictive cost estimates before running
- Alert users to failed/skipped items
- Display historical trends
3.2 history (ENHANCED)
Route: GET /api/v1/automation/history/?page=1&page_size=20
New Fields Added:
- run_number: Sequential number (1, 2, 3...)
- run_title: Human-readable title (e.g., "mysite.com #42")
- duration_seconds: Total run time in seconds
- stages_completed: Count of successfully completed stages
- stages_failed: Count of failed stages
- initial_snapshot: Snapshot of pending items at run start
- summary: Aggregated metrics
  - items_processed: Total input items
  - items_created: Total output items
  - content_created: Content pieces generated
  - images_generated: Images created
- stage_statuses: Array of 7 stage statuses ["completed", "pending", "skipped", "failed"]
Response Structure:
{
"runs": [
{
"run_id": "run_20260113_140523_manual",
"run_number": 42,
"run_title": "mysite.com #42",
"status": "completed",
"trigger_type": "manual",
"started_at": "2026-01-13T14:05:23Z",
"completed_at": "2026-01-13T14:43:44Z",
"duration_seconds": 2301,
"total_credits_used": 569,
"current_stage": 7,
"stages_completed": 7,
"stages_failed": 0,
"initial_snapshot": { /* snapshot data */ },
"summary": {
"items_processed": 263,
"items_created": 218,
"content_created": 25,
"images_generated": 24
},
"stage_statuses": [
"completed", "completed", "completed", "completed",
"completed", "completed", "completed"
]
}
// ... more runs
],
"pagination": {
"page": 1,
"page_size": 20,
"total_count": 42,
"total_pages": 3
}
}
Features:
- Pagination support (configurable page size)
- Ordered by most recent first
- Clickable run titles for navigation to detail page
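The pagination metadata in the response can be derived as below (a sketch matching the sample response, where 42 runs at a page size of 20 give 3 pages; the helper name is illustrative):

```python
import math

def pagination_meta(total_count, page, page_size=20):
    """Build the pagination object returned alongside the runs array."""
    return {
        "page": page,
        "page_size": page_size,
        "total_count": total_count,
        # At least one page even when there are no runs yet
        "total_pages": max(1, math.ceil(total_count / page_size)),
    }
```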
3.3 run_detail (NEW)
Route: GET /api/v1/automation/run_detail/?run_id=abc123
Response Structure:
{
"run": {
"run_id": "run_20260113_140523_manual",
"run_number": 42,
"run_title": "mysite.com #42",
"status": "completed",
"trigger_type": "manual",
"started_at": "2026-01-13T14:05:23Z",
"completed_at": "2026-01-13T14:43:44Z",
"duration_seconds": 2301,
"current_stage": 7,
"total_credits_used": 569,
"initial_snapshot": { /* snapshot */ }
},
"stages": [
{
"stage_number": 1,
"stage_name": "Keyword Clustering",
"status": "completed",
"credits_used": 45,
"items_processed": 150,
"items_created": 12,
"duration_seconds": 204,
"error": "",
"comparison": {
"historical_avg_credits": 48,
"historical_avg_items": 11,
"credit_variance_pct": -6.3,
"items_variance_pct": 9.1
}
}
// ... stages 2-7
],
"efficiency": {
"credits_per_item": 2.61,
"items_per_minute": 5.68,
"credits_per_minute": 14.84
},
"insights": [
{
"type": "success",
"severity": "info",
"message": "This run was 12% more credit-efficient than average"
},
{
"type": "variance",
"severity": "warning",
"message": "Content Writing used 23% higher credits than average"
}
],
"historical_comparison": {
"avg_credits": 587,
"avg_duration_seconds": 2400,
"avg_credits_per_item": 2.69
}
}
Features:
- Full stage-by-stage breakdown
- Automatic variance detection (flags >20% differences)
- Efficiency metrics calculation
- Auto-generated insights (success, warnings, errors)
- Historical comparison for context
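The variance detection behind the insights can be sketched as a pure function (the >20% threshold comes from the features list above; the message wording and function name are illustrative):

```python
def variance_insight(stage_name, credits_used, historical_avg, threshold=20.0):
    """Return a warning insight when a stage deviates >threshold% from average."""
    if not historical_avg:
        return None  # no baseline, nothing to compare against
    pct = (credits_used - historical_avg) / historical_avg * 100
    if abs(pct) <= threshold:
        return None  # within normal variance
    direction = "higher" if pct > 0 else "lower"
    return {
        "type": "variance",
        "severity": "warning",
        "message": f"{stage_name} used {abs(pct):.0f}% {direction} credits than average",
    }
```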
4. Data Quality & Edge Cases Handled
Run Numbering
- Uses count-based approach for consistency with legacy runs
- No database schema changes required
- Calculated on-the-fly per request
Historical Averages
- Minimum 3 completed runs required for reliability
- Falls back to conservative defaults if insufficient data
- Uses last 10 runs to balance recency with sample size
Stage Status Logic
- credits_used > 0 OR items_created > 0 → "completed"
- error present in result → "failed"
- run completed but stage <= current_stage and no data → "skipped"
- otherwise → "pending"
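The four rules above can be expressed as one decision function, evaluated in the order listed (parameter names are illustrative, not the view's actual signature):

```python
def stage_status(credits_used, items_created, error, run_completed,
                 stage_number, current_stage):
    """Classify a stage as completed, failed, skipped, or pending."""
    if credits_used > 0 or items_created > 0:
        return "completed"  # the stage produced data
    if error:
        return "failed"     # an error was recorded in the result
    if run_completed and stage_number <= current_stage:
        return "skipped"    # run finished, stage was reached but did nothing
    return "pending"
```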
Division by Zero Protection
- All calculations check denominators before dividing
- Returns 0 or default values for edge cases
- No exceptions thrown for missing data
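A small guard like this (hypothetical helper name) covers every ratio in the endpoints, e.g. credits_per_item and items_per_minute:

```python
def safe_div(numerator, denominator, default=0):
    """Divide, returning a default instead of raising on a zero denominator."""
    return numerator / denominator if denominator else default

safe_div(569, 218)  # credits_per_item for the sample run
safe_div(569, 0)    # a run with no items: returns 0, no ZeroDivisionError
```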
Multi-Tenancy Security
- All queries filtered by site from request context
- Run detail endpoint validates run belongs to site
- No cross-site data leakage possible
5. Testing Recommendations
API Testing (Phase 1 Complete)
# Test overview stats
curl -H "Authorization: Bearer <token>" \
"http://localhost:8000/api/v1/automation/overview_stats/"
# Test history with pagination
curl -H "Authorization: Bearer <token>" \
"http://localhost:8000/api/v1/automation/history/?page=1&page_size=10"
# Test run detail
curl -H "Authorization: Bearer <token>" \
"http://localhost:8000/api/v1/automation/run_detail/?run_id=run_20260113_140523_manual"
Edge Cases to Test
- New site with 0 runs
- Site with 1-2 completed runs (insufficient historical data)
- Run with failed stages
- Run with skipped stages
- Very short runs (<1 minute)
- Very long runs (>1 hour)
- Runs with 0 credits used (all skipped)
- Invalid run_id in run_detail
6. Next Steps: Frontend Implementation
Phase 2: Frontend Overview Page (4-5 hours)
Components to Build:
- RunStatisticsSummary.tsx - Display run_statistics with trends
- PredictiveCostAnalysis.tsx - Show predictive_analysis with donut chart
- AttentionItemsAlert.tsx - Display attention_items warnings
- EnhancedRunHistory.tsx - Table with clickable run titles
- Update AutomationOverview.tsx to integrate all components
Phase 3: Frontend Detail Page (5-6 hours)
Components to Build:
- AutomationRunDetail.tsx - Main page component with routing
- RunSummaryCard.tsx - Display run header info
- PipelineFlowVisualization.tsx - Visual stage flow diagram
- StageAccordion.tsx - Expandable stage details
- CreditBreakdownChart.tsx - Recharts donut chart
- RunTimeline.tsx - Chronological stage timeline
- EfficiencyMetrics.tsx - Display efficiency stats
- InsightsPanel.tsx - Show auto-generated insights
Phase 4: Polish & Testing (3-4 hours)
- Loading states and error handling
- Empty states (no runs, no data)
- Mobile responsive design
- Dark mode support
- Accessibility (ARIA labels, keyboard navigation)
- Unit tests with Vitest
7. Performance Considerations
Database Queries
- overview_stats: ~8-10 queries (optimized with select_related)
- history: 1 query + pagination (efficient)
- run_detail: 1 query for run + 1 for historical averages
Optimization Opportunities (Future)
- Cache historical_averages for 1 hour (low churn)
- Add database indexes on site_id, started_at, status
- Consider materialized view for run statistics
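The index suggestion could be declared on the run model roughly as follows. This is a hypothetical sketch: the actual model class and field names live in /backend/igny8_core/business/automation/models.py and may differ.

```python
from django.db import models

class AutomationRun(models.Model):
    # ... existing fields (site, started_at, status, ...) ...

    class Meta:
        indexes = [
            # Speeds up run-number counting and history ordering per site
            models.Index(fields=["site_id", "started_at"]),
            # Speeds up completed/failed/running counts per site
            models.Index(fields=["site_id", "status"]),
        ]
```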
- Add Redis caching for frequently accessed runs
Estimated Load Impact
- Typical overview page load: 500-800ms
- Run detail page load: 200-400ms
- History pagination: 100-200ms per page
8. Documentation Links
- Main UX Plan: /docs/plans/AUTOMATION_RUNS_DETAIL_VIEW_UX_PLAN.md
- Implementation File: /backend/igny8_core/business/automation/views.py
- Related Models:
  - /backend/igny8_core/business/automation/models.py
  - /backend/igny8_core/business/keywords/models.py
  - /backend/igny8_core/business/clusters/models.py
  - /backend/igny8_core/business/content_ideas/models.py
9. Success Metrics (Post-Deployment)
User Engagement
- Track clicks on run titles in history (expect 40%+ CTR)
- Monitor time spent on detail pages (target: 2-3 min avg)
- Track usage of predictive analysis before runs
Performance
- P95 API response time < 1 second
- Frontend initial load < 2 seconds
- No errors in error tracking (Sentry/equivalent)
Business Impact
- Reduction in support tickets about "why did this cost X credits?"
- Increase in manual automation triggers (due to cost predictability)
- User feedback scores (NPS) improvement
End of Phase 1 Implementation Log
Next Action: Begin Phase 2 - Frontend Overview Page Components