2 Commits

Desktop  67283ad3e7  2025-11-16 23:28:40 +05:00
  docs: Add Phase 0 implementation to CHANGELOG

Desktop  72a31b2edb  2025-11-16 23:24:44 +05:00
  Phase 0: Foundation & Credit System - Initial implementation
  - Updated CREDIT_COSTS constants to Phase 0 format with new operations
  - Enhanced CreditService with get_credit_cost() method and operation_type support
  - Created AccountModuleSettings model for module enable/disable functionality
  - Added AccountModuleSettingsSerializer and ViewSet
  - Registered module settings API endpoint: /api/v1/system/settings/account-modules/
  - Maintained backward compatibility with existing credit system

659 changed files with 24351 additions and 115018 deletions

.gitignore vendored

@@ -45,11 +45,6 @@ backend/.venv/
 dist/
 *.egg
-# Celery scheduler database (binary file, regenerated by celery beat)
-celerybeat-schedule
-**/celerybeat-schedule
-backend/celerybeat-schedule
 # Environment variables
 .env
 .env.local

CHANGELOG.md Normal file

@@ -0,0 +1,619 @@
# IGNY8 Changelog
**Current Version:** `1.0.0`
**Last Updated:** 2025-01-XX
**Purpose:** Complete changelog of all changes, fixes, and features. Only updated after user confirmation.
---
## 📋 Changelog Management
**IMPORTANT**: This changelog is only updated after user confirmation that a fix or feature is complete and working.
**For AI Agents**: Read `docs/00-DOCUMENTATION-MANAGEMENT.md` before making any changes to this file.
### Changelog Structure
Each entry follows this format:
- **Version**: Semantic versioning (MAJOR.MINOR.PATCH)
- **Date**: YYYY-MM-DD format
- **Type**: Added, Changed, Fixed, Deprecated, Removed, Security
- **Description**: Clear description of the change
- **Affected Areas**: Modules, components, or features affected
- **Documentation**: Reference to updated documentation files
---
## [Unreleased]
### Added
- **Phase 0: Foundation & Credit System - Initial Implementation**
- Updated `CREDIT_COSTS` constants to Phase 0 format with new operations
- Added new credit costs: `linking` (8 credits), `optimization` (1 credit per 200 words), `site_structure_generation` (50 credits), `site_page_generation` (20 credits)
- Maintained backward compatibility with legacy operation names (`ideas`, `content`, `images`, `reparse`)
- Enhanced `CreditService` with `get_credit_cost()` method for dynamic cost calculation
- Supports variable costs based on operation type and amount (word count, etc.)
- Updated `check_credits()` and `deduct_credits()` to support both legacy `required_credits` parameter and new `operation_type`/`amount` parameters
- Maintained full backward compatibility with existing code
- Created `AccountModuleSettings` model for module enable/disable functionality
- One settings record per account (get_or_create pattern)
- Enable/disable flags for all 8 modules: `planner_enabled`, `writer_enabled`, `thinker_enabled`, `automation_enabled`, `site_builder_enabled`, `linker_enabled`, `optimizer_enabled`, `publisher_enabled`
- Helper method `is_module_enabled(module_name)` for easy module checking
- Added `AccountModuleSettingsSerializer` and `AccountModuleSettingsViewSet`
- API endpoint: `/api/v1/system/settings/account-modules/`
- Custom action: `check/(?P<module_name>[^/.]+)` to check if a specific module is enabled
- Automatic account assignment on create
- Unified API Standard v1.0 compliant
- **Affected Areas**: Billing module (`constants.py`, `services.py`), System module (`settings_models.py`, `settings_serializers.py`, `settings_views.py`, `urls.py`)
- **Documentation**: See `docs/planning/phases/PHASE-0-FOUNDATION-CREDIT-SYSTEM.md` for complete details
- **Impact**: Foundation for credit-only system and module-based feature access control
- **Planning Documents Organization**: Organized architecture and implementation planning documents
- Created `docs/planning/` directory for all planning documents
- Moved `IGNY8-HOLISTIC-ARCHITECTURE-PLAN.md` to `docs/planning/`
- Moved `IGNY8-IMPLEMENTATION-PLAN.md` to `docs/planning/`
- Moved `Igny8-phase-2-plan.md` to `docs/planning/`
- Moved `CONTENT-WORKFLOW-DIAGRAM.md` to `docs/planning/`
- Moved `ARCHITECTURE_CONTEXT.md` to `docs/planning/`
- Moved `sample-usage-limits-credit-system` to `docs/planning/`
- Created `docs/refactor/` directory for refactoring plans
- Updated `README.md` to reflect new document structure
- **Impact**: Better organization of planning documents, easier to find and maintain
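The variable-cost behavior described in the Phase 0 entry can be sketched as follows. This is a minimal illustration, not the actual `CreditService` code: the table layout, the `VARIABLE_COST_UNITS` mapping, and the round-up policy are assumptions; only the operation names and values come from the entry above.

```python
import math
from typing import Optional

# Cost table from the Phase 0 entry above; the legacy operations ("ideas",
# "content", "images", "reparse") are omitted because their values are not
# stated in this changelog.
CREDIT_COSTS = {
    "linking": 8,
    "optimization": 1,  # 1 credit per 200 words
    "site_structure_generation": 50,
    "site_page_generation": 20,
}

# Operations whose cost scales with an amount (here: words per billing unit).
VARIABLE_COST_UNITS = {"optimization": 200}


def get_credit_cost(operation_type: str, amount: Optional[int] = None) -> int:
    """Return the credit cost for an operation, scaling by amount if variable."""
    if operation_type not in CREDIT_COSTS:
        raise KeyError(f"Unknown operation type: {operation_type}")
    base = CREDIT_COSTS[operation_type]
    unit = VARIABLE_COST_UNITS.get(operation_type)
    if unit is None:
        return base  # fixed-cost operation
    if amount is None:
        raise ValueError(f"{operation_type!r} requires an amount (e.g. word count)")
    # Round up so a partial unit still costs a full credit.
    return base * math.ceil(amount / unit)
```

For example, a 450-word optimization spans three 200-word units and would cost 3 credits under this sketch.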
### Changed
- **API Documentation Consolidation**: Consolidated all API documentation into single comprehensive reference
- Created `docs/API-COMPLETE-REFERENCE.md` - Unified API documentation covering all endpoints, authentication, response formats, error handling, rate limiting, permissions, and integration examples
- Removed redundant documentation files:
- `docs/API-DOCUMENTATION.md` (consolidated into complete reference)
- `docs/DOCUMENTATION-SUMMARY.md` (consolidated into complete reference)
- `unified-api/API-ENDPOINTS-ANALYSIS.md` (consolidated into complete reference)
- `unified-api/API-STANDARD-v1.0.md` (consolidated into complete reference)
- New unified document includes: complete endpoint reference, authentication guide, response format standards, error handling, rate limiting, pagination, roles & permissions, tenant/site/sector scoping, integration examples (Python, JavaScript, cURL, PHP), testing & debugging, and change management
- **Impact**: Single source of truth for all API documentation, easier to maintain and navigate
### Added
- Unified API Standard v1.0 implementation
- API Monitor page for endpoint health monitoring
- CRUD operations monitoring for Planner and Writer modules
- Sidebar API status indicator for aws-admin accounts
- **Health Check Endpoint**: `GET /api/v1/system/ping/` - Public health check endpoint per API Standard v1.0 requirement
- Returns unified format: `{success: true, data: {status: 'ok'}}`
- Tagged as 'System' in Swagger/ReDoc documentation
- Public endpoint (AllowAny permission)
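A client-side health check against the ping endpoint might look like the sketch below; the base URL is a placeholder, and only the payload shape is taken from the entry above.

```python
import json
from urllib import request


def is_healthy(payload: dict) -> bool:
    """Check the unified ping payload: {"success": true, "data": {"status": "ok"}}."""
    return payload.get("success") is True and payload.get("data", {}).get("status") == "ok"


def check_api_health(base_url: str) -> bool:
    """Fetch the public ping endpoint and evaluate its payload."""
    with request.urlopen(f"{base_url}/api/v1/system/ping/") as resp:
        return is_healthy(json.load(resp))
```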
### Changed
- All API endpoints now return unified response format (`{success, data, message, errors}`)
- Frontend `fetchAPI` wrapper automatically extracts data from unified format
- All error responses follow unified format with `request_id` tracking
- Rate limiting configured with scoped throttles per module
- **Integration Views**: All integration endpoints now use unified response format
- Replaced 40+ raw `Response()` calls with `success_response()`/`error_response()` helpers
- All responses include `request_id` for tracking
- Updated frontend components to handle extracted data format
- **API Documentation**: Updated Swagger/ReDoc description to include all public endpoints
- Added `/api/v1/system/ping/` to public endpoints list
- Updated schema extensions to properly tag ping endpoint
- **AI Framework Refactoring**: Removed hardcoded model defaults, IntegrationSettings is now the single source of truth
- Removed `MODEL_CONFIG` dictionary with hardcoded defaults
- Removed Django settings `DEFAULT_AI_MODEL` fallback
- `get_model_config()` now requires `account` parameter and raises clear errors if IntegrationSettings not configured
- All AI functions now require account-specific model configuration
- Removed orphan code: `get_model()`, `get_max_tokens()`, `get_temperature()` helper functions
- Removed unused exports from `__init__.py`: `register_function`, `list_functions`, `get_model`, `get_max_tokens`, `get_temperature`
- **Impact**: Each account must configure their own AI models in IntegrationSettings
- **Documentation**: See `backend/igny8_core/ai/REFACTORING-IMPLEMENTED.md` for complete details
### Fixed
- Keyword edit form now correctly populates existing values
- Auto-cluster function now works correctly with unified API format
- ResourceDebugOverlay now correctly extracts data from unified API responses
- All frontend pages now correctly handle unified API response format
- **Integration Views**: Fixed all integration endpoints not using unified response format
- `_test_openai()` and `_test_runware()` methods now use unified format
- `generate_image()`, `create()`, `save_settings()` methods now use unified format
- `get_image_generation_settings()` and `task_progress()` methods now use unified format
- All error responses now include `request_id` and follow unified format
- Fixed OpenAI integration endpoint error handling - invalid API keys now return 400 (Bad Request) instead of 401 (Unauthorized)
- **Frontend Components**: Updated to work with unified format
- `ValidationCard.tsx` - Removed dual-format handling, now works with extracted data
- `Integration.tsx` - Simplified to work with unified format
- `ImageGenerationCard.tsx` - Updated to work with extracted data format
- **Frontend Authentication**: Fixed `getAuthToken is not defined` error in `authStore.ts`
- Updated `refreshUser()` to use `fetchAPI()` instead of manual fetch with `getAuthToken()`
- Removed error throwing from catch block to prevent error accumulation
- **Frontend Error Handling**: Fixed console error accumulation
- `ResourceDebugOverlay.tsx` now silently ignores 404 errors for request-metrics endpoint
- Removed error throwing from `refreshUser()` catch block to prevent error spam
- **AI Framework Error Handling**: Improved error messages and exception handling
- `AIEngine._handle_error()` now preserves exception types for better error messages
- All AI function errors now include proper `error_type` (ConfigurationError, AccountNotFound, etc.)
- Fixed "Task failed - exception details unavailable" by improving error type preservation
- Error messages now clearly indicate when IntegrationSettings are missing or misconfigured
---
## [1.1.1] - 2025-01-XX
### Security
- **CRITICAL**: Fixed `AIPromptViewSet` security vulnerability - changed from `permission_classes = []` (allowing unauthenticated access) to `IsAuthenticatedAndActive + HasTenantAccess`
- Added `IsEditorOrAbove` permission check for `save_prompt` and `reset_prompt` actions in `AIPromptViewSet`
- All billing ViewSets now require `IsAuthenticatedAndActive + HasTenantAccess` for proper tenant isolation
- `CreditTransactionViewSet` now requires `IsAdminOrOwner` per API Standard v1.0 (billing/transactions require admin/owner)
- All system settings ViewSets now use standard permissions (`IsAuthenticatedAndActive + HasTenantAccess`)
- All auth ViewSets now explicitly include `IsAuthenticatedAndActive + HasTenantAccess` for proper tenant isolation
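The permission stack described above composes an "authenticated and active" check with tenant isolation. A framework-free sketch of that composition follows; the class names referenced in the entries are DRF permission classes, and the logic here is an assumption about their intent, not their implementation.

```python
from dataclasses import dataclass


@dataclass
class User:
    is_authenticated: bool
    is_active: bool
    account_id: int


def is_authenticated_and_active(user: User) -> bool:
    return user.is_authenticated and user.is_active


def has_tenant_access(user: User, resource_account_id: int) -> bool:
    # Tenant isolation: a user may only touch rows belonging to their account.
    return user.account_id == resource_account_id


def may_access(user: User, resource_account_id: int) -> bool:
    """Both checks must pass, mirroring IsAuthenticatedAndActive + HasTenantAccess."""
    return is_authenticated_and_active(user) and has_tenant_access(user, resource_account_id)
```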
### Changed
- **Auth Endpoints**: All authentication endpoints (`RegisterView`, `LoginView`, `ChangePasswordView`, `MeView`) now use unified response format with `success_response()` and `error_response()` helpers
- All responses now include `request_id` for error tracking
- Error responses follow unified format with `error` and `errors` fields
- Success responses follow unified format with `success`, `data`, and `message` fields
- **Billing Module**: Refactored `CreditUsageViewSet` and `CreditTransactionViewSet` to inherit from `AccountModelViewSet` instead of manual account filtering
- Account filtering now handled automatically by base class
- Improved code maintainability and consistency
- **System Settings**: All 5 system settings ViewSets now use standard permission classes
- `SystemSettingsViewSet`, `AccountSettingsViewSet`, `UserSettingsViewSet`, `ModuleSettingsViewSet`, `AISettingsViewSet`
- Write operations require `IsAdminOrOwner` per standard
- **Integration Settings**: Added `HasTenantAccess` permission to `IntegrationSettingsViewSet` for proper tenant isolation
- **Auth ViewSets**: Added explicit standard permissions to all auth ViewSets
- `UsersViewSet`, `AccountsViewSet`, `SubscriptionsViewSet`, `SiteUserAccessViewSet` now include `IsAuthenticatedAndActive + HasTenantAccess`
- `SiteViewSet`, `SectorViewSet` now include `IsAuthenticatedAndActive + HasTenantAccess`
### Fixed
- Fixed auth endpoints not returning unified format (were using raw `Response()` instead of helpers)
- Fixed missing `request_id` in auth endpoint responses
- Fixed inconsistent error response format in auth endpoints
- Fixed billing ViewSets not using base classes (manual account filtering replaced with `AccountModelViewSet`)
- Fixed all ViewSets missing standard permissions (`IsAuthenticatedAndActive + HasTenantAccess`)
### Documentation
- Updated implementation plan to reflect completion of all remaining API Standard v1.0 items
- All 8 remaining items from audit completed (100% compliance achieved)
- **API Standard v1.0**: Full compliance achieved
- All 10 audit tasks completed and verified
- All custom @action methods use unified response format
- All ViewSets use proper base classes, pagination, throttles, and permissions
- All error responses include `request_id` tracking
- No raw `Response()` calls remaining (except file downloads)
- All endpoints documented in Swagger/ReDoc with proper tags
---
## [1.1.0] - 2025-01-XX
### Added
#### Unified API Standard v1.0
- **Response Format Standardization**
- All endpoints return unified format: `{success: true/false, data: {...}, message: "...", errors: {...}}`
- Paginated responses include `success`, `count`, `next`, `previous`, `results`
- Error responses include `success: false`, `error`, `errors`, `request_id`
- Response helper functions: `success_response()`, `error_response()`, `paginated_response()`
- **Custom Exception Handler**
- Centralized exception handling in `backend/igny8_core/api/exception_handlers.py`
- All exceptions wrapped in unified format
- Proper HTTP status code mapping (400, 401, 403, 404, 409, 422, 429, 500)
- Debug information included in development mode
- **Custom Pagination**
- `CustomPageNumberPagination` class with unified format support
- Default page size: 10, max: 100
- Dynamic page size via `page_size` query parameter
- Includes `success` field in paginated responses
- **Base ViewSets**
- `AccountModelViewSet` - Handles account isolation and unified CRUD responses
- `SiteSectorModelViewSet` - Extends account isolation with site/sector filtering
- All CRUD operations (create, retrieve, update, destroy) return unified format
- **Rate Limiting**
- `DebugScopedRateThrottle` with debug bypass for development
- Scoped rate limits per module (planner, writer, system, billing, auth)
- AI function rate limits (10/min for expensive operations)
- Bypass for aws-admin accounts and admin/developer roles
- Rate limit headers: `X-Throttle-Limit`, `X-Throttle-Remaining`, `X-Throttle-Reset`
- **Request ID Tracking**
- `RequestIDMiddleware` generates unique UUID for each request
- Request ID included in all error responses
- Request ID in response headers: `X-Request-ID`
- Used for log correlation and debugging
- **API Monitor**
- New page: `/settings/api-monitor` for endpoint health monitoring
- Monitors API status (HTTP response) and data status (page population)
- Endpoint groups: Core Health, Auth, Planner, Writer, System, Billing, CRUD Operations
- Sorting by status (errors first, then warnings, then healthy)
- Real-time endpoint health checks with configurable refresh interval
- Only accessible to aws-admin accounts
- **Sidebar API Status Indicator**
- Visual indicator circles for each endpoint group
- Color-coded status (green = healthy, yellow = warning)
- Abbreviations: CO, AU, PM, WM, PC, WC, SY
- Only visible and active for aws-admin accounts on API monitor page
- Prevents console errors on other pages
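The response helpers named above might be shaped roughly as follows; the exact signatures in the project may differ, and the envelope keys are taken from the format described in this section.

```python
import uuid


def success_response(data=None, message=""):
    """Unified success envelope: {success, data, message, errors}."""
    return {"success": True, "data": data, "message": message, "errors": None}


def error_response(error, errors=None, request_id=None):
    """Unified error envelope; request_id enables log correlation."""
    return {
        "success": False,
        "error": error,
        "errors": errors or {},
        "request_id": request_id or str(uuid.uuid4()),
    }


def paginated_response(results, count, next_url=None, previous_url=None):
    """Unified paginated envelope with results at the top level."""
    return {
        "success": True,
        "count": count,
        "next": next_url,
        "previous": previous_url,
        "results": results,
    }
```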
### Changed
#### Backend Refactoring
- **Planner Module** - All ViewSets refactored to unified format
- `KeywordViewSet` - CRUD + `auto_cluster` action
- `ClusterViewSet` - CRUD + `auto_generate_ideas` action
- `ContentIdeasViewSet` - CRUD + `bulk_queue_to_writer` action
- **Writer Module** - All ViewSets refactored to unified format
- `TasksViewSet` - CRUD + `auto_generate_content` action
- `ContentViewSet` - CRUD + `generate_image_prompts` action
- `ImagesViewSet` - CRUD + `generate_images` action
- **System Module** - All ViewSets refactored to unified format
- `AIPromptViewSet` - CRUD + `get_by_type`, `save_prompt`, `reset_prompt` actions
- `SystemSettingsViewSet`, `AccountSettingsViewSet`, `UserSettingsViewSet`
- `ModuleSettingsViewSet`, `AISettingsViewSet`
- `IntegrationSettingsViewSet` - Integration management and testing
- **Billing Module** - All ViewSets refactored to unified format
- `CreditBalanceViewSet` - `balance` action
- `CreditUsageViewSet` - `summary`, `limits` actions
- `CreditTransactionViewSet` - CRUD operations
- **Auth Module** - All ViewSets refactored to unified format
- `AuthViewSet` - `register`, `login`, `change_password`, `refresh_token`, `reset_password`
- `UsersViewSet` - CRUD + `create_user`, `update_role` actions
- `GroupsViewSet`, `AccountsViewSet`, `SubscriptionsViewSet`
- `SiteUserAccessViewSet`, `PlanViewSet`, `IndustryViewSet`, `SeedKeywordViewSet`
#### Frontend Refactoring
- **fetchAPI Wrapper** (`frontend/src/services/api.ts`)
- Automatically extracts `data` field from unified format responses
- Handles paginated responses (`results` at top level)
- Properly throws errors for `success: false` responses
- Removed redundant `response?.data || response` checks across codebase
- **All Frontend Pages Updated**
- Removed redundant response data extraction
- All pages now correctly consume unified API format
- Error handling standardized across all components
- Pagination handling standardized
- **Component Updates**
- `FormModal` - Now accepts `React.ReactNode` for title prop
- `ComponentCard` - Updated to support status badges in titles
- `ResourceDebugOverlay` - Fixed to extract data from unified format
- `ApiStatusIndicator` - Restricted to aws-admin accounts and API monitor page
### Fixed
#### Bug Fixes
- **Keyword Edit Form** - Now correctly populates existing values when editing
- Added `key` prop to force re-render when form data changes
- Fixed `seed_keyword_id` value handling for select dropdown
- **Auto-Cluster Function** - Now works correctly with unified API format
- Updated `autoClusterKeywords()` to wrap response with `success` field
- Proper error handling and response extraction
- **ResourceDebugOverlay** - Fixed data extraction from unified API responses
- Extracts `data` field from `{success: true, data: {...}}` responses
- Added null safety checks for all property accesses
- Validates data structure before adding to metrics
- **API Response Handling** - Fixed all instances of incorrect data extraction
- Removed `response?.data || response` redundant checks
- Removed `response.results || []` redundant checks
- All API functions now correctly handle unified format
- **React Hooks Error** - Fixed "Rendered more hooks than during the previous render"
- Moved all hooks to top of component before conditional returns
- Fixed `ApiStatusIndicator` component hook ordering
- **TypeScript Errors** - Fixed all type errors related to unified API format
- Added nullish coalescing for `toLocaleString()` calls
- Added null checks before `Object.entries()` calls
- Fixed all undefined property access errors
#### System Health
- **System Status Page** - Fixed redundant data extraction
- Now correctly uses extracted data from `fetchAPI`
- All system metrics display correctly
### Security
- Rate limiting bypass only for aws-admin accounts and admin/developer roles
- Request ID tracking for all API requests
- Centralized error handling prevents information leakage
### Testing
- **Comprehensive Test Suite**
- Created complete unit and integration test suite for Unified API Standard v1.0
- 13 test files with ~115 test methods covering all API components
- Test coverage: 100% of API Standard components
- **Unit Tests** (`backend/igny8_core/api/tests/`)
- `test_response.py` - Tests for response helper functions (18 tests)
- Tests `success_response()`, `error_response()`, `paginated_response()`
- Tests request ID generation and inclusion
- Tests status code mapping and error messages
- `test_exception_handler.py` - Tests for custom exception handler (12 tests)
- Tests all exception types (ValidationError, AuthenticationFailed, PermissionDenied, NotFound, Throttled, etc.)
- Tests debug mode behavior and debug info inclusion
- Tests field-specific and non-field error handling
- `test_permissions.py` - Tests for permission classes (20 tests)
- Tests `IsAuthenticatedAndActive`, `HasTenantAccess`, `IsViewerOrAbove`, `IsEditorOrAbove`, `IsAdminOrOwner`
- Tests role-based access control and tenant isolation
- Tests admin/system account bypass logic
- `test_throttles.py` - Tests for rate limiting (11 tests)
- Tests `DebugScopedRateThrottle` bypass logic (DEBUG mode, env flag, admin/system accounts)
- Tests rate parsing and throttle header generation
- **Integration Tests** (`backend/igny8_core/api/tests/`)
- `test_integration_base.py` - Base test class with common fixtures and helper methods
- `test_integration_planner.py` - Planner module endpoint tests (12 tests)
- Tests CRUD operations for keywords, clusters, ideas
- Tests AI actions (auto_cluster)
- Tests error scenarios and validation
- `test_integration_writer.py` - Writer module endpoint tests (6 tests)
- Tests CRUD operations for tasks, content, images
- Tests error scenarios
- `test_integration_system.py` - System module endpoint tests (5 tests)
- Tests status, prompts, settings, integrations endpoints
- `test_integration_billing.py` - Billing module endpoint tests (5 tests)
- Tests credits, usage, transactions endpoints
- `test_integration_auth.py` - Auth module endpoint tests (8 tests)
- Tests login, register, user management endpoints
- Tests authentication flows and error scenarios
- `test_integration_errors.py` - Error scenario tests (6 tests)
- Tests 400, 401, 403, 404, 429, 500 error responses
- Tests unified error format across all error types
- `test_integration_pagination.py` - Pagination tests (10 tests)
- Tests pagination across all modules
- Tests page size, page parameter, max page size limits
- Tests empty results handling
- `test_integration_rate_limiting.py` - Rate limiting integration tests (7 tests)
- Tests throttle headers presence
- Tests bypass logic for admin/system accounts and DEBUG mode
- Tests different throttle scopes per module
- **Test Verification**
- All tests verify unified response format (`{success, data/results, message, errors, request_id}`)
- All tests verify proper HTTP status codes
- All tests verify error format consistency
- All tests verify pagination format consistency
- All tests verify request ID inclusion
- **Test Documentation**
- Created `backend/igny8_core/api/tests/README.md` with test structure and running instructions
- Created `backend/igny8_core/api/tests/TEST_SUMMARY.md` with comprehensive test statistics
- Created `backend/igny8_core/api/tests/run_tests.py` test runner script
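A minimal check in the spirit of the verification points above might assert the envelope invariants on every response; this helper is a stand-in for illustration, not the project's actual test code.

```python
def assert_unified_format(payload: dict) -> None:
    """Assert the invariants the integration tests verify on every response."""
    assert isinstance(payload.get("success"), bool)
    if payload["success"]:
        # Success responses carry data (or top-level results when paginated).
        assert "data" in payload or "results" in payload
    else:
        # Error responses carry an error message and a request ID for tracking.
        assert "error" in payload
        assert "request_id" in payload
```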
### Documentation
- **OpenAPI/Swagger Integration**
- Installed and configured `drf-spectacular` for OpenAPI 3.0 schema generation
- Created Swagger UI endpoint: `/api/docs/`
- Created ReDoc endpoint: `/api/redoc/`
- Created OpenAPI schema endpoint: `/api/schema/`
- Configured comprehensive API documentation with code samples
- Added custom authentication extensions for JWT Bearer tokens
- **Comprehensive Documentation Files**
- `docs/API-COMPLETE-REFERENCE.md` - Complete unified API reference (consolidated from multiple files)
- Quick start guide
- Endpoint reference
- Code examples (Python, JavaScript, cURL)
- Response format details
- `docs/AUTHENTICATION-GUIDE.md` - Authentication and authorization guide
- JWT Bearer token authentication
- Token management and refresh
- Code examples in Python and JavaScript
- Security best practices
- `docs/ERROR-CODES.md` - Complete error code reference
- HTTP status codes (200, 201, 400, 401, 403, 404, 409, 422, 429, 500)
- Field-specific error messages
- Error handling best practices
- Common error scenarios and solutions
- `docs/RATE-LIMITING.md` - Rate limiting and throttling guide
- Rate limit scopes and limits
- Handling rate limits (429 responses)
- Best practices and code examples
- Request queuing and caching strategies
- `docs/MIGRATION-GUIDE.md` - Migration guide for API consumers
- What changed in v1.0
- Step-by-step migration instructions
- Code examples (before/after)
- Breaking and non-breaking changes
- `docs/WORDPRESS-PLUGIN-INTEGRATION.md` - WordPress plugin integration guide
- Complete PHP API client class
- Authentication implementation
- Error handling
- WordPress admin integration
- Best practices
- `docs/README.md` - Documentation index and quick start
- **OpenAPI Schema Configuration**
- Configured comprehensive API description with features overview
- Added authentication documentation
- Added response format examples
- Added rate limiting documentation
- Added pagination documentation
- Configured endpoint tags (Authentication, Planner, Writer, System, Billing)
- Added code samples in Python and JavaScript
- **Schema Extensions**
- Created `backend/igny8_core/api/schema_extensions.py` for custom authentication
- JWT Bearer token authentication extension
- CSRF-exempt session authentication extension
- Proper OpenAPI security scheme definitions
---
## [1.0.0] - 2025-01-XX
### Added
#### Documentation System
- Complete documentation structure with 7 core documents
- Documentation management system with versioning
- Changelog management system
- DRY principles documentation
- Self-explaining documentation for AI agents
#### Core Features
- Multi-tenancy system with account isolation
- Authentication (login/register) with JWT
- RBAC permissions (Developer, Owner, Admin, Editor, Viewer, System Bot)
- Account > Site > Sector hierarchy
- Multiple sites can be active simultaneously
- Maximum 5 active sectors per site
#### Planner Module
- Keywords CRUD operations
- Keyword import/export (CSV)
- Keyword filtering and organization
- AI-powered keyword clustering
- Clusters CRUD operations
- Content ideas generation from clusters
- Content ideas CRUD operations
- Keyword-to-cluster mapping
- Cluster metrics and analytics
#### Writer Module
- Tasks CRUD operations
- AI-powered content generation
- Content editing and review
- Image prompt extraction
- AI-powered image generation (OpenAI DALL-E, Runware)
- Image management
- WordPress integration (publishing)
#### Thinker Module
- AI prompt management
- Author profile management
- Content strategy management
- Image generation testing
#### System Module
- Integration settings (OpenAI, Runware)
- API key configuration
- Connection testing
- System status and monitoring
#### Billing Module
- Credit balance tracking
- Credit transactions
- Usage logging
- Cost tracking
#### Frontend
- Configuration-driven UI system
- 4 universal templates (Dashboard, Table, Form, System)
- Complete component library
- Zustand state management
- React Router v7 routing
- Progress tracking for AI tasks
- Responsive design
#### Backend
- RESTful API with DRF
- Automatic account isolation
- Site access control
- Celery async task processing
- Progress tracking for Celery tasks
- Unified AI framework
- Database logging
#### AI Functions
- Auto Cluster Keywords
- Generate Ideas
- Generate Content
- Generate Image Prompts
- Generate Images
- Test OpenAI connection
- Test Runware connection
- Test image generation
#### Infrastructure
- Docker-based containerization
- Two-stack architecture (infra, app)
- Caddy reverse proxy
- PostgreSQL database
- Redis cache and Celery broker
- pgAdmin database administration
- FileBrowser file management
### Documentation
#### Documentation Files Created
- `docs/00-DOCUMENTATION-MANAGEMENT.md` - Documentation and changelog management system
- `docs/01-TECH-STACK-AND-INFRASTRUCTURE.md` - Technology stack and infrastructure
- `docs/02-APPLICATION-ARCHITECTURE.md` - Application architecture with workflows
- `docs/03-FRONTEND-ARCHITECTURE.md` - Frontend architecture documentation
- `docs/04-BACKEND-IMPLEMENTATION.md` - Backend implementation reference
- `docs/05-AI-FRAMEWORK-IMPLEMENTATION.md` - AI framework implementation reference
- `docs/06-FUNCTIONAL-BUSINESS-LOGIC.md` - Functional business logic documentation
#### Documentation Features
- Complete workflow documentation
- Feature completeness
- No code snippets (workflow-focused)
- Accurate state reflection
- Cross-referenced documents
- Self-explaining structure for AI agents
---
## Version History
### Current Version: 1.0.0
**Status**: Production
**Date**: 2025-01-XX
### Version Format
- **MAJOR**: Breaking changes, major feature additions, architecture changes
- **MINOR**: New features, new modules, significant enhancements
- **PATCH**: Bug fixes, small improvements, documentation updates
### Version Update Rules
1. **MAJOR**: Only updated when user confirms major release
2. **MINOR**: Updated when user confirms new feature is complete
3. **PATCH**: Updated when user confirms bug fix is complete
**IMPORTANT**: Never update version without user confirmation.
---
## Planned Features
### In Progress
- Planner Dashboard enhancement with KPIs
- Automation & CRON tasks
- Advanced analytics
### Future
- Analytics module enhancements
- Advanced scheduling features
- Additional AI model integrations
- Stripe payment integration
- Plan limits enforcement
- Advanced reporting
- Mobile app support
---
## Notes
- All features are documented in detail in the respective documentation files
- Workflows are complete and accurate
- System is production-ready
- Documentation is maintained and updated regularly
- Changelog is only updated after user confirmation
---
**For AI Agents**: Before making any changes, read `docs/00-DOCUMENTATION-MANAGEMENT.md` for complete guidelines on versioning, changelog management, and DRY principles.


@@ -1,784 +0,0 @@
# IGNY8 Automation System - Detailed Task List for AI Agent
## CRITICAL ANALYSIS
Based on the documentation and current implementation status, I've identified significant issues with the automation system and legacy SiteBuilder references that need systematic resolution.
---
## SECTION 1: UPDATE CLUSTER STATUS AND ASSOCIATE KEYWORDS TO CLUSTERS PROPERLY
The auto-cluster AI function currently sets the status of new clusters to `active`, and keywords are not mapped to clusters when it runs via automation. Update the original auto-cluster AI function to use status `new` instead of `active`, and determine whether the keyword-to-cluster mapping failure lies in the AI function or in the automation.
In fact, the original AI function has both of these issues; once they are fixed, the AI function will work correctly and the automation will also run properly.
## SECTION 2: LEGACY SITEBUILDER/BLUEPRINT REMOVAL
### Task 2.1: Database Models Cleanup
**Files to Remove Completely:**
1. `backend/igny8_core/business/site_building/models.py` - Already stubbed, remove entirely
2. Migration already exists: `0002_remove_blueprint_models.py` - Verify it ran successfully
**Database Verification:**
1. Connect to production database
2. Run SQL: `SELECT tablename FROM pg_tables WHERE tablename LIKE '%blueprint%' OR tablename LIKE '%site_building%';`
3. Expected result: No tables (already dropped)
4. If tables exist, manually run DROP TABLE commands from migration
**Foreign Key Cleanup:**
1. Check `igny8_deployment_records` table - verify `site_blueprint_id` column removed
2. Check `igny8_publishing_records` table - verify `site_blueprint_id` column removed
3. Confirm indexes dropped: `igny8_publishing_recor_site_blueprint_id_des_b7c4e5f8_idx`
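The SQL check in the verification steps above can be wrapped in a small helper whose pattern matching mirrors the `LIKE` clauses; how the table names are fetched (psycopg, Django's connection introspection, or plain `psql`) is left open here.

```python
def find_legacy_tables(table_names):
    """Return any tables the blueprint migration should have dropped.

    Mirrors: SELECT tablename FROM pg_tables
             WHERE tablename LIKE '%blueprint%' OR tablename LIKE '%site_building%';
    """
    return [t for t in table_names if "blueprint" in t or "site_building" in t]


# Expected after cleanup: no matches among the remaining application tables.
remaining = find_legacy_tables(["igny8_keywords", "igny8_clusters"])
assert remaining == []
```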
---
### Task 2.2: Backend Code References Removal
**Phase 2.2.1: Remove Stub Models**
- **File:** `backend/igny8_core/business/site_building/models.py`
- **Action:** Delete entire file
- **Reason:** Contains only stub classes (`SiteBlueprint`, `PageBlueprint`, `SiteBlueprintCluster`, `SiteBlueprintTaxonomy`) with no functionality
**Phase 2.2.2: Remove Entire site_building App**
- **Directory:** `backend/igny8_core/business/site_building/`
- **Action:** Delete entire directory
- **Reason:** All functionality deprecated, no active code
**Files to Delete:**
1. `services/structure_generation_service.py` - Calls deprecated AI function
2. `services/page_generation_service.py` - References PageBlueprint
3. `services/taxonomy_service.py` - Uses SiteBlueprintTaxonomy
4. `services/file_management_service.py` - SiteBuilder file management
5. `tests/` - All test files reference removed models
6. `admin.py` - Already commented out
7. `migrations/` - Keep for database history, but app removal makes them inert
**Phase 2.2.3: Remove site_builder Module**
- **Directory:** `backend/igny8_core/modules/site_builder.backup/`
- **Action:** Delete entire directory (already marked `.backup`)
- **Contains:** Deprecated API endpoints, serializers, views for blueprint management
---
### Task 2.3: Settings Configuration Cleanup
**File:** `backend/igny8_core/settings.py`
**Changes:**
1. Line 56: Already commented out - Remove comment entirely
2. Line 61: Already commented out - Remove comment entirely
3. Verify `INSTALLED_APPS` list is clean
**Verification:**
- Run `python manage.py check` - Should pass
- Run `python manage.py migrate --plan` - Should show no pending site_building migrations
---
### Task 2.4: URL Routing Cleanup
**File:** `backend/igny8_core/urls.py`
**Changes:**
1. Line 42: Comment already exists - Remove comment entirely
2. Verify no routing to `site-builder/` endpoints exists
**Verification:**
- Run Django server
- Attempt to access `/api/v1/site-builder/blueprints/` - Should return 404
- Check API root `/api/v1/` - Should not list site-builder endpoints
---
### Task 2.5: AI Function Removal
**File:** `backend/igny8_core/ai/functions/generate_page_content.py`
**Problem:** This AI function depends on `PageBlueprint` model which no longer exists.
**Action Required:**
1. **DELETE FILE:** `generate_page_content.py` (21 references to PageBlueprint)
2. **UPDATE:** `backend/igny8_core/ai/registry.py` - Remove lazy loader registration
3. **UPDATE:** `backend/igny8_core/ai/engine.py` - Remove from operation type mappings (line 599)
**Verification:**
- Search codebase for `generate_page_content` function calls
- Ensure no active code relies on this function
- Confirm AI function registry no longer lists it
---
### Task 2.6: Backend Import Statement Cleanup
**Files with Import Statements to Update:**
1. **backend/igny8_core/business/integration/services/content_sync_service.py**
- Lines 378, 488: `from igny8_core.business.site_building.models import SiteBlueprint`
- **Action:** Remove import, remove dependent code blocks (lines 382-388, 491-496)
- **Alternative:** Service should use `ContentTaxonomy` directly (post-migration model)
2. **backend/igny8_core/business/integration/services/sync_health_service.py**
- Line 335: `from igny8_core.business.site_building.models import SiteBlueprint, SiteBlueprintTaxonomy`
- **Action:** Remove import, refactor taxonomy checks to use `ContentTaxonomy`
3. **backend/igny8_core/business/publishing/services/adapters/sites_renderer_adapter.py**
- Line 15: `from igny8_core.business.site_building.models import SiteBlueprint`
- **Action:** Entire adapter is deprecated - DELETE FILE
- **Reason:** Designed to deploy SiteBlueprint instances, no longer applicable
4. **backend/igny8_core/business/publishing/services/deployment_readiness_service.py**
- Line 10: `from igny8_core.business.site_building.models import SiteBlueprint`
- **Action:** DELETE FILE or refactor to remove blueprint checks
- **Reason:** Service checks blueprint readiness for deployment
5. **backend/igny8_core/business/publishing/services/deployment_service.py**
- Line 10: `from igny8_core.business.site_building.models import SiteBlueprint`
- **Action:** Remove blueprint-specific deployment methods
---
### Task 2.7: Frontend Files Removal
**Phase 2.7.1: Remove Type Definitions**
- **File:** `frontend/src/types/siteBuilder.ts`
- **Action:** Delete file entirely
- **References:** Used in store and components
**Phase 2.7.2: Remove API Service**
- **File:** `frontend/src/services/siteBuilder.api.ts`
- **Action:** Delete file
- **Contains:** API methods for blueprint CRUD operations
**Phase 2.7.3: Remove Pages**
- **Directory:** `frontend/src/pages/Sites/`
- **Files to Review:**
- `Editor.tsx` - Uses PageBlueprint, SiteBlueprint types (lines 15-36)
- `PageManager.tsx` - Fetches blueprints (lines 126-137)
- `DeploymentPanel.tsx` - Blueprint deployment UI (46 references)
**Action for Pages:**
1. If pages ONLY deal with blueprints - DELETE
2. If pages have mixed functionality - REFACTOR to remove blueprint code
3. Likely DELETE: `Editor.tsx`, `DeploymentPanel.tsx`
4. Likely REFACTOR: `Dashboard.tsx` (remove blueprint widget)
**Phase 2.7.4: Remove Store**
- **File:** `frontend/src/store/siteDefinitionStore.ts`
- **Action:** Review dependencies, likely DELETE
- **Alternative:** If used for non-blueprint purposes, refactor to remove PageBlueprint types
**Phase 2.7.5: Remove Components**
- **File:** `frontend/src/components/sites/SiteProgressWidget.tsx`
- **Action:** DELETE if blueprint-specific
- **Uses:** `blueprintId` prop, calls `fetchSiteProgress(blueprintId)`
---
### Task 2.8: Frontend Import and Reference Cleanup
**Files Requiring Updates:**
1. **frontend/src/services/api.ts**
- Lines 2302-2532: Multiple blueprint-related functions
- **Action:** Remove these function exports:
- `fetchDeploymentReadiness`
- `createSiteBlueprint`, `updateSiteBlueprint`
- `attachClustersToBlueprint`, `detachClustersFromBlueprint`
- `fetchBlueprintsTaxonomies`, `createBlueprintTaxonomy`
- `importBlueprintsTaxonomies`
- `updatePageBlueprint`, `regeneratePageBlueprint`
2. **frontend/src/pages/Planner/Dashboard.tsx**
- Lines 30-31: Commented imports
- **Action:** Remove commented lines entirely
3. **frontend/src/config/pages/tasks.config.tsx**
- Lines 110-111: Logic for `[Site Builder]` task title formatting
- **Action:** Remove special handling, update title display logic
---
### Task 2.9: Sites Renderer Cleanup
**File:** `sites/src/loaders/loadSiteDefinition.ts`
**Current Behavior (Lines 103-159):**
- API load fails → Falls back to blueprint endpoint
- Transforms blueprint to site definition format
**Required Changes:**
1. Remove fallback to blueprint endpoint (lines 116-127)
2. Remove `transformBlueprintToSiteDefinition` function (lines 137-159)
3. If API fails, return proper error instead of fallback
4. Update error messages to remove blueprint references
---
### Task 2.10: Documentation Cleanup
**Files to Remove:**
1. `docs/igny8-pp/TAXONOMY/QUICK-REFERENCE-TAXONOMY.md` - References SiteBuilder removal
2. Update `docs/tech-stack/00-SYSTEM-ARCHITECTURE-MASTER-REFERENCE.md`:
- Remove "Site Blueprints" from feature list (line 45)
- Remove `site_builder/` from architecture diagrams (lines 179-180)
- Remove SiteBuilder from system overview (line 1517)
**Files to Update:**
1. `docs/igny8-pp/01-IGNY8-REST-API-COMPLETE-REFERENCE.md`:
- Remove entire section: "Site Blueprints" (lines 1201-1230)
- Remove "Page Blueprints" section (lines 1230-1247)
- Update deployment endpoints to remove blueprint references
2. `docs/igny8-pp/02-PLANNER-WRITER-WORKFLOW-TECHNICAL-GUIDE.md`:
- Remove SiteBlueprintTaxonomy references (lines 114, 151)
---
### Task 2.11: Test Files Cleanup
**Backend Tests:**
1. DELETE: `backend/igny8_core/ai/tests/test_generate_site_structure_function.py`
2. DELETE: `backend/igny8_core/business/site_building/tests/` (entire directory)
3. DELETE: `backend/igny8_core/business/publishing/tests/test_deployment_service.py`
4. DELETE: `backend/igny8_core/business/publishing/tests/test_publisher_service.py`
5. DELETE: `backend/igny8_core/business/publishing/tests/test_adapters.py`
**Frontend Tests:**
1. DELETE: `frontend/src/__tests__/sites/BulkGeneration.test.tsx`
2. UPDATE: `frontend/src/__tests__/sites/PromptManagement.test.tsx`:
- Remove site_structure_generation prompt type checks (lines 27-28)
3. UPDATE: `frontend/src/__tests__/sites/SiteManagement.test.tsx`:
- Remove `[Site Builder]` task filtering logic (lines 50-51)
---
### Task 2.12: Database Migration Verification
**Critical Checks:**
1. Verify `0002_remove_blueprint_models.py` migration applied in all environments
2. Check for orphaned data:
- Query for any `Tasks` with `taxonomy_id` pointing to deleted SiteBlueprintTaxonomy
- Query for any `ContentIdeas` with old taxonomy foreign keys
3. If orphaned data found, create data migration to:
- Set taxonomy foreign keys to NULL
- Or migrate to ContentTaxonomy if mapping exists
**SQL Verification Queries:**
```sql
-- Check for blueprint tables (should return empty)
SELECT tablename FROM pg_tables
WHERE tablename LIKE '%blueprint%' OR tablename LIKE '%site_building%';
-- Check for foreign key constraints (should return empty)
SELECT constraint_name FROM information_schema.table_constraints
WHERE constraint_name LIKE '%blueprint%';
-- Check for remaining taxonomy references (non-zero counts need manual
-- verification that each id still resolves to a ContentTaxonomy row)
SELECT COUNT(*) FROM igny8_tasks WHERE taxonomy_id IS NOT NULL;
SELECT COUNT(*) FROM igny8_content_ideas WHERE taxonomy_id IS NOT NULL;
```
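If orphaned rows are found, the data migration's core logic amounts to nulling out references that no longer resolve. The sketch below works on plain dicts so the rule is explicit; in the real migration this would be a `RunPython` body operating on the Tasks and ContentIdeas querysets (field name `taxonomy_id` assumed from the queries above).

```python
def null_orphaned_taxonomy_refs(rows, valid_taxonomy_ids):
    """Set taxonomy_id to None on rows whose reference no longer resolves."""
    cleared = 0
    for row in rows:
        tid = row.get("taxonomy_id")
        if tid is not None and tid not in valid_taxonomy_ids:
            row["taxonomy_id"] = None
            cleared += 1
    return cleared  # number of rows repaired, for the migration log
```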
---
## SECTION 3: AUTOMATION PAGE UI IMPROVEMENTS
### Task 3.1: Stage Card Visual Redesign
**Current Issues:**
- Icons too large, taking excessive space
- Stage names not clearly associated with stage numbers
- Inconsistent visual hierarchy
**Required Changes:**
1. **Reduce Icon Size:**
- Current: Large colored square icons
- New: Smaller icons (32x32px instead of current size)
- Position: Top-left of card, not centered
2. **Restructure Stage Header:**
- Move stage name directly below "Stage N" text
- Format: "Stage 1" (bold) / "Keywords → Clusters" (regular weight, smaller font)
- Remove redundant text repetition
3. **Status Badge Positioning:**
- Move from separate row to same line as stage number
- Right-align badge next to stage number
**Layout Example (No Code):**
```
┌─────────────────────────────┐
│ [Icon] Stage 1 [Ready] │
│ Keywords → Clusters │
│ │
│ Total Queue: 7 │
│ Processed: 0 │
│ Remaining: 7 │
└─────────────────────────────┘
```
---
### Task 3.2: Add Progress Bars to Stage Cards
**Implementation Requirements:**
1. **Individual Stage Progress Bar:**
- Display below queue metrics
- Calculate: `(Processed / Total Queue) * 100`
- Visual: Colored bar matching stage color
- Show percentage label
2. **Overall Pipeline Progress Bar:**
- Large bar above all stage cards
- Calculate: `(Sum of Processed Items Across All Stages) / (Sum of Total Queue Across All Stages) * 100`
- Display current stage indicator: "Stage 4/7"
- Show estimated completion time
3. **Progress Bar States:**
- Empty (0%): Gray/outline only
- In Progress (1-99%): Animated gradient
- Complete (100%): Solid color, checkmark icon
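The two progress calculations above reduce to a pair of helpers; note the zero-queue guard so an empty stage renders 0% instead of dividing by zero. Function names here are illustrative, not existing code.

```python
def stage_progress(processed, total):
    """Per-stage percentage: (Processed / Total Queue) * 100, guarded for empty queues."""
    return 0.0 if total == 0 else round(processed / total * 100, 1)

def pipeline_progress(stages):
    """Overall percentage: sum of processed over sum of totals across all stages."""
    done = sum(s["processed"] for s in stages)
    total = sum(s["total"] for s in stages)
    return stage_progress(done, total)
```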
---
### Task 3.3: Add Total Metrics Cards Above Pipeline
**New Component: MetricsSummary Cards**
**Cards to Display (Row above pipeline overview):**
1. **Keywords Card:**
- Total: Count from database
- Processed: Keywords with `status='mapped'`
- Pending: Keywords with `status='new'`
2. **Clusters Card:**
- Total: All clusters for site
- Processed: Clusters with ideas generated
- Pending: Clusters without ideas
3. **Ideas Card:**
- Total: All ideas for site
- Processed: Ideas converted to tasks (`status='in_progress'`)
- Pending: Ideas with `status='new'`
4. **Content Card:**
- Total: All content for site
- Processed: Content with `status='draft'` + all images generated
- Pending: Content without images or in generation
5. **Images Card:**
- Total: All image records for site content
- Processed: Images with `status='generated'`
- Pending: Images with `status='pending'`
**Card Layout:**
- Width: Equal distribution across row
- Display: Icon, Title, Total/Processed/Pending counts
- Color: Match stage colors for visual consistency
---
### Task 3.4: Pipeline Status Card Redesign
**Current:** Wide row with text "Pipeline Status - Ready to run | 22 items pending"
**New Design Requirements:**
1. **Convert to Centered Card:**
- Position: Above stage cards, below metrics summary
- Width: Narrower than full width, centered
- Style: Elevated/shadowed for emphasis
2. **Content Structure:**
- Large status indicator (icon + text)
- Prominent pending items count
- Quick action buttons (Run Now, Pause, Configure)
3. **Status Visual States:**
- Ready: Green pulse animation
- Running: Blue animated progress
- Paused: Yellow warning icon
- Failed: Red alert icon
---
### Task 3.5: Remove/Compact Header Elements
**Changes to Automation Page Header:**
1. **Remove "Pipeline Overview" Section:**
- Delete heading: "📊 Pipeline Overview"
- Delete subtitle: "Complete view of automation pipeline status and pending items"
- Reason: Redundant with visible pipeline cards
2. **Compact Schedule Panel:**
- Current: Large panel with heading, status row, action buttons
- New: Single compact row
- Layout: `[Status Badge] | [Schedule Text] | [Last Run] | [Estimated Credits] | [Configure Button] | [Run Now Button]`
- Remove empty space and excessive padding
---
### Task 3.6: AI Request Delays Implementation
**Problem:** Rapid sequential AI requests may hit rate limits or overload AI service.
**Required Changes:**
1. **Within-Stage Delay (between batches):**
- Location: `AutomationService` class methods
- Add delay after each batch completion before processing next batch
- Configurable: 3-5 seconds (default 3 seconds)
- Implementation point: After each AI function call completes in stage loop
2. **Between-Stage Delay:**
- Add delay after stage completion before triggering next stage
- Configurable: 5-10 seconds (default 5 seconds)
- Implementation point: After `_execute_stage()` returns before incrementing `current_stage`
3. **Configuration:**
- Add to `AutomationConfig` model: `within_stage_delay` (integer, seconds)
- Add to `AutomationConfig` model: `between_stage_delay` (integer, seconds)
- Expose in Configure modal for user adjustment
4. **Logging:**
- Log delay start: "Waiting 3 seconds before next batch..."
- Log delay end: "Delay complete, resuming processing"
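A minimal sketch of the within-stage delay loop, assuming the configured delay is passed in from `AutomationConfig` (the `run_batches` name and callback signature are illustrative, not the actual `AutomationService` API). No delay is inserted after the final batch, so the stage hands over immediately once done.

```python
import time

def run_batches(batches, process_batch, within_stage_delay=3):
    """Process batches sequentially, pausing between them to avoid AI rate limits."""
    for i, batch in enumerate(batches):
        process_batch(batch)
        if i < len(batches) - 1:  # skip the delay after the last batch
            print(f"Waiting {within_stage_delay} seconds before next batch...")
            time.sleep(within_stage_delay)
            print("Delay complete, resuming processing")
```

The between-stage delay would follow the same pattern around `_execute_stage()` calls, using `between_stage_delay` instead.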
---
## SECTION 4: AUTOMATION STAGE PROCESSING FIXES
### Task 4.1: Verify Stage Sequential Processing Logic
**Problem:** Pipeline not following strict sequential stage completion before moving to next stage.
**Analysis Required:**
1. Review `AutomationService.start_automation()` method
2. Verify stage loop waits for 100% completion before `current_stage += 1`
3. Check for any parallel execution logic that bypasses sequential flow
**Verification Steps:**
1. Each stage method (`run_stage_1()` to `run_stage_7()`) must return ONLY after ALL batches processed
2. Stage N+1 should NOT start if Stage N has `pending > 0`
3. Add explicit completion check before stage transition
**Required Fixes:**
- Add validation: Before starting Stage N, verify Stage N-1 has 0 pending items
- If pending items found, log warning and halt automation
- Return error status with message: "Stage N-1 incomplete, cannot proceed to Stage N"
---
### Task 4.2: Fix Batch Size Configuration Reading
**Problem:** Manual "Run Now" only processes 5 keywords instead of respecting configured batch size (20).
**Root Cause Analysis:**
1. Check if `run_stage_1()` reads from `AutomationConfig.stage_1_batch_size`
2. Verify query limit: `Keywords.objects.filter(...)[:batch_size]` uses correct variable
3. Confirm configuration loaded at automation start: `config = AutomationConfig.objects.get(site=self.site)`
**Expected Behavior:**
- If queue has 7 keywords and batch_size = 20: Process all 7 (not limit to 5)
- If queue has 47 keywords and batch_size = 20: Process 20, then next batch of 20, then final 7
- Batch size should be dynamic based on queue size: `min(queue_count, batch_size)`
**Fix Implementation:**
1. Ensure configuration loaded once at automation start
2. Pass batch_size to each stage method
3. Update query to use: `[:min(pending_count, batch_size)]`
4. Log batch selection: "Processing batch 1/3: 20 keywords"
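The expected batching arithmetic can be isolated in a small planner, so the examples above (7 keywords at batch size 20 processed at once; 47 split as 20/20/7) are verifiable independently of the ORM query:

```python
def plan_batches(pending_count, batch_size):
    """Split a queue into batch sizes: full batches plus a final remainder."""
    if batch_size <= 0:
        raise ValueError("batch_size must be positive")
    sizes = []
    remaining = pending_count
    while remaining > 0:
        take = min(remaining, batch_size)  # dynamic: never exceeds the queue
        sizes.append(take)
        remaining -= take
    return sizes
```

Each stage method would then slice its queryset with the next planned size rather than a hard-coded limit.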
---
### Task 4.3: Fix Stage 4 Processing Not Completing Full Queue
**Problem:** Stage 4 (Tasks → Content) not processing all tasks before moving to Stage 5.
**Investigation Steps:**
1. Check `run_stage_4()` implementation in `AutomationService`
2. Verify loop structure: Does it process tasks one-by-one until queue empty?
3. Look for premature loop exit conditions
**Expected Logic:**
```
While tasks with status='pending' exist:
1. Get next task
2. Call generate_content AI function
3. Wait for completion
4. Verify Content created
5. Check if more pending tasks exist
6. If yes, continue loop
7. If no, return stage complete
```
**Common Issues to Check:**
- Loop exits after first task instead of continuing
- No loop at all - only processes one batch
- Error handling breaks loop prematurely
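The expected logic above translates to a drain loop that only returns once no pending tasks remain. This is a behavioral sketch with injected callables, not the real `run_stage_4()` signature:

```python
def run_stage_4(fetch_pending_tasks, generate_content):
    """Drain the task queue completely before returning (expected Stage 4 loop)."""
    produced = 0
    while True:
        pending = fetch_pending_tasks()  # re-query each iteration
        if not pending:
            break  # queue empty -> stage complete
        task = pending[0]
        generate_content(task)  # must block until the Content row exists
        produced += 1
    return produced
```

Re-querying on every iteration is what prevents the "only processes one batch" failure mode: the loop condition is the live queue state, not a snapshot taken once.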
---
### Task 4.4: Fix Stage 5 Not Triggering (Image Prompts Generation)
**Problem:** Automation exits after Stage 4 without generating image prompts.
**Analysis Required:**
1. Verify Stage 4 completion status set correctly
2. Check if Stage 5 start condition is met
3. Review database query in `run_stage_5()`:
- Query: Content with `status='draft'` AND `images_count=0`
- Verify Content records created in Stage 4 have correct status
**Potential Issues:**
1. Content created with status other than 'draft'
2. Images count annotation incorrect (should use `annotate(images_count=Count('images'))`)
3. Stage handover logic doesn't trigger Stage 5
**Fix Steps:**
1. Verify Content model save in Stage 4 sets `status='draft'`
2. Ensure Stage 5 query matches Content records from Stage 4
3. Add logging: "Stage 5: Found X content pieces without images"
4. If X > 0, process; if X = 0, skip stage gracefully
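The Stage 5 selection rule, expressed over plain rows so it can be checked against what Stage 4 actually writes (in Django this would be roughly `Content.objects.annotate(images_count=Count('images')).filter(status='draft', images_count=0)` — model and field names assumed):

```python
def stage_5_queue(content_rows):
    """Content eligible for image-prompt generation: draft status, no images yet."""
    return [c for c in content_rows
            if c["status"] == "draft" and c["images_count"] == 0]
```

If Stage 4 saves content with any status other than `draft`, this filter returns an empty list and Stage 5 silently skips — which matches the observed bug.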
---
### Task 4.5: Add Stage Handover Validation
**New Logic Required Between Each Stage:**
1. **Pre-Stage Validation:**
- Before starting Stage N, check Stage N-1 completion:
- Query pending items for Stage N-1
- If pending > 0: Log error, halt automation
- If pending = 0: Log success, proceed
2. **Post-Stage Validation:**
- After Stage N completes, verify:
- All input items processed
- Expected output items created
- No errors in stage result
- Log validation result before moving to Stage N+1
3. **Validation Logging:**
- Stage 1 → Stage 2: "Verified: 0 keywords pending, 8 clusters created"
- Stage 2 → Stage 3: "Verified: 0 clusters pending, 56 ideas created"
- Stage 3 → Stage 4: "Verified: 0 ideas pending, 56 tasks created"
- Stage 4 → Stage 5: "Verified: 0 tasks pending, 56 content pieces created"
- Stage 5 → Stage 6: "Verified: 0 content without images, 224 prompts created"
- Stage 6 → Stage 7: "Verified: 0 pending images, 224 images generated"
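The handover gate can be a single helper called before each stage transition; the message formats mirror the logging spec above (the exact wording of the per-stage nouns is illustrative):

```python
def validate_handover(stage_num, pending_in_prev, created_count, created_noun):
    """Gate between stages: refuse to start Stage N unless Stage N-1 fully drained."""
    if pending_in_prev > 0:
        return (False, f"Stage {stage_num - 1} incomplete, cannot proceed to Stage {stage_num}")
    return (True, f"Verified: 0 items pending, {created_count} {created_noun} created")
```

The automation loop would log the message in either case and halt the run when the first element is `False`.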
---
### Task 4.6: Implement Dynamic Batch Size Logic
**Problem:** Fixed batch sizes don't adapt to actual queue sizes.
**Required Smart Batch Logic:**
1. **For Stages 1, 2, 3, 5:**
- If `queue_count <= batch_size`: Process ALL items in one batch
- If `queue_count > batch_size`: Split into batches
2. **For Stage 4 (Tasks → Content):**
- Always process one task at a time (sequential)
- Reason: Content generation is expensive, better control
- Batch size config for Stage 4 can be deprecated
3. **For Stage 6 (Images):**
- Process one image at a time (current behavior)
- Reason: Image generation has rate limits
**Configuration Update:**
- Stage 1-3, 5: Batch size applies
- Stage 4, 6: Batch size ignored (always 1)
- Update Configure modal to clarify batch size usage per stage
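The per-stage rule collapses to one function: sequential stages always return 1 (or 0 on an empty queue), batched stages take the smaller of queue size and configured size. Stage numbers in the set are taken from the rules above.

```python
SEQUENTIAL_STAGES = {4, 6}  # content and image generation always run one at a time

def effective_batch_size(stage, queue_count, configured_batch_size):
    """Resolve the batch size actually used for a stage run."""
    if stage in SEQUENTIAL_STAGES:
        return 1 if queue_count > 0 else 0
    return min(queue_count, configured_batch_size)
```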
---
## SECTION 5: STAGE CARD LAYOUT RESTRUCTURE
### Task 5.1: Add Missing Stage 5 Card (Content → Image Prompts)
**Problem:** Current UI combines Stages 3 & 4 into one card, Stage 5 missing.
**Required Change:**
- Create separate card for Stage 5
- Display: "Content → Image Prompts"
- Queue metrics: Content without images (not total content)
- Show progress bar for prompt extraction
---
### Task 5.2: Separate Stages 3 & 4 into Individual Cards
**Current:** One card shows "Ideas → Tasks → Content" with nested metrics.
**New Structure:**
1. **Stage 3 Card:** "Ideas → Tasks"
- Total Queue: Ideas with `status='new'`
- Processed: Ideas converted to tasks
- Progress: Task creation count
2. **Stage 4 Card:** "Tasks → Content"
- Total Queue: Tasks with `status='pending'`
- Processed: Tasks with `status='completed'`
- Progress: Content generation count
---
### Task 5.3: Restructure Stage Card Rows
**New Layout Requirements:**
**Row 1 (Stages 1-4):**
- Stage 1: Keywords → Clusters
- Stage 2: Clusters → Ideas
- Stage 3: Ideas → Tasks
- Stage 4: Tasks → Content
**Row 2 (Stages 5-8):**
- Stage 5: Content → Image Prompts
- Stage 6: Image Prompts → Images
- Stage 7: Review Gate (with action buttons)
- Stage 8: Status Summary (new informational card)
**Responsive Behavior:**
- Desktop: 4 cards per row
- Tablet: 2 cards per row
- Mobile: 1 card per row (vertical stack)
---
### Task 5.4: Design Stage 7 Card (Review Gate)
**Unique Requirements:**
1. **Visual Distinction:**
- Different color scheme (amber/orange warning color)
- Icon: Stop sign or review icon
- Border: Dashed or highlighted
2. **Content:**
- Title: "Manual Review Gate"
- Status: "Automation Stops Here"
- Count: Number of content pieces ready for review
- Two buttons:
- "Go to Review Page" (navigates to Writer Content page filtered by status='review')
- "Publish Without Review" (disabled initially, placeholder for future feature)
3. **Button Behavior:**
- Review button: Active when count > 0
- Publish button: Disabled with tooltip "Coming soon"
---
### Task 5.5: Design Stage 8 Card (Status Summary)
**New Informational Card:**
**Purpose:** Display current automation run status without queue processing.
**Content:**
1. **Title:** "Current Status"
2. **Large Status Icon:** Based on run status
- Running: Animated spinner
- Completed: Checkmark
- Failed: X icon
- Paused: Pause icon
3. **Metrics Display:**
- Run ID
- Started at timestamp
- Current stage indicator
- Total credits used
- Completion percentage
4. **Visual Style:**
- No queue metrics
- No action buttons
- Read-only information display
- Distinct styling (different background color, no hover effects)
---
### Task 5.6: Adjust Card Width for New Layout
**Current:** Stage cards likely using equal width across full viewport.
**New Requirements:**
- Row 1 (4 cards): Each card 23% width (with 2% gap)
- Row 2 (4 cards): Same width distribution
- Stage 8 card: Can be wider or styled differently as summary card
**Implementation Considerations:**
- Use CSS Grid or Flexbox for responsive layout
- Ensure consistent spacing between cards
- Maintain card aspect ratio for visual balance
---
## SECTION 6: ADDITIONAL ENHANCEMENTS
### Task 6.1: Add Credit Usage Tracking per Stage
**Value Addition:** Real-time visibility into credit consumption.
**Implementation:**
1. Track credits used at end of each stage in `stage_N_result` JSON field
2. Display in stage card: "Credits Used: X"
3. Add running total in overall pipeline progress bar
4. Compare estimated vs actual credits used
5. Alert if actual exceeds estimated by >20%
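The >20% overrun alert from item 5 is a one-liner worth pinning down, since the edge cases (zero estimate, exactly 20%) are easy to get wrong. A sketch, with the threshold expressed as a fraction:

```python
def credit_overrun_alert(estimated, actual, threshold=0.20):
    """True when actual credits exceed the estimate by strictly more than threshold."""
    if estimated <= 0:
        return actual > 0  # any spend against a zero estimate is an overrun
    return (actual - estimated) / estimated > threshold
```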
---
### Task 6.2: Add Estimated Completion Time per Stage
**Value Addition:** Predictable automation runtime for planning.
**Implementation:**
1. Calculate average time per item based on historical runs
2. Estimate: `Remaining Items * Average Time per Item`
3. Display in stage card: "ETA: 45 minutes"
4. Update dynamically as items process
5. Store metrics in database for accuracy improvement over time
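The ETA formula in item 2 could be sketched as follows, with per-item durations in seconds (as they might be stored from historical runs) and the result in minutes for display:

```python
def estimate_eta_minutes(remaining_items, durations_seconds):
    """ETA = remaining items * average seconds per item, from historical runs."""
    if not durations_seconds or remaining_items <= 0:
        return 0.0  # no history or nothing left -> no meaningful estimate
    avg = sum(durations_seconds) / len(durations_seconds)
    return round(remaining_items * avg / 60, 1)
```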
---
### Task 6.3: Add Error Rate Monitoring
**Value Addition:** Proactive issue detection.
**Implementation:**
1. Track error count per stage
2. Display: "Errors: X (Y%)"
3. Highlight stages with >5% error rate
4. Add "View Errors" button to navigate to error log
5. Set up alerts for error rate spikes
---
### Task 6.4: Add Stage Completion Percentage
**Value Addition:** Clear progress visualization.
**Implementation:**
1. Calculate: `(Completed Items / Total Items) * 100`
2. Display as progress bar in stage card
3. Color code:
- Green: >75%
- Yellow: 25-75%
- Red: <25%
4. Animate progress bar during active stages
5. Show exact percentage in text format
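The color bands in item 3 leave the boundary values ambiguous; one consistent reading (75% and 25% both fall in the yellow band, since yellow is specified as 25-75 inclusive) looks like:

```python
def progress_color(percent):
    """Map a completion percentage to the stage-card bar color."""
    if percent > 75:
        return "green"
    if percent >= 25:
        return "yellow"
    return "red"
```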
---
### Task 6.5: Add Stage Start/End Timestamps
**Value Addition:** Audit trail for automation runs.
**Implementation:**
1. Store start/end timestamps in `stage_N_result`
2. Display in stage card: "Started: 10:30 AM | Ended: 11:15 AM"
3

625
README.md
View File

@@ -1,358 +1,385 @@
# IGNY8 - AI-Powered SEO Content Platform # IGNY8 Platform
**Version:** 1.0.0 Full-stack SaaS platform for SEO keyword management and AI-driven content generation, built with Django REST Framework and React.
**License:** Proprietary
**Website:** https://igny8.com **Last Updated:** 2025-01-XX
--- ---
## What is IGNY8? ## 🏗️ Architecture
IGNY8 is a full-stack SaaS platform that combines AI-powered content generation with intelligent SEO management. It helps content creators, marketers, and agencies streamline their content workflow from keyword research to published articles. - **Backend**: Django 5.2+ with Django REST Framework (Port 8010/8011)
- **Frontend**: React 19 with TypeScript and Vite (Port 5173/8021)
- **Database**: PostgreSQL 15
- **Task Queue**: Celery with Redis
- **Reverse Proxy**: Caddy (HTTPS on port 443)
- **Deployment**: Docker-based containerization
### Key Features ## 📁 Project Structure
- 🔍 **Smart Keyword Management** - Import, cluster, and organize keywords with AI
- ✍️ **AI Content Generation** - Generate SEO-optimized blog posts using GPT-4
- 🖼️ **AI Image Creation** - Auto-generate featured and in-article images
- 🔗 **Internal Linking** - AI-powered link suggestions for SEO
- 📊 **Content Optimization** - Analyze and score content quality
- 🔄 **WordPress Integration** - Bidirectional sync with WordPress sites
- 📈 **Usage-Based Billing** - Credit system for AI operations
- 👥 **Multi-Tenancy** - Manage multiple sites and teams
---
## Repository Structure
This monorepo contains two main applications:
``` ```
igny8/ igny8/
├── backend/ # Django REST API + Celery ├── backend/ # Django backend
├── frontend/ # React + Vite SPA │ ├── igny8_core/ # Django project
├── master-docs/ # Architecture documentation │ │ ├── modules/ # Feature modules (Planner, Writer, System, Billing, Auth)
└── docker-compose.app.yml # Docker deployment config │ │ ├── ai/ # AI framework
│ │ ├── api/ # API base classes
│ │ └── middleware/ # Custom middleware
│ ├── Dockerfile
│ └── requirements.txt
├── frontend/ # React frontend
│ ├── src/
│ │ ├── pages/ # Page components
│ │ ├── services/ # API clients
│ │ ├── components/ # UI components
│ │ ├── config/ # Configuration files
│ │ └── stores/ # Zustand stores
│ ├── Dockerfile
│ ├── Dockerfile.dev # Development mode
│ └── vite.config.ts
├── docs/ # Complete documentation
│ ├── 00-DOCUMENTATION-MANAGEMENT.md # Documentation & changelog management (READ FIRST)
│ ├── 01-TECH-STACK-AND-INFRASTRUCTURE.md
│ ├── 02-APPLICATION-ARCHITECTURE.md
│ ├── 03-FRONTEND-ARCHITECTURE.md
│ ├── 04-BACKEND-IMPLEMENTATION.md
│ ├── 05-AI-FRAMEWORK-IMPLEMENTATION.md
│ ├── 06-FUNCTIONAL-BUSINESS-LOGIC.md
│ ├── API-COMPLETE-REFERENCE.md # Complete unified API documentation
│ ├── planning/ # Architecture & implementation planning documents
│ │ ├── IGNY8-HOLISTIC-ARCHITECTURE-PLAN.md # Complete architecture plan
│ │ ├── IGNY8-IMPLEMENTATION-PLAN.md # Step-by-step implementation plan
│ │ ├── Igny8-phase-2-plan.md # Phase 2 feature specifications
│ │ ├── CONTENT-WORKFLOW-DIAGRAM.md # Content workflow diagrams
│ │ ├── ARCHITECTURE_CONTEXT.md # Architecture context reference
│ │ └── sample-usage-limits-credit-system # Credit system specification
│ └── refactor/ # Refactoring plans and documentation
├── CHANGELOG.md # Version history and changes (only updated after user confirmation)
└── docker-compose.app.yml
``` ```
**Separate Repository:**
- [igny8-wp-integration](https://github.com/alorig/igny8-wp-integration) - WordPress bridge plugin
--- ---
## Quick Start ## 🚀 Quick Start
### Prerequisites ### Prerequisites
- **Python 3.11+** - Docker & Docker Compose
- **Node.js 18+** - Node.js 18+ (for local development)
- **PostgreSQL 14+** - Python 3.11+ (for local development)
- **Redis 7+**
- **Docker** (optional, recommended for local development)
### Local Development with Docker ### Development Setup
1. **Clone the repository** 1. **Navigate to the project directory:**
```powershell ```bash
git clone https://github.com/alorig/igny8-app.git cd /data/app/igny8
cd igny8
``` ```
2. **Set environment variables** 2. **Backend Setup:**
```bash
Create `.env` file in `backend/` directory: cd backend
```env pip install -r requirements.txt
SECRET_KEY=your-secret-key-here python manage.py migrate
DEBUG=True python manage.py createsuperuser
DATABASE_URL=postgresql://postgres:postgres@db:5432/igny8 python manage.py runserver
REDIS_URL=redis://redis:6379/0
OPENAI_API_KEY=your-openai-key
RUNWARE_API_KEY=your-runware-key
``` ```
3. **Start services** 3. **Frontend Setup:**
```powershell ```bash
docker-compose -f docker-compose.app.yml up --build cd frontend
npm install
npm run dev
``` ```
4. **Access applications** 4. **Access:**
- Frontend: http://localhost:5173 - Frontend: http://localhost:5173
- Backend API: http://localhost:8000 - Backend API: http://localhost:8011/api/
- API Docs: http://localhost:8000/api/docs/ - Admin: http://localhost:8011/admin/
- Django Admin: http://localhost:8000/admin/
### Manual Setup (Without Docker) ### Docker Setup
#### Backend Setup ```bash
# Build images
docker build -f backend/Dockerfile -t igny8-backend ./backend
docker build -f frontend/Dockerfile.dev -t igny8-frontend-dev ./frontend
```powershell # Run with docker-compose
cd backend docker-compose -f docker-compose.app.yml up
# Create virtual environment
python -m venv .venv
.\.venv\Scripts\Activate.ps1
# Install dependencies
pip install -r requirements.txt
# Run migrations
python manage.py migrate
# Create superuser
python manage.py createsuperuser
# Run development server
python manage.py runserver
``` ```
In separate terminals, start Celery: For complete installation guide, see [docs/01-TECH-STACK-AND-INFRASTRUCTURE.md](docs/01-TECH-STACK-AND-INFRASTRUCTURE.md).
```powershell
# Celery worker
celery -A igny8_core worker -l info
# Celery beat (scheduled tasks)
celery -A igny8_core beat -l info
```
#### Frontend Setup
```powershell
cd frontend
# Install dependencies
npm install
# Start dev server
npm run dev
```
--- ---
## Project Architecture ## 📚 Features
### System Overview ### ✅ Implemented
``` - **Foundation**: Multi-tenancy system, Authentication (login/register), RBAC permissions
User Interface (React) - **Planner Module**: Keywords, Clusters, Content Ideas (full CRUD, filtering, pagination, bulk operations, CSV import/export, AI clustering)
- **Writer Module**: Tasks, Content, Images (full CRUD, AI content generation, AI image generation)
REST API (Django) - **Thinker Module**: Prompts, Author Profiles, Strategies, Image Testing
- **System Module**: Settings, Integrations (OpenAI, Runware), AI Prompts
┌───────┴────────┐ - **Billing Module**: Credits, Transactions, Usage Logs
│ │ - **AI Functions**: 5 AI operations (Auto Cluster, Generate Ideas, Generate Content, Generate Image Prompts, Generate Images)
Database AI Engine - **Frontend**: Complete component library, 4 master templates, config-driven UI system
(PostgreSQL) (Celery + OpenAI) - **Backend**: REST API with tenant isolation, Site > Sector hierarchy, Celery async tasks
- **WordPress Integration**: Direct publishing to WordPress sites
WordPress Plugin - **Development**: Docker Compose setup, hot reload, TypeScript + React
(Bidirectional Sync)
### 🚧 In Progress
- Planner Dashboard enhancement with KPIs
- Automation & CRON tasks
- Advanced analytics
### 🔄 Planned
- Analytics module enhancements
- Advanced scheduling features
- Additional AI model integrations
---
## 🔗 API Documentation
### Interactive Documentation
- **Swagger UI**: `https://api.igny8.com/api/docs/`
- **ReDoc**: `https://api.igny8.com/api/redoc/`
- **OpenAPI Schema**: `https://api.igny8.com/api/schema/`
### API Complete Reference
**[API Complete Reference](docs/API-COMPLETE-REFERENCE.md)** - Comprehensive unified API documentation (single source of truth)
- Complete endpoint reference (100+ endpoints across all modules)
- Authentication & authorization guide
- Response format standards (unified format: `{success, data, message, errors, request_id}`)
- Error handling
- Rate limiting (scoped by operation type)
- Pagination
- Roles & permissions
- Tenant/site/sector scoping
- Integration examples (Python, JavaScript, cURL, PHP)
- Testing & debugging
- Change management
### API Standard Features
- ✅ **Unified Response Format** - Consistent JSON structure for all endpoints
- ✅ **Layered Authorization** - Authentication → Tenant → Role → Site/Sector
- ✅ **Centralized Error Handling** - All errors in unified format with request_id
- ✅ **Scoped Rate Limiting** - Different limits per operation type (10-100/min)
- ✅ **Tenant Isolation** - Account/site/sector scoping
- ✅ **Request Tracking** - Unique request ID for debugging
- ✅ **100% Implemented** - All endpoints use unified format
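
The unified envelope can be consumed with a small helper that returns `data` on success and raises otherwise. A minimal sketch, assuming only the `{success, data, message, errors, request_id}` field names documented above; the `ApiError` class and `unwrap` function are illustrative, not part of the IGNY8 codebase:

```python
# Minimal sketch of consuming the unified response envelope.
# Only the field names (success, data, message, errors, request_id)
# come from the documented format; the helper itself is illustrative.

class ApiError(Exception):
    def __init__(self, message, errors=None, request_id=None):
        super().__init__(message)
        self.errors = errors or {}
        self.request_id = request_id  # include this when reporting bugs

def unwrap(envelope: dict):
    """Return `data` on success, raise ApiError (carrying request_id) otherwise."""
    if envelope.get("success"):
        return envelope.get("data")
    raise ApiError(
        envelope.get("message", "Request failed"),
        errors=envelope.get("errors"),
        request_id=envelope.get("request_id"),
    )

# Example envelope as described by the response format standard:
ok = {"success": True, "data": {"id": 1}, "message": "OK",
      "errors": None, "request_id": "abc-123"}
print(unwrap(ok))  # {'id': 1}
```

Keeping the `request_id` on the raised error is what makes the request-tracking feature useful in client logs.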
### Quick API Example
```bash
# Login
curl -X POST https://api.igny8.com/api/v1/auth/login/ \
-H "Content-Type: application/json" \
-d '{"email":"user@example.com","password":"password"}'
# Get keywords (with token)
curl -X GET https://api.igny8.com/api/v1/planner/keywords/ \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json"
```

### Additional API Guides
- **[Authentication Guide](docs/AUTHENTICATION-GUIDE.md)** - Detailed JWT authentication guide
- **[Error Codes Reference](docs/ERROR-CODES.md)** - Complete error code reference
- **[Rate Limiting Guide](docs/RATE-LIMITING.md)** - Rate limiting and throttling details
- **[Migration Guide](docs/MIGRATION-GUIDE.md)** - Migrating to API v1.0
- **[WordPress Plugin Integration](docs/WORDPRESS-PLUGIN-INTEGRATION.md)** - WordPress integration guide
For backend implementation details, see [docs/04-BACKEND-IMPLEMENTATION.md](docs/04-BACKEND-IMPLEMENTATION.md).
---
## 📖 Documentation
All documentation is consolidated in the `/docs/` folder.
**⚠️ IMPORTANT FOR AI AGENTS**: Before making any changes, read:
1. **[00-DOCUMENTATION-MANAGEMENT.md](docs/00-DOCUMENTATION-MANAGEMENT.md)** - Versioning, changelog, and DRY principles
2. **[CHANGELOG.md](CHANGELOG.md)** - Current version and change history
### Core Documentation
0. **[00-DOCUMENTATION-MANAGEMENT.md](docs/00-DOCUMENTATION-MANAGEMENT.md)** ⚠️ **READ FIRST**
- Documentation and changelog management system
- Versioning system (Semantic Versioning)
- Changelog update rules (only after user confirmation)
- DRY principles and standards
- AI agent instructions
1. **[01-TECH-STACK-AND-INFRASTRUCTURE.md](docs/01-TECH-STACK-AND-INFRASTRUCTURE.md)**
- Technology stack overview
- Infrastructure components
- Docker deployment architecture
- Fresh installation guide
- External service integrations
2. **[02-APPLICATION-ARCHITECTURE.md](docs/02-APPLICATION-ARCHITECTURE.md)**
- IGNY8 application architecture
- System hierarchy and relationships
- User roles and access control
- Module organization
- Complete workflows
- Data models and relationships
- Multi-tenancy architecture
- API architecture
- Security architecture
3. **[03-FRONTEND-ARCHITECTURE.md](docs/03-FRONTEND-ARCHITECTURE.md)**
- Frontend architecture
- Project structure
- Routing system
- Template system
- Component library
- State management
- API integration
- Configuration system
- All pages and features
4. **[04-BACKEND-IMPLEMENTATION.md](docs/04-BACKEND-IMPLEMENTATION.md)**
- Backend architecture
- Project structure
- Models and relationships
- ViewSets and API endpoints
- Serializers
- Celery tasks
- Middleware
- All modules (Planner, Writer, System, Billing, Auth)
5. **[05-AI-FRAMEWORK-IMPLEMENTATION.md](docs/05-AI-FRAMEWORK-IMPLEMENTATION.md)**
- AI framework architecture and code structure
- All 5 AI functions (technical implementation)
- AI function execution flow
- Progress tracking
- Cost tracking
- Prompt management
- Model configuration
6. **[06-FUNCTIONAL-BUSINESS-LOGIC.md](docs/06-FUNCTIONAL-BUSINESS-LOGIC.md)**
- Complete functional and business logic documentation
- All workflows and processes
- All features and functions
- How the application works from business perspective
- Credit system details
- WordPress integration
- Data flow and state management
### Quick Start Guide
**For AI Agents**: Start with [00-DOCUMENTATION-MANAGEMENT.md](docs/00-DOCUMENTATION-MANAGEMENT.md) to understand versioning, changelog, and DRY principles.
1. **New to IGNY8?** Start with [01-TECH-STACK-AND-INFRASTRUCTURE.md](docs/01-TECH-STACK-AND-INFRASTRUCTURE.md) for technology overview
2. **Understanding the System?** Read [02-APPLICATION-ARCHITECTURE.md](docs/02-APPLICATION-ARCHITECTURE.md) for complete architecture
3. **Frontend Development?** See [03-FRONTEND-ARCHITECTURE.md](docs/03-FRONTEND-ARCHITECTURE.md) for all frontend details
4. **Backend Development?** See [04-BACKEND-IMPLEMENTATION.md](docs/04-BACKEND-IMPLEMENTATION.md) for all backend details
5. **Working with AI?** See [05-AI-FRAMEWORK-IMPLEMENTATION.md](docs/05-AI-FRAMEWORK-IMPLEMENTATION.md) for AI framework implementation
6. **Understanding Business Logic?** See [06-FUNCTIONAL-BUSINESS-LOGIC.md](docs/06-FUNCTIONAL-BUSINESS-LOGIC.md) for complete workflows and features
7. **What's New?** Check [CHANGELOG.md](CHANGELOG.md) for recent changes
### Finding Information
**By Topic:**
- **API Documentation**: [API-COMPLETE-REFERENCE.md](docs/API-COMPLETE-REFERENCE.md) - Complete unified API reference (single source of truth)
- **Infrastructure & Deployment**: [01-TECH-STACK-AND-INFRASTRUCTURE.md](docs/01-TECH-STACK-AND-INFRASTRUCTURE.md)
- **Application Architecture**: [02-APPLICATION-ARCHITECTURE.md](docs/02-APPLICATION-ARCHITECTURE.md)
- **Frontend Development**: [03-FRONTEND-ARCHITECTURE.md](docs/03-FRONTEND-ARCHITECTURE.md)
- **Backend Development**: [04-BACKEND-IMPLEMENTATION.md](docs/04-BACKEND-IMPLEMENTATION.md)
- **AI Framework Implementation**: [05-AI-FRAMEWORK-IMPLEMENTATION.md](docs/05-AI-FRAMEWORK-IMPLEMENTATION.md)
- **Business Logic & Workflows**: [06-FUNCTIONAL-BUSINESS-LOGIC.md](docs/06-FUNCTIONAL-BUSINESS-LOGIC.md)
- **Changes & Updates**: [CHANGELOG.md](CHANGELOG.md)
- **Documentation Management**: [00-DOCUMENTATION-MANAGEMENT.md](docs/00-DOCUMENTATION-MANAGEMENT.md) ⚠️ **For AI Agents**
**By Module:**
- **Planner**: See [02-APPLICATION-ARCHITECTURE.md](docs/02-APPLICATION-ARCHITECTURE.md) (Module Organization) and [04-BACKEND-IMPLEMENTATION.md](docs/04-BACKEND-IMPLEMENTATION.md) (Planner Module)
- **Writer**: See [02-APPLICATION-ARCHITECTURE.md](docs/02-APPLICATION-ARCHITECTURE.md) (Module Organization) and [04-BACKEND-IMPLEMENTATION.md](docs/04-BACKEND-IMPLEMENTATION.md) (Writer Module)
- **Thinker**: See [03-FRONTEND-ARCHITECTURE.md](docs/03-FRONTEND-ARCHITECTURE.md) (Thinker Pages) and [04-BACKEND-IMPLEMENTATION.md](docs/04-BACKEND-IMPLEMENTATION.md) (System Module)
- **System**: See [04-BACKEND-IMPLEMENTATION.md](docs/04-BACKEND-IMPLEMENTATION.md) (System Module)
- **Billing**: See [04-BACKEND-IMPLEMENTATION.md](docs/04-BACKEND-IMPLEMENTATION.md) (Billing Module)
---
## 🛠️ Development
### Technology Stack
**Backend:**
- Django 5.2+
- Django REST Framework
- PostgreSQL 15
- Celery 5.3+
- Redis 7
- OpenAI API (content generation)

**Frontend:**
- React 19
- TypeScript 5.7+
- Vite 6.1+
- Tailwind CSS 4.0+
- Zustand 5.0+
- React Router v7

**WordPress Plugin:**
- PHP 7.4+ (WordPress compatibility)
- REST API integration
- Bidirectional sync

**Infrastructure:**
- Docker & Docker Compose
- Caddy (Reverse Proxy)
- Portainer (Container Management)
### System Capabilities
- **Multi-Tenancy**: Complete account isolation with automatic filtering
- **Planner Module**: Keywords, Clusters, Content Ideas management
- **Writer Module**: Tasks, Content, Images generation and management
- **Thinker Module**: Prompts, Author Profiles, Strategies, Image Testing
- **System Module**: Settings, Integrations, AI Prompts
- **Billing Module**: Credits, Transactions, Usage Logs
- **AI Functions**: 5 AI operations (Auto Cluster, Generate Ideas, Generate Content, Generate Image Prompts, Generate Images)
---

## 🔒 Documentation & Changelog Management

### Versioning System
- **Format**: Semantic Versioning (MAJOR.MINOR.PATCH)
- **Current Version**: `1.0.0`
- **Location**: `CHANGELOG.md` (root directory)
- **Rules**: Only updated after user confirmation that fix/feature is complete

### Changelog Management
- **Location**: `CHANGELOG.md` (root directory)
- **Rules**: Only updated after user confirmation
- **Structure**: Added, Changed, Fixed, Deprecated, Removed, Security
- **For Details**: See [00-DOCUMENTATION-MANAGEMENT.md](docs/00-DOCUMENTATION-MANAGEMENT.md)

### DRY Principles

**Core Principle**: Always use existing, predefined, standardized components, utilities, functions, and configurations.

**Frontend**: Use existing templates, components, stores, contexts, utilities, and Tailwind CSS
**Backend**: Use existing base classes, AI framework, services, and middleware

**For Complete Guidelines**: See [00-DOCUMENTATION-MANAGEMENT.md](docs/00-DOCUMENTATION-MANAGEMENT.md)

**⚠️ For AI Agents**: Read `docs/00-DOCUMENTATION-MANAGEMENT.md` at the start of every session.

---

## 📝 License

[Add license information]

---

## 📞 Support

For questions or clarifications about the documentation, refer to the specific document in the `/docs/` folder or contact the development team.

---

## How IGNY8 Works

### Content Creation Workflow

```
1. Import Keywords
2. AI Clusters Keywords
3. Generate Content Ideas
4. Create Writer Tasks
5. AI Generates Content
6. AI Creates Images
7. Publish to WordPress
8. Sync Status Back
```

### WordPress Integration

The WordPress bridge plugin (`igny8-wp-integration`) creates a bidirectional connection:
- **IGNY8 → WordPress:** Publish AI-generated content to WordPress
- **WordPress → IGNY8:** Sync post status updates back to IGNY8

**Setup:**
1. Install WordPress plugin on your site
2. Generate API key in IGNY8 app
3. Connect plugin using email, password, and API key
4. Plugin syncs automatically

---

## Documentation

Comprehensive documentation is available in the `master-docs/` directory:

- **[MASTER_REFERENCE.md](./MASTER_REFERENCE.md)** - Complete system architecture and navigation
- **[API-COMPLETE-REFERENCE.md](./master-docs/API-COMPLETE-REFERENCE.md)** - Full API documentation
- **[02-APPLICATION-ARCHITECTURE.md](./master-docs/02-APPLICATION-ARCHITECTURE.md)** - System design
- **[04-BACKEND-IMPLEMENTATION.md](./master-docs/04-BACKEND-IMPLEMENTATION.md)** - Backend details
- **[03-FRONTEND-ARCHITECTURE.md](./master-docs/03-FRONTEND-ARCHITECTURE.md)** - Frontend details
- **[WORDPRESS-PLUGIN-INTEGRATION.md](./master-docs/WORDPRESS-PLUGIN-INTEGRATION.md)** - Plugin integration guide

---

## Development Workflow

### Running Tests
```powershell
# Backend tests
cd backend
python manage.py test
# Frontend tests
cd frontend
npm run test
```
### Code Quality
```powershell
# Frontend linting
cd frontend
npm run lint
```
### Building for Production
```powershell
# Backend
cd backend
python manage.py collectstatic
# Frontend
cd frontend
npm run build
```
---
## API Overview
**Base URL:** `https://api.igny8.com/api/v1/`
**Authentication:** JWT Bearer token
**Key Endpoints:**
- `/auth/login/` - User authentication
- `/planner/keywords/` - Keyword management
- `/planner/clusters/` - Keyword clusters
- `/writer/tasks/` - Content tasks
- `/writer/content/` - Generated content
- `/integration/integrations/` - WordPress integrations
**Interactive Docs:**
- Swagger UI: https://api.igny8.com/api/docs/
- ReDoc: https://api.igny8.com/api/redoc/
See [API-COMPLETE-REFERENCE.md](./master-docs/API-COMPLETE-REFERENCE.md) for full documentation.
---
## Multi-Tenancy
IGNY8 supports complete account isolation:
```
Account (Organization)
├── Users (with roles: owner, admin, editor, viewer)
├── Sites (multiple WordPress sites)
└── Sectors (content categories)
└── Keywords, Clusters, Content
```
All data is automatically scoped to the authenticated user's account.
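
Account scoping means every query is filtered by the caller's account before anything else runs. A minimal sketch of the idea in plain Python; the `Keyword` model and `keywords_for_account` helper are illustrative, not taken from the IGNY8 codebase:

```python
# Minimal sketch of tenant isolation: every lookup is filtered by the
# caller's account_id, so rows from other accounts are never visible.
# Names are illustrative, not taken from the IGNY8 codebase.

from dataclasses import dataclass

@dataclass
class Keyword:
    id: int
    account_id: int
    text: str

ALL_KEYWORDS = [
    Keyword(1, 10, "seo basics"),
    Keyword(2, 10, "content clusters"),
    Keyword(3, 99, "other tenant"),
]

def keywords_for_account(account_id: int) -> list[Keyword]:
    """Return only the rows belonging to the given account."""
    return [k for k in ALL_KEYWORDS if k.account_id == account_id]

print([k.text for k in keywords_for_account(10)])  # ['seo basics', 'content clusters']
```

In the real backend the same effect would come from applying the account filter centrally (e.g. in a shared base queryset) rather than in every view, so a forgotten filter cannot leak another tenant's data.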
---
## Contributing
This is a private repository. For internal development:
1. Create feature branch: `git checkout -b feature/your-feature`
2. Make changes and test thoroughly
3. Commit: `git commit -m "Add your feature"`
4. Push: `git push origin feature/your-feature`
5. Create Pull Request
---
## Deployment
### Production Deployment
1. **Set production environment variables**
2. **Build frontend:** `npm run build`
3. **Collect static files:** `python manage.py collectstatic`
4. **Run migrations:** `python manage.py migrate`
5. **Use docker-compose:** `docker-compose -f docker-compose.app.yml up -d`
### Environment Variables
Required for production:
```env
SECRET_KEY=<random-secret-key>
DEBUG=False
ALLOWED_HOSTS=api.igny8.com,app.igny8.com
DATABASE_URL=postgresql://user:pass@host:5432/dbname
REDIS_URL=redis://host:6379/0
OPENAI_API_KEY=<openai-key>
RUNWARE_API_KEY=<runware-key>
USE_SECURE_COOKIES=True
```
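
A start-up check that fails fast when a required variable is unset can save a broken deploy. A minimal sketch; the variable list mirrors the block above, while the `missing_vars` helper itself is illustrative:

```python
import os

# Required production settings, mirroring the environment block above.
REQUIRED_VARS = [
    "SECRET_KEY", "DEBUG", "ALLOWED_HOSTS", "DATABASE_URL",
    "REDIS_URL", "OPENAI_API_KEY", "RUNWARE_API_KEY", "USE_SECURE_COOKIES",
]

def missing_vars(environ=os.environ) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]

# Example: fail fast at start-up instead of at first request.
missing = missing_vars({"SECRET_KEY": "x"})
if missing:
    print("Missing required environment variables: " + ", ".join(missing))
```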
---
## Support
For support and questions:
- Check [MASTER_REFERENCE.md](./MASTER_REFERENCE.md) for detailed documentation
- Review API docs at `/api/docs/`
- Contact development team
---
## License
Proprietary. All rights reserved.
---
## Changelog
See [CHANGELOG.md](./CHANGELOG.md) for version history and updates.
---
**Built with ❤️ by the IGNY8 team**


@@ -1,190 +0,0 @@
# Backend API Endpoints - Test Results
**Test Date:** December 5, 2025
**Backend URL:** http://localhost:8011
## ✅ WORKING ENDPOINTS
### Billing API Endpoints
| Endpoint | Method | Status | Notes |
|----------|--------|--------|-------|
| `/api/v1/billing/invoices/` | GET | ✅ 401 | Auth required (correct) |
| `/api/v1/billing/payments/` | GET | ✅ 401 | Auth required (correct) |
| `/api/v1/billing/credit-packages/` | GET | ✅ 401 | Auth required (correct) |
| `/api/v1/billing/transactions/` | GET | ✅ 401 | Auth required (correct) |
| `/api/v1/billing/transactions/balance/` | GET | ✅ 401 | Auth required (correct) |
| `/api/v1/billing/admin/stats/` | GET | ✅ 401 | Auth required (correct) |
### Account Endpoints
| Endpoint | Method | Status | Notes |
|----------|--------|--------|-------|
| `/api/v1/account/settings/` | GET | ✅ 401 | Auth required (correct) |
| `/api/v1/account/settings/` | PATCH | ✅ 401 | Auth required (correct) |
| `/api/v1/account/team/` | GET | ✅ 401 | Auth required (correct) |
| `/api/v1/account/usage/analytics/` | GET | ✅ 401 | Auth required (correct) |
## ❌ ISSUES FIXED
### Frontend API Path Alignment
**Problem:** Frontend must always call the canonical `/api/v1/billing/...` endpoints (no `/v2` alias).
**Files Fixed:**
- `frontend/src/services/billing.api.ts`: ensured all billing calls use `/v1/billing/...`
**Changes:**
```typescript
// Before:
fetchAPI('/billing/invoices/')
// After:
fetchAPI('/v1/billing/invoices/')
```
### Component Export Issues
**Problem:** `PricingPlan` type export conflict
**File Fixed:**
- `frontend/src/components/ui/pricing-table/index.tsx`
**Change:**
```typescript
// Before:
export { PricingPlan };
// After:
export type { PricingPlan };
```
### Missing Function Issues
**Problem:** `submitManualPayment` doesn't exist, should be `createManualPayment`
**File Fixed:**
- `frontend/src/pages/account/PurchaseCreditsPage.tsx`
**Change:**
```typescript
// Import changed:
import { submitManualPayment } from '...' // ❌
import { createManualPayment } from '...' // ✅
// Usage changed:
await submitManualPayment({...}) // ❌
await createManualPayment({...}) // ✅
```
## 📝 PAGES STATUS
### Account Pages
| Page | Route | Status | Backend API |
|------|-------|--------|-------------|
| Account Settings | `/account/settings` | ✅ Ready | `/v1/account/settings/` |
| Team Management | `/account/team` | ✅ Ready | `/v1/account/team/` |
| Usage Analytics | `/account/usage` | ✅ Ready | `/v1/account/usage/analytics/` |
| Purchase Credits | `/account/purchase-credits` | ✅ Ready | `/v1/billing/credit-packages/` |
### Billing Pages
| Page | Route | Status | Backend API |
|------|-------|--------|-------------|
| Credits Overview | `/billing/credits` | ✅ Ready | `/v1/billing/transactions/balance/` |
| Transactions | `/billing/transactions` | ✅ Ready | `/v1/billing/transactions/` |
| Usage | `/billing/usage` | ✅ Ready | `/v1/billing/transactions/` |
| Plans | `/settings/plans` | ✅ Ready | `/v1/auth/plans/` |
### Admin Pages
| Page | Route | Status | Backend API |
|------|-------|--------|-------------|
| Admin Dashboard | `/admin/billing` | ⏳ Partial | `/v1/billing/admin/stats/` |
| Billing Management | `/admin/billing` | ⏳ Partial | Multiple endpoints |
## 🔧 URL STRUCTURE
### Correct URL Pattern
```
Frontend calls: /v1/billing/invoices/
API Base URL: https://api.igny8.com/api
Full URL: https://api.igny8.com/api/v1/billing/invoices/
Backend route: /api/v1/billing/ → igny8_core.business.billing.urls
```
### API Base URL Detection
```typescript
// frontend/src/services/api.ts
const API_BASE_URL = getApiBaseUrl();
// Returns:
// - localhost:3000 → http://localhost:8011/api
// - Production → https://api.igny8.com/api
```
## ✅ BUILD STATUS
```bash
cd /data/app/igny8/frontend
npm run build
# ✅ built in 10.87s
```
## 🧪 TESTING CHECKLIST
### Backend Tests
- [x] Invoices endpoint exists (401 auth required)
- [x] Payments endpoint exists (401 auth required)
- [x] Credit packages endpoint exists (401 auth required)
- [x] Transactions endpoint exists (401 auth required)
- [x] Balance endpoint exists (401 auth required)
- [x] Account settings endpoint exists (401 auth required)
- [x] Team management endpoint exists (401 auth required)
- [x] Usage analytics endpoint exists (401 auth required)
### Frontend Tests
- [x] Build completes without errors
- [x] All API imports resolve correctly
- [x] Component exports work correctly
- [ ] Pages load in browser (requires authentication)
- [ ] API calls work with auth token
- [ ] Data displays correctly
## 🚀 NEXT STEPS
1. **Test with Authentication**
- Login to app
- Navigate to each page
- Verify data loads correctly
2. **Test User Flows**
- Purchase credits flow
- View transactions
- Manage team members
- Update account settings
3. **Test Admin Features**
- View billing stats
- Approve/reject payments
- Configure credit costs
4. **Missing Features**
- Stripe payment integration (webhook handlers exist, UI integration pending)
- PDF invoice generation
- Email notifications
- Subscription management UI
## 📚 DOCUMENTATION
### For Users
- All account and billing pages accessible from sidebar
- Credit balance visible on Credits page
- Purchase credits via credit packages
- View transaction history
- Manage team members
### For Developers
- Backend: Django REST Framework ViewSets
- Frontend: React + TypeScript + Vite
- API calls: Centralized in `services/billing.api.ts`
- Auth: JWT tokens in localStorage
- Multi-tenancy: Account-based access control

backend/=0.27.0 Normal file

@@ -0,0 +1,37 @@
Collecting drf-spectacular
Downloading drf_spectacular-0.29.0-py3-none-any.whl.metadata (14 kB)
Requirement already satisfied: Django>=2.2 in /usr/local/lib/python3.11/site-packages (from drf-spectacular) (5.2.8)
Requirement already satisfied: djangorestframework>=3.10.3 in /usr/local/lib/python3.11/site-packages (from drf-spectacular) (3.16.1)
Collecting uritemplate>=2.0.0 (from drf-spectacular)
Downloading uritemplate-4.2.0-py3-none-any.whl.metadata (2.6 kB)
Collecting PyYAML>=5.1 (from drf-spectacular)
Downloading pyyaml-6.0.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl.metadata (2.4 kB)
Collecting jsonschema>=2.6.0 (from drf-spectacular)
Downloading jsonschema-4.25.1-py3-none-any.whl.metadata (7.6 kB)
Collecting inflection>=0.3.1 (from drf-spectacular)
Downloading inflection-0.5.1-py2.py3-none-any.whl.metadata (1.7 kB)
Requirement already satisfied: asgiref>=3.8.1 in /usr/local/lib/python3.11/site-packages (from Django>=2.2->drf-spectacular) (3.10.0)
Requirement already satisfied: sqlparse>=0.3.1 in /usr/local/lib/python3.11/site-packages (from Django>=2.2->drf-spectacular) (0.5.3)
Collecting attrs>=22.2.0 (from jsonschema>=2.6.0->drf-spectacular)
Downloading attrs-25.4.0-py3-none-any.whl.metadata (10 kB)
Collecting jsonschema-specifications>=2023.03.6 (from jsonschema>=2.6.0->drf-spectacular)
Downloading jsonschema_specifications-2025.9.1-py3-none-any.whl.metadata (2.9 kB)
Collecting referencing>=0.28.4 (from jsonschema>=2.6.0->drf-spectacular)
Downloading referencing-0.37.0-py3-none-any.whl.metadata (2.8 kB)
Collecting rpds-py>=0.7.1 (from jsonschema>=2.6.0->drf-spectacular)
Downloading rpds_py-0.28.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.1 kB)
Requirement already satisfied: typing-extensions>=4.4.0 in /usr/local/lib/python3.11/site-packages (from referencing>=0.28.4->jsonschema>=2.6.0->drf-spectacular) (4.15.0)
Downloading drf_spectacular-0.29.0-py3-none-any.whl (105 kB)
Downloading inflection-0.5.1-py2.py3-none-any.whl (9.5 kB)
Downloading jsonschema-4.25.1-py3-none-any.whl (90 kB)
Downloading attrs-25.4.0-py3-none-any.whl (67 kB)
Downloading jsonschema_specifications-2025.9.1-py3-none-any.whl (18 kB)
Downloading pyyaml-6.0.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (806 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 806.6/806.6 kB 36.0 MB/s 0:00:00
Downloading referencing-0.37.0-py3-none-any.whl (26 kB)
Downloading rpds_py-0.28.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (382 kB)
Downloading uritemplate-4.2.0-py3-none-any.whl (11 kB)
Installing collected packages: uritemplate, rpds-py, PyYAML, inflection, attrs, referencing, jsonschema-specifications, jsonschema, drf-spectacular
Successfully installed PyYAML-6.0.3 attrs-25.4.0 drf-spectacular-0.29.0 inflection-0.5.1 jsonschema-4.25.1 jsonschema-specifications-2025.9.1 referencing-0.37.0 rpds-py-0.28.0 uritemplate-4.2.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager, possibly rendering your system unusable. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv. Use the --root-user-action option if you know what you are doing and want to suppress this warning.

File diff suppressed because one or more lines are too long

backend/celerybeat-schedule Normal file

Binary file not shown.


@@ -1,31 +0,0 @@
#!/usr/bin/env python
import os
import django
import json
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'igny8_core.settings')
django.setup()
from igny8_core.business.integration.models import SiteIntegration
from igny8_core.auth.models import Site
from django.test import RequestFactory
from igny8_core.modules.integration.views import IntegrationViewSet
# Create a fake request
factory = RequestFactory()
request = factory.get('/api/v1/integration/integrations/1/content-types/')
# Create view and call the action
integration = SiteIntegration.objects.get(id=1)
viewset = IntegrationViewSet()
viewset.format_kwarg = None
viewset.request = request
viewset.kwargs = {'pk': 1}
# Get the response data
response = viewset.content_types_summary(request, pk=1)
print("Response Status:", response.status_code)
print("\nResponse Data:")
print(json.dumps(response.data, indent=2, default=str))


@@ -1,20 +0,0 @@
#!/usr/bin/env python
"""Check recent keyword creation"""
import os
import django
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'igny8_core.settings')
django.setup()
from igny8_core.business.planning.models import Keywords
from django.utils import timezone
from datetime import timedelta
recent = timezone.now() - timedelta(hours=24)
recent_keywords = Keywords.objects.filter(created_at__gte=recent)
print(f'Keywords created in last 24 hours: {recent_keywords.count()}')
if recent_keywords.exists():
print('\nRecent keyword statuses:')
for k in recent_keywords[:10]:
print(f' ID {k.id}: status={k.status}, created={k.created_at}')


@@ -1,38 +0,0 @@
#!/usr/bin/env python
"""
Clean up structure-based categories that were incorrectly created
This will remove categories like "Guide", "Article", etc. that match content_structure values
"""
import os
import sys
import django
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'igny8_core.settings')
django.setup()
from django.db import transaction
from igny8_core.business.content.models import ContentTaxonomy
# List of structure values that were incorrectly added as categories
STRUCTURE_VALUES = ['Guide', 'Article', 'Listicle', 'How To', 'Tutorial', 'Review', 'Comparison']
print("=" * 80)
print("CLEANING UP STRUCTURE-BASED CATEGORIES")
print("=" * 80)
for structure_name in STRUCTURE_VALUES:
categories = ContentTaxonomy.objects.filter(
taxonomy_type='category',
name=structure_name
)
if categories.exists():
count = categories.count()
print(f"\nRemoving {count} '{structure_name}' categor{'y' if count == 1 else 'ies'}...")
categories.delete()
print(f" ✓ Deleted {count} '{structure_name}' categor{'y' if count == 1 else 'ies'}")
print("\n" + "=" * 80)
print("CLEANUP COMPLETE")
print("=" * 80)


@@ -0,0 +1,187 @@
#!/usr/bin/env python
"""
Script to create 3 real users with 3 paid packages (Starter, Growth, Scale)
All accounts will be active and properly configured.
Email format: plan-name@igny8.com
"""
import os
import django
import sys
from decimal import Decimal
# Setup Django
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'igny8_core.settings')
django.setup()
from django.db import transaction
from igny8_core.auth.models import Plan, Account, User
from django.utils.text import slugify
# User data - 3 users with 3 different paid plans
# Email format: plan-name@igny8.com
USERS_DATA = [
{
"email": "starter@igny8.com",
"username": "starter",
"first_name": "Starter",
"last_name": "Account",
"password": "SecurePass123!@#",
"plan_slug": "starter", # $89/month
"account_name": "Starter Account",
},
{
"email": "growth@igny8.com",
"username": "growth",
"first_name": "Growth",
"last_name": "Account",
"password": "SecurePass123!@#",
"plan_slug": "growth", # $139/month
"account_name": "Growth Account",
},
{
"email": "scale@igny8.com",
"username": "scale",
"first_name": "Scale",
"last_name": "Account",
"password": "SecurePass123!@#",
"plan_slug": "scale", # $229/month
"account_name": "Scale Account",
},
]
def create_user_with_plan(user_data):
"""Create a user with account and assigned plan."""
try:
with transaction.atomic():
# Get the plan
try:
plan = Plan.objects.get(slug=user_data['plan_slug'], is_active=True)
except Plan.DoesNotExist:
print(f"❌ ERROR: Plan '{user_data['plan_slug']}' not found or inactive!")
return None
# Check if user already exists
if User.objects.filter(email=user_data['email']).exists():
print(f"⚠️ User {user_data['email']} already exists. Updating...")
existing_user = User.objects.get(email=user_data['email'])
if existing_user.account:
existing_user.account.plan = plan
existing_user.account.status = 'active'
existing_user.account.save()
print(f" ✅ Updated account plan to {plan.name} and set status to active")
return existing_user
# Generate unique account slug
base_slug = slugify(user_data['account_name'])
account_slug = base_slug
counter = 1
while Account.objects.filter(slug=account_slug).exists():
account_slug = f"{base_slug}-{counter}"
counter += 1
# Create user first (without account)
user = User.objects.create_user(
username=user_data['username'],
email=user_data['email'],
password=user_data['password'],
first_name=user_data['first_name'],
last_name=user_data['last_name'],
account=None, # Will be set after account creation
role='owner'
)
# Create account with user as owner and assigned plan
account = Account.objects.create(
name=user_data['account_name'],
slug=account_slug,
owner=user,
plan=plan,
status='active', # Set to active
credits=plan.included_credits or 0, # Set initial credits from plan
)
# Update user to reference the new account
user.account = account
user.save()
print(f"✅ Created user: {user.email}")
print(f" - Name: {user.get_full_name()}")
print(f" - Username: {user.username}")
print(f" - Account: {account.name} (slug: {account.slug})")
print(f" - Plan: {plan.name} (${plan.price}/month)")
print(f" - Status: {account.status}")
print(f" - Credits: {account.credits}")
print(f" - Max Sites: {plan.max_sites}")
print(f" - Max Users: {plan.max_users}")
print()
return user
except Exception as e:
print(f"❌ ERROR creating user {user_data['email']}: {e}")
import traceback
traceback.print_exc()
return None
def main():
"""Main function to create all users."""
print("=" * 80)
print("Creating 3 Users with Paid Plans")
print("=" * 80)
print()
# Verify plans exist
print("Checking available plans...")
plans = Plan.objects.filter(is_active=True).order_by('price')
if plans.count() < 3:
print(f"⚠️ WARNING: Only {plans.count()} active plan(s) found. Need at least 3.")
print("Available plans:")
for p in plans:
print(f" - {p.slug} (${p.price})")
print()
print("Please run import_plans.py first to create the plans.")
return
print("✅ Found plans:")
for p in plans:
print(f" - {p.name} ({p.slug}): ${p.price}/month")
print()
# Create users
created_users = []
for user_data in USERS_DATA:
user = create_user_with_plan(user_data)
if user:
created_users.append(user)
# Summary
print("=" * 80)
print("SUMMARY")
print("=" * 80)
print(f"Total users created/updated: {len(created_users)}")
print()
print("User Login Credentials:")
print("-" * 80)
for user_data in USERS_DATA:
print(f"Email: {user_data['email']}")
print(f"Password: {user_data['password']}")
print(f"Plan: {user_data['plan_slug'].title()}")
print()
print("✅ All users created successfully!")
print()
print("You can now log in with any of these accounts at:")
print("https://app.igny8.com/login")
if __name__ == '__main__':
try:
main()
except Exception as e:
print(f"❌ Fatal error: {e}", file=sys.stderr)
import traceback
traceback.print_exc()
sys.exit(1)


@@ -1,116 +0,0 @@
#!/bin/bash
# Automation System Deployment Script
# Run this script to complete the automation system deployment
set -e # Exit on error
echo "========================================="
echo "IGNY8 Automation System Deployment"
echo "========================================="
echo ""
# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color
# Check if running from correct directory
if [ ! -f "manage.py" ]; then
echo -e "${RED}Error: Please run this script from the backend directory${NC}"
echo "cd /data/app/igny8/backend && ./deploy_automation.sh"
exit 1
fi
echo -e "${YELLOW}Step 1: Creating log directory...${NC}"
mkdir -p logs/automation
chmod 755 logs/automation
echo -e "${GREEN}✓ Log directory created${NC}"
echo ""
echo -e "${YELLOW}Step 2: Running database migrations...${NC}"
python3 manage.py makemigrations
python3 manage.py migrate
echo -e "${GREEN}✓ Migrations complete${NC}"
echo ""
echo -e "${YELLOW}Step 3: Checking Celery services...${NC}"
if docker ps | grep -q celery; then
echo -e "${GREEN}✓ Celery worker is running${NC}"
else
echo -e "${RED}⚠ Celery worker is NOT running${NC}"
echo "Start with: docker-compose up -d celery"
fi
if docker ps | grep -q beat; then
echo -e "${GREEN}✓ Celery beat is running${NC}"
else
echo -e "${RED}⚠ Celery beat is NOT running${NC}"
echo "Start with: docker-compose up -d celery-beat"
fi
echo ""
echo -e "${YELLOW}Step 4: Verifying cache backend...${NC}"
python3 -c "
from django.core.cache import cache
try:
cache.set('test_key', 'test_value', 10)
if cache.get('test_key') == 'test_value':
print('${GREEN}✓ Cache backend working${NC}')
else:
print('${RED}⚠ Cache backend not working properly${NC}')
except Exception as e:
print('${RED}⚠ Cache backend error:', str(e), '${NC}')
" || echo -e "${RED}⚠ Could not verify cache backend${NC}"
echo ""
echo -e "${YELLOW}Step 5: Testing automation API...${NC}"
python3 manage.py shell << EOF
from igny8_core.business.automation.services import AutomationService
from igny8_core.modules.system.models import Account, Site
try:
account = Account.objects.first()
site = Site.objects.first()
if account and site:
service = AutomationService(account, site)
estimate = service.estimate_credits()
print('${GREEN}✓ AutomationService working - Estimated credits:', estimate, '${NC}')
else:
print('${YELLOW}⚠ No account or site found - create one first${NC}')
except Exception as e:
print('${RED}⚠ AutomationService error:', str(e), '${NC}')
EOF
echo ""
echo -e "${YELLOW}Step 6: Checking Celery beat schedule...${NC}"
if docker ps | grep -q celery; then
CELERY_CONTAINER=$(docker ps | grep celery | grep -v beat | awk '{print $1}')
docker exec $CELERY_CONTAINER celery -A igny8_core inspect scheduled 2>/dev/null | grep -q "check-scheduled-automations" && \
echo -e "${GREEN}✓ Automation task scheduled in Celery beat${NC}" || \
echo -e "${YELLOW}⚠ Automation task not found in schedule (may need restart)${NC}"
else
echo -e "${YELLOW}⚠ Celery worker not running - cannot check schedule${NC}"
fi
echo ""
echo "========================================="
echo -e "${GREEN}Deployment Steps Completed!${NC}"
echo "========================================="
echo ""
echo "Next steps:"
echo "1. Restart Celery services to pick up new tasks:"
echo " docker-compose restart celery celery-beat"
echo ""
echo "2. Access the frontend at /automation page"
echo ""
echo "3. Test the automation:"
echo " - Click [Configure] to set up schedule"
echo " - Click [Run Now] to start automation"
echo " - Monitor progress in real-time"
echo ""
echo "4. Check logs:"
echo " tail -f logs/automation/{account_id}/{site_id}/{run_id}/automation_run.log"
echo ""
echo -e "${YELLOW}For troubleshooting, see: AUTOMATION-DEPLOYMENT-CHECKLIST.md${NC}"


@@ -1,393 +0,0 @@
#!/usr/bin/env python
"""
Diagnostic script for generate_content function issues
Tests each layer of the content generation pipeline to identify where it's failing
"""
import os
import sys
import django
import logging
# Setup Django
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'igny8_core.settings')
django.setup()
from igny8_core.auth.models import Account
from igny8_core.modules.writer.models import Tasks, Content
from igny8_core.modules.system.models import IntegrationSettings
from igny8_core.ai.registry import get_function_instance
from igny8_core.ai.engine import AIEngine
from igny8_core.business.content.services.content_generation_service import ContentGenerationService
# Setup logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s [%(levelname)s] %(name)s: %(message)s'
)
logger = logging.getLogger(__name__)
def print_section(title):
"""Print a section header"""
print("\n" + "=" * 80)
print(f" {title}")
print("=" * 80 + "\n")
def test_prerequisites():
"""Test that prerequisites are met"""
print_section("1. TESTING PREREQUISITES")
# Check if account exists
try:
account = Account.objects.first()
if not account:
print("❌ FAIL: No account found in database")
return None
print(f"✅ PASS: Found account: {account.id} ({account.email})")
except Exception as e:
print(f"❌ FAIL: Error getting account: {e}")
return None
# Check OpenAI integration settings
try:
openai_settings = IntegrationSettings.objects.filter(
integration_type='openai',
account=account,
is_active=True
).first()
if not openai_settings:
print("❌ FAIL: No active OpenAI integration settings found")
return None
if not openai_settings.config or not openai_settings.config.get('apiKey'):
print("❌ FAIL: OpenAI API key not configured in IntegrationSettings")
return None
api_key_preview = openai_settings.config['apiKey'][:10] + "..." if openai_settings.config.get('apiKey') else "None"
model = openai_settings.config.get('model', 'Not set')
print(f"✅ PASS: OpenAI settings found (API key: {api_key_preview}, Model: {model})")
except Exception as e:
print(f"❌ FAIL: Error checking OpenAI settings: {e}")
return None
# Check if tasks exist
try:
tasks = Tasks.objects.filter(account=account, status='pending')[:5]
task_count = tasks.count()
if task_count == 0:
print("⚠️ WARNING: No pending tasks found, will try to use any task")
tasks = Tasks.objects.filter(account=account)[:5]
task_count = tasks.count()
if task_count == 0:
print("❌ FAIL: No tasks found at all")
return None
print(f"✅ PASS: Found {task_count} task(s)")
for task in tasks:
print(f" - Task {task.id}: {task.title or 'Untitled'} (status: {task.status})")
except Exception as e:
print(f"❌ FAIL: Error getting tasks: {e}")
return None
return {
'account': account,
'tasks': list(tasks),
'openai_settings': openai_settings
}
def test_function_registry():
"""Test that the generate_content function is registered"""
print_section("2. TESTING FUNCTION REGISTRY")
try:
fn = get_function_instance('generate_content')
if not fn:
print("❌ FAIL: generate_content function not found in registry")
return False
print(f"✅ PASS: Function registered: {fn.get_name()}")
metadata = fn.get_metadata()
print(f" - Display name: {metadata.get('display_name')}")
print(f" - Description: {metadata.get('description')}")
return True
except Exception as e:
print(f"❌ FAIL: Error loading function: {e}")
import traceback
traceback.print_exc()
return False
def test_function_validation(context):
"""Test function validation"""
print_section("3. TESTING FUNCTION VALIDATION")
try:
fn = get_function_instance('generate_content')
account = context['account']
task = context['tasks'][0]
payload = {'ids': [task.id]}
print(f"Testing with payload: {payload}")
result = fn.validate(payload, account)
if result['valid']:
print(f"✅ PASS: Validation succeeded")
else:
print(f"❌ FAIL: Validation failed: {result.get('error')}")
return False
return True
except Exception as e:
print(f"❌ FAIL: Error during validation: {e}")
import traceback
traceback.print_exc()
return False
def test_function_prepare(context):
"""Test function prepare phase"""
print_section("4. TESTING FUNCTION PREPARE")
try:
fn = get_function_instance('generate_content')
account = context['account']
task = context['tasks'][0]
payload = {'ids': [task.id]}
print(f"Preparing task {task.id}: {task.title or 'Untitled'}")
data = fn.prepare(payload, account)
if not data:
print("❌ FAIL: Prepare returned no data")
return False
if isinstance(data, list):
print(f"✅ PASS: Prepared {len(data)} task(s)")
for t in data:
print(f" - Task {t.id}: {t.title or 'Untitled'}")
print(f" Cluster: {t.cluster.name if t.cluster else 'None'}")
print(f" Taxonomy: {t.taxonomy_term.name if t.taxonomy_term else 'None'}")
print(f" Keywords: {t.keywords.count()} keyword(s)")
else:
print(f"✅ PASS: Prepared data: {type(data)}")
context['prepared_data'] = data
return True
except Exception as e:
print(f"❌ FAIL: Error during prepare: {e}")
import traceback
traceback.print_exc()
return False
def test_function_build_prompt(context):
"""Test prompt building"""
print_section("5. TESTING PROMPT BUILDING")
try:
fn = get_function_instance('generate_content')
account = context['account']
data = context['prepared_data']
prompt = fn.build_prompt(data, account)
if not prompt:
print("❌ FAIL: No prompt generated")
return False
print(f"✅ PASS: Prompt generated ({len(prompt)} characters)")
print("\nPrompt preview (first 500 chars):")
print("-" * 80)
print(prompt[:500])
if len(prompt) > 500:
print(f"\n... ({len(prompt) - 500} more characters)")
print("-" * 80)
context['prompt'] = prompt
return True
except Exception as e:
print(f"❌ FAIL: Error building prompt: {e}")
import traceback
traceback.print_exc()
return False
def test_model_config(context):
"""Test model configuration"""
print_section("6. TESTING MODEL CONFIGURATION")
try:
from igny8_core.ai.settings import get_model_config
account = context['account']
model_config = get_model_config('generate_content', account=account)
if not model_config:
print("❌ FAIL: No model config returned")
return False
print(f"✅ PASS: Model configuration loaded")
print(f" - Model: {model_config.get('model')}")
print(f" - Max tokens: {model_config.get('max_tokens')}")
print(f" - Temperature: {model_config.get('temperature')}")
print(f" - Response format: {model_config.get('response_format')}")
context['model_config'] = model_config
return True
except Exception as e:
print(f"❌ FAIL: Error getting model config: {e}")
import traceback
traceback.print_exc()
return False
def test_ai_core_request(context):
"""Test AI core request (actual API call)"""
print_section("7. TESTING AI CORE REQUEST (ACTUAL API CALL)")
# Ask user for confirmation
print("⚠️ WARNING: This will make an actual API call to OpenAI and cost money!")
print("Do you want to proceed? (yes/no): ", end='')
response = input().strip().lower()
if response != 'yes':
print("Skipping API call test")
return True
try:
from igny8_core.ai.ai_core import AICore
account = context['account']
prompt = context['prompt']
model_config = context['model_config']
# Use a shorter test prompt to save costs
test_prompt = prompt[:1000] + "\n\n[TEST MODE - Generate only title and first paragraph]"
print(f"Making test API call with shortened prompt ({len(test_prompt)} chars)...")
ai_core = AICore(account=account)
result = ai_core.run_ai_request(
prompt=test_prompt,
model=model_config['model'],
max_tokens=500, # Limit tokens for testing
temperature=model_config.get('temperature', 0.7),
response_format=model_config.get('response_format'),
function_name='generate_content_test'
)
if result.get('error'):
print(f"❌ FAIL: API call returned error: {result['error']}")
return False
if not result.get('content'):
print(f"❌ FAIL: API call returned no content")
return False
print(f"✅ PASS: API call successful")
print(f" - Tokens: {result.get('total_tokens', 0)}")
print(f" - Cost: ${result.get('cost', 0):.6f}")
print(f" - Model: {result.get('model')}")
print(f"\nContent preview (first 300 chars):")
print("-" * 80)
print(result['content'][:300])
print("-" * 80)
context['ai_response'] = result
return True
except Exception as e:
print(f"❌ FAIL: Error during API call: {e}")
import traceback
traceback.print_exc()
return False
def test_service_layer(context):
"""Test the content generation service"""
print_section("8. TESTING CONTENT GENERATION SERVICE")
print("⚠️ WARNING: This will make a full API call and create content!")
print("Do you want to proceed? (yes/no): ", end='')
response = input().strip().lower()
if response != 'yes':
print("Skipping service test")
return True
try:
account = context['account']
task = context['tasks'][0]
service = ContentGenerationService()
print(f"Calling generate_content with task {task.id}...")
result = service.generate_content([task.id], account)
if not result:
print("❌ FAIL: Service returned None")
return False
if not result.get('success'):
print(f"❌ FAIL: Service failed: {result.get('error')}")
return False
print(f"✅ PASS: Service call successful")
if 'task_id' in result:
print(f" - Celery task ID: {result['task_id']}")
print(f" - Message: {result.get('message')}")
print("\n⚠️ Note: Content generation is running in background (Celery)")
print(" Check Celery logs for actual execution status")
else:
print(f" - Content created: {result.get('content_id')}")
print(f" - Word count: {result.get('word_count')}")
return True
except Exception as e:
print(f"❌ FAIL: Error in service layer: {e}")
import traceback
traceback.print_exc()
return False
def main():
"""Run all diagnostic tests"""
print("\n" + "=" * 80)
print(" GENERATE_CONTENT DIAGNOSTIC TOOL")
print("=" * 80)
print("\nThis tool will test each layer of the content generation pipeline")
print("to identify where the function is failing.")
# Run tests
context = test_prerequisites()
if not context:
print("\n❌ FATAL: Prerequisites test failed. Cannot continue.")
return
if not test_function_registry():
print("\n❌ FATAL: Function registry test failed. Cannot continue.")
return
if not test_function_validation(context):
print("\n❌ FATAL: Validation test failed. Cannot continue.")
return
if not test_function_prepare(context):
print("\n❌ FATAL: Prepare test failed. Cannot continue.")
return
if not test_function_build_prompt(context):
print("\n❌ FATAL: Prompt building test failed. Cannot continue.")
return
if not test_model_config(context):
print("\n❌ FATAL: Model config test failed. Cannot continue.")
return
# Optional tests (require API calls)
test_ai_core_request(context)
test_service_layer(context)
print_section("DIAGNOSTIC COMPLETE")
print("Review the results above to identify where the generate_content")
print("function is failing.\n")
if __name__ == '__main__':
main()


@@ -1,67 +0,0 @@
#!/usr/bin/env python
"""
Final verification that the WordPress content types are properly synced
"""
import os
import django
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'igny8_core.settings')
django.setup()
from igny8_core.business.integration.models import SiteIntegration
from igny8_core.auth.models import Site
import json
print("=" * 70)
print("WORDPRESS SYNC FIX VERIFICATION")
print("=" * 70)
# Get site 5
site = Site.objects.get(id=5)
print(f"\n✓ Site: {site.name} (ID: {site.id})")
# Get WordPress integration
integration = SiteIntegration.objects.get(site=site, platform='wordpress')
print(f"✓ Integration: {integration.platform.upper()} (ID: {integration.id})")
print(f"✓ Active: {integration.is_active}")
print(f"✓ Sync Enabled: {integration.sync_enabled}")
# Verify config data
config = integration.config_json or {}
content_types = config.get('content_types', {})
print("\n" + "=" * 70)
print("CONTENT TYPES STRUCTURE")
print("=" * 70)
# Post Types
post_types = content_types.get('post_types', {})
print(f"\n📝 Post Types: ({len(post_types)} total)")
for pt_name, pt_data in post_types.items():
print(f"{pt_data['label']} ({pt_name})")
print(f" - Count: {pt_data['count']}")
print(f" - Enabled: {pt_data['enabled']}")
print(f" - Fetch Limit: {pt_data['fetch_limit']}")
# Taxonomies
taxonomies = content_types.get('taxonomies', {})
print(f"\n🏷️ Taxonomies: ({len(taxonomies)} total)")
for tax_name, tax_data in taxonomies.items():
print(f"{tax_data['label']} ({tax_name})")
print(f" - Count: {tax_data['count']}")
print(f" - Enabled: {tax_data['enabled']}")
print(f" - Fetch Limit: {tax_data['fetch_limit']}")
# Last fetch time
last_fetch = content_types.get('last_structure_fetch')
print(f"\n🕐 Last Structure Fetch: {last_fetch}")
print("\n" + "=" * 70)
print("✅ SUCCESS! WordPress content types are properly configured")
print("=" * 70)
print("\nNext Steps:")
print("1. Refresh the IGNY8 app page in your browser")
print("2. Navigate to Sites → Settings → Content Types tab")
print("3. You should now see all Post Types and Taxonomies listed")
print("=" * 70)


@@ -1,22 +0,0 @@
#!/usr/bin/env python
"""Fix remaining cluster with old status"""
import os
import django
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'igny8_core.settings')
django.setup()
from igny8_core.business.planning.models import Clusters
cluster = Clusters.objects.filter(status='active').first()
if cluster:
print(f"Found cluster: ID={cluster.id}, name={cluster.name}, status={cluster.status}")
print(f"Ideas count: {cluster.ideas.count()}")
if cluster.ideas.exists():
cluster.status = 'mapped'
else:
cluster.status = 'new'
cluster.save()
print(f"Updated to: {cluster.status}")
else:
print("No clusters with 'active' status found")


@@ -1,88 +0,0 @@
#!/usr/bin/env python
import os
import django
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'igny8_core.settings')
django.setup()
from igny8_core.business.integration.models import SiteIntegration
from igny8_core.auth.models import Site
from django.utils import timezone
try:
# Get site 5
site = Site.objects.get(id=5)
print(f"✓ Site found: {site.name}")
# Get or create WordPress integration
integration, created = SiteIntegration.objects.get_or_create(
site=site,
platform='wordpress',
defaults={
'is_active': True,
'sync_enabled': True,
'config_json': {}
}
)
print(f"✓ Integration ID: {integration.id} (created: {created})")
# Add structure data
integration.config_json = {
'content_types': {
'post_types': {
'post': {
'label': 'Posts',
'count': 150,
'enabled': True,
'fetch_limit': 100
},
'page': {
'label': 'Pages',
'count': 25,
'enabled': True,
'fetch_limit': 100
},
'product': {
'label': 'Products',
'count': 89,
'enabled': True,
'fetch_limit': 100
}
},
'taxonomies': {
'category': {
'label': 'Categories',
'count': 15,
'enabled': True,
'fetch_limit': 100
},
'post_tag': {
'label': 'Tags',
'count': 234,
'enabled': True,
'fetch_limit': 100
},
'product_cat': {
'label': 'Product Categories',
'count': 12,
'enabled': True,
'fetch_limit': 100
}
},
'last_structure_fetch': timezone.now().isoformat()
},
'plugin_connection_enabled': True,
'two_way_sync_enabled': True
}
integration.save()
print("✓ Structure data saved successfully!")
print(f"✓ Integration ID: {integration.id}")
print("\n✅ READY: Refresh the page to see the content types!")
except Exception as e:
print(f"❌ ERROR: {str(e)}")
import traceback
traceback.print_exc()


@@ -1,76 +0,0 @@
#!/usr/bin/env python
"""
Fix missing site_url in integration config
Adds site_url to config_json from site.domain or site.wp_url
"""
import os
import sys
import django
# Setup Django environment
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'igny8_core.settings')
django.setup()
from igny8_core.business.integration.models import SiteIntegration
from igny8_core.auth.models import Site
def fix_integration_site_urls():
"""Add site_url to integration config if missing"""
integrations = SiteIntegration.objects.filter(platform='wordpress')
fixed_count = 0
skipped_count = 0
error_count = 0
for integration in integrations:
try:
config = integration.config_json or {}
# Check if site_url is already set
if config.get('site_url'):
print(f"✓ Integration {integration.id} already has site_url: {config.get('site_url')}")
skipped_count += 1
continue
# Try to get site URL from multiple sources
site_url = None
# First, try legacy wp_url
if integration.site.wp_url:
site_url = integration.site.wp_url
print(f"→ Using legacy wp_url for integration {integration.id}: {site_url}")
# Fallback to domain
elif integration.site.domain:
site_url = integration.site.domain
print(f"→ Using domain for integration {integration.id}: {site_url}")
if site_url:
# Update config
config['site_url'] = site_url
integration.config_json = config
integration.save(update_fields=['config_json'])
print(f"✓ Updated integration {integration.id} with site_url: {site_url}")
fixed_count += 1
else:
print(f"✗ Integration {integration.id} has no site URL available (site: {integration.site.name}, id: {integration.site.id})")
error_count += 1
except Exception as e:
print(f"✗ Error fixing integration {integration.id}: {e}")
error_count += 1
print("\n" + "="*60)
print(f"Summary:")
print(f" Fixed: {fixed_count}")
print(f" Skipped (already set): {skipped_count}")
print(f" Errors: {error_count}")
print("="*60)
if __name__ == '__main__':
print("Fixing WordPress integration site URLs...")
print("="*60)
fix_integration_site_urls()


@@ -1,90 +0,0 @@
#!/usr/bin/env python
"""Script to inject WordPress structure data into the backend"""
from igny8_core.business.integration.models import SiteIntegration
from igny8_core.auth.models import Site
from django.utils import timezone
# Get site 5
try:
site = Site.objects.get(id=5)
print(f"✓ Found site: {site.name}")
except Site.DoesNotExist:
print("✗ Site with ID 5 not found!")
exit(1)
# Get or create WordPress integration for this site
integration, created = SiteIntegration.objects.get_or_create(
site=site,
platform='wordpress',
defaults={
'is_active': True,
'sync_enabled': True,
'config_json': {}
}
)
print(f"✓ Integration ID: {integration.id} (newly created: {created})")
# Add structure data
integration.config_json = {
'content_types': {
'post_types': {
'post': {
'label': 'Posts',
'count': 150,
'enabled': True,
'fetch_limit': 100,
'synced_count': 0
},
'page': {
'label': 'Pages',
'count': 25,
'enabled': True,
'fetch_limit': 100,
'synced_count': 0
},
'product': {
'label': 'Products',
'count': 89,
'enabled': True,
'fetch_limit': 100,
'synced_count': 0
}
},
'taxonomies': {
'category': {
'label': 'Categories',
'count': 15,
'enabled': True,
'fetch_limit': 100,
'synced_count': 0
},
'post_tag': {
'label': 'Tags',
'count': 234,
'enabled': True,
'fetch_limit': 100,
'synced_count': 0
},
'product_cat': {
'label': 'Product Categories',
'count': 12,
'enabled': True,
'fetch_limit': 100,
'synced_count': 0
}
},
'last_structure_fetch': timezone.now().isoformat()
},
'plugin_connection_enabled': True,
'two_way_sync_enabled': True
}
integration.save()
print("✓ Structure data saved!")
print(f"✓ Post Types: {len(integration.config_json['content_types']['post_types'])}")
print(f"✓ Taxonomies: {len(integration.config_json['content_types']['taxonomies'])}")
print(f"✓ Last fetch: {integration.config_json['content_types']['last_structure_fetch']}")
print("\n🎉 SUCCESS! Now refresh: https://app.igny8.com/sites/5/settings?tab=content-types")


@@ -1,106 +0,0 @@
#!/usr/bin/env python
"""
Fix missing taxonomy relationships for existing content
This script will:
1. Find content that should have tags/categories based on their keywords
2. Create appropriate taxonomy terms
3. Link them to the content
"""
import os
import sys
import django
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'igny8_core.settings')
django.setup()
from django.db import transaction
from django.utils.text import slugify
from igny8_core.business.content.models import Content, ContentTaxonomy
print("=" * 80)
print("FIXING MISSING TAXONOMY RELATIONSHIPS")
print("=" * 80)
# Get all content without taxonomy terms
content_without_tags = Content.objects.filter(taxonomy_terms__isnull=True).distinct()
print(f"\nFound {content_without_tags.count()} content items without tags/categories")
fixed_count = 0
for content in content_without_tags:
print(f"\nProcessing Content #{content.id}: {content.title[:50]}...")
# Generate tags from keywords
tags_to_add = []
categories_to_add = []
# Use primary keyword as a tag
if content.primary_keyword:
tags_to_add.append(content.primary_keyword)
# Use secondary keywords as tags
if content.secondary_keywords and isinstance(content.secondary_keywords, list):
tags_to_add.extend(content.secondary_keywords[:3]) # Limit to 3
# Create category based on cluster only
if content.cluster:
categories_to_add.append(content.cluster.name)
with transaction.atomic():
# Process tags
for tag_name in tags_to_add:
if tag_name and isinstance(tag_name, str):
tag_name = tag_name.strip()
if tag_name:
try:
tag_obj, created = ContentTaxonomy.objects.get_or_create(
site=content.site,
name=tag_name,
taxonomy_type='tag',
defaults={
'slug': slugify(tag_name),
'sector': content.sector,
'account': content.account,
'description': '',
'external_taxonomy': '',
'sync_status': '',
'count': 0,
'metadata': {},
}
)
content.taxonomy_terms.add(tag_obj)
print(f" + Tag: {tag_name} ({'created' if created else 'existing'})")
except Exception as e:
print(f" ✗ Failed to add tag '{tag_name}': {e}")
# Process categories
for category_name in categories_to_add:
if category_name and isinstance(category_name, str):
category_name = category_name.strip()
if category_name:
try:
category_obj, created = ContentTaxonomy.objects.get_or_create(
site=content.site,
name=category_name,
taxonomy_type='category',
defaults={
'slug': slugify(category_name),
'sector': content.sector,
'account': content.account,
'description': '',
'external_taxonomy': '',
'sync_status': '',
'count': 0,
'metadata': {},
}
)
content.taxonomy_terms.add(category_obj)
print(f" + Category: {category_name} ({'created' if created else 'existing'})")
except Exception as e:
print(f" ✗ Failed to add category '{category_name}': {e}")
fixed_count += 1
print("\n" + "=" * 80)
print(f"FIXED {fixed_count} CONTENT ITEMS")
print("=" * 80)


@@ -1,57 +0,0 @@
#!/usr/bin/env python3
"""Force cancel stuck automation runs and clear cache locks"""
import os
import sys
import django
# Setup Django
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'igny8_core.settings')
django.setup()
from igny8_core.business.automation.models import AutomationRun
from django.core.cache import cache
from django.utils import timezone
print("=" * 80)
print("AUTOMATION RUN FORCE CANCEL & CLEANUP")
print("=" * 80)
# Check and cancel active runs
runs = AutomationRun.objects.filter(status__in=['running', 'paused']).order_by('-started_at')
print(f"\nFound {runs.count()} active run(s)")
if runs.count() == 0:
print(" No runs to cancel\n")
else:
for r in runs:
duration = (timezone.now() - r.started_at).total_seconds() / 60
print(f"\nRun ID: {r.run_id}")
print(f" Site: {r.site_id}")
print(f" Status: {r.status}")
print(f" Stage: {r.current_stage}")
print(f" Started: {r.started_at} ({duration:.1f}m ago)")
print(f" Credits: {r.total_credits_used}")
# Force cancel
print(f" >>> FORCE CANCELLING...")
r.status = 'cancelled'
r.save()
print(f" >>> Status: {r.status}")
# Clear cache lock
lock_key = f'automation_lock_{r.site_id}'
cache.delete(lock_key)
print(f" >>> Lock cleared: {lock_key}")
print("\n" + "=" * 40)
print("Cache lock status:")
for site_id in [5, 16]:
lock_key = f'automation_lock_{site_id}'
lock_val = cache.get(lock_key)
status = lock_val or 'UNLOCKED ✓'
print(f" Site {site_id}: {status}")
print("\n" + "=" * 80)
print("✓ CLEANUP COMPLETE - You can now start a new automation run")
print("=" * 80)

Binary image file added (164 KiB, not shown).


@@ -1,43 +1,8 @@
from django.contrib import admin
from django.contrib.admin.apps import AdminConfig
class ReadOnlyAdmin(admin.ModelAdmin):
"""Generic read-only admin for system tables."""
def has_add_permission(self, request):
return False
def has_change_permission(self, request, obj=None):
return False
def has_delete_permission(self, request, obj=None):
return False
def _safe_register(model, model_admin):
try:
admin.site.register(model, model_admin)
except admin.sites.AlreadyRegistered:
pass
class Igny8AdminConfig(AdminConfig):
default_site = 'igny8_core.admin.site.Igny8AdminSite'
name = 'django.contrib.admin'
def ready(self):
super().ready()
# Register Django internals in admin (read-only where appropriate)
from django.contrib.admin.models import LogEntry
from django.contrib.auth.models import Group, Permission
from django.contrib.contenttypes.models import ContentType
from django.contrib.sessions.models import Session
_safe_register(LogEntry, ReadOnlyAdmin)
_safe_register(Permission, admin.ModelAdmin)
_safe_register(Group, admin.ModelAdmin)
_safe_register(ContentType, ReadOnlyAdmin)
_safe_register(Session, ReadOnlyAdmin)


@@ -37,10 +37,6 @@ class Igny8AdminSite(admin.AdminSite):
('igny8_core_auth', 'Subscription'),
('billing', 'CreditTransaction'),
('billing', 'CreditUsageLog'),
('billing', 'Invoice'),
('billing', 'Payment'),
('billing', 'CreditPackage'),
('billing', 'CreditCostConfig'),
],
},
'Sites & Users': {
@@ -49,7 +45,6 @@ class Igny8AdminSite(admin.AdminSite):
('igny8_core_auth', 'User'),
('igny8_core_auth', 'SiteUserAccess'),
('igny8_core_auth', 'PasswordResetToken'),
('igny8_core_auth', 'Sector'),
],
},
'Global Reference Data': {
@@ -57,10 +52,6 @@ class Igny8AdminSite(admin.AdminSite):
('igny8_core_auth', 'Industry'),
('igny8_core_auth', 'IndustrySector'),
('igny8_core_auth', 'SeedKeyword'),
('site_building', 'BusinessType'),
('site_building', 'AudienceProfile'),
('site_building', 'BrandPersonality'),
('site_building', 'HeroImageryDirection'),
],
},
'Planner': {
@@ -75,10 +66,6 @@ class Igny8AdminSite(admin.AdminSite):
('writer', 'Tasks'),
('writer', 'Content'),
('writer', 'Images'),
('writer', 'ContentTaxonomy'),
('writer', 'ContentAttribute'),
('writer', 'ContentTaxonomyRelation'),
('writer', 'ContentClusterMap'),
],
},
'Thinker Module': {
@@ -86,7 +73,6 @@ class Igny8AdminSite(admin.AdminSite):
('system', 'AIPrompt'),
('system', 'AuthorProfile'),
('system', 'Strategy'),
('ai', 'AITaskLog'),
],
},
'System Configuration': {
@@ -99,42 +85,6 @@ class Igny8AdminSite(admin.AdminSite):
('system', 'UserSettings'),
('system', 'ModuleSettings'),
('system', 'AISettings'),
('system', 'ModuleEnableSettings'),
# Automation config lives under the automation app - include here
('automation', 'AutomationConfig'),
('automation', 'AutomationRun'),
],
},
'Payments': {
'models': [
('billing', 'PaymentMethodConfig'),
('billing', 'AccountPaymentMethod'),
],
},
'Integrations & Sync': {
'models': [
('integration', 'SiteIntegration'),
('integration', 'SyncEvent'),
],
},
'Publishing': {
'models': [
('publishing', 'PublishingRecord'),
('publishing', 'DeploymentRecord'),
],
},
'Optimization': {
'models': [
('optimization', 'OptimizationTask'),
],
},
'Django Internals': {
'models': [
('admin', 'LogEntry'),
('auth', 'Group'),
('auth', 'Permission'),
('contenttypes', 'ContentType'),
('sessions', 'Session'),
],
},
}
@@ -174,11 +124,6 @@ class Igny8AdminSite(admin.AdminSite):
'Writer Module',
'Thinker Module',
'System Configuration',
'Payments',
'Integrations & Sync',
'Publishing',
'Optimization',
'Django Internals',
]
app_list.sort(key=lambda x: order.index(x['name']) if x['name'] in order else 999)


@@ -34,8 +34,6 @@ class AIEngine:
return f"{count} task{'s' if count != 1 else ''}"
elif function_name == 'generate_images':
return f"{count} task{'s' if count != 1 else ''}"
elif function_name == 'generate_site_structure':
return "1 site blueprint"
return f"{count} item{'s' if count != 1 else ''}"
def _build_validation_message(self, function_name: str, payload: dict, count: int, input_description: str) -> str:
@@ -82,13 +80,6 @@ class AIEngine:
total_images = 1 + max_images
return f"Mapping Content for {total_images} Image Prompts"
return f"Mapping Content for Image Prompts"
elif function_name == 'generate_site_structure':
blueprint_name = ''
if isinstance(data, dict):
blueprint = data.get('blueprint')
if blueprint and getattr(blueprint, 'name', None):
blueprint_name = f'"{blueprint.name}"'
return f"Preparing site blueprint {blueprint_name}".strip()
return f"Preparing {count} item{'s' if count != 1 else ''}"
def _get_ai_call_message(self, function_name: str, count: int) -> str:
@@ -101,8 +92,6 @@ class AIEngine:
return f"Writing article{'s' if count != 1 else ''} with AI" return f"Writing article{'s' if count != 1 else ''} with AI"
elif function_name == 'generate_images': elif function_name == 'generate_images':
return f"Creating image{'s' if count != 1 else ''} with AI" return f"Creating image{'s' if count != 1 else ''} with AI"
elif function_name == 'generate_site_structure':
return "Designing complete site architecture"
return f"Processing with AI" return f"Processing with AI"
def _get_parse_message(self, function_name: str) -> str: def _get_parse_message(self, function_name: str) -> str:
@@ -115,8 +104,6 @@ class AIEngine:
return "Formatting content" return "Formatting content"
elif function_name == 'generate_images': elif function_name == 'generate_images':
return "Processing images" return "Processing images"
elif function_name == 'generate_site_structure':
return "Compiling site map"
return "Processing results" return "Processing results"
def _get_parse_message_with_count(self, function_name: str, count: int) -> str: def _get_parse_message_with_count(self, function_name: str, count: int) -> str:
@@ -135,8 +122,6 @@ class AIEngine:
if in_article_count > 0: if in_article_count > 0:
return f"Writing {in_article_count} Inarticle Image Prompts" return f"Writing {in_article_count} Inarticle Image Prompts"
return "Writing Inarticle Image Prompts" return "Writing Inarticle Image Prompts"
elif function_name == 'generate_site_structure':
return f"{count} page blueprint{'s' if count != 1 else ''} mapped"
return f"{count} item{'s' if count != 1 else ''} processed" return f"{count} item{'s' if count != 1 else ''} processed"
def _get_save_message(self, function_name: str, count: int) -> str: def _get_save_message(self, function_name: str, count: int) -> str:
@@ -152,8 +137,6 @@ class AIEngine:
elif function_name == 'generate_image_prompts': elif function_name == 'generate_image_prompts':
# Count is total prompts created # Count is total prompts created
return f"Assigning {count} Prompts to Dedicated Slots" return f"Assigning {count} Prompts to Dedicated Slots"
elif function_name == 'generate_site_structure':
return f"Publishing {count} page blueprint{'s' if count != 1 else ''}"
return f"Saving {count} item{'s' if count != 1 else ''}" return f"Saving {count} item{'s' if count != 1 else ''}"
def execute(self, fn: BaseAIFunction, payload: dict) -> dict: def execute(self, fn: BaseAIFunction, payload: dict) -> dict:
@@ -209,31 +192,6 @@ class AIEngine:
             self.step_tracker.add_request_step("PREP", "success", prep_message)
             self.tracker.update("PREP", 25, prep_message, meta=self.step_tracker.get_meta())

-            # Phase 2.5: CREDIT CHECK - Check credits before AI call (25%)
-            if self.account:
-                try:
-                    from igny8_core.business.billing.services.credit_service import CreditService
-                    from igny8_core.business.billing.exceptions import InsufficientCreditsError
-                    # Map function name to operation type
-                    operation_type = self._get_operation_type(function_name)
-                    # Calculate estimated cost
-                    estimated_amount = self._get_estimated_amount(function_name, data, payload)
-                    # Check credits BEFORE AI call
-                    CreditService.check_credits(self.account, operation_type, estimated_amount)
-                    logger.info(f"[AIEngine] Credit check passed: {operation_type}, estimated amount: {estimated_amount}")
-                except InsufficientCreditsError as e:
-                    error_msg = str(e)
-                    error_type = 'InsufficientCreditsError'
-                    logger.error(f"[AIEngine] {error_msg}")
-                    return self._handle_error(error_msg, fn, error_type=error_type)
-                except Exception as e:
-                    logger.warning(f"[AIEngine] Failed to check credits: {e}", exc_info=True)
-                    # Don't fail the operation if credit check fails (for backward compatibility)

             # Phase 3: AI_CALL - Provider API Call (25-70%)
             # Validate account exists before proceeding
             if not self.account:
@@ -367,45 +325,37 @@ class AIEngine:
             # Store save_msg for use in DONE phase
             final_save_msg = save_msg

-            # Phase 5.5: DEDUCT CREDITS - Deduct credits after successful save
+            # Track credit usage after successful save
             if self.account and raw_response:
                 try:
-                    from igny8_core.business.billing.services.credit_service import CreditService
-                    from igny8_core.business.billing.exceptions import InsufficientCreditsError
-                    # Map function name to operation type
-                    operation_type = self._get_operation_type(function_name)
-                    # Calculate actual amount based on results
-                    actual_amount = self._get_actual_amount(function_name, save_result, parsed, data)
-                    # Deduct credits using the new convenience method
-                    CreditService.deduct_credits_for_operation(
+                    from igny8_core.modules.billing.services import CreditService
+                    from igny8_core.modules.billing.models import CreditUsageLog
+                    # Calculate credits used (based on tokens or fixed cost)
+                    credits_used = self._calculate_credits_for_clustering(
+                        keyword_count=len(data.get('keywords', [])) if isinstance(data, dict) else len(data) if isinstance(data, list) else 1,
+                        tokens=raw_response.get('total_tokens', 0),
+                        cost=raw_response.get('cost', 0)
+                    )
+                    # Log credit usage (don't deduct from account.credits, just log)
+                    CreditUsageLog.objects.create(
                         account=self.account,
-                        operation_type=operation_type,
-                        amount=actual_amount,
+                        operation_type='clustering',
+                        credits_used=credits_used,
                         cost_usd=raw_response.get('cost'),
                         model_used=raw_response.get('model', ''),
                         tokens_input=raw_response.get('tokens_input', 0),
                         tokens_output=raw_response.get('tokens_output', 0),
-                        related_object_type=self._get_related_object_type(function_name),
-                        related_object_id=save_result.get('id') or save_result.get('cluster_id') or save_result.get('task_id'),
+                        related_object_type='cluster',
                         metadata={
-                            'function_name': function_name,
                             'clusters_created': clusters_created,
                             'keywords_updated': keywords_updated,
-                            'count': count,
-                            **save_result
+                            'function_name': function_name
                         }
                     )
-                    logger.info(f"[AIEngine] Credits deducted: {operation_type}, amount: {actual_amount}")
-                except InsufficientCreditsError as e:
-                    # This shouldn't happen since we checked before, but log it
-                    logger.error(f"[AIEngine] Insufficient credits during deduction: {e}")
                 except Exception as e:
-                    logger.warning(f"[AIEngine] Failed to deduct credits: {e}", exc_info=True)
-                    # Don't fail the operation if credit deduction fails (for backward compatibility)
+                    logger.warning(f"Failed to log credit usage: {e}", exc_info=True)

             # Phase 6: DONE - Finalization (98-100%)
             success_msg = f"Task completed: {final_save_msg}" if 'final_save_msg' in locals() else "Task completed successfully"
@@ -503,86 +453,18 @@ class AIEngine:
                 # Don't fail the task if logging fails
                 logger.warning(f"Failed to log to database: {e}")

-    def _get_operation_type(self, function_name):
-        """Map function name to operation type for credit system"""
-        mapping = {
-            'auto_cluster': 'clustering',
-            'generate_ideas': 'idea_generation',
-            'generate_content': 'content_generation',
-            'generate_image_prompts': 'image_prompt_extraction',
-            'generate_images': 'image_generation',
-            'generate_site_structure': 'site_structure_generation',
-        }
-        return mapping.get(function_name, function_name)
-
-    def _get_estimated_amount(self, function_name, data, payload):
-        """Get estimated amount for credit calculation (before operation)"""
-        if function_name == 'generate_content':
-            # Estimate word count - tasks don't have word_count field, use default
-            # data is a list of Task objects
-            if isinstance(data, list) and len(data) > 0:
-                # Multiple tasks - estimate 1000 words per task
-                return len(data) * 1000
-            return 1000 # Default estimate for single item
-        elif function_name == 'generate_images':
-            # Count images to generate
-            if isinstance(payload, dict):
-                image_ids = payload.get('image_ids', [])
-                return len(image_ids) if image_ids else 1
-            return 1
-        elif function_name == 'generate_ideas':
-            # Count clusters
-            if isinstance(data, dict) and 'cluster_data' in data:
-                return len(data['cluster_data'])
-            return 1
-        # For fixed cost operations (clustering, image_prompt_extraction), return None
-        return None
-
-    def _get_actual_amount(self, function_name, save_result, parsed, data):
-        """Get actual amount for credit calculation (after operation)"""
-        if function_name == 'generate_content':
-            # Get actual word count from saved content
-            if isinstance(save_result, dict):
-                word_count = save_result.get('word_count')
-                if word_count and word_count > 0:
-                    return word_count
-            # Fallback: estimate from parsed content
-            if isinstance(parsed, dict) and 'content' in parsed:
-                content = parsed['content']
-                return len(content.split()) if isinstance(content, str) else 1000
-            # Fallback: estimate from html_content if available
-            if isinstance(parsed, dict) and 'html_content' in parsed:
-                html_content = parsed['html_content']
-                if isinstance(html_content, str):
-                    # Strip HTML tags for word count
-                    import re
-                    text = re.sub(r'<[^>]+>', '', html_content)
-                    return len(text.split())
-            return 1000
-        elif function_name == 'generate_images':
-            # Count successfully generated images
-            count = save_result.get('count', 0)
-            if count > 0:
-                return count
-            return 1
-        elif function_name == 'generate_ideas':
-            # Count ideas generated
-            count = save_result.get('count', 0)
-            if count > 0:
-                return count
-            return 1
-        # For fixed cost operations, return None
-        return None
-
-    def _get_related_object_type(self, function_name):
-        """Get related object type for credit logging"""
-        mapping = {
-            'auto_cluster': 'cluster',
-            'generate_ideas': 'content_idea',
-            'generate_content': 'content',
-            'generate_image_prompts': 'image',
-            'generate_images': 'image',
-            'generate_site_structure': 'site_blueprint',
-        }
-        return mapping.get(function_name, 'unknown')
+    def _calculate_credits_for_clustering(self, keyword_count, tokens, cost):
+        """Calculate credits used for clustering operation"""
+        # Use plan's cost per request if available, otherwise calculate from tokens
+        if self.account and hasattr(self.account, 'plan') and self.account.plan:
+            plan = self.account.plan
+            # Check if plan has ai_cost_per_request config
+            if hasattr(plan, 'ai_cost_per_request') and plan.ai_cost_per_request:
+                cluster_cost = plan.ai_cost_per_request.get('cluster', 0)
+                if cluster_cost:
+                    return int(cluster_cost)
+        # Fallback: 1 credit per 30 keywords (minimum 1)
+        credits = max(1, int(keyword_count / 30))
+        return credits
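The fallback pricing rule introduced above (plan override first, otherwise one credit per 30 keywords with a floor of one) can be sketched in isolation. This is an illustrative, Django-free version; the function and parameter names are hypothetical and stand in for the `_calculate_credits_for_clustering` logic in the hunk:

```python
def clustering_credits(keyword_count, plan_costs=None):
    """Credits for one clustering run: a plan-level override if configured,
    otherwise 1 credit per 30 keywords with a floor of 1."""
    if plan_costs:
        cluster_cost = plan_costs.get('cluster', 0)
        if cluster_cost:
            return int(cluster_cost)
    # Fallback: 1 credit per 30 keywords (minimum 1)
    return max(1, keyword_count // 30)
```

Note that a configured cost of `0` is falsy and therefore falls through to the per-keyword rule, mirroring the `if cluster_cost:` guard in the diff.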

View File

@@ -40,7 +40,6 @@ class AutoClusterFunction(BaseAIFunction):
     def validate(self, payload: dict, account=None) -> Dict:
         """Custom validation for clustering"""
        from igny8_core.ai.validators import validate_ids, validate_keywords_exist
-        from igny8_core.ai.validators.cluster_validators import validate_minimum_keywords

         # Base validation (no max_items limit)
         result = validate_ids(payload, max_items=None)
@@ -53,21 +52,6 @@ class AutoClusterFunction(BaseAIFunction):
         if not keywords_result['valid']:
             return keywords_result

-        # NEW: Validate minimum keywords (5 required for meaningful clustering)
-        min_validation = validate_minimum_keywords(
-            keyword_ids=ids,
-            account=account,
-            min_required=5
-        )
-        if not min_validation['valid']:
-            logger.warning(f"[AutoCluster] Validation failed: {min_validation['error']}")
-            return min_validation
-
-        logger.info(
-            f"[AutoCluster] Validation passed: {min_validation['count']} keywords available (min: {min_validation['required']})"
-        )
-
         # Removed plan limits check
         return {'valid': True}
@@ -265,7 +249,7 @@ class AutoClusterFunction(BaseAIFunction):
                     sector=sector,
                     defaults={
                         'description': cluster_data.get('description', ''),
-                        'status': 'new', # FIXED: Changed from 'active' to 'new'
+                        'status': 'active',
                     }
                 )
             else:
@@ -276,7 +260,7 @@ class AutoClusterFunction(BaseAIFunction):
                     sector__isnull=True,
                     defaults={
                         'description': cluster_data.get('description', ''),
-                        'status': 'new', # FIXED: Changed from 'active' to 'new'
+                        'status': 'active',
                         'sector': None,
                     }
                 )
@@ -308,10 +292,9 @@ class AutoClusterFunction(BaseAIFunction):
            else:
                keyword_filter = keyword_filter.filter(sector__isnull=True)

-            # FIXED: Ensure keywords status updates from 'new' to 'mapped'
             updated_count = keyword_filter.update(
                 cluster=cluster,
-                status='mapped' # Status changes from 'new' to 'mapped'
+                status='mapped'
             )
             keywords_updated += updated_count
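The validator removed in this file required at least five keywords before clustering. A minimal, ORM-free sketch of that check (the function shape is hypothetical; the real `validate_minimum_keywords` also scoped the query to the account):

```python
def validate_minimum_keywords(keyword_ids, min_required=5):
    """Illustrative version of the removed check: clustering needs at
    least `min_required` distinct keywords to produce meaningful groups."""
    count = len(set(keyword_ids))  # ignore duplicate IDs
    if count < min_required:
        return {
            'valid': False,
            'error': f'Need at least {min_required} keywords to cluster, got {count}',
            'count': count,
            'required': min_required,
        }
    return {'valid': True, 'count': count, 'required': min_required}
```

Returning a dict with `valid`/`error` keys matches the convention the other validators in this file use, so the caller can short-circuit with `return min_validation`.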

View File

@@ -1,14 +1,13 @@
 """
 Generate Content AI Function
-STAGE 3: Updated to use final Stage 1 Content schema
+Extracted from modules/writer/tasks.py
 """
 import logging
 import re
 from typing import Dict, List, Any
 from django.db import transaction
 from igny8_core.ai.base import BaseAIFunction
-from igny8_core.modules.writer.models import Tasks, Content
-from igny8_core.business.content.models import ContentTaxonomy
+from igny8_core.modules.writer.models import Tasks, Content as TaskContent
 from igny8_core.ai.ai_core import AICore
 from igny8_core.ai.validators import validate_tasks_exist
 from igny8_core.ai.prompts import PromptRegistry
@@ -63,9 +62,9 @@ class GenerateContentFunction(BaseAIFunction):
         if account:
             queryset = queryset.filter(account=account)

-        # STAGE 3: Preload relationships - taxonomy_term instead of taxonomy
+        # Preload all relationships to avoid N+1 queries
         tasks = list(queryset.select_related(
-            'account', 'site', 'sector', 'cluster', 'taxonomy_term'
+            'account', 'site', 'sector', 'cluster', 'idea'
         ))

         if not tasks:
@@ -74,8 +73,9 @@ class GenerateContentFunction(BaseAIFunction):
         return tasks

     def build_prompt(self, data: Any, account=None) -> str:
-        """STAGE 3: Build content generation prompt using final Task schema"""
+        """Build content generation prompt for a single task using registry"""
         if isinstance(data, list):
+            # For now, handle single task (will be called per task)
             if not data:
                 raise ValueError("No tasks provided")
             task = data[0]
@@ -89,9 +89,33 @@ class GenerateContentFunction(BaseAIFunction):
         if task.description:
             idea_data += f"Description: {task.description}\n"

-        # Add content type and structure from task
-        idea_data += f"Content Type: {task.content_type or 'post'}\n"
-        idea_data += f"Content Structure: {task.content_structure or 'article'}\n"
+        # Handle idea description (might be JSON or plain text)
+        if task.idea and task.idea.description:
+            description = task.idea.description
+            try:
+                import json
+                parsed_desc = json.loads(description)
+                if isinstance(parsed_desc, dict):
+                    formatted_desc = "Content Outline:\n\n"
+                    if 'H2' in parsed_desc:
+                        for h2_section in parsed_desc['H2']:
+                            formatted_desc += f"## {h2_section.get('heading', '')}\n"
+                            if 'subsections' in h2_section:
+                                for h3_section in h2_section['subsections']:
+                                    formatted_desc += f"### {h3_section.get('subheading', '')}\n"
+                                    formatted_desc += f"Content Type: {h3_section.get('content_type', '')}\n"
+                                    formatted_desc += f"Details: {h3_section.get('details', '')}\n\n"
+                    description = formatted_desc
+            except (json.JSONDecodeError, TypeError):
+                pass  # Use as plain text
+            idea_data += f"Outline: {description}\n"
+
+        if task.idea:
+            idea_data += f"Structure: {task.idea.content_structure or task.content_structure or 'blog_post'}\n"
+            idea_data += f"Type: {task.idea.content_type or task.content_type or 'blog_post'}\n"
+            if task.idea.estimated_word_count:
+                idea_data += f"Estimated Word Count: {task.idea.estimated_word_count}\n"

         # Build cluster data string
         cluster_data = ''
@@ -99,18 +123,12 @@ class GenerateContentFunction(BaseAIFunction):
             cluster_data = f"Cluster Name: {task.cluster.name or ''}\n"
             if task.cluster.description:
                 cluster_data += f"Description: {task.cluster.description}\n"
-            cluster_data += f"Status: {task.cluster.status or 'active'}\n"

-        # STAGE 3: Build taxonomy context (from taxonomy_term FK)
-        taxonomy_data = ''
-        if task.taxonomy_term:
-            taxonomy_data = f"Taxonomy: {task.taxonomy_term.name or ''}\n"
-            if task.taxonomy_term.taxonomy_type:
-                taxonomy_data += f"Type: {task.taxonomy_term.get_taxonomy_type_display()}\n"
-
-        # STAGE 3: Build keywords context (from keywords TextField)
-        keywords_data = ''
-        if task.keywords:
-            keywords_data = f"Keywords: {task.keywords}\n"
+        # Build keywords string
+        keywords_data = task.keywords or ''
+        if not keywords_data and task.idea:
+            keywords_data = task.idea.target_keywords or ''

         # Get prompt from registry with context
         prompt = PromptRegistry.get_prompt(
@@ -120,7 +138,6 @@ class GenerateContentFunction(BaseAIFunction):
             context={
                 'IDEA': idea_data,
                 'CLUSTER': cluster_data,
-                'TAXONOMY': taxonomy_data,
                 'KEYWORDS': keywords_data,
             }
         )
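The outline handling added in `build_prompt` (structured `H2`/`subsections` JSON rendered into readable prompt text, anything unparsable passed through) can be pulled out as a standalone helper. A sketch mirroring the logic in the hunk above; the helper name is an assumption, not a function in the codebase:

```python
import json

def format_outline(description):
    """Render an idea description for the prompt: structured H2/H3 JSON
    becomes a readable outline, anything else passes through unchanged."""
    try:
        parsed = json.loads(description)
    except (json.JSONDecodeError, TypeError):
        return description  # plain text: use as-is
    if not isinstance(parsed, dict):
        return description
    out = "Content Outline:\n\n"
    for h2_section in parsed.get('H2', []):
        out += f"## {h2_section.get('heading', '')}\n"
        for h3_section in h2_section.get('subsections', []):
            out += f"### {h3_section.get('subheading', '')}\n"
            out += f"Content Type: {h3_section.get('content_type', '')}\n"
            out += f"Details: {h3_section.get('details', '')}\n\n"
    return out
```

Using `.get(..., [])` instead of the explicit `if 'H2' in parsed` membership checks keeps the control flow flat while producing the same output.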
@@ -159,11 +176,7 @@ class GenerateContentFunction(BaseAIFunction):
         progress_tracker=None,
         step_tracker=None
     ) -> Dict:
-        """
-        STAGE 3: Save content using final Stage 1 Content model schema.
-        Creates independent Content record (no OneToOne to Task).
-        Handles tags and categories from AI response.
-        """
+        """Save content to task - handles both JSON and plain text responses"""
         if isinstance(original_data, list):
             task = original_data[0] if original_data else None
         else:
@@ -177,158 +190,113 @@ class GenerateContentFunction(BaseAIFunction):
             # JSON response with structured fields
             content_html = parsed.get('content', '')
             title = parsed.get('title') or task.title
-            meta_title = parsed.get('meta_title') or parsed.get('seo_title') or title
-            meta_description = parsed.get('meta_description') or parsed.get('seo_description')
-            primary_keyword = parsed.get('primary_keyword') or parsed.get('focus_keyword')
-            secondary_keywords = parsed.get('secondary_keywords') or parsed.get('keywords', [])
-            # Extract tags and categories from AI response
-            tags_from_response = parsed.get('tags', [])
-            categories_from_response = parsed.get('categories', [])
-
-            # DEBUG: Log the full parsed response to see what we're getting
-            logger.info(f"===== GENERATE CONTENT DEBUG =====")
-            logger.info(f"Full parsed response keys: {list(parsed.keys())}")
-            logger.info(f"Tags from response (type: {type(tags_from_response)}): {tags_from_response}")
-            logger.info(f"Categories from response (type: {type(categories_from_response)}): {categories_from_response}")
-            logger.info(f"==================================")
+            meta_title = parsed.get('meta_title') or title or task.title
+            meta_description = parsed.get('meta_description', '')
+            word_count = parsed.get('word_count', 0)
+            primary_keyword = parsed.get('primary_keyword', '')
+            secondary_keywords = parsed.get('secondary_keywords', [])
+            tags = parsed.get('tags', [])
+            categories = parsed.get('categories', [])
+            # Content status should always be 'draft' for newly generated content
+            # Status can only be changed manually to 'review' or 'publish'
+            content_status = 'draft'
         else:
-            # Plain text response
+            # Plain text response (legacy)
             content_html = str(parsed)
             title = task.title
-            meta_title = title
-            meta_description = None
-            primary_keyword = None
+            meta_title = task.meta_title or task.title
+            meta_description = task.meta_description or (task.description or '')[:160] if task.description else ''
+            word_count = 0
+            primary_keyword = ''
             secondary_keywords = []
-            tags_from_response = []
-            categories_from_response = []
+            tags = []
+            categories = []
+            content_status = 'draft'

-        # Calculate word count
-        word_count = 0
-        if content_html:
+        # Calculate word count if not provided
+        if not word_count and content_html:
             text_for_counting = re.sub(r'<[^>]+>', '', content_html)
             word_count = len(text_for_counting.split())

-        # STAGE 3: Create independent Content record using final schema
-        content_record = Content.objects.create(
-            # Core fields
-            title=title,
-            content_html=content_html or '',
-            word_count=word_count,
-            # SEO fields
-            meta_title=meta_title,
-            meta_description=meta_description,
-            primary_keyword=primary_keyword,
-            secondary_keywords=secondary_keywords if isinstance(secondary_keywords, list) else [],
-            # Structure
-            cluster=task.cluster,
-            content_type=task.content_type,
-            content_structure=task.content_structure,
-            # Source and status
-            source='igny8',
-            status='draft',
-            # Site/Sector/Account
-            account=task.account,
-            site=task.site,
-            sector=task.sector,
-        )
-        logger.info(f"Created content record ID: {content_record.id}")
-        logger.info(f"Processing taxonomies - Tags: {len(tags_from_response) if tags_from_response else 0}, Categories: {len(categories_from_response) if categories_from_response else 0}")
-
-        # Link taxonomy terms from task if available
-        if task.taxonomy_term:
-            content_record.taxonomy_terms.add(task.taxonomy_term)
-            logger.info(f"Added task taxonomy term: {task.taxonomy_term.name}")
-
-        # Process tags from AI response
-        logger.info(f"Starting tag processing: {tags_from_response}")
-        if tags_from_response and isinstance(tags_from_response, list):
-            from django.utils.text import slugify
-            for tag_name in tags_from_response:
-                logger.info(f"Processing tag: '{tag_name}' (type: {type(tag_name)})")
-                if tag_name and isinstance(tag_name, str):
-                    tag_name = tag_name.strip()
-                    if tag_name:
-                        try:
-                            tag_slug = slugify(tag_name)
-                            logger.info(f"Creating/finding tag: name='{tag_name}', slug='{tag_slug}'")
-                            # Get or create tag taxonomy term using site + slug + type for uniqueness
-                            tag_obj, created = ContentTaxonomy.objects.get_or_create(
-                                site=task.site,
-                                slug=tag_slug,
-                                taxonomy_type='tag',
-                                defaults={
-                                    'name': tag_name,
-                                    'sector': task.sector,
-                                    'account': task.account,
-                                    'description': '',
-                                    'external_taxonomy': '',
-                                    'sync_status': '',
-                                    'count': 0,
-                                    'metadata': {},
-                                }
-                            )
-                            content_record.taxonomy_terms.add(tag_obj)
-                            logger.info(f"{'Created' if created else 'Found'} and linked tag: {tag_name} (ID: {tag_obj.id}, Slug: {tag_slug})")
-                        except Exception as e:
-                            logger.error(f"❌ Failed to add tag '{tag_name}': {e}", exc_info=True)
-                else:
-                    logger.warning(f"Skipping invalid tag: '{tag_name}' (type: {type(tag_name)})")
-        else:
-            logger.info(f"No tags to process or tags_from_response is not a list: {type(tags_from_response)}")
-
-        # Process categories from AI response
-        logger.info(f"Starting category processing: {categories_from_response}")
-        if categories_from_response and isinstance(categories_from_response, list):
-            from django.utils.text import slugify
-            for category_name in categories_from_response:
-                logger.info(f"Processing category: '{category_name}' (type: {type(category_name)})")
-                if category_name and isinstance(category_name, str):
-                    category_name = category_name.strip()
-                    if category_name:
-                        try:
-                            category_slug = slugify(category_name)
-                            logger.info(f"Creating/finding category: name='{category_name}', slug='{category_slug}'")
-                            # Get or create category taxonomy term using site + slug + type for uniqueness
-                            category_obj, created = ContentTaxonomy.objects.get_or_create(
-                                site=task.site,
-                                slug=category_slug,
-                                taxonomy_type='category',
-                                defaults={
-                                    'name': category_name,
-                                    'sector': task.sector,
-                                    'account': task.account,
-                                    'description': '',
-                                    'external_taxonomy': '',
-                                    'sync_status': '',
-                                    'count': 0,
-                                    'metadata': {},
-                                }
-                            )
-                            content_record.taxonomy_terms.add(category_obj)
-                            logger.info(f"{'Created' if created else 'Found'} and linked category: {category_name} (ID: {category_obj.id}, Slug: {category_slug})")
-                        except Exception as e:
-                            logger.error(f"❌ Failed to add category '{category_name}': {e}", exc_info=True)
-                else:
-                    logger.warning(f"Skipping invalid category: '{category_name}' (type: {type(category_name)})")
-        else:
-            logger.info(f"No categories to process or categories_from_response is not a list: {type(categories_from_response)}")
-
-        # STAGE 3: Update task status to completed
+        # Ensure related content record exists
+        content_record, _created = TaskContent.objects.get_or_create(
+            task=task,
+            defaults={
+                'account': task.account,
+                'site': task.site,
+                'sector': task.sector,
+                'html_content': content_html or '',
+                'word_count': word_count or 0,
+                'status': 'draft',
+            },
+        )
+
+        # Update content fields
+        if content_html:
+            content_record.html_content = content_html
+        content_record.word_count = word_count or content_record.word_count or 0
+        content_record.title = title
+        content_record.meta_title = meta_title
+        content_record.meta_description = meta_description
+        content_record.primary_keyword = primary_keyword or ''
+        if isinstance(secondary_keywords, list):
+            content_record.secondary_keywords = secondary_keywords
+        elif secondary_keywords:
+            content_record.secondary_keywords = [secondary_keywords]
+        else:
+            content_record.secondary_keywords = []
+        if isinstance(tags, list):
+            content_record.tags = tags
+        elif tags:
+            content_record.tags = [tags]
+        else:
+            content_record.tags = []
+        if isinstance(categories, list):
+            content_record.categories = categories
+        elif categories:
+            content_record.categories = [categories]
+        else:
+            content_record.categories = []
+
+        # Always set status to 'draft' for newly generated content
+        # Status can only be: draft, review, published (changed manually)
+        content_record.status = 'draft'
+
+        # Merge any extra fields into metadata (non-standard keys)
+        if isinstance(parsed, dict):
+            excluded_keys = {
+                'content',
+                'title',
+                'meta_title',
+                'meta_description',
+                'primary_keyword',
+                'secondary_keywords',
+                'tags',
+                'categories',
+                'word_count',
+                'status',
+            }
+            extra_meta = {k: v for k, v in parsed.items() if k not in excluded_keys}
+            existing_meta = content_record.metadata or {}
+            existing_meta.update(extra_meta)
+            content_record.metadata = existing_meta
+
+        # Align foreign keys to ensure consistency
+        content_record.account = task.account
+        content_record.site = task.site
+        content_record.sector = task.sector
+        content_record.task = task
+        content_record.save()
+
+        # Update task status - keep task data intact but mark as completed
         task.status = 'completed'
         task.save(update_fields=['status', 'updated_at'])

-        # NEW: Auto-sync idea status from task status
-        if hasattr(task, 'idea') and task.idea:
-            task.idea.status = 'completed'
-            task.idea.save(update_fields=['status', 'updated_at'])
-            logger.info(f"Updated related idea ID {task.idea.id} to completed")
-
         return {
             'count': 1,
-            'content_id': content_record.id,
-            'task_id': task.id,
-            'word_count': word_count,
+            'tasks_updated': 1,
+            'word_count': content_record.word_count,
         }
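Both sides of this hunk compute word count by stripping HTML tags with `re.sub(r'<[^>]+>', '', ...)` and splitting on whitespace. A standalone sketch of that step; note one deliberate deviation, flagged here because it differs from the diff: replacing tags with a space instead of the empty string avoids gluing adjacent words across tag boundaries (`<p>one</p><p>two</p>` would otherwise count as one word):

```python
import re

def word_count_from_html(html):
    """Word count used for content metadata: strip tags, then count
    whitespace-separated tokens. Tags are replaced with a space so that
    words in adjacent elements are not fused together."""
    text = re.sub(r'<[^>]+>', ' ', html or '')
    return len(text.split())
```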

View File

@@ -208,16 +208,12 @@ class GenerateIdeasFunction(BaseAIFunction):
             # Handle target_keywords
             target_keywords = idea_data.get('covered_keywords', '') or idea_data.get('target_keywords', '')

-            # Direct mapping - no conversion needed
-            content_type = idea_data.get('content_type', 'post')
-            content_structure = idea_data.get('content_structure', 'article')
-
             # Create ContentIdeas record
             ContentIdeas.objects.create(
                 idea_title=idea_data.get('title', 'Untitled Idea'),
-                description=description, # Stored as JSON string
-                content_type=content_type,
-                content_structure=content_structure,
+                description=description,
+                content_type=idea_data.get('content_type', 'blog_post'),
+                content_structure=idea_data.get('content_structure', 'supporting_page'),
                 target_keywords=target_keywords,
                 keyword_cluster=cluster,
                 estimated_word_count=idea_data.get('estimated_word_count', 1500),
@@ -227,11 +223,6 @@ class GenerateIdeasFunction(BaseAIFunction):
                 sector=cluster.sector,
             )
             ideas_created += 1

-            # Update cluster status to 'mapped' after ideas are generated
-            if cluster and cluster.status == 'new':
-                cluster.status = 'mapped'
-                cluster.save()

         return {
             'count': ideas_created,

View File

@@ -63,7 +63,7 @@ class GenerateImagePromptsFunction(BaseAIFunction):
         if account:
             queryset = queryset.filter(account=account)

-        contents = list(queryset.select_related('account', 'site', 'sector', 'cluster'))
+        contents = list(queryset.select_related('task', 'account', 'site', 'sector'))

         if not contents:
             raise ValueError("No content records found")
@@ -203,12 +203,11 @@ class GenerateImagePromptsFunction(BaseAIFunction):
         """Extract title, intro paragraphs, and H2 headings from content HTML"""
         from bs4 import BeautifulSoup

-        html_content = content.content_html or ''
+        html_content = content.html_content or ''
         soup = BeautifulSoup(html_content, 'html.parser')

         # Extract title
-        # Get content title (task field was removed in refactor)
-        title = content.title or ''
+        title = content.title or content.task.title or ''

         # Extract first 1-2 intro paragraphs (skip italic hook if present)
         paragraphs = soup.find_all('p')
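The extractor above uses BeautifulSoup to pull the title, intro paragraphs, and H2 headings from content HTML. For illustration, the H2 step can be approximated with the standard library alone; this regex sketch is more fragile than a real parser (it assumes well-formed, non-nested `<h2>` elements) and is not the project's actual implementation:

```python
import re

def extract_h2_headings(html):
    """Rough stdlib equivalent of soup.find_all('h2'): return the text of
    each H2 heading, with any nested inline tags stripped."""
    headings = re.findall(r'<h2[^>]*>(.*?)</h2>', html or '',
                          re.IGNORECASE | re.DOTALL)
    # Strip any nested tags (e.g. <em>) inside the heading text
    return [re.sub(r'<[^>]+>', '', h).strip() for h in headings]
```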

View File

@@ -1,167 +0,0 @@
"""
Optimize Content AI Function
Phase 4 Linker & Optimizer
"""
import json
import logging
from typing import Any, Dict
from igny8_core.ai.base import BaseAIFunction
from igny8_core.ai.prompts import PromptRegistry
from igny8_core.business.content.models import Content
logger = logging.getLogger(__name__)
class OptimizeContentFunction(BaseAIFunction):
"""AI function that optimizes content for SEO, readability, and engagement."""
def get_name(self) -> str:
return 'optimize_content'
def get_metadata(self) -> Dict:
metadata = super().get_metadata()
metadata.update({
'display_name': 'Optimize Content',
'description': 'Optimize content for SEO, readability, and engagement.',
'phases': {
'INIT': 'Validating content data…',
'PREP': 'Preparing content context…',
'AI_CALL': 'Optimizing content with AI…',
'PARSE': 'Parsing optimized content…',
'SAVE': 'Saving optimized content…',
'DONE': 'Content optimized!'
}
})
return metadata
def validate(self, payload: dict, account=None) -> Dict[str, Any]:
if not payload.get('ids'):
return {'valid': False, 'error': 'Content ID is required'}
return {'valid': True}
def prepare(self, payload: dict, account=None) -> Dict[str, Any]:
content_ids = payload.get('ids', [])
queryset = Content.objects.filter(id__in=content_ids)
if account:
queryset = queryset.filter(account=account)
content = queryset.select_related('account', 'site', 'sector').first()
if not content:
raise ValueError("Content not found")
# Get current scores from analyzer
from igny8_core.business.optimization.services.analyzer import ContentAnalyzer
analyzer = ContentAnalyzer()
scores_before = analyzer.analyze(content)
return {
'content': content,
'scores_before': scores_before,
'html_content': content.html_content or '',
'meta_title': content.meta_title or '',
'meta_description': content.meta_description or '',
'primary_keyword': content.primary_keyword or '',
}
def build_prompt(self, data: Dict[str, Any], account=None) -> str:
content: Content = data['content']
scores_before = data.get('scores_before', {})
context = {
'CONTENT_TITLE': content.title or 'Untitled',
'HTML_CONTENT': data.get('html_content', ''),
'META_TITLE': data.get('meta_title', ''),
'META_DESCRIPTION': data.get('meta_description', ''),
'PRIMARY_KEYWORD': data.get('primary_keyword', ''),
'WORD_COUNT': str(content.word_count or 0),
'CURRENT_SCORES': json.dumps(scores_before, indent=2),
'SOURCE': content.source,
'INTERNAL_LINKS_COUNT': str(len(content.internal_links) if content.internal_links else 0),
}
return PromptRegistry.get_prompt(
'optimize_content',
account=account or content.account,
context=context
)
def parse_response(self, response: str, step_tracker=None) -> Dict[str, Any]:
if not response:
raise ValueError("AI response is empty")
response = response.strip()
try:
return self._ensure_dict(json.loads(response))
except json.JSONDecodeError:
logger.warning("Response not valid JSON, attempting to extract JSON object")
cleaned = self._extract_json_object(response)
if cleaned:
return self._ensure_dict(json.loads(cleaned))
raise ValueError("Unable to parse AI response into JSON")
def save_output(
self,
parsed: Dict[str, Any],
original_data: Dict[str, Any],
account=None,
progress_tracker=None,
step_tracker=None
) -> Dict[str, Any]:
content: Content = original_data['content']
# Extract optimized content
optimized_html = parsed.get('html_content') or parsed.get('content') or content.html_content
optimized_meta_title = parsed.get('meta_title') or content.meta_title
optimized_meta_description = parsed.get('meta_description') or content.meta_description
# Update content
content.html_content = optimized_html
if optimized_meta_title:
content.meta_title = optimized_meta_title
if optimized_meta_description:
content.meta_description = optimized_meta_description
# Recalculate word count
from igny8_core.business.content.services.content_generation_service import ContentGenerationService
content_service = ContentGenerationService()
content.word_count = content_service._count_words(optimized_html)
# Increment optimizer version
content.optimizer_version += 1
# Get scores after optimization
from igny8_core.business.optimization.services.analyzer import ContentAnalyzer
analyzer = ContentAnalyzer()
scores_after = analyzer.analyze(content)
content.optimization_scores = scores_after
content.save(update_fields=[
'html_content', 'meta_title', 'meta_description',
'word_count', 'optimizer_version', 'optimization_scores', 'updated_at'
])
return {
'success': True,
'content_id': content.id,
'scores_before': original_data.get('scores_before', {}),
'scores_after': scores_after,
'word_count_before': original_data.get('word_count', 0),
'word_count_after': content.word_count,
'html_content': optimized_html,
'meta_title': optimized_meta_title,
'meta_description': optimized_meta_description,
}
# Helper methods
def _ensure_dict(self, data: Any) -> Dict[str, Any]:
if isinstance(data, dict):
return data
raise ValueError("AI response must be a JSON object")
def _extract_json_object(self, text: str) -> str:
start = text.find('{')
end = text.rfind('}')
if start != -1 and end != -1 and end > start:
return text[start:end + 1]
return ''
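The `_extract_json_object` helper above is a last-resort fallback when the model wraps its JSON in commentary: it takes the outermost `{`...`}` span and retries the parse. A standalone sketch of the same brace-scan (the sample string is illustrative):

```python
import json

def extract_json_object(text: str) -> str:
    # Same brace-scan as _extract_json_object: grab the outermost '{'..'}' span
    # and assume it contains a single top-level JSON object.
    start = text.find('{')
    end = text.rfind('}')
    if start != -1 and end != -1 and end > start:
        return text[start:end + 1]
    return ''

raw = 'Here is the result: {"meta_title": "Optimized Title"} -- done'
parsed = json.loads(extract_json_object(raw))
print(parsed['meta_title'])  # Optimized Title
```

Note the assumption: if the response contains multiple top-level objects, the outermost-brace span will not be valid JSON and `json.loads` will still raise, which is why the caller keeps its `ValueError` path.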

View File

@@ -1,2 +0,0 @@
# AI functions tests

View File

@@ -1,179 +0,0 @@
"""
Tests for OptimizeContentFunction
"""
from unittest.mock import Mock, patch, MagicMock
from django.test import TestCase
from igny8_core.business.content.models import Content
from igny8_core.ai.functions.optimize_content import OptimizeContentFunction
from igny8_core.api.tests.test_integration_base import IntegrationTestBase
class OptimizeContentFunctionTests(IntegrationTestBase):
"""Tests for OptimizeContentFunction"""
def setUp(self):
super().setUp()
self.function = OptimizeContentFunction()
# Create test content
self.content = Content.objects.create(
account=self.account,
site=self.site,
sector=self.sector,
title="Test Content",
html_content="<p>This is test content.</p>",
meta_title="Test Title",
meta_description="Test description",
primary_keyword="test keyword",
word_count=500,
status='draft'
)
def test_function_validation_phase(self):
"""Test validation phase"""
# Valid payload
result = self.function.validate({'ids': [self.content.id]}, self.account)
self.assertTrue(result['valid'])
# Invalid payload - missing ids
result = self.function.validate({}, self.account)
self.assertFalse(result['valid'])
self.assertIn('error', result)
def test_function_prep_phase(self):
"""Test prep phase"""
payload = {'ids': [self.content.id]}
data = self.function.prepare(payload, self.account)
self.assertIn('content', data)
self.assertIn('scores_before', data)
self.assertIn('html_content', data)
self.assertEqual(data['content'].id, self.content.id)
def test_function_prep_phase_content_not_found(self):
"""Test prep phase with non-existent content"""
payload = {'ids': [99999]}
with self.assertRaises(ValueError):
self.function.prepare(payload, self.account)
@patch('igny8_core.ai.functions.optimize_content.PromptRegistry.get_prompt')
def test_function_build_prompt(self, mock_get_prompt):
"""Test prompt building"""
mock_get_prompt.return_value = "Test prompt"
data = {
'content': self.content,
'html_content': '<p>Test</p>',
'meta_title': 'Title',
'meta_description': 'Description',
'primary_keyword': 'keyword',
'scores_before': {'overall_score': 50.0}
}
prompt = self.function.build_prompt(data, self.account)
self.assertEqual(prompt, "Test prompt")
mock_get_prompt.assert_called_once()
# Check that context was passed
call_args = mock_get_prompt.call_args
self.assertIn('context', call_args.kwargs)
def test_function_parse_response_valid_json(self):
"""Test parsing valid JSON response"""
response = '{"html_content": "<p>Optimized</p>", "meta_title": "New Title"}'
parsed = self.function.parse_response(response)
self.assertIn('html_content', parsed)
self.assertEqual(parsed['html_content'], "<p>Optimized</p>")
self.assertEqual(parsed['meta_title'], "New Title")
def test_function_parse_response_invalid_json(self):
"""Test parsing invalid JSON response"""
response = "This is not JSON"
with self.assertRaises(ValueError):
self.function.parse_response(response)
def test_function_parse_response_extracts_json_object(self):
"""Test that JSON object is extracted from text"""
response = 'Some text {"html_content": "<p>Optimized</p>"} more text'
parsed = self.function.parse_response(response)
self.assertIn('html_content', parsed)
self.assertEqual(parsed['html_content'], "<p>Optimized</p>")
@patch('igny8_core.business.optimization.services.analyzer.ContentAnalyzer.analyze')
@patch('igny8_core.business.content.services.content_generation_service.ContentGenerationService._count_words')
def test_function_save_phase(self, mock_count_words, mock_analyze):
"""Test save phase updates content"""
mock_count_words.return_value = 600
mock_analyze.return_value = {
'seo_score': 75.0,
'readability_score': 80.0,
'engagement_score': 70.0,
'overall_score': 75.0
}
parsed = {
'html_content': '<p>Optimized content.</p>',
'meta_title': 'Optimized Title',
'meta_description': 'Optimized Description'
}
original_data = {
'content': self.content,
'scores_before': {'overall_score': 50.0},
'word_count': 500
}
result = self.function.save_output(parsed, original_data, self.account)
self.assertTrue(result['success'])
self.assertEqual(result['content_id'], self.content.id)
# Refresh content from DB
self.content.refresh_from_db()
self.assertEqual(self.content.html_content, '<p>Optimized content.</p>')
self.assertEqual(self.content.optimizer_version, 1)
self.assertIsNotNone(self.content.optimization_scores)
def test_function_handles_invalid_content_id(self):
"""Test that function handles invalid content ID"""
payload = {'ids': [99999]}
with self.assertRaises(ValueError):
self.function.prepare(payload, self.account)
def test_function_respects_account_isolation(self):
"""Test that function respects account isolation"""
from igny8_core.auth.models import Account
other_account = Account.objects.create(
name="Other Account",
slug="other",
plan=self.plan,
owner=self.user
)
payload = {'ids': [self.content.id]}
# Should not find content from different account
with self.assertRaises(ValueError):
self.function.prepare(payload, other_account)
def test_get_name(self):
"""Test get_name method"""
self.assertEqual(self.function.get_name(), 'optimize_content')
def test_get_metadata(self):
"""Test get_metadata method"""
metadata = self.function.get_metadata()
self.assertIn('display_name', metadata)
self.assertIn('description', metadata)
self.assertIn('phases', metadata)
self.assertEqual(metadata['display_name'], 'Optimize Content')

View File

@@ -1,39 +0,0 @@
# Generated by Django 5.2.8 on 2025-11-20 23:27
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='AITaskLog',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True, db_index=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('task_id', models.CharField(blank=True, db_index=True, max_length=255, null=True)),
('function_name', models.CharField(db_index=True, max_length=100)),
('phase', models.CharField(default='INIT', max_length=50)),
('message', models.TextField(blank=True)),
('status', models.CharField(choices=[('success', 'Success'), ('error', 'Error'), ('pending', 'Pending')], default='pending', max_length=20)),
('duration', models.IntegerField(blank=True, help_text='Duration in milliseconds', null=True)),
('cost', models.DecimalField(decimal_places=6, default=0.0, max_digits=10)),
('tokens', models.IntegerField(default=0)),
('request_steps', models.JSONField(blank=True, default=list)),
('response_steps', models.JSONField(blank=True, default=list)),
('error', models.TextField(blank=True, null=True)),
('payload', models.JSONField(blank=True, null=True)),
('result', models.JSONField(blank=True, null=True)),
],
options={
'db_table': 'igny8_ai_task_logs',
'ordering': ['-created_at'],
},
),
]

View File

@@ -1,34 +0,0 @@
# Generated by Django 5.2.8 on 2025-11-20 23:27
import django.db.models.deletion
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
('ai', '0001_initial'),
('igny8_core_auth', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='aitasklog',
name='account',
field=models.ForeignKey(db_column='tenant_id', on_delete=django.db.models.deletion.CASCADE, related_name='%(class)s_set', to='igny8_core_auth.account'),
),
migrations.AddIndex(
model_name='aitasklog',
index=models.Index(fields=['task_id'], name='igny8_ai_ta_task_id_310356_idx'),
),
migrations.AddIndex(
model_name='aitasklog',
index=models.Index(fields=['function_name', 'account'], name='igny8_ai_ta_functio_0e5a30_idx'),
),
migrations.AddIndex(
model_name='aitasklog',
index=models.Index(fields=['status', 'created_at'], name='igny8_ai_ta_status_ed93b5_idx'),
),
]

View File

@@ -123,17 +123,17 @@ Output JSON Example:
"introduction": { "introduction": {
"hook": "Transform your sleep with organic cotton that blends comfort and sustainability.", "hook": "Transform your sleep with organic cotton that blends comfort and sustainability.",
"paragraphs": [ "paragraphs": [
{"format": "paragraph", "details": "Overview of organic cotton's rise in bedding industry."}, {"content_type": "paragraph", "details": "Overview of organic cotton's rise in bedding industry."},
{"format": "paragraph", "details": "Why consumers prefer organic bedding over synthetic alternatives."} {"content_type": "paragraph", "details": "Why consumers prefer organic bedding over synthetic alternatives."}
] ]
}, },
"H2": [ "H2": [
{ {
"heading": "Why Choose Organic Cotton for Bedding?", "heading": "Why Choose Organic Cotton for Bedding?",
"subsections": [ "subsections": [
{"subheading": "Health and Skin Benefits", "format": "paragraph", "details": "Discuss hypoallergenic and chemical-free aspects."}, {"subheading": "Health and Skin Benefits", "content_type": "paragraph", "details": "Discuss hypoallergenic and chemical-free aspects."},
{"subheading": "Environmental Sustainability", "format": "list", "details": "Eco benefits like low water use, no pesticides."}, {"subheading": "Environmental Sustainability", "content_type": "list", "details": "Eco benefits like low water use, no pesticides."},
{"subheading": "Long-Term Cost Savings", "format": "table", "details": "Compare durability and pricing over time."} {"subheading": "Long-Term Cost Savings", "content_type": "table", "details": "Compare durability and pricing over time."}
] ]
} }
] ]
@@ -145,25 +145,39 @@ Output JSON Example:
"covered_keywords": "organic duvet covers, eco-friendly bedding, sustainable sheets" "covered_keywords": "organic duvet covers, eco-friendly bedding, sustainable sheets"
} }
] ]
} }""",
Valid content_type values: post, page, product, taxonomy
Valid content_structure by type:
- post: article, guide, comparison, review, listicle
- page: landing_page, business_page, service_page, general, cluster_hub
- product: product_page
- taxonomy: category_archive, tag_archive, attribute_archive""",
-'content_generation': """You are an editorial content strategist. Your task is to generate a complete JSON response object that includes all the fields listed below, based on the provided content idea, keyword cluster, and keyword list.
+'content_generation': """You are an editorial content strategist. Your task is to generate a complete JSON response object based on the provided content idea, keyword cluster, keyword list, and metadata context.
+Only the `content` field should contain HTML inside JSON object.
 ==================
 Generate a complete JSON response object matching this structure:
 ==================
 {
-"title": "[Blog title using the primary keyword — full sentence case]",
-"meta_title": "[Meta title under 60 characters — natural, optimized, and compelling]",
-"meta_description": "[Meta description under 160 characters — clear and enticing summary]",
-"content": "[HTML content — full editorial structure with <p>, <h2>, <h3>, <ul>, <ol>, <table>]",
-"word_count": [Exact integer — word count of HTML body only],
-"primary_keyword": "[Single primary keyword used in title and first paragraph]",
-"secondary_keywords": [
-"[Keyword 1]",
-"[Keyword 2]",
-"[Keyword 3]"
-],
-"tags": [
-"[2–4 word lowercase tag 1]",
-"[2–4 word lowercase tag 2]",
-"[2–4 word lowercase tag 3]",
-"[2–4 word lowercase tag 4]",
-"[2–4 word lowercase tag 5]"
-],
-"categories": [
-"[Parent Category > Child Category]",
-"[Optional Second Category > Optional Subcategory]"
-]
+"title": "[Article title using target keywords — full sentence case]",
+"content": "[HTML content — full editorial structure with <p>, <h2>, <h3>, <ul>, <ol>, <table>]"
 }
 ===========================
@@ -187,12 +201,15 @@ Each section should be 250–300 words and follow this format:
 - Never begin any section or sub-section with a list or table
 ===========================
-KEYWORD & SEO RULES
+STYLE & QUALITY RULES
 ===========================
-- **Primary keyword** must appear in:
-  - The title
-  - First paragraph of the introduction
-  - At least 2 H2 headings
-- **Secondary keywords** must be used naturally, not forced
+- **Keyword Usage:**
+  - Use keywords naturally in title, introduction, and headings
+  - Prioritize readability over keyword density
 - **Tone & style guidelines:**
 - No robotic or passive voice
@@ -200,28 +217,7 @@ STYLE & QUALITY RULES
- Don't repeat heading in opening sentence - Don't repeat heading in opening sentence
- Vary sentence structure and length - Vary sentence structure and length
===========================
STAGE 3: METADATA CONTEXT (NEW)
===========================
**Content Structure:**
[IGNY8_CONTENT_STRUCTURE]
- If structure is "cluster_hub": Create comprehensive, authoritative content that serves as the main resource for this topic cluster. Include overview sections, key concepts, and links to related topics.
- If structure is "article" or "guide": Create detailed, focused content that dives deep into the topic with actionable insights.
- Other structures: Follow the appropriate format (listicle, comparison, review, landing_page, service_page, product_page, category_archive, tag_archive, attribute_archive).
**Taxonomy Context:**
[IGNY8_TAXONOMY]
- Use taxonomy information to structure categories and tags appropriately.
- Align content with taxonomy hierarchy and relationships.
- Ensure content fits within the defined taxonomy structure.
**Product/Service Attributes:**
[IGNY8_ATTRIBUTES]
- If attributes are provided (e.g., product specs, service modifiers), incorporate them naturally into the content.
- For product content: Include specifications, features, dimensions, materials, etc. as relevant.
- For service content: Include service tiers, pricing modifiers, availability, etc. as relevant.
- Present attributes in a user-friendly format (tables, lists, or integrated into narrative).
===========================
INPUT VARIABLES
@@ -242,73 +238,6 @@ OUTPUT FORMAT
Return ONLY the final JSON object.
Do NOT include any comments, formatting, or explanations.""",
'site_structure_generation': """You are a senior UX architect and information designer. Use the business brief, objectives, style references, and existing site info to propose a complete multi-page marketing website structure.
INPUT CONTEXT
==============
BUSINESS BRIEF:
[IGNY8_BUSINESS_BRIEF]
PRIMARY OBJECTIVES:
[IGNY8_OBJECTIVES]
STYLE & BRAND NOTES:
[IGNY8_STYLE]
SITE INFO / CURRENT STRUCTURE:
[IGNY8_SITE_INFO]
OUTPUT REQUIREMENTS
====================
Return ONE JSON object with the following keys:
{
"site": {
"name": "...",
"primary_navigation": ["home", "services", "about", "contact"],
"secondary_navigation": ["blog", "faq"],
"hero_message": "High level value statement",
"tone": "voice + tone summary"
},
"pages": [
{
"slug": "home",
"title": "Home",
"type": "home | about | services | products | blog | contact | custom",
"status": "draft",
"objective": "Explain the core brand promise and primary CTA",
"primary_cta": "Book a strategy call",
"seo": {
"meta_title": "...",
"meta_description": "..."
},
"blocks": [
{
"type": "hero | features | services | stats | testimonials | faq | contact | custom",
"heading": "Section headline",
"subheading": "Support copy",
"layout": "full-width | two-column | cards | carousel",
"content": [
"Bullet or short paragraph describing what to render in this block"
]
}
]
}
]
}
RULES
=====
- Include 58 pages covering the complete buyer journey (awareness → evaluation → conversion → trust).
- Every page must have at least 3 blocks with concrete guidance (no placeholders like "Lorem ipsum").
- Use consistent slug naming, all lowercase with hyphens.
- Type must match the allowed enum and reflect page intent.
- Ensure the navigation arrays align with the page list.
- Focus on practical descriptions that an engineering team can hand off directly to the Site Builder.
Return ONLY valid JSON. No commentary, explanations, or Markdown.
""",
'image_prompt_extraction': """Extract image prompts from the following article content.
@@ -336,260 +265,6 @@ Make sure each prompt is detailed enough for image generation, describing the vi
'image_prompt_template': 'Create a high-quality {image_type} image to use as a featured photo for a blog post titled "{post_title}". The image should visually represent the theme, mood, and subject implied by the image prompt: {image_prompt}. Focus on a realistic, well-composed scene that naturally communicates the topic without text or logos. Use balanced lighting, pleasing composition, and photographic detail suitable for lifestyle or editorial web content. Avoid adding any visible or readable text, brand names, or illustrative effects. **And make sure image is not blurry.**',
'negative_prompt': 'text, watermark, logo, overlay, title, caption, writing on walls, writing on objects, UI, infographic elements, post title',
'optimize_content': """You are an expert content optimizer specializing in SEO, readability, and engagement.
Your task is to optimize the provided content to improve its SEO score, readability, and engagement metrics.
CURRENT CONTENT:
Title: {CONTENT_TITLE}
Word Count: {WORD_COUNT}
Source: {SOURCE}
Primary Keyword: {PRIMARY_KEYWORD}
Internal Links: {INTERNAL_LINKS_COUNT}
CURRENT META DATA:
Meta Title: {META_TITLE}
Meta Description: {META_DESCRIPTION}
CURRENT SCORES:
{CURRENT_SCORES}
HTML CONTENT:
{HTML_CONTENT}
OPTIMIZATION REQUIREMENTS:
1. SEO Optimization:
- Ensure meta title is 30-60 characters (if provided)
- Ensure meta description is 120-160 characters (if provided)
- Optimize primary keyword usage (natural, not keyword stuffing)
- Improve heading structure (H1, H2, H3 hierarchy)
- Add internal links where relevant (maintain existing links)
2. Readability:
- Average sentence length: 15-20 words
- Use clear, concise language
- Break up long paragraphs
- Use bullet points and lists where appropriate
- Ensure proper paragraph structure
3. Engagement:
- Add compelling headings
- Include relevant images placeholders (alt text)
- Use engaging language
- Create clear call-to-action sections
- Improve content flow and structure
OUTPUT FORMAT:
Return ONLY a JSON object in this format:
{{
"html_content": "[Optimized HTML content]",
"meta_title": "[Optimized meta title, 30-60 chars]",
"meta_description": "[Optimized meta description, 120-160 chars]",
"optimization_notes": "[Brief notes on what was optimized]"
}}
Do not include any explanations, text, or commentary outside the JSON output.
""",
# Phase 8: Universal Content Types
'product_generation': """You are a product content specialist. Generate comprehensive product content that includes detailed descriptions, features, specifications, pricing, and benefits.
INPUT:
Product Name: [IGNY8_PRODUCT_NAME]
Product Description: [IGNY8_PRODUCT_DESCRIPTION]
Product Features: [IGNY8_PRODUCT_FEATURES]
Target Audience: [IGNY8_TARGET_AUDIENCE]
Primary Keyword: [IGNY8_PRIMARY_KEYWORD]
OUTPUT FORMAT:
Return ONLY a JSON object in this format:
{
"title": "[Product name and key benefit]",
"meta_title": "[SEO-optimized meta title, 30-60 chars]",
"meta_description": "[Compelling meta description, 120-160 chars]",
"html_content": "[Complete HTML product page content]",
"word_count": [Integer word count],
"primary_keyword": "[Primary keyword]",
"secondary_keywords": ["keyword1", "keyword2", "keyword3"],
"tags": ["tag1", "tag2", "tag3"],
"categories": ["Category > Subcategory"],
"json_blocks": [
{
"type": "product_overview",
"heading": "Product Overview",
"content": "Detailed product description"
},
{
"type": "features",
"heading": "Key Features",
"items": ["Feature 1", "Feature 2", "Feature 3"]
},
{
"type": "specifications",
"heading": "Specifications",
"data": {"Spec 1": "Value 1", "Spec 2": "Value 2"}
},
{
"type": "pricing",
"heading": "Pricing",
"content": "Pricing information"
},
{
"type": "benefits",
"heading": "Benefits",
"items": ["Benefit 1", "Benefit 2", "Benefit 3"]
}
],
"structure_data": {
"product_type": "[Product type]",
"price_range": "[Price range]",
"target_market": "[Target market]"
}
}
CONTENT REQUIREMENTS:
- Include compelling product overview
- List key features with benefits
- Provide detailed specifications
- Include pricing information (if available)
- Highlight unique selling points
- Use SEO-optimized headings
- Include call-to-action sections
- Ensure natural keyword usage
""",
'service_generation': """You are a service page content specialist. Generate comprehensive service page content that explains services, benefits, process, and pricing.
INPUT:
Service Name: [IGNY8_SERVICE_NAME]
Service Description: [IGNY8_SERVICE_DESCRIPTION]
Service Benefits: [IGNY8_SERVICE_BENEFITS]
Target Audience: [IGNY8_TARGET_AUDIENCE]
Primary Keyword: [IGNY8_PRIMARY_KEYWORD]
OUTPUT FORMAT:
Return ONLY a JSON object in this format:
{
"title": "[Service name and value proposition]",
"meta_title": "[SEO-optimized meta title, 30-60 chars]",
"meta_description": "[Compelling meta description, 120-160 chars]",
"html_content": "[Complete HTML service page content]",
"word_count": [Integer word count],
"primary_keyword": "[Primary keyword]",
"secondary_keywords": ["keyword1", "keyword2", "keyword3"],
"tags": ["tag1", "tag2", "tag3"],
"categories": ["Category > Subcategory"],
"json_blocks": [
{
"type": "service_overview",
"heading": "Service Overview",
"content": "Detailed service description"
},
{
"type": "benefits",
"heading": "Benefits",
"items": ["Benefit 1", "Benefit 2", "Benefit 3"]
},
{
"type": "process",
"heading": "Our Process",
"steps": ["Step 1", "Step 2", "Step 3"]
},
{
"type": "pricing",
"heading": "Pricing",
"content": "Pricing information"
},
{
"type": "faq",
"heading": "Frequently Asked Questions",
"items": [{"question": "Q1", "answer": "A1"}]
}
],
"structure_data": {
"service_type": "[Service type]",
"duration": "[Service duration]",
"target_market": "[Target market]"
}
}
CONTENT REQUIREMENTS:
- Clear service overview and value proposition
- Detailed benefits and outcomes
- Step-by-step process explanation
- Pricing information (if available)
- FAQ section addressing common questions
- Include testimonials or case studies (if applicable)
- Use SEO-optimized headings
- Include call-to-action sections
""",
'taxonomy_generation': """You are a taxonomy and categorization specialist. Generate comprehensive taxonomy page content that organizes and explains categories, tags, and hierarchical structures.
INPUT:
Taxonomy Name: [IGNY8_TAXONOMY_NAME]
Taxonomy Description: [IGNY8_TAXONOMY_DESCRIPTION]
Taxonomy Items: [IGNY8_TAXONOMY_ITEMS]
Primary Keyword: [IGNY8_PRIMARY_KEYWORD]
OUTPUT FORMAT:
Return ONLY a JSON object in this format:
{{
"title": "[Taxonomy name and purpose]",
"meta_title": "[SEO-optimized meta title, 30-60 chars]",
"meta_description": "[Compelling meta description, 120-160 chars]",
"html_content": "[Complete HTML taxonomy page content]",
"word_count": [Integer word count],
"primary_keyword": "[Primary keyword]",
"secondary_keywords": ["keyword1", "keyword2", "keyword3"],
"tags": ["tag1", "tag2", "tag3"],
"categories": ["Category > Subcategory"],
"json_blocks": [
{{
"type": "taxonomy_overview",
"heading": "Taxonomy Overview",
"content": "Detailed taxonomy description"
}},
{{
"type": "categories",
"heading": "Categories",
"items": [
{{
"name": "Category 1",
"description": "Category description",
"subcategories": ["Subcat 1", "Subcat 2"]
}}
]
}},
{{
"type": "tags",
"heading": "Tags",
"items": ["Tag 1", "Tag 2", "Tag 3"]
}},
{{
"type": "hierarchy",
"heading": "Taxonomy Hierarchy",
"structure": {{"Level 1": {{"Level 2": ["Level 3"]}}}}
}}
],
"structure_data": {{
"taxonomy_type": "[Taxonomy type]",
"item_count": [Integer],
"hierarchy_levels": [Integer]
}}
}}
CONTENT REQUIREMENTS:
- Clear taxonomy overview and purpose
- Organized category structure
- Tag organization and relationships
- Hierarchical structure visualization
- SEO-optimized headings
- Include navigation and organization benefits
- Use clear, descriptive language
""",
}
# Mapping from function names to prompt types
@@ -600,12 +275,6 @@ CONTENT REQUIREMENTS:
'generate_images': 'image_prompt_extraction',
'extract_image_prompts': 'image_prompt_extraction',
'generate_image_prompts': 'image_prompt_extraction',
'generate_site_structure': 'site_structure_generation',
'optimize_content': 'optimize_content',
# Phase 8: Universal Content Types
'generate_product_content': 'product_generation',
'generate_service_page': 'service_generation',
'generate_taxonomy': 'taxonomy_generation',
}
@classmethod
@@ -701,7 +370,7 @@ CONTENT REQUIREMENTS:
 if '{' in rendered and '}' in rendered:
     try:
         rendered = rendered.format(**normalized_context)
-    except (KeyError, ValueError) as e:
+    except (KeyError, ValueError, IndexError) as e:
         # If .format() fails, log warning but keep the [IGNY8_*] replacements
         logger.warning(f"Failed to format prompt with .format(): {e}. Using [IGNY8_*] replacements only.")
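The broad `except` clause matters here because prompt templates often embed literal JSON braces, which `str.format()` tries to parse as replacement fields and then fails on. A minimal illustration (the template string is a made-up example, not one of the registry prompts):

```python
# A prompt that legitimately contains JSON braces alongside a real placeholder.
template = 'Return JSON like {"title": "..."} for the topic {TOPIC}'
err = None
try:
    # str.format() treats {"title": "..."} as a replacement field and fails,
    # even though the {TOPIC} placeholder itself is satisfied.
    template.format(TOPIC='organic bedding')
except (KeyError, ValueError, IndexError) as exc:
    err = exc
print(type(err).__name__)
```

This is why the registry falls back to its own `[IGNY8_*]` token replacement rather than relying on `.format()` alone.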

View File

@@ -94,15 +94,9 @@ def _load_generate_image_prompts():
from igny8_core.ai.functions.generate_image_prompts import GenerateImagePromptsFunction
return GenerateImagePromptsFunction
def _load_optimize_content():
"""Lazy loader for optimize_content function"""
from igny8_core.ai.functions.optimize_content import OptimizeContentFunction
return OptimizeContentFunction
register_lazy_function('auto_cluster', _load_auto_cluster)
register_lazy_function('generate_ideas', _load_generate_ideas)
register_lazy_function('generate_content', _load_generate_content)
register_lazy_function('generate_images', _load_generate_images)
register_lazy_function('generate_image_prompts', _load_generate_image_prompts)
register_lazy_function('optimize_content', _load_optimize_content)
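The loaders above defer the function-class imports until a function is actually requested, keeping startup cheap. A minimal sketch of that lazy-registry pattern (the registry shape and names are assumptions for illustration, not the real `register_lazy_function` implementation):

```python
_REGISTRY = {}

def register_lazy_function(name, loader):
    # Store the zero-arg loader, not the class, so nothing heavy imports yet.
    _REGISTRY[name] = loader

def get_function_class(name):
    # Import side effects happen here, on first use, not at module load time.
    return _REGISTRY[name]()

class DemoFunction:  # stand-in for e.g. GenerateImagePromptsFunction
    pass

register_lazy_function('demo', lambda: DemoFunction)
print(get_function_class('demo').__name__)  # DemoFunction
```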

View File

@@ -707,25 +707,6 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
                 })
                 failed += 1
 
-        # Check if all images for the content are generated and update status to 'review'
-        if content_id and completed > 0:
-            try:
-                from igny8_core.business.content.models import Content, Images
-                content = Content.objects.get(id=content_id)
-
-                # Check if all images for this content are now generated
-                all_images = Images.objects.filter(content=content)
-                pending_images = all_images.filter(status='pending').count()
-
-                # If no pending images and content is still in draft, move to review
-                if pending_images == 0 and content.status == 'draft':
-                    content.status = 'review'
-                    content.save(update_fields=['status'])
-                    logger.info(f"[process_image_generation_queue] Content #{content_id} status updated to 'review' (all images generated)")
-            except Exception as e:
-                logger.error(f"[process_image_generation_queue] Error updating content status: {str(e)}", exc_info=True)
-
         # Final state
         logger.info("=" * 80)
         logger.info(f"process_image_generation_queue COMPLETED")

View File

@@ -1,86 +0,0 @@
-from __future__ import annotations
-
-from igny8_core.ai.functions.generate_site_structure import GenerateSiteStructureFunction
-from igny8_core.business.site_building.models import PageBlueprint
-from igny8_core.business.site_building.tests.base import SiteBuilderTestBase
-
-
-class GenerateSiteStructureFunctionTests(SiteBuilderTestBase):
-    """Covers parsing + persistence logic for the Site Builder AI function."""
-
-    def setUp(self):
-        super().setUp()
-        self.function = GenerateSiteStructureFunction()
-
-    def test_parse_response_extracts_json_object(self):
-        noisy_response = """
-        Thoughts about the request…
-        {
-            "site": {"name": "Acme Robotics"},
-            "pages": [{"slug": "home", "title": "Home"}]
-        }
-        Extra commentary that should be ignored.
-        """
-        parsed = self.function.parse_response(noisy_response)
-        self.assertEqual(parsed['site']['name'], 'Acme Robotics')
-        self.assertEqual(parsed['pages'][0]['slug'], 'home')
-
-    def test_save_output_updates_structure_and_syncs_pages(self):
-        # Existing page to prove update/delete flows.
-        legacy_page = PageBlueprint.objects.create(
-            site_blueprint=self.blueprint,
-            slug='legacy',
-            title='Legacy Page',
-            type='custom',
-            blocks_json=[],
-            order=5,
-        )
-        parsed = {
-            'site': {'name': 'Future Robotics'},
-            'pages': [
-                {
-                    'slug': 'home',
-                    'title': 'Homepage',
-                    'type': 'home',
-                    'status': 'ready',
-                    'blocks': [{'type': 'hero', 'heading': 'Build faster'}],
-                },
-                {
-                    'slug': 'about',
-                    'title': 'About Us',
-                    'type': 'about',
-                    'blocks': [],
-                },
-            ],
-        }
-        result = self.function.save_output(parsed, {'blueprint': self.blueprint})
-        self.blueprint.refresh_from_db()
-        self.assertEqual(self.blueprint.status, 'ready')
-        self.assertEqual(self.blueprint.structure_json['site']['name'], 'Future Robotics')
-        self.assertEqual(result['pages_created'], 1)
-        self.assertEqual(result['pages_updated'], 1)
-        self.assertEqual(result['pages_deleted'], 1)
-        slugs = set(self.blueprint.pages.values_list('slug', flat=True))
-        self.assertIn('home', slugs)
-        self.assertIn('about', slugs)
-        self.assertNotIn(legacy_page.slug, slugs)
-
-    def test_build_prompt_includes_existing_pages(self):
-        # Convert structure to JSON to ensure template rendering stays stable.
-        data = self.function.prepare(
-            payload={'ids': [self.blueprint.id]},
-            account=self.account,
-        )
-        prompt = self.function.build_prompt(data, account=self.account)
-        self.assertIn(self.blueprint.name, prompt)
-        self.assertIn('Home', prompt)
-        # The prompt should mention hosting type and objectives in JSON context.
-        self.assertIn(self.blueprint.hosting_type, prompt)
-        for objective in self.blueprint.config_json.get('objectives', []):
-            self.assertIn(objective, prompt)

View File

@@ -0,0 +1,116 @@
+"""
+Test script for AI functions
+Run this to verify all AI functions work with console logging
+"""
+import os
+import sys
+import django
+
+# Setup Django
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../../../../'))
+os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'igny8.settings')
+django.setup()
+
+from igny8_core.ai.functions.auto_cluster import AutoClusterFunction
+from igny8_core.ai.functions.generate_images import generate_images_core
+from igny8_core.ai.ai_core import AICore
+
+
+def test_ai_core():
+    """Test AICore.run_ai_request() directly"""
+    print("\n" + "="*80)
+    print("TEST 1: AICore.run_ai_request() - Direct API Call")
+    print("="*80)
+    ai_core = AICore()
+    result = ai_core.run_ai_request(
+        prompt="Say 'Hello, World!' in JSON format: {\"message\": \"your message\"}",
+        max_tokens=100,
+        function_name='test_ai_core'
+    )
+    if result.get('error'):
+        print(f"❌ Error: {result['error']}")
+    else:
+        print(f"✅ Success! Content: {result.get('content', '')[:100]}")
+        print(f"   Tokens: {result.get('total_tokens')}, Cost: ${result.get('cost', 0):.6f}")
+
+
+def test_auto_cluster():
+    """Test auto cluster function"""
+    print("\n" + "="*80)
+    print("TEST 2: Auto Cluster Function")
+    print("="*80)
+    print("Note: This requires actual keyword IDs in the database")
+    print("Skipping - requires database setup")
+    # Uncomment to test with real data:
+    # fn = AutoClusterFunction()
+    # result = fn.validate({'ids': [1, 2, 3]})
+    # print(f"Validation result: {result}")
+
+
+def test_generate_content():
+    """Test generate content function"""
+    print("\n" + "="*80)
+    print("TEST 3: Generate Content Function")
+    print("="*80)
+    print("Note: This requires actual task IDs in the database")
+    print("Skipping - requires database setup")
+
+
+def test_generate_images():
+    """Test generate images function"""
+    print("\n" + "="*80)
+    print("TEST 4: Generate Images Function")
+    print("="*80)
+    print("Note: This requires actual task IDs in the database")
+    print("Skipping - requires database setup")
+    # Uncomment to test with real data:
+    # result = generate_images_core(task_ids=[1], account_id=1)
+    # print(f"Result: {result}")
+
+
+def test_json_extraction():
+    """Test JSON extraction"""
+    print("\n" + "="*80)
+    print("TEST 5: JSON Extraction")
+    print("="*80)
+    ai_core = AICore()
+
+    # Test 1: Direct JSON
+    json_text = '{"clusters": [{"name": "Test", "keywords": ["test"]}]}'
+    result = ai_core.extract_json(json_text)
+    print(f"✅ Direct JSON: {result is not None}")
+
+    # Test 2: JSON in markdown
+    json_markdown = '```json\n{"clusters": [{"name": "Test"}]}\n```'
+    result = ai_core.extract_json(json_markdown)
+    print(f"✅ JSON in markdown: {result is not None}")
+
+    # Test 3: Invalid JSON
+    invalid_json = "This is not JSON"
+    result = ai_core.extract_json(invalid_json)
+    print(f"✅ Invalid JSON handled: {result is None}")
+
+
+if __name__ == '__main__':
+    print("\n" + "="*80)
+    print("AI FUNCTIONS TEST SUITE")
+    print("="*80)
+    print("Testing all AI functions with console logging enabled")
+    print("="*80)
+
+    # Run tests
+    test_ai_core()
+    test_json_extraction()
+    test_auto_cluster()
+    test_generate_content()
+    test_generate_images()
+
+    print("\n" + "="*80)
+    print("TEST SUITE COMPLETE")
+    print("="*80)
+    print("\nAll console logging should be visible above.")
+    print("Check for [AI][function_name] Step X: messages")
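The `test_json_extraction` cases above exercise three inputs: raw JSON, JSON wrapped in a markdown fence, and plain text. A hedged re-implementation of that kind of extractor (this is a sketch, not the actual `AICore.extract_json` code):

```python
import json
import re


def extract_json(text):
    """Return the first parseable JSON object found in text, or None."""
    # Strip a ```json ... ``` (or bare ```) fence if present.
    fenced = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    if fenced:
        text = fenced.group(1)
    # Fall back to the outermost {...} span in the remaining text.
    start, end = text.find('{'), text.rfind('}')
    if start == -1 or end <= start:
        return None
    try:
        return json.loads(text[start:end + 1])
    except json.JSONDecodeError:
        return None


print(extract_json('```json\n{"clusters": [{"name": "Test"}]}\n```'))
```

Returning `None` on failure (rather than raising) is what lets the test suite assert `result is None` for the invalid-JSON case.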

View File

@@ -1,52 +0,0 @@
-"""
-AI Validators Package
-Shared validation logic for AI functions
-"""
-from .cluster_validators import validate_minimum_keywords, validate_keyword_selection
-
-# The codebase also contains a module-level file `ai/validators.py` which defines
-# common validator helpers (e.g. `validate_ids`). Because there is both a
-# package directory `ai/validators/` and a module file `ai/validators.py`, Python
-# will resolve `igny8_core.ai.validators` to the package and not the module file.
-# To avoid changing many imports across the project, load the module file here
-# and re-export the commonly used functions.
-import importlib.util
-import os
-
-_module_path = os.path.normpath(os.path.join(os.path.dirname(__file__), '..', 'validators.py'))
-if os.path.exists(_module_path):
-    spec = importlib.util.spec_from_file_location('igny8_core.ai._validators_module', _module_path)
-    _validators_mod = importlib.util.module_from_spec(spec)
-    spec.loader.exec_module(_validators_mod)
-
-    # Re-export commonly used functions from the module file
-    validate_ids = getattr(_validators_mod, 'validate_ids', None)
-    validate_keywords_exist = getattr(_validators_mod, 'validate_keywords_exist', None)
-    validate_cluster_limits = getattr(_validators_mod, 'validate_cluster_limits', None)
-    validate_cluster_exists = getattr(_validators_mod, 'validate_cluster_exists', None)
-    validate_tasks_exist = getattr(_validators_mod, 'validate_tasks_exist', None)
-    validate_api_key = getattr(_validators_mod, 'validate_api_key', None)
-    validate_model = getattr(_validators_mod, 'validate_model', None)
-    validate_image_size = getattr(_validators_mod, 'validate_image_size', None)
-else:
-    # Module file missing - keep names defined if cluster validators provide them
-    validate_ids = None
-    validate_keywords_exist = None
-    validate_cluster_limits = None
-    validate_cluster_exists = None
-    validate_tasks_exist = None
-    validate_api_key = None
-    validate_model = None
-    validate_image_size = None
-
-__all__ = [
-    'validate_minimum_keywords',
-    'validate_keyword_selection',
-    'validate_ids',
-    'validate_keywords_exist',
-    'validate_cluster_limits',
-    'validate_cluster_exists',
-    'validate_tasks_exist',
-    'validate_api_key',
-    'validate_model',
-    'validate_image_size',
-]
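The deleted `__init__.py` above works around a name collision: a package directory shadows a same-named module file, so the file is loaded from its explicit path with `importlib.util`. A self-contained demonstration of that loading technique (the temp file and `validate_ids` body below are stand-ins, not the real `ai/validators.py`):

```python
import importlib.util
import os
import tempfile

# Simulate a module file that is shadowed by a package of the same name.
source = (
    "def validate_ids(ids):\n"
    "    return bool(ids) and all(isinstance(i, int) for i in ids)\n"
)

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'validators.py')
    with open(path, 'w') as fh:
        fh.write(source)

    # spec_from_file_location bypasses normal package resolution entirely,
    # so the shadowed file is importable under any alias we choose.
    spec = importlib.util.spec_from_file_location('shadowed_validators', path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)

    print(mod.validate_ids([1, 2, 3]))  # True
    print(mod.validate_ids([]))         # False
```

The removal of this shim in the diff suggests the collision itself was resolved; renaming either the package or the module file is usually the cleaner fix.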

View File

@@ -1,105 +0,0 @@
-"""
-Cluster-specific validators
-Shared between auto-cluster function and automation pipeline
-"""
-import logging
-from typing import Dict, List
-
-logger = logging.getLogger(__name__)
-
-
-def validate_minimum_keywords(
-    keyword_ids: List[int],
-    account=None,
-    min_required: int = 5
-) -> Dict:
-    """
-    Validate that sufficient keywords are available for clustering
-
-    Args:
-        keyword_ids: List of keyword IDs to cluster
-        account: Account object for filtering
-        min_required: Minimum number of keywords required (default: 5)
-
-    Returns:
-        Dict with 'valid' (bool) and 'error' (str) or 'count' (int)
-    """
-    from igny8_core.modules.planner.models import Keywords
-
-    # Build queryset
-    queryset = Keywords.objects.filter(id__in=keyword_ids, status='new')
-    if account:
-        queryset = queryset.filter(account=account)
-
-    # Count available keywords
-    count = queryset.count()
-
-    # Validate minimum
-    if count < min_required:
-        return {
-            'valid': False,
-            'error': f'Insufficient keywords for clustering. Need at least {min_required} keywords, but only {count} available.',
-            'count': count,
-            'required': min_required
-        }
-
-    return {
-        'valid': True,
-        'count': count,
-        'required': min_required
-    }
-
-
-def validate_keyword_selection(
-    selected_ids: List[int],
-    available_count: int,
-    min_required: int = 5
-) -> Dict:
-    """
-    Validate keyword selection (for frontend validation)
-
-    Args:
-        selected_ids: List of selected keyword IDs
-        available_count: Total count of available keywords
-        min_required: Minimum required
-
-    Returns:
-        Dict with validation result
-    """
-    selected_count = len(selected_ids)
-
-    # Check if any keywords selected
-    if selected_count == 0:
-        return {
-            'valid': False,
-            'error': 'No keywords selected',
-            'type': 'NO_SELECTION'
-        }
-
-    # Check if enough selected
-    if selected_count < min_required:
-        return {
-            'valid': False,
-            'error': f'Please select at least {min_required} keywords. Currently selected: {selected_count}',
-            'type': 'INSUFFICIENT_SELECTION',
-            'selected': selected_count,
-            'required': min_required
-        }
-
-    # Check if enough available (even if not all selected)
-    if available_count < min_required:
-        return {
-            'valid': False,
-            'error': f'Not enough keywords available. Need at least {min_required} keywords, but only {available_count} exist.',
-            'type': 'INSUFFICIENT_AVAILABLE',
-            'available': available_count,
-            'required': min_required
-        }
-
-    return {
-        'valid': True,
-        'selected': selected_count,
-        'available': available_count,
-        'required': min_required
-    }
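The deleted validators return a result dict with a `valid` flag and a machine-readable `type`, which is what the frontend branches on. A trimmed, Django-free restatement of that contract and how a caller consumes it (illustrative only, not the removed code verbatim):

```python
def validate_keyword_selection(selected_ids, available_count, min_required=5):
    """Trimmed re-statement of the validator above, for illustration only."""
    if not selected_ids:
        return {'valid': False, 'type': 'NO_SELECTION'}
    if len(selected_ids) < min_required:
        return {'valid': False, 'type': 'INSUFFICIENT_SELECTION'}
    if available_count < min_required:
        return {'valid': False, 'type': 'INSUFFICIENT_AVAILABLE'}
    return {'valid': True}


# A caller never needs to parse the human-readable error string;
# it switches on the 'type' code instead.
result = validate_keyword_selection([1, 2, 3], available_count=10)
if not result['valid']:
    print(f"Blocked: {result['type']}")  # Blocked: INSUFFICIENT_SELECTION
```

Keeping `error` (for display) separate from `type` (for logic) is what lets both the auto-cluster function and the automation pipeline share one validator.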

View File

@@ -1,31 +0,0 @@
-"""
-Account API URLs
-"""
-from django.urls import path
-
-from igny8_core.api.account_views import (
-    AccountSettingsViewSet,
-    TeamManagementViewSet,
-    UsageAnalyticsViewSet
-)
-
-urlpatterns = [
-    # Account Settings
-    path('settings/', AccountSettingsViewSet.as_view({
-        'get': 'retrieve',
-        'patch': 'partial_update'
-    }), name='account-settings'),
-
-    # Team Management
-    path('team/', TeamManagementViewSet.as_view({
-        'get': 'list',
-        'post': 'create'
-    }), name='team-list'),
-    path('team/<int:pk>/', TeamManagementViewSet.as_view({
-        'delete': 'destroy'
-    }), name='team-detail'),
-
-    # Usage Analytics
-    path('usage/analytics/', UsageAnalyticsViewSet.as_view({
-        'get': 'overview'
-    }), name='usage-analytics'),
-]

View File

@@ -1,244 +0,0 @@
-"""
-Account Management API Views
-Handles account settings, team management, and usage analytics
-"""
-from rest_framework import viewsets, status
-from rest_framework.decorators import action
-from rest_framework.response import Response
-from rest_framework.permissions import IsAuthenticated
-from django.contrib.auth import get_user_model
-from django.db.models import Q, Count, Sum
-from django.utils import timezone
-from datetime import timedelta
-from drf_spectacular.utils import extend_schema, extend_schema_view
-
-from igny8_core.auth.models import Account
-from igny8_core.business.billing.models import CreditTransaction
-
-User = get_user_model()
-
-
-@extend_schema_view(
-    retrieve=extend_schema(tags=['Account']),
-    partial_update=extend_schema(tags=['Account']),
-)
-class AccountSettingsViewSet(viewsets.ViewSet):
-    """Account settings management"""
-    permission_classes = [IsAuthenticated]
-
-    def retrieve(self, request):
-        """Get account settings"""
-        account = request.user.account
-        return Response({
-            'id': account.id,
-            'name': account.name,
-            'slug': account.slug,
-            'billing_address_line1': account.billing_address_line1 or '',
-            'billing_address_line2': account.billing_address_line2 or '',
-            'billing_city': account.billing_city or '',
-            'billing_state': account.billing_state or '',
-            'billing_postal_code': account.billing_postal_code or '',
-            'billing_country': account.billing_country or '',
-            'tax_id': account.tax_id or '',
-            'billing_email': account.billing_email or '',
-            'credits': account.credits,
-            'created_at': account.created_at.isoformat(),
-            'updated_at': account.updated_at.isoformat(),
-        })
-
-    def partial_update(self, request):
-        """Update account settings"""
-        account = request.user.account
-
-        # Update allowed fields
-        allowed_fields = [
-            'name', 'billing_address_line1', 'billing_address_line2',
-            'billing_city', 'billing_state', 'billing_postal_code',
-            'billing_country', 'tax_id', 'billing_email'
-        ]
-        for field in allowed_fields:
-            if field in request.data:
-                setattr(account, field, request.data[field])
-
-        account.save()
-
-        return Response({
-            'message': 'Account settings updated successfully',
-            'account': {
-                'id': account.id,
-                'name': account.name,
-                'slug': account.slug,
-                'billing_address_line1': account.billing_address_line1,
-                'billing_address_line2': account.billing_address_line2,
-                'billing_city': account.billing_city,
-                'billing_state': account.billing_state,
-                'billing_postal_code': account.billing_postal_code,
-                'billing_country': account.billing_country,
-                'tax_id': account.tax_id,
-                'billing_email': account.billing_email,
-            }
-        })
-
-
-@extend_schema_view(
-    list=extend_schema(tags=['Account']),
-    create=extend_schema(tags=['Account']),
-    destroy=extend_schema(tags=['Account']),
-)
-class TeamManagementViewSet(viewsets.ViewSet):
-    """Team members management"""
-    permission_classes = [IsAuthenticated]
-
-    def list(self, request):
-        """List team members"""
-        account = request.user.account
-        users = User.objects.filter(account=account)
-
-        return Response({
-            'results': [
-                {
-                    'id': user.id,
-                    'email': user.email,
-                    'first_name': user.first_name,
-                    'last_name': user.last_name,
-                    'is_active': user.is_active,
-                    'is_staff': user.is_staff,
-                    'date_joined': user.date_joined.isoformat(),
-                    'last_login': user.last_login.isoformat() if user.last_login else None,
-                }
-                for user in users
-            ],
-            'count': users.count()
-        })
-
-    def create(self, request):
-        """Invite new team member"""
-        account = request.user.account
-        email = request.data.get('email')
-
-        if not email:
-            return Response(
-                {'error': 'Email is required'},
-                status=status.HTTP_400_BAD_REQUEST
-            )
-
-        # Check if user already exists
-        if User.objects.filter(email=email).exists():
-            return Response(
-                {'error': 'User with this email already exists'},
-                status=status.HTTP_400_BAD_REQUEST
-            )
-
-        # Create user (simplified - in production, send invitation email)
-        user = User.objects.create_user(
-            email=email,
-            first_name=request.data.get('first_name', ''),
-            last_name=request.data.get('last_name', ''),
-            account=account
-        )
-
-        return Response({
-            'message': 'Team member invited successfully',
-            'user': {
-                'id': user.id,
-                'email': user.email,
-                'first_name': user.first_name,
-                'last_name': user.last_name,
-            }
-        }, status=status.HTTP_201_CREATED)
-
-    def destroy(self, request, pk=None):
-        """Remove team member"""
-        account = request.user.account
-
-        try:
-            user = User.objects.get(id=pk, account=account)
-
-            # Prevent removing yourself
-            if user.id == request.user.id:
-                return Response(
-                    {'error': 'Cannot remove yourself'},
-                    status=status.HTTP_400_BAD_REQUEST
-                )
-
-            user.is_active = False
-            user.save()
-
-            return Response({
-                'message': 'Team member removed successfully'
-            })
-        except User.DoesNotExist:
-            return Response(
-                {'error': 'User not found'},
-                status=status.HTTP_404_NOT_FOUND
-            )
-
-
-@extend_schema_view(
-    overview=extend_schema(tags=['Account']),
-)
-class UsageAnalyticsViewSet(viewsets.ViewSet):
-    """Usage analytics and statistics"""
-    permission_classes = [IsAuthenticated]
-
-    @action(detail=False, methods=['get'])
-    def overview(self, request):
-        """Get usage analytics overview"""
-        account = request.user.account
-
-        # Get date range (default: last 30 days)
-        days = int(request.query_params.get('days', 30))
-        start_date = timezone.now() - timedelta(days=days)
-
-        # Get transactions in period
-        transactions = CreditTransaction.objects.filter(
-            account=account,
-            created_at__gte=start_date
-        )
-
-        # Calculate totals by type
-        usage_by_type = transactions.filter(
-            amount__lt=0
-        ).values('transaction_type').annotate(
-            total=Sum('amount'),
-            count=Count('id')
-        )
-
-        purchases_by_type = transactions.filter(
-            amount__gt=0
-        ).values('transaction_type').annotate(
-            total=Sum('amount'),
-            count=Count('id')
-        )
-
-        # Daily usage
-        daily_usage = []
-        for i in range(days):
-            date = start_date + timedelta(days=i)
-            day_txns = transactions.filter(
-                created_at__date=date.date()
-            )
-            usage = day_txns.filter(amount__lt=0).aggregate(Sum('amount'))['amount__sum'] or 0
-            purchases = day_txns.filter(amount__gt=0).aggregate(Sum('amount'))['amount__sum'] or 0
-            daily_usage.append({
-                'date': date.date().isoformat(),
-                'usage': abs(usage),
-                'purchases': purchases,
-                'net': purchases + usage
-            })
-
-        return Response({
-            'period_days': days,
-            'start_date': start_date.isoformat(),
-            'end_date': timezone.now().isoformat(),
-            'current_balance': account.credits,
-            'usage_by_type': list(usage_by_type),
-            'purchases_by_type': list(purchases_by_type),
-            'daily_usage': daily_usage,
-            'total_usage': abs(transactions.filter(amount__lt=0).aggregate(Sum('amount'))['amount__sum'] or 0),
-            'total_purchases': transactions.filter(amount__gt=0).aggregate(Sum('amount'))['amount__sum'] or 0,
-        })

View File

@@ -67,10 +67,16 @@ class JWTAuthentication(BaseAuthentication):
             try:
                 account = Account.objects.get(id=account_id)
             except Account.DoesNotExist:
-                # Account from token doesn't exist - don't fallback, set to None
-                account = None
+                pass
+
+        if not account:
+            try:
+                account = getattr(user, 'account', None)
+            except (AttributeError, Exception):
+                # If account access fails, set to None
+                account = None
 
-        # Set account on request (only if account_id was in token and account exists)
+        # Set account on request
         request.account = account
 
         return (user, token)
@@ -83,68 +89,3 @@ class JWTAuthentication(BaseAuthentication):
         # This allows session authentication to work if JWT fails
         return None
-
-
-class APIKeyAuthentication(BaseAuthentication):
-    """
-    API Key authentication for WordPress integration.
-    Validates API keys stored in Site.wp_api_key field.
-    """
-
-    def authenticate(self, request):
-        """
-        Authenticate using WordPress API key.
-        Returns (user, api_key) tuple if valid.
-        """
-        auth_header = request.META.get('HTTP_AUTHORIZATION', '')
-
-        if not auth_header.startswith('Bearer '):
-            return None  # Not an API key request
-
-        api_key = auth_header.split(' ')[1] if len(auth_header.split(' ')) > 1 else None
-
-        if not api_key or len(api_key) < 20:  # API keys should be at least 20 chars
-            return None
-
-        # Don't try to authenticate JWT tokens (they start with 'ey')
-        if api_key.startswith('ey'):
-            return None  # Let JWTAuthentication handle it
-
-        try:
-            from igny8_core.auth.models import Site, User
-
-            # Find site by API key
-            site = Site.objects.select_related('account', 'account__owner').filter(
-                wp_api_key=api_key,
-                is_active=True
-            ).first()
-
-            if not site:
-                return None  # API key not found or site inactive
-
-            # Get account and user (prefer owner but gracefully fall back)
-            account = site.account
-            user = account.owner
-            if not user or not getattr(user, 'is_active', False):
-                # Fall back to any active developer/owner/admin in the account
-                user = account.users.filter(
-                    is_active=True,
-                    role__in=['developer', 'owner', 'admin']
-                ).order_by('role').first() or account.users.filter(is_active=True).first()
-            if not user:
-                raise AuthenticationFailed('No active user available for this account.')
-
-            if not user.is_active:
-                raise AuthenticationFailed('User account is disabled.')
-
-            # Set account on request for tenant isolation
-            request.account = account
-
-            # Set site on request for WordPress integration context
-            request.site = site
-
-            return (user, api_key)
-
-        except Exception as e:
-            # Log the error but return None to allow other auth classes to try
-            import logging
-            logger = logging.getLogger(__name__)
-            logger.debug(f'APIKeyAuthentication error: {str(e)}')
-            return None
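The removed `APIKeyAuthentication` class tells JWTs apart from opaque API keys with a simple heuristic: base64url-encoded JWT headers always begin with `ey` (the encoding of `{"`). A self-contained sketch of just that classification step (the helper name and the sample tokens are made up for illustration):

```python
def classify_bearer_token(auth_header):
    """Return 'jwt', 'api_key', or None, mirroring the heuristic above."""
    if not auth_header.startswith('Bearer '):
        return None  # Not a bearer-style credential at all.
    parts = auth_header.split(' ')
    token = parts[1] if len(parts) > 1 else None
    if not token or len(token) < 20:
        return None  # Too short to be a valid key in this scheme.
    # JWT headers are base64url-encoded JSON, so they start with 'ey'.
    if token.startswith('ey'):
        return 'jwt'
    return 'api_key'


print(classify_bearer_token('Bearer eyJhbGciOiJIUzI1NiJ9.payload.sig'))  # jwt
print(classify_bearer_token('Bearer wpk_0123456789abcdef0123'))          # api_key
```

Returning `None` instead of raising is what lets DRF fall through to the next authentication class in the chain, which is exactly how the removed class cooperated with `JWTAuthentication`.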

View File

@@ -181,26 +181,7 @@ class AccountModelViewSet(viewsets.ModelViewSet):
         """
         try:
             instance = self.get_object()
-            # Protect system account
-            if hasattr(instance, 'slug') and getattr(instance, 'slug', '') == 'aws-admin':
-                from django.core.exceptions import PermissionDenied
-                raise PermissionDenied("System account cannot be deleted.")
-            if hasattr(instance, 'soft_delete'):
-                user = getattr(request, 'user', None)
-                retention_days = None
-                account = getattr(instance, 'account', None)
-                if account and hasattr(account, 'deletion_retention_days'):
-                    retention_days = account.deletion_retention_days
-                elif hasattr(instance, 'deletion_retention_days'):
-                    retention_days = getattr(instance, 'deletion_retention_days', None)
-                instance.soft_delete(
-                    user=user if getattr(user, 'is_authenticated', False) else None,
-                    retention_days=retention_days,
-                    reason='api_delete'
-                )
-            else:
-                self.perform_destroy(instance)
+            self.perform_destroy(instance)
             return success_response(
                 data=None,
                 message='Deleted successfully',
@@ -284,9 +265,9 @@ class SiteSectorModelViewSet(AccountModelViewSet):
             if query_params is None:
                 # Fallback for non-DRF requests
                 query_params = getattr(self.request, 'GET', {})
-                site_id = query_params.get('site_id') or query_params.get('site')
+                site_id = query_params.get('site_id')
             else:
-                site_id = query_params.get('site_id') or query_params.get('site')
+                site_id = query_params.get('site_id')
         except AttributeError:
             site_id = None

View File

@@ -5,8 +5,6 @@ Provides consistent response format across all endpoints
 from rest_framework.response import Response
 from rest_framework import status
 import uuid
-from typing import Any
-from django.http import HttpRequest
 
 
 def get_request_id(request):
@@ -76,28 +74,6 @@ def error_response(error=None, errors=None, status_code=status.HTTP_400_BAD_REQU
         'success': False,
     }
 
-    # Backwards compatibility: some callers used positional args in the order
-    # (error, status_code, request) which maps to (error, errors, status_code=request)
-    # causing `status_code` to be a Request object and raising TypeError.
-    # Detect this misuse and normalize arguments:
-    try:
-        if request is None and status_code is not None:
-            # If status_code appears to be a Request object, shift arguments
-            if isinstance(status_code, HttpRequest) or hasattr(status_code, 'META'):
-                # original call looked like: error_response(msg, status.HTTP_400_BAD_REQUEST, request)
-                # which resulted in: errors = status.HTTP_400..., status_code = request
-                request = status_code
-            # If `errors` holds an int-like HTTP status, use it as status_code
-            if isinstance(errors, int):
-                status_code = errors
-                errors = None
-            else:
-                # fallback to default 400
-                status_code = status.HTTP_400_BAD_REQUEST
-    except Exception:
-        # Defensive: if introspection fails, continue with provided args
-        pass
-
     if error:
         response_data['error'] = error
     elif status_code == status.HTTP_400_BAD_REQUEST:
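The backward-compatibility block removed here normalizes a legacy positional call order by sniffing for a request-like object in the wrong slot. A stripped-down, framework-free sketch of that argument-shifting technique (the `FakeRequest` class and `normalize_error_args` helper are hypothetical, not the project's API):

```python
class FakeRequest:
    """Stand-in for an HttpRequest; the real check looks for a META attr."""
    META = {}


def normalize_error_args(errors=None, status_code=400, request=None):
    # Legacy callers passed (message, status_code, request), which lands the
    # request object in the status_code slot. Detect and shift the arguments.
    if request is None and hasattr(status_code, 'META'):
        request = status_code
        status_code = errors if isinstance(errors, int) else 400
        errors = None
    return errors, status_code, request


req = FakeRequest()
print(normalize_error_args(404, req))  # shifted: (None, 404, <FakeRequest>)
```

Duck-typing on `META` rather than `isinstance(..., HttpRequest)` keeps the shim working for DRF's `Request` wrapper as well; deleting the shim (as this diff does) presumes all call sites now use keyword arguments.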

View File

@@ -8,20 +8,7 @@ from drf_spectacular.utils import extend_schema, OpenApiResponse
 from rest_framework import status
 
 # Explicit tags we want to keep (from SPECTACULAR_SETTINGS)
-EXPLICIT_TAGS = {
-    'Authentication',
-    'Planner',
-    'Writer',
-    'System',
-    'Billing',
-    'Account',
-    'Automation',
-    'Linker',
-    'Optimizer',
-    'Publisher',
-    'Integration',
-    'Admin Billing',
-}
+EXPLICIT_TAGS = {'Authentication', 'Planner', 'Writer', 'System', 'Billing'}
 
 
 def postprocess_schema_filter_tags(result, generator, request, public):
@@ -34,11 +21,6 @@ def postprocess_schema_filter_tags(result, generator, request, public):
     for path, methods in result['paths'].items():
         for method, operation in methods.items():
             if isinstance(operation, dict) and 'tags' in operation:
-                # Explicitly exclude system webhook from tagging/docs grouping
-                if '/system/webhook' in path:
-                    operation['tags'] = []
-                    continue
                 # Keep only explicit tags from the operation
                 filtered_tags = [
                     tag for tag in operation['tags']
@@ -59,20 +41,6 @@ def postprocess_schema_filter_tags(result, generator, request, public):
                 filtered_tags = ['System']
             elif '/billing/' in path or '/api/v1/billing/' in path:
                 filtered_tags = ['Billing']
-            elif '/account/' in path or '/api/v1/account/' in path:
-                filtered_tags = ['Account']
-            elif '/automation/' in path or '/api/v1/automation/' in path:
-                filtered_tags = ['Automation']
-            elif '/linker/' in path or '/api/v1/linker/' in path:
-                filtered_tags = ['Linker']
-            elif '/optimizer/' in path or '/api/v1/optimizer/' in path:
-                filtered_tags = ['Optimizer']
-            elif '/publisher/' in path or '/api/v1/publisher/' in path:
-                filtered_tags = ['Publisher']
-            elif '/integration/' in path or '/api/v1/integration/' in path:
-                filtered_tags = ['Integration']
-            elif '/admin/' in path or '/api/v1/admin/' in path:
-                filtered_tags = ['Admin Billing']
 
             operation['tags'] = filtered_tags

View File

@@ -0,0 +1,25 @@
+#!/usr/bin/env python
+"""
+Test runner script for API tests
+
+Run all tests:    python manage.py test igny8_core.api.tests
+Run specific test: python manage.py test igny8_core.api.tests.test_response
+"""
+import os
+import sys
+import django
+
+# Setup Django
+os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'igny8_core.settings')
+django.setup()
+
+from django.core.management import execute_from_command_line
+
+if __name__ == '__main__':
+    # Run all API tests
+    if len(sys.argv) > 1:
+        # Custom test specified
+        execute_from_command_line(['manage.py', 'test'] + sys.argv[1:])
+    else:
+        # Run all API tests
+        execute_from_command_line(['manage.py', 'test', 'igny8_core.api.tests', '--verbosity=2'])

View File

@@ -28,19 +28,11 @@ class DebugScopedRateThrottle(ScopedRateThrottle):
     - IGNY8_DEBUG_THROTTLE environment variable is True
     - User belongs to aws-admin or other system accounts
     - User is admin/developer role
-    - Public blueprint list request with site filter (for Sites Renderer)
     """
     # Check if throttling should be bypassed
     debug_bypass = getattr(settings, 'DEBUG', False)
     env_bypass = getattr(settings, 'IGNY8_DEBUG_THROTTLE', False)
 
-    # Bypass for public blueprint list requests (Sites Renderer fallback)
-    public_blueprint_bypass = False
-    if hasattr(view, 'action') and view.action == 'list':
-        if hasattr(request, 'query_params') and request.query_params.get('site'):
-            if not request.user or not hasattr(request.user, 'is_authenticated') or not request.user.is_authenticated:
-                public_blueprint_bypass = True
-
     # Bypass for system account users (aws-admin, default-account, etc.)
     system_account_bypass = False
     if hasattr(request, 'user') and request.user and hasattr(request.user, 'is_authenticated') and request.user.is_authenticated:
@@ -55,7 +47,7 @@ class DebugScopedRateThrottle(ScopedRateThrottle):
             # If checking fails, continue with normal throttling
             pass
 
-    if debug_bypass or env_bypass or system_account_bypass or public_blueprint_bypass:
+    if debug_bypass or env_bypass or system_account_bypass:
         # In debug mode or for system accounts, still set throttle headers but don't actually throttle
         # This allows testing throttle headers without blocking requests
         if hasattr(self, 'get_rate'):

View File

@@ -1,26 +0,0 @@
-"""
-URL patterns for account management API
-"""
-from django.urls import path, include
-from rest_framework.routers import DefaultRouter
-
-from .account_views import (
-    AccountSettingsViewSet,
-    TeamManagementViewSet,
-    UsageAnalyticsViewSet
-)
-
-router = DefaultRouter()
-
-urlpatterns = [
-    # Account settings (non-router endpoints for simplified access)
-    path('settings/', AccountSettingsViewSet.as_view({'get': 'retrieve', 'patch': 'partial_update'}), name='account-settings'),
-
-    # Team management
-    path('team/', TeamManagementViewSet.as_view({'get': 'list', 'post': 'create'}), name='team-list'),
-    path('team/<int:pk>/', TeamManagementViewSet.as_view({'delete': 'destroy'}), name='team-detail'),
-
-    # Usage analytics
-    path('usage/analytics/', UsageAnalyticsViewSet.as_view({'get': 'overview'}), name='usage-analytics'),
-
-    path('', include(router.urls)),
-]

View File

@@ -1,400 +0,0 @@
-"""
-WordPress Publishing API Views
-Handles manual content publishing to WordPress sites
-"""
-from rest_framework import status
-from rest_framework.decorators import api_view, permission_classes
-from rest_framework.permissions import IsAuthenticated
-from rest_framework.response import Response
-from django.shortcuts import get_object_or_404
-from django.utils import timezone
-from typing import Dict, Any, List
-
-from igny8_core.models import ContentPost, SiteIntegration
-from igny8_core.tasks.wordpress_publishing import (
-    publish_content_to_wordpress,
-    bulk_publish_content_to_wordpress
-)
-
-
-@api_view(['POST'])
-@permission_classes([IsAuthenticated])
-def publish_single_content(request, content_id: int) -> Response:
-    """
-    Publish a single content item to WordPress
-
-    POST /api/v1/content/{content_id}/publish-to-wordpress/
-
-    Body:
-    {
-        "site_integration_id": 123,  // Optional - will use default if not provided
-        "force": false               // Optional - force republish even if already published
-    }
-    """
-    try:
-        content = get_object_or_404(ContentPost, id=content_id)
-
-        # Check permissions
-        if not request.user.has_perm('content.change_contentpost'):
-            return Response(
-                {
-                    'success': False,
-                    'message': 'Permission denied',
-                    'error': 'insufficient_permissions'
-                },
-                status=status.HTTP_403_FORBIDDEN
-            )
-
-        # Get site integration
-        site_integration_id = request.data.get('site_integration_id')
-        force = request.data.get('force', False)
-
-        if site_integration_id:
-            site_integration = get_object_or_404(SiteIntegration, id=site_integration_id)
-        else:
-            # Get default WordPress integration for user's organization
-            site_integration = SiteIntegration.objects.filter(
-                platform='wordpress',
-                is_active=True,
-                # Add organization filter if applicable
-            ).first()
-
-            if not site_integration:
-                return Response(
-                    {
-                        'success': False,
-                        'message': 'No WordPress integration found',
-                        'error': 'no_integration'
-                    },
-                    status=status.HTTP_400_BAD_REQUEST
-                )
-
-        # Check if already published (unless force is true)
-        if not force and content.wordpress_sync_status == 'success':
-            return Response(
-                {
-                    'success': True,
-                    'message': 'Content already published to WordPress',
-                    'data': {
-                        'content_id': content.id,
-                        'wordpress_post_id': content.wordpress_post_id,
-                        'wordpress_post_url': content.wordpress_post_url,
-                        'status': 'already_published'
-                    }
-                }
-            )
-
-        # Check if currently syncing
-        if content.wordpress_sync_status == 'syncing':
-            return Response(
-                {
-                    'success': False,
-                    'message': 'Content is currently being published to WordPress',
-                    'error': 'sync_in_progress'
-                },
-                status=status.HTTP_409_CONFLICT
-            )
-
-        # Validate content is ready for publishing
-        if not content.title or not (content.content_html or content.content):
-            return Response(
-                {
-                    'success': False,
-                    'message': 'Content is incomplete - missing title or content',
-                    'error': 'incomplete_content'
-                },
-                status=status.HTTP_400_BAD_REQUEST
-            )
-
-        # Set status to pending and queue the task
-        content.wordpress_sync_status = 'pending'
-        content.save(update_fields=['wordpress_sync_status'])
-
-        # Get task_id if content is associated with a writer task
-        task_id = None
-        if hasattr(content, 'writer_task'):
-            task_id = content.writer_task.id
-
-        # Queue the publishing task
-        task_result = publish_content_to_wordpress.delay(
-            content.id,
content.id,
site_integration.id,
task_id
)
return Response(
{
'success': True,
'message': 'Content queued for WordPress publishing',
'data': {
'content_id': content.id,
'site_integration_id': site_integration.id,
'task_id': task_result.id,
'status': 'queued'
}
},
status=status.HTTP_202_ACCEPTED
)
except Exception as e:
return Response(
{
'success': False,
'message': f'Error queuing content for WordPress publishing: {str(e)}',
'error': 'server_error'
},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
@api_view(['POST'])
@permission_classes([IsAuthenticated])
def bulk_publish_content(request) -> Response:
"""
Bulk publish multiple content items to WordPress
POST /api/v1/content/bulk-publish-to-wordpress/
Body:
{
"content_ids": [1, 2, 3, 4],
"site_integration_id": 123, // Optional
"force": false // Optional
}
"""
try:
content_ids = request.data.get('content_ids', [])
site_integration_id = request.data.get('site_integration_id')
force = request.data.get('force', False)
if not content_ids:
return Response(
{
'success': False,
'message': 'No content IDs provided',
'error': 'missing_content_ids'
},
status=status.HTTP_400_BAD_REQUEST
)
# Check permissions
if not request.user.has_perm('content.change_contentpost'):
return Response(
{
'success': False,
'message': 'Permission denied',
'error': 'insufficient_permissions'
},
status=status.HTTP_403_FORBIDDEN
)
# Get site integration
if site_integration_id:
site_integration = get_object_or_404(SiteIntegration, id=site_integration_id)
else:
site_integration = SiteIntegration.objects.filter(
platform='wordpress',
is_active=True,
).first()
if not site_integration:
return Response(
{
'success': False,
'message': 'No WordPress integration found',
'error': 'no_integration'
},
status=status.HTTP_400_BAD_REQUEST
)
# Validate content items
content_items = ContentPost.objects.filter(id__in=content_ids)
if content_items.count() != len(content_ids):
return Response(
{
'success': False,
'message': 'Some content items not found',
'error': 'content_not_found'
},
status=status.HTTP_404_NOT_FOUND
)
# Queue bulk publishing task
task_result = bulk_publish_content_to_wordpress.delay(
content_ids,
site_integration.id
)
return Response(
{
'success': True,
'message': f'{len(content_ids)} content items queued for WordPress publishing',
'data': {
'content_count': len(content_ids),
'site_integration_id': site_integration.id,
'task_id': task_result.id,
'status': 'queued'
}
},
status=status.HTTP_202_ACCEPTED
)
except Exception as e:
return Response(
{
'success': False,
'message': f'Error queuing bulk WordPress publishing: {str(e)}',
'error': 'server_error'
},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
@api_view(['GET'])
@permission_classes([IsAuthenticated])
def get_wordpress_status(request, content_id: int) -> Response:
"""
Get WordPress publishing status for a content item
GET /api/v1/content/{content_id}/wordpress-status/
"""
try:
content = get_object_or_404(ContentPost, id=content_id)
return Response(
{
'success': True,
'data': {
'content_id': content.id,
'wordpress_sync_status': content.wordpress_sync_status,
'wordpress_post_id': content.wordpress_post_id,
'wordpress_post_url': content.wordpress_post_url,
'wordpress_sync_attempts': content.wordpress_sync_attempts,
'last_wordpress_sync': content.last_wordpress_sync.isoformat() if content.last_wordpress_sync else None,
}
}
)
except Exception as e:
return Response(
{
'success': False,
'message': f'Error getting WordPress status: {str(e)}',
'error': 'server_error'
},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
@api_view(['GET'])
@permission_classes([IsAuthenticated])
def get_wordpress_integrations(request) -> Response:
"""
Get available WordPress integrations for publishing
GET /api/v1/wordpress-integrations/
"""
try:
integrations = SiteIntegration.objects.filter(
platform='wordpress',
is_active=True,
# Add organization filter if applicable
).values(
'id', 'site_name', 'site_url', 'is_active',
'created_at', 'last_sync_at'
)
return Response(
{
'success': True,
'data': list(integrations)
}
)
except Exception as e:
return Response(
{
'success': False,
'message': f'Error getting WordPress integrations: {str(e)}',
'error': 'server_error'
},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
@api_view(['POST'])
@permission_classes([IsAuthenticated])
def retry_failed_wordpress_sync(request, content_id: int) -> Response:
"""
Retry a failed WordPress sync
POST /api/v1/content/{content_id}/retry-wordpress-sync/
"""
try:
content = get_object_or_404(ContentPost, id=content_id)
if content.wordpress_sync_status != 'failed':
return Response(
{
'success': False,
'message': 'Content is not in failed status',
'error': 'invalid_status'
},
status=status.HTTP_400_BAD_REQUEST
)
# Get default WordPress integration
site_integration = SiteIntegration.objects.filter(
platform='wordpress',
is_active=True,
).first()
if not site_integration:
return Response(
{
'success': False,
'message': 'No WordPress integration found',
'error': 'no_integration'
},
status=status.HTTP_400_BAD_REQUEST
)
# Reset status and retry
content.wordpress_sync_status = 'pending'
content.save(update_fields=['wordpress_sync_status'])
# Get task_id if available
task_id = None
if hasattr(content, 'writer_task'):
task_id = content.writer_task.id
# Queue the publishing task
task_result = publish_content_to_wordpress.delay(
content.id,
site_integration.id,
task_id
)
return Response(
{
'success': True,
'message': 'WordPress sync retry queued',
'data': {
'content_id': content.id,
'task_id': task_result.id,
'status': 'queued'
}
},
status=status.HTTP_202_ACCEPTED
)
except Exception as e:
return Response(
{
'success': False,
'message': f'Error retrying WordPress sync: {str(e)}',
'error': 'server_error'
},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
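As a usage sketch, a client would assemble the request for the single-item publish endpoint documented in the docstring above roughly like this. The `build_publish_request` helper is purely illustrative (it is not part of the codebase), and authentication headers are omitted:

```python
def build_publish_request(content_id, site_integration_id=None, force=False):
    """Build (method, path, json_body) for the single-item WordPress publish endpoint.

    site_integration_id is optional; when omitted the server picks the
    default active WordPress integration, as the view above does.
    """
    body = {"force": force}
    if site_integration_id is not None:
        body["site_integration_id"] = site_integration_id
    path = f"/api/v1/content/{content_id}/publish-to-wordpress/"
    return "POST", path, body


# Example: queue content 42 for republish on integration 7
method, path, body = build_publish_request(42, site_integration_id=7, force=True)
# method -> "POST"
# path   -> "/api/v1/content/42/publish-to-wordpress/"
```

On success the view responds 202 with a Celery task id; 409 means a sync is already in progress.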

View File

@@ -19,9 +19,21 @@ class PlanAdmin(admin.ModelAdmin):
 ('Plan Info', {
 'fields': ('name', 'slug', 'price', 'billing_cycle', 'features', 'is_active')
 }),
-('Account Management Limits', {
+('User / Site Limits', {
 'fields': ('max_users', 'max_sites', 'max_industries', 'max_author_profiles')
 }),
+('Planner Limits', {
+'fields': ('max_keywords', 'max_clusters', 'daily_cluster_limit', 'daily_keyword_import_limit', 'monthly_cluster_ai_credits')
+}),
+('Writer Limits', {
+'fields': ('daily_content_tasks', 'daily_ai_requests', 'monthly_word_count_limit', 'monthly_content_ai_credits')
+}),
+('Image Limits', {
+'fields': ('monthly_image_count', 'monthly_image_ai_credits', 'max_images_per_task', 'image_model_choices')
+}),
+('AI Controls', {
+'fields': ('daily_ai_request_limit', 'monthly_ai_credit_limit')
+}),
 ('Billing & Credits', {
 'fields': ('included_credits', 'extra_credit_price', 'allow_credit_topup', 'auto_credit_topup_threshold', 'auto_credit_topup_amount', 'credits_per_month')
 }),
@@ -56,11 +68,6 @@ class AccountAdmin(AccountAdminMixin, admin.ModelAdmin):
 pass
 return qs.none()
-def has_delete_permission(self, request, obj=None):
-if obj and getattr(obj, 'slug', '') == 'aws-admin':
-return False
-return super().has_delete_permission(request, obj)
 @admin.register(Subscription)
 class SubscriptionAdmin(AccountAdminMixin, admin.ModelAdmin):
@@ -110,66 +117,11 @@ class SectorInline(admin.TabularInline):
 @admin.register(Site)
 class SiteAdmin(AccountAdminMixin, admin.ModelAdmin):
-list_display = ['name', 'slug', 'account', 'industry', 'domain', 'status', 'is_active', 'get_api_key_status', 'get_sectors_count']
+list_display = ['name', 'slug', 'account', 'industry', 'domain', 'status', 'is_active', 'get_sectors_count']
-list_filter = ['status', 'is_active', 'account', 'industry', 'hosting_type']
+list_filter = ['status', 'is_active', 'account', 'industry']
 search_fields = ['name', 'slug', 'domain', 'industry__name']
-readonly_fields = ['created_at', 'updated_at', 'get_api_key_display']
+readonly_fields = ['created_at', 'updated_at']
 inlines = [SectorInline]
-actions = ['generate_api_keys']
-fieldsets = (
-('Site Info', {
-'fields': ('name', 'slug', 'account', 'domain', 'description', 'industry', 'site_type', 'hosting_type', 'status', 'is_active')
-}),
-('WordPress Integration', {
-'fields': ('get_api_key_display',),
-'description': 'WordPress integration API key. Use SiteIntegration model for full integration settings.'
-}),
-('SEO Metadata', {
-'fields': ('seo_metadata',),
-'classes': ('collapse',)
-}),
-('Timestamps', {
-'fields': ('created_at', 'updated_at'),
-'classes': ('collapse',)
-}),
-)
-def get_api_key_display(self, obj):
-"""Display API key with copy button"""
-if obj.wp_api_key:
-from django.utils.html import format_html
-return format_html(
-'<div style="display:flex; align-items:center; gap:10px;">'
-'<code style="background:#f0f0f0; padding:5px 10px; border-radius:3px;">{}</code>'
-'<button type="button" onclick="navigator.clipboard.writeText(\'{}\'); alert(\'API Key copied to clipboard!\');" '
-'style="padding:5px 10px; cursor:pointer;">Copy</button>'
-'</div>',
-obj.wp_api_key,
-obj.wp_api_key
-)
-return format_html('<em>No API key generated</em>')
-get_api_key_display.short_description = 'WordPress API Key'
-def get_api_key_status(self, obj):
-"""Show API key status in list view"""
-from django.utils.html import format_html
-if obj.wp_api_key:
-return format_html('<span style="color:green;">●</span> Active')
-return format_html('<span style="color:gray;">○</span> None')
-get_api_key_status.short_description = 'API Key'
-def generate_api_keys(self, request, queryset):
-"""Generate API keys for selected sites"""
-import secrets
-updated_count = 0
-for site in queryset:
-if not site.wp_api_key:
-site.wp_api_key = f"igny8_{''.join(secrets.choice('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789') for _ in range(40))}"
-site.save()
-updated_count += 1
-self.message_user(request, f'Generated API keys for {updated_count} site(s). Sites with existing keys were skipped.')
-generate_api_keys.short_description = 'Generate WordPress API Keys'
 def get_sectors_count(self, obj):
 try:
@@ -248,16 +200,10 @@ class IndustryAdmin(admin.ModelAdmin):
 search_fields = ['name', 'slug', 'description']
 readonly_fields = ['created_at', 'updated_at']
 inlines = [IndustrySectorInline]
-actions = ['delete_selected'] # Enable bulk delete
-change_list_template = 'admin/igny8_core_auth/industry/change_list.html'
 def get_sectors_count(self, obj):
 return obj.sectors.filter(is_active=True).count()
 get_sectors_count.short_description = 'Active Sectors'
-def has_delete_permission(self, request, obj=None):
-"""Allow deletion for superusers and developers"""
-return request.user.is_superuser or (hasattr(request.user, 'is_developer') and request.user.is_developer())
 @admin.register(IndustrySector)
@@ -266,12 +212,6 @@ class IndustrySectorAdmin(admin.ModelAdmin):
 list_filter = ['is_active', 'industry']
 search_fields = ['name', 'slug', 'description']
 readonly_fields = ['created_at', 'updated_at']
-actions = ['delete_selected'] # Enable bulk delete
-change_list_template = 'admin/igny8_core_auth/industrysector/change_list.html'
-def has_delete_permission(self, request, obj=None):
-"""Allow deletion for superusers and developers"""
-return request.user.is_superuser or (hasattr(request.user, 'is_developer') and request.user.is_developer())
 @admin.register(SeedKeyword)
@@ -281,8 +221,6 @@ class SeedKeywordAdmin(admin.ModelAdmin):
 list_filter = ['is_active', 'industry', 'sector', 'intent']
 search_fields = ['keyword']
 readonly_fields = ['created_at', 'updated_at']
-actions = ['delete_selected'] # Enable bulk delete
-change_list_template = 'admin/igny8_core_auth/seedkeyword/change_list.html'
 fieldsets = (
 ('Keyword Info', {
@@ -295,10 +233,6 @@
 'fields': ('created_at', 'updated_at')
 }),
 )
-def has_delete_permission(self, request, obj=None):
-"""Allow deletion for superusers and developers"""
-return request.user.is_superuser or (hasattr(request.user, 'is_developer') and request.user.is_developer())
 @admin.register(User)

View File

@@ -8,7 +8,7 @@ from django.db.models import Q
 from igny8_core.auth.models import Account, User, Site, Sector
 from igny8_core.modules.planner.models import Keywords, Clusters, ContentIdeas
 from igny8_core.modules.writer.models import Tasks, Images, Content
-from igny8_core.business.billing.models import CreditTransaction, CreditUsageLog
+from igny8_core.modules.billing.models import CreditTransaction, CreditUsageLog
 from igny8_core.modules.system.models import AIPrompt, IntegrationSettings, AuthorProfile, Strategy
 from igny8_core.modules.system.settings_models import AccountSettings, UserSettings, ModuleSettings, AISettings

View File

@@ -1,42 +0,0 @@
from django.core.management.base import BaseCommand
from django.utils import timezone
from igny8_core.auth.models import Account, Site, Sector
from igny8_core.business.planning.models import Clusters, Keywords, ContentIdeas
from igny8_core.business.content.models import Tasks, Content, Images
class Command(BaseCommand):
help = "Permanently delete soft-deleted records whose retention window has expired."
def handle(self, *args, **options):
now = timezone.now()
total_deleted = 0
models = [
Account,
Site,
Sector,
Clusters,
Keywords,
ContentIdeas,
Tasks,
Content,
Images,
]
for model in models:
qs = model.all_objects.filter(is_deleted=True, restore_until__lt=now)
if model is Account:
qs = qs.exclude(slug='aws-admin')
count = qs.count()
if count:
qs.delete()
total_deleted += count
self.stdout.write(self.style.SUCCESS(f"Purged {count} {model.__name__} record(s)."))
if total_deleted == 0:
self.stdout.write("No expired soft-deleted records to purge.")
else:
self.stdout.write(self.style.SUCCESS(f"Total purged: {total_deleted}"))
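The queryset filter in the command above reduces to a per-record predicate: soft-deleted, past its restore window, and never the protected `aws-admin` account. A minimal standalone sketch (`is_purgeable` is an illustrative name, not a function from the codebase):

```python
from datetime import datetime, timedelta

def is_purgeable(is_deleted, restore_until, now, slug=None):
    """Mirror the purge filter: is_deleted=True, restore_until < now,
    excluding the protected 'aws-admin' account slug."""
    if slug == "aws-admin":
        return False  # protected account is always excluded
    return bool(is_deleted and restore_until is not None and restore_until < now)


now = datetime(2025, 11, 16, 12, 0)
# Soft-deleted with an expired restore window -> eligible for permanent deletion
print(is_purgeable(True, now - timedelta(days=30), now))
```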

View File

@@ -4,7 +4,6 @@ Extracts account from JWT token and injects into request context
""" """
from django.utils.deprecation import MiddlewareMixin from django.utils.deprecation import MiddlewareMixin
from django.http import JsonResponse from django.http import JsonResponse
from django.contrib.auth import logout
from rest_framework import status from rest_framework import status
try: try:
@@ -42,19 +41,14 @@ class AccountContextMiddleware(MiddlewareMixin):
 request.user = user
 # Get account from refreshed user
 user_account = getattr(user, 'account', None)
-validation_error = self._validate_account_and_plan(request, user)
-if validation_error:
-return validation_error
-request.account = getattr(user, 'account', None)
-return None
+if user_account:
+request.account = user_account
+return None
 except (AttributeError, UserModel.DoesNotExist, Exception):
 # If refresh fails, fallback to cached account
 try:
 user_account = getattr(request.user, 'account', None)
 if user_account:
-validation_error = self._validate_account_and_plan(request, request.user)
-if validation_error:
-return validation_error
 request.account = user_account
 return None
 except (AttributeError, Exception):
@@ -82,6 +76,7 @@ class AccountContextMiddleware(MiddlewareMixin):
 if not JWT_AVAILABLE:
 # JWT library not installed yet - skip for now
 request.account = None
+request.user = None
 return None
 # Decode JWT token with signature verification
@@ -99,76 +94,42 @@ class AccountContextMiddleware(MiddlewareMixin):
 if user_id:
 from .models import User, Account
 try:
-# Get user from DB (but don't set request.user - let DRF authentication handle that)
-# Only set request.account for account context
+# Refresh user from DB with account and plan relationships to get latest data
+# This ensures changes to account/plan are reflected immediately without re-login
 user = User.objects.select_related('account', 'account__plan').get(id=user_id)
-validation_error = self._validate_account_and_plan(request, user)
-if validation_error:
-return validation_error
+request.user = user
 if account_id:
-# Verify account still exists
-try:
-account = Account.objects.get(id=account_id)
-request.account = account
-except Account.DoesNotExist:
-# Account from token doesn't exist - don't fallback, set to None
-request.account = None
+# Verify account still exists and matches user
+account = Account.objects.get(id=account_id)
+# If user's account changed, use the new one from user object
+if user.account and user.account.id != account_id:
+request.account = user.account
+else:
+request.account = account
 else:
-# No account_id in token - set to None (don't fallback to user.account)
-request.account = None
+try:
+user_account = getattr(user, 'account', None)
+if user_account:
+request.account = user_account
+else:
+request.account = None
+except (AttributeError, Exception):
+# If account access fails (e.g., column mismatch), set to None
+request.account = None
 except (User.DoesNotExist, Account.DoesNotExist):
 request.account = None
+request.user = None
 else:
 request.account = None
+request.user = None
 except jwt.InvalidTokenError:
 request.account = None
+request.user = None
 except Exception:
 # Fail silently for now - allow unauthenticated access
 request.account = None
+request.user = None
 return None
-def _validate_account_and_plan(self, request, user):
-"""
-Ensure the authenticated user has an account and an active plan.
-If not, logout the user (for session auth) and block the request.
-"""
-try:
-account = getattr(user, 'account', None)
-except Exception:
-account = None
-if not account:
-return self._deny_request(
-request,
-error='Account not configured for this user. Please contact support.',
-status_code=status.HTTP_403_FORBIDDEN,
-)
-plan = getattr(account, 'plan', None)
-if plan is None or getattr(plan, 'is_active', False) is False:
-return self._deny_request(
-request,
-error='Active subscription required. Visit igny8.com/pricing to subscribe.',
-status_code=status.HTTP_402_PAYMENT_REQUIRED,
-)
-return None
-def _deny_request(self, request, error, status_code):
-"""Logout session users (if any) and return a consistent JSON error."""
-try:
-if hasattr(request, 'user') and request.user and request.user.is_authenticated:
-logout(request)
-except Exception:
-pass
-return JsonResponse(
-{
-'success': False,
-'error': error,
-},
-status=status_code,
-)
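Boiled down, the new (right-hand) version of the middleware's account resolution follows a small precedence rule: a stale `account_id` claim in the JWT loses to the user's current account, and with no claim the user's account (or None) wins. A standalone sketch over bare ids (`resolve_account_id` is an illustrative name, not from the codebase):

```python
def resolve_account_id(token_account_id, user_account_id):
    """Pick the effective account id the middleware should attach to the request."""
    if token_account_id is not None:
        # Token carries an account claim; prefer the user's current account
        # if it has changed since the token was issued.
        if user_account_id is not None and user_account_id != token_account_id:
            return user_account_id
        return token_account_id
    # No claim in the token: fall back to the user's account, which may be None.
    return user_account_id


# User moved to account 11 after the token (claiming 10) was issued:
print(resolve_account_id(10, 11))  # -> 11
```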

View File

@@ -1,4 +1,4 @@
-# Generated by Django 5.2.8 on 2025-11-20 23:27
+# Generated by Django 5.2.7 on 2025-11-02 21:42
 import django.contrib.auth.models
 import django.contrib.auth.validators
@@ -25,22 +25,12 @@ class Migration(migrations.Migration):
 ('name', models.CharField(max_length=255)),
 ('slug', models.SlugField(max_length=255, unique=True)),
 ('price', models.DecimalField(decimal_places=2, max_digits=10)),
-('billing_cycle', models.CharField(choices=[('monthly', 'Monthly'), ('annual', 'Annual')], default='monthly', max_length=20)),
-('features', models.JSONField(blank=True, default=list, help_text="Plan features as JSON array (e.g., ['ai_writer', 'image_gen', 'auto_publish'])")),
+('credits_per_month', models.IntegerField(default=0, validators=[django.core.validators.MinValueValidator(0)])),
+('max_sites', models.IntegerField(default=1, help_text='Maximum number of sites allowed (1-10)', validators=[django.core.validators.MinValueValidator(1), django.core.validators.MaxValueValidator(10)])),
+('features', models.JSONField(default=dict, help_text='Plan features as JSON')),
+('stripe_price_id', models.CharField(blank=True, max_length=255, null=True)),
 ('is_active', models.BooleanField(default=True)),
 ('created_at', models.DateTimeField(auto_now_add=True)),
-('max_users', models.IntegerField(default=1, help_text='Total users allowed per account', validators=[django.core.validators.MinValueValidator(1)])),
-('max_sites', models.IntegerField(default=1, help_text='Maximum number of sites allowed', validators=[django.core.validators.MinValueValidator(1)])),
-('max_industries', models.IntegerField(blank=True, default=None, help_text='Optional limit for industries/sectors', null=True, validators=[django.core.validators.MinValueValidator(1)])),
-('max_author_profiles', models.IntegerField(default=5, help_text='Limit for saved writing styles', validators=[django.core.validators.MinValueValidator(0)])),
-('included_credits', models.IntegerField(default=0, help_text='Monthly credits included', validators=[django.core.validators.MinValueValidator(0)])),
-('extra_credit_price', models.DecimalField(decimal_places=2, default=0.01, help_text='Price per additional credit', max_digits=10)),
-('allow_credit_topup', models.BooleanField(default=True, help_text='Can user purchase more credits?')),
-('auto_credit_topup_threshold', models.IntegerField(blank=True, default=None, help_text='Auto top-up trigger point (optional)', null=True, validators=[django.core.validators.MinValueValidator(0)])),
-('auto_credit_topup_amount', models.IntegerField(blank=True, default=None, help_text='How many credits to auto-buy', null=True, validators=[django.core.validators.MinValueValidator(1)])),
-('stripe_product_id', models.CharField(blank=True, help_text='For Stripe plan sync', max_length=255, null=True)),
-('stripe_price_id', models.CharField(blank=True, help_text='Monthly price ID for Stripe', max_length=255, null=True)),
-('credits_per_month', models.IntegerField(default=0, help_text='DEPRECATED: Use included_credits instead', validators=[django.core.validators.MinValueValidator(0)])),
 ],
 options={
 'db_table': 'igny8_plans',
@@ -60,7 +50,7 @@ class Migration(migrations.Migration):
 ('is_staff', models.BooleanField(default=False, help_text='Designates whether the user can log into this admin site.', verbose_name='staff status')),
 ('is_active', models.BooleanField(default=True, help_text='Designates whether this user should be treated as active. Unselect this instead of deleting accounts.', verbose_name='active')),
 ('date_joined', models.DateTimeField(default=django.utils.timezone.now, verbose_name='date joined')),
-('role', models.CharField(choices=[('developer', 'Developer / Super Admin'), ('owner', 'Owner'), ('admin', 'Admin'), ('editor', 'Editor'), ('viewer', 'Viewer'), ('system_bot', 'System Bot')], default='viewer', max_length=20)),
+('role', models.CharField(choices=[('owner', 'Owner'), ('admin', 'Admin'), ('editor', 'Editor'), ('viewer', 'Viewer'), ('system_bot', 'System Bot')], default='viewer', max_length=20)),
 ('email', models.EmailField(max_length=254, unique=True, verbose_name='email address')),
 ('created_at', models.DateTimeField(auto_now_add=True)),
 ('updated_at', models.DateTimeField(auto_now=True)),
@@ -75,7 +65,7 @@ class Migration(migrations.Migration):
 ],
 ),
 migrations.CreateModel(
-name='Account',
+name='Tenant',
 fields=[
 ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
 ('name', models.CharField(max_length=255)),
@@ -85,93 +75,28 @@
 ('status', models.CharField(choices=[('active', 'Active'), ('suspended', 'Suspended'), ('trial', 'Trial'), ('cancelled', 'Cancelled')], default='trial', max_length=20)),
 ('created_at', models.DateTimeField(auto_now_add=True)),
 ('updated_at', models.DateTimeField(auto_now=True)),
-('owner', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='owned_accounts', to=settings.AUTH_USER_MODEL)),
-('plan', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='accounts', to='igny8_core_auth.plan')),
+('owner', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='owned_tenants', to=settings.AUTH_USER_MODEL)),
+('plan', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, related_name='tenants', to='igny8_core_auth.plan')),
 ],
 options={
-'verbose_name': 'Account',
-'verbose_name_plural': 'Accounts',
 'db_table': 'igny8_tenants',
 },
 ),
-migrations.AddField(
-model_name='user',
-name='account',
-field=models.ForeignKey(blank=True, db_column='tenant_id', null=True, on_delete=django.db.models.deletion.CASCADE, related_name='users', to='igny8_core_auth.account'),
-),
 migrations.CreateModel(
-name='Industry',
+name='Subscription',
 fields=[
 ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
-('name', models.CharField(max_length=255, unique=True)),
-('slug', models.SlugField(max_length=255, unique=True)),
-('description', models.TextField(blank=True, null=True)),
-('is_active', models.BooleanField(db_index=True, default=True)),
+('stripe_subscription_id', models.CharField(max_length=255, unique=True)),
+('status', models.CharField(choices=[('active', 'Active'), ('past_due', 'Past Due'), ('canceled', 'Canceled'), ('trialing', 'Trialing')], max_length=20)),
+('current_period_start', models.DateTimeField()),
+('current_period_end', models.DateTimeField()),
+('cancel_at_period_end', models.BooleanField(default=False)),
 ('created_at', models.DateTimeField(auto_now_add=True)),
 ('updated_at', models.DateTimeField(auto_now=True)),
+('tenant', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, related_name='subscription', to='igny8_core_auth.tenant')),
 ],
 options={
-'verbose_name': 'Industry',
-'verbose_name_plural': 'Industries',
-'db_table': 'igny8_industries',
-'ordering': ['name'],
-'indexes': [models.Index(fields=['slug'], name='igny8_indus_slug_2f8769_idx'), models.Index(fields=['is_active'], name='igny8_indus_is_acti_146d41_idx')],
+'db_table': 'igny8_subscriptions',
 },
 ),
-migrations.CreateModel(
-name='IndustrySector',
-fields=[
-('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
-('name', models.CharField(max_length=255)),
-('slug', models.SlugField(max_length=255)),
-('description', models.TextField(blank=True, null=True)),
-('suggested_keywords', models.JSONField(default=list, help_text='List of suggested keywords for this sector template')),
-('is_active', models.BooleanField(db_index=True, default=True)),
-('created_at', models.DateTimeField(auto_now_add=True)),
-('updated_at', models.DateTimeField(auto_now=True)),
-('industry', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='sectors', to='igny8_core_auth.industry')),
-],
-options={
-'verbose_name': 'Industry Sector',
-'verbose_name_plural': 'Industry Sectors',
-'db_table': 'igny8_industry_sectors',
-'ordering': ['industry', 'name'],
-},
-),
-migrations.CreateModel(
-name='PasswordResetToken',
-fields=[
-('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
-('token', models.CharField(db_index=True, max_length=255, unique=True)),
-('expires_at', models.DateTimeField()),
-('used', models.BooleanField(default=False)),
-('created_at', models.DateTimeField(auto_now_add=True)),
-('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='password_reset_tokens', to=settings.AUTH_USER_MODEL)),
-],
-options={
-'db_table': 'igny8_password_reset_tokens',
-'ordering': ['-created_at'],
-},
-),
-migrations.CreateModel(
-name='SeedKeyword',
-fields=[
-('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
-('keyword', models.CharField(db_index=True, max_length=255)),
-('volume', models.IntegerField(default=0, help_text='Search volume estimate')),
-('difficulty', models.IntegerField(default=0, help_text='Keyword difficulty (0-100)', validators=[django.core.validators.MinValueValidator(0), django.core.validators.MaxValueValidator(100)])),
-('intent', models.CharField(choices=[('informational', 'Informational'), ('navigational', 'Navigational'), ('commercial', 'Commercial'), ('transactional', 'Transactional')], default='informational', max_length=50)),
-('is_active', models.BooleanField(db_index=True, default=True)),
-('created_at', models.DateTimeField(auto_now_add=True)),
-('updated_at', models.DateTimeField(auto_now=True)),
-('industry', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='seed_keywords', to='igny8_core_auth.industry')),
-('sector', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='seed_keywords', to='igny8_core_auth.industrysector')),
],
options={
'verbose_name': 'Seed Keyword',
'verbose_name_plural': 'Seed Keywords',
'db_table': 'igny8_seed_keywords',
'ordering': ['keyword'],
}, },
), ),
migrations.CreateModel( migrations.CreateModel(
@@ -186,18 +111,13 @@ class Migration(migrations.Migration):
('status', models.CharField(choices=[('active', 'Active'), ('inactive', 'Inactive'), ('suspended', 'Suspended')], default='active', max_length=20)), ('status', models.CharField(choices=[('active', 'Active'), ('inactive', 'Inactive'), ('suspended', 'Suspended')], default='active', max_length=20)),
('created_at', models.DateTimeField(auto_now_add=True)), ('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)), ('updated_at', models.DateTimeField(auto_now=True)),
('wp_url', models.URLField(blank=True, help_text='WordPress site URL (legacy - use SiteIntegration)', null=True)), ('wp_url', models.URLField(blank=True, help_text='WordPress site URL', null=True)),
('wp_username', models.CharField(blank=True, max_length=255, null=True)), ('wp_username', models.CharField(blank=True, max_length=255, null=True)),
('wp_app_password', models.CharField(blank=True, max_length=255, null=True)), ('wp_app_password', models.CharField(blank=True, max_length=255, null=True)),
('site_type', models.CharField(choices=[('marketing', 'Marketing Site'), ('ecommerce', 'Ecommerce Site'), ('blog', 'Blog'), ('portfolio', 'Portfolio'), ('corporate', 'Corporate')], db_index=True, default='marketing', help_text='Type of site', max_length=50)), ('tenant', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='%(class)s_set', to='igny8_core_auth.tenant')),
('hosting_type', models.CharField(choices=[('igny8_sites', 'IGNY8 Sites'), ('wordpress', 'WordPress'), ('shopify', 'Shopify'), ('multi', 'Multi-Destination')], db_index=True, default='igny8_sites', help_text='Target hosting platform', max_length=50)),
('seo_metadata', models.JSONField(blank=True, default=dict, help_text='SEO metadata: meta tags, Open Graph, Schema.org')),
('account', models.ForeignKey(db_column='tenant_id', on_delete=django.db.models.deletion.CASCADE, related_name='%(class)s_set', to='igny8_core_auth.account')),
('industry', models.ForeignKey(blank=True, help_text='Industry this site belongs to', null=True, on_delete=django.db.models.deletion.PROTECT, related_name='sites', to='igny8_core_auth.industry')),
], ],
options={ options={
'db_table': 'igny8_sites', 'db_table': 'igny8_sites',
'ordering': ['-created_at'],
}, },
), ),
migrations.CreateModel( migrations.CreateModel(
@@ -211,14 +131,18 @@ class Migration(migrations.Migration):
('status', models.CharField(choices=[('active', 'Active'), ('inactive', 'Inactive')], default='active', max_length=20)), ('status', models.CharField(choices=[('active', 'Active'), ('inactive', 'Inactive')], default='active', max_length=20)),
('created_at', models.DateTimeField(auto_now_add=True)), ('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)), ('updated_at', models.DateTimeField(auto_now=True)),
('account', models.ForeignKey(db_column='tenant_id', on_delete=django.db.models.deletion.CASCADE, related_name='%(class)s_set', to='igny8_core_auth.account')),
('industry_sector', models.ForeignKey(blank=True, help_text='Reference to the industry sector template', null=True, on_delete=django.db.models.deletion.PROTECT, related_name='site_sectors', to='igny8_core_auth.industrysector')),
('site', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='sectors', to='igny8_core_auth.site')), ('site', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='sectors', to='igny8_core_auth.site')),
('tenant', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='%(class)s_set', to='igny8_core_auth.tenant')),
], ],
options={ options={
'db_table': 'igny8_sectors', 'db_table': 'igny8_sectors',
}, },
), ),
migrations.AddField(
model_name='user',
name='tenant',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='users', to='igny8_core_auth.tenant'),
),
migrations.CreateModel( migrations.CreateModel(
name='SiteUserAccess', name='SiteUserAccess',
fields=[ fields=[
@@ -229,111 +153,34 @@ class Migration(migrations.Migration):
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='site_access', to=settings.AUTH_USER_MODEL)), ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='site_access', to=settings.AUTH_USER_MODEL)),
], ],
options={ options={
'verbose_name': 'Site User Access',
'verbose_name_plural': 'Site User Access',
'db_table': 'igny8_site_user_access', 'db_table': 'igny8_site_user_access',
}, 'indexes': [models.Index(fields=['user', 'site'], name='igny8_site__user_id_61951e_idx')],
), 'unique_together': {('user', 'site')},
migrations.CreateModel(
name='Subscription',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('stripe_subscription_id', models.CharField(max_length=255, unique=True)),
('status', models.CharField(choices=[('active', 'Active'), ('past_due', 'Past Due'), ('canceled', 'Canceled'), ('trialing', 'Trialing')], max_length=20)),
('current_period_start', models.DateTimeField()),
('current_period_end', models.DateTimeField()),
('cancel_at_period_end', models.BooleanField(default=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('account', models.OneToOneField(db_column='tenant_id', on_delete=django.db.models.deletion.CASCADE, related_name='subscription', to='igny8_core_auth.account')),
],
options={
'db_table': 'igny8_subscriptions',
}, },
), ),
migrations.AddIndex( migrations.AddIndex(
model_name='user', model_name='tenant',
index=models.Index(fields=['account', 'role'], name='igny8_users_tenant__0ab02b_idx'),
),
migrations.AddIndex(
model_name='user',
index=models.Index(fields=['email'], name='igny8_users_email_fd61ff_idx'),
),
migrations.AddIndex(
model_name='industrysector',
index=models.Index(fields=['industry', 'is_active'], name='igny8_indus_industr_00b524_idx'),
),
migrations.AddIndex(
model_name='industrysector',
index=models.Index(fields=['slug'], name='igny8_indus_slug_101d63_idx'),
),
migrations.AlterUniqueTogether(
name='industrysector',
unique_together={('industry', 'slug')},
),
migrations.AddIndex(
model_name='passwordresettoken',
index=models.Index(fields=['token'], name='igny8_passw_token_0eaf0c_idx'),
),
migrations.AddIndex(
model_name='passwordresettoken',
index=models.Index(fields=['user', 'used'], name='igny8_passw_user_id_320c02_idx'),
),
migrations.AddIndex(
model_name='passwordresettoken',
index=models.Index(fields=['expires_at'], name='igny8_passw_expires_c9aa03_idx'),
),
migrations.AddIndex(
model_name='account',
index=models.Index(fields=['slug'], name='igny8_tenan_slug_f25e97_idx'), index=models.Index(fields=['slug'], name='igny8_tenan_slug_f25e97_idx'),
), ),
migrations.AddIndex( migrations.AddIndex(
model_name='account', model_name='tenant',
index=models.Index(fields=['status'], name='igny8_tenan_status_5dc02a_idx'), index=models.Index(fields=['status'], name='igny8_tenan_status_5dc02a_idx'),
), ),
migrations.AddIndex( migrations.AddIndex(
model_name='seedkeyword', model_name='subscription',
index=models.Index(fields=['keyword'], name='igny8_seed__keyword_efa089_idx'), index=models.Index(fields=['status'], name='igny8_subsc_status_2fa897_idx'),
),
migrations.AddIndex(
model_name='seedkeyword',
index=models.Index(fields=['industry', 'sector'], name='igny8_seed__industr_c41841_idx'),
),
migrations.AddIndex(
model_name='seedkeyword',
index=models.Index(fields=['industry', 'sector', 'is_active'], name='igny8_seed__industr_da0030_idx'),
),
migrations.AddIndex(
model_name='seedkeyword',
index=models.Index(fields=['intent'], name='igny8_seed__intent_15020d_idx'),
),
migrations.AlterUniqueTogether(
name='seedkeyword',
unique_together={('keyword', 'industry', 'sector')},
), ),
migrations.AddIndex( migrations.AddIndex(
model_name='site', model_name='site',
index=models.Index(fields=['account', 'is_active'], name='igny8_sites_tenant__e0f31d_idx'), index=models.Index(fields=['tenant', 'is_active'], name='igny8_sites_tenant__e0f31d_idx'),
), ),
migrations.AddIndex( migrations.AddIndex(
model_name='site', model_name='site',
index=models.Index(fields=['account', 'status'], name='igny8_sites_tenant__a20275_idx'), index=models.Index(fields=['tenant', 'status'], name='igny8_sites_tenant__a20275_idx'),
),
migrations.AddIndex(
model_name='site',
index=models.Index(fields=['industry'], name='igny8_sites_industr_66e004_idx'),
),
migrations.AddIndex(
model_name='site',
index=models.Index(fields=['site_type'], name='igny8_sites_site_ty_0dfbc3_idx'),
),
migrations.AddIndex(
model_name='site',
index=models.Index(fields=['hosting_type'], name='igny8_sites_hosting_c484c2_idx'),
), ),
migrations.AlterUniqueTogether( migrations.AlterUniqueTogether(
name='site', name='site',
unique_together={('account', 'slug')}, unique_together={('tenant', 'slug')},
), ),
migrations.AddIndex( migrations.AddIndex(
model_name='sector', model_name='sector',
@@ -341,26 +188,18 @@ class Migration(migrations.Migration):
), ),
migrations.AddIndex( migrations.AddIndex(
model_name='sector', model_name='sector',
index=models.Index(fields=['account', 'site'], name='igny8_secto_tenant__af54ae_idx'), index=models.Index(fields=['tenant', 'site'], name='igny8_secto_tenant__af54ae_idx'),
),
migrations.AddIndex(
model_name='sector',
index=models.Index(fields=['industry_sector'], name='igny8_secto_industr_1cf990_idx'),
), ),
migrations.AlterUniqueTogether( migrations.AlterUniqueTogether(
name='sector', name='sector',
unique_together={('site', 'slug')}, unique_together={('site', 'slug')},
), ),
migrations.AddIndex( migrations.AddIndex(
model_name='siteuseraccess', model_name='user',
index=models.Index(fields=['user', 'site'], name='igny8_site__user_id_61951e_idx'), index=models.Index(fields=['tenant', 'role'], name='igny8_users_tenant__0ab02b_idx'),
),
migrations.AlterUniqueTogether(
name='siteuseraccess',
unique_together={('user', 'site')},
), ),
migrations.AddIndex( migrations.AddIndex(
model_name='subscription', model_name='user',
index=models.Index(fields=['status'], name='igny8_subsc_status_2fa897_idx'), index=models.Index(fields=['email'], name='igny8_users_email_fd61ff_idx'),
), ),
] ]

View File

@@ -0,0 +1,13 @@
# Generated by Django 5.2.7 on 2025-11-02 22:27
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0001_initial'),
]
operations = [
]

View File

@@ -1,19 +0,0 @@
# Generated manually for adding wp_api_key to Site model
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='site',
name='wp_api_key',
field=models.CharField(blank=True, help_text='API key for WordPress integration via IGNY8 WP Bridge plugin', max_length=255, null=True),
),
]

View File

@@ -1,17 +0,0 @@
# Generated by Django 5.2.8 on 2025-12-01 00:05
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0002_add_wp_api_key_to_site'),
]
operations = [
migrations.AlterModelOptions(
name='seedkeyword',
options={'ordering': ['keyword'], 'verbose_name': 'Seed Keyword', 'verbose_name_plural': 'Global Keywords Database'},
),
]

View File

@@ -0,0 +1,18 @@
# Generated by Django 5.2.7 on 2025-11-03 13:22
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0002_add_developer_role'),
]
operations = [
migrations.AlterField(
model_name='user',
name='role',
field=models.CharField(choices=[('developer', 'Developer / Super Admin'), ('owner', 'Owner'), ('admin', 'Admin'), ('editor', 'Editor'), ('viewer', 'Viewer'), ('system_bot', 'System Bot')], default='viewer', max_length=20),
),
]

View File

@@ -0,0 +1,75 @@
# Generated migration for Industry and IndustrySector models
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0003_alter_user_role'),
]
operations = [
migrations.CreateModel(
name='Industry',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255, unique=True)),
('slug', models.SlugField(db_index=True, max_length=255, unique=True)),
('description', models.TextField(blank=True, null=True)),
('is_active', models.BooleanField(db_index=True, default=True)),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
],
options={
'db_table': 'igny8_industries',
'ordering': ['name'],
},
),
migrations.CreateModel(
name='IndustrySector',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255)),
('slug', models.SlugField(db_index=True, max_length=255)),
('description', models.TextField(blank=True, null=True)),
('suggested_keywords', models.JSONField(default=list, help_text='List of suggested keywords for this sector template')),
('is_active', models.BooleanField(db_index=True, default=True)),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('industry', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='sectors', to='igny8_core_auth.industry')),
],
options={
'db_table': 'igny8_industry_sectors',
'ordering': ['industry', 'name'],
'unique_together': {('industry', 'slug')},
},
),
migrations.AddField(
model_name='sector',
name='industry_sector',
field=models.ForeignKey(blank=True, help_text='Reference to the industry sector template', null=True, on_delete=django.db.models.deletion.PROTECT, related_name='site_sectors', to='igny8_core_auth.industrysector'),
),
migrations.AddIndex(
model_name='industry',
index=models.Index(fields=['slug'], name='igny8_indu_slug_idx'),
),
migrations.AddIndex(
model_name='industry',
index=models.Index(fields=['is_active'], name='igny8_indu_is_acti_idx'),
),
migrations.AddIndex(
model_name='industrysector',
index=models.Index(fields=['industry', 'is_active'], name='igny8_indu_industr_idx'),
),
migrations.AddIndex(
model_name='industrysector',
index=models.Index(fields=['slug'], name='igny8_indu_slug_1_idx'),
),
migrations.AddIndex(
model_name='sector',
index=models.Index(fields=['industry_sector'], name='igny8_sect_industr_idx'),
),
]

View File

@@ -1,53 +0,0 @@
# Generated by Django 5.2.8 on 2025-12-04 23:35
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0003_add_sync_event_model'),
]
operations = [
migrations.AddField(
model_name='account',
name='billing_address_line1',
field=models.CharField(blank=True, help_text='Street address', max_length=255),
),
migrations.AddField(
model_name='account',
name='billing_address_line2',
field=models.CharField(blank=True, help_text='Apt, suite, etc.', max_length=255),
),
migrations.AddField(
model_name='account',
name='billing_city',
field=models.CharField(blank=True, max_length=100),
),
migrations.AddField(
model_name='account',
name='billing_country',
field=models.CharField(blank=True, help_text='ISO 2-letter country code', max_length=2),
),
migrations.AddField(
model_name='account',
name='billing_email',
field=models.EmailField(blank=True, help_text='Email for billing notifications', max_length=254, null=True),
),
migrations.AddField(
model_name='account',
name='billing_postal_code',
field=models.CharField(blank=True, max_length=20),
),
migrations.AddField(
model_name='account',
name='billing_state',
field=models.CharField(blank=True, help_text='State/Province/Region', max_length=100),
),
migrations.AddField(
model_name='account',
name='tax_id',
field=models.CharField(blank=True, help_text='VAT/Tax ID number', max_length=100),
),
]

View File

@@ -1,23 +0,0 @@
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0004_add_invoice_payment_models'),
]
operations = [
migrations.AlterField(
model_name='account',
name='owner',
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name='owned_accounts',
to='igny8_core_auth.user',
),
),
]

View File

@@ -0,0 +1,31 @@
# Migration to add industry field to Site model
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0004_add_industry_models'),
]
operations = [
migrations.AddField(
model_name='site',
name='industry',
field=models.ForeignKey(
blank=True,
help_text='Industry this site belongs to',
null=True,
on_delete=django.db.models.deletion.PROTECT,
related_name='sites',
to='igny8_core_auth.industry'
),
),
migrations.AddIndex(
model_name='site',
index=models.Index(fields=['industry'], name='igny8_site_industr_idx'),
),
]

View File

@@ -1,93 +0,0 @@
from django.db import migrations, models
import django.db.models.deletion
from django.core.validators import MinValueValidator, MaxValueValidator
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0005_account_owner_nullable'),
]
operations = [
migrations.AddField(
model_name='account',
name='delete_reason',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AddField(
model_name='account',
name='deleted_at',
field=models.DateTimeField(blank=True, db_index=True, null=True),
),
migrations.AddField(
model_name='account',
name='deleted_by',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='igny8_core_auth.user'),
),
migrations.AddField(
model_name='account',
name='deletion_retention_days',
field=models.PositiveIntegerField(default=14, help_text='Retention window (days) before soft-deleted items are purged', validators=[MinValueValidator(1), MaxValueValidator(365)]),
),
migrations.AddField(
model_name='account',
name='is_deleted',
field=models.BooleanField(db_index=True, default=False),
),
migrations.AddField(
model_name='account',
name='restore_until',
field=models.DateTimeField(blank=True, db_index=True, null=True),
),
migrations.AddField(
model_name='sector',
name='delete_reason',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AddField(
model_name='sector',
name='deleted_at',
field=models.DateTimeField(blank=True, db_index=True, null=True),
),
migrations.AddField(
model_name='sector',
name='deleted_by',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='igny8_core_auth.user'),
),
migrations.AddField(
model_name='sector',
name='is_deleted',
field=models.BooleanField(db_index=True, default=False),
),
migrations.AddField(
model_name='sector',
name='restore_until',
field=models.DateTimeField(blank=True, db_index=True, null=True),
),
migrations.AddField(
model_name='site',
name='delete_reason',
field=models.CharField(blank=True, max_length=255, null=True),
),
migrations.AddField(
model_name='site',
name='deleted_at',
field=models.DateTimeField(blank=True, db_index=True, null=True),
),
migrations.AddField(
model_name='site',
name='deleted_by',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='igny8_core_auth.user'),
),
migrations.AddField(
model_name='site',
name='is_deleted',
field=models.BooleanField(db_index=True, default=False),
),
migrations.AddField(
model_name='site',
name='restore_until',
field=models.DateTimeField(blank=True, db_index=True, null=True),
),
]

View File

@@ -0,0 +1,151 @@
"""Add extended plan configuration fields"""
from decimal import Decimal
from django.core.validators import MinValueValidator
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0006_add_industry_to_site'),
]
operations = [
migrations.AddField(
model_name='plan',
name='ai_cost_per_request',
field=models.JSONField(default=dict, help_text="Cost per request type (e.g., {'cluster': 2, 'idea': 3, 'content': 5, 'image': 1})"),
),
migrations.AddField(
model_name='plan',
name='allow_credit_topup',
field=models.BooleanField(default=True, help_text='Can user purchase more credits?'),
),
migrations.AddField(
model_name='plan',
name='billing_cycle',
field=models.CharField(choices=[('monthly', 'Monthly'), ('annual', 'Annual')], default='monthly', max_length=20),
),
migrations.AddField(
model_name='plan',
name='daily_ai_request_limit',
field=models.IntegerField(default=100, help_text='Global daily AI request cap', validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='daily_ai_requests',
field=models.IntegerField(default=50, help_text='Total AI executions (content + idea + image) allowed per day', validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='daily_cluster_limit',
field=models.IntegerField(default=10, help_text='Max clusters that can be created per day', validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='daily_content_tasks',
field=models.IntegerField(default=10, help_text='Max number of content tasks (blogs) per day', validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='daily_keyword_import_limit',
field=models.IntegerField(default=100, help_text='SeedKeywords import limit per day', validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='extra_credit_price',
field=models.DecimalField(decimal_places=2, default=Decimal('0.01'), help_text='Price per additional credit', max_digits=10),
),
migrations.AddField(
model_name='plan',
name='image_model_choices',
field=models.JSONField(default=list, help_text="Allowed image models (e.g., ['dalle3', 'hidream'])"),
),
migrations.AddField(
model_name='plan',
name='included_credits',
field=models.IntegerField(default=0, help_text='Monthly credits included', validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='max_author_profiles',
field=models.IntegerField(default=5, help_text='Limit for saved writing styles', validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='max_clusters',
field=models.IntegerField(default=100, help_text='Total clusters allowed (global)', validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='max_images_per_task',
field=models.IntegerField(default=4, help_text='Max images per content task', validators=[MinValueValidator(1)]),
),
migrations.AddField(
model_name='plan',
name='max_industries',
field=models.IntegerField(blank=True, default=None, help_text='Optional limit for industries/sectors', null=True, validators=[MinValueValidator(1)]),
),
migrations.AddField(
model_name='plan',
name='max_keywords',
field=models.IntegerField(default=1000, help_text='Total keywords allowed (global limit)', validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='max_users',
field=models.IntegerField(default=1, help_text='Total users allowed per account', validators=[MinValueValidator(1)]),
),
migrations.AddField(
model_name='plan',
name='monthly_ai_credit_limit',
field=models.IntegerField(default=500, help_text='Unified credit ceiling per month (all AI functions)', validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='monthly_cluster_ai_credits',
field=models.IntegerField(default=50, help_text='AI credits allocated for clustering', validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='monthly_content_ai_credits',
field=models.IntegerField(default=200, help_text='AI credit pool for content generation', validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='monthly_image_ai_credits',
field=models.IntegerField(default=100, help_text='AI credit pool for image generation', validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='monthly_image_count',
field=models.IntegerField(default=100, help_text='Max images per month', validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='monthly_word_count_limit',
field=models.IntegerField(default=50000, help_text='Monthly word limit (for generated content)', validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='auto_credit_topup_threshold',
field=models.IntegerField(blank=True, default=None, help_text='Auto top-up trigger point (optional)', null=True, validators=[MinValueValidator(0)]),
),
migrations.AddField(
model_name='plan',
name='auto_credit_topup_amount',
field=models.IntegerField(blank=True, default=None, help_text='How many credits to auto-buy', null=True, validators=[MinValueValidator(1)]),
),
migrations.AddField(
model_name='plan',
name='stripe_product_id',
field=models.CharField(blank=True, help_text='For Stripe plan sync', max_length=255, null=True),
),
migrations.AlterField(
model_name='plan',
name='features',
field=models.JSONField(default=list, help_text="Plan features as JSON array (e.g., ['ai_writer', 'image_gen', 'auto_publish'])"),
),
]
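The `ai_cost_per_request` field above stores per-operation credit costs as JSON in the shape its help_text shows (`{'cluster': 2, 'idea': 3, 'content': 5, 'image': 1}`). A minimal sketch of how such a mapping could be resolved against defaults (here `DEFAULT_COSTS` and `get_cost` are illustrative names, not the actual CreditService API):

```python
# Default per-operation credit costs, mirroring the help_text example
# on ai_cost_per_request. The values below are illustrative assumptions.
DEFAULT_COSTS = {"cluster": 2, "idea": 3, "content": 5, "image": 1}

def get_cost(plan_costs, operation_type):
    """Resolve a credit cost: plan-specific JSON overrides the defaults."""
    merged = {**DEFAULT_COSTS, **(plan_costs or {})}
    return merged.get(operation_type, 0)
```

A plan whose `ai_cost_per_request` is `{'content': 7}` would then charge 7 credits per content task while inheriting the defaults for the other operations.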

View File

@@ -0,0 +1,108 @@
# Generated by Django 5.2.8 on 2025-11-07 10:06
import django.core.validators
import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0007_expand_plan_limits'),
]
operations = [
migrations.CreateModel(
name='PasswordResetToken',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('token', models.CharField(db_index=True, max_length=255, unique=True)),
('expires_at', models.DateTimeField()),
('used', models.BooleanField(default=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
],
options={
'db_table': 'igny8_password_reset_tokens',
'ordering': ['-created_at'],
},
),
migrations.AlterModelOptions(
name='industry',
options={'ordering': ['name'], 'verbose_name': 'Industry', 'verbose_name_plural': 'Industries'},
),
migrations.AlterModelOptions(
name='industrysector',
options={'ordering': ['industry', 'name'], 'verbose_name': 'Industry Sector', 'verbose_name_plural': 'Industry Sectors'},
),
migrations.AlterModelOptions(
name='site',
options={'ordering': ['-created_at']},
),
migrations.AlterModelOptions(
name='siteuseraccess',
options={'verbose_name': 'Site User Access', 'verbose_name_plural': 'Site User Access'},
),
migrations.RenameIndex(
model_name='industry',
new_name='igny8_indus_slug_2f8769_idx',
old_name='igny8_indu_slug_idx',
),
migrations.RenameIndex(
model_name='industry',
new_name='igny8_indus_is_acti_146d41_idx',
old_name='igny8_indu_is_acti_idx',
),
migrations.RenameIndex(
model_name='industrysector',
new_name='igny8_indus_industr_00b524_idx',
old_name='igny8_indu_industr_idx',
),
migrations.RenameIndex(
model_name='industrysector',
new_name='igny8_indus_slug_101d63_idx',
old_name='igny8_indu_slug_1_idx',
),
migrations.RenameIndex(
model_name='sector',
new_name='igny8_secto_industr_1cf990_idx',
old_name='igny8_sect_industr_idx',
),
migrations.RenameIndex(
model_name='site',
new_name='igny8_sites_industr_66e004_idx',
old_name='igny8_site_industr_idx',
),
migrations.AlterField(
model_name='plan',
name='credits_per_month',
field=models.IntegerField(default=0, help_text='DEPRECATED: Use included_credits instead', validators=[django.core.validators.MinValueValidator(0)]),
),
migrations.AlterField(
model_name='plan',
name='extra_credit_price',
field=models.DecimalField(decimal_places=2, default=0.01, help_text='Price per additional credit', max_digits=10),
),
migrations.AlterField(
model_name='plan',
name='stripe_price_id',
field=models.CharField(blank=True, help_text='Monthly price ID for Stripe', max_length=255, null=True),
),
migrations.AddField(
model_name='passwordresettoken',
name='user',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='password_reset_tokens', to=settings.AUTH_USER_MODEL),
),
migrations.AddIndex(
model_name='passwordresettoken',
index=models.Index(fields=['token'], name='igny8_passw_token_0eaf0c_idx'),
),
migrations.AddIndex(
model_name='passwordresettoken',
index=models.Index(fields=['user', 'used'], name='igny8_passw_user_id_320c02_idx'),
),
migrations.AddIndex(
model_name='passwordresettoken',
index=models.Index(fields=['expires_at'], name='igny8_passw_expires_c9aa03_idx'),
),
]

View File

@@ -0,0 +1,88 @@
from django.db import migrations


def forward_fix_admin_log_fk(apps, schema_editor):
    if schema_editor.connection.vendor != "postgresql":
        return
    schema_editor.execute(
        """
        ALTER TABLE django_admin_log
        DROP CONSTRAINT IF EXISTS django_admin_log_user_id_c564eba6_fk_auth_user_id;
        """
    )
    schema_editor.execute(
        """
        UPDATE django_admin_log
        SET user_id = sub.new_user_id
        FROM (
            SELECT id AS new_user_id
            FROM igny8_users
            ORDER BY id
            LIMIT 1
        ) AS sub
        WHERE django_admin_log.user_id NOT IN (
            SELECT id FROM igny8_users
        );
        """
    )
    schema_editor.execute(
        """
        DO $$
        BEGIN
            IF NOT EXISTS (
                SELECT 1 FROM pg_constraint
                WHERE conname = 'django_admin_log_user_id_c564eba6_fk_igny8_users_id'
            ) THEN
                ALTER TABLE django_admin_log
                ADD CONSTRAINT django_admin_log_user_id_c564eba6_fk_igny8_users_id
                FOREIGN KEY (user_id) REFERENCES igny8_users(id) DEFERRABLE INITIALLY DEFERRED;
            END IF;
        END $$;
        """
    )


def reverse_fix_admin_log_fk(apps, schema_editor):
    if schema_editor.connection.vendor != "postgresql":
        return
    schema_editor.execute(
        """
        ALTER TABLE django_admin_log
        DROP CONSTRAINT IF EXISTS django_admin_log_user_id_c564eba6_fk_igny8_users_id;
        """
    )
    schema_editor.execute(
        """
        UPDATE django_admin_log
        SET user_id = sub.old_user_id
        FROM (
            SELECT id AS old_user_id
            FROM auth_user
            ORDER BY id
            LIMIT 1
        ) AS sub
        WHERE django_admin_log.user_id NOT IN (
            SELECT id FROM auth_user
        );
        """
    )
    schema_editor.execute(
        """
        ALTER TABLE django_admin_log
        ADD CONSTRAINT django_admin_log_user_id_c564eba6_fk_auth_user_id
        FOREIGN KEY (user_id) REFERENCES auth_user(id) DEFERRABLE INITIALLY DEFERRED;
        """
    )


class Migration(migrations.Migration):

    dependencies = [
        ("igny8_core_auth", "0008_passwordresettoken_alter_industry_options_and_more"),
    ]

    operations = [
        migrations.RunPython(forward_fix_admin_log_fk, reverse_fix_admin_log_fk),
    ]
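The forward migration's second `UPDATE` re-points any `django_admin_log` row whose `user_id` no longer exists in `igny8_users` at the lowest surviving id (`ORDER BY id LIMIT 1`). A framework-free sketch of that remap rule — function name and list-based shape are illustrative, not part of the codebase:

```python
def remap_orphan_user_ids(log_user_ids, valid_user_ids):
    """Return log_user_ids with orphaned ids replaced by the smallest valid id."""
    valid = set(valid_user_ids)
    if not valid:
        raise ValueError("no valid users to remap onto")
    fallback = min(valid)  # mirrors the SQL's ORDER BY id LIMIT 1
    return [uid if uid in valid else fallback for uid in log_user_ids]
```

Note this is lossy by design: after the forward migration, orphaned log entries all attribute to one user, which is why the reverse migration can only remap the same way in the other direction.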

View File

@@ -0,0 +1,38 @@
# Generated by Django 5.2.8 on 2025-11-07 11:34

import django.core.validators
import django.db.models.deletion
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('igny8_core_auth', '0009_fix_admin_log_user_fk'),
    ]

    operations = [
        migrations.CreateModel(
            name='SeedKeyword',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('keyword', models.CharField(db_index=True, max_length=255)),
                ('volume', models.IntegerField(default=0, help_text='Search volume estimate')),
                ('difficulty', models.IntegerField(default=0, help_text='Keyword difficulty (0-100)', validators=[django.core.validators.MinValueValidator(0), django.core.validators.MaxValueValidator(100)])),
                ('intent', models.CharField(choices=[('informational', 'Informational'), ('navigational', 'Navigational'), ('commercial', 'Commercial'), ('transactional', 'Transactional')], default='informational', max_length=50)),
                ('is_active', models.BooleanField(db_index=True, default=True)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
                ('industry', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='seed_keywords', to='igny8_core_auth.industry')),
                ('sector', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='seed_keywords', to='igny8_core_auth.industrysector')),
            ],
            options={
                'verbose_name': 'Seed Keyword',
                'verbose_name_plural': 'Seed Keywords',
                'db_table': 'igny8_seed_keywords',
                'ordering': ['keyword'],
                'indexes': [models.Index(fields=['keyword'], name='igny8_seed__keyword_efa089_idx'), models.Index(fields=['industry', 'sector'], name='igny8_seed__industr_c41841_idx'), models.Index(fields=['industry', 'sector', 'is_active'], name='igny8_seed__industr_da0030_idx'), models.Index(fields=['intent'], name='igny8_seed__intent_15020d_idx')],
                'unique_together': {('keyword', 'industry', 'sector')},
            },
        ),
    ]
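The `unique_together` on `('keyword', 'industry', 'sector')` means the same keyword text may legitimately repeat across different industry/sector pairs, but never within one. A minimal in-memory sketch of that rule (helper name and row shape are illustrative, not from the repo):

```python
def dedupe_seed_keywords(rows):
    """Keep the first occurrence of each (keyword, industry_id, sector_id) triple."""
    seen, result = set(), []
    for row in rows:
        key = (row["keyword"], row["industry_id"], row["sector_id"])
        if key in seen:
            continue  # duplicate within the same industry/sector pair
        seen.add(key)
        result.append(row)
    return result
```

Pre-deduplicating an import batch like this avoids tripping the database's `IntegrityError` on the unique constraint.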

View File

@@ -0,0 +1,29 @@
# Generated by Django 5.2.7 on 2025-11-07 11:45

import django.core.validators
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('igny8_core_auth', '0010_add_seed_keyword'),
    ]

    operations = [
        migrations.AddField(
            model_name='plan',
            name='daily_image_generation_limit',
            field=models.IntegerField(default=25, help_text='Max images that can be generated per day', validators=[django.core.validators.MinValueValidator(0)]),
        ),
        migrations.AddField(
            model_name='plan',
            name='max_content_ideas',
            field=models.IntegerField(default=300, help_text='Total content ideas allowed (global limit)', validators=[django.core.validators.MinValueValidator(0)]),
        ),
        migrations.AlterField(
            model_name='plan',
            name='max_sites',
            field=models.IntegerField(default=1, help_text='Maximum number of sites allowed', validators=[django.core.validators.MinValueValidator(1)]),
        ),
    ]

View File

@@ -0,0 +1,28 @@
# Generated by Django 5.2.7 on 2025-11-07 11:56

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('igny8_core_auth', '0011_add_plan_fields_and_fix_constraints'),
    ]

    operations = [
        migrations.AlterField(
            model_name='plan',
            name='ai_cost_per_request',
            field=models.JSONField(blank=True, default=dict, help_text="Cost per request type (e.g., {'cluster': 2, 'idea': 3, 'content': 5, 'image': 1})"),
        ),
        migrations.AlterField(
            model_name='plan',
            name='features',
            field=models.JSONField(blank=True, default=list, help_text="Plan features as JSON array (e.g., ['ai_writer', 'image_gen', 'auto_publish'])"),
        ),
        migrations.AlterField(
            model_name='plan',
            name='image_model_choices',
            field=models.JSONField(blank=True, default=list, help_text="Allowed image models (e.g., ['dalle3', 'hidream'])"),
        ),
    ]

View File

@@ -0,0 +1,17 @@
# Generated by Django 5.2.7 on 2025-11-07 12:01

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('igny8_core_auth', '0012_allow_blank_json_fields'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='plan',
            name='ai_cost_per_request',
        ),
    ]

View File

@@ -5,7 +5,6 @@ from django.db import models
 from django.contrib.auth.models import AbstractUser
 from django.utils.translation import gettext_lazy as _
 from django.core.validators import MinValueValidator, MaxValueValidator
-from igny8_core.common.soft_delete import SoftDeletableModel, SoftDeleteManager


 class AccountBaseModel(models.Model):
@@ -53,7 +52,7 @@ class SiteSectorBaseModel(AccountBaseModel):
         super().save(*args, **kwargs)


-class Account(SoftDeletableModel):
+class Account(models.Model):
     """
     Account/Organization model for multi-account support.
     """
@@ -66,33 +65,11 @@ class Account(SoftDeletableModel):
     name = models.CharField(max_length=255)
     slug = models.SlugField(unique=True, max_length=255)
-    owner = models.ForeignKey(
-        'igny8_core_auth.User',
-        on_delete=models.SET_NULL,
-        null=True,
-        blank=True,
-        related_name='owned_accounts',
-    )
+    owner = models.ForeignKey('igny8_core_auth.User', on_delete=models.PROTECT, related_name='owned_accounts')
     stripe_customer_id = models.CharField(max_length=255, blank=True, null=True)
     plan = models.ForeignKey('igny8_core_auth.Plan', on_delete=models.PROTECT, related_name='accounts')
     credits = models.IntegerField(default=0, validators=[MinValueValidator(0)])
     status = models.CharField(max_length=20, choices=STATUS_CHOICES, default='trial')
-    deletion_retention_days = models.PositiveIntegerField(
-        default=14,
-        validators=[MinValueValidator(1), MaxValueValidator(365)],
-        help_text="Retention window (days) before soft-deleted items are purged",
-    )
-
-    # Billing information
-    billing_email = models.EmailField(blank=True, null=True, help_text="Email for billing notifications")
-    billing_address_line1 = models.CharField(max_length=255, blank=True, help_text="Street address")
-    billing_address_line2 = models.CharField(max_length=255, blank=True, help_text="Apt, suite, etc.")
-    billing_city = models.CharField(max_length=100, blank=True)
-    billing_state = models.CharField(max_length=100, blank=True, help_text="State/Province/Region")
-    billing_postal_code = models.CharField(max_length=20, blank=True)
-    billing_country = models.CharField(max_length=2, blank=True, help_text="ISO 2-letter country code")
-    tax_id = models.CharField(max_length=100, blank=True, help_text="VAT/Tax ID number")
     created_at = models.DateTimeField(auto_now_add=True)
     updated_at = models.DateTimeField(auto_now=True)
@@ -105,9 +82,6 @@ class Account(SoftDeletableModel):
             models.Index(fields=['status']),
         ]

-    objects = SoftDeleteManager()
-    all_objects = models.Manager()

     def __str__(self):
         return self.name
@@ -116,20 +90,11 @@ class Account(SoftDeletableModel):
         # System accounts bypass all filtering restrictions
         return self.slug in ['aws-admin', 'default-account', 'default']

-    def soft_delete(self, user=None, reason=None, retention_days=None):
-        if self.is_system_account():
-            from django.core.exceptions import PermissionDenied
-            raise PermissionDenied("System account cannot be deleted.")
-        return super().soft_delete(user=user, reason=reason, retention_days=retention_days)
-
-    def delete(self, using=None, keep_parents=False):
-        return self.soft_delete()


 class Plan(models.Model):
     """
-    Subscription plan model - Phase 0: Credit-only system.
-    Plans define credits, billing, and account management limits only.
+    Subscription plan model with comprehensive limits and features.
+    Plans define limits for users, sites, content generation, AI usage, and billing.
     """
     BILLING_CYCLE_CHOICES = [
         ('monthly', 'Monthly'),
@@ -145,7 +110,7 @@ class Plan(models.Model):
     is_active = models.BooleanField(default=True)
     created_at = models.DateTimeField(auto_now_add=True)

-    # Account Management Limits (kept - not operation limits)
+    # User / Site / Scope Limits
     max_users = models.IntegerField(default=1, validators=[MinValueValidator(1)], help_text="Total users allowed per account")
     max_sites = models.IntegerField(
         default=1,
@@ -155,7 +120,32 @@ class Plan(models.Model):
     max_industries = models.IntegerField(default=None, null=True, blank=True, validators=[MinValueValidator(1)], help_text="Optional limit for industries/sectors")
     max_author_profiles = models.IntegerField(default=5, validators=[MinValueValidator(0)], help_text="Limit for saved writing styles")

-    # Billing & Credits (Phase 0: Credit-only system)
+    # Planner Limits
+    max_keywords = models.IntegerField(default=1000, validators=[MinValueValidator(0)], help_text="Total keywords allowed (global limit)")
+    max_clusters = models.IntegerField(default=100, validators=[MinValueValidator(0)], help_text="Total clusters allowed (global)")
+    max_content_ideas = models.IntegerField(default=300, validators=[MinValueValidator(0)], help_text="Total content ideas allowed (global limit)")
+    daily_cluster_limit = models.IntegerField(default=10, validators=[MinValueValidator(0)], help_text="Max clusters that can be created per day")
+    daily_keyword_import_limit = models.IntegerField(default=100, validators=[MinValueValidator(0)], help_text="SeedKeywords import limit per day")
+    monthly_cluster_ai_credits = models.IntegerField(default=50, validators=[MinValueValidator(0)], help_text="AI credits allocated for clustering")
+
+    # Writer Limits
+    daily_content_tasks = models.IntegerField(default=10, validators=[MinValueValidator(0)], help_text="Max number of content tasks (blogs) per day")
+    daily_ai_requests = models.IntegerField(default=50, validators=[MinValueValidator(0)], help_text="Total AI executions (content + idea + image) allowed per day")
+    monthly_word_count_limit = models.IntegerField(default=50000, validators=[MinValueValidator(0)], help_text="Monthly word limit (for generated content)")
+    monthly_content_ai_credits = models.IntegerField(default=200, validators=[MinValueValidator(0)], help_text="AI credit pool for content generation")
+
+    # Image Generation Limits
+    monthly_image_count = models.IntegerField(default=100, validators=[MinValueValidator(0)], help_text="Max images per month")
+    daily_image_generation_limit = models.IntegerField(default=25, validators=[MinValueValidator(0)], help_text="Max images that can be generated per day")
+    monthly_image_ai_credits = models.IntegerField(default=100, validators=[MinValueValidator(0)], help_text="AI credit pool for image generation")
+    max_images_per_task = models.IntegerField(default=4, validators=[MinValueValidator(1)], help_text="Max images per content task")
+    image_model_choices = models.JSONField(default=list, blank=True, help_text="Allowed image models (e.g., ['dalle3', 'hidream'])")
+
+    # AI Request Controls
+    daily_ai_request_limit = models.IntegerField(default=100, validators=[MinValueValidator(0)], help_text="Global daily AI request cap")
+    monthly_ai_credit_limit = models.IntegerField(default=500, validators=[MinValueValidator(0)], help_text="Unified credit ceiling per month (all AI functions)")
+
+    # Billing & Add-ons
     included_credits = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Monthly credits included")
     extra_credit_price = models.DecimalField(max_digits=10, decimal_places=2, default=0.01, help_text="Price per additional credit")
     allow_credit_topup = models.BooleanField(default=True, help_text="Can user purchase more credits?")
@@ -220,7 +210,7 @@ class Subscription(models.Model):
-class Site(SoftDeletableModel, AccountBaseModel):
+class Site(AccountBaseModel):
     """
     Site model - Each account can have multiple sites based on their plan.
     Each site belongs to ONE industry and can have 1-5 sectors from that industry.
@@ -248,53 +238,10 @@ class Site(SoftDeletableModel, AccountBaseModel):
     created_at = models.DateTimeField(auto_now_add=True)
     updated_at = models.DateTimeField(auto_now=True)

-    # WordPress integration fields (legacy - use SiteIntegration instead)
-    wp_url = models.URLField(blank=True, null=True, help_text="WordPress site URL (legacy - use SiteIntegration)")
+    # WordPress integration fields
+    wp_url = models.URLField(blank=True, null=True, help_text="WordPress site URL")
     wp_username = models.CharField(max_length=255, blank=True, null=True)
     wp_app_password = models.CharField(max_length=255, blank=True, null=True)
-    wp_api_key = models.CharField(max_length=255, blank=True, null=True, help_text="API key for WordPress integration via IGNY8 WP Bridge plugin")
-
-    # Site type and hosting (Phase 6)
-    SITE_TYPE_CHOICES = [
-        ('marketing', 'Marketing Site'),
-        ('ecommerce', 'Ecommerce Site'),
-        ('blog', 'Blog'),
-        ('portfolio', 'Portfolio'),
-        ('corporate', 'Corporate'),
-    ]
-    HOSTING_TYPE_CHOICES = [
-        ('igny8_sites', 'IGNY8 Sites'),
-        ('wordpress', 'WordPress'),
-        ('shopify', 'Shopify'),
-        ('multi', 'Multi-Destination'),
-    ]
-    site_type = models.CharField(
-        max_length=50,
-        choices=SITE_TYPE_CHOICES,
-        default='marketing',
-        db_index=True,
-        help_text="Type of site"
-    )
-    hosting_type = models.CharField(
-        max_length=50,
-        choices=HOSTING_TYPE_CHOICES,
-        default='igny8_sites',
-        db_index=True,
-        help_text="Target hosting platform"
-    )
-
-    # SEO metadata (Phase 7)
-    seo_metadata = models.JSONField(
-        default=dict,
-        blank=True,
-        help_text="SEO metadata: meta tags, Open Graph, Schema.org"
-    )
-
-    objects = SoftDeleteManager()
-    all_objects = models.Manager()

     class Meta:
         db_table = 'igny8_sites'
@@ -304,8 +251,6 @@ class Site(SoftDeletableModel, AccountBaseModel):
             models.Index(fields=['account', 'is_active']),
             models.Index(fields=['account', 'status']),
             models.Index(fields=['industry']),
-            models.Index(fields=['site_type']),
-            models.Index(fields=['hosting_type']),
         ]

     def __str__(self):
@@ -417,7 +362,7 @@ class SeedKeyword(models.Model):
         db_table = 'igny8_seed_keywords'
         unique_together = [['keyword', 'industry', 'sector']]
         verbose_name = 'Seed Keyword'
-        verbose_name_plural = 'Global Keywords Database'
+        verbose_name_plural = 'Seed Keywords'
         indexes = [
             models.Index(fields=['keyword']),
             models.Index(fields=['industry', 'sector']),
@@ -430,7 +375,7 @@ class SeedKeyword(models.Model):
         return f"{self.keyword} ({self.industry.name} - {self.sector.name})"


-class Sector(SoftDeletableModel, AccountBaseModel):
+class Sector(AccountBaseModel):
     """
     Sector model - Each site can have 1-5 sectors.
     Sectors are site-specific instances that reference an IndustrySector template.
@@ -457,9 +402,6 @@ class Sector(SoftDeletableModel, AccountBaseModel):
     status = models.CharField(max_length=20, choices=STATUS_CHOICES, default='active')
     created_at = models.DateTimeField(auto_now_add=True)
     updated_at = models.DateTimeField(auto_now=True)

-    objects = SoftDeleteManager()
-    all_objects = models.Manager()

     class Meta:
         db_table = 'igny8_sectors'

View File

@@ -11,10 +11,10 @@ class PlanSerializer(serializers.ModelSerializer):
         model = Plan
         fields = [
             'id', 'name', 'slug', 'price', 'billing_cycle', 'features', 'is_active',
-            'max_users', 'max_sites', 'max_industries', 'max_author_profiles',
-            'included_credits', 'extra_credit_price', 'allow_credit_topup',
-            'auto_credit_topup_threshold', 'auto_credit_topup_amount',
-            'stripe_product_id', 'stripe_price_id', 'credits_per_month'
+            'max_users', 'max_sites', 'max_keywords', 'max_clusters', 'max_content_ideas',
+            'monthly_word_count_limit', 'monthly_ai_credit_limit', 'monthly_image_count',
+            'daily_content_tasks', 'daily_ai_request_limit', 'daily_image_generation_limit',
+            'included_credits', 'image_model_choices', 'credits_per_month'
         ]
@@ -68,8 +68,7 @@ class SiteSerializer(serializers.ModelSerializer):
         fields = [
             'id', 'name', 'slug', 'domain', 'description',
             'industry', 'industry_name', 'industry_slug',
-            'is_active', 'status',
-            'site_type', 'hosting_type', 'seo_metadata',
+            'is_active', 'status', 'wp_url', 'wp_username',
             'sectors_count', 'active_sectors_count', 'selected_sectors',
             'can_add_sectors',
             'created_at', 'updated_at'
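Hand-maintained `fields` lists like these drift out of sync with the model as fields are added and removed in the same change set. A tiny guard (illustrative helper, not from the repo) can flag serializer entries the model no longer backs:

```python
def unknown_fields(serializer_fields, model_fields):
    """Return serializer field names with no matching model field or property."""
    return sorted(set(serializer_fields) - set(model_fields))
```

In a real Django test this would compare against `{f.name for f in Model._meta.get_fields()}` plus any declared serializer methods/properties.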

View File

@@ -14,10 +14,8 @@ from .views import (
     SiteUserAccessViewSet, PlanViewSet, SiteViewSet, SectorViewSet,
     IndustryViewSet, SeedKeywordViewSet
 )
-from .serializers import RegisterSerializer, LoginSerializer, ChangePasswordSerializer, UserSerializer, RefreshTokenSerializer
+from .serializers import RegisterSerializer, LoginSerializer, ChangePasswordSerializer, UserSerializer
 from .models import User
-from .utils import generate_access_token, get_token_expiry, decode_token
-import jwt

 router = DefaultRouter()
 # Main structure: Groups, Users, Accounts, Subscriptions, Site User Access
@@ -80,7 +78,7 @@ class LoginView(APIView):
         password = serializer.validated_data['password']

         try:
-            user = User.objects.select_related('account', 'account__plan').get(email=email)
+            user = User.objects.get(email=email)
         except User.DoesNotExist:
             return error_response(
                 error='Invalid credentials',
@@ -109,17 +107,9 @@ class LoginView(APIView):
             user_data = user_serializer.data
         except Exception as e:
             # Fallback if serializer fails (e.g., missing account_id column)
-            # Log the error for debugging but don't fail the login
-            import logging
-            logger = logging.getLogger(__name__)
-            logger.warning(f"UserSerializer failed for user {user.id}: {e}", exc_info=True)
-
-            # Ensure username is properly set (use email prefix if username is empty/default)
-            username = user.username if user.username and user.username != 'user' else user.email.split('@')[0]
             user_data = {
                 'id': user.id,
-                'username': username,
+                'username': user.username,
                 'email': user.email,
                 'role': user.role,
                 'account': None,
@@ -129,10 +119,12 @@ class LoginView(APIView):
         return success_response(
             data={
                 'user': user_data,
-                'access': access_token,
-                'refresh': refresh_token,
-                'access_expires_at': access_expires_at.isoformat(),
-                'refresh_expires_at': refresh_expires_at.isoformat(),
+                'tokens': {
+                    'access': access_token,
+                    'refresh': refresh_token,
+                    'access_expires_at': access_expires_at.isoformat(),
+                    'refresh_expires_at': refresh_expires_at.isoformat(),
+                }
             },
             message='Login successful',
             request=request
@@ -188,84 +180,6 @@ class ChangePasswordView(APIView):
         )


-@extend_schema(
-    tags=['Authentication'],
-    summary='Refresh Token',
-    description='Refresh access token using refresh token'
-)
-class RefreshTokenView(APIView):
-    """Refresh access token endpoint."""
-    permission_classes = [permissions.AllowAny]
-
-    def post(self, request):
-        serializer = RefreshTokenSerializer(data=request.data)
-        if not serializer.is_valid():
-            return error_response(
-                error='Validation failed',
-                errors=serializer.errors,
-                status_code=status.HTTP_400_BAD_REQUEST,
-                request=request
-            )
-
-        refresh_token = serializer.validated_data['refresh']
-
-        try:
-            # Decode and validate refresh token
-            payload = decode_token(refresh_token)
-
-            # Verify it's a refresh token
-            if payload.get('type') != 'refresh':
-                return error_response(
-                    error='Invalid token type',
-                    status_code=status.HTTP_400_BAD_REQUEST,
-                    request=request
-                )
-
-            # Get user
-            user_id = payload.get('user_id')
-            account_id = payload.get('account_id')
-            try:
-                user = User.objects.select_related('account', 'account__plan').get(id=user_id)
-            except User.DoesNotExist:
-                return error_response(
-                    error='User not found',
-                    status_code=status.HTTP_404_NOT_FOUND,
-                    request=request
-                )
-
-            # Get account
-            account = None
-            if account_id:
-                try:
-                    from .models import Account
-                    account = Account.objects.get(id=account_id)
-                except Exception:
-                    pass
-            if not account:
-                account = getattr(user, 'account', None)
-
-            # Generate new access token
-            access_token = generate_access_token(user, account)
-            access_expires_at = get_token_expiry('access')
-
-            return success_response(
-                data={
-                    'access': access_token,
-                    'access_expires_at': access_expires_at.isoformat()
-                },
-                request=request
-            )
-        except jwt.InvalidTokenError:
-            return error_response(
-                error='Invalid or expired refresh token',
-                status_code=status.HTTP_401_UNAUTHORIZED,
-                request=request
-            )


 @extend_schema(exclude=True)  # Exclude from public API documentation - internal authenticated endpoint
 class MeView(APIView):
     """Get current user information."""
@@ -287,7 +201,6 @@ urlpatterns = [
     path('', include(router.urls)),
     path('register/', csrf_exempt(RegisterView.as_view()), name='auth-register'),
     path('login/', csrf_exempt(LoginView.as_view()), name='auth-login'),
-    path('refresh/', csrf_exempt(RefreshTokenView.as_view()), name='auth-refresh'),
     path('change-password/', ChangePasswordView.as_view(), name='auth-change-password'),
     path('me/', MeView.as_view(), name='auth-me'),
 ]
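The login payload changes shape across this diff: token fields move from the top level of `data` into a nested `tokens` object (and the `/refresh/` route disappears). A client that must survive both shapes during a rollout could normalise like this — key names are taken from the diff, the helper name is invented:

```python
def extract_tokens(data: dict) -> dict:
    """Accept either the flat (old) or nested (new) login response `data`."""
    tokens = data.get("tokens", data)  # prefer the nested shape when present
    return {
        "access": tokens["access"],
        "refresh": tokens["refresh"],
        "access_expires_at": tokens["access_expires_at"],
        "refresh_expires_at": tokens["refresh_expires_at"],
    }
```

Normalising at the client boundary keeps the rest of the client code agnostic to which server version answered.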

View File

@@ -478,25 +478,15 @@ class SiteViewSet(AccountModelViewSet):
     def get_permissions(self):
         """Allow normal users (viewer) to create sites, but require editor+ for other operations."""
-        # Allow public read access for list requests with slug filter (used by Sites Renderer)
-        if self.action == 'list' and self.request.query_params.get('slug'):
-            from rest_framework.permissions import AllowAny
-            return [AllowAny()]
         if self.action == 'create':
             return [permissions.IsAuthenticated()]
         return [IsEditorOrAbove()]

     def get_queryset(self):
         """Return sites accessible to the current user."""
-        # If this is a public request (no auth) with slug filter, return site by slug
-        if not self.request.user or not self.request.user.is_authenticated:
-            slug = self.request.query_params.get('slug')
-            if slug:
-                # Return queryset directly from model (bypassing base class account filtering)
-                return Site.objects.filter(slug=slug, is_active=True)
-            return Site.objects.none()
-
         user = self.request.user
+        if not user or not user.is_authenticated:
+            return Site.objects.none()

         # ADMIN/DEV OVERRIDE: Both admins and developers can see all sites
         if user.is_admin_or_developer():
@@ -838,133 +828,14 @@ class SeedKeywordViewSet(viewsets.ReadOnlyModelViewSet):
         """Filter by industry and sector if provided."""
         queryset = super().get_queryset()
         industry_id = self.request.query_params.get('industry_id')
-        industry_name = self.request.query_params.get('industry_name')
         sector_id = self.request.query_params.get('sector_id')
-        sector_name = self.request.query_params.get('sector_name')

         if industry_id:
             queryset = queryset.filter(industry_id=industry_id)
-        if industry_name:
-            queryset = queryset.filter(industry__name__icontains=industry_name)
         if sector_id:
             queryset = queryset.filter(sector_id=sector_id)
-        if sector_name:
-            queryset = queryset.filter(sector__name__icontains=sector_name)

         return queryset

-    @action(detail=False, methods=['post'], url_path='import_seed_keywords', url_name='import_seed_keywords')
-    def import_seed_keywords(self, request):
-        """
-        Import seed keywords from CSV (Admin/Superuser only).
-        Expected columns: keyword, industry_name, sector_name, volume, difficulty, intent
-        """
-        import csv
-        from django.db import transaction
-
-        # Check admin/superuser permission
-        if not (request.user.is_staff or request.user.is_superuser):
-            return error_response(
-                error='Admin or superuser access required',
-                status_code=status.HTTP_403_FORBIDDEN,
-                request=request
-            )
-
-        if 'file' not in request.FILES:
-            return error_response(
-                error='No file provided',
-                status_code=status.HTTP_400_BAD_REQUEST,
-                request=request
-            )
-
-        file = request.FILES['file']
-        if not file.name.endswith('.csv'):
-            return error_response(
-                error='File must be a CSV',
-                status_code=status.HTTP_400_BAD_REQUEST,
-                request=request
-            )
-
-        try:
-            # Parse CSV
-            decoded_file = file.read().decode('utf-8')
-            csv_reader = csv.DictReader(decoded_file.splitlines())
-
-            imported_count = 0
-            skipped_count = 0
-            errors = []
-
-            with transaction.atomic():
-                for row_num, row in enumerate(csv_reader, start=2):  # Start at 2 (header is row 1)
-                    try:
-                        keyword_text = row.get('keyword', '').strip()
-                        industry_name = row.get('industry_name', '').strip()
-                        sector_name = row.get('sector_name', '').strip()
-
-                        if not all([keyword_text, industry_name, sector_name]):
-                            skipped_count += 1
-                            continue
-
-                        # Get or create industry
-                        industry = Industry.objects.filter(name=industry_name).first()
-                        if not industry:
-                            errors.append(f"Row {row_num}: Industry '{industry_name}' not found")
-                            skipped_count += 1
-                            continue
-
-                        # Get or create industry sector
-                        sector = IndustrySector.objects.filter(
-                            industry=industry,
-                            name=sector_name
-                        ).first()
-                        if not sector:
-                            errors.append(f"Row {row_num}: Sector '{sector_name}' not found for industry '{industry_name}'")
-                            skipped_count += 1
-                            continue
-
-                        # Check if keyword already exists
-                        existing = SeedKeyword.objects.filter(
-                            keyword=keyword_text,
-                            industry=industry,
-                            sector=sector
-                        ).first()
-                        if existing:
-                            skipped_count += 1
-                            continue
-
-                        # Create seed keyword
-                        SeedKeyword.objects.create(
-                            keyword=keyword_text,
-                            industry=industry,
-                            sector=sector,
-                            volume=int(row.get('volume', 0) or 0),
-                            difficulty=int(row.get('difficulty', 0) or 0),
-                            intent=row.get('intent', 'informational') or 'informational',
-                            is_active=True
-                        )
-                        imported_count += 1
-
-                    except Exception as e:
-                        errors.append(f"Row {row_num}: {str(e)}")
-                        skipped_count += 1
-
-            return success_response(
-                data={
-                    'imported': imported_count,
-                    'skipped': skipped_count,
-                    'errors': errors[:10] if errors else []  # Limit errors to first 10
-                },
-                message=f'Import completed: {imported_count} keywords imported, {skipped_count} skipped',
-                request=request
-            )
-
-        except Exception as e:
-            return error_response(
-                error=f'Failed to import keywords: {str(e)}',
-                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
-                request=request
-            )


 # ============================================================================
@@ -1045,28 +916,13 @@ class AuthViewSet(viewsets.GenericViewSet):
         )

         if user.check_password(password):
-            # Ensure user has an account
-            account = getattr(user, 'account', None)
-            if account is None:
-                return error_response(
-                    error='Account not configured for this user. Please contact support.',
-                    status_code=status.HTTP_403_FORBIDDEN,
-                    request=request,
-                )
-
-            # Ensure account has an active plan
-            plan = getattr(account, 'plan', None)
-            if plan is None or getattr(plan, 'is_active', False) is False:
-                return error_response(
-                    error='Active subscription required. Visit igny8.com/pricing to subscribe.',
-                    status_code=status.HTTP_402_PAYMENT_REQUIRED,
-                    request=request,
-                )
-
             # Log the user in (create session for session authentication)
             from django.contrib.auth import login
             login(request, user)

+            # Get account from user
+            account = getattr(user, 'account', None)
+
             # Generate JWT tokens
             access_token = generate_access_token(user, account)
             refresh_token = generate_refresh_token(user, account)
@@ -1077,10 +933,12 @@ class AuthViewSet(viewsets.GenericViewSet):
return success_response( return success_response(
data={ data={
'user': user_serializer.data, 'user': user_serializer.data,
'access': access_token, 'tokens': {
'refresh': refresh_token, 'access': access_token,
'access_expires_at': access_expires_at.isoformat(), 'refresh': refresh_token,
'refresh_expires_at': refresh_expires_at.isoformat(), 'access_expires_at': access_expires_at.isoformat(),
'refresh_expires_at': refresh_expires_at.isoformat(),
}
}, },
message='Login successful', message='Login successful',
request=request request=request
@@ -1316,219 +1174,3 @@ class AuthViewSet(viewsets.GenericViewSet):
message='Password has been reset successfully', message='Password has been reset successfully',
request=request request=request
) )
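The login hunk above nests the four JWT fields under a new `tokens` key instead of returning them flat. A minimal client-side sketch of reading the reshaped envelope; the field names come from the diff, while `extract_tokens` and the sample payload are illustrative, not part of the codebase:

```python
# Illustrative helper: read the reshaped login response. The nested
# 'tokens' layout is the new format; the flat fallback covers responses
# from the pre-change API.
def extract_tokens(payload: dict) -> tuple[str, str]:
    tokens = payload.get('tokens', payload)  # old responses were flat
    return tokens['access'], tokens['refresh']

sample = {
    'user': {'email': 'demo@example.com'},
    'tokens': {
        'access': 'access-token',
        'refresh': 'refresh-token',
        'access_expires_at': '2025-01-01T00:00:00',
        'refresh_expires_at': '2025-01-08T00:00:00',
    },
}
print(extract_tokens(sample))  # ('access-token', 'refresh-token')
```

The fallback keeps older clients working during rollout, since the change is otherwise a breaking one for anyone reading the flat keys.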
# ============================================================================
# CSV Import/Export Views for Admin
# ============================================================================
from django.http import HttpResponse, JsonResponse
from django.contrib.admin.views.decorators import staff_member_required
from django.views.decorators.http import require_http_methods
import csv
import io
@staff_member_required
@require_http_methods(["GET"])
def industry_csv_template(request):
"""Download CSV template for Industry import"""
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename="industry_template.csv"'
writer = csv.writer(response)
writer.writerow(['name', 'description', 'is_active'])
writer.writerow(['Technology', 'Technology industry', 'true'])
writer.writerow(['Healthcare', 'Healthcare and medical services', 'true'])
return response
@staff_member_required
@require_http_methods(["POST"])
def industry_csv_import(request):
"""Import industries from CSV"""
if not request.FILES.get('csv_file'):
return JsonResponse({'success': False, 'error': 'No CSV file provided'}, status=400)
csv_file = request.FILES['csv_file']
decoded_file = csv_file.read().decode('utf-8')
io_string = io.StringIO(decoded_file)
reader = csv.DictReader(io_string)
created = 0
updated = 0
errors = []
from django.utils.text import slugify
for row_num, row in enumerate(reader, start=2):
try:
is_active = row.get('is_active', 'true').lower() in ['true', '1', 'yes']
slug = slugify(row['name'])
industry, created_flag = Industry.objects.update_or_create(
name=row['name'],
defaults={
'slug': slug,
'description': row.get('description', ''),
'is_active': is_active
}
)
if created_flag:
created += 1
else:
updated += 1
except Exception as e:
errors.append(f"Row {row_num}: {str(e)}")
return JsonResponse({
'success': True,
'created': created,
'updated': updated,
'errors': errors
})
@staff_member_required
@require_http_methods(["GET"])
def industrysector_csv_template(request):
"""Download CSV template for IndustrySector import"""
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename="industrysector_template.csv"'
writer = csv.writer(response)
writer.writerow(['name', 'industry', 'description', 'is_active'])
writer.writerow(['Software Development', 'Technology', 'Software and app development', 'true'])
writer.writerow(['Healthcare IT', 'Healthcare', 'Healthcare information technology', 'true'])
return response
@staff_member_required
@require_http_methods(["POST"])
def industrysector_csv_import(request):
"""Import industry sectors from CSV"""
if not request.FILES.get('csv_file'):
return JsonResponse({'success': False, 'error': 'No CSV file provided'}, status=400)
csv_file = request.FILES['csv_file']
decoded_file = csv_file.read().decode('utf-8')
io_string = io.StringIO(decoded_file)
reader = csv.DictReader(io_string)
created = 0
updated = 0
errors = []
from django.utils.text import slugify
for row_num, row in enumerate(reader, start=2):
try:
is_active = row.get('is_active', 'true').lower() in ['true', '1', 'yes']
slug = slugify(row['name'])
# Find industry by name
try:
industry = Industry.objects.get(name=row['industry'])
except Industry.DoesNotExist:
errors.append(f"Row {row_num}: Industry '{row['industry']}' not found")
continue
sector, created_flag = IndustrySector.objects.update_or_create(
name=row['name'],
industry=industry,
defaults={
'slug': slug,
'description': row.get('description', ''),
'is_active': is_active
}
)
if created_flag:
created += 1
else:
updated += 1
except Exception as e:
errors.append(f"Row {row_num}: {str(e)}")
return JsonResponse({
'success': True,
'created': created,
'updated': updated,
'errors': errors
})
@staff_member_required
@require_http_methods(["GET"])
def seedkeyword_csv_template(request):
"""Download CSV template for SeedKeyword import"""
response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename="seedkeyword_template.csv"'
writer = csv.writer(response)
writer.writerow(['keyword', 'industry', 'sector', 'volume', 'difficulty', 'intent', 'is_active'])
writer.writerow(['python programming', 'Technology', 'Software Development', '10000', '45', 'Informational', 'true'])
writer.writerow(['medical software', 'Healthcare', 'Healthcare IT', '5000', '60', 'Commercial', 'true'])
return response
@staff_member_required
@require_http_methods(["POST"])
def seedkeyword_csv_import(request):
"""Import seed keywords from CSV"""
if not request.FILES.get('csv_file'):
return JsonResponse({'success': False, 'error': 'No CSV file provided'}, status=400)
csv_file = request.FILES['csv_file']
decoded_file = csv_file.read().decode('utf-8')
io_string = io.StringIO(decoded_file)
reader = csv.DictReader(io_string)
created = 0
updated = 0
errors = []
for row_num, row in enumerate(reader, start=2):
try:
is_active = row.get('is_active', 'true').lower() in ['true', '1', 'yes']
# Find industry and sector by name
try:
industry = Industry.objects.get(name=row['industry'])
except Industry.DoesNotExist:
errors.append(f"Row {row_num}: Industry '{row['industry']}' not found")
continue
try:
sector = IndustrySector.objects.get(name=row['sector'], industry=industry)
except IndustrySector.DoesNotExist:
errors.append(f"Row {row_num}: Sector '{row['sector']}' not found in industry '{row['industry']}'")
continue
keyword, created_flag = SeedKeyword.objects.update_or_create(
keyword=row['keyword'],
industry=industry,
sector=sector,
defaults={
'volume': int(row.get('volume', 0)),
'difficulty': int(row.get('difficulty', 0)),
'intent': row.get('intent', 'Informational'),
'is_active': is_active
}
)
if created_flag:
created += 1
else:
updated += 1
except Exception as e:
errors.append(f"Row {row_num}: {str(e)}")
return JsonResponse({
'success': True,
'created': created,
'updated': updated,
'errors': errors
})
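The three importers above share the same per-row coercion rules: `is_active` accepts `true`/`1`/`yes` case-insensitively, and numeric columns fall back to 0. A standalone sketch of those rules; `parse_bool` and `parse_int` are illustrative helpers, not functions in the codebase:

```python
import csv
import io

# Illustrative helpers mirroring the importers' per-row coercion
# (not functions in the codebase). parse_int also tolerates empty
# cells, which a bare int(row.get('volume', 0)) would reject.
def parse_bool(value, default=True):
    if value is None or value == '':
        return default
    return str(value).strip().lower() in ('true', '1', 'yes')

def parse_int(value, default=0):
    try:
        return int(value)
    except (TypeError, ValueError):
        return default

rows = csv.DictReader(io.StringIO("keyword,volume,is_active\npython,10000,TRUE\nsql,,no\n"))
parsed = [(r['keyword'], parse_int(r['volume']), parse_bool(r['is_active'])) for r in rows]
print(parsed)  # [('python', 10000, True), ('sql', 0, False)]
```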



@@ -1,5 +0,0 @@
"""
Business logic layer - Models and Services
Separated from API layer (modules/) for clean architecture
"""


@@ -1,4 +0,0 @@
"""
Automation Business Logic
Orchestrates AI functions into automated pipelines
"""


@@ -1,20 +0,0 @@
"""
Admin registration for Automation models
"""
from django.contrib import admin
from igny8_core.admin.base import AccountAdminMixin
from .models import AutomationConfig, AutomationRun
@admin.register(AutomationConfig)
class AutomationConfigAdmin(AccountAdminMixin, admin.ModelAdmin):
list_display = ('site', 'is_enabled', 'frequency', 'scheduled_time', 'within_stage_delay', 'between_stage_delay', 'last_run_at')
list_filter = ('is_enabled', 'frequency')
search_fields = ('site__domain',)
@admin.register(AutomationRun)
class AutomationRunAdmin(AccountAdminMixin, admin.ModelAdmin):
list_display = ('run_id', 'site', 'status', 'current_stage', 'started_at', 'completed_at')
list_filter = ('status', 'current_stage')
search_fields = ('run_id', 'site__domain')


@@ -1,89 +0,0 @@
# Generated migration for automation models
from django.db import migrations, models
import django.db.models.deletion
import uuid
class Migration(migrations.Migration):
initial = True
dependencies = [
('igny8_core_auth', '0004_add_invoice_payment_models'),
]
operations = [
migrations.CreateModel(
name='AutomationConfig',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('is_enabled', models.BooleanField(default=False, help_text='Enable/disable automation for this site')),
('frequency', models.CharField(
choices=[('daily', 'Daily'), ('weekly', 'Weekly'), ('monthly', 'Monthly')],
default='daily',
max_length=20
)),
('scheduled_time', models.TimeField(default='02:00', help_text='Time of day to run automation (HH:MM)')),
('stage_1_batch_size', models.IntegerField(default=20, help_text='Keywords → Clusters batch size')),
('stage_2_batch_size', models.IntegerField(default=1, help_text='Clusters → Ideas batch size')),
('stage_3_batch_size', models.IntegerField(default=20, help_text='Ideas → Tasks batch size')),
('stage_4_batch_size', models.IntegerField(default=1, help_text='Tasks → Content batch size')),
('stage_5_batch_size', models.IntegerField(default=1, help_text='Content → Image Prompts batch size')),
('stage_6_batch_size', models.IntegerField(default=1, help_text='Image Prompts → Images batch size')),
('last_run_at', models.DateTimeField(blank=True, null=True)),
('next_run_at', models.DateTimeField(blank=True, null=True)),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('account', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='igny8_core_auth.account')),
('site', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, related_name='automation_config', to='igny8_core_auth.site')),
],
options={
'db_table': 'igny8_automation_configs',
},
),
migrations.CreateModel(
name='AutomationRun',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('run_id', models.CharField(max_length=100, unique=True)),
('trigger_type', models.CharField(
choices=[('manual', 'Manual'), ('scheduled', 'Scheduled')],
default='manual',
max_length=20
)),
('status', models.CharField(
choices=[
('running', 'Running'),
('paused', 'Paused'),
('completed', 'Completed'),
('failed', 'Failed')
],
default='running',
max_length=20
)),
('current_stage', models.IntegerField(default=1, help_text='Current stage (1-7)')),
('stage_1_result', models.JSONField(blank=True, null=True)),
('stage_2_result', models.JSONField(blank=True, null=True)),
('stage_3_result', models.JSONField(blank=True, null=True)),
('stage_4_result', models.JSONField(blank=True, null=True)),
('stage_5_result', models.JSONField(blank=True, null=True)),
('stage_6_result', models.JSONField(blank=True, null=True)),
('stage_7_result', models.JSONField(blank=True, null=True)),
('total_credits_used', models.IntegerField(default=0)),
('error_message', models.TextField(blank=True, null=True)),
('started_at', models.DateTimeField(auto_now_add=True)),
('completed_at', models.DateTimeField(blank=True, null=True)),
('account', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='igny8_core_auth.account')),
('site', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='automation_runs', to='igny8_core_auth.site')),
],
options={
'db_table': 'igny8_automation_runs',
'ordering': ['-started_at'],
'indexes': [
models.Index(fields=['site', 'status'], name='automation_site_status_idx'),
models.Index(fields=['site', 'started_at'], name='automation_site_started_idx'),
],
},
),
]


@@ -1,23 +0,0 @@
# Generated migration for delay configuration fields
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('automation', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='automationconfig',
name='within_stage_delay',
field=models.IntegerField(default=3, help_text='Delay between batches within a stage (seconds)'),
),
migrations.AddField(
model_name='automationconfig',
name='between_stage_delay',
field=models.IntegerField(default=5, help_text='Delay between stage transitions (seconds)'),
),
]
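The two fields added in this migration control pacing inside a run: sleep `within_stage_delay` seconds between batches of one stage, and `between_stage_delay` seconds when moving to the next stage. A minimal sketch of that loop, assuming list-of-batches input; `pace_run` is illustrative, the real pipeline differs:

```python
import time

# Illustrative pacing loop (not the real pipeline): within_stage_delay
# separates batches inside a stage, between_stage_delay separates stages.
def pace_run(stages, within_stage_delay=3, between_stage_delay=5, sleep=time.sleep):
    processed = []
    for si, batches in enumerate(stages):
        for bi, batch in enumerate(batches):
            processed.extend(batch)
            if bi < len(batches) - 1:
                sleep(within_stage_delay)
        if si < len(stages) - 1:
            sleep(between_stage_delay)
    return processed

delays = []
pace_run([[['kw1', 'kw2'], ['kw3']], [['cluster1']]], sleep=delays.append)
print(delays)  # [3, 5]
```

Injecting `sleep` as a parameter keeps the loop testable without real waiting, which is why the example records the delays instead of sleeping.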


@@ -1,166 +0,0 @@
# Generated by Django 5.2.8 on 2025-12-03 16:06
import django.db.models.deletion
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('automation', '0002_add_delay_configuration'),
('igny8_core_auth', '0003_add_sync_event_model'),
]
operations = [
migrations.AlterModelOptions(
name='automationconfig',
options={'verbose_name': 'Automation Config', 'verbose_name_plural': 'Automation Configs'},
),
migrations.AlterModelOptions(
name='automationrun',
options={'ordering': ['-started_at'], 'verbose_name': 'Automation Run', 'verbose_name_plural': 'Automation Runs'},
),
migrations.RemoveIndex(
model_name='automationrun',
name='automation_site_status_idx',
),
migrations.RemoveIndex(
model_name='automationrun',
name='automation_site_started_idx',
),
migrations.AlterField(
model_name='automationconfig',
name='account',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='automation_configs', to='igny8_core_auth.account'),
),
migrations.AlterField(
model_name='automationconfig',
name='is_enabled',
field=models.BooleanField(default=False, help_text='Whether scheduled automation is active'),
),
migrations.AlterField(
model_name='automationconfig',
name='next_run_at',
field=models.DateTimeField(blank=True, help_text='Calculated based on frequency', null=True),
),
migrations.AlterField(
model_name='automationconfig',
name='scheduled_time',
field=models.TimeField(default='02:00', help_text='Time to run (e.g., 02:00)'),
),
migrations.AlterField(
model_name='automationconfig',
name='stage_1_batch_size',
field=models.IntegerField(default=20, help_text='Keywords per batch'),
),
migrations.AlterField(
model_name='automationconfig',
name='stage_2_batch_size',
field=models.IntegerField(default=1, help_text='Clusters at a time'),
),
migrations.AlterField(
model_name='automationconfig',
name='stage_3_batch_size',
field=models.IntegerField(default=20, help_text='Ideas per batch'),
),
migrations.AlterField(
model_name='automationconfig',
name='stage_4_batch_size',
field=models.IntegerField(default=1, help_text='Tasks - sequential'),
),
migrations.AlterField(
model_name='automationconfig',
name='stage_5_batch_size',
field=models.IntegerField(default=1, help_text='Content at a time'),
),
migrations.AlterField(
model_name='automationconfig',
name='stage_6_batch_size',
field=models.IntegerField(default=1, help_text='Images - sequential'),
),
migrations.AlterField(
model_name='automationrun',
name='account',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='automation_runs', to='igny8_core_auth.account'),
),
migrations.AlterField(
model_name='automationrun',
name='current_stage',
field=models.IntegerField(default=1, help_text='Current stage number (1-7)'),
),
migrations.AlterField(
model_name='automationrun',
name='run_id',
field=models.CharField(db_index=True, help_text='Format: run_20251203_140523_manual', max_length=100, unique=True),
),
migrations.AlterField(
model_name='automationrun',
name='stage_1_result',
field=models.JSONField(blank=True, help_text='{keywords_processed, clusters_created, batches}', null=True),
),
migrations.AlterField(
model_name='automationrun',
name='stage_2_result',
field=models.JSONField(blank=True, help_text='{clusters_processed, ideas_created}', null=True),
),
migrations.AlterField(
model_name='automationrun',
name='stage_3_result',
field=models.JSONField(blank=True, help_text='{ideas_processed, tasks_created}', null=True),
),
migrations.AlterField(
model_name='automationrun',
name='stage_4_result',
field=models.JSONField(blank=True, help_text='{tasks_processed, content_created, total_words}', null=True),
),
migrations.AlterField(
model_name='automationrun',
name='stage_5_result',
field=models.JSONField(blank=True, help_text='{content_processed, prompts_created}', null=True),
),
migrations.AlterField(
model_name='automationrun',
name='stage_6_result',
field=models.JSONField(blank=True, help_text='{images_processed, images_generated}', null=True),
),
migrations.AlterField(
model_name='automationrun',
name='stage_7_result',
field=models.JSONField(blank=True, help_text='{ready_for_review}', null=True),
),
migrations.AlterField(
model_name='automationrun',
name='started_at',
field=models.DateTimeField(auto_now_add=True, db_index=True),
),
migrations.AlterField(
model_name='automationrun',
name='status',
field=models.CharField(choices=[('running', 'Running'), ('paused', 'Paused'), ('completed', 'Completed'), ('failed', 'Failed')], db_index=True, default='running', max_length=20),
),
migrations.AlterField(
model_name='automationrun',
name='trigger_type',
field=models.CharField(choices=[('manual', 'Manual'), ('scheduled', 'Scheduled')], max_length=20),
),
migrations.AddIndex(
model_name='automationconfig',
index=models.Index(fields=['is_enabled', 'next_run_at'], name='igny8_autom_is_enab_038ce6_idx'),
),
migrations.AddIndex(
model_name='automationconfig',
index=models.Index(fields=['account', 'site'], name='igny8_autom_account_c6092f_idx'),
),
migrations.AddIndex(
model_name='automationrun',
index=models.Index(fields=['site', '-started_at'], name='igny8_autom_site_id_b5bf36_idx'),
),
migrations.AddIndex(
model_name='automationrun',
index=models.Index(fields=['status', '-started_at'], name='igny8_autom_status_1457b0_idx'),
),
migrations.AddIndex(
model_name='automationrun',
index=models.Index(fields=['account', '-started_at'], name='igny8_autom_account_27cb3c_idx'),
),
]


@@ -1,33 +0,0 @@
# Generated by Django 5.2.8 on 2025-12-04 15:27
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('automation', '0003_alter_automationconfig_options_and_more'),
]
operations = [
migrations.AddField(
model_name='automationrun',
name='cancelled_at',
field=models.DateTimeField(blank=True, help_text='When automation was cancelled', null=True),
),
migrations.AddField(
model_name='automationrun',
name='paused_at',
field=models.DateTimeField(blank=True, help_text='When automation was paused', null=True),
),
migrations.AddField(
model_name='automationrun',
name='resumed_at',
field=models.DateTimeField(blank=True, help_text='When automation was last resumed', null=True),
),
migrations.AlterField(
model_name='automationrun',
name='status',
field=models.CharField(choices=[('running', 'Running'), ('paused', 'Paused'), ('cancelled', 'Cancelled'), ('completed', 'Completed'), ('failed', 'Failed')], db_index=True, default='running', max_length=20),
),
]


@@ -1 +0,0 @@
"""Automation migrations"""


@@ -1,114 +0,0 @@
"""
Automation Models
Tracks automation runs and configuration
"""
from django.db import models
from django.utils import timezone
from igny8_core.auth.models import Account, Site
class AutomationConfig(models.Model):
"""Per-site automation configuration"""
FREQUENCY_CHOICES = [
('daily', 'Daily'),
('weekly', 'Weekly'),
('monthly', 'Monthly'),
]
account = models.ForeignKey(Account, on_delete=models.CASCADE, related_name='automation_configs')
site = models.OneToOneField(Site, on_delete=models.CASCADE, related_name='automation_config')
is_enabled = models.BooleanField(default=False, help_text="Whether scheduled automation is active")
frequency = models.CharField(max_length=20, choices=FREQUENCY_CHOICES, default='daily')
scheduled_time = models.TimeField(default='02:00', help_text="Time to run (e.g., 02:00)")
# Batch sizes per stage
stage_1_batch_size = models.IntegerField(default=20, help_text="Keywords per batch")
stage_2_batch_size = models.IntegerField(default=1, help_text="Clusters at a time")
stage_3_batch_size = models.IntegerField(default=20, help_text="Ideas per batch")
stage_4_batch_size = models.IntegerField(default=1, help_text="Tasks - sequential")
stage_5_batch_size = models.IntegerField(default=1, help_text="Content at a time")
stage_6_batch_size = models.IntegerField(default=1, help_text="Images - sequential")
# Delay configuration (in seconds)
within_stage_delay = models.IntegerField(default=3, help_text="Delay between batches within a stage (seconds)")
between_stage_delay = models.IntegerField(default=5, help_text="Delay between stage transitions (seconds)")
last_run_at = models.DateTimeField(null=True, blank=True)
next_run_at = models.DateTimeField(null=True, blank=True, help_text="Calculated based on frequency")
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
db_table = 'igny8_automation_configs'
verbose_name = 'Automation Config'
verbose_name_plural = 'Automation Configs'
indexes = [
models.Index(fields=['is_enabled', 'next_run_at']),
models.Index(fields=['account', 'site']),
]
def __str__(self):
return f"Automation Config: {self.site.domain} ({self.frequency})"
class AutomationRun(models.Model):
"""Tracks each automation execution"""
TRIGGER_TYPE_CHOICES = [
('manual', 'Manual'),
('scheduled', 'Scheduled'),
]
STATUS_CHOICES = [
('running', 'Running'),
('paused', 'Paused'),
('cancelled', 'Cancelled'),
('completed', 'Completed'),
('failed', 'Failed'),
]
run_id = models.CharField(max_length=100, unique=True, db_index=True, help_text="Format: run_20251203_140523_manual")
account = models.ForeignKey(Account, on_delete=models.CASCADE, related_name='automation_runs')
site = models.ForeignKey(Site, on_delete=models.CASCADE, related_name='automation_runs')
trigger_type = models.CharField(max_length=20, choices=TRIGGER_TYPE_CHOICES)
status = models.CharField(max_length=20, choices=STATUS_CHOICES, default='running', db_index=True)
current_stage = models.IntegerField(default=1, help_text="Current stage number (1-7)")
# Pause/Resume tracking
paused_at = models.DateTimeField(null=True, blank=True, help_text="When automation was paused")
resumed_at = models.DateTimeField(null=True, blank=True, help_text="When automation was last resumed")
cancelled_at = models.DateTimeField(null=True, blank=True, help_text="When automation was cancelled")
started_at = models.DateTimeField(auto_now_add=True, db_index=True)
completed_at = models.DateTimeField(null=True, blank=True)
total_credits_used = models.IntegerField(default=0)
# JSON results per stage
stage_1_result = models.JSONField(null=True, blank=True, help_text="{keywords_processed, clusters_created, batches}")
stage_2_result = models.JSONField(null=True, blank=True, help_text="{clusters_processed, ideas_created}")
stage_3_result = models.JSONField(null=True, blank=True, help_text="{ideas_processed, tasks_created}")
stage_4_result = models.JSONField(null=True, blank=True, help_text="{tasks_processed, content_created, total_words}")
stage_5_result = models.JSONField(null=True, blank=True, help_text="{content_processed, prompts_created}")
stage_6_result = models.JSONField(null=True, blank=True, help_text="{images_processed, images_generated}")
stage_7_result = models.JSONField(null=True, blank=True, help_text="{ready_for_review}")
error_message = models.TextField(null=True, blank=True)
class Meta:
db_table = 'igny8_automation_runs'
verbose_name = 'Automation Run'
verbose_name_plural = 'Automation Runs'
ordering = ['-started_at']
indexes = [
models.Index(fields=['site', '-started_at']),
models.Index(fields=['status', '-started_at']),
models.Index(fields=['account', '-started_at']),
]
def __str__(self):
return f"{self.run_id} - {self.site.domain} ({self.status})"
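The `run_id` field above documents the format `run_20251203_140523_manual`, which matches the timestamp-plus-trigger logic in `AutomationLogger.start_run` later in this diff. A minimal sketch of producing that format; `generate_run_id` is an illustrative helper, not a real method:

```python
from datetime import datetime

# Illustrative helper producing the documented run_id format
# ("run_20251203_140523_manual"): run_<YYYYMMDD>_<HHMMSS>_<trigger>.
def generate_run_id(trigger_type, now=None):
    now = now or datetime.now()
    return f"run_{now.strftime('%Y%m%d_%H%M%S')}_{trigger_type}"

print(generate_run_id('manual', datetime(2025, 12, 3, 14, 5, 23)))
# run_20251203_140523_manual
```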


@@ -1,7 +0,0 @@
"""
Automation Services
"""
from .automation_service import AutomationService
from .automation_logger import AutomationLogger
__all__ = ['AutomationService', 'AutomationLogger']


@@ -1,368 +0,0 @@
"""
Automation Logger Service
Handles file-based logging for automation runs
"""
import os
import logging
from datetime import datetime
from pathlib import Path
from typing import List
import json
logger = logging.getLogger(__name__)
class AutomationLogger:
"""File-based logging for automation runs
Writes logs under a per-account/per-site/run directory by default.
Optionally a shared_log_dir can be provided to mirror logs into a consolidated folder.
"""
def __init__(self, base_log_dir: str = '/data/app/logs/automation', shared_log_dir: str | None = None):
# Use absolute path by default to avoid surprises from current working directory
self.base_log_dir = base_log_dir
self.shared_log_dir = shared_log_dir
def start_run(self, account_id: int, site_id: int, trigger_type: str) -> str:
"""
Create log directory structure and return run_id
Returns:
run_id in format: run_20251203_140523_manual
"""
# Generate run_id
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
run_id = f"run_{timestamp}_{trigger_type}"
# Create directory structure (primary)
run_dir = self._get_run_dir(account_id, site_id, run_id)
os.makedirs(run_dir, exist_ok=True)
# Create mirrored directory in shared log dir if configured
shared_run_dir = None
if self.shared_log_dir:
shared_run_dir = os.path.join(self.shared_log_dir, run_id)
os.makedirs(shared_run_dir, exist_ok=True)
# Create main log file in primary run dir
log_file = os.path.join(run_dir, 'automation_run.log')
with open(log_file, 'w') as f:
f.write("=" * 80 + "\n")
f.write(f"AUTOMATION RUN: {run_id}\n")
f.write(f"Started: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
f.write(f"Trigger: {trigger_type}\n")
f.write(f"Account: {account_id}\n")
f.write(f"Site: {site_id}\n")
f.write("=" * 80 + "\n\n")
# Also create a main log in the shared run dir (if configured)
if shared_run_dir:
shared_log_file = os.path.join(shared_run_dir, 'automation_run.log')
with open(shared_log_file, 'w') as f:
f.write("=" * 80 + "\n")
f.write(f"AUTOMATION RUN (SHARED): {run_id}\n")
f.write(f"Started: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
f.write(f"Trigger: {trigger_type}\n")
f.write(f"Account: {account_id}\n")
f.write(f"Site: {site_id}\n")
f.write("=" * 80 + "\n\n")
# Structured trace event for run start
try:
trace_event = {
'event': 'run_started',
'run_id': run_id,
'trigger': trigger_type,
'account_id': account_id,
'site_id': site_id,
'timestamp': datetime.now().isoformat(),
}
# best-effort append
run_dir = self._get_run_dir(account_id, site_id, run_id)
os.makedirs(run_dir, exist_ok=True)
trace_file = os.path.join(run_dir, 'run_trace.jsonl')
with open(trace_file, 'a') as tf:
tf.write(json.dumps(trace_event) + "\n")
if self.shared_log_dir:
shared_trace = os.path.join(self.shared_log_dir, run_id, 'run_trace.jsonl')
os.makedirs(os.path.dirname(shared_trace), exist_ok=True)
with open(shared_trace, 'a') as stf:
stf.write(json.dumps(trace_event) + "\n")
except Exception:
pass
logger.info(f"[AutomationLogger] Created run: {run_id}")
return run_id
def log_stage_start(self, run_id: str, account_id: int, site_id: int, stage_number: int, stage_name: str, pending_count: int):
"""Log stage start"""
timestamp = self._timestamp()
# Main log
self._append_to_main_log(account_id, site_id, run_id,
f"{timestamp} - Stage {stage_number} starting: {stage_name}")
self._append_to_main_log(account_id, site_id, run_id,
f"{timestamp} - Stage {stage_number}: Found {pending_count} pending items")
# Stage-specific log (primary)
stage_log = self._get_stage_log_path(account_id, site_id, run_id, stage_number)
os.makedirs(os.path.dirname(stage_log), exist_ok=True)
with open(stage_log, 'w') as f:
f.write("=" * 80 + "\n")
f.write(f"STAGE {stage_number}: {stage_name}\n")
f.write(f"Started: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
f.write("=" * 80 + "\n\n")
f.write(f"{timestamp} - Found {pending_count} pending items\n")
# Mirror stage log into shared dir if configured
if self.shared_log_dir:
shared_stage_log = os.path.join(self.shared_log_dir, run_id, f'stage_{str(stage_number)}.log')
os.makedirs(os.path.dirname(shared_stage_log), exist_ok=True)
with open(shared_stage_log, 'w') as f:
f.write("=" * 80 + "\n")
f.write(f"STAGE {stage_number}: {stage_name} (SHARED)\n")
f.write(f"Started: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
f.write("=" * 80 + "\n\n")
f.write(f"{timestamp} - Found {pending_count} pending items\n")
# Structured stage start trace
try:
trace_event = {
'event': 'stage_start',
'run_id': run_id,
'stage': stage_number,
'stage_name': stage_name,
'pending_count': pending_count,
'timestamp': datetime.now().isoformat(),
}
run_dir = self._get_run_dir(account_id, site_id, run_id)
os.makedirs(run_dir, exist_ok=True)
trace_file = os.path.join(run_dir, 'run_trace.jsonl')
with open(trace_file, 'a') as tf:
tf.write(json.dumps(trace_event) + "\n")
if self.shared_log_dir:
shared_trace = os.path.join(self.shared_log_dir, run_id, 'run_trace.jsonl')
os.makedirs(os.path.dirname(shared_trace), exist_ok=True)
with open(shared_trace, 'a') as stf:
stf.write(json.dumps(trace_event) + "\n")
except Exception:
pass
def log_stage_progress(self, run_id: str, account_id: int, site_id: int, stage_number: int, message: str):
"""Log stage progress"""
timestamp = self._timestamp()
log_message = f"{timestamp} - Stage {stage_number}: {message}"
# Main log
self._append_to_main_log(account_id, site_id, run_id, log_message)
# Stage-specific log (primary)
stage_log = self._get_stage_log_path(account_id, site_id, run_id, stage_number)
os.makedirs(os.path.dirname(stage_log), exist_ok=True)
with open(stage_log, 'a') as f:
f.write(f"{log_message}\n")
# Mirror progress into shared dir if configured
if self.shared_log_dir:
shared_stage_log = os.path.join(self.shared_log_dir, run_id, f'stage_{str(stage_number)}.log')
os.makedirs(os.path.dirname(shared_stage_log), exist_ok=True)
with open(shared_stage_log, 'a') as f:
f.write(f"{log_message}\n")
# Structured progress trace
try:
trace_event = {
'event': 'stage_progress',
'run_id': run_id,
'stage': stage_number,
'message': message,
'timestamp': datetime.now().isoformat(),
}
run_dir = self._get_run_dir(account_id, site_id, run_id)
os.makedirs(run_dir, exist_ok=True)
trace_file = os.path.join(run_dir, 'run_trace.jsonl')
with open(trace_file, 'a') as tf:
tf.write(json.dumps(trace_event) + "\n")
if self.shared_log_dir:
shared_trace = os.path.join(self.shared_log_dir, run_id, 'run_trace.jsonl')
os.makedirs(os.path.dirname(shared_trace), exist_ok=True)
with open(shared_trace, 'a') as stf:
stf.write(json.dumps(trace_event) + "\n")
except Exception:
pass
def log_stage_complete(self, run_id: str, account_id: int, site_id: int, stage_number: int,
processed_count: int, time_elapsed: str, credits_used: int):
"""Log stage completion"""
timestamp = self._timestamp()
# Main log
self._append_to_main_log(account_id, site_id, run_id,
f"{timestamp} - Stage {stage_number} complete: {processed_count} items processed")
# Stage-specific log (primary)
stage_log = self._get_stage_log_path(account_id, site_id, run_id, stage_number)
os.makedirs(os.path.dirname(stage_log), exist_ok=True)
with open(stage_log, 'a') as f:
f.write("\n" + "=" * 80 + "\n")
f.write(f"STAGE {stage_number} COMPLETE\n")
f.write(f"Total Time: {time_elapsed}\n")
f.write(f"Processed: {processed_count} items\n")
f.write(f"Credits Used: {credits_used}\n")
f.write("=" * 80 + "\n")
# Mirror completion into shared dir if configured
if self.shared_log_dir:
shared_stage_log = os.path.join(self.shared_log_dir, run_id, f'stage_{str(stage_number)}.log')
os.makedirs(os.path.dirname(shared_stage_log), exist_ok=True)
with open(shared_stage_log, 'a') as f:
f.write("\n" + "=" * 80 + "\n")
f.write(f"STAGE {stage_number} COMPLETE (SHARED)\n")
f.write(f"Total Time: {time_elapsed}\n")
f.write(f"Processed: {processed_count} items\n")
f.write(f"Credits Used: {credits_used}\n")
f.write("=" * 80 + "\n")
# Structured completion trace
try:
trace_event = {
'event': 'stage_complete',
'run_id': run_id,
'stage': stage_number,
'processed_count': processed_count,
'time_elapsed': time_elapsed,
'credits_used': credits_used,
'timestamp': datetime.now().isoformat(),
}
run_dir = self._get_run_dir(account_id, site_id, run_id)
os.makedirs(run_dir, exist_ok=True)
trace_file = os.path.join(run_dir, 'run_trace.jsonl')
with open(trace_file, 'a') as tf:
tf.write(json.dumps(trace_event) + "\n")
if self.shared_log_dir:
shared_trace = os.path.join(self.shared_log_dir, run_id, 'run_trace.jsonl')
os.makedirs(os.path.dirname(shared_trace), exist_ok=True)
with open(shared_trace, 'a') as stf:
stf.write(json.dumps(trace_event) + "\n")
except Exception:
pass
def log_stage_error(self, run_id: str, account_id: int, site_id: int, stage_number: int, error_message: str):
"""Log stage error"""
timestamp = self._timestamp()
log_message = f"{timestamp} - Stage {stage_number} ERROR: {error_message}"
# Main log
self._append_to_main_log(account_id, site_id, run_id, log_message)
# Stage-specific log (primary)
stage_log = self._get_stage_log_path(account_id, site_id, run_id, stage_number)
os.makedirs(os.path.dirname(stage_log), exist_ok=True)
with open(stage_log, 'a') as f:
f.write(f"\n{log_message}\n")
# Mirror error into shared dir if configured
if self.shared_log_dir:
shared_stage_log = os.path.join(self.shared_log_dir, run_id, f'stage_{str(stage_number)}.log')
os.makedirs(os.path.dirname(shared_stage_log), exist_ok=True)
with open(shared_stage_log, 'a') as f:
f.write(f"\n{log_message}\n")
# Structured error trace
try:
trace_event = {
'event': 'stage_error',
'run_id': run_id,
'stage': stage_number,
'error': error_message,
'timestamp': datetime.now().isoformat(),
}
run_dir = self._get_run_dir(account_id, site_id, run_id)
os.makedirs(run_dir, exist_ok=True)
trace_file = os.path.join(run_dir, 'run_trace.jsonl')
with open(trace_file, 'a') as tf:
tf.write(json.dumps(trace_event) + "\n")
if self.shared_log_dir:
shared_trace = os.path.join(self.shared_log_dir, run_id, 'run_trace.jsonl')
os.makedirs(os.path.dirname(shared_trace), exist_ok=True)
with open(shared_trace, 'a') as stf:
stf.write(json.dumps(trace_event) + "\n")
except Exception:
pass
def get_activity_log(self, account_id: int, site_id: int, run_id: str, last_n: int = 50) -> List[str]:
"""
Get last N lines from main activity log
Returns:
List of log lines (newest first)
"""
log_file = os.path.join(self._get_run_dir(account_id, site_id, str(run_id)), 'automation_run.log')
if not os.path.exists(log_file):
return []
with open(log_file, 'r') as f:
lines = f.readlines()
# Filter out header lines and empty lines
activity_lines = [line.strip() for line in lines if line.strip() and not line.startswith('=')]
# Return last N lines (newest first)
return list(reversed(activity_lines[-last_n:]))
# Helper methods
def _get_run_dir(self, account_id: int, site_id: int, run_id: str) -> str:
"""Get run directory path"""
return os.path.join(self.base_log_dir, str(account_id), str(site_id), run_id)
def _get_stage_log_path(self, account_id: int, site_id: int, run_id: str, stage_number: int) -> str:
"""Get stage log file path"""
run_dir = self._get_run_dir(account_id, site_id, run_id)
return os.path.join(run_dir, f'stage_{str(stage_number)}.log')
def _append_to_main_log(self, account_id: int, site_id: int, run_id: str, message: str):
"""Append message to main log file"""
# Ensure base log dir exists
try:
os.makedirs(self.base_log_dir, exist_ok=True)
except Exception:
# Best-effort: if directory creation fails, still attempt to write to run dir
pass
log_file = os.path.join(self._get_run_dir(account_id, site_id, run_id), 'automation_run.log')
os.makedirs(os.path.dirname(log_file), exist_ok=True)
with open(log_file, 'a') as f:
f.write(f"{message}\n")
# Also append to a diagnostic file so we can trace logger calls across runs
try:
diag_file = os.path.join(self.base_log_dir, 'automation_diagnostic.log')
with open(diag_file, 'a') as df:
df.write(f"{self._timestamp()} - {account_id}/{site_id}/{run_id} - {message}\n")
except Exception:
# Never fail the main logging flow because of diagnostics
pass
def append_trace(self, account_id: int, site_id: int, run_id: str, event: dict):
"""Public helper to append a structured trace event (JSONL) for a run and mirror to shared dir."""
try:
run_dir = self._get_run_dir(account_id, site_id, run_id)
os.makedirs(run_dir, exist_ok=True)
trace_file = os.path.join(run_dir, 'run_trace.jsonl')
with open(trace_file, 'a') as tf:
tf.write(json.dumps(event) + "\n")
except Exception:
# Best-effort: ignore trace write failures
pass
if self.shared_log_dir:
try:
shared_run_dir = os.path.join(self.shared_log_dir, run_id)
os.makedirs(shared_run_dir, exist_ok=True)
shared_trace = os.path.join(shared_run_dir, 'run_trace.jsonl')
with open(shared_trace, 'a') as stf:
stf.write(json.dumps(event) + "\n")
except Exception:
pass
def _timestamp(self) -> str:
"""Get formatted timestamp"""
return datetime.now().strftime('%H:%M:%S')
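The JSONL traces these helpers append can be consumed line by line. A minimal reader sketch, under the assumption that each line is one JSON object shaped like the `trace_event` dicts above (the `read_trace_events` helper and the sample events are illustrative, not part of the codebase):

```python
import json
import os
import tempfile

def read_trace_events(trace_file, event_type=None):
    """Read structured trace events from a run_trace.jsonl file,
    optionally filtering by the 'event' field."""
    events = []
    if not os.path.exists(trace_file):
        return events
    with open(trace_file, 'r') as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            event = json.loads(line)
            if event_type is None or event.get('event') == event_type:
                events.append(event)
    return events

# Usage with hypothetical events
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'run_trace.jsonl')
    with open(path, 'a') as f:
        f.write(json.dumps({'event': 'stage_complete', 'stage': 1}) + "\n")
        f.write(json.dumps({'event': 'stage_error', 'stage': 2, 'error': 'boom'}) + "\n")
    errors = read_trace_events(path, event_type='stage_error')
    print(len(errors))  # 1
```

Because the writers above swallow exceptions, a missing or empty trace file is a normal case; the reader returns an empty list rather than raising.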

View File

@@ -1,193 +0,0 @@
"""
Automation Celery Tasks
Background tasks for automation pipeline
"""
from celery import shared_task, chain
from celery.utils.log import get_task_logger
from datetime import datetime, timedelta
from django.utils import timezone
from igny8_core.business.automation.models import AutomationConfig, AutomationRun
from igny8_core.business.automation.services import AutomationService
logger = get_task_logger(__name__)
@shared_task(name='automation.check_scheduled_automations')
def check_scheduled_automations():
"""
Check for scheduled automation runs (runs every hour)
"""
logger.info("[AutomationTask] Checking scheduled automations")
now = timezone.now()
current_time = now.time()
# Find configs that should run now
for config in AutomationConfig.objects.filter(is_enabled=True):
# Check if it's time to run
should_run = False
if config.frequency == 'daily':
# Run if current time matches scheduled_time
if current_time.hour == config.scheduled_time.hour:
should_run = True
elif config.frequency == 'weekly':
# Run on Mondays at scheduled_time
if now.weekday() == 0 and current_time.hour == config.scheduled_time.hour:
should_run = True
elif config.frequency == 'monthly':
# Run on 1st of month at scheduled_time
if now.day == 1 and current_time.hour == config.scheduled_time.hour:
should_run = True
if should_run:
# Check if already ran today
if config.last_run_at:
time_since_last_run = now - config.last_run_at
if time_since_last_run < timedelta(hours=23):
logger.info(f"[AutomationTask] Skipping site {config.site.id} - already ran today")
continue
# Check if already running
if AutomationRun.objects.filter(site=config.site, status='running').exists():
logger.info(f"[AutomationTask] Skipping site {config.site.id} - already running")
continue
logger.info(f"[AutomationTask] Starting scheduled automation for site {config.site.id}")
try:
service = AutomationService(config.account, config.site)
run_id = service.start_automation(trigger_type='scheduled')
# Update config
config.last_run_at = now
config.next_run_at = _calculate_next_run(config, now)
config.save()
# Start async processing
run_automation_task.delay(run_id)
except Exception as e:
logger.error(f"[AutomationTask] Failed to start automation for site {config.site.id}: {e}")
@shared_task(name='automation.run_automation_task', bind=True, max_retries=0)
def run_automation_task(self, run_id: str):
"""
Run automation pipeline (chains all stages)
"""
logger.info(f"[AutomationTask] Starting automation run: {run_id}")
try:
service = AutomationService.from_run_id(run_id)
# Run all stages sequentially
service.run_stage_1()
service.run_stage_2()
service.run_stage_3()
service.run_stage_4()
service.run_stage_5()
service.run_stage_6()
service.run_stage_7()
logger.info(f"[AutomationTask] Completed automation run: {run_id}")
except Exception as e:
logger.error(f"[AutomationTask] Failed automation run {run_id}: {e}")
# Mark as failed
run = AutomationRun.objects.get(run_id=run_id)
run.status = 'failed'
run.error_message = str(e)
run.completed_at = timezone.now()
run.save()
# Release lock
from django.core.cache import cache
cache.delete(f'automation_lock_{run.site.id}')
raise
@shared_task(name='automation.resume_automation_task', bind=True, max_retries=0)
def resume_automation_task(self, run_id: str):
"""
Resume paused automation run from current stage
"""
logger.info(f"[AutomationTask] Resuming automation run: {run_id}")
try:
service = AutomationService.from_run_id(run_id)
run = service.run
# Continue from current stage
stage_methods = [
service.run_stage_1,
service.run_stage_2,
service.run_stage_3,
service.run_stage_4,
service.run_stage_5,
service.run_stage_6,
service.run_stage_7,
]
# Run from current_stage to end
for stage in range(run.current_stage - 1, 7):
stage_methods[stage]()
logger.info(f"[AutomationTask] Resumed automation run: {run_id}")
except Exception as e:
logger.error(f"[AutomationTask] Failed to resume automation run {run_id}: {e}")
# Mark as failed
run = AutomationRun.objects.get(run_id=run_id)
run.status = 'failed'
run.error_message = str(e)
run.completed_at = timezone.now()
run.save()
# Alias for continue_automation_task (same as resume)
continue_automation_task = resume_automation_task
def _calculate_next_run(config: AutomationConfig, now: datetime) -> datetime:
"""Calculate next run time based on frequency"""
if config.frequency == 'daily':
next_run = now + timedelta(days=1)
next_run = next_run.replace(
hour=config.scheduled_time.hour,
minute=config.scheduled_time.minute,
second=0,
microsecond=0
)
elif config.frequency == 'weekly':
# Next Monday
days_until_monday = (7 - now.weekday()) % 7
if days_until_monday == 0:
days_until_monday = 7
next_run = now + timedelta(days=days_until_monday)
next_run = next_run.replace(
hour=config.scheduled_time.hour,
minute=config.scheduled_time.minute,
second=0,
microsecond=0
)
elif config.frequency == 'monthly':
# Next 1st of month
if now.month == 12:
next_run = now.replace(year=now.year + 1, month=1, day=1)
else:
next_run = now.replace(month=now.month + 1, day=1)
next_run = next_run.replace(
hour=config.scheduled_time.hour,
minute=config.scheduled_time.minute,
second=0,
microsecond=0
)
else:
next_run = now + timedelta(days=1)
return next_run
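The monthly branch of `_calculate_next_run` is the only one with a year-rollover edge case. A standalone sketch of that branch (reimplemented without the `AutomationConfig` model, so `next_monthly_run` is an illustrative mirror, not the function itself):

```python
from datetime import datetime, time

def next_monthly_run(now, scheduled_time):
    """Mirror of the monthly branch: roll to the 1st of the
    next month at the configured time, crossing year boundaries."""
    if now.month == 12:
        next_run = now.replace(year=now.year + 1, month=1, day=1)
    else:
        next_run = now.replace(month=now.month + 1, day=1)
    return next_run.replace(
        hour=scheduled_time.hour,
        minute=scheduled_time.minute,
        second=0,
        microsecond=0,
    )

# December rolls into January of the following year
print(next_monthly_run(datetime(2025, 12, 15, 9, 30), time(2, 0)))
# 2026-01-01 02:00:00
```

Setting `day=1` in the same `replace()` call as the month bump avoids invalid intermediate dates (e.g. moving January 31 to February).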

View File

@@ -1,13 +0,0 @@
"""
Automation URLs
"""
from django.urls import path, include
from rest_framework.routers import DefaultRouter
from igny8_core.business.automation.views import AutomationViewSet
router = DefaultRouter()
router.register(r'', AutomationViewSet, basename='automation')
urlpatterns = [
path('', include(router.urls)),
]

View File

@@ -1,716 +0,0 @@
"""
Automation API Views
REST API endpoints for automation management
"""
from rest_framework import viewsets, status
from rest_framework.decorators import action
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated
from django.shortcuts import get_object_or_404
from django.utils import timezone
from drf_spectacular.utils import extend_schema
from igny8_core.business.automation.models import AutomationConfig, AutomationRun
from igny8_core.business.automation.services import AutomationService
from igny8_core.auth.models import Account, Site
class AutomationViewSet(viewsets.ViewSet):
"""API endpoints for automation"""
permission_classes = [IsAuthenticated]
def _get_site(self, request):
"""Get site from request"""
site_id = request.query_params.get('site_id')
if not site_id:
return None, Response(
{'error': 'site_id required'},
status=status.HTTP_400_BAD_REQUEST
)
site = get_object_or_404(Site, id=site_id, account=request.user.account)
return site, None
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['get'])
def config(self, request):
"""
GET /api/v1/automation/config/?site_id=123
Get automation configuration for site
"""
site, error_response = self._get_site(request)
if error_response:
return error_response
config, _ = AutomationConfig.objects.get_or_create(
account=site.account,
site=site,
defaults={
'is_enabled': False,
'frequency': 'daily',
'scheduled_time': '02:00',
'within_stage_delay': 3,
'between_stage_delay': 5,
}
)
return Response({
'is_enabled': config.is_enabled,
'frequency': config.frequency,
'scheduled_time': str(config.scheduled_time),
'stage_1_batch_size': config.stage_1_batch_size,
'stage_2_batch_size': config.stage_2_batch_size,
'stage_3_batch_size': config.stage_3_batch_size,
'stage_4_batch_size': config.stage_4_batch_size,
'stage_5_batch_size': config.stage_5_batch_size,
'stage_6_batch_size': config.stage_6_batch_size,
'within_stage_delay': config.within_stage_delay,
'between_stage_delay': config.between_stage_delay,
'last_run_at': config.last_run_at,
'next_run_at': config.next_run_at,
})
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['put'])
def update_config(self, request):
"""
PUT /api/v1/automation/update_config/?site_id=123
Update automation configuration
Body:
{
"is_enabled": true,
"frequency": "daily",
"scheduled_time": "02:00",
"stage_1_batch_size": 20,
...
}
"""
site, error_response = self._get_site(request)
if error_response:
return error_response
config, _ = AutomationConfig.objects.get_or_create(
account=site.account,
site=site
)
# Update fields
if 'is_enabled' in request.data:
config.is_enabled = request.data['is_enabled']
if 'frequency' in request.data:
config.frequency = request.data['frequency']
if 'scheduled_time' in request.data:
config.scheduled_time = request.data['scheduled_time']
if 'stage_1_batch_size' in request.data:
config.stage_1_batch_size = request.data['stage_1_batch_size']
if 'stage_2_batch_size' in request.data:
config.stage_2_batch_size = request.data['stage_2_batch_size']
if 'stage_3_batch_size' in request.data:
config.stage_3_batch_size = request.data['stage_3_batch_size']
if 'stage_4_batch_size' in request.data:
config.stage_4_batch_size = request.data['stage_4_batch_size']
if 'stage_5_batch_size' in request.data:
config.stage_5_batch_size = request.data['stage_5_batch_size']
if 'stage_6_batch_size' in request.data:
config.stage_6_batch_size = request.data['stage_6_batch_size']
# Delay settings
if 'within_stage_delay' in request.data:
try:
config.within_stage_delay = int(request.data['within_stage_delay'])
except (TypeError, ValueError):
pass
if 'between_stage_delay' in request.data:
try:
config.between_stage_delay = int(request.data['between_stage_delay'])
except (TypeError, ValueError):
pass
config.save()
return Response({
'message': 'Config updated',
'is_enabled': config.is_enabled,
'frequency': config.frequency,
'scheduled_time': str(config.scheduled_time),
'stage_1_batch_size': config.stage_1_batch_size,
'stage_2_batch_size': config.stage_2_batch_size,
'stage_3_batch_size': config.stage_3_batch_size,
'stage_4_batch_size': config.stage_4_batch_size,
'stage_5_batch_size': config.stage_5_batch_size,
'stage_6_batch_size': config.stage_6_batch_size,
'within_stage_delay': config.within_stage_delay,
'between_stage_delay': config.between_stage_delay,
'last_run_at': config.last_run_at,
'next_run_at': config.next_run_at,
})
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['post'])
def run_now(self, request):
"""
POST /api/v1/automation/run_now/?site_id=123
Trigger automation run immediately
"""
site, error_response = self._get_site(request)
if error_response:
return error_response
try:
service = AutomationService(site.account, site)
run_id = service.start_automation(trigger_type='manual')
# Start async processing
from igny8_core.business.automation.tasks import run_automation_task
run_automation_task.delay(run_id)
return Response({
'run_id': run_id,
'message': 'Automation started'
})
except ValueError as e:
return Response(
{'error': str(e)},
status=status.HTTP_400_BAD_REQUEST
)
except Exception as e:
return Response(
{'error': f'Failed to start automation: {str(e)}'},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['get'])
def current_run(self, request):
"""
GET /api/v1/automation/current_run/?site_id=123
Get current automation run status
"""
site, error_response = self._get_site(request)
if error_response:
return error_response
run = AutomationRun.objects.filter(
site=site,
status__in=['running', 'paused']
).order_by('-started_at').first()
if not run:
return Response({'run': None})
return Response({
'run': {
'run_id': run.run_id,
'status': run.status,
'current_stage': run.current_stage,
'trigger_type': run.trigger_type,
'started_at': run.started_at,
'total_credits_used': run.total_credits_used,
'stage_1_result': run.stage_1_result,
'stage_2_result': run.stage_2_result,
'stage_3_result': run.stage_3_result,
'stage_4_result': run.stage_4_result,
'stage_5_result': run.stage_5_result,
'stage_6_result': run.stage_6_result,
'stage_7_result': run.stage_7_result,
}
})
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['post'])
def pause(self, request):
"""
POST /api/v1/automation/pause/?run_id=abc123
Pause automation run
"""
run_id = request.query_params.get('run_id')
if not run_id:
return Response(
{'error': 'run_id required'},
status=status.HTTP_400_BAD_REQUEST
)
try:
service = AutomationService.from_run_id(run_id)
service.pause_automation()
return Response({'message': 'Automation paused'})
except AutomationRun.DoesNotExist:
return Response(
{'error': 'Run not found'},
status=status.HTTP_404_NOT_FOUND
)
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['post'])
def resume(self, request):
"""
POST /api/v1/automation/resume/?run_id=abc123
Resume paused automation run
"""
run_id = request.query_params.get('run_id')
if not run_id:
return Response(
{'error': 'run_id required'},
status=status.HTTP_400_BAD_REQUEST
)
try:
service = AutomationService.from_run_id(run_id)
service.resume_automation()
# Resume async processing
from igny8_core.business.automation.tasks import resume_automation_task
resume_automation_task.delay(run_id)
return Response({'message': 'Automation resumed'})
except AutomationRun.DoesNotExist:
return Response(
{'error': 'Run not found'},
status=status.HTTP_404_NOT_FOUND
)
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['get'])
def history(self, request):
"""
GET /api/v1/automation/history/?site_id=123
Get automation run history
"""
site, error_response = self._get_site(request)
if error_response:
return error_response
runs = AutomationRun.objects.filter(
site=site
).order_by('-started_at')[:20]
return Response({
'runs': [
{
'run_id': run.run_id,
'status': run.status,
'trigger_type': run.trigger_type,
'started_at': run.started_at,
'completed_at': run.completed_at,
'total_credits_used': run.total_credits_used,
'current_stage': run.current_stage,
}
for run in runs
]
})
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['get'])
def logs(self, request):
"""
GET /api/v1/automation/logs/?run_id=abc123&lines=100
Get automation run logs
"""
run_id = request.query_params.get('run_id')
if not run_id:
return Response(
{'error': 'run_id required'},
status=status.HTTP_400_BAD_REQUEST
)
try:
run = AutomationRun.objects.get(run_id=run_id)
service = AutomationService(run.account, run.site)
lines = int(request.query_params.get('lines', 100))
log_lines = service.logger.get_activity_log(
run.account.id, run.site.id, run_id, lines
)
return Response({
'run_id': run_id,
'log': log_lines
})
except AutomationRun.DoesNotExist:
return Response(
{'error': 'Run not found'},
status=status.HTTP_404_NOT_FOUND
)
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['get'])
def estimate(self, request):
"""
GET /api/v1/automation/estimate/?site_id=123
Estimate credits needed for automation
"""
site, error_response = self._get_site(request)
if error_response:
return error_response
service = AutomationService(site.account, site)
estimated_credits = service.estimate_credits()
return Response({
'estimated_credits': estimated_credits,
'current_balance': site.account.credits,
'sufficient': site.account.credits >= (estimated_credits * 1.2)
})
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['get'])
def pipeline_overview(self, request):
"""
GET /api/v1/automation/pipeline_overview/?site_id=123
Get pipeline overview with pending counts for all stages
"""
site, error_response = self._get_site(request)
if error_response:
return error_response
from igny8_core.business.planning.models import Keywords, Clusters, ContentIdeas
from igny8_core.business.content.models import Tasks, Content, Images
from django.db.models import Count
def _counts_by_status(model, extra_filter=None, exclude_filter=None):
"""Return a dict of counts keyed by status and the total for a given model and site."""
qs = model.objects.filter(site=site)
if extra_filter:
qs = qs.filter(**extra_filter)
if exclude_filter:
qs = qs.exclude(**exclude_filter)
# Group by status when available
try:
rows = qs.values('status').annotate(count=Count('id'))
counts = {r['status']: r['count'] for r in rows}
total = sum(counts.values())
except Exception:
# Fallback: count all
total = qs.count()
counts = {'total': total}
return counts, total
# Stage 1: Keywords pending clustering (keep previous "pending" semantics but also return status breakdown)
stage_1_counts, stage_1_total = _counts_by_status(
Keywords,
extra_filter={'disabled': False}
)
# pending definition used by the UI previously (new & not clustered)
stage_1_pending = Keywords.objects.filter(
site=site,
status='new',
cluster__isnull=True,
disabled=False
).count()
# Stage 2: Clusters needing ideas
stage_2_counts, stage_2_total = _counts_by_status(
Clusters,
extra_filter={'disabled': False}
)
stage_2_pending = Clusters.objects.filter(
site=site,
status='new',
disabled=False
).exclude(
ideas__isnull=False
).count()
# Stage 3: Ideas ready to queue
stage_3_counts, stage_3_total = _counts_by_status(ContentIdeas)
stage_3_pending = ContentIdeas.objects.filter(
site=site,
status='new'
).count()
# Stage 4: Tasks ready for content generation
stage_4_counts, stage_4_total = _counts_by_status(Tasks)
stage_4_pending = Tasks.objects.filter(
site=site,
status='queued'
).count()
# Stage 5: Content ready for image prompts
# We will provide counts per content status and also compute pending as previous (draft with 0 images)
stage_5_counts, stage_5_total = _counts_by_status(Content)
stage_5_pending = Content.objects.filter(
site=site,
status='draft'
).annotate(
images_count=Count('images')
).filter(
images_count=0
).count()
# Stage 6: Image prompts ready for generation
stage_6_counts, stage_6_total = _counts_by_status(Images)
stage_6_pending = Images.objects.filter(
site=site,
status='pending'
).count()
# Stage 7: Content ready for review
# Provide counts per status for content and keep previous "review" pending count
stage_7_counts, stage_7_total = _counts_by_status(Content)
stage_7_ready = Content.objects.filter(
site=site,
status='review'
).count()
return Response({
'stages': [
{
'number': 1,
'name': 'Keywords → Clusters',
'pending': stage_1_pending,
'type': 'AI',
'counts': stage_1_counts,
'total': stage_1_total
},
{
'number': 2,
'name': 'Clusters → Ideas',
'pending': stage_2_pending,
'type': 'AI',
'counts': stage_2_counts,
'total': stage_2_total
},
{
'number': 3,
'name': 'Ideas → Tasks',
'pending': stage_3_pending,
'type': 'Local',
'counts': stage_3_counts,
'total': stage_3_total
},
{
'number': 4,
'name': 'Tasks → Content',
'pending': stage_4_pending,
'type': 'AI',
'counts': stage_4_counts,
'total': stage_4_total
},
{
'number': 5,
'name': 'Content → Image Prompts',
'pending': stage_5_pending,
'type': 'AI',
'counts': stage_5_counts,
'total': stage_5_total
},
{
'number': 6,
'name': 'Image Prompts → Images',
'pending': stage_6_pending,
'type': 'AI',
'counts': stage_6_counts,
'total': stage_6_total
},
{
'number': 7,
'name': 'Manual Review Gate',
'pending': stage_7_ready,
'type': 'Manual',
'counts': stage_7_counts,
'total': stage_7_total
}
]
})
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['get'], url_path='current_processing')
def current_processing(self, request):
"""
GET /api/v1/automation/current_processing/?site_id=123&run_id=abc
Get current processing state for active automation run
"""
site_id = request.query_params.get('site_id')
run_id = request.query_params.get('run_id')
if not site_id or not run_id:
return Response(
{'error': 'site_id and run_id required'},
status=status.HTTP_400_BAD_REQUEST
)
try:
# Get the site
site = get_object_or_404(Site, id=site_id, account=request.user.account)
# Get the run
run = AutomationRun.objects.get(run_id=run_id, site=site)
# If not running, return None
if run.status != 'running':
return Response({'data': None})
# Get current processing state
service = AutomationService.from_run_id(run_id)
state = service.get_current_processing_state()
return Response({'data': state})
except AutomationRun.DoesNotExist:
return Response(
{'error': 'Run not found'},
status=status.HTTP_404_NOT_FOUND
)
except Exception as e:
return Response(
{'error': str(e)},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['post'], url_path='pause')
def pause_automation(self, request):
"""
POST /api/v1/automation/pause/?site_id=123&run_id=abc
Pause current automation run
Will complete current queue item then pause before next item
"""
site_id = request.query_params.get('site_id')
run_id = request.query_params.get('run_id')
if not site_id or not run_id:
return Response(
{'error': 'site_id and run_id required'},
status=status.HTTP_400_BAD_REQUEST
)
try:
site = get_object_or_404(Site, id=site_id, account=request.user.account)
run = AutomationRun.objects.get(run_id=run_id, site=site)
if run.status != 'running':
return Response(
{'error': f'Cannot pause automation with status: {run.status}'},
status=status.HTTP_400_BAD_REQUEST
)
# Update status to paused
run.status = 'paused'
run.paused_at = timezone.now()
run.save(update_fields=['status', 'paused_at'])
return Response({
'message': 'Automation paused',
'status': run.status,
'paused_at': run.paused_at
})
except AutomationRun.DoesNotExist:
return Response(
{'error': 'Run not found'},
status=status.HTTP_404_NOT_FOUND
)
except Exception as e:
return Response(
{'error': str(e)},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['post'], url_path='resume')
def resume_automation(self, request):
"""
POST /api/v1/automation/resume/?site_id=123&run_id=abc
Resume paused automation run
Will continue from next queue item in current stage
"""
site_id = request.query_params.get('site_id')
run_id = request.query_params.get('run_id')
if not site_id or not run_id:
return Response(
{'error': 'site_id and run_id required'},
status=status.HTTP_400_BAD_REQUEST
)
try:
site = get_object_or_404(Site, id=site_id, account=request.user.account)
run = AutomationRun.objects.get(run_id=run_id, site=site)
if run.status != 'paused':
return Response(
{'error': f'Cannot resume automation with status: {run.status}'},
status=status.HTTP_400_BAD_REQUEST
)
# Update status to running
run.status = 'running'
run.resumed_at = timezone.now()
run.save(update_fields=['status', 'resumed_at'])
# Queue continuation task
from igny8_core.business.automation.tasks import continue_automation_task
continue_automation_task.delay(run_id)
return Response({
'message': 'Automation resumed',
'status': run.status,
'resumed_at': run.resumed_at
})
except AutomationRun.DoesNotExist:
return Response(
{'error': 'Run not found'},
status=status.HTTP_404_NOT_FOUND
)
except Exception as e:
return Response(
{'error': str(e)},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['post'], url_path='cancel')
def cancel_automation(self, request):
"""
POST /api/v1/automation/cancel/?site_id=123&run_id=abc
Cancel current automation run
Will complete current queue item then stop permanently
"""
site_id = request.query_params.get('site_id')
run_id = request.query_params.get('run_id')
if not site_id or not run_id:
return Response(
{'error': 'site_id and run_id required'},
status=status.HTTP_400_BAD_REQUEST
)
try:
site = get_object_or_404(Site, id=site_id, account=request.user.account)
run = AutomationRun.objects.get(run_id=run_id, site=site)
if run.status not in ['running', 'paused']:
return Response(
{'error': f'Cannot cancel automation with status: {run.status}'},
status=status.HTTP_400_BAD_REQUEST
)
# Update status to cancelled
run.status = 'cancelled'
run.cancelled_at = timezone.now()
run.completed_at = timezone.now()
run.save(update_fields=['status', 'cancelled_at', 'completed_at'])
return Response({
'message': 'Automation cancelled',
'status': run.status,
'cancelled_at': run.cancelled_at
})
except AutomationRun.DoesNotExist:
return Response(
{'error': 'Run not found'},
status=status.HTTP_404_NOT_FOUND
)
except Exception as e:
return Response(
{'error': str(e)},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)

View File

@@ -1,4 +0,0 @@
"""
Billing business logic - CreditTransaction, CreditUsageLog models and services
"""

View File

@@ -1,168 +0,0 @@
"""
Billing Business Logic Admin
"""
from django.contrib import admin
from django.utils.html import format_html
from igny8_core.admin.base import AccountAdminMixin
from .models import (
CreditCostConfig,
AccountPaymentMethod,
Invoice,
Payment,
CreditPackage,
PaymentMethodConfig,
)
@admin.register(CreditCostConfig)
class CreditCostConfigAdmin(admin.ModelAdmin):
list_display = [
'operation_type',
'display_name',
'credits_cost_display',
'unit',
'is_active',
'cost_change_indicator',
'updated_at',
'updated_by'
]
list_filter = ['is_active', 'unit', 'updated_at']
search_fields = ['operation_type', 'display_name', 'description']
fieldsets = (
('Operation', {
'fields': ('operation_type', 'display_name', 'description')
}),
('Cost Configuration', {
'fields': ('credits_cost', 'unit', 'is_active')
}),
('Audit Trail', {
'fields': ('previous_cost', 'updated_by', 'created_at', 'updated_at'),
'classes': ('collapse',)
}),
)
readonly_fields = ['created_at', 'updated_at', 'previous_cost']
def credits_cost_display(self, obj):
"""Show cost with color coding"""
if obj.credits_cost >= 20:
color = 'red'
elif obj.credits_cost >= 10:
color = 'orange'
else:
color = 'green'
return format_html(
'<span style="color: {}; font-weight: bold;">{} credits</span>',
color,
obj.credits_cost
)
credits_cost_display.short_description = 'Cost'
def cost_change_indicator(self, obj):
"""Show if cost changed recently"""
if obj.previous_cost is not None:
if obj.credits_cost > obj.previous_cost:
icon = '📈' # Increased
color = 'red'
elif obj.credits_cost < obj.previous_cost:
icon = '📉' # Decreased
color = 'green'
else:
icon = '➡️' # Same
color = 'gray'
return format_html(
'{} <span style="color: {};">({} → {})</span>',
icon,
color,
obj.previous_cost,
obj.credits_cost
)
return ''
cost_change_indicator.short_description = 'Recent Change'
def save_model(self, request, obj, form, change):
"""Track who made the change"""
obj.updated_by = request.user
super().save_model(request, obj, form, change)
@admin.register(Invoice)
class InvoiceAdmin(AccountAdminMixin, admin.ModelAdmin):
    list_display = [
        'invoice_number',
        'account',
        'status',
        'total',
        'currency',
        'invoice_date',
        'due_date',
        'subscription',
    ]
    list_filter = ['status', 'currency', 'invoice_date', 'account']
    search_fields = ['invoice_number', 'account__name', 'subscription__id']
    readonly_fields = ['created_at', 'updated_at']


@admin.register(Payment)
class PaymentAdmin(AccountAdminMixin, admin.ModelAdmin):
    list_display = [
        'id',
        'invoice',
        'account',
        'payment_method',
        'status',
        'amount',
        'currency',
        'processed_at',
    ]
    list_filter = ['status', 'payment_method', 'currency', 'created_at']
    search_fields = ['invoice__invoice_number', 'account__name', 'stripe_payment_intent_id', 'paypal_order_id']
    readonly_fields = ['created_at', 'updated_at']


@admin.register(CreditPackage)
class CreditPackageAdmin(admin.ModelAdmin):
    list_display = ['name', 'slug', 'credits', 'price', 'discount_percentage', 'is_active', 'is_featured', 'sort_order']
    list_filter = ['is_active', 'is_featured']
    search_fields = ['name', 'slug']
    readonly_fields = ['created_at', 'updated_at']


@admin.register(PaymentMethodConfig)
class PaymentMethodConfigAdmin(admin.ModelAdmin):
    list_display = ['country_code', 'payment_method', 'is_enabled', 'display_name', 'sort_order']
    list_filter = ['payment_method', 'is_enabled', 'country_code']
    search_fields = ['country_code', 'display_name', 'payment_method']
    readonly_fields = ['created_at', 'updated_at']


@admin.register(AccountPaymentMethod)
class AccountPaymentMethodAdmin(admin.ModelAdmin):
    list_display = [
        'display_name',
        'type',
        'account',
        'is_default',
        'is_enabled',
        'country_code',
        'is_verified',
        'updated_at',
    ]
    list_filter = ['type', 'is_default', 'is_enabled', 'is_verified', 'country_code']
    search_fields = ['display_name', 'account__name', 'account__id']
    readonly_fields = ['created_at', 'updated_at']
    fieldsets = (
        ('Payment Method', {
            'fields': ('account', 'type', 'display_name', 'is_default', 'is_enabled', 'is_verified', 'country_code')
        }),
        ('Instructions / Metadata', {
            'fields': ('instructions', 'metadata')
        }),
        ('Timestamps', {
            'fields': ('created_at', 'updated_at'),
            'classes': ('collapse',)
        }),
    )
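The `credits_cost_display` color rules at the top of this diff can be factored into a pure helper that is easy to unit-test outside the admin. The excerpt starts mid-method, so the top (red) threshold below is an assumption; only the `>= 10` (orange) and fallback (green) branches are visible:

```python
def cost_color(credits_cost: int, high_threshold: int = 50) -> str:
    """Mirror of the credits_cost_display thresholds as a pure function.

    high_threshold is an assumption: the red branch is cut off in the
    excerpt, which only shows the >= 10 (orange) and else (green) cases.
    """
    if credits_cost >= high_threshold:
        return 'red'
    elif credits_cost >= 10:
        return 'orange'
    return 'green'

print(cost_color(5), cost_color(25), cost_color(80))  # green orange red
```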

View File

@@ -1,21 +0,0 @@
"""
Credit Cost Constants
Phase 0: Credit-only system costs per operation
"""
CREDIT_COSTS = {
    'clustering': 10,                 # Per clustering request
    'idea_generation': 15,            # Per cluster → ideas request
    'content_generation': 1,          # Per 100 words
    'image_prompt_extraction': 2,     # Per content piece
    'image_generation': 5,            # Per image
    'linking': 8,                     # Per content piece (NEW)
    'optimization': 1,                # Per 200 words (NEW)
    'site_structure_generation': 50,  # Per site blueprint (NEW)
    'site_page_generation': 20,       # Per page (NEW)

    # Legacy operation types (for backward compatibility)
    'ideas': 15,    # Alias for idea_generation
    'content': 3,   # Legacy: 3 credits per content piece
    'images': 5,    # Alias for image_generation
    'reparse': 1,   # Per reparse
}
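The per-word-count entries above (1 credit per 100 words of generation, 1 per 200 words of optimization) imply a rounding rule the constants alone don't spell out. A minimal sketch, assuming costs round up to the next full block — the helper name and the rounding choice are assumptions, not part of the source:

```python
import math

# Subset of the Phase 0 cost table relevant to word-count billing
CREDIT_COSTS = {
    'content_generation': 1,  # per 100 words
    'optimization': 1,        # per 200 words
}
WORDS_PER_UNIT = {'content_generation': 100, 'optimization': 200}

def word_count_cost(operation_type: str, word_count: int) -> int:
    """Hypothetical helper: charge one unit per started block of words."""
    block = WORDS_PER_UNIT[operation_type]
    return CREDIT_COSTS[operation_type] * math.ceil(word_count / block)

print(word_count_cost('content_generation', 250))  # 3 blocks of 100 words
print(word_count_cost('optimization', 250))        # 2 blocks of 200 words
```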

View File

@@ -1,14 +0,0 @@
"""
Billing Exceptions
"""
class InsufficientCreditsError(Exception):
    """Raised when account doesn't have enough credits"""
    pass


class CreditCalculationError(Exception):
    """Raised when credit calculation fails"""
    pass
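A sketch of how callers would typically use these exceptions; the `deduct` helper below is hypothetical and only illustrates the fail-before-going-negative pattern, it is not part of the billing service:

```python
class InsufficientCreditsError(Exception):
    """Raised when account doesn't have enough credits"""
    pass

def deduct(balance: int, cost: int) -> int:
    """Hypothetical deduction: fail loudly instead of going negative."""
    if balance < cost:
        raise InsufficientCreditsError(f"need {cost} credits, have {balance}")
    return balance - cost

try:
    deduct(5, 10)
except InsufficientCreditsError as exc:
    print(f"blocked: {exc}")  # blocked: need 10 credits, have 5
```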

View File

@@ -1 +0,0 @@
"""Management commands package"""

View File

@@ -1 +0,0 @@
"""Commands package"""

View File

@@ -1,103 +0,0 @@
"""
Initialize Credit Cost Configurations
Migrates hardcoded CREDIT_COSTS constants to database
"""
from django.core.management.base import BaseCommand
from igny8_core.business.billing.models import CreditCostConfig
from igny8_core.business.billing.constants import CREDIT_COSTS
class Command(BaseCommand):
    help = 'Initialize credit cost configurations from constants'

    def handle(self, *args, **options):
        """Migrate hardcoded costs to database"""
        operation_metadata = {
            'clustering': {
                'display_name': 'Auto Clustering',
                'description': 'Group keywords into semantic clusters using AI',
                'unit': 'per_request'
            },
            'idea_generation': {
                'display_name': 'Idea Generation',
                'description': 'Generate content ideas from keyword clusters',
                'unit': 'per_request'
            },
            'content_generation': {
                'display_name': 'Content Generation',
                'description': 'Generate article content using AI',
                'unit': 'per_100_words'
            },
            'image_prompt_extraction': {
                'display_name': 'Image Prompt Extraction',
                'description': 'Extract image prompts from content',
                'unit': 'per_request'
            },
            'image_generation': {
                'display_name': 'Image Generation',
                'description': 'Generate images using AI (DALL-E, Runware)',
                'unit': 'per_image'
            },
            'linking': {
                'display_name': 'Content Linking',
                'description': 'Generate internal links between content',
                'unit': 'per_request'
            },
            'optimization': {
                'display_name': 'Content Optimization',
                'description': 'Optimize content for SEO',
                'unit': 'per_200_words'
            },
            'site_structure_generation': {
                'display_name': 'Site Structure Generation',
                'description': 'Generate complete site blueprint',
                'unit': 'per_request'
            },
            'site_page_generation': {
                'display_name': 'Site Page Generation',
                'description': 'Generate site pages from blueprint',
                'unit': 'per_item'
            },
            'reparse': {
                'display_name': 'Content Reparse',
                'description': 'Reparse and update existing content',
                'unit': 'per_request'
            },
        }

        created_count = 0
        updated_count = 0

        for operation_type, cost in CREDIT_COSTS.items():
            # Skip legacy aliases
            if operation_type in ['ideas', 'content', 'images']:
                continue

            metadata = operation_metadata.get(operation_type, {})
            config, created = CreditCostConfig.objects.get_or_create(
                operation_type=operation_type,
                defaults={
                    'credits_cost': cost,
                    'display_name': metadata.get('display_name', operation_type.replace('_', ' ').title()),
                    'description': metadata.get('description', ''),
                    'unit': metadata.get('unit', 'per_request'),
                    'is_active': True
                }
            )

            if created:
                created_count += 1
                self.stdout.write(
                    self.style.SUCCESS(f'✅ Created: {config.display_name} - {cost} credits')
                )
            else:
                updated_count += 1
                self.stdout.write(
                    self.style.WARNING(f'⚠️ Already exists: {config.display_name}')
                )

        self.stdout.write(
            self.style.SUCCESS(f'\n✅ Complete: {created_count} created, {updated_count} already existed')
        )
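The command's `get_or_create` loop is idempotent: re-running it creates only the missing rows and never overwrites a cost an admin has since edited, because `defaults` only applies on creation. The same pattern, with a plain dict standing in for the `CreditCostConfig` table:

```python
CREDIT_COSTS = {'clustering': 10, 'idea_generation': 15, 'ideas': 15}
LEGACY_ALIASES = {'ideas', 'content', 'images'}

table = {}  # stand-in for CreditCostConfig rows keyed by operation_type

def sync_costs(costs: dict) -> int:
    """Insert missing rows only, mirroring get_or_create semantics."""
    created = 0
    for operation_type, cost in costs.items():
        if operation_type in LEGACY_ALIASES:
            continue  # aliases share a cost with their canonical operation
        if operation_type not in table:
            table[operation_type] = cost
            created += 1
    return created

print(sync_costs(CREDIT_COSTS))  # 2 — first run creates the canonical rows
print(sync_costs(CREDIT_COSTS))  # 0 — second run is a no-op
```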

View File

@@ -1,99 +0,0 @@
# Generated by Django for IGNY8 Billing App
# Date: December 4, 2025
from django.conf import settings
import django.core.validators
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):

    initial = True

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]

    operations = [
        migrations.CreateModel(
            name='CreditTransaction',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('transaction_type', models.CharField(
                    choices=[
                        ('purchase', 'Purchase'),
                        ('deduction', 'Deduction'),
                        ('refund', 'Refund'),
                        ('grant', 'Grant'),
                        ('adjustment', 'Manual Adjustment'),
                    ],
                    max_length=20
                )),
                ('amount', models.IntegerField(help_text='Positive for additions, negative for deductions')),
                ('balance_after', models.IntegerField(help_text='Account balance after this transaction')),
                ('description', models.CharField(max_length=255)),
                ('metadata', models.JSONField(default=dict, help_text='Additional transaction details')),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('account', models.ForeignKey(
                    on_delete=django.db.models.deletion.CASCADE,
                    related_name='credit_transactions',
                    to=settings.AUTH_USER_MODEL
                )),
            ],
            options={
                'verbose_name': 'Credit Transaction',
                'verbose_name_plural': 'Credit Transactions',
                'db_table': 'igny8_credit_transactions',
                'ordering': ['-created_at'],
                'indexes': [
                    models.Index(fields=['account', '-created_at'], name='idx_credit_txn_account_date'),
                    models.Index(fields=['transaction_type'], name='idx_credit_txn_type'),
                ],
            },
        ),
        migrations.CreateModel(
            name='CreditUsageLog',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('operation_type', models.CharField(
                    choices=[
                        ('clustering', 'Clustering'),
                        ('idea_generation', 'Idea Generation'),
                        ('content_generation', 'Content Generation'),
                        ('image_generation', 'Image Generation'),
                        ('image_prompt_extraction', 'Image Prompt Extraction'),
                        ('taxonomy_generation', 'Taxonomy Generation'),
                        ('content_rewrite', 'Content Rewrite'),
                        ('keyword_research', 'Keyword Research'),
                        ('site_page_generation', 'Site Page Generation'),
                    ],
                    max_length=50
                )),
                ('credits_used', models.IntegerField()),
                ('cost_usd', models.DecimalField(decimal_places=2, max_digits=10, null=True, blank=True)),
                ('model_used', models.CharField(max_length=100, blank=True)),
                ('tokens_input', models.IntegerField(null=True, blank=True)),
                ('tokens_output', models.IntegerField(null=True, blank=True)),
                ('related_object_type', models.CharField(max_length=50, blank=True)),
                ('related_object_id', models.IntegerField(null=True, blank=True)),
                ('metadata', models.JSONField(default=dict)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('account', models.ForeignKey(
                    on_delete=django.db.models.deletion.CASCADE,
                    related_name='credit_usage_logs',
                    to=settings.AUTH_USER_MODEL
                )),
            ],
            options={
                'verbose_name': 'Credit Usage Log',
                'verbose_name_plural': 'Credit Usage Logs',
                'db_table': 'igny8_credit_usage_logs',
                'ordering': ['-created_at'],
                'indexes': [
                    models.Index(fields=['account', '-created_at'], name='idx_credit_usage_account_date'),
                    models.Index(fields=['operation_type'], name='idx_credit_usage_operation'),
                ],
            },
        ),
    ]
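`CreditTransaction` stores a signed `amount` plus the resulting `balance_after`, so each row is auditable on its own without replaying the whole history. A toy ledger showing that invariant — illustrative only; the real model persists rows via the ORM:

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    """Toy stand-in for CreditTransaction's signed-amount ledger."""
    balance: int = 0
    rows: list = field(default_factory=list)

    def post(self, transaction_type: str, amount: int) -> int:
        # Positive for additions (purchase/grant), negative for deductions
        self.balance += amount
        self.rows.append((transaction_type, amount, self.balance))
        return self.balance

ledger = Ledger()
ledger.post('purchase', 100)
ledger.post('deduction', -15)
print(ledger.rows[-1])  # ('deduction', -15, 85)
```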

Some files were not shown because too many files have changed in this diff.