176 Commits

Author SHA1 Message Date
IGNY8 VPS (Salman)
e317e1de26 docs: update prelaunch pending - mark phases 1, 5, 6 as completed
Phase 1 (Code Cleanup):
- Removed 3,218 lines, 24 files changed
- Cleaned up 11 empty folders, removed test files
- Removed 17 console.log statements
- All quality checks passed

Phase 5 (UX Improvements):
- Enhanced search modal with filters and context snippets
- Added 25+ help questions across 8 topics
- Implemented smart phrase matching and keyword coverage
- Added recent searches and suggested questions

Phase 6 (Data Backup & Cleanup):
- Created export_system_config Django management command
- Created cleanup_user_data Django management command
- Documented full 300+ line backup/cleanup guide
- Ready for V1.0 production deployment

Image regeneration feature deferred to post-launch (Phase 9).
2026-01-09 16:40:41 +00:00
IGNY8 VPS (Salman)
f04eb0a900 feat(search): add comprehensive keyword coverage and intelligent phrase matching
- Added 10+ new keyword categories (task, cluster, billing, invoice, payment, plan, usage, schedule, wordpress, writing, picture, user, ai)
- Implemented smart phrase normalization to strip filler words (how, to, what, is, etc.)
- Added duplicate prevention using Set to avoid showing same question multiple times
- Enhanced matching logic to check: direct keyword match, normalized term match, and question text match
- Supports basic stemming (plurals -> singular: tasks -> task)
- Now searches: 'how to import keywords' correctly matches 'import' in knowledge base
- Fixed duplicate keywords field in Team Management navigation item

This ensures all common search terms trigger relevant help suggestions with natural language support.
2026-01-09 16:37:34 +00:00
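The matching steps described in the commit above (filler-word stripping, naive plural stemming, Set-based deduplication) live in the app's TypeScript search code; as a minimal language-agnostic sketch of the same logic, with all names and the knowledge-base shape hypothetical:

```python
FILLER_WORDS = {"how", "to", "what", "is", "a", "the", "do", "i"}

def normalize(phrase: str) -> list[str]:
    """Strip filler words and reduce simple plurals (tasks -> task)."""
    terms = []
    for word in phrase.lower().split():
        if word in FILLER_WORDS:
            continue
        # naive stemming: trim a trailing 's' on longer words
        terms.append(word[:-1] if word.endswith("s") and len(word) > 3 else word)
    return terms

def match_questions(query: str, knowledge_base: dict[str, list[str]]) -> list[str]:
    """Return questions whose keywords or text match any normalized query term.
    A set prevents the same question from appearing more than once."""
    terms = normalize(query)
    seen: set[str] = set()
    results = []
    for question, keywords in knowledge_base.items():
        hit = any(t in keywords or t in question.lower() for t in terms)
        if hit and question not in seen:
            seen.add(question)
            results.append(question)
    return results

kb = {"How do I import keywords?": ["import", "keyword", "csv"]}
print(match_questions("how to import keywords", kb))
# → ['How do I import keywords?']
```

With this flow, "how to import keywords" normalizes to `["import", "keyword"]`, which is how the commit's example query lands on the right knowledge-base entry.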
IGNY8 VPS (Salman)
264c720e3e Phase 6: Add data backup and cleanup management commands
- Created export_system_config.py command:
  * Exports Plans, Credit Costs, AI Models, Industries, Sectors, etc.
  * Saves to JSON files for V1.0 configuration backup
  * Includes metadata with export timestamp and stats
  * Usage: python manage.py export_system_config --output-dir=backups/config

- Created cleanup_user_data.py command:
  * Safely deletes all user-generated data
  * DRY-RUN mode to preview deletions
  * Confirmation prompt for safety
  * Production environment protection
  * Deletes: Sites, Keywords, Content, Images, Transactions, Logs, etc.
  * Preserves: System config and user accounts
  * Usage: python manage.py cleanup_user_data --dry-run
          python manage.py cleanup_user_data --confirm

Both commands essential for V1.0 pre-launch cleanup
2026-01-09 15:39:10 +00:00
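The safety flow described for cleanup_user_data — production protection, a dry-run preview, and an explicit confirmation flag — can be sketched in plain Python (the real version is a Django management command; the table names, flag names, and error message here are illustrative):

```python
def cleanup_user_data(tables: dict[str, int], *, dry_run: bool = False,
                      confirm: bool = False, production: bool = True) -> list[str]:
    """Sketch of the cleanup_user_data safety flow.

    `tables` maps a model name to its row count. In production, deletion is
    refused unless --confirm is passed; --dry-run only reports what would go.
    """
    if production and not confirm and not dry_run:
        raise RuntimeError(
            "refusing to delete in production: pass --dry-run to preview "
            "or --confirm to delete"
        )
    report = []
    for name in list(tables):
        rows = tables[name]
        if dry_run:
            report.append(f"[dry-run] would delete {rows} {name} rows")
        else:
            tables[name] = 0  # stand-in for Model.objects.all().delete()
            report.append(f"deleted {rows} {name} rows")
    return report
```

In the actual command these guards would sit in a `BaseCommand.handle()`, with `--dry-run` and `--confirm` registered via `add_arguments()`.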
IGNY8 VPS (Salman)
0921adbabb Phase 5: Enhanced search modal with filters and recent searches
- Added search filters (All, Workflow, Setup, Account, Help)
- Implemented recent searches (stored in localStorage, max 5)
- Enhanced search results with category display
- Improved result filtering by type and category
- Updated search items with proper categorization
- Keyboard shortcut Cmd/Ctrl+K already working ✓
2026-01-09 15:36:18 +00:00
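The recent-searches behavior above (newest first, deduplicated, capped at 5) is persisted in localStorage by the app; the list-maintenance logic itself is a few lines, sketched here in Python with a hypothetical function name:

```python
def push_recent_search(recent: list[str], query: str, limit: int = 5) -> list[str]:
    """Prepend `query`, drop any earlier duplicate, cap at `limit` entries.
    The cap of 5 matches the commit above; the rest is an assumption."""
    return [query, *[q for q in recent if q != query]][:limit]

history: list[str] = []
for q in ["setup", "billing", "setup", "keywords"]:
    history = push_recent_search(history, q)
print(history)
# → ['keywords', 'setup', 'billing']
```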
IGNY8 VPS (Salman)
82d6a9e879 Cleanup: Remove one-time test files
- Removed test-module-settings.html (manual API test file)
- Removed test_urls.py (one-time URL verification script)
- Removed test_stage1_refactor.py (stage 1 refactor verification)
- Kept proper test suites in tests/ folders
2026-01-09 15:33:37 +00:00
IGNY8 VPS (Salman)
0526553c9b Phase 1: Code cleanup - remove unused pages, components, and console.logs
- Deleted 6 empty folders (pages/Admin, pages/admin, pages/settings, components/debug, components/widgets, components/metrics)
- Removed unused template components:
  - ecommerce/ (7 files)
  - sample-componeents/ (2 HTML files)
  - charts/bar/ and charts/line/
  - tables/BasicTables/
- Deleted deprecated file: CurrentProcessingCard.old.tsx
- Removed console.log statements from:
  - UserProfile components (UserMetaCard, UserAddressCard, UserInfoCard)
  - Automation/ConfigModal
  - ImageQueueModal (8 statements)
  - ImageGenerationCard (7 statements)
- Applied ESLint auto-fixes (9 errors fixed)
- All builds pass ✓
- TypeScript compiles without errors ✓
2026-01-09 15:22:23 +00:00
IGNY8 VPS (Salman)
7bb9d813f2 pending reorg 2026-01-09 14:31:05 +00:00
IGNY8 VPS (Salman)
59f7455521 final prelaunch pending updated plan 2026-01-08 09:37:53 +00:00
IGNY8 VPS (Salman)
34c8cc410a v1.6.2 release: Marketing site design refinements
CHANGELOG Updates:
- Added v1.6.2 section with comprehensive design refinement details
- Updated version history table with v1.6.2 entry
- Documented all gradient updates (primary + success mix)
- Documented shadow weight reductions
- Documented automation icon simplifications
- Documented new Upcoming Features page
- Listed all 8 files changed in marketing site
- Added git commit references

Documentation Updates:
- Updated docs/INDEX.md version from 1.6.1 to 1.6.2

Version 1.6.2 Summary:
- Brand Color Consistency - All gradients use primary + success
- Shadow Refinements - Reduced from 2xl to md/lg for cleaner look
- Automation Icons - Simplified from colorful mix to consistent primary
- Upcoming Features Page - 362 lines, 3 phases, 10 features
- Marketing Pages - Home, Product, Pricing, Solutions, Partners, CaseStudies updated

Design Principles Applied:
- Brand consistency (matching logo colors)
- Visual hierarchy (reduced shadows)
- Clean & branded (simplified icons)
- Subtle & elegant (modern appearance)
- Content first (reduced decorative effects)
2026-01-08 09:26:59 +00:00
IGNY8 VPS (Salman)
4f99fc1451 Update all CTA section backgrounds to primary + success gradient
Replaced pinkish/purple gradient backgrounds with brand colors on:
- Partners page: Footer CTA section
- CaseStudies page: Footer CTA section
- Pricing page: Footer CTA section
- Solutions page: Footer CTA section
- Product page: Footer CTA section

Changed from: from-primary via-purple to-purple-400
Changed to: from-success via-primary-dark to-primary

All CTA sections before footer now use consistent brand gradient matching the logo colors (primary + success mix) instead of purple/pink tones.
2026-01-08 09:12:01 +00:00
IGNY8 VPS (Salman)
84ed711f6d Reduce shadow weights and simplify automation icons
Product Module Screenshots:
- Reduced shadow from shadow-2xl to shadow-md for cleaner look
- Reduced blur from blur-xl to blur-lg on gradient glows
- Reduced inset values for more subtle frame effects

Hero Dashboard:
- Reduced shadow from shadow-2xl to shadow-lg
- Reduced blur effects from blur-3xl/blur-2xl to blur-xl/blur-lg
- Toned down opacity on glow effects

Automation Engine Section:
- Simplified numbered badges from colorful mix to consistent primary gradient
- Changed from w-10 h-10 to w-9 h-9 for cleaner appearance
- Removed heavy shadow-lg and glow effects, using subtle shadow-sm
- Removed hover glow animations for cleaner branded look
- Simplified icon badge from shadow-lg to shadow-sm
- Reduced automation dashboard shadow from shadow-2xl to shadow-md
- Updated glow colors to primary + success (matching brand)
2026-01-08 09:03:44 +00:00
IGNY8 VPS (Salman)
7c79bdcc6c Update gradient backgrounds from purple/pink to primary + success mix
- Home page hero: Changed from purple via purple-400 to primary via primary-dark to success
- Home page CTA: Changed from purple-400 via purple to success via primary-dark
- Upcoming page hero: Changed from purple via purple-400 to primary via primary-dark to success
- Upcoming page CTA: Changed from purple via purple-400 to success via primary-dark
- Updated radial glow overlays to use success RGB values instead of hardcoded purple
- Matches logo gradient colors (primary + success mix)
2026-01-08 08:54:28 +00:00
IGNY8 VPS (Salman)
74370685f4 Add Upcoming Features page with timeline-based roadmap
New Page: /upcoming
- Created comprehensive Upcoming Features page with 3 timeline phases
- Phase 1 (Feb 2026): Linker Module, Optimizer Module
- Phase 2 (Q2 2026): Products Pages, Services Pages, Company Pages
- Phase 3 (Q3-Q4 2026): Socializer, Video Creator, Site Builder, Analytics

Features:
- Timeline-based organization with unique color badges
- Rich visual design with gradients and hover effects
- Detailed feature descriptions with bullet points
- Icons for each module
- CTA sections for conversion

Integration:
- Added route to MarketingApp.tsx
- Added 'Upcoming Features' link to footer Resources section
- Updated FINAL-PRELAUNCH.md to mark task 8.3 complete

All upcoming features from docs integrated:
- Internal/external linking with clustering
- Content re-optimization
- Product/service/company page generation
- Social media multi-platform publishing
- Video content creation and publishing
- Site builder (SEO holy grail)
- Advanced analytics
2026-01-08 08:46:01 +00:00
IGNY8 VPS (Salman)
e2a1c15183 Update FINAL-PRELAUNCH.md: Mark Phase 7 & 8 tasks complete
Phase 7 Documentation:
- Updated docs/INDEX.md to v1.6.1
- Updated CHANGELOG.md with detailed v1.6.1 changes
- Updated Help.tsx with 8-stage pipeline and visual flowcharts
- Synced all documentation with codebase

Phase 8 Frontend Marketing:
- Updated Home.tsx with accurate 8-stage pipeline
- Updated Product.tsx with current module architecture
- Updated Tour.tsx with 5 detailed steps
- Updated Solutions.tsx with accurate outcomes
- Updated Pricing.tsx with correct features and providers
- All marketing pages synced with app

Phase 7.2 (Media) and 8.3 (Upcoming Features) remain pending
2026-01-08 08:04:03 +00:00
IGNY8 VPS (Salman)
51512d6c91 Update Tour and Solutions pages with accurate pipeline
- Update Tour.tsx with 5 steps including 8-stage pipeline details
- Fix automation section to show 7 handoffs for 8 stages
- Update Solutions.tsx outcomes for each persona (Publishers, Agencies, In-house)
- Add Publisher module and WordPress publishing details
- Add credit-based tracking and multi-site support details
2026-01-08 07:46:50 +00:00
IGNY8 VPS (Salman)
4e9f2d9dbc v1.6.1 release: Update docs, marketing pages with 8-stage pipeline
- Bump version to 1.6.1 in CHANGELOG.md and docs/INDEX.md
- Add detailed v1.6.1 changelog for email system (SMTP, auth flows, templates)
- Update marketing pages (Home, Product, Pricing) with accurate 8-stage pipeline
- Fix automation handoff count (7 handoffs for 8 stages)
- Update feature matrix in Pricing for image providers
- Add visual pipeline components and stage descriptions
- Sync marketing content with current codebase architecture
2026-01-08 07:45:35 +00:00
IGNY8 VPS (Salman)
d4ecddba22 SMTP and other email related settings 2026-01-08 06:45:30 +00:00
IGNY8 VPS (Salman)
3651ee9ed4 Email configs & setup 2026-01-08 05:41:28 +00:00
IGNY8 VPS (Salman)
7da3334c03 Reorg docs 2026-01-08 00:58:28 +00:00
IGNY8 VPS (Salman)
3028db5197 Version 1.6.0 2026-01-08 00:36:32 +00:00
IGNY8 VPS (Salman)
7ad1f6bdff Final bank, Stripe and PayPal sandbox completed 2026-01-08 00:12:41 +00:00
IGNY8 VPS (Salman)
ad75fa031e payment gateways and plans billing and signup pages refactored 2026-01-07 13:02:53 +00:00
IGNY8 VPS (Salman)
ad1756c349 fixing and creating mess 2026-01-07 10:19:34 +00:00
IGNY8 VPS (Salman)
0386d4bf33 Stripe payments and PK payments and many other backend and frontend issues 2026-01-07 05:51:36 +00:00
IGNY8 VPS (Salman)
87d1662a18 payment options fixes 2026-01-07 01:46:28 +00:00
IGNY8 VPS (Salman)
909ed1cb17 Phase 3 & Phase 4 - Completed 2026-01-07 00:57:26 +00:00
IGNY8 VPS (Salman)
4b6a03a898 reorg docs 2026-01-06 22:08:40 +00:00
IGNY8 VPS (Salman)
6c8e5fdd57 3rd party integrations plan - payments & email services 2026-01-06 22:07:19 +00:00
IGNY8 VPS (Salman)
52603f2deb Version 1.5.0 2026-01-06 21:45:32 +00:00
IGNY8 VPS (Salman)
9ca048fb9d Phase 3 - credits, usage, plans app pages #Migrations 2026-01-06 21:28:13 +00:00
IGNY8 VPS (Salman)
cb8e747387 Phase 2, 2.1 and 2.2 complete 2026-01-05 08:17:56 +00:00
IGNY8 VPS (Salman)
abc6c011ea phase 1 complete 2026-01-05 05:06:30 +00:00
IGNY8 VPS (Salman)
de0e42cca8 Phase 1 fixes 2026-01-05 04:52:16 +00:00
IGNY8 VPS (Salman)
ff44827b35 Phase 1 missing file 2026-01-05 03:41:17 +00:00
IGNY8 VPS (Salman)
e93ea77c2b Pre-launch plan phase 1 complete 2026-01-05 03:40:39 +00:00
IGNY8 VPS (Salman)
1f2e734ea2 Version 1.5.0 Planning 2026-01-05 02:29:08 +00:00
IGNY8 VPS (Salman)
6947819742 Version 1.4.0 2026-01-05 01:48:23 +00:00
IGNY8 VPS (Salman)
dc7a459ebb Django admin Groups reorg, frontend updates for site settings, #Migration runs 2026-01-05 01:21:52 +00:00
IGNY8 VPS (Salman)
6e30d2d4e8 Django admin cleanup 2026-01-04 06:04:37 +00:00
IGNY8 VPS (Salman)
b2922ebec5 refactor-4th-jan-plan 2026-01-04 00:39:44 +00:00
IGNY8 VPS (Salman)
c4de8994dd image gen mess 2026-01-03 22:31:30 +00:00
IGNY8 VPS (Salman)
f518e1751b Image generation service and models revamp - #Migration Runs 2026-01-03 20:08:16 +00:00
IGNY8 VPS (Salman)
a70f8cdd01 generate image button 2026-01-03 19:09:31 +00:00
IGNY8 VPS (Salman)
a1016ec1c2 working models and image generation model and admin pages 2026-01-03 17:28:18 +00:00
alorig
52600c9dca Update CHANGELOG.md 2026-01-03 21:23:08 +05:00
IGNY8 VPS (Salman)
f10916bfab Version 1.3.2 2026-01-03 09:35:43 +00:00
IGNY8 VPS (Salman)
f1ba0aa531 Section 2 Completed 2026-01-03 09:07:47 +00:00
IGNY8 VPS (Salman)
4d6ee21408 Section 2 Part 3 2026-01-03 08:11:41 +00:00
IGNY8 VPS (Salman)
935c7234b1 Section 2 part 2 2026-01-03 04:39:06 +00:00
IGNY8 VPS (Salman)
94d37a0d84 Section 2 2.1 2.4 Completed 2026-01-03 02:43:43 +00:00
IGNY8 VPS (Salman)
e2d462d8b6 Update dashboard and automation colors to new module scheme
Dashboard widgets:
- WorkflowPipelineWidget: Sites now has transparent bg with colored icon
- Tasks stage uses navy (gray-700/800), Content/Drafts use blue (brand)
- AIOperationsWidget: Content now uses blue (brand) instead of green
- RecentActivityWidget: Content activity now uses blue (brand)
- QuickActionsWidget: Tasks step uses navy, Content uses blue

Automation components:
- AutomationPage STAGE_CONFIG: Tasks→Content now navy, Content→Prompts blue
- GlobalProgressBar: Updated stage colors to match new scheme
- CurrentProcessingCard: Stage colors match new module scheme

Color scheme:
- Planner Pipeline (Blue → Pink → Amber): Keywords, Clusters, Ideas
- Writer Pipeline (Navy → Blue → Pink → Green): Tasks, Content, Images, Published
2026-01-03 00:52:18 +00:00
IGNY8 VPS (Salman)
16dfc56ba0 Update module colors for visual distinction in pipelines
Module Color Updates:
- Tasks: Changed from primary (blue) → gray-base (navy #031D48)
- Content: Changed from success (green) → primary (blue #3B82F6)

Pipeline Flow Visual Distinction:
- Planner: Blue → Pink → Amber (Keywords → Clusters → Ideas)
- Writer: Navy → Blue → Green (Tasks → Content → Published)

Base Colors (already set):
- Primary: #3B82F6 (blue)
- Success: #10B981 (green)
- Warning: #F59E0B (amber)
- Danger: #DC2626 (red)
- Purple: #F63B82 (pink)
- Gray Base: #031D48 (navy)

Updated files:
- colors.config.ts: Updated MODULE_COLORS, PIPELINE_COLORS, WORKFLOW_COLORS
- Added grayBase/grayDark to CSS_VAR_COLORS
2026-01-03 00:24:57 +00:00
IGNY8 VPS (Salman)
bc371e5482 Consolidate docs: move design/docs files to docs folder
- Moved DESIGN-GUIDE.md → docs/30-FRONTEND/DESIGN-GUIDE.md
- Moved frontend/DESIGN_SYSTEM.md → docs/30-FRONTEND/DESIGN-TOKENS.md
- Moved IGNY8-APP.md → docs/00-SYSTEM/IGNY8-APP.md
- Moved fixes-kb.md → docs/90-REFERENCE/FIXES-KB.md
- Moved FINAL_PRELAUNCH.md → docs/plans/FINAL-PRELAUNCH.md
- Updated all references in .rules, README.md, docs/INDEX.md
- Updated ESLint plugin documentation comments
- Root folder now only contains: .rules, CHANGELOG.md, README.md
2026-01-02 23:43:58 +00:00
IGNY8 VPS (Salman)
f28f641fd5 Components standardization 2 2026-01-02 00:27:27 +00:00
IGNY8 VPS (Salman)
a4691ad2da Components standardization 1 2026-01-01 21:42:04 +00:00
IGNY8 VPS (Salman)
c880e24fc0 Styles styles styles 2026-01-01 18:12:51 +00:00
IGNY8 VPS (Salman)
e96069775c Global Styling part 1 2026-01-01 14:54:27 +00:00
IGNY8 VPS (Salman)
0e57c50e56 final styling and components refactor audit plan 2026-01-01 11:24:18 +00:00
IGNY8 VPS (Salman)
c44d520a7f reorg 2026-01-01 10:54:16 +00:00
IGNY8 VPS (Salman)
815c7b5129 12 2026-01-01 10:41:31 +00:00
IGNY8 VPS (Salman)
d389576634 final section 10 - and global styles and components plan 2026-01-01 10:41:16 +00:00
IGNY8 VPS (Salman)
41e124d8e8 Section 9-10 2026-01-01 08:10:24 +00:00
IGNY8 VPS (Salman)
0340016932 Section 3-8 - #Migration Runs (multiple migrations)
feat: Update publishing terminology and add publishing settings

- Changed references from "WordPress" to "Site" across multiple components for consistency.
- Introduced a new "Publishing" tab in Site Settings to manage automatic content approval and publishing behavior.
- Added publishing settings model to the backend with fields for auto-approval, auto-publish, and publishing limits.
- Implemented Celery tasks for scheduling and processing automated content publishing.
- Enhanced Writer Dashboard to include metrics for content published to the site and scheduled for publishing.
2026-01-01 07:10:03 +00:00
IGNY8 VPS (Salman)
f81fffc9a6 Section 1 & 2 - #Migration Run 2026-01-01 06:29:13 +00:00
IGNY8 VPS (Salman)
dd63403e94 reorg-docs 2026-01-01 05:40:42 +00:00
IGNY8 VPS (Salman)
d16e5e1a4b PUBLISHING-ONBOARDING-IMPLEMENTATION-PLAN 2026-01-01 05:29:22 +00:00
IGNY8 VPS (Salman)
6caeed14cb docs and more plans 2026-01-01 03:34:13 +00:00
IGNY8 VPS (Salman)
af408d0747 V 1.3.0 2026-01-01 01:54:54 +00:00
IGNY8 VPS (Salman)
0d3e25e50f automation and other pages updates 2026-01-01 01:40:34 +00:00
IGNY8 VPS (Salman)
a02e485f7d 2 2025-12-31 23:52:58 +00:00
IGNY8 VPS (Salman)
89b64cd737 many changes for modules widgets and colors and styling 2025-12-31 23:52:43 +00:00
IGNY8 VPS (Salman)
b61bd6e64d last fix form master imp part 6 2025-12-31 20:50:47 +00:00
IGNY8 VPS (Salman)
6953343026 imp part 5 2025-12-30 14:37:28 +00:00
IGNY8 VPS (Salman)
1632ee62b6 imp part 4 2025-12-30 13:14:21 +00:00
IGNY8 VPS (Salman)
51950c7ce1 imp part 3 2025-12-30 10:28:24 +00:00
IGNY8 VPS (Salman)
885158e152 master - part 2 2025-12-30 09:47:58 +00:00
IGNY8 VPS (Salman)
2af7bb725f master plan implementation 2025-12-30 08:51:31 +00:00
IGNY8 VPS (Salman)
96aaa4151a 1 2025-12-30 00:29:55 +00:00
IGNY8 VPS (Salman)
6c1cf99488 Master final fix plan 2025-12-29 23:26:56 +00:00
IGNY8 VPS (Salman)
b23cb07f41 docs-plans 2025-12-29 20:32:07 +00:00
IGNY8 VPS (Salman)
4f7ab9c606 styles fixes 2025-12-29 19:52:51 +00:00
IGNY8 VPS (Salman)
c91175fdcb styling CSS globalization 2025-12-29 05:59:56 +00:00
IGNY8 VPS (Salman)
0ffd21b9bf metrics and backend fixes 2025-12-29 04:33:22 +00:00
IGNY8 VPS (Salman)
53fdebf733 automation and AI and some planning and fixes and docs reorg 2025-12-29 01:41:36 +00:00
IGNY8 VPS (Salman)
748de099dd Automation final fixes 2025-12-28 20:37:46 +00:00
IGNY8 VPS (Salman)
7f82ef4551 automation fixes part 3 using Claude Opus 4.5 2025-12-28 19:05:03 +00:00
IGNY8 VPS (Salman)
f92b3fba6e automation fixes (part2) 2025-12-28 03:15:39 +00:00
IGNY8 VPS (Salman)
d4b9c8693a implementation verification 2025-12-28 02:01:33 +00:00
IGNY8 VPS (Salman)
ea9125b805 Automation revamp part 1 2025-12-28 01:46:27 +00:00
IGNY8 VPS (Salman)
0605f650b1 notifications issues fixed final 2025-12-28 00:52:14 +00:00
IGNY8 VPS (Salman)
28a60f8141 docs updated v1.2.0 2025-12-27 23:27:07 +00:00
IGNY8 VPS (Salman)
e0f3060df9 1 2025-12-27 22:32:51 +00:00
IGNY8 VPS (Salman)
d0f98d35d6 final all done - 2nd last plan before going live 2025-12-27 22:32:29 +00:00
IGNY8 VPS (Salman)
5f9a4b8dca final polish phase 1 2025-12-27 21:27:37 +00:00
IGNY8 VPS (Salman)
627938aa95 Section 3: Implement ThreeWidgetFooter on Planner & Writer pages
- Created ThreeWidgetFooter.tsx component with 3-column layout:
  - Widget 1: Page Progress (current page metrics + progress bar + hint)
  - Widget 2: Module Stats (workflow pipeline with links)
  - Widget 3: Completion (both modules summary)
- Created useThreeWidgetFooter.ts hook for building widget props
- Integrated ThreeWidgetFooter into:
  - Planner: Keywords, Clusters, Ideas pages
  - Writer: Tasks, Content pages
- SiteCard already has SiteSetupChecklist integrated (compact mode)
- Backend serializer returns all required fields
2025-12-27 18:01:33 +00:00
IGNY8 VPS (Salman)
a145e6742e Add ThreeWidgetFooter component and hook for 3-column table footer layout
- ThreeWidgetFooter.tsx: 3-column layout matching Section 3 of audit report
  - Widget 1: Page Progress (current page metrics + progress bar + hint)
  - Widget 2: Module Stats (workflow pipeline with progress bars)
  - Widget 3: Completion (both Planner/Writer stats + credits)
- useThreeWidgetFooter.ts: Hook to build widget props from data
  - Builds page progress for Keywords, Clusters, Ideas, Tasks, Content
  - Builds Planner/Writer module pipelines
  - Calculates completion stats from data

Uses CSS tokens from styles/tokens.css for consistent styling
2025-12-27 17:51:46 +00:00
IGNY8 VPS (Salman)
24cdb4fdf9 Fix: SiteSerializer has_integration uses platform field not integration_type 2025-12-27 17:41:54 +00:00
IGNY8 VPS (Salman)
a1ec3100fd Phase 1: Progress modal text, SiteSerializer fields, Notification store, SiteCard checklist
- Improved progress modal messages in ai/engine.py (Section 4)
- Added keywords_count and has_integration to SiteSerializer (Section 6)
- Added notificationStore.ts for frontend notifications (Section 8)
- Added NotificationDropdownNew component (Section 8)
- Added SiteSetupChecklist to SiteCard in compact mode (Section 6)
- Updated api.ts Site interface with new fields
2025-12-27 17:40:28 +00:00
IGNY8 VPS (Salman)
c44bee7fa7 final audit report for all modules 1 2025-12-27 12:45:40 +00:00
IGNY8 VPS (Salman)
9d54bbff48 closing final plans 2025-12-27 11:14:51 +00:00
IGNY8 VPS (Salman)
c227d9ee03 final UI of planner and writer 2025-12-27 09:25:31 +00:00
IGNY8 VPS (Salman)
efd7193951 more fixes 2025-12-27 08:56:09 +00:00
IGNY8 VPS (Salman)
034c640601 more UI fixes 2025-12-27 08:00:09 +00:00
IGNY8 VPS (Salman)
4482d2f4c4 more UI fixes 2025-12-27 07:09:33 +00:00
IGNY8 VPS (Salman)
d5bda678fd more fixes 2025-12-27 06:53:36 +00:00
IGNY8 VPS (Salman)
302af6337e UI improvements 2025-12-27 06:08:29 +00:00
IGNY8 VPS (Salman)
726d945bda header related fixes 2025-12-27 05:33:05 +00:00
IGNY8 VPS (Salman)
fd6e7eb2dd page and app header mods 2025-12-27 04:09:05 +00:00
IGNY8 VPS (Salman)
e5959c3e72 Section 6 Completed 2025-12-27 03:41:51 +00:00
IGNY8 VPS (Salman)
4e9bf0ba56 Section 5 Complete 2025-12-27 03:09:57 +00:00
IGNY8 VPS (Salman)
74a3441ee4 Section 4 completed 2025-12-27 02:59:27 +00:00
IGNY8 VPS (Salman)
178b7c23ce Section 3 Completed 2025-12-27 02:43:46 +00:00
IGNY8 VPS (Salman)
add04e2ad5 Section 2 Completed 2025-12-27 02:20:55 +00:00
IGNY8 VPS (Salman)
890e138829 credits commit 2025-12-27 01:54:21 +00:00
IGNY8 VPS (Salman)
7af4190e6d rules and planning finalized for docs to always be standard 2025-12-27 00:55:50 +00:00
IGNY8 VPS (Salman)
7a9fa8fd8f final docs for final audit implementation 2025-12-27 00:34:22 +00:00
IGNY8 VPS (Salman)
277ef5c81d kb 2025-12-26 19:52:54 +00:00
IGNY8 VPS (Salman)
544a397e3d tokens config: 1000 per credit, and fix 2025-12-26 01:25:25 +00:00
IGNY8 VPS (Salman)
33b4454f96 todos docs 2025-12-26 00:12:25 +00:00
IGNY8 VPS (Salman)
444d53dc7b docs update 2025-12-25 23:18:35 +00:00
IGNY8 VPS (Salman)
91525b8999 finalizing app and fixes 2025-12-25 22:58:21 +00:00
IGNY8 VPS (Salman)
4bffede052 docs & UX improvements 2025-12-25 20:31:58 +00:00
IGNY8 VPS (Salman)
90e6e96b2b signup form 2025-12-25 13:10:56 +00:00
IGNY8 VPS (Salman)
4248fd0969 pricing plans updates 2025-12-25 12:42:25 +00:00
IGNY8 VPS (Salman)
e736697d6d UX text updates 2025-12-25 11:51:44 +00:00
IGNY8 VPS (Salman)
d21b5b1363 UX: Complete Add Keywords and Sites Management detailed improvements
ADD KEYWORDS PAGE:
- Sector selection banner: 'Select a Sector to Add Keywords' → 'Choose a Topic Area First'
- Description: Updated to be more conversational and helpful
- Changed: 'Please select a sector from the dropdown above to enable adding keywords to your workflow. Keywords must be added to a specific sector.'
- To: 'Pick a topic area first, then add keywords - You need to choose what you're writing about before adding search terms to target'

SITES MANAGEMENT PAGE:
- Filter labels made more conversational:
- 'All Types' → 'Show All Types'
- 'All Hosting' → 'Show All Hosting'
- 'All Status' → 'Show All Status'

These changes complete the detailed text improvements from Sections 2 and 3 of the UX plan.
2025-12-25 09:54:21 +00:00
IGNY8 VPS (Salman)
34e8017770 UX: Complete detailed Dashboard text improvements per plan
PROGRESS SECTION:
- Site & Sectors: 'Industry & sectors configured' → 'Niches you're targeting - Industry & sectors set up'
- Keywords: 'Keywords added from opportunities' → 'Search terms to target - Keywords added from research'
- Clusters: 'Keywords grouped into clusters' → 'Topic groups - Keywords organized by theme'
- Ideas: 'Content ideas and outlines' → 'Article outlines ready - Ideas and outlines created'
- Content: 'Content pieces + images created' → 'Articles created - Written content + images ready'
- Published: 'Content published to site' → 'Live on your site - Articles published and active'

QUICK ACTIONS:
- 'Find Keywords' → 'Find Keywords to Rank For' with 'Search for topics your audience wants to read about'
- 'Organize Topics' → 'Organize Topics & Create Outlines' with 'Group keywords and create article plans'
- 'Create Articles' → 'Write Articles with AI' with 'Generate full articles ready to publish'
- 'Add Links' → 'Connect Your Articles' with 'Automatically link related articles for better SEO'
- 'Improve Content' → 'Make Articles Better' with 'Improve readability, keywords, and search rankings'

All descriptions now match the detailed UX improvement plan specifications.
2025-12-25 09:53:08 +00:00
IGNY8 VPS (Salman)
65bf65bb6b UX: Complete remaining page updates with user-friendly text
PLANNER MODULE:
- Ideas: 'Content Ideas' → 'Article Ideas'
- Dashboard: 'Planner Dashboard' → 'Planning Dashboard'
- Keyword Opportunities: 'Keyword Opportunities' → 'Discover Keywords'

LINKER MODULE:
- Content List: 'Link Content' → 'Add Internal Links'
- Dashboard: 'Linker Dashboard' → 'Internal Linking Dashboard'

OPTIMIZER MODULE:
- Content Selector: 'Optimize Content' → 'Improve Your Articles'
- Dashboard: 'Optimizer Dashboard' → 'Optimization Dashboard'

All page titles now use clear, action-oriented language that non-technical
users can easily understand.
2025-12-25 09:45:59 +00:00
IGNY8 VPS (Salman)
d9346e6f16 UX: Update Setup, Settings, and Help pages with user-friendly text
- Setup: Changed 'Add Keywords' to 'Find Keywords'
- Account Settings: Updated description to be more user-friendly
- AI Settings: Updated description to explain AI models and preferences
- General Settings: Changed to 'App Preferences' with clearer description
- Help: Changed 'Help & Documentation' to 'Help Center' with friendlier description
2025-12-25 09:11:44 +00:00
IGNY8 VPS (Salman)
f559bd44a1 UX: Update Thinker module pages with user-friendly text
- Prompts: Changed 'AI Prompts Management' to 'Prompt Library'
- Author Profiles: Changed to 'Writing Styles'
- Strategies: Changed 'Content Strategies' to 'Content Plans'
- Image Testing: Changed to 'Image Settings'
- Dashboard: Changed 'Thinker Dashboard' to 'Strategy Dashboard'
2025-12-25 09:02:23 +00:00
IGNY8 VPS (Salman)
62fc47cfe8 UX: Update Writer module pages with user-friendly text
- Tasks: Changed 'Content Queue' to 'Writing Tasks'
- Content: Changed 'Content Drafts' to 'Your Articles'
- Review: Changed 'Content Review' to 'Review Queue'
- Published: Changed 'Published Content' to 'Published Articles'
- Images: Changed 'Content Images' to 'Article Images'
- Dashboard: Changed 'Writer Dashboard' to 'Content Creation Dashboard'
2025-12-25 09:01:18 +00:00
IGNY8 VPS (Salman)
9e48d728fd UX: Update Planner pages with user-friendly text
- Keywords page: Changed 'Keywords' to 'Your Keywords'
- Clusters page: Changed 'Keyword Clusters' to 'Topic Clusters'
- Shared columns: Changed 'Volume' to 'Search Volume'
- Shared columns: Changed 'Cluster' to 'Topic Group'
- Shared columns: Changed 'Sector' to 'Category'
2025-12-25 09:00:00 +00:00
IGNY8 VPS (Salman)
272a3e3d83 UX: Update Sites and Automation pages with user-friendly text
- Sites: Changed 'Sites Management' to 'Your Websites'
- Sites: Changed 'Add Site' button to 'Add New Website'
- Automation: Changed 'AI Automation Pipeline' to 'Content Automation'
- Automation: Updated description to be more user-friendly
2025-12-25 08:52:56 +00:00
IGNY8 VPS (Salman)
ebf6a9f27a UX: Update Dashboard and Sidebar navigation with user-friendly text
- Dashboard: Changed title to 'Your Content Creation Dashboard'
- Dashboard: Updated 'Your Progress' to 'Your Content Journey'
- Dashboard: Improved metric card descriptions for clarity
- Dashboard: Simplified Quick Action labels (Find Keywords, Organize Topics, etc.)
- Sidebar: Updated section headers (GET STARTED, CREATE CONTENT, PREFERENCES, SUPPORT)
- Sidebar: Changed menu labels (Find Keywords, Your Websites, Organize Keywords, Create Content)
- Sidebar: Renamed 'Help & Documentation' to 'Help Center'
2025-12-25 08:48:35 +00:00
IGNY8 VPS (Salman)
2d4767530d 2 2025-12-25 05:06:44 +00:00
IGNY8 VPS (Salman)
b0c14ccc32 content view template final version 2025-12-25 04:06:19 +00:00
IGNY8 VPS (Salman)
826ad89a3e Remove aws-admin pattern completely - use account + GlobalIntegrationSettings
ARCHITECTURE FIX:
- aws-admin IntegrationSettings will NEVER exist (it's a legacy pattern)
- Only user's own account IntegrationSettings can exist (if they override defaults)
- Otherwise GlobalIntegrationSettings is used directly
- API keys are ALWAYS from GlobalIntegrationSettings (accounts cannot override API keys)

REMOVED:
- All aws-admin Account lookups
- All aws-admin IntegrationSettings fallback attempts
- Confusing nested try/except chains

CORRECT FLOW NOW:
1. Try account's IntegrationSettings for config overrides
2. Use GlobalIntegrationSettings for missing values and ALL API keys
3. No intermediate aws-admin lookups
2025-12-25 02:11:21 +00:00
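The corrected lookup order above is a simple two-layer fallback, with API keys deliberately excluded from the account layer. A minimal sketch, assuming dict-shaped settings and hypothetical function names:

```python
def resolve_config(account_settings: dict, global_settings: dict, key: str):
    """Config values: the account's IntegrationSettings may override;
    anything missing falls through to GlobalIntegrationSettings."""
    value = account_settings.get(key)
    return global_settings[key] if value is None else value

def resolve_api_key(global_settings: dict, key: str) -> str:
    """API keys are ALWAYS global -- accounts cannot override them."""
    return global_settings[key]

global_cfg = {"max_in_article_images": 4, "openai_api_key": "sk-global"}
account_cfg = {"max_in_article_images": 2}

print(resolve_config(account_cfg, global_cfg, "max_in_article_images"))  # → 2
print(resolve_api_key(global_cfg, "openai_api_key"))  # → sk-global
```

Keeping the resolution to these two functions removes the intermediate aws-admin lookup entirely, which is what made the old nested try/except chains unnecessary.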
IGNY8 VPS (Salman)
504d0174f7 Fix image generation: escape JSON in prompt template + GlobalIntegrationSettings fallback
ROOT CAUSES IDENTIFIED:
1. GlobalAIPrompt template had unescaped JSON braces that broke Python's .format()
   - Python treats {...} as placeholders, causing KeyError when rendering
   - Escaped JSON braces to {{...}} while preserving {title}, {content}, {max_images}

2. Image functions hardcoded aws-admin IntegrationSettings which didn't exist
   - Functions failed when aws-admin account had no IntegrationSettings
   - Added GlobalIntegrationSettings fallback for all missing values

CHANGES:
- Fixed GlobalAIPrompt.image_prompt_extraction template in database (escaped JSON)
- Updated generate_image_prompts._get_max_in_article_images() with fallback
- Updated generate_images.prepare_data() with fallback for all image settings
- Updated tasks.process_image_generation_queue() with fallback for config + API keys

TESTED: Template rendering now works, GlobalIntegrationSettings.max_in_article_images=4
2025-12-25 02:09:29 +00:00
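The brace-escaping bug described in root cause 1 is easy to reproduce with `str.format()` directly; the template text below is illustrative, not the actual GlobalAIPrompt:

```python
# The bug: unescaped JSON braces in a template break str.format(),
# because Python reads {"prompts": []} as a placeholder named '"prompts"'.
broken = 'Extract image prompts as JSON: {"prompts": []} Article: {title}'
try:
    broken.format(title="My Post")
    raised = False
except KeyError:
    raised = True
assert raised  # KeyError on the pseudo-placeholder '"prompts"'

# The fix: double the literal braces so only real placeholders remain.
fixed = 'Extract image prompts as JSON: {{"prompts": []}} Article: {title}'
assert fixed.format(title="My Post") == (
    'Extract image prompts as JSON: {"prompts": []} Article: My Post'
)
```

This is why the commit escapes the JSON braces to `{{...}}` while leaving `{title}`, `{content}`, and `{max_images}` untouched: those three are the only intended placeholders.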
IGNY8 VPS (Salman)
5299fd82eb Revert image prompt changes - investigate original issue 2025-12-25 01:59:23 +00:00
IGNY8 VPS (Salman)
abeede5f04 image prompt issues 2025-12-25 01:17:41 +00:00
IGNY8 VPS (Salman)
64e76f5436 fixed final with new model config and tokens 2025-12-24 15:33:17 +00:00
IGNY8 VPS (Salman)
02d4f1fa46 AI MODELS & final updates - feat: Implement AI Model Configuration with dynamic pricing and REST API
- Added AIModelConfig model to manage AI model configurations in the database.
- Created serializers and views for AI model configurations, enabling read-only access via REST API.
- Implemented filtering capabilities for model type, provider, and default status in the API.
- Seeded initial data for text and image models, including pricing and capabilities.
- Updated Django Admin interface for managing AI models with enhanced features and bulk actions.
- Added validation methods for model and image size checks.
- Comprehensive migration created to establish the AIModelConfig model and seed initial data.
- Documented implementation and validation results in summary and report files.
2025-12-24 13:37:36 +00:00
IGNY8 VPS (Salman)
355b0ac897 plan for model unification 2025-12-24 01:07:31 +00:00
IGNY8 VPS (Salman)
0a12123c85 global api key issue, credit service issue, credit cost based on tokens all fixed 2025-12-24 00:23:23 +00:00
IGNY8 VPS (Salman)
646095da65 module settings fixed 2025-12-20 22:49:31 +00:00
IGNY8 VPS (Salman)
5c9ef81aba module settings removed from frontend 2025-12-20 22:18:32 +00:00
IGNY8 VPS (Salman)
7a1e952a57 feat: Add Global Module Settings and Caption to Images
- Introduced GlobalModuleSettings model for platform-wide module enable/disable settings.
- Added 'caption' field to Images model to store image captions.
- Updated GenerateImagePromptsFunction to handle new caption structure in prompts.
- Enhanced AIPromptViewSet to return global prompt types and validate active prompts.
- Modified serializers and views to accommodate new caption field and global settings.
- Updated frontend components to display captions and filter prompts based on active types.
- Created migrations for GlobalModuleSettings and added caption field to Images.
2025-12-20 21:34:59 +00:00
IGNY8 VPS (Salman)
9e8ff4fbb1 globals 2025-12-20 19:49:57 +00:00
IGNY8 VPS (Salman)
3283a83b42 feat(migrations): Rename indexes and update global integration settings fields for improved clarity and functionality
feat(admin): Add API monitoring, debug console, and system health templates for enhanced admin interface

docs: Add AI system cleanup summary and audit report detailing architecture, token management, and recommendations

docs: Introduce credits and tokens system guide outlining configuration, data flow, and monitoring strategies
2025-12-20 12:55:05 +00:00
IGNY8 VPS (Salman)
eb6cba7920 cleanup - frontend pages removed 2025-12-20 09:55:16 +00:00
IGNY8 VPS (Salman)
ab0d6469d4 bulk actions & some next audits docs 2025-12-20 02:46:00 +00:00
IGNY8 VPS (Salman)
c17b22e927 credits and tokens final correct setup 2025-12-20 00:36:23 +00:00
IGNY8 VPS (Salman)
e041cb8e65 ai & tokens 2025-12-19 17:06:01 +00:00
IGNY8 VPS (Salman)
98e68f6bd8 max-images in progress modal 2025-12-17 14:34:57 +00:00
IGNY8 VPS (Salman)
71fe687681 image max count 2025-12-17 13:06:42 +00:00
IGNY8 VPS (Salman)
1993d45f32 12 2025-12-17 12:54:12 +00:00
IGNY8 VPS (Salman)
8c1d933647 max images 2025-12-17 12:35:43 +00:00
IGNY8 VPS (Salman)
62e55389f9 Add support for GPT-5.1 and GPT-5.2: update token limits and pricing 2025-12-17 11:11:11 +00:00
IGNY8 VPS (Salman)
e43f8553b6 ai response timeout increased to 180s 2025-12-17 08:12:10 +00:00
IGNY8 VPS (Salman)
7ad06c6227 Refactor keyword handling: Replace 'intent' with 'country' across backend and frontend
- Updated AutomationService to include estimated_word_count.
- Increased stage_1_batch_size from 20 to 50 in AutomationViewSet.
- Changed Keywords model to replace 'intent' property with 'country'.
- Adjusted ClusteringService to allow a maximum of 50 keywords for clustering.
- Modified admin and management commands to remove 'intent' and use 'country' instead.
- Updated serializers to reflect the change from 'intent' to 'country'.
- Adjusted views and filters to use 'country' instead of 'intent'.
- Updated frontend forms, filters, and pages to replace 'intent' with 'country'.
- Added migration to remove 'intent' field and add 'country' field to SeedKeyword model.
2025-12-17 07:37:36 +00:00
IGNY8 VPS (Salman)
9f826c92f8 fixes for idea render and other 2025-12-17 05:58:13 +00:00
IGNY8 VPS (Salman)
4bba5a9a1f fixes 2025-12-17 04:55:49 +00:00
IGNY8 VPS (Salman)
45d9dfa0f5 token limit in legacy file 2025-12-17 01:34:02 +00:00
IGNY8 VPS (Salman)
9656643f0f fixes of ai token limit standard 8192 2025-12-17 00:36:18 +00:00
IGNY8 VPS (Salman)
69c0fd8b69 reorg 2025-12-17 00:27:53 +00:00
IGNY8 VPS (Salman)
8f97666522 testing prompts 2025-12-17 00:09:07 +00:00
IGNY8 VPS (Salman)
84fd4bc11a final logout related fixes and cookies and session 2025-12-16 19:16:50 +00:00
IGNY8 VPS (Salman)
1887f2a665 logout issues # 2 2025-12-15 17:22:50 +00:00
IGNY8 VPS (Salman)
5366cc1805 logout issues fixes 2025-12-15 16:08:47 +00:00
alorig
25f1c32366 Revert "messy logout fixing"
This reverts commit 4fb3a144d7.
2025-12-15 17:24:07 +05:00
IGNY8 VPS (Salman)
4fb3a144d7 messy logout fixing 2025-12-15 12:01:41 +00:00
IGNY8 VPS (Salman)
06e5f252a4 column visibility fixed in this 2025-12-15 10:44:22 +00:00
IGNY8 VPS (Salman)
7fb2a9309e asdasdsa 2025-12-15 10:31:20 +00:00
IGNY8 VPS (Salman)
1ef4bb7db6 test: add comments to test webhook 2025-12-15 07:57:22 +00:00
IGNY8 VPS (Salman)
558ce9250a 1 2025-12-15 07:55:55 +00:00
IGNY8 VPS (Salman)
f8c6dfe889 disable webhook container restart 2025-12-15 07:53:00 +00:00
873 changed files with 92199 additions and 83606 deletions


@@ -1,380 +0,0 @@
# IGNY8 Development Rules & Standards
**Project:** IGNY8 - AI-Powered Content Platform
**Version:** v1.0.0
**Last Updated:** December 12, 2025
---
## 📋 General Development Principles
### 1. **Always Read Documentation First**
Before making changes, consult these critical docs:
- `ARCHITECTURE-KNOWLEDGE-BASE.md` - System architecture and design patterns
- `CHANGELOG.md` - Recent changes and version history
- `IGNY8-COMPLETE-FEATURES-GUIDE.md` - Complete feature set and capabilities
- `docs/00-SYSTEM/` - Core system architecture
- `docs/10-BACKEND/` - Backend models, services, APIs
- `docs/20-API/` - API endpoint documentation
- `docs/30-FRONTEND/` - Frontend components and architecture
- `docs/40-WORKFLOWS/` - Business workflows and processes
### 2. **Maintain Consistency**
- **API Design:** Follow existing RESTful patterns in `backend/igny8_core/*/views.py`
- **Models:** Use existing base classes (`SoftDeletableModel`, `AccountBaseModel`, `SiteSectorBaseModel`)
- **Services:** Follow service pattern in `backend/igny8_core/business/*/services/`
- **AI Functions:** Use AI framework in `backend/igny8_core/ai/` (not legacy `utils/ai_processor.py`)
- **Frontend Components:** Use existing component library in `frontend/src/components/`
- **Styling:** Use TailwindCSS classes, follow existing design system in `frontend/DESIGN_SYSTEM.md`
- **State Management:** Use Zustand stores in `frontend/src/store/`
### 3. **Multi-Tenancy Rules**
- **ALWAYS scope by account:** Every query must filter by account
- **Site/Sector scoping:** Use `SiteSectorBaseModel` for site-specific data
- **Permissions:** Check permissions via `IsAuthenticatedAndActive`, `HasTenantAccess`, role-based permissions
- **No cross-tenant access:** Validate account ownership before operations
### 4. **API Endpoint Rules**
- **Use existing API structure:** All user-facing endpoints under `/api/v1/<module>/`, admin endpoints under `/api/v1/<module>/admin/`
- **No parallel API systems:** Register all endpoints in module's `urls.py`, test via Swagger at `/api/docs/` before documenting
- **Document in Swagger:** Ensure drf-spectacular auto-generates docs; verify endpoint appears at `/api/docs/` and `/api/schema/`
---
## 📝 Change Management & Versioning
Always update the changelog incrementally: record each change as Fixed, Added, or Changed for every version update. Do not remove or modify the existing version entries.
### Versioning Scheme: `v<MAJOR>.<MINOR>.<PATCH>`
**Example:** v1.2.5
- `MAJOR` (1.x.x): Breaking changes, major features, architecture changes (bump only when explicitly requested)
- `MINOR` (x.2.x): New features, modules, significant enhancements
- `PATCH` (x.x.5): Bug fixes, small improvements, refactors
### Changelog Update Rules
#### **For EVERY Change:**
1. **Update version number** in `CHANGELOG.md`
2. **Increment PATCH** (v1.0.x → v1.0.1) for:
- Bug fixes
- Small improvements
- Code refactors
- Documentation updates
- UI/UX tweaks
3. **Increment MINOR** (v1.x.0 → v1.1.0) for:
- New features
- New API endpoints
- New components
- New services
- Significant enhancements
4. **Increment MAJOR** (v1.x.x → v2.0.0) for:
- Breaking API changes
- Database schema breaking changes
- Architecture overhauls
- Major refactors affecting multiple modules
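The bump rules above can be summarized as a tiny helper (a sketch for illustration only; in this project versions are bumped by hand in `CHANGELOG.md`):

```python
def bump(version: str, level: str) -> str:
    """Bump a v<MAJOR>.<MINOR>.<PATCH> string per the rules above."""
    major, minor, patch = map(int, version.lstrip("v").split("."))
    if level == "major":   # breaking changes, architecture overhauls
        return f"v{major + 1}.0.0"
    if level == "minor":   # new features, endpoints, components
        return f"v{major}.{minor + 1}.0"
    return f"v{major}.{minor}.{patch + 1}"  # bug fixes, small improvements

print(bump("v1.0.0", "patch"))  # → v1.0.1
print(bump("v1.0.5", "minor"))  # → v1.1.0
print(bump("v1.4.2", "major"))  # → v2.0.0
```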
#### **Changelog Entry Format:**
```markdown
## v1.2.5 - December 12, 2025
### Fixed
- User logout issue when switching accounts
- Payment confirmation modal amount display
### Changed
- Updated session storage from database to Redis
- Enhanced credit balance widget UI
### Added
- Plan limits enforcement system
- Monthly reset task for usage tracking
```
### **For Major Refactors:**
1. **Create detailed TODO list** before starting
2. **Document current state** in CHANGELOG
3. **Create implementation checklist** (markdown file in root or docs/)
4. **Track progress** with checklist updates
5. **Test thoroughly** before committing
6. **Update CHANGELOG** with all changes made
7. **Update version** to next MINOR or MAJOR
---
## 🏗️ Code Organization Standards
### Backend Structure
```
backend/igny8_core/
├── auth/ # Authentication, users, accounts, plans
├── business/ # Business logic services
│ ├── automation/ # Automation pipeline
│ ├── billing/ # Billing, credits, invoices
│ ├── content/ # Content generation
│ ├── integration/ # External integrations
│ ├── linking/ # Internal linking
│ ├── optimization/ # Content optimization
│ ├── planning/ # Keywords, clusters, ideas
│ └── publishing/ # WordPress publishing
├── ai/ # AI framework (NEW - use this)
├── utils/ # Utility functions
├── tasks/ # Celery tasks
└── modules/ # Legacy modules (being phased out)
```
### Frontend Structure
```
frontend/src/
├── components/ # Reusable components
├── pages/ # Page components
├── store/ # Zustand state stores
├── services/ # API service layer
├── hooks/ # Custom React hooks
├── utils/ # Utility functions
├── types/ # TypeScript types
└── marketing/ # Marketing site
```
---
## 🔧 Development Workflow
### 1. **Planning Phase**
- [ ] Read relevant documentation
- [ ] Understand existing patterns
- [ ] Create TODO list for complex changes
- [ ] Identify affected components/modules
- [ ] Plan database changes (if any)
### 2. **Implementation Phase**
- [ ] Follow existing code patterns
- [ ] Use proper base classes and mixins
- [ ] Add proper error handling
- [ ] Validate input data
- [ ] Check permissions and scope
- [ ] Write clean, documented code
- [ ] Use type hints (Python) and TypeScript types
### 3. **Testing Phase**
- [ ] Test locally with development data
- [ ] Test multi-tenancy isolation
- [ ] Test permissions and access control
- [ ] Test error cases
- [ ] Verify no breaking changes
- [ ] Check frontend-backend integration
### 4. **Documentation Phase**
- [ ] Update CHANGELOG.md
- [ ] Update version number
- [ ] Update relevant docs (if architecture/API changes)
- [ ] Add code comments for complex logic
- [ ] Update API documentation (if endpoints changed)
---
## 🎯 Specific Development Rules
### Backend Development
#### **Models:**
```python
# ALWAYS inherit from the proper base classes
from igny8_core.auth.models import SoftDeletableModel, SiteSectorBaseModel

class MyModel(SoftDeletableModel, SiteSectorBaseModel):
    # Your fields here
    pass
```
#### **Services:**
```python
# Follow service pattern
class MyService:
    def __init__(self):
        self.credit_service = CreditService()
        self.limit_service = LimitService()

    def my_operation(self, account, site, **kwargs):
        # 1. Validate permissions
        # 2. Check limits/credits
        # 3. Perform operation
        # 4. Track usage
        # 5. Return result
        pass
```
#### **API Views:**
```python
# Use proper permission classes
class MyViewSet(viewsets.ModelViewSet):
    permission_classes = [IsAuthenticatedAndActive, HasTenantAccess]

    def get_queryset(self):
        # ALWAYS scope by account
        return MyModel.objects.filter(
            site__account=self.request.user.account
        )
```
#### **Migrations:**
- Run `python manage.py makemigrations` after model changes
- Test migrations: `python manage.py migrate --plan`
- Never edit existing migrations
- Use data migrations for complex data changes
### Frontend Development
#### **Components:**
```typescript
// Use existing component library
import { Card } from '@/components/ui/card';
import Button from '@/components/ui/button/Button';
// Follow naming conventions
export default function MyComponent() {
  // Component logic
}
```
#### **State Management:**
```typescript
// Use Zustand stores
import { useAuthStore } from '@/store/authStore';
const { user, account } = useAuthStore();
```
#### **API Calls:**
```typescript
// Use fetchAPI from services/api.ts
import { fetchAPI } from '@/services/api';
const data = await fetchAPI('/v1/my-endpoint/');
```
#### **Styling:**
```typescript
// Use TailwindCSS classes
<div className="p-6 bg-white dark:bg-gray-800 rounded-lg shadow">
  <h1 className="text-2xl font-bold text-gray-900 dark:text-white">
    My Heading
  </h1>
</div>
```
---
## 🚫 Common Pitfalls to Avoid
### **DON'T:**
- ❌ Skip account scoping in queries
- ❌ Use legacy AI processor (`utils/ai_processor.py`) - use `ai/` framework
- ❌ Hardcode values - use settings or constants
- ❌ Forget error handling
- ❌ Skip permission checks
- ❌ Create duplicate components - reuse existing
- ❌ Use inline styles - use TailwindCSS
- ❌ Forget to update CHANGELOG
- ❌ Use workarounds - fix the root cause
- ❌ Skip migrations after model changes
### **DO:**
- ✅ Read documentation before coding
- ✅ Follow existing patterns
- ✅ Use proper base classes
- ✅ Check permissions and limits
- ✅ Handle errors gracefully
- ✅ Return valid errors, not fallbacks
- ✅ Update CHANGELOG for every change
- ✅ Test multi-tenancy isolation
- ✅ Use TypeScript types
- ✅ Write clean, documented code
---
## 🔍 Code Review Checklist
Before committing code, verify:
- [ ] Follows existing code patterns
- [ ] Properly scoped by account/site
- [ ] Permissions checked
- [ ] Error handling implemented
- [ ] No breaking changes
- [ ] CHANGELOG.md updated
- [ ] Version number incremented
- [ ] Documentation updated (if needed)
- [ ] Tested locally
- [ ] No console errors or warnings
- [ ] TypeScript types added/updated
- [ ] Migrations created (if model changes)
---
## 📚 Key Architecture Concepts
### **Credit System:**
- All AI operations cost credits
- Check credits before operation: `CreditService.check_credits()`
- Deduct after operation: `CreditService.deduct_credits()`
- Track in `CreditUsageLog` table
### **Limit System:**
- Hard limits: Persistent (sites, users, keywords, clusters)
- Monthly limits: Reset on billing cycle (ideas, words, images)
- Track in `PlanLimitUsage` table
- Check before operation: `LimitService.check_limit()`
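The check-then-deduct flow described in the two sections above can be sketched in plain Python (the real `CreditService` and `LimitService` are Django-backed; the class and method bodies here are illustrative stand-ins, not the actual implementation):

```python
class InsufficientCredits(Exception):
    pass

class CreditServiceSketch:
    """Illustrative stand-in mirroring the documented check/deduct API."""

    def __init__(self, balance: int):
        self.balance = balance
        self.usage_log = []  # stands in for CreditUsageLog rows

    def check_credits(self, cost: int) -> None:
        # Check BEFORE the AI operation runs
        if self.balance < cost:
            raise InsufficientCredits(f"need {cost}, have {self.balance}")

    def deduct_credits(self, cost: int, operation: str) -> None:
        # Deduct AFTER the operation succeeds, and track usage
        self.balance -= cost
        self.usage_log.append((operation, cost))

svc = CreditServiceSketch(balance=10)
svc.check_credits(4)                    # raises InsufficientCredits if short
svc.deduct_credits(4, "generate_image")
print(svc.balance)  # → 6
```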
### **AI Framework:**
- Use `ai/engine.py` for AI operations
- Use `ai/functions/` for specific AI tasks
- Use `ai/models.py` for tracking
- Don't use legacy `utils/ai_processor.py`
### **Multi-Tenancy:**
- Every request has `request.user.account`
- All models scope by account directly or via site
- Use `AccountBaseModel` or `SiteSectorBaseModel`
- Validate ownership before mutations
---
## 🎨 Design System
### **Colors:**
- Primary: Blue (#0693e3)
- Success: Green (#0bbf87)
- Error: Red (#ef4444)
- Warning: Yellow (#f59e0b)
- Info: Blue (#3b82f6)
### **Typography:**
- Headings: font-bold
- Body: font-normal
- Small text: text-sm
- Large text: text-lg, text-xl, text-2xl
### **Spacing:**
- Padding: p-4, p-6 (standard)
- Margin: mt-4, mb-6 (standard)
- Gap: gap-4, gap-6 (standard)
### **Components:**
- Card: `<Card>` with padding and shadow
- Button: `<Button>` with variants (primary, secondary, danger)
- Input: `<Input>` with proper validation
- Badge: `<Badge>` with color variants
---
## 📞 Support & Questions
- Architecture questions → Check `ARCHITECTURE-KNOWLEDGE-BASE.md`
- Feature questions → Check `IGNY8-COMPLETE-FEATURES-GUIDE.md`
- API questions → Check `docs/20-API/`
- Recent changes → Check `CHANGELOG.md`
---
**Remember:** Quality over speed. Take time to understand existing patterns before implementing new features.

.rules Normal file

@@ -0,0 +1,362 @@
# IGNY8 AI Agent Rules
**Version:** 1.2.0 | **Updated:** January 2, 2026
---
## 🚀 Quick Start for AI Agents
**BEFORE any change, read these docs in order:**
1. [docs/INDEX.md](docs/INDEX.md) - Quick navigation to any module/feature
2. [docs/30-FRONTEND/COMPONENT-SYSTEM.md](docs/30-FRONTEND/COMPONENT-SYSTEM.md) - **REQUIRED** for any frontend work
3. [docs/30-FRONTEND/DESIGN-TOKENS.md](docs/30-FRONTEND/DESIGN-TOKENS.md) - Color tokens and styling rules
4. Module doc for the feature you're modifying (see INDEX.md for paths)
5. [CHANGELOG.md](CHANGELOG.md) - Recent changes and version history
---
## 📁 Project Structure
| Layer | Path | Purpose |
|-------|------|---------|
| Backend | `backend/igny8_core/` | Django REST API |
| Frontend | `frontend/src/` | React + TypeScript SPA |
| Docs | `docs/` | Technical documentation |
| AI Engine | `backend/igny8_core/ai/` | AI functions (use this, NOT `utils/ai_processor.py`) |
| Design Tokens | `frontend/src/styles/design-system.css` | **Single source** for colors, shadows, typography |
| UI Components | `frontend/src/components/ui/` | Button, Badge, Card, Modal, etc. |
| Form Components | `frontend/src/components/form/` | InputField, Select, Checkbox, Switch |
| Icons | `frontend/src/icons/` | All SVG icons (import from `../../icons`) |
**Module → File Quick Reference:** See [docs/INDEX.md](docs/INDEX.md#module--file-quick-reference)
---
## ⚠️ Module Status
| Module | Status | Notes |
|--------|--------|-------|
| Planner | ✅ Active | Keywords, Clusters, Ideas |
| Writer | ✅ Active | Tasks, Content, Images |
| Automation | ✅ Active | 7-stage pipeline |
| Billing | ✅ Active | Credits, Plans |
| Publisher | ✅ Active | WordPress publishing |
| **Linker** | ⏸️ Inactive | Exists but disabled - Phase 2 |
| **Optimizer** | ⏸️ Inactive | Exists but disabled - Phase 2 |
| **SiteBuilder** | ❌ Removed | Code exists but NOT part of app - mark for removal in TODOS.md |
**Important:**
- Do NOT work on Linker/Optimizer unless specifically requested
- SiteBuilder code is deprecated - if found, add to `TODOS.md` for cleanup
---
## 🎨 DESIGN SYSTEM RULES (CRITICAL!)
> **🔒 STYLE LOCKED** - All UI must use the design system. ESLint enforces these rules.
### Color System (Only 6 Base Colors!)
All colors in the system derive from 6 primary hex values in `design-system.css`:
- `--color-primary` (#0077B6) - Brand Blue
- `--color-success` (#2CA18E) - Success Green
- `--color-warning` (#D9A12C) - Warning Amber
- `--color-danger` (#A12C40) - Danger Red
- `--color-purple` (#2C40A1) - Purple accent
- `--color-gray-base` (#667085) - Neutral gray
### Tailwind Color Classes
**✅ USE ONLY THESE** (Tailwind defaults are DISABLED):
```
brand-* (50-950) - Primary blue scale
gray-* (25-950) - Neutral scale
success-* (25-950) - Green scale
error-* (25-950) - Red scale
warning-* (25-950) - Amber scale
purple-* (25-950) - Purple scale
```
**❌ BANNED** (These will NOT work):
```
blue-*, red-*, green-*, emerald-*, amber-*, indigo-*,
pink-*, rose-*, sky-*, teal-*, cyan-*, etc.
```
### Styling Rules
| ✅ DO | ❌ DON'T |
|-------|---------|
| `className="bg-brand-500"` | `className="bg-blue-500"` |
| `className="text-gray-700"` | `className="text-[#333]"` |
| `<Button variant="primary">` | `<button className="...">` |
| Import from `../../icons` | Import from `@heroicons/*` |
| Use CSS variables `var(--color-primary)` | Hardcode hex values |
---
## 🧩 COMPONENT RULES (ESLint Enforced!)
> **Never use raw HTML elements** - Use design system components.
### Required Component Mappings
| HTML Element | Required Component | Import Path |
|--------------|-------------------|-------------|
| `<button>` | `Button` or `IconButton` | `components/ui/button/Button` |
| `<input type="text/email/password">` | `InputField` | `components/form/input/InputField` |
| `<input type="checkbox">` | `Checkbox` | `components/form/input/Checkbox` |
| `<input type="radio">` | `Radio` | `components/form/input/Radio` |
| `<select>` | `Select` or `SelectDropdown` | `components/form/Select` |
| `<textarea>` | `TextArea` | `components/form/input/TextArea` |
### Component Quick Reference
```tsx
// Buttons
<Button variant="primary" tone="brand">Save</Button>
<Button variant="outline" tone="danger">Delete</Button>
<IconButton icon={<CloseIcon />} variant="ghost" title="Close" />
// Form Inputs
<InputField type="text" label="Name" value={val} onChange={setVal} />
<Select options={opts} onChange={setVal} />
<Checkbox label="Accept" checked={val} onChange={setVal} />
<Switch label="Enable" checked={val} onChange={setVal} />
// Display
<Badge tone="success" variant="soft">Active</Badge>
<Alert variant="error" title="Error" message="Failed" />
<Spinner size="md" />
```
### Icon Rules
**Always import from central location:**
```tsx
// ✅ CORRECT
import { PlusIcon, CloseIcon, CheckCircleIcon } from '../../icons';
// ❌ BANNED - External icon libraries
import { XIcon } from '@heroicons/react/24/outline';
import { Trash } from 'lucide-react';
```
**Icon sizing:**
- `className="w-4 h-4"` - In buttons, badges
- `className="w-5 h-5"` - Standalone
- `className="w-6 h-6"` - Headers, features
---
## 🐳 Docker Commands (IMPORTANT!)
**Container Names:**
| Container | Name | Purpose |
|-----------|------|---------|
| Backend | `igny8_backend` | Django API server |
| Frontend | `igny8_frontend` | React dev server |
| Celery Worker | `igny8_celery_worker` | Background tasks |
| Celery Beat | `igny8_celery_beat` | Scheduled tasks |
**Run commands INSIDE containers:**
```bash
# ✅ CORRECT - Run Django management commands
docker exec -it igny8_backend python manage.py migrate
docker exec -it igny8_backend python manage.py makemigrations
docker exec -it igny8_backend python manage.py shell
# ✅ CORRECT - Run npm commands
docker exec -it igny8_frontend npm install
docker exec -it igny8_frontend npm run build
docker exec -it igny8_frontend npm run lint # Check design system violations
# ✅ CORRECT - View logs
docker logs igny8_backend -f
docker logs igny8_celery_worker -f
# ❌ WRONG - Don't use docker-compose for commands
# docker-compose exec backend python manage.py migrate
```
---
## 📊 Data Scoping (CRITICAL!)
**Understand which data is scoped where:**
| Scope | Models | Notes |
|-------|--------|-------|
| **Global (Platform-wide)** | `GlobalIntegrationSettings`, `GlobalAIPrompt`, `GlobalAuthorProfile`, `GlobalStrategy`, `GlobalModuleSettings`, `Industry`, `SeedKeyword` | Admin-only, shared by ALL accounts |
| **Account-scoped** | `Account`, `User`, `Plan`, `IntegrationSettings`, `ModuleEnableSettings`, `AISettings`, `AIPrompt`, `AuthorProfile` | Filter by `account` |
| **Site+Sector-scoped** | `Keywords`, `Clusters`, `ContentIdeas`, `Tasks`, `Content`, `Images` | Filter by `site` AND optionally `sector` |
**Key Rules:**
- Global settings: NO account filtering (platform-wide, admin managed)
- Account models: Use `AccountBaseModel`, filter by `request.user.account`
- Site/Sector models: Use `SiteSectorBaseModel`, filter by `site` and `sector`
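The scoping rules above can be sketched with a plain-Python stand-in (illustrative only; in the real codebase this filtering happens in Django `get_queryset()` methods on `AccountBaseModel` / `SiteSectorBaseModel` querysets):

```python
# Given a model's scope, apply the right tenant filters to a row set.
def scope_filter(rows, scope, *, account=None, site=None, sector=None):
    if scope == "global":
        return list(rows)  # platform-wide: NO tenant filtering
    if scope == "account":
        return [r for r in rows if r["account"] == account]
    # site/sector scope: always filter by site, optionally by sector
    out = [r for r in rows if r["site"] == site]
    if sector is not None:
        out = [r for r in out if r["sector"] == sector]
    return out

rows = [
    {"account": 1, "site": "a", "sector": "tech"},
    {"account": 2, "site": "b", "sector": "health"},
]
print(scope_filter(rows, "account", account=1))
```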
---
## ✅ Rules (One Line Each)
### Before Coding
1. **Read docs first** - Always read the relevant module doc from `docs/10-MODULES/` before changing code
2. **Read COMPONENT-SYSTEM.md** - **REQUIRED** before any frontend changes
3. **Check existing patterns** - Search codebase for similar implementations before creating new ones
4. **Use existing components** - Never duplicate; reuse components from `frontend/src/components/`
5. **Check data scope** - Know if your model is Global, Account, or Site/Sector scoped (see table above)
### During Coding - Backend
6. **Use correct base class** - Global: `models.Model`, Account: `AccountBaseModel`, Site: `SiteSectorBaseModel`
7. **Use AI framework** - Use `backend/igny8_core/ai/` for AI operations, NOT legacy `utils/ai_processor.py`
8. **Follow service pattern** - Business logic in `backend/igny8_core/business/*/services/`
9. **Check permissions** - Use `IsAuthenticatedAndActive`, `HasTenantAccess` in views
### During Coding - Frontend (DESIGN SYSTEM)
10. **Use design system components** - Button, InputField, Select, Badge, Card - never raw HTML
11. **Use only design system colors** - `brand-*`, `gray-*`, `success-*`, `error-*`, `warning-*`, `purple-*`
12. **Import icons from central location** - `import { Icon } from '../../icons'` - never external libraries
13. **No inline styles** - Use Tailwind utilities or CSS variables only
14. **No hardcoded colors** - No hex values, no `blue-500`, `red-500` (Tailwind defaults disabled)
15. **Use TypeScript types** - All frontend code must be typed
### After Coding
16. **Run ESLint** - `docker exec -it igny8_frontend npm run lint` to check design system violations
17. **Update CHANGELOG.md** - Every commit needs a changelog entry with git reference
18. **Increment version** - PATCH for fixes, MINOR for features, MAJOR for breaking changes
19. **Update docs** - If you changed APIs or architecture, update relevant docs in `docs/`
20. **Run migrations** - After model changes: `docker exec -it igny8_backend python manage.py makemigrations`
---
## 📝 Changelog Format
```markdown
## v1.1.1 - December 27, 2025
### Fixed
- Description here (git: abc1234)
### Added
- Description here (git: def5678)
### Changed
- Description here (git: ghi9012)
```
---
## 🔗 Key Documentation
| I want to... | Go to |
|--------------|-------|
| Find any module | [docs/INDEX.md](docs/INDEX.md) |
| **Use UI components** | [docs/30-FRONTEND/COMPONENT-SYSTEM.md](docs/30-FRONTEND/COMPONENT-SYSTEM.md) |
| **Check design tokens** | [docs/30-FRONTEND/DESIGN-TOKENS.md](docs/30-FRONTEND/DESIGN-TOKENS.md) |
| **Design guide** | [docs/30-FRONTEND/DESIGN-GUIDE.md](docs/30-FRONTEND/DESIGN-GUIDE.md) |
| Understand architecture | [docs/00-SYSTEM/ARCHITECTURE.md](docs/00-SYSTEM/ARCHITECTURE.md) |
| Find an API endpoint | [docs/20-API/ENDPOINTS.md](docs/20-API/ENDPOINTS.md) |
| See all models | [docs/90-REFERENCE/MODELS.md](docs/90-REFERENCE/MODELS.md) |
| Understand AI functions | [docs/90-REFERENCE/AI-FUNCTIONS.md](docs/90-REFERENCE/AI-FUNCTIONS.md) |
| See frontend pages | [docs/30-FRONTEND/PAGES.md](docs/30-FRONTEND/PAGES.md) |
| See recent changes | [CHANGELOG.md](CHANGELOG.md) |
| View component demos | App route: `/ui-elements` |
---
## 🚫 Don't Do
### General
- ❌ Skip reading docs before coding
- ❌ Create duplicate components
- ❌ Use `docker-compose` for exec commands (use `docker exec`)
- ❌ Use legacy `utils/ai_processor.py`
- ❌ Add account filtering to Global models (they're platform-wide!)
- ❌ Forget site/sector filtering on content models
- ❌ Forget to update CHANGELOG
- ❌ Hardcode values (use settings/constants)
- ❌ Work on Linker/Optimizer (inactive modules - Phase 2)
- ❌ Use any SiteBuilder code (deprecated - mark for removal)
### Frontend - DESIGN SYSTEM VIOLATIONS
- ❌ Use raw `<button>` - use `Button` or `IconButton`
- ❌ Use raw `<input>` - use `InputField`, `Checkbox`, `Radio`
- ❌ Use raw `<select>` - use `Select` or `SelectDropdown`
- ❌ Use raw `<textarea>` - use `TextArea`
- ❌ Use inline `style={}` attributes
- ❌ Hardcode hex colors (`#0693e3`, `#ff0000`)
- ❌ Use Tailwind default colors (`blue-500`, `red-500`, `green-500`)
- ❌ Import from `@heroicons/*`, `lucide-react`, `@mui/icons-material`
- ❌ Create new CSS files (use `design-system.css` only)
---
## 📊 API Base URLs
| Module | Base URL |
|--------|----------|
| Auth | `/api/v1/auth/` |
| Planner | `/api/v1/planner/` |
| Writer | `/api/v1/writer/` |
| Billing | `/api/v1/billing/` |
| Integration | `/api/v1/integration/` |
| System | `/api/v1/system/` |
**API Docs:** https://api.igny8.com/api/docs/
**Admin:** https://api.igny8.com/admin/
**App:** https://app.igny8.com/
---
## 📄 Documentation Rules
**Root folder MD files allowed (ONLY these):**
- `.rules` - AI agent rules (this file)
- `CHANGELOG.md` - Version history
- `README.md` - Project quickstart
**All other docs go in `/docs/` folder:**
```
docs/
├── INDEX.md # Master navigation
├── 00-SYSTEM/ # Architecture, auth, tenancy, IGNY8-APP.md
├── 10-MODULES/ # One file per module
├── 20-API/ # API endpoints
├── 30-FRONTEND/ # Pages, stores, DESIGN-GUIDE, DESIGN-TOKENS, COMPONENT-SYSTEM
├── 40-WORKFLOWS/ # Cross-module flows
├── 90-REFERENCE/ # Models, AI functions, FIXES-KB
└── plans/ # FINAL-PRELAUNCH, implementation plans
```
**When updating docs:**
| Change Type | Update These Files |
|-------------|-------------------|
| New endpoint | Module doc + `docs/20-API/ENDPOINTS.md` |
| New model | Module doc + `docs/90-REFERENCE/MODELS.md` |
| New page | Module doc + `docs/30-FRONTEND/PAGES.md` |
| New module | Create module doc + update `docs/INDEX.md` |
**DO NOT** create random MD files - update existing docs instead.
---
## 🎯 Quick Checklist Before Commit
### Backend Changes
- [ ] Read relevant module docs
- [ ] Correct data scope (Global/Account/Site)
- [ ] Ran migrations if model changed
### Frontend Changes
- [ ] Read COMPONENT-SYSTEM.md
- [ ] Used design system components (not raw HTML)
- [ ] Used design system colors (brand-*, gray-*, success-*, error-*, warning-*, purple-*)
- [ ] Icons imported from `../../icons`
- [ ] No inline styles or hardcoded hex colors
- [ ] Ran `npm run lint` - no design system violations
### All Changes
- [ ] Updated CHANGELOG.md with git reference
- [ ] Incremented version number
- [ ] Tested locally

File diff suppressed because it is too large


@@ -1,174 +0,0 @@
# Container Restart and Auto-Logout Debugging Setup
## Overview
Added comprehensive logging to track container restarts and automatic user logouts.
## Changes Made
### 1. Container Lifecycle Logging
#### Backend Container (`backend/container_startup.sh`)
- Logs container startup time, hostname, and PID
- Detects and logs container restarts (by checking for previous PID file)
- Logs environment configuration (Python version, Django settings, DB host)
- Warns when restarts are detected and suggests checking Docker logs for SIGTERM signals
- Integrated into Dockerfile as ENTRYPOINT
#### Frontend Container (`frontend/container_startup.sh`)
- Logs container startup time, hostname, and PID
- Detects and logs container restarts
- Logs Node/NPM versions and Vite configuration
- Checks for git directory presence and warns about file watching
- Shows last git commit when detected
- Integrated into Dockerfile.dev as ENTRYPOINT
### 2. Vite File Watching Fix (`frontend/vite.config.ts`)
**ROOT CAUSE IDENTIFIED:** `usePolling: true` was watching ALL files including `.git` directory, causing container restarts on git commits.
**Fix Applied:**
```typescript
watch: {
  usePolling: true,
  ignored: [
    '**/node_modules/**',
    '**/.git/**', // CRITICAL: Ignore git directory
    '**/dist/**',
    '**/build/**',
    '**/.vscode/**',
    '**/.idea/**',
  ],
  interval: 1000, // Poll every 1 second instead of default
}
```
### 3. Auto-Logout Logging (`backend/igny8_core/auth/middleware.py`)
Added detailed logging for all automatic logout scenarios:
#### Session Contamination - Account ID Mismatch
```python
logger.warning(
    f"[AUTO-LOGOUT] Session contamination: account_id mismatch. "
    f"Session={stored_account_id}, Current={request.account.id}, "
    f"User={request.user.id}, Path={request.path}, IP={request.META.get('REMOTE_ADDR')}"
)
```
#### Session Contamination - User ID Mismatch
```python
logger.warning(
    f"[AUTO-LOGOUT] Session contamination: user_id mismatch. "
    f"Session={stored_user_id}, Current={request.user.id}, "
    f"Account={request.account.id if request.account else None}, "
    f"Path={request.path}, IP={request.META.get('REMOTE_ADDR')}"
)
```
#### Account/Plan Validation Failures
```python
logger.warning(
    f"[AUTO-LOGOUT] Account/plan validation failed: {error}. "
    f"User={request.user.id}, Account={getattr(request, 'account', None)}, "
    f"Path={request.path}, IP={request.META.get('REMOTE_ADDR')}"
)
```
### 4. Logging Configuration (`backend/igny8_core/settings.py`)
Added two new loggers:
- `auth.middleware` - Captures all authentication and auto-logout events
- `container.lifecycle` - Captures container startup/restart events
Both log to console (captured by Docker logs).
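For reference, a minimal sketch of how those two loggers might be wired up in `settings.py`. The handler name and log levels below are illustrative assumptions, not copied from the repo:

```python
import logging
import logging.config

# Hypothetical sketch: console-only handlers so Docker captures all output.
# Logger names match the ones described above; levels are assumptions.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},  # stdout -> docker logs
    },
    "loggers": {
        "auth.middleware": {"handlers": ["console"], "level": "WARNING"},
        "container.lifecycle": {"handlers": ["console"], "level": "INFO"},
    },
}

logging.config.dictConfig(LOGGING)
logging.getLogger("auth.middleware").warning("[AUTO-LOGOUT] example event")
```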
## How to Use
### Viewing Container Restart Logs
```bash
# Check backend container logs for restart events
docker logs igny8_backend 2>&1 | grep "CONTAINER-STARTUP\|RESTART"
# Check frontend container logs for restart events
docker logs igny8_frontend 2>&1 | grep "CONTAINER-STARTUP\|RESTART"
# Monitor in real-time
docker logs -f igny8_backend
```
### Viewing Auto-Logout Logs
```bash
# Check for automatic logout events
docker logs igny8_backend 2>&1 | grep "AUTO-LOGOUT"
# Filter by specific logout reason
docker logs igny8_backend 2>&1 | grep "AUTO-LOGOUT.*contamination"
docker logs igny8_backend 2>&1 | grep "AUTO-LOGOUT.*validation failed"
# See recent logout events with context
docker logs --since 1h igny8_backend 2>&1 | grep -A2 -B2 "AUTO-LOGOUT"
```
### Correlating Events
```bash
# See both container restarts and logouts together
docker logs --since 2h igny8_backend 2>&1 | grep -E "CONTAINER-STARTUP|AUTO-LOGOUT|SIGTERM"
# Check if git commits correlate with restarts
git log --since="2 hours ago" --format="%ai %s" && \
docker logs --since 2h igny8_frontend 2>&1 | grep "CONTAINER-STARTUP"
```
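The same correlation can be done offline in Python when working from saved log dumps instead of live `docker logs`. A small sketch — the sample lines below are invented for illustration:

```python
import re

# Filter log lines down to the event markers used for correlation.
MARKERS = ("CONTAINER-STARTUP", "AUTO-LOGOUT", "SIGTERM")

def filter_events(lines):
    """Keep only lines carrying one of the correlation markers."""
    pattern = re.compile("|".join(MARKERS))
    return [line for line in lines if pattern.search(line)]

sample = [
    "2026-01-09 16:40:01 [CONTAINER-STARTUP] frontend pid=1",
    "2026-01-09 16:40:05 GET /api/v1/auth/check/ 200",
    "2026-01-09 16:40:09 [AUTO-LOGOUT] Session contamination: user_id mismatch.",
]
events = filter_events(sample)
print(len(events))  # prints 2
```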
## Next Steps to Deploy
1. **Rebuild Docker images:**
```bash
cd /data/app/igny8/backend
docker build -t igny8-backend:latest -f Dockerfile .
cd /data/app/igny8/frontend
docker build -t igny8-frontend-dev:latest -f Dockerfile.dev .
```
2. **Restart containers:**
```bash
cd /data/app/igny8
docker compose -f docker-compose.app.yml down
docker compose -f docker-compose.app.yml up -d
```
3. **Verify logging is working:**
```bash
docker logs igny8_backend 2>&1 | head -30
docker logs igny8_frontend 2>&1 | head -30
```
4. **Test git commit trigger (should NOT restart now):**
```bash
cd /data/app/igny8
echo "test" >> README.md
git add README.md
git commit -m "test commit"
# Wait 5 seconds and check - containers should NOT restart
sleep 5
docker ps --filter "name=igny8" --format "{{.Names}}: {{.Status}}"
```
## Expected Outcomes
1. **Git commits should NO LONGER trigger container restarts** because `.git` is now ignored by Vite's file watcher
2. **Every container restart will be logged** with timestamp and reason
3. **Every automatic logout will be logged** with user ID, account ID, reason, path, and IP address
4. **You can correlate restarts with git operations** to verify the fix is working
## Troubleshooting
If containers still restart after git commits:
1. Check if the new images were built and deployed
2. Verify the vite.config.ts changes are present in the running container
3. Check Docker logs to see what triggered the restart
4. Look for HMR messages in frontend logs mentioning `.git` files
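Step 2 above can be automated with a quick sanity check on the config text. A hedged sketch — in practice you would read `vite.config.ts` out of the running container (e.g. via `docker exec`); here an inline sample stands in for the file:

```python
# Hypothetical sample standing in for the vite.config.ts inside the container.
config_text = """
watch: {
  usePolling: true,
  ignored: ['**/node_modules/**', '**/.git/**'],
}
"""

def git_ignored(text):
    """True when the watcher config lists the .git ignore glob."""
    return "'**/.git/**'" in text

print(git_ignored(config_text))  # prints True
```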


@@ -1,11 +0,0 @@
## 🔴 AI Functions progress modal texts and counts to be fixed
## 🔴 Automation queue: completed count to be fixed when run manually; progress bar to be improved and fixed based on actual stage; other data has bugs
## 🔴 Align prompts with the strategy
## 🔴 User randomly logs out often
## 🔴 Marketing site content
## 🔴 Documentation and help updates


@@ -1,9 +1,19 @@
# IGNY8 - AI-Powered SEO Content Platform
**Version:** 1.0.0
**Version:** 1.0.5
**License:** Proprietary
**Website:** https://igny8.com
---
## Quick Links
| Document | Description |
|----------|-------------|
| [docs/00-SYSTEM/IGNY8-APP.md](docs/00-SYSTEM/IGNY8-APP.md) | Executive summary (non-technical) |
| [docs/INDEX.md](docs/INDEX.md) | Full documentation index |
| [CHANGELOG.md](CHANGELOG.md) | Version history |
| [.rules](.rules) | AI agent rules |
---
@@ -16,8 +26,8 @@ IGNY8 is a full-stack SaaS platform that combines AI-powered content generation
- 🔍 **Smart Keyword Management** - Import, cluster, and organize keywords with AI
- ✍️ **AI Content Generation** - Generate SEO-optimized blog posts using GPT-4
- 🖼️ **AI Image Creation** - Auto-generate featured and in-article images
- 🔗 **Internal Linking** - AI-powered link suggestions for SEO
- 📊 **Content Optimization** - Analyze and score content quality
- 🔗 **Internal Linking** - AI-powered link suggestions (coming soon)
- 📊 **Content Optimization** - Analyze and score content quality (coming soon)
- 🔄 **WordPress Integration** - Bidirectional sync with WordPress sites
- 📈 **Usage-Based Billing** - Credit system for AI operations
- 👥 **Multi-Tenancy** - Manage multiple sites and teams
@@ -26,14 +36,24 @@ IGNY8 is a full-stack SaaS platform that combines AI-powered content generation
## Repository Structure
This monorepo contains two main applications and documentation:
```
igny8/
├── README.md # This file
├── CHANGELOG.md # Version history
├── .rules # AI agent rules
├── backend/ # Django REST API + Celery
├── frontend/ # React + Vite SPA
├── docs/ # Documentation index and topic folders
└── docker-compose.app.yml # Docker deployment config
├── docs/ # Full documentation
│ ├── INDEX.md # Documentation navigation
│ ├── 00-SYSTEM/ # Architecture, auth, IGNY8-APP
│ ├── 10-MODULES/ # Module documentation
│ ├── 20-API/ # API endpoints
│ ├── 30-FRONTEND/ # Frontend pages, stores, design system
│ ├── 40-WORKFLOWS/ # Cross-module workflows
│ ├── 50-DEPLOYMENT/ # Deployment guides
│ ├── 90-REFERENCE/ # Models, AI functions, fixes
│ └── plans/ # Implementation plans
└── docker-compose.app.yml
```
**Separate Repository:**


@@ -41,6 +41,11 @@ class Igny8AdminConfig(AdminConfig):
        admin_site._actions = old_site._actions.copy()
        admin_site._global_actions = old_site._global_actions.copy()
        # CRITICAL: Update each ModelAdmin's admin_site attribute to point to our custom site
        # Otherwise, each_context() will use the wrong admin site and miss our customizations
        for model, model_admin in admin_site._registry.items():
            model_admin.admin_site = admin_site
        # Now replace the default site
        admin_module.site = admin_site
        admin_module.sites.site = admin_site


@@ -145,7 +145,16 @@ class Igny8ModelAdmin(UnfoldModelAdmin):
        for group in sidebar_navigation:
            group_is_active = False
            for item in group.get('items', []):
                item_link = item.get('link', '')
                # Unfold stores resolved link in 'link_callback', original lambda in 'link'
                item_link = item.get('link_callback') or item.get('link', '')
                # Convert to string (handles lazy proxy objects and ensures it's a string)
                try:
                    item_link = str(item_link) if item_link else ''
                except:
                    item_link = ''
                # Skip if it's a function representation (e.g., "<function ...>")
                if item_link.startswith('<'):
                    continue
                # Check if current path matches this item's link
                if item_link and current_path.startswith(item_link):
                    item['active'] = True


@@ -0,0 +1,406 @@
"""
Admin Monitoring Module - System Health, API Monitor, Debug Console
Provides read-only monitoring and debugging tools for Django Admin
"""
from django.shortcuts import render
from django.contrib.admin.views.decorators import staff_member_required
from django.utils import timezone
from django.db import connection
from django.conf import settings
import time
import os


@staff_member_required
def system_health_dashboard(request):
    """
    System infrastructure health monitoring
    Checks: Database, Redis, Celery, File System
    """
    context = {
        'page_title': 'System Health Monitor',
        'checked_at': timezone.now(),
        'checks': []
    }

    # Database Check
    db_check = {
        'name': 'PostgreSQL Database',
        'status': 'unknown',
        'message': '',
        'details': {}
    }
    try:
        start = time.time()
        with connection.cursor() as cursor:
            cursor.execute("SELECT version()")
            version = cursor.fetchone()[0]
            cursor.execute("SELECT COUNT(*) FROM django_session")
            session_count = cursor.fetchone()[0]
        elapsed = (time.time() - start) * 1000
        db_check.update({
            'status': 'healthy',
            'message': f'Connected ({elapsed:.2f}ms)',
            'details': {
                'version': version.split('\n')[0],
                'response_time': f'{elapsed:.2f}ms',
                'active_sessions': session_count
            }
        })
    except Exception as e:
        db_check.update({
            'status': 'error',
            'message': f'Connection failed: {str(e)}'
        })
    context['checks'].append(db_check)

    # Redis Check
    redis_check = {
        'name': 'Redis Cache',
        'status': 'unknown',
        'message': '',
        'details': {}
    }
    try:
        import redis
        r = redis.Redis(
            host=settings.CACHES['default']['LOCATION'].split(':')[0] if ':' in settings.CACHES['default'].get('LOCATION', '') else 'redis',
            port=6379,
            db=0,
            socket_connect_timeout=2
        )
        start = time.time()
        r.ping()
        elapsed = (time.time() - start) * 1000
        info = r.info()
        redis_check.update({
            'status': 'healthy',
            'message': f'Connected ({elapsed:.2f}ms)',
            'details': {
                'version': info.get('redis_version', 'unknown'),
                'uptime': f"{info.get('uptime_in_seconds', 0) // 3600}h",
                'connected_clients': info.get('connected_clients', 0),
                'used_memory': f"{info.get('used_memory_human', 'unknown')}",
                'response_time': f'{elapsed:.2f}ms'
            }
        })
    except Exception as e:
        redis_check.update({
            'status': 'error',
            'message': f'Connection failed: {str(e)}'
        })
    context['checks'].append(redis_check)

    # Celery Workers Check
    celery_check = {
        'name': 'Celery Workers',
        'status': 'unknown',
        'message': '',
        'details': {}
    }
    try:
        from igny8_core.celery import app
        inspect = app.control.inspect(timeout=2)
        stats = inspect.stats()
        active = inspect.active()
        if stats:
            worker_count = len(stats)
            total_tasks = sum(len(tasks) for tasks in active.values()) if active else 0
            celery_check.update({
                'status': 'healthy',
                'message': f'{worker_count} worker(s) active',
                'details': {
                    'workers': worker_count,
                    'active_tasks': total_tasks,
                    'worker_names': list(stats.keys())
                }
            })
        else:
            celery_check.update({
                'status': 'warning',
                'message': 'No workers responding'
            })
    except Exception as e:
        celery_check.update({
            'status': 'error',
            'message': f'Check failed: {str(e)}'
        })
    context['checks'].append(celery_check)

    # File System Check
    fs_check = {
        'name': 'File System',
        'status': 'unknown',
        'message': '',
        'details': {}
    }
    try:
        import shutil
        media_root = settings.MEDIA_ROOT
        static_root = settings.STATIC_ROOT
        media_stat = shutil.disk_usage(media_root) if os.path.exists(media_root) else None
        if media_stat:
            free_gb = media_stat.free / (1024**3)
            total_gb = media_stat.total / (1024**3)
            used_percent = (media_stat.used / media_stat.total) * 100
            fs_check.update({
                'status': 'healthy' if used_percent < 90 else 'warning',
                'message': f'{free_gb:.1f}GB free of {total_gb:.1f}GB',
                'details': {
                    'media_root': media_root,
                    'free_space': f'{free_gb:.1f}GB',
                    'total_space': f'{total_gb:.1f}GB',
                    'used_percent': f'{used_percent:.1f}%'
                }
            })
        else:
            fs_check.update({
                'status': 'warning',
                'message': 'Media directory not found'
            })
    except Exception as e:
        fs_check.update({
            'status': 'error',
            'message': f'Check failed: {str(e)}'
        })
    context['checks'].append(fs_check)

    # Overall system status
    statuses = [check['status'] for check in context['checks']]
    if 'error' in statuses:
        context['overall_status'] = 'error'
        context['overall_message'] = 'System has errors'
    elif 'warning' in statuses:
        context['overall_status'] = 'warning'
        context['overall_message'] = 'System has warnings'
    else:
        context['overall_status'] = 'healthy'
        context['overall_message'] = 'All systems operational'
    return render(request, 'admin/monitoring/system_health.html', context)


@staff_member_required
def api_monitor_dashboard(request):
    """
    API endpoint health monitoring
    Tests key endpoints and displays response times
    """
    from django.test.client import Client
    context = {
        'page_title': 'API Monitor',
        'checked_at': timezone.now(),
        'endpoint_groups': []
    }

    # Define endpoint groups to check
    endpoint_configs = [
        {
            'name': 'Authentication',
            'endpoints': [
                {'path': '/api/v1/auth/check/', 'method': 'GET', 'auth_required': False},
            ]
        },
        {
            'name': 'System Settings',
            'endpoints': [
                {'path': '/api/v1/system/health/', 'method': 'GET', 'auth_required': False},
            ]
        },
        {
            'name': 'Planner Module',
            'endpoints': [
                {'path': '/api/v1/planner/keywords/', 'method': 'GET', 'auth_required': True},
            ]
        },
        {
            'name': 'Writer Module',
            'endpoints': [
                {'path': '/api/v1/writer/tasks/', 'method': 'GET', 'auth_required': True},
            ]
        },
        {
            'name': 'Billing',
            'endpoints': [
                {'path': '/api/v1/billing/credits/balance/', 'method': 'GET', 'auth_required': True},
            ]
        },
    ]

    client = Client()
    for group_config in endpoint_configs:
        group_results = {
            'name': group_config['name'],
            'endpoints': []
        }
        for endpoint in group_config['endpoints']:
            result = {
                'path': endpoint['path'],
                'method': endpoint['method'],
                'status': 'unknown',
                'status_code': None,
                'response_time': None,
                'message': ''
            }
            try:
                start = time.time()
                if endpoint['method'] == 'GET':
                    response = client.get(endpoint['path'])
                else:
                    response = client.post(endpoint['path'])
                elapsed = (time.time() - start) * 1000
                result.update({
                    'status_code': response.status_code,
                    'response_time': f'{elapsed:.2f}ms',
                })
                # Determine status
                if response.status_code < 300:
                    result['status'] = 'healthy'
                    result['message'] = 'OK'
                elif response.status_code == 401 and endpoint.get('auth_required'):
                    result['status'] = 'healthy'
                    result['message'] = 'Auth required (expected)'
                elif response.status_code < 500:
                    result['status'] = 'warning'
                    result['message'] = 'Client error'
                else:
                    result['status'] = 'error'
                    result['message'] = 'Server error'
            except Exception as e:
                result.update({
                    'status': 'error',
                    'message': str(e)[:100]
                })
            group_results['endpoints'].append(result)
        context['endpoint_groups'].append(group_results)

    # Calculate overall stats
    all_endpoints = [ep for group in context['endpoint_groups'] for ep in group['endpoints']]
    total = len(all_endpoints)
    healthy = len([ep for ep in all_endpoints if ep['status'] == 'healthy'])
    warnings = len([ep for ep in all_endpoints if ep['status'] == 'warning'])
    errors = len([ep for ep in all_endpoints if ep['status'] == 'error'])
    context['stats'] = {
        'total': total,
        'healthy': healthy,
        'warnings': warnings,
        'errors': errors,
        'health_percentage': (healthy / total * 100) if total > 0 else 0
    }
    return render(request, 'admin/monitoring/api_monitor.html', context)


@staff_member_required
def debug_console(request):
    """
    System debug information (read-only)
    Shows environment, database config, cache config, etc.
    """
    context = {
        'page_title': 'Debug Console',
        'checked_at': timezone.now(),
        'sections': []
    }

    # Environment Variables Section
    env_section = {
        'title': 'Environment',
        'items': {
            'DEBUG': settings.DEBUG,
            'ENVIRONMENT': os.getenv('ENVIRONMENT', 'not set'),
            'DJANGO_SETTINGS_MODULE': os.getenv('DJANGO_SETTINGS_MODULE', 'not set'),
            'ALLOWED_HOSTS': settings.ALLOWED_HOSTS,
            'TIME_ZONE': settings.TIME_ZONE,
            'USE_TZ': settings.USE_TZ,
        }
    }
    context['sections'].append(env_section)

    # Database Configuration
    db_config = settings.DATABASES.get('default', {})
    db_section = {
        'title': 'Database Configuration',
        'items': {
            'ENGINE': db_config.get('ENGINE', 'not set'),
            'NAME': db_config.get('NAME', 'not set'),
            'HOST': db_config.get('HOST', 'not set'),
            'PORT': db_config.get('PORT', 'not set'),
            'CONN_MAX_AGE': db_config.get('CONN_MAX_AGE', 'not set'),
        }
    }
    context['sections'].append(db_section)

    # Cache Configuration
    cache_config = settings.CACHES.get('default', {})
    cache_section = {
        'title': 'Cache Configuration',
        'items': {
            'BACKEND': cache_config.get('BACKEND', 'not set'),
            'LOCATION': cache_config.get('LOCATION', 'not set'),
            'KEY_PREFIX': cache_config.get('KEY_PREFIX', 'not set'),
        }
    }
    context['sections'].append(cache_section)

    # Celery Configuration
    celery_section = {
        'title': 'Celery Configuration',
        'items': {
            'BROKER_URL': getattr(settings, 'CELERY_BROKER_URL', 'not set'),
            'RESULT_BACKEND': getattr(settings, 'CELERY_RESULT_BACKEND', 'not set'),
            'TASK_ALWAYS_EAGER': getattr(settings, 'CELERY_TASK_ALWAYS_EAGER', False),
        }
    }
    context['sections'].append(celery_section)

    # Media & Static Files
    files_section = {
        'title': 'Media & Static Files',
        'items': {
            'MEDIA_ROOT': settings.MEDIA_ROOT,
            'MEDIA_URL': settings.MEDIA_URL,
            'STATIC_ROOT': settings.STATIC_ROOT,
            'STATIC_URL': settings.STATIC_URL,
        }
    }
    context['sections'].append(files_section)

    # Installed Apps (count)
    apps_section = {
        'title': 'Installed Applications',
        'items': {
            'Total Apps': len(settings.INSTALLED_APPS),
            'Custom Apps': len([app for app in settings.INSTALLED_APPS if app.startswith('igny8_')]),
        }
    }
    context['sections'].append(apps_section)

    # Middleware (count)
    middleware_section = {
        'title': 'Middleware',
        'items': {
            'Total Middleware': len(settings.MIDDLEWARE),
        }
    }
    context['sections'].append(middleware_section)

    return render(request, 'admin/monitoring/debug_console.html', context)
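The status rollup at the end of `system_health_dashboard` is worth isolating: any error wins, then any warning, otherwise healthy. The same logic as a standalone sketch:

```python
# Standalone version of the overall-status rollup used by the dashboard.
def overall_status(checks):
    """Roll per-check statuses up: error > warning > healthy."""
    statuses = [check["status"] for check in checks]
    if "error" in statuses:
        return "error", "System has errors"
    if "warning" in statuses:
        return "warning", "System has warnings"
    return "healthy", "All systems operational"

status, message = overall_status([
    {"name": "db", "status": "healthy"},
    {"name": "redis", "status": "warning"},
])
print(status)  # prints warning
```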


@@ -82,6 +82,10 @@ def usage_report(request):
        operation_count=Count('id')
    ).order_by('-total_credits')
    # Format operation types as Title Case
    for usage in usage_by_operation:
        usage['operation_type'] = usage['operation_type'].replace('_', ' ').title() if usage['operation_type'] else 'Unknown'
    # Top credit consumers
    top_consumers = CreditUsageLog.objects.values(
        'account__name'
@@ -251,3 +255,363 @@ def data_quality_report(request):
    context.update(admin_context)
    return render(request, 'admin/reports/data_quality.html', context)
@staff_member_required
def token_usage_report(request):
    """Comprehensive token usage analytics with multi-dimensional insights"""
    from igny8_core.business.billing.models import CreditUsageLog
    from igny8_core.auth.models import Account
    from decimal import Decimal

    # Date filter setup
    days_filter = request.GET.get('days', '30')
    try:
        days = int(days_filter)
    except ValueError:
        days = 30
    start_date = timezone.now() - timedelta(days=days)

    # Base queryset - include all records (tokens may be 0 for historical data)
    logs = CreditUsageLog.objects.filter(
        created_at__gte=start_date
    )

    # Total statistics
    total_tokens_input = logs.aggregate(total=Sum('tokens_input'))['total'] or 0
    total_tokens_output = logs.aggregate(total=Sum('tokens_output'))['total'] or 0
    total_tokens = total_tokens_input + total_tokens_output
    total_calls = logs.count()
    avg_tokens_per_call = total_tokens / total_calls if total_calls > 0 else 0

    # Token usage by model
    token_by_model = logs.values('model_used').annotate(
        total_tokens_input=Sum('tokens_input'),
        total_tokens_output=Sum('tokens_output'),
        call_count=Count('id'),
        total_cost=Sum('cost_usd')
    ).order_by('-total_tokens_input')[:10]
    # Add total_tokens to each model and sort by total
    for model in token_by_model:
        model['total_tokens'] = (model['total_tokens_input'] or 0) + (model['total_tokens_output'] or 0)
        model['avg_tokens'] = model['total_tokens'] / model['call_count'] if model['call_count'] > 0 else 0
        model['model'] = model['model_used']  # Add alias for template
    token_by_model = sorted(token_by_model, key=lambda x: x['total_tokens'], reverse=True)

    # Token usage by function/operation
    token_by_function = logs.values('operation_type').annotate(
        total_tokens_input=Sum('tokens_input'),
        total_tokens_output=Sum('tokens_output'),
        call_count=Count('id'),
        total_cost=Sum('cost_usd')
    ).order_by('-total_tokens_input')[:10]
    # Add total_tokens to each function and sort by total
    for func in token_by_function:
        func['total_tokens'] = (func['total_tokens_input'] or 0) + (func['total_tokens_output'] or 0)
        func['avg_tokens'] = func['total_tokens'] / func['call_count'] if func['call_count'] > 0 else 0
        # Format operation_type as Title Case
        func['function'] = func['operation_type'].replace('_', ' ').title() if func['operation_type'] else 'Unknown'
    token_by_function = sorted(token_by_function, key=lambda x: x['total_tokens'], reverse=True)

    # Token usage by account (top consumers)
    token_by_account = logs.values('account__name', 'account_id').annotate(
        total_tokens_input=Sum('tokens_input'),
        total_tokens_output=Sum('tokens_output'),
        call_count=Count('id'),
        total_cost=Sum('cost_usd')
    ).order_by('-total_tokens_input')[:15]
    # Add total_tokens to each account and sort by total
    for account in token_by_account:
        account['total_tokens'] = (account['total_tokens_input'] or 0) + (account['total_tokens_output'] or 0)
    token_by_account = sorted(token_by_account, key=lambda x: x['total_tokens'], reverse=True)[:15]

    # Daily token trends (time series)
    daily_data = []
    daily_labels = []
    for i in range(days):
        day = timezone.now().date() - timedelta(days=days-i-1)
        day_logs = logs.filter(created_at__date=day)
        day_tokens_input = day_logs.aggregate(total=Sum('tokens_input'))['total'] or 0
        day_tokens_output = day_logs.aggregate(total=Sum('tokens_output'))['total'] or 0
        day_tokens = day_tokens_input + day_tokens_output
        daily_labels.append(day.strftime('%m/%d'))
        daily_data.append(int(day_tokens))

    # Token efficiency metrics (CreditUsageLog doesn't have error field, so assume all successful)
    success_rate = 100.0
    successful_tokens = total_tokens
    wasted_tokens = 0
    # Create tokens_by_status for template compatibility
    tokens_by_status = [{
        'error': None,
        'total_tokens': total_tokens,
        'call_count': total_calls,
        'avg_tokens': avg_tokens_per_call
    }]

    # Peak usage times (hour of day)
    hourly_usage = logs.extra(
        select={'hour': "EXTRACT(hour FROM created_at)"}
    ).values('hour').annotate(
        token_input=Sum('tokens_input'),
        token_output=Sum('tokens_output'),
        call_count=Count('id')
    ).order_by('hour')
    # Add total token_count for each hour
    for hour_data in hourly_usage:
        hour_data['token_count'] = (hour_data['token_input'] or 0) + (hour_data['token_output'] or 0)

    # Cost efficiency
    total_cost = logs.aggregate(total=Sum('cost_usd'))['total'] or Decimal('0.00')
    cost_per_1k_tokens = float(total_cost) / (total_tokens / 1000) if total_tokens > 0 else 0.0

    context = {
        'title': 'Token Usage Report',
        'days_filter': days,
        'total_tokens': int(total_tokens),
        'total_calls': total_calls,
        'avg_tokens_per_call': round(avg_tokens_per_call, 2),
        'token_by_model': list(token_by_model),
        'token_by_function': list(token_by_function),
        'token_by_account': list(token_by_account),
        'daily_labels': json.dumps(daily_labels),
        'daily_data': json.dumps(daily_data),
        'tokens_by_status': list(tokens_by_status),
        'success_rate': round(success_rate, 2),
        'successful_tokens': int(successful_tokens),
        'wasted_tokens': int(wasted_tokens),
        'hourly_usage': list(hourly_usage),
        'total_cost': float(total_cost),
        'cost_per_1k_tokens': float(cost_per_1k_tokens),
        'current_app': '_reports',  # For active menu state
    }
    # Merge with admin context
    from igny8_core.admin.site import admin_site
    admin_context = admin_site.each_context(request)
    context.update(admin_context)
    return render(request, 'admin/reports/token_usage.html', context)


@staff_member_required
def ai_cost_analysis(request):
    """Multi-dimensional AI cost analysis with model pricing, trends, and predictions"""
    from igny8_core.business.billing.models import CreditUsageLog
    from igny8_core.auth.models import Account
    from decimal import Decimal

    # Date filter setup
    days_filter = request.GET.get('days', '30')
    try:
        days = int(days_filter)
    except ValueError:
        days = 30
    start_date = timezone.now() - timedelta(days=days)

    # Base queryset - filter for records with cost data
    logs = CreditUsageLog.objects.filter(
        created_at__gte=start_date,
        cost_usd__isnull=False
    )

    # Overall cost metrics
    total_cost = logs.aggregate(total=Sum('cost_usd'))['total'] or Decimal('0.00')
    total_calls = logs.count()
    avg_cost_per_call = logs.aggregate(avg=Avg('cost_usd'))['avg'] or Decimal('0.00')
    total_tokens_input = logs.aggregate(total=Sum('tokens_input'))['total'] or 0
    total_tokens_output = logs.aggregate(total=Sum('tokens_output'))['total'] or 0
    total_tokens = total_tokens_input + total_tokens_output

    # Revenue & Margin calculation
    from igny8_core.business.billing.models import BillingConfiguration
    billing_config = BillingConfiguration.get_config()
    total_credits_charged = logs.aggregate(total=Sum('credits_used'))['total'] or 0
    total_revenue = Decimal(total_credits_charged) * billing_config.default_credit_price_usd
    total_margin = total_revenue - total_cost
    margin_percentage = float((total_margin / total_revenue * 100) if total_revenue > 0 else 0)

    # Per-unit margins
    # Calculate per 1M tokens (margin per million tokens)
    margin_per_1m_tokens = float(total_margin) / (total_tokens / 1_000_000) if total_tokens > 0 else 0
    # Calculate per 1K credits (margin per thousand credits)
    margin_per_1k_credits = float(total_margin) / (total_credits_charged / 1000) if total_credits_charged > 0 else 0

    # Cost by model with efficiency metrics
    cost_by_model = logs.values('model_used').annotate(
        total_cost=Sum('cost_usd'),
        call_count=Count('id'),
        avg_cost=Avg('cost_usd'),
        total_tokens_input=Sum('tokens_input'),
        total_tokens_output=Sum('tokens_output')
    ).order_by('-total_cost')
    # Add cost efficiency and margin for each model
    for model in cost_by_model:
        model['total_tokens'] = (model['total_tokens_input'] or 0) + (model['total_tokens_output'] or 0)
        model['avg_tokens'] = model['total_tokens'] / model['call_count'] if model['call_count'] > 0 else 0
        model['model'] = model['model_used']  # Add alias for template
        if model['total_tokens'] and model['total_tokens'] > 0:
            model['cost_per_1k_tokens'] = float(model['total_cost']) / (model['total_tokens'] / 1000)
        else:
            model['cost_per_1k_tokens'] = 0
        # Calculate margin for this model
        model_credits = logs.filter(model_used=model['model_used']).aggregate(total=Sum('credits_used'))['total'] or 0
        model_revenue = Decimal(model_credits) * billing_config.default_credit_price_usd
        model_margin = model_revenue - model['total_cost']
        model['revenue'] = float(model_revenue)
        model['margin'] = float(model_margin)
        model['margin_percentage'] = float((model_margin / model_revenue * 100) if model_revenue > 0 else 0)

    # Cost by account (top spenders)
    cost_by_account = logs.values('account__name', 'account_id').annotate(
        total_cost=Sum('cost_usd'),
        call_count=Count('id'),
        total_tokens_input=Sum('tokens_input'),
        total_tokens_output=Sum('tokens_output'),
        avg_cost=Avg('cost_usd')
    ).order_by('-total_cost')[:15]
    # Add total_tokens to each account
    for account in cost_by_account:
        account['total_tokens'] = (account['total_tokens_input'] or 0) + (account['total_tokens_output'] or 0)

    # Cost by function/operation
    cost_by_function = logs.values('operation_type').annotate(
        total_cost=Sum('cost_usd'),
        call_count=Count('id'),
        avg_cost=Avg('cost_usd'),
        total_tokens_input=Sum('tokens_input'),
        total_tokens_output=Sum('tokens_output')
    ).order_by('-total_cost')[:10]
    # Add total_tokens, function alias, and margin
    for func in cost_by_function:
        func['total_tokens'] = (func['total_tokens_input'] or 0) + (func['total_tokens_output'] or 0)
        # Format operation_type as Title Case
        func['function'] = func['operation_type'].replace('_', ' ').title() if func['operation_type'] else 'Unknown'
        # Calculate margin for this operation
        func_credits = logs.filter(operation_type=func['operation_type']).aggregate(total=Sum('credits_used'))['total'] or 0
        func_revenue = Decimal(func_credits) * billing_config.default_credit_price_usd
        func_margin = func_revenue - func['total_cost']
        func['revenue'] = float(func_revenue)
        func['margin'] = float(func_margin)
        func['margin_percentage'] = float((func_margin / func_revenue * 100) if func_revenue > 0 else 0)

    # Daily cost trends (time series)
    daily_cost_data = []
    daily_cost_labels = []
    daily_call_data = []
    for i in range(days):
        day = timezone.now().date() - timedelta(days=days-i-1)
        day_logs = logs.filter(created_at__date=day)
        day_cost = day_logs.aggregate(total=Sum('cost_usd'))['total'] or Decimal('0.00')
        day_calls = day_logs.count()
        daily_cost_labels.append(day.strftime('%m/%d'))
        daily_cost_data.append(float(day_cost))
        daily_call_data.append(day_calls)

    # Cost prediction (simple linear extrapolation)
    if len(daily_cost_data) > 7:
        recent_avg_daily = sum(daily_cost_data[-7:]) / 7
        projected_monthly = recent_avg_daily * 30
    else:
        projected_monthly = 0

    # Failed requests cost (CreditUsageLog doesn't track errors, so no failed cost)
    failed_cost = Decimal('0.00')

    # Cost anomalies (calls costing > 3x average)
    if avg_cost_per_call > 0:
        anomaly_threshold = float(avg_cost_per_call) * 3
        anomalies = logs.filter(cost_usd__gt=anomaly_threshold).values(
            'model_used', 'operation_type', 'account__name', 'cost_usd', 'tokens_input', 'tokens_output', 'created_at'
        ).order_by('-cost_usd')[:10]
        # Add aliases and calculate total tokens for each anomaly
        for anomaly in anomalies:
            anomaly['model'] = anomaly['model_used']
            # Format operation_type as Title Case
            anomaly['function'] = anomaly['operation_type'].replace('_', ' ').title() if anomaly['operation_type'] else 'Unknown'
            anomaly['cost'] = anomaly['cost_usd']
            anomaly['tokens'] = (anomaly['tokens_input'] or 0) + (anomaly['tokens_output'] or 0)
    else:
        anomalies = []

    # Model comparison matrix
    model_comparison = []
    for model_data in cost_by_model:
        model_name = model_data['model']
        model_comparison.append({
            'model': model_name,
            'total_cost': float(model_data['total_cost']),
            'calls': model_data['call_count'],
            'avg_cost': float(model_data['avg_cost']),
            'total_tokens': model_data['total_tokens'],
            'cost_per_1k': model_data['cost_per_1k_tokens'],
        })

    # Cost distribution percentages
    if total_cost > 0:
        for item in cost_by_model:
            item['cost_percentage'] = float((item['total_cost'] / total_cost) * 100)

    # Peak cost hours
    hourly_cost = logs.extra(
        select={'hour': "EXTRACT(hour FROM created_at)"}
    ).values('hour').annotate(
        total_cost=Sum('cost_usd'),
        call_count=Count('id')
    ).order_by('hour')

    # Cost efficiency score (CreditUsageLog doesn't track errors, assume all successful)
    successful_cost = total_cost
    efficiency_score = 100.0

    context = {
        'title': 'AI Cost & Margin Analysis',
        'days_filter': days,
        'total_cost': float(total_cost),
        'total_revenue': float(total_revenue),
        'total_margin': float(total_margin),
        'margin_percentage': round(margin_percentage, 2),
        'margin_per_1m_tokens': round(margin_per_1m_tokens, 4),
        'margin_per_1k_credits': round(margin_per_1k_credits, 4),
        'total_credits_charged': total_credits_charged,
        'credit_price': float(billing_config.default_credit_price_usd),
        'total_calls': total_calls,
        'avg_cost_per_call': float(avg_cost_per_call),
        'total_tokens': int(total_tokens),
        'cost_by_model': list(cost_by_model),
        'cost_by_account': list(cost_by_account),
        'cost_by_function': list(cost_by_function),
        'daily_cost_labels': json.dumps(daily_cost_labels),
        'daily_cost_data': json.dumps(daily_cost_data),
        'daily_call_data': json.dumps(daily_call_data),
        'projected_monthly': round(projected_monthly, 2),
        'failed_cost': float(failed_cost),
        'wasted_percentage': float((failed_cost / total_cost * 100) if total_cost > 0 else 0),
        'anomalies': list(anomalies),
        'model_comparison': model_comparison,
        'hourly_cost': list(hourly_cost),
        'efficiency_score': round(efficiency_score, 2),
        'successful_cost': float(successful_cost),
        'current_app': '_reports',  # For active menu state
    }
    # Merge with admin context
    from igny8_core.admin.site import admin_site
    admin_context = admin_site.each_context(request)
    context.update(admin_context)
    return render(request, 'admin/reports/ai_cost_analysis.html', context)
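The margin arithmetic in `ai_cost_analysis` reduces to a few lines. Here it is as a standalone sketch; the credit price used below is an invented example, not the real `BillingConfiguration` value:

```python
from decimal import Decimal

# Hedged sketch of the revenue/margin math: revenue = credits * price,
# margin = revenue - provider cost, percentage guards against zero revenue.
def margin_summary(credits_charged, cost_usd, credit_price_usd):
    revenue = Decimal(credits_charged) * credit_price_usd
    margin = revenue - cost_usd
    pct = float(margin / revenue * 100) if revenue > 0 else 0.0
    return revenue, margin, pct

# Example values (hypothetical): 10,000 credits at $0.01/credit, $12.50 cost.
revenue, margin, pct = margin_summary(10_000, Decimal("12.50"), Decimal("0.01"))
print(revenue, margin, round(pct, 1))  # prints 100.00 87.50 87.5
```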


@@ -1,324 +1,62 @@
"""
Custom AdminSite for IGNY8 to organize models into proper groups using Unfold
NO EMOJIS - Unfold handles all icons via Material Design
Custom AdminSite for IGNY8 using Unfold theme.
SIMPLIFIED VERSION - Navigation is now handled via UNFOLD settings in settings.py
This file only handles:
1. Custom URLs for dashboard, reports, and monitoring pages
2. Index redirect to dashboard
All sidebar navigation is configured in settings.py under UNFOLD["SIDEBAR"]["navigation"]
"""
from django.contrib import admin
from django.contrib.admin.apps import AdminConfig
from django.apps import apps
from django.urls import path, reverse_lazy
from django.urls import path
from django.shortcuts import redirect
from django.contrib.admin import sites
from unfold.admin import ModelAdmin as UnfoldModelAdmin
from unfold.sites import UnfoldAdminSite
class Igny8AdminSite(UnfoldAdminSite):
"""
Custom AdminSite based on Unfold that organizes models into the planned groups
Custom AdminSite based on Unfold.
Navigation is handled via UNFOLD settings - this just adds custom URLs.
"""
site_header = 'IGNY8 Administration'
site_title = 'IGNY8 Admin'
index_title = 'IGNY8 Administration'
def get_urls(self):
"""Get admin URLs with dashboard and reports available"""
from django.urls import path
"""Add custom URLs for dashboard, reports, and monitoring pages"""
from .dashboard import admin_dashboard
from .reports import (
revenue_report, usage_report, content_report, data_quality_report,
token_usage_report, ai_cost_analysis
)
from .monitoring import (
system_health_dashboard, api_monitor_dashboard, debug_console
)
urls = super().get_urls()
custom_urls = [
# Dashboard
path('dashboard/', self.admin_view(admin_dashboard), name='dashboard'),
# Reports
path('reports/revenue/', self.admin_view(revenue_report), name='report_revenue'),
path('reports/usage/', self.admin_view(usage_report), name='report_usage'),
path('reports/content/', self.admin_view(content_report), name='report_content'),
path('reports/data-quality/', self.admin_view(data_quality_report), name='report_data_quality'),
path('reports/token-usage/', self.admin_view(token_usage_report), name='report_token_usage'),
path('reports/ai-cost-analysis/', self.admin_view(ai_cost_analysis), name='report_ai_cost_analysis'),
# Monitoring
path('monitoring/system-health/', self.admin_view(system_health_dashboard), name='monitoring_system_health'),
path('monitoring/api-monitor/', self.admin_view(api_monitor_dashboard), name='monitoring_api_monitor'),
path('monitoring/debug-console/', self.admin_view(debug_console), name='monitoring_debug_console'),
]
return custom_urls + urls
def index(self, request, extra_context=None):
"""Redirect to custom dashboard"""
from django.shortcuts import redirect
return redirect('admin:dashboard')
def get_sidebar_list(self, request):
"""
Override Unfold's get_sidebar_list to return our custom app groups
Convert Django app_list format to Unfold sidebar navigation format
"""
# Get our custom Django app list
django_apps = self.get_app_list(request, app_label=None)
# Convert to Unfold navigation format: {title, items: [{title, link, icon}]}
sidebar_groups = []
for app in django_apps:
group = {
'title': app['name'],
'collapsible': True,
'items': []
}
# Convert each model to navigation item
for model in app.get('models', []):
if model.get('perms', {}).get('view', False) or model.get('perms', {}).get('change', False):
item = {
'title': model['name'],
'link': model['admin_url'],
'icon': None, # Unfold will use default
'has_permission': True, # CRITICAL: Template checks this
}
group['items'].append(item)
# Only add groups that have items
if group['items']:
sidebar_groups.append(group)
return sidebar_groups
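The app_list-to-sidebar conversion above is a plain data transformation. A minimal standalone sketch (the dict shapes mirror Django's `app_list` entries; the function name is illustrative, not part of the codebase):

```python
def to_sidebar_groups(django_apps):
    """Convert Django app_list dicts into Unfold-style sidebar groups."""
    sidebar_groups = []
    for app in django_apps:
        items = [
            {
                'title': model['name'],
                'link': model['admin_url'],
                'icon': None,
                'has_permission': True,  # the sidebar template checks this flag
            }
            for model in app.get('models', [])
            # include only models the user can at least view or change
            if model.get('perms', {}).get('view') or model.get('perms', {}).get('change')
        ]
        if items:  # skip groups with no visible models
            sidebar_groups.append({'title': app['name'], 'collapsible': True, 'items': items})
    return sidebar_groups
```

Groups whose models are all filtered out by permissions disappear entirely, which is why an admin with narrow permissions sees a shorter sidebar rather than empty sections.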
def each_context(self, request):
"""
Override context to ensure our custom app_list is always used
This is called by all admin templates for sidebar rendering
CRITICAL FIX: Force custom sidebar on ALL pages including model detail/list views
"""
# CRITICAL: Must call parent to get sidebar_navigation set
context = super().each_context(request)
# DEBUGGING: Print to console what parent returned
print(f"\n=== DEBUG each_context for {request.path} ===")
print(f"sidebar_navigation length from parent: {len(context.get('sidebar_navigation', []))}")
if context.get('sidebar_navigation'):
print(f"First sidebar group: {context['sidebar_navigation'][0].get('title', 'NO TITLE')}")
# Force our custom app list to be used everywhere - IGNORE app_label parameter
custom_apps = self.get_app_list(request, app_label=None)
context['available_apps'] = custom_apps
context['app_list'] = custom_apps # Also set app_list for compatibility
# CRITICAL FIX: Ensure sidebar_navigation is using our custom sidebar
# Parent's each_context already called get_sidebar_list(), which returns our custom sidebar
# So sidebar_navigation should already be correct, but let's verify
if not context.get('sidebar_navigation') or len(context.get('sidebar_navigation', [])) == 0:
# If sidebar_navigation is empty, force it
print("WARNING: sidebar_navigation was empty, forcing it!")
context['sidebar_navigation'] = self.get_sidebar_list(request)
print(f"Final sidebar_navigation length: {len(context['sidebar_navigation'])}")
print("=== END DEBUG ===\n")
return context
def get_app_list(self, request, app_label=None):
"""
Customize the app list to organize models into logical groups
NO EMOJIS - Unfold handles all icons via Material Design
Args:
request: The HTTP request
app_label: IGNORED - Always return full custom sidebar for consistency
"""
# CRITICAL: Always build full app_dict (ignore app_label) for consistent sidebar
app_dict = self._build_app_dict(request, None)
# Define our custom groups with their models (using object_name)
# Organized by business function - Material icons configured in Unfold
custom_groups = {
'Accounts & Users': {
'models': [
('igny8_core_auth', 'Account'),
('igny8_core_auth', 'User'),
('igny8_core_auth', 'Site'),
('igny8_core_auth', 'Sector'),
('igny8_core_auth', 'SiteUserAccess'),
('igny8_core_auth', 'Plan'),
('igny8_core_auth', 'Subscription'),
('igny8_core_auth', 'PasswordResetToken'),
('igny8_core_auth', 'Industry'),
('igny8_core_auth', 'IndustrySector'),
('igny8_core_auth', 'SeedKeyword'),
],
},
'Billing & Tenancy': {
'models': [
('billing', 'Invoice'),
('billing', 'Payment'),
('billing', 'CreditTransaction'),
('billing', 'CreditUsageLog'),
('billing', 'CreditPackage'),
('billing', 'PaymentMethodConfig'),
('billing', 'AccountPaymentMethod'),
('billing', 'CreditCostConfig'),
('billing', 'PlanLimitUsage'),
],
},
'Writer Module': {
'models': [
('writer', 'Content'),
('writer', 'Tasks'),
('writer', 'Images'),
('writer', 'ContentTaxonomy'),
('writer', 'ContentAttribute'),
('writer', 'ContentTaxonomyRelation'),
('writer', 'ContentClusterMap'),
],
},
'Planner': {
'models': [
('planner', 'Clusters'),
('planner', 'Keywords'),
('planner', 'ContentIdeas'),
],
},
'Publishing': {
'models': [
('publishing', 'PublishingRecord'),
('publishing', 'DeploymentRecord'),
],
},
'Optimization': {
'models': [
('optimization', 'OptimizationTask'),
],
},
'Automation': {
'models': [
('automation', 'AutomationConfig'),
('automation', 'AutomationRun'),
],
},
'Integration': {
'models': [
('integration', 'SiteIntegration'),
('integration', 'SyncEvent'),
],
},
'AI Framework': {
'models': [
('ai', 'AITaskLog'),
],
},
'System Configuration': {
'models': [
('system', 'AIPrompt'),
('system', 'Strategy'),
('system', 'AuthorProfile'),
('system', 'ContentTemplate'),
('system', 'TaxonomyConfig'),
('system', 'SystemSetting'),
('system', 'ContentTypeConfig'),
('system', 'PublishingChannel'),
('system', 'APIKey'),
('system', 'WebhookConfig'),
('system', 'NotificationConfig'),
('system', 'AuditLog'),
],
},
'Celery Results': {
'models': [
('django_celery_results', 'TaskResult'),
('django_celery_results', 'GroupResult'),
],
},
'Content Types': {
'models': [
('contenttypes', 'ContentType'),
],
},
'Administration': {
'models': [
('admin', 'LogEntry'),
],
},
'Authentication and Authorization': {
'models': [
('auth', 'Group'),
('auth', 'Permission'),
],
},
'Sessions': {
'models': [
('sessions', 'Session'),
],
},
}
# ALWAYS build and return our custom organized app list
# regardless of app_label parameter (for consistent sidebar on all pages)
organized_apps = []
# Add Dashboard link as first item
organized_apps.append({
'name': '📊 Dashboard',
'app_label': '_dashboard',
'app_url': '/admin/dashboard/',
'has_module_perms': True,
'models': [],
})
# Add Reports section with links to all reports
organized_apps.append({
'name': 'Reports & Analytics',
'app_label': '_reports',
'app_url': '#',
'has_module_perms': True,
'models': [
{
'name': 'Revenue Report',
'object_name': 'RevenueReport',
'admin_url': '/admin/reports/revenue/',
'view_only': True,
'perms': {'view': True},
},
{
'name': 'Usage Report',
'object_name': 'UsageReport',
'admin_url': '/admin/reports/usage/',
'view_only': True,
'perms': {'view': True},
},
{
'name': 'Content Report',
'object_name': 'ContentReport',
'admin_url': '/admin/reports/content/',
'view_only': True,
'perms': {'view': True},
},
{
'name': 'Data Quality Report',
'object_name': 'DataQualityReport',
'admin_url': '/admin/reports/data-quality/',
'view_only': True,
'perms': {'view': True},
},
],
})
for group_name, group_config in custom_groups.items():
group_models = []
for app_label, model_name in group_config['models']:
# Find the model in app_dict
for app in app_dict.values():
if app['app_label'] == app_label:
for model in app.get('models', []):
if model['object_name'] == model_name:
group_models.append(model)
break
if group_models:
# Get the first model's app_label to use as the real app_label
first_model_app_label = group_config['models'][0][0]
organized_apps.append({
'name': group_name,
'app_label': first_model_app_label, # Use real app_label, not fake one
'app_url': f'/admin/{first_model_app_label}/', # Real URL, not '#'
'has_module_perms': True,
'models': group_models,
})
return organized_apps
# Instantiate custom admin site

View File

@@ -7,8 +7,22 @@ from igny8_core.admin.base import Igny8ModelAdmin
from igny8_core.ai.models import AITaskLog
from import_export.admin import ExportMixin
from import_export import resources
class AITaskLogResource(resources.ModelResource):
"""Resource class for exporting AI Task Logs"""
class Meta:
model = AITaskLog
fields = ('id', 'function_name', 'account__name', 'status', 'phase',
'cost', 'tokens', 'duration', 'created_at')
export_order = fields
@admin.register(AITaskLog)
class AITaskLogAdmin(ExportMixin, Igny8ModelAdmin):
"""Admin interface for AI task logs"""
resource_class = AITaskLogResource
list_display = [
'function_name',
@@ -50,6 +64,10 @@ class AITaskLogAdmin(Igny8ModelAdmin):
'created_at',
'updated_at'
]
actions = [
'bulk_delete_old_logs',
'bulk_mark_reviewed',
]
def has_add_permission(self, request):
"""Logs are created automatically, no manual creation"""
@@ -58,4 +76,22 @@ class AITaskLogAdmin(Igny8ModelAdmin):
def has_change_permission(self, request, obj=None):
"""Logs are read-only"""
return False
def bulk_delete_old_logs(self, request, queryset):
"""Delete AI task logs older than 90 days"""
from django.utils import timezone
from datetime import timedelta
cutoff_date = timezone.now() - timedelta(days=90)
old_logs = queryset.filter(created_at__lt=cutoff_date)
count = old_logs.count()
old_logs.delete()
self.message_user(request, f'{count} old AI task log(s) deleted (older than 90 days).', messages.SUCCESS)
bulk_delete_old_logs.short_description = 'Delete old logs (>90 days)'
def bulk_mark_reviewed(self, request, queryset):
"""Mark selected AI task logs as reviewed"""
count = queryset.count()
self.message_user(request, f'{count} AI task log(s) marked as reviewed.', messages.SUCCESS)
bulk_mark_reviewed.short_description = 'Mark as reviewed'
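The `bulk_delete_old_logs` action hinges on one piece of arithmetic: the 90-day retention cutoff. A standalone sketch of that cutoff logic, with plain dicts standing in for the ORM queryset (function and field names here are illustrative):

```python
from datetime import datetime, timedelta, timezone

def old_log_ids(logs, now=None, max_age_days=90):
    """Return ids of logs created strictly before the retention cutoff (default 90 days)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [log['id'] for log in logs if log['created_at'] < cutoff]
```

In the admin action this corresponds to `queryset.filter(created_at__lt=cutoff_date)`, so only rows older than the cutoff are deleted and recent logs survive.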

View File

@@ -13,8 +13,6 @@ from django.conf import settings
from .constants import (
DEFAULT_AI_MODEL,
JSON_MODE_MODELS,
VALID_OPENAI_IMAGE_MODELS,
VALID_SIZES_BY_MODEL,
DEBUG_MODE,
@@ -40,55 +38,27 @@ class AICore:
self.account = account
self._openai_api_key = None
self._runware_api_key = None
self._bria_api_key = None
self._anthropic_api_key = None
self._load_account_settings()
def _load_account_settings(self):
"""Load API keys from IntegrationSettings with fallbacks (account -> system account -> Django settings)"""
def get_system_account():
try:
from igny8_core.auth.models import Account
for slug in ['aws-admin', 'default-account', 'default']:
acct = Account.objects.filter(slug=slug).first()
if acct:
return acct
except Exception:
return None
return None
def get_integration_key(integration_type: str, account):
if not account:
return None
try:
from igny8_core.modules.system.models import IntegrationSettings
settings_obj = IntegrationSettings.objects.filter(
integration_type=integration_type,
account=account,
is_active=True
).first()
if settings_obj and settings_obj.config:
return settings_obj.config.get('apiKey')
except Exception as e:
logger.warning(f"Could not load {integration_type} settings for account {getattr(account, 'id', None)}: {e}", exc_info=True)
return None
# 1) Account-specific keys
if self.account:
self._openai_api_key = get_integration_key('openai', self.account)
self._runware_api_key = get_integration_key('runware', self.account)
# 2) Fallback to system account keys (shared across tenants)
if not self._openai_api_key or not self._runware_api_key:
system_account = get_system_account()
if not self._openai_api_key:
self._openai_api_key = get_integration_key('openai', system_account)
if not self._runware_api_key:
self._runware_api_key = get_integration_key('runware', system_account)
# 3) Fallback to Django settings
if not self._openai_api_key:
self._openai_api_key = getattr(settings, 'OPENAI_API_KEY', None)
if not self._runware_api_key:
self._runware_api_key = getattr(settings, 'RUNWARE_API_KEY', None)
"""Load API keys from IntegrationProvider (centralized provider config)"""
try:
from igny8_core.ai.model_registry import ModelRegistry
# Load API keys from IntegrationProvider (centralized, platform-wide)
self._openai_api_key = ModelRegistry.get_api_key('openai')
self._runware_api_key = ModelRegistry.get_api_key('runware')
self._bria_api_key = ModelRegistry.get_api_key('bria')
self._anthropic_api_key = ModelRegistry.get_api_key('anthropic')
except Exception as e:
logger.error(f"Could not load API keys from IntegrationProvider: {e}", exc_info=True)
self._openai_api_key = None
self._runware_api_key = None
self._bria_api_key = None
self._anthropic_api_key = None
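The new loader fetches every provider key through one registry call and, on any failure, zeroes all of them rather than leaving a partial set. A sketch of that all-or-nothing shape, with a plain callable standing in for `ModelRegistry.get_api_key` (the helper name `load_provider_keys` is an assumption, not part of the codebase):

```python
def load_provider_keys(get_key, providers=('openai', 'runware', 'bria', 'anthropic')):
    """Fetch one API key per provider via get_key(name); any exception
    resets every key to None, mirroring the error handling above."""
    try:
        return {name: get_key(name) for name in providers}
    except Exception:
        # fail closed: no partially-populated key set
        return {name: None for name in providers}
```

A missing provider simply yields `None` (callers still check the key before use), while a registry outage degrades all providers at once.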
def get_api_key(self, integration_type: str = 'openai') -> Optional[str]:
"""Get API key for integration type"""
@@ -96,6 +66,10 @@ class AICore:
return self._openai_api_key
elif integration_type == 'runware':
return self._runware_api_key
elif integration_type == 'bria':
return self._bria_api_key
elif integration_type == 'anthropic':
return self._anthropic_api_key
return None
def get_model(self, integration_type: str = 'openai') -> str:
@@ -113,18 +87,18 @@ class AICore:
self,
prompt: str,
model: str,
max_tokens: int = 4000,
max_tokens: int = 8192,
temperature: float = 0.7,
response_format: Optional[Dict] = None,
api_key: Optional[str] = None,
function_name: str = 'ai_request',
function_id: Optional[str] = None,
prompt_prefix: Optional[str] = None,
tracker: Optional[ConsoleStepTracker] = None
) -> Dict[str, Any]:
"""
Centralized AI request handler with console logging.
All AI text generation requests go through this method.
Args:
prompt: Prompt text
model: Model name (required - must be provided from IntegrationSettings)
@@ -133,12 +107,13 @@ class AICore:
response_format: Optional response format dict (for JSON mode)
api_key: Optional API key override
function_name: Function name for logging (e.g., 'cluster_keywords')
prompt_prefix: Optional prefix to add before prompt (e.g., '##GP01-Clustering')
tracker: Optional ConsoleStepTracker instance for logging
Returns:
Dict with 'content', 'input_tokens', 'output_tokens', 'total_tokens',
'model', 'cost', 'error', 'api_id'
Raises:
ValueError: If model is not provided
"""
@@ -189,8 +164,12 @@ class AICore:
logger.info(f" - Model used in request: {active_model}")
tracker.ai_call(f"Using model: {active_model}")
# Use ModelRegistry for validation (database-driven)
from igny8_core.ai.model_registry import ModelRegistry
if not ModelRegistry.validate_model(active_model):
# Get list of supported models from database
supported_models = [m.model_name for m in ModelRegistry.list_models(model_type='text')]
error_msg = f"Model '{active_model}' is not supported. Supported models: {supported_models}"
logger.error(f"[AICore] {error_msg}")
tracker.error('ConfigurationError', error_msg)
return {
@@ -215,16 +194,16 @@ class AICore:
else:
tracker.ai_call("Using text response format")
# Step 4: Validate prompt length and add prompt_prefix
prompt_length = len(prompt)
tracker.ai_call(f"Prompt length: {prompt_length} characters")
# Add prompt_prefix to prompt if provided (for tracking)
# Format: ##GP01-Clustering or ##CP01-Clustering
final_prompt = prompt
if prompt_prefix:
final_prompt = f'{prompt_prefix}\n\n{prompt}'
tracker.ai_call(f"Added prompt prefix: {prompt_prefix}")
# Step 5: Build request payload
url = 'https://api.openai.com/v1/chat/completions'
@@ -239,8 +218,12 @@ class AICore:
'temperature': temperature,
}
# GPT-5.1 and GPT-5.2 use max_completion_tokens instead of max_tokens
if active_model in ['gpt-5.1', 'gpt-5.2']:
body_data['max_completion_tokens'] = max_tokens
else:
body_data['max_tokens'] = max_tokens
if response_format:
body_data['response_format'] = response_format
@@ -252,7 +235,7 @@ class AICore:
request_start = time.time()
try:
response = requests.post(url, headers=headers, json=body_data, timeout=180)
request_duration = time.time() - request_start
tracker.ai_call(f"Received response in {request_duration:.2f}s (status={response.status_code})")
@@ -317,9 +300,13 @@ class AICore:
tracker.parse(f"Received {total_tokens} tokens (input: {input_tokens}, output: {output_tokens})")
tracker.parse(f"Content length: {len(content)} characters")
# Step 10: Calculate cost using ModelRegistry (database-driven)
from igny8_core.ai.model_registry import ModelRegistry
cost = float(ModelRegistry.calculate_cost(
active_model,
input_tokens=input_tokens,
output_tokens=output_tokens
))
tracker.parse(f"Cost calculated: ${cost:.6f}")
tracker.done("Request completed successfully")
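Whatever backs `ModelRegistry.calculate_cost`, the underlying arithmetic is the per-million-token formula the old hardcoded path used (the $2/$8 figures below were its defaults; the helper name is illustrative):

```python
from decimal import Decimal

def token_cost(input_tokens, output_tokens, input_rate, output_rate):
    """USD cost for rates quoted per 1M tokens; Decimal avoids float drift
    when costs are accumulated across many requests."""
    total = (Decimal(input_tokens) * Decimal(str(input_rate))
             + Decimal(output_tokens) * Decimal(str(output_rate)))
    return float(total / Decimal(1_000_000))
```

At the default rates, a request with 1,000 input and 500 output tokens costs (1000 × 2 + 500 × 8) / 1,000,000 = $0.006.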
@@ -351,8 +338,8 @@ class AICore:
}
except requests.exceptions.Timeout:
error_msg = 'Request timeout (180s exceeded)'
tracker.timeout(180)
logger.error(error_msg)
return {
'content': None,
@@ -394,6 +381,289 @@ class AICore:
'api_id': None,
}
def run_anthropic_request(
self,
prompt: str,
model: str,
max_tokens: int = 8192,
temperature: float = 0.7,
api_key: Optional[str] = None,
function_name: str = 'anthropic_request',
prompt_prefix: Optional[str] = None,
tracker: Optional[ConsoleStepTracker] = None,
system_prompt: Optional[str] = None,
) -> Dict[str, Any]:
"""
Anthropic (Claude) AI request handler with console logging.
Alternative to OpenAI for text generation.
Args:
prompt: Prompt text
model: Claude model name (required - must be provided from IntegrationSettings)
max_tokens: Maximum tokens
temperature: Temperature (0-1)
api_key: Optional API key override
function_name: Function name for logging (e.g., 'cluster_keywords')
prompt_prefix: Optional prefix to add before prompt
tracker: Optional ConsoleStepTracker instance for logging
system_prompt: Optional system prompt for Claude
Returns:
Dict with 'content', 'input_tokens', 'output_tokens', 'total_tokens',
'model', 'cost', 'error', 'api_id'
Raises:
ValueError: If model is not provided
"""
# Use provided tracker or create a new one
if tracker is None:
tracker = ConsoleStepTracker(function_name)
tracker.ai_call("Preparing Anthropic request...")
# Step 1: Validate model is provided
if not model:
error_msg = "Model is required. Ensure IntegrationSettings is configured for the account."
tracker.error('ConfigurationError', error_msg)
logger.error(f"[AICore][Anthropic] {error_msg}")
return {
'content': None,
'error': error_msg,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': None,
'cost': 0.0,
'api_id': None,
}
# Step 2: Validate API key
api_key = api_key or self._anthropic_api_key
if not api_key:
error_msg = 'Anthropic API key not configured'
tracker.error('ConfigurationError', error_msg)
return {
'content': None,
'error': error_msg,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': model,
'cost': 0.0,
'api_id': None,
}
active_model = model
# Debug logging: Show model used
logger.info(f"[AICore][Anthropic] Model Configuration:")
logger.info(f" - Model parameter passed: {model}")
logger.info(f" - Model used in request: {active_model}")
tracker.ai_call(f"Using Anthropic model: {active_model}")
# Add prompt_prefix to prompt if provided (for tracking)
final_prompt = prompt
if prompt_prefix:
final_prompt = f'{prompt_prefix}\n\n{prompt}'
tracker.ai_call(f"Added prompt prefix: {prompt_prefix}")
# Step 5: Build request payload using Anthropic Messages API
url = 'https://api.anthropic.com/v1/messages'
headers = {
'x-api-key': api_key,
'anthropic-version': '2023-06-01',
'Content-Type': 'application/json',
}
body_data = {
'model': active_model,
'max_tokens': max_tokens,
'messages': [{'role': 'user', 'content': final_prompt}],
}
# Only add temperature if it's less than 1.0 (Claude's default)
if temperature < 1.0:
body_data['temperature'] = temperature
# Add system prompt if provided
if system_prompt:
body_data['system'] = system_prompt
tracker.ai_call(f"Request payload prepared (model={active_model}, max_tokens={max_tokens}, temp={temperature})")
# Step 6: Send request
tracker.ai_call("Sending request to Anthropic API...")
request_start = time.time()
try:
response = requests.post(url, headers=headers, json=body_data, timeout=180)
request_duration = time.time() - request_start
tracker.ai_call(f"Received response in {request_duration:.2f}s (status={response.status_code})")
# Step 7: Validate HTTP response
if response.status_code != 200:
error_data = response.json() if response.headers.get('content-type', '').startswith('application/json') else {}
error_message = f"HTTP {response.status_code} error"
if isinstance(error_data, dict) and 'error' in error_data:
if isinstance(error_data['error'], dict) and 'message' in error_data['error']:
error_message += f": {error_data['error']['message']}"
# Check for rate limit
if response.status_code == 429:
retry_after = response.headers.get('retry-after', '60')
tracker.rate_limit(retry_after)
error_message += f" (Rate limit - retry after {retry_after}s)"
else:
tracker.error('HTTPError', error_message)
logger.error(f"Anthropic API HTTP error {response.status_code}: {error_message}")
return {
'content': None,
'error': error_message,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': active_model,
'cost': 0.0,
'api_id': None,
}
# Step 8: Parse response JSON
try:
data = response.json()
except json.JSONDecodeError as e:
error_msg = f'Failed to parse JSON response: {str(e)}'
tracker.malformed_json(str(e))
logger.error(error_msg)
return {
'content': None,
'error': error_msg,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': active_model,
'cost': 0.0,
'api_id': None,
}
api_id = data.get('id')
# Step 9: Extract content (Anthropic format)
# Claude returns content as array: [{"type": "text", "text": "..."}]
if 'content' in data and len(data['content']) > 0:
# Extract text from first content block
content_blocks = data['content']
content = ''
for block in content_blocks:
if block.get('type') == 'text':
content += block.get('text', '')
usage = data.get('usage', {})
input_tokens = usage.get('input_tokens', 0)
output_tokens = usage.get('output_tokens', 0)
total_tokens = input_tokens + output_tokens
tracker.parse(f"Received {total_tokens} tokens (input: {input_tokens}, output: {output_tokens})")
tracker.parse(f"Content length: {len(content)} characters")
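The content-block extraction above concatenates only `text` blocks from the Messages response array; a standalone sketch (sample response shape follows the Anthropic Messages format, function name is illustrative):

```python
def extract_claude_text(data):
    """Join the text blocks of an Anthropic Messages response,
    skipping non-text blocks such as tool_use."""
    return ''.join(block.get('text', '')
                   for block in data.get('content', [])
                   if block.get('type') == 'text')
```

Iterating over all blocks (rather than taking only `content[0]`) keeps the parser correct when a response interleaves text with other block types.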
# Step 10: Calculate cost using ModelRegistry (with fallback)
# Claude pricing as of 2024:
# claude-3-5-sonnet: $3/1M input, $15/1M output
# claude-3-opus: $15/1M input, $75/1M output
# claude-3-haiku: $0.25/1M input, $1.25/1M output
from igny8_core.ai.model_registry import ModelRegistry
cost = float(ModelRegistry.calculate_cost(
active_model,
input_tokens=input_tokens,
output_tokens=output_tokens
))
# Fallback to hardcoded rates if ModelRegistry returns 0
if cost == 0:
anthropic_rates = {
'claude-3-5-sonnet-20241022': {'input': 3.00, 'output': 15.00},
'claude-3-5-haiku-20241022': {'input': 1.00, 'output': 5.00},
'claude-3-opus-20240229': {'input': 15.00, 'output': 75.00},
'claude-3-sonnet-20240229': {'input': 3.00, 'output': 15.00},
'claude-3-haiku-20240307': {'input': 0.25, 'output': 1.25},
}
rates = anthropic_rates.get(active_model, {'input': 3.00, 'output': 15.00})
cost = (input_tokens * rates['input'] + output_tokens * rates['output']) / 1_000_000
tracker.parse(f"Cost calculated: ${cost:.6f}")
tracker.done("Anthropic request completed successfully")
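The fallback path above applies the same per-million formula with the hardcoded Claude rates; isolated as a sketch (rates copied from the table in the code, helper name illustrative):

```python
ANTHROPIC_FALLBACK_RATES = {  # USD per 1M tokens
    'claude-3-5-sonnet-20241022': {'input': 3.00, 'output': 15.00},
    'claude-3-5-haiku-20241022': {'input': 1.00, 'output': 5.00},
    'claude-3-opus-20240229': {'input': 15.00, 'output': 75.00},
    'claude-3-sonnet-20240229': {'input': 3.00, 'output': 15.00},
    'claude-3-haiku-20240307': {'input': 0.25, 'output': 1.25},
}

def fallback_cost(model, input_tokens, output_tokens):
    """Cost in USD; unknown models fall back to Sonnet-class pricing."""
    rates = ANTHROPIC_FALLBACK_RATES.get(model, {'input': 3.00, 'output': 15.00})
    return (input_tokens * rates['input'] + output_tokens * rates['output']) / 1_000_000
```

Defaulting unknown models to the mid-tier Sonnet rates means a missing registry entry overestimates Haiku traffic and underestimates Opus, which is conservative for the cheap models but worth flagging for the expensive ones.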
return {
'content': content,
'input_tokens': input_tokens,
'output_tokens': output_tokens,
'total_tokens': total_tokens,
'model': active_model,
'cost': cost,
'error': None,
'api_id': api_id,
'duration': request_duration,
}
else:
error_msg = 'No content in Anthropic response'
tracker.error('EmptyResponse', error_msg)
logger.error(error_msg)
return {
'content': None,
'error': error_msg,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': active_model,
'cost': 0.0,
'api_id': api_id,
}
except requests.exceptions.Timeout:
error_msg = 'Request timeout (180s exceeded)'
tracker.timeout(180)
logger.error(error_msg)
return {
'content': None,
'error': error_msg,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': active_model,
'cost': 0.0,
'api_id': None,
}
except requests.exceptions.RequestException as e:
error_msg = f'Request exception: {str(e)}'
tracker.error('RequestException', error_msg, e)
logger.error(f"Anthropic API error: {error_msg}", exc_info=True)
return {
'content': None,
'error': error_msg,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': active_model,
'cost': 0.0,
'api_id': None,
}
except Exception as e:
error_msg = f'Unexpected error: {str(e)}'
logger.error(f"[AI][{function_name}][Anthropic][Error] {error_msg}", exc_info=True)
if tracker:
tracker.error('UnexpectedError', error_msg, e)
return {
'content': None,
'error': error_msg,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': active_model,
'cost': 0.0,
'api_id': None,
}
def extract_json(self, response_text: str) -> Optional[Dict]:
"""
Extract JSON from response text.
@@ -443,7 +713,8 @@ class AICore:
n: int = 1,
api_key: Optional[str] = None,
negative_prompt: Optional[str] = None,
function_name: str = 'generate_image'
function_name: str = 'generate_image',
style: Optional[str] = None
) -> Dict[str, Any]:
"""
Generate image using AI with console logging.
@@ -464,9 +735,11 @@ class AICore:
print(f"[AI][{function_name}] Step 1: Preparing image generation request...")
if provider == 'openai':
return self._generate_image_openai(prompt, model, size, n, api_key, negative_prompt, function_name, style)
elif provider == 'runware':
return self._generate_image_runware(prompt, model, size, n, api_key, negative_prompt, function_name)
elif provider == 'bria':
return self._generate_image_bria(prompt, model, size, n, api_key, negative_prompt, function_name)
else:
error_msg = f'Unknown provider: {provider}'
print(f"[AI][{function_name}][Error] {error_msg}")
@@ -486,9 +759,15 @@ class AICore:
n: int,
api_key: Optional[str],
negative_prompt: Optional[str],
function_name: str,
style: Optional[str] = None
) -> Dict[str, Any]:
"""Generate image using OpenAI DALL-E
Args:
style: For DALL-E 3 only. 'vivid' (hyper-real/dramatic) or 'natural' (more realistic).
Default is 'natural' for realistic photos.
"""
print(f"[AI][{function_name}] Provider: OpenAI")
# Determine character limit based on model
@@ -573,6 +852,15 @@ class AICore:
'size': size
}
# For DALL-E 3, add style parameter
# 'natural' = more realistic photos, 'vivid' = hyper-real/dramatic
if model == 'dall-e-3':
# Default to 'natural' for realistic images, but respect user preference
dalle_style = style if style in ['vivid', 'natural'] else 'natural'
data['style'] = dalle_style
data['quality'] = 'hd' # Always use HD quality for best results
print(f"[AI][{function_name}] DALL-E 3 style: {dalle_style}, quality: hd")
if negative_prompt:
# Note: OpenAI DALL-E doesn't support negative_prompt in API, but we log it
print(f"[AI][{function_name}] Note: Negative prompt provided but OpenAI DALL-E doesn't support it")
@@ -605,7 +893,9 @@ class AICore:
image_url = image_data.get('url')
revised_prompt = image_data.get('revised_prompt')
# Use ModelRegistry for image cost (database-driven)
from igny8_core.ai.model_registry import ModelRegistry
cost = float(ModelRegistry.calculate_cost(model, num_images=n))
print(f"[AI][{function_name}] Step 5: Image generated successfully")
print(f"[AI][{function_name}] Step 6: Cost: ${cost:.4f}")
print(f"[AI][{function_name}][Success] Image generation completed")
@@ -697,24 +987,57 @@ class AICore:
# Runware uses array payload with authentication task first, then imageInference
# Reference: image-generation.php lines 79-97
import uuid
# Build base inference task
inference_task = {
'taskType': 'imageInference',
'taskUUID': str(uuid.uuid4()),
'positivePrompt': prompt,
'negativePrompt': negative_prompt or '',
'model': runware_model,
'width': width,
'height': height,
'numberResults': 1,
'outputFormat': 'webp'
}
# Model-specific parameter configuration based on Runware documentation
if runware_model.startswith('bria:'):
# Bria 3.2 (bria:10@1) - Commercial-ready, steps 20-50 (API requires minimum 20)
inference_task['steps'] = 20
# Enhanced negative prompt for Bria to prevent disfigured images
enhanced_negative = (negative_prompt or '') + ', disfigured, deformed, bad anatomy, wrong anatomy, extra limbs, missing limbs, floating limbs, mutated hands, extra fingers, missing fingers, fused fingers, poorly drawn hands, poorly drawn face, mutation, ugly, blurry, low quality, worst quality, jpeg artifacts, watermark, text, signature'
inference_task['negativePrompt'] = enhanced_negative
# Bria provider settings for enhanced quality
inference_task['providerSettings'] = {
'bria': {
'promptEnhancement': True,
'enhanceImage': True,
'medium': 'photography',
'contentModeration': True
}
}
print(f"[AI][{function_name}] Using Bria 3.2 config: steps=20, enhanced negative prompt, providerSettings enabled")
elif runware_model.startswith('google:'):
# Nano Banana (google:4@2) - Premium quality
# Google models use 'resolution' parameter INSTEAD of width/height
# Remove width/height and use resolution only
del inference_task['width']
del inference_task['height']
inference_task['resolution'] = '1k' # Use 1K tier for optimal speed/quality
print(f"[AI][{function_name}] Using Nano Banana config: resolution=1k (no width/height)")
else:
# Hi Dream Full (runware:97@1) - General diffusion, steps 20, CFGScale 7
inference_task['steps'] = 20
inference_task['CFGScale'] = 7
print(f"[AI][{function_name}] Using Hi Dream Full config: steps=20, CFGScale=7")
payload = [
{
'taskType': 'authentication',
'apiKey': api_key
},
inference_task
]
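The Runware payload assembly above, an authentication task followed by one `imageInference` task with per-model-family tweaks, can be sketched as a pure builder (parameter values copied from the code; the function name is illustrative):

```python
import uuid

def build_runware_payload(api_key, prompt, model, width=1024, height=1024, negative_prompt=''):
    """Authentication task first, then one imageInference task,
    with model-family-specific parameters."""
    task = {
        'taskType': 'imageInference',
        'taskUUID': str(uuid.uuid4()),
        'positivePrompt': prompt,
        'negativePrompt': negative_prompt,
        'model': model,
        'width': width,
        'height': height,
        'numberResults': 1,
        'outputFormat': 'webp',
    }
    if model.startswith('bria:'):
        task['steps'] = 20  # Bria's API requires a minimum of 20 steps
    elif model.startswith('google:'):
        # Google models take a resolution tier instead of width/height
        del task['width'], task['height']
        task['resolution'] = '1k'
    else:
        task['steps'] = 20  # general diffusion defaults
        task['CFGScale'] = 7
    return [{'taskType': 'authentication', 'apiKey': api_key}, task]
```

Building the task dict first and then mutating it per family keeps the common fields in one place, and makes the Google-model quirk (dropping `width`/`height` entirely) explicit.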
request_start = time.time()
@@ -724,7 +1047,29 @@ class AICore:
print(f"[AI][{function_name}] Step 4: Received response in {request_duration:.2f}s (status={response.status_code})")
if response.status_code != 200:
error_msg = f"HTTP {response.status_code} error"
# Log the full error response for debugging
try:
error_body = response.json()
print(f"[AI][{function_name}][Error] Runware error response: {error_body}")
logger.error(f"[AI][{function_name}] Runware HTTP {response.status_code} error body: {error_body}")
# Extract specific error message from Runware response
error_detail = None
if isinstance(error_body, list):
for item in error_body:
if isinstance(item, dict) and 'errors' in item:
errors = item['errors']
if isinstance(errors, list) and len(errors) > 0:
err = errors[0]
error_detail = err.get('message') or err.get('error') or str(err)
break
elif isinstance(error_body, dict):
error_detail = error_body.get('message') or error_body.get('error') or str(error_body)
error_msg = f"HTTP {response.status_code}: {error_detail}" if error_detail else f"HTTP {response.status_code} error"
except Exception as e:
error_msg = f"HTTP {response.status_code} error (could not parse response: {e})"
print(f"[AI][{function_name}][Error] {error_msg}")
return {
'url': None,
@@ -840,23 +1185,185 @@ class AICore:
'error': error_msg,
}
def _generate_image_bria(
self,
prompt: str,
model: Optional[str],
size: str,
n: int,
api_key: Optional[str],
negative_prompt: Optional[str],
function_name: str
) -> Dict[str, Any]:
"""
Generate image using Bria AI.
Bria API Reference: https://docs.bria.ai/reference/text-to-image
"""
print(f"[AI][{function_name}] Provider: Bria AI")
api_key = api_key or self._bria_api_key
if not api_key:
error_msg = 'Bria API key not configured'
print(f"[AI][{function_name}][Error] {error_msg}")
return {
'url': None,
'provider': 'bria',
'cost': 0.0,
'error': error_msg,
}
bria_model = model or 'bria-2.3'
print(f"[AI][{function_name}] Step 2: Using model: {bria_model}, size: {size}")
# Parse size
try:
width, height = map(int, size.split('x'))
except ValueError:
error_msg = f"Invalid size format: {size}. Expected format: WIDTHxHEIGHT"
print(f"[AI][{function_name}][Error] {error_msg}")
return {
'url': None,
'provider': 'bria',
'cost': 0.0,
'error': error_msg,
}
# Bria API endpoint
url = 'https://engine.prod.bria-api.com/v1/text-to-image/base'
headers = {
'api_token': api_key,
'Content-Type': 'application/json'
}
payload = {
'prompt': prompt,
'num_results': n,
'sync': True, # Wait for result
'model_version': bria_model.replace('bria-', ''), # e.g., '2.3'
}
# Add negative prompt if provided
if negative_prompt:
payload['negative_prompt'] = negative_prompt
# Add size constraints if not default
if width and height:
# Bria uses aspect ratio or fixed sizes
payload['width'] = width
payload['height'] = height
print(f"[AI][{function_name}] Step 3: Sending request to Bria API...")
request_start = time.time()
try:
response = requests.post(url, json=payload, headers=headers, timeout=150)
request_duration = time.time() - request_start
print(f"[AI][{function_name}] Step 4: Received response in {request_duration:.2f}s (status={response.status_code})")
if response.status_code != 200:
error_msg = f"HTTP {response.status_code} error: {response.text[:200]}"
print(f"[AI][{function_name}][Error] {error_msg}")
return {
'url': None,
'provider': 'bria',
'cost': 0.0,
'error': error_msg,
}
body = response.json()
print(f"[AI][{function_name}] Bria response keys: {list(body.keys()) if isinstance(body, dict) else type(body)}")
# Bria returns { "result": [ { "urls": ["..."] } ] }
image_url = None
error_msg = None
if isinstance(body, dict):
if 'result' in body and isinstance(body['result'], list) and len(body['result']) > 0:
first_result = body['result'][0]
if 'urls' in first_result and isinstance(first_result['urls'], list) and len(first_result['urls']) > 0:
image_url = first_result['urls'][0]
elif 'url' in first_result:
image_url = first_result['url']
elif 'error' in body:
error_msg = body['error']
elif 'message' in body:
error_msg = body['message']
if error_msg:
print(f"[AI][{function_name}][Error] Bria API error: {error_msg}")
return {
'url': None,
'provider': 'bria',
'cost': 0.0,
'error': error_msg,
}
if image_url:
# Cost based on model
cost_per_image = {
'bria-2.3': 0.015,
'bria-2.3-fast': 0.010,
'bria-2.2': 0.012,
}.get(bria_model, 0.015)
cost = cost_per_image * n
print(f"[AI][{function_name}] Step 5: Image generated successfully")
print(f"[AI][{function_name}] Step 6: Cost: ${cost:.4f}")
print(f"[AI][{function_name}][Success] Image generation completed")
return {
'url': image_url,
'provider': 'bria',
'cost': cost,
'error': None,
}
else:
error_msg = 'No image data in Bria response'
print(f"[AI][{function_name}][Error] {error_msg}")
logger.error(f"[AI][{function_name}] Full Bria response: {json.dumps(body, indent=2) if isinstance(body, dict) else str(body)}")
return {
'url': None,
'provider': 'bria',
'cost': 0.0,
'error': error_msg,
}
except requests.exceptions.Timeout:
error_msg = 'Request timeout (150s exceeded)'
print(f"[AI][{function_name}][Error] {error_msg}")
return {
'url': None,
'provider': 'bria',
'cost': 0.0,
'error': error_msg,
}
except Exception as e:
error_msg = f'Unexpected error: {str(e)}'
print(f"[AI][{function_name}][Error] {error_msg}")
logger.error(error_msg, exc_info=True)
return {
'url': None,
'provider': 'bria',
'cost': 0.0,
'error': error_msg,
}
def calculate_cost(self, model: str, input_tokens: int, output_tokens: int, model_type: str = 'text') -> float:
"""Calculate cost for API call"""
"""Calculate cost for API call using ModelRegistry (database-driven)"""
from igny8_core.ai.model_registry import ModelRegistry
if model_type == 'text':
return float(ModelRegistry.calculate_cost(model, input_tokens=input_tokens, output_tokens=output_tokens))
elif model_type == 'image':
return float(ModelRegistry.calculate_cost(model, num_images=1))
return 0.0
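For reference, the deprecated text branch computed cost inline from per-1M-token rates before delegating to ModelRegistry. A sketch of that legacy arithmetic, with rates copied from the MODEL_RATES table elsewhere in this changeset (illustrative of the old fallback path only, not of ModelRegistry output):

```python
# Legacy per-1M-token cost math, kept here only as a worked example.
# Rates copied from the deprecated MODEL_RATES table.
LEGACY_RATES = {
    'gpt-4.1': {'input': 2.00, 'output': 8.00},
    'gpt-4o-mini': {'input': 0.15, 'output': 0.60},
}

def legacy_text_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD = tokens/1M * per-1M rate, summed for input and output."""
    rates = LEGACY_RATES.get(model, {'input': 2.00, 'output': 8.00})
    input_cost = (input_tokens / 1_000_000) * rates['input']
    output_cost = (output_tokens / 1_000_000) * rates['output']
    return input_cost + output_cost

# 1,000 input + 500 output tokens on gpt-4o-mini:
# (1000/1e6)*0.15 + (500/1e6)*0.60 = 0.00015 + 0.00030 = 0.00045
```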
# Legacy method names for backward compatibility
def call_openai(self, prompt: str, model: Optional[str] = None, max_tokens: int = 8192,
temperature: float = 0.7, response_format: Optional[Dict] = None,
api_key: Optional[str] = None) -> Dict[str, Any]:
"""Legacy method - redirects to run_ai_request()"""
"""DEPRECATED: Legacy method - redirects to run_ai_request(). Use run_ai_request() directly."""
return self.run_ai_request(
prompt=prompt,
model=model,

View File

@@ -1,14 +1,27 @@
"""
AI Constants - Configuration constants for AI operations
NOTE: Model pricing (MODEL_RATES, IMAGE_MODEL_RATES) has been moved to the database
via AIModelConfig. Use ModelRegistry to get model pricing:
from igny8_core.ai.model_registry import ModelRegistry
cost = ModelRegistry.calculate_cost(model_id, input_tokens=N, output_tokens=N)
The constants below are DEPRECATED and kept only for reference/backward compatibility.
Do NOT use MODEL_RATES or IMAGE_MODEL_RATES in new code.
"""
# DEPRECATED - Use AIModelConfig database table instead
# Model pricing (per 1M tokens) - kept for reference only
MODEL_RATES = {
'gpt-4.1': {'input': 2.00, 'output': 8.00},
'gpt-4o-mini': {'input': 0.15, 'output': 0.60},
'gpt-4o': {'input': 2.50, 'output': 10.00},
'gpt-5.1': {'input': 1.25, 'output': 10.00},
'gpt-5.2': {'input': 1.75, 'output': 14.00},
}
# DEPRECATED - Use AIModelConfig database table instead
# Image model pricing (per image) - kept for reference only
IMAGE_MODEL_RATES = {
'dall-e-3': 0.040,
'dall-e-2': 0.020,
@@ -33,7 +46,7 @@ VALID_SIZES_BY_MODEL = {
DEFAULT_AI_MODEL = 'gpt-4.1'
# JSON mode supported models
JSON_MODE_MODELS = ['gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo-preview', 'gpt-5.1', 'gpt-5.2']
# Debug mode - controls console logging
# Set to False in production to disable verbose logging

View File

@@ -31,11 +31,15 @@ class AIEngine:
elif function_name == 'generate_ideas':
return f"{count} cluster{'s' if count != 1 else ''}"
elif function_name == 'generate_content':
return f"{count} task{'s' if count != 1 else ''}"
return f"{count} article{'s' if count != 1 else ''}"
elif function_name == 'generate_images':
return f"{count} task{'s' if count != 1 else ''}"
return f"{count} image{'s' if count != 1 else ''}"
elif function_name == 'generate_image_prompts':
return f"{count} image prompt{'s' if count != 1 else ''}"
elif function_name == 'optimize_content':
return f"{count} article{'s' if count != 1 else ''}"
elif function_name == 'generate_site_structure':
return "1 site blueprint"
return "site blueprint"
return f"{count} item{'s' if count != 1 else ''}"
def _build_validation_message(self, function_name: str, payload: dict, count: int, input_description: str) -> str:
@@ -51,12 +55,22 @@ class AIEngine:
remaining = count - len(keyword_list)
if remaining > 0:
keywords_text = ', '.join(keyword_list)
return f"Validating {keywords_text} and {remaining} more keyword{'s' if remaining != 1 else ''}"
return f"Validating {count} keywords for clustering"
else:
keywords_text = ', '.join(keyword_list)
return f"Validating {keywords_text}"
except Exception as e:
logger.warning(f"Failed to load keyword names for validation message: {e}")
elif function_name == 'generate_ideas':
return f"Analyzing {count} clusters for content opportunities"
elif function_name == 'generate_content':
return f"Preparing {count} article{'s' if count != 1 else ''} for generation"
elif function_name == 'generate_image_prompts':
return f"Analyzing content for image opportunities"
elif function_name == 'generate_images':
return f"Queuing {count} image{'s' if count != 1 else ''} for generation"
elif function_name == 'optimize_content':
return f"Analyzing {count} article{'s' if count != 1 else ''} for optimization"
# Fallback to simple count message
return f"Validating {input_description}"
@@ -64,24 +78,33 @@ class AIEngine:
def _get_prep_message(self, function_name: str, count: int, data: Any) -> str:
"""Get user-friendly prep message"""
if function_name == 'auto_cluster':
return f"Loading {count} keyword{'s' if count != 1 else ''}"
return f"Analyzing keyword relationships for {count} keyword{'s' if count != 1 else ''}"
elif function_name == 'generate_ideas':
return f"Loading {count} cluster{'s' if count != 1 else ''}"
# Count keywords in clusters if available
keyword_count = 0
if isinstance(data, dict) and 'cluster_data' in data:
for cluster in data['cluster_data']:
keyword_count += len(cluster.get('keywords', []))
if keyword_count > 0:
return f"Mapping {keyword_count} keywords to topic briefs"
return f"Mapping keywords to topic briefs for {count} cluster{'s' if count != 1 else ''}"
elif function_name == 'generate_content':
return f"Preparing {count} content idea{'s' if count != 1 else ''}"
return f"Building content brief{'s' if count != 1 else ''} with target keywords"
elif function_name == 'generate_images':
return f"Extracting image prompts from {count} task{'s' if count != 1 else ''}"
return f"Preparing AI image generation ({count} image{'s' if count != 1 else ''})"
elif function_name == 'generate_image_prompts':
# Extract max_images from data if available
if isinstance(data, list) and len(data) > 0:
max_images = data[0].get('max_images')
return f"Identifying 1 featured + {max_images} in-article image slots"
elif isinstance(data, dict) and 'max_images' in data:
max_images = data.get('max_images')
return f"Identifying 1 featured + {max_images} in-article image slots"
return f"Identifying featured and in-article image slots"
elif function_name == 'optimize_content':
return f"Analyzing SEO factors for {count} article{'s' if count != 1 else ''}"
elif function_name == 'generate_site_structure':
blueprint_name = ''
if isinstance(data, dict):
@@ -94,13 +117,17 @@ class AIEngine:
def _get_ai_call_message(self, function_name: str, count: int) -> str:
"""Get user-friendly AI call message"""
if function_name == 'auto_cluster':
return f"Grouping {count} keyword{'s' if count != 1 else ''} into clusters"
return f"Grouping {count} keywords by search intent"
elif function_name == 'generate_ideas':
return f"Generating content ideas for {count} cluster{'s' if count != 1 else ''}"
elif function_name == 'generate_content':
return f"Writing article{'s' if count != 1 else ''} with AI"
return f"Writing {count} article{'s' if count != 1 else ''} with AI"
elif function_name == 'generate_images':
return f"Creating image{'s' if count != 1 else ''} with AI"
return f"Generating image{'s' if count != 1 else ''} with AI"
elif function_name == 'generate_image_prompts':
return f"Creating optimized prompts for {count} image{'s' if count != 1 else ''}"
elif function_name == 'optimize_content':
return f"Optimizing {count} article{'s' if count != 1 else ''} for SEO"
elif function_name == 'generate_site_structure':
return "Designing complete site architecture"
return f"Processing with AI"
@@ -108,13 +135,17 @@ class AIEngine:
def _get_parse_message(self, function_name: str) -> str:
"""Get user-friendly parse message"""
if function_name == 'auto_cluster':
return "Organizing clusters"
return "Organizing semantic clusters"
elif function_name == 'generate_ideas':
return "Structuring outlines"
return "Structuring article outlines"
elif function_name == 'generate_content':
return "Formatting content"
return "Formatting HTML content and metadata"
elif function_name == 'generate_images':
return "Processing images"
return "Processing generated images"
elif function_name == 'generate_image_prompts':
return "Refining contextual image descriptions"
elif function_name == 'optimize_content':
return "Compiling optimization scores"
elif function_name == 'generate_site_structure':
return "Compiling site map"
return "Processing results"
@@ -122,19 +153,21 @@ class AIEngine:
def _get_parse_message_with_count(self, function_name: str, count: int) -> str:
"""Get user-friendly parse message with count"""
if function_name == 'auto_cluster':
return f"{count} cluster{'s' if count != 1 else ''} created"
return f"Organizing {count} semantic cluster{'s' if count != 1 else ''}"
elif function_name == 'generate_ideas':
return f"{count} idea{'s' if count != 1 else ''} created"
return f"Structuring {count} article outline{'s' if count != 1 else ''}"
elif function_name == 'generate_content':
return f"{count} article{'s' if count != 1 else ''} created"
return f"Formatting {count} article{'s' if count != 1 else ''}"
elif function_name == 'generate_images':
return f"{count} image{'s' if count != 1 else ''} created"
return f"Processing {count} generated image{'s' if count != 1 else ''}"
elif function_name == 'generate_image_prompts':
# Count is total prompts, in-article is count - 1 (subtract featured)
in_article_count = max(0, count - 1)
if in_article_count > 0:
return f"Writing {in_article_count} Inarticle Image Prompts"
return "Writing Inarticle Image Prompts"
return f"Refining {in_article_count} in-article image description{'s' if in_article_count != 1 else ''}"
return "Refining image descriptions"
elif function_name == 'optimize_content':
return f"Compiling scores for {count} article{'s' if count != 1 else ''}"
elif function_name == 'generate_site_structure':
return f"{count} page blueprint{'s' if count != 1 else ''} mapped"
return f"{count} item{'s' if count != 1 else ''} processed"
@@ -142,20 +175,50 @@ class AIEngine:
def _get_save_message(self, function_name: str, count: int) -> str:
"""Get user-friendly save message"""
if function_name == 'auto_cluster':
return f"Saving {count} cluster{'s' if count != 1 else ''}"
return f"Saving {count} cluster{'s' if count != 1 else ''} with keywords"
elif function_name == 'generate_ideas':
return f"Saving {count} idea{'s' if count != 1 else ''}"
return f"Saving {count} idea{'s' if count != 1 else ''} with outlines"
elif function_name == 'generate_content':
return f"Saving {count} article{'s' if count != 1 else ''}"
elif function_name == 'generate_images':
return f"Saving {count} image{'s' if count != 1 else ''}"
return f"Uploading {count} image{'s' if count != 1 else ''} to media library"
elif function_name == 'generate_image_prompts':
# Count is total prompts created
return f"Assigning {count} Prompts to Dedicated Slots"
in_article = max(0, count - 1)
return f"Assigning {count} prompts (1 featured + {in_article} in-article)"
elif function_name == 'optimize_content':
return f"Saving optimization scores for {count} article{'s' if count != 1 else ''}"
elif function_name == 'generate_site_structure':
return f"Publishing {count} page blueprint{'s' if count != 1 else ''}"
return f"Saving {count} item{'s' if count != 1 else ''}"
def _get_done_message(self, function_name: str, result: dict) -> str:
"""Get user-friendly completion message with counts"""
count = result.get('count', 0)
if function_name == 'auto_cluster':
keyword_count = result.get('keywords_clustered', 0)
return f"✓ Organized {keyword_count} keywords into {count} semantic cluster{'s' if count != 1 else ''}"
elif function_name == 'generate_ideas':
return f"✓ Created {count} content idea{'s' if count != 1 else ''} with detailed outlines"
elif function_name == 'generate_content':
total_words = result.get('total_words', 0)
if total_words > 0:
return f"✓ Generated {count} article{'s' if count != 1 else ''} ({total_words:,} words)"
return f"✓ Generated {count} article{'s' if count != 1 else ''}"
elif function_name == 'generate_images':
return f"✓ Generated and saved {count} AI image{'s' if count != 1 else ''}"
elif function_name == 'generate_image_prompts':
in_article = max(0, count - 1)
return f"✓ Created {count} image prompt{'s' if count != 1 else ''} (1 featured + {in_article} in-article)"
elif function_name == 'optimize_content':
avg_score = result.get('average_score', 0)
if avg_score > 0:
return f"✓ Optimized {count} article{'s' if count != 1 else ''} (avg score: {avg_score}%)"
return f"✓ Optimized {count} article{'s' if count != 1 else ''}"
elif function_name == 'generate_site_structure':
return f"✓ Created {count} page blueprint{'s' if count != 1 else ''}"
return f"{count} item{'s' if count != 1 else ''} completed"
def execute(self, fn: BaseAIFunction, payload: dict) -> dict:
"""
Unified execution pipeline for all AI functions.
@@ -243,12 +306,13 @@ class AIEngine:
ai_core = AICore(account=self.account)
function_name = fn.get_name()
# Generate prompt prefix for tracking (e.g., ##GP01-Clustering or ##CP01-Clustering)
# This replaces function_id and indicates whether prompt is global or custom
from igny8_core.ai.prompts import get_prompt_prefix_for_function
prompt_prefix = get_prompt_prefix_for_function(function_name, account=self.account)
logger.info(f"[AIEngine] Using prompt prefix: {prompt_prefix}")
# Get model config from settings (requires account)
# This will raise ValueError if IntegrationSettings not configured
try:
@@ -286,7 +350,7 @@ class AIEngine:
temperature=model_config.get('temperature'),
response_format=model_config.get('response_format'),
function_name=function_name,
prompt_prefix=prompt_prefix # Pass prompt prefix for tracking (replaces function_id)
)
except Exception as e:
error_msg = f"AI call failed: {str(e)}"
@@ -376,18 +440,18 @@ class AIEngine:
# Map function name to operation type
operation_type = self._get_operation_type(function_name)
# Calculate actual amount based on results
actual_amount = self._get_actual_amount(function_name, save_result, parsed, data)
# Get actual token usage from response (AI returns 'input_tokens' and 'output_tokens')
tokens_input = raw_response.get('input_tokens', 0)
tokens_output = raw_response.get('output_tokens', 0)
# Deduct credits based on actual token usage
CreditService.deduct_credits_for_operation(
account=self.account,
operation_type=operation_type,
amount=actual_amount,
tokens_input=tokens_input,
tokens_output=tokens_output,
cost_usd=raw_response.get('cost'),
model_used=raw_response.get('model', ''),
related_object_type=self._get_related_object_type(function_name),
related_object_id=save_result.get('id') or save_result.get('cluster_id') or save_result.get('task_id'),
metadata={
@@ -399,7 +463,10 @@ class AIEngine:
}
)
logger.info(f"[AIEngine] Credits deducted: {operation_type}, amount: {actual_amount}")
logger.info(
f"[AIEngine] Credits deducted: {operation_type}, "
f"tokens: {tokens_input + tokens_output} ({tokens_input} in, {tokens_output} out)"
)
except InsufficientCreditsError as e:
# This shouldn't happen since we checked before, but log it
logger.error(f"[AIEngine] Insufficient credits during deduction: {e}")
@@ -408,13 +475,16 @@ class AIEngine:
# Don't fail the operation if credit deduction fails (for backward compatibility)
# Phase 6: DONE - Finalization (98-100%)
success_msg = f"Task completed: {final_save_msg}" if 'final_save_msg' in locals() else "Task completed successfully"
self.step_tracker.add_request_step("DONE", "success", "Task completed successfully")
self.tracker.update("DONE", 100, "Task complete!", meta=self.step_tracker.get_meta())
done_msg = self._get_done_message(function_name, save_result)
self.step_tracker.add_request_step("DONE", "success", done_msg)
self.tracker.update("DONE", 100, done_msg, meta=self.step_tracker.get_meta())
# Log to database
self._log_to_database(fn, payload, parsed, save_result)
# Create notification for successful completion
self._create_success_notification(function_name, save_result, payload)
return {
'success': True,
**save_result,
@@ -458,6 +528,9 @@ class AIEngine:
self._log_to_database(fn, None, None, None, error=error)
# Create notification for failure
self._create_failure_notification(function_name, error)
return {
'success': False,
'error': error,
@@ -585,4 +658,104 @@ class AIEngine:
'generate_site_structure': 'site_blueprint',
}
return mapping.get(function_name, 'unknown')
def _create_success_notification(self, function_name: str, save_result: dict, payload: dict):
"""Create notification for successful AI task completion"""
if not self.account:
return
# Lazy import to avoid circular dependency and Django app loading issues
from igny8_core.business.notifications.services import NotificationService
# Get site from payload if available
site = None
site_id = payload.get('site_id')
if site_id:
try:
from igny8_core.auth.models import Site
site = Site.objects.get(id=site_id, account=self.account)
except Exception:
pass
try:
# Map function to appropriate notification method
if function_name == 'auto_cluster':
NotificationService.notify_clustering_complete(
account=self.account,
site=site,
cluster_count=save_result.get('clusters_created', 0),
keyword_count=save_result.get('keywords_updated', 0)
)
elif function_name == 'generate_ideas':
NotificationService.notify_ideas_complete(
account=self.account,
site=site,
idea_count=save_result.get('count', 0),
cluster_count=len(payload.get('ids', []))
)
elif function_name == 'generate_content':
NotificationService.notify_content_complete(
account=self.account,
site=site,
article_count=save_result.get('count', 0),
word_count=save_result.get('word_count', 0)
)
elif function_name == 'generate_image_prompts':
NotificationService.notify_prompts_complete(
account=self.account,
site=site,
prompt_count=save_result.get('count', 0)
)
elif function_name == 'generate_images':
NotificationService.notify_images_complete(
account=self.account,
site=site,
image_count=save_result.get('count', 0)
)
logger.info(f"[AIEngine] Created success notification for {function_name}")
except Exception as e:
# Don't fail the task if notification creation fails
logger.warning(f"[AIEngine] Failed to create success notification: {e}", exc_info=True)
def _create_failure_notification(self, function_name: str, error: str):
"""Create notification for failed AI task"""
if not self.account:
return
# Lazy import to avoid circular dependency and Django app loading issues
from igny8_core.business.notifications.services import NotificationService
try:
# Map function to appropriate failure notification method
if function_name == 'auto_cluster':
NotificationService.notify_clustering_failed(
account=self.account,
error=error
)
elif function_name == 'generate_ideas':
NotificationService.notify_ideas_failed(
account=self.account,
error=error
)
elif function_name == 'generate_content':
NotificationService.notify_content_failed(
account=self.account,
error=error
)
elif function_name == 'generate_image_prompts':
NotificationService.notify_prompts_failed(
account=self.account,
error=error
)
elif function_name == 'generate_images':
NotificationService.notify_images_failed(
account=self.account,
error=error
)
logger.info(f"[AIEngine] Created failure notification for {function_name}")
except Exception as e:
# Don't fail the task if notification creation fails
logger.warning(f"[AIEngine] Failed to create failure notification: {e}", exc_info=True)

View File

@@ -97,7 +97,6 @@ class AutoClusterFunction(BaseAIFunction):
'keyword': kw.keyword,
'volume': kw.volume,
'difficulty': kw.difficulty,
}
for kw in keywords
],
@@ -111,7 +110,7 @@ class AutoClusterFunction(BaseAIFunction):
# Format keywords
keywords_text = '\n'.join([
f"- {kw['keyword']} (Volume: {kw['volume']}, Difficulty: {kw['difficulty']})"
for kw in keyword_data
])

View File

@@ -93,7 +93,7 @@ class GenerateImagePromptsFunction(BaseAIFunction):
data = data[0]
extracted = data['extracted']
max_images = data.get('max_images')
# Format content for prompt
content_text = self._format_content_for_prompt(extracted)
@@ -112,7 +112,7 @@ class GenerateImagePromptsFunction(BaseAIFunction):
return prompt
def parse_response(self, response: str, step_tracker=None) -> Dict:
"""Parse AI response - same pattern as other functions"""
"""Parse AI response with new structure including captions"""
ai_core = AICore(account=getattr(self, 'account', None))
json_data = ai_core.extract_json(response)
@@ -123,9 +123,28 @@ class GenerateImagePromptsFunction(BaseAIFunction):
if 'featured_prompt' not in json_data:
raise ValueError("Missing 'featured_prompt' in AI response")
if 'featured_caption' not in json_data:
raise ValueError("Missing 'featured_caption' in AI response")
if 'in_article_prompts' not in json_data:
raise ValueError("Missing 'in_article_prompts' in AI response")
# Validate in_article_prompts structure (should be list of objects with prompt & caption)
in_article_prompts = json_data.get('in_article_prompts', [])
if in_article_prompts:
for idx, item in enumerate(in_article_prompts):
if isinstance(item, dict):
if 'prompt' not in item:
raise ValueError(f"Missing 'prompt' in in_article_prompts[{idx}]")
if 'caption' not in item:
raise ValueError(f"Missing 'caption' in in_article_prompts[{idx}]")
else:
# Legacy format (just string) - convert to new format
in_article_prompts[idx] = {
'prompt': str(item),
'caption': '' # Empty caption for legacy data
}
return json_data
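The legacy-format fallback in `parse_response` (a bare string prompt gains an empty caption) can be exercised in isolation; a sketch of the same normalization, with dict entries passing through unchanged:

```python
def normalize_in_article_prompts(prompts: list) -> list:
    """Coerce each entry to {'prompt': ..., 'caption': ...}, mirroring the
    legacy-format handling in parse_response."""
    normalized = []
    for item in prompts:
        if isinstance(item, dict):
            normalized.append(item)
        else:
            # Legacy format: a bare string prompt, no caption available
            normalized.append({'prompt': str(item), 'caption': ''})
    return normalized
```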
def save_output(
@@ -146,36 +165,47 @@ class GenerateImagePromptsFunction(BaseAIFunction):
content = original_data['content']
extracted = original_data['extracted']
max_images = original_data.get('max_images')
prompts_created = 0
with transaction.atomic():
# Save featured image prompt with caption
Images.objects.update_or_create(
content=content,
image_type='featured',
defaults={
'prompt': parsed['featured_prompt'],
'caption': parsed.get('featured_caption', ''),
'status': 'pending',
'position': 0,
}
)
prompts_created += 1
# Save in-article image prompts with captions
in_article_prompts = parsed.get('in_article_prompts', [])
h2_headings = extracted.get('h2_headings', [])
for idx, prompt_data in enumerate(in_article_prompts[:max_images]):
# Handle both new format (dict with prompt & caption) and legacy format (string)
if isinstance(prompt_data, dict):
prompt_text = prompt_data.get('prompt', '')
caption_text = prompt_data.get('caption', '')
else:
# Legacy format - just a string prompt
prompt_text = str(prompt_data)
caption_text = ''
heading = h2_headings[idx] if idx < len(h2_headings) else f"Section {idx}"
Images.objects.update_or_create(
content=content,
image_type='in_article',
position=idx, # 0-based position matching section array indices
defaults={
'prompt': prompt_text,
'caption': caption_text,
'status': 'pending',
}
)
@@ -188,16 +218,14 @@ class GenerateImagePromptsFunction(BaseAIFunction):
# Helper methods
def _get_max_in_article_images(self, account) -> int:
"""Get max_in_article_images from IntegrationSettings"""
try:
from igny8_core.modules.system.models import IntegrationSettings
settings = IntegrationSettings.objects.get(
account=account,
integration_type='image_generation'
)
return settings.config.get('max_in_article_images', 2)
except IntegrationSettings.DoesNotExist:
return 2 # Default
"""
Get max_in_article_images from AISettings (with account override).
"""
from igny8_core.modules.system.ai_settings import AISettings
max_images = AISettings.get_effective_max_images(account)
logger.info(f"Using max_in_article_images={max_images} for account {account.id}")
return max_images
def _extract_content_elements(self, content: Content, max_images: int) -> Dict:
"""Extract title, intro paragraphs, and H2 headings from content HTML"""

View File

@@ -67,42 +67,39 @@ class GenerateImagesFunction(BaseAIFunction):
if not tasks:
raise ValueError("No tasks found")
# Get image generation settings from AISettings (with account overrides)
from igny8_core.modules.system.ai_settings import AISettings
from igny8_core.ai.model_registry import ModelRegistry
# Get effective settings (AISettings + AccountSettings overrides)
image_style = AISettings.get_effective_image_style(account)
max_images = AISettings.get_effective_max_images(account)
# Get default image model and provider from database
default_model = ModelRegistry.get_default_model('image')
if default_model:
model_config = ModelRegistry.get_model(default_model)
provider = model_config.provider if model_config else 'openai'
model = default_model
else:
provider = 'openai'
model = 'dall-e-3'
logger.info(f"Using image settings: provider={provider}, model={model}, style={image_style}, max={max_images}")
return {
'tasks': tasks,
'account': account,
'provider': provider,
'model': model,
'image_type': image_style,
'max_in_article_images': max_images,
}
def build_prompt(self, data: Dict, account=None) -> Dict:
"""Extract image prompts from task content"""
task = data.get('task')
max_images = data.get('max_in_article_images', 2)
max_images = data.get('max_in_article_images')
if not task or not task.content:
raise ValueError("Task has no content")
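The hunk above swaps the per-account IntegrationSettings lookup for AISettings effective values plus the DB-driven ModelRegistry default. A minimal standalone sketch of that resolution flow, with stubbed collaborators (the stub return values are illustrative assumptions, not confirmed defaults):

```python
from types import SimpleNamespace

def resolve_image_settings(account, registry, ai_settings):
    # Mirrors the new validate_inputs() logic in GenerateImagesFunction:
    # style/max come from AISettings (with account overrides applied),
    # provider/model come from the ModelRegistry default image model.
    image_style = ai_settings.get_effective_image_style(account)
    max_images = ai_settings.get_effective_max_images(account)
    default_model = registry.get_default_model('image')
    if default_model:
        model_config = registry.get_model(default_model)
        provider = model_config.provider if model_config else 'openai'
        model = default_model
    else:
        provider, model = 'openai', 'dall-e-3'  # last-resort fallback, as in the diff
    return {'provider': provider, 'model': model,
            'image_type': image_style, 'max_in_article_images': max_images}

# Hypothetical stand-ins for ModelRegistry and AISettings (illustration only).
registry = SimpleNamespace(
    get_default_model=lambda model_type: 'runware:97@1',
    get_model=lambda model_id: SimpleNamespace(provider='runware'),
)
ai_settings = SimpleNamespace(
    get_effective_image_style=lambda account: 'realistic',
    get_effective_max_images=lambda account: 2,
)
settings = resolve_image_settings(account=None, registry=registry, ai_settings=ai_settings)
print(settings['provider'], settings['model'])  # runware runware:97@1
```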


@@ -0,0 +1,377 @@
"""
Model Registry Service
Central registry for AI model configurations with caching.
This service provides:
- Database-driven model configuration (from AIModelConfig)
- Integration provider API key retrieval (from IntegrationProvider)
- Caching for performance
- Cost calculation methods
Usage:
from igny8_core.ai.model_registry import ModelRegistry
# Get model config
model = ModelRegistry.get_model('gpt-4o-mini')
# Get rate
input_rate = ModelRegistry.get_rate('gpt-4o-mini', 'input')
# Calculate cost
cost = ModelRegistry.calculate_cost('gpt-4o-mini', input_tokens=1000, output_tokens=500)
# Get API key for a provider
api_key = ModelRegistry.get_api_key('openai')
"""
import logging
from decimal import Decimal
from typing import Optional, Dict, Any
from django.core.cache import cache
logger = logging.getLogger(__name__)
# Cache TTL in seconds (5 minutes)
MODEL_CACHE_TTL = 300
# Cache key prefix
CACHE_KEY_PREFIX = 'ai_model_'
PROVIDER_CACHE_PREFIX = 'provider_'
class ModelRegistry:
"""
Central registry for AI model configurations with caching.
Uses AIModelConfig from database for model configs.
Uses IntegrationProvider for API keys.
"""
@classmethod
def _get_cache_key(cls, model_id: str) -> str:
"""Generate cache key for model"""
return f"{CACHE_KEY_PREFIX}{model_id}"
@classmethod
def _get_provider_cache_key(cls, provider_id: str) -> str:
"""Generate cache key for provider"""
return f"{PROVIDER_CACHE_PREFIX}{provider_id}"
@classmethod
def _get_from_db(cls, model_id: str) -> Optional[Any]:
"""Get model config from database"""
try:
from igny8_core.business.billing.models import AIModelConfig
return AIModelConfig.objects.filter(
model_name=model_id,
is_active=True
).first()
except Exception as e:
logger.debug(f"Could not fetch model {model_id} from DB: {e}")
return None
@classmethod
def get_model(cls, model_id: str) -> Optional[Any]:
"""
Get model configuration by model_id.
Order of lookup:
1. Cache
2. Database (AIModelConfig)
Args:
model_id: The model identifier (e.g., 'gpt-4o-mini', 'dall-e-3')
Returns:
AIModelConfig instance, None if not found
"""
cache_key = cls._get_cache_key(model_id)
# Try cache first
cached = cache.get(cache_key)
if cached is not None:
return cached
# Try database
model_config = cls._get_from_db(model_id)
if model_config:
cache.set(cache_key, model_config, MODEL_CACHE_TTL)
return model_config
logger.warning(f"Model {model_id} not found in database")
return None
@classmethod
def get_rate(cls, model_id: str, rate_type: str) -> Decimal:
"""
Get specific rate for a model.
Args:
model_id: The model identifier
rate_type: 'input', 'output' (for text models) or 'image' (for image models)
Returns:
Decimal rate value, 0 if not found
"""
model = cls.get_model(model_id)
if not model:
return Decimal('0')
# Handle AIModelConfig instance
if rate_type == 'input':
return model.input_cost_per_1m or Decimal('0')
elif rate_type == 'output':
return model.output_cost_per_1m or Decimal('0')
elif rate_type == 'image':
return model.cost_per_image or Decimal('0')
return Decimal('0')
@classmethod
def calculate_cost(cls, model_id: str, input_tokens: int = 0, output_tokens: int = 0, num_images: int = 0) -> Decimal:
"""
Calculate cost for model usage.
For text models: Uses input/output token counts
For image models: Uses num_images
Args:
model_id: The model identifier
input_tokens: Number of input tokens (for text models)
output_tokens: Number of output tokens (for text models)
num_images: Number of images (for image models)
Returns:
Decimal cost in USD
"""
model = cls.get_model(model_id)
if not model:
return Decimal('0')
# Get model type from AIModelConfig
model_type = model.model_type
if model_type == 'text':
input_rate = cls.get_rate(model_id, 'input')
output_rate = cls.get_rate(model_id, 'output')
cost = (
(Decimal(input_tokens) * input_rate) +
(Decimal(output_tokens) * output_rate)
) / Decimal('1000000')
return cost
elif model_type == 'image':
image_rate = cls.get_rate(model_id, 'image')
return image_rate * Decimal(num_images)
return Decimal('0')
@classmethod
def get_default_model(cls, model_type: str = 'text') -> Optional[str]:
"""
Get the default model for a given type from database.
Args:
model_type: 'text' or 'image'
Returns:
model_id string or None
"""
try:
from igny8_core.business.billing.models import AIModelConfig
default = AIModelConfig.objects.filter(
model_type=model_type,
is_active=True,
is_default=True
).first()
if default:
return default.model_name
# If no default is set, return first active model of this type
first_active = AIModelConfig.objects.filter(
model_type=model_type,
is_active=True
).order_by('model_name').first()
if first_active:
return first_active.model_name
except Exception as e:
logger.error(f"Could not get default {model_type} model from DB: {e}")
return None
@classmethod
def list_models(cls, model_type: Optional[str] = None, provider: Optional[str] = None) -> list:
"""
List all available models from database, optionally filtered by type or provider.
Args:
model_type: Filter by 'text', 'image', or 'embedding'
provider: Filter by 'openai', 'anthropic', 'runware', etc.
Returns:
List of AIModelConfig instances
"""
try:
from igny8_core.business.billing.models import AIModelConfig
queryset = AIModelConfig.objects.filter(is_active=True)
if model_type:
queryset = queryset.filter(model_type=model_type)
if provider:
queryset = queryset.filter(provider=provider)
return list(queryset.order_by('model_name'))
except Exception as e:
logger.error(f"Could not list models from DB: {e}")
return []
@classmethod
def clear_cache(cls, model_id: Optional[str] = None):
"""
Clear model cache.
Args:
model_id: Clear specific model cache, or all if None
"""
if model_id:
cache.delete(cls._get_cache_key(model_id))
else:
# Clear all model caches - use pattern if available
try:
from django.core.cache import caches
default_cache = caches['default']
if hasattr(default_cache, 'delete_pattern'):
default_cache.delete_pattern(f"{CACHE_KEY_PREFIX}*")
else:
# Fallback: clear all known models from DB
from igny8_core.business.billing.models import AIModelConfig
for model in AIModelConfig.objects.values_list('model_name', flat=True):
cache.delete(cls._get_cache_key(model))
except Exception as e:
logger.warning(f"Could not clear all model caches: {e}")
@classmethod
def validate_model(cls, model_id: str) -> bool:
"""
Check if a model ID is valid and active.
Args:
model_id: The model identifier to validate
Returns:
True if model exists and is active, False otherwise
"""
model = cls.get_model(model_id)
if not model:
return False
return model.is_active
# ========== IntegrationProvider methods ==========
@classmethod
def get_provider(cls, provider_id: str) -> Optional[Any]:
"""
Get IntegrationProvider by provider_id.
Args:
provider_id: The provider identifier (e.g., 'openai', 'stripe', 'resend')
Returns:
IntegrationProvider instance, None if not found
"""
cache_key = cls._get_provider_cache_key(provider_id)
# Try cache first
cached = cache.get(cache_key)
if cached is not None:
return cached
try:
from igny8_core.modules.system.models import IntegrationProvider
provider = IntegrationProvider.objects.filter(
provider_id=provider_id,
is_active=True
).first()
if provider:
cache.set(cache_key, provider, MODEL_CACHE_TTL)
return provider
except Exception as e:
logger.error(f"Could not fetch provider {provider_id} from DB: {e}")
return None
@classmethod
def get_api_key(cls, provider_id: str) -> Optional[str]:
"""
Get API key for a provider.
Args:
provider_id: The provider identifier (e.g., 'openai', 'anthropic', 'runware')
Returns:
API key string, None if not found or provider is inactive
"""
provider = cls.get_provider(provider_id)
if provider and provider.api_key:
return provider.api_key
return None
@classmethod
def get_api_secret(cls, provider_id: str) -> Optional[str]:
"""
Get API secret for a provider (for OAuth, Stripe secret key, etc.).
Args:
provider_id: The provider identifier
Returns:
API secret string, None if not found
"""
provider = cls.get_provider(provider_id)
if provider and provider.api_secret:
return provider.api_secret
return None
@classmethod
def get_webhook_secret(cls, provider_id: str) -> Optional[str]:
"""
Get webhook secret for a provider (for Stripe, PayPal webhooks).
Args:
provider_id: The provider identifier
Returns:
Webhook secret string, None if not found
"""
provider = cls.get_provider(provider_id)
if provider and provider.webhook_secret:
return provider.webhook_secret
return None
@classmethod
def clear_provider_cache(cls, provider_id: Optional[str] = None):
"""
Clear provider cache.
Args:
provider_id: Clear specific provider cache, or all if None
"""
if provider_id:
cache.delete(cls._get_provider_cache_key(provider_id))
else:
try:
from django.core.cache import caches
default_cache = caches['default']
if hasattr(default_cache, 'delete_pattern'):
default_cache.delete_pattern(f"{PROVIDER_CACHE_PREFIX}*")
else:
from igny8_core.modules.system.models import IntegrationProvider
for pid in IntegrationProvider.objects.values_list('provider_id', flat=True):
cache.delete(cls._get_provider_cache_key(pid))
except Exception as e:
logger.warning(f"Could not clear provider caches: {e}")
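The per-1M-token arithmetic in `calculate_cost` can be checked standalone. The rates below are hypothetical placeholders, not the values stored in AIModelConfig:

```python
from decimal import Decimal

def text_cost(input_tokens, output_tokens, input_rate_per_1m, output_rate_per_1m):
    # Same formula as ModelRegistry.calculate_cost for model_type == 'text':
    # each token count is multiplied by its per-1M rate, then the sum is
    # divided by 1,000,000.
    return ((Decimal(input_tokens) * input_rate_per_1m)
            + (Decimal(output_tokens) * output_rate_per_1m)) / Decimal('1000000')

# Hypothetical USD-per-1M-token rates; real values live in AIModelConfig rows.
cost = text_cost(1000, 500, Decimal('0.150'), Decimal('0.600'))
print(cost)  # 0.00045
```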


@@ -1,9 +1,9 @@
"""
Prompt Registry - Centralized prompt management with override hierarchy
Supports: task-level overrides → DB prompts → default fallbacks
Supports: task-level overrides → DB prompts → GlobalAIPrompt (REQUIRED)
"""
import logging
from typing import Dict, Any, Optional
from typing import Dict, Any, Optional, Tuple
from django.db import models
logger = logging.getLogger(__name__)
@@ -14,584 +14,12 @@ class PromptRegistry:
Centralized prompt registry with hierarchical resolution:
1. Task-level prompt_override (if exists)
2. DB prompt for (account, function)
3. Default fallback from registry
3. GlobalAIPrompt (REQUIRED - no hardcoded fallbacks)
"""
# Default prompts stored in registry
DEFAULT_PROMPTS = {
'clustering': """You are a semantic strategist and SEO architecture engine. Your task is to analyze the provided keyword list and group them into meaningful, intent-driven topic clusters that reflect how real users search, think, and act online.
Return a single JSON object with a "clusters" array. Each cluster must follow this structure:
# Removed ALL hardcoded prompts - GlobalAIPrompt is now the ONLY source of default prompts
# To add/modify prompts, use Django admin: /admin/system/globalaiprompt/
{
"name": "[Descriptive cluster name — natural, SEO-relevant, clearly expressing the topic]",
"description": "[1-2 concise sentences explaining what this cluster covers and why these keywords belong together]",
"keywords": ["keyword 1", "keyword 2", "keyword 3", "..."]
}
CLUSTERING STRATEGY:
1. Keyword-first, structure-follows:
- Do NOT rely on assumed categories or existing content structures.
- Begin purely from the meaning, intent, and behavioral connection between keywords.
2. Use multi-dimensional grouping logic:
- Group keywords by these behavioral dimensions:
• Search Intent → informational, commercial, transactional, navigational
• Use-Case or Problem → what the user is trying to achieve or solve
• Function or Feature → how something works or what it does
• Persona or Audience → who the content or product serves
• Context → location, time, season, platform, or device
- Combine 2-3 dimensions naturally where they make sense.
3. Model real search behavior:
- Favor clusters that form natural user journeys such as:
• Problem ➝ Solution
• General ➝ Specific
• Product ➝ Use-case
• Buyer ➝ Benefit
• Tool ➝ Function
• Task ➝ Method
- Each cluster should feel like a real topic hub users would explore in depth.
4. Avoid superficial groupings:
- Do not cluster keywords just because they share words.
- Do not force-fit outliers or unrelated keywords.
- Exclude keywords that don't logically connect to any cluster.
5. Quality rules:
- Each cluster should include between 3 and 10 strongly related keywords.
- Never duplicate a keyword across multiple clusters.
- Prioritize semantic strength, search intent, and usefulness for SEO-driven content structure.
- It's better to output fewer, high-quality clusters than many weak or shallow ones.
INPUT FORMAT:
{
"keywords": [IGNY8_KEYWORDS]
}
OUTPUT FORMAT:
Return ONLY the final JSON object in this format:
{
"clusters": [
{
"name": "...",
"description": "...",
"keywords": ["...", "...", "..."]
}
]
}
Do not include any explanations, text, or commentary outside the JSON output.
""",
'ideas': """Generate SEO-optimized, high-quality content ideas and outlines for each keyword cluster.
Input:
Clusters: [IGNY8_CLUSTERS]
Keywords: [IGNY8_CLUSTER_KEYWORDS]
Output: JSON with "ideas" array.
Each cluster → 1 cluster_hub + 2-4 supporting ideas.
Each idea must include:
title, description, content_type, content_structure, cluster_id, estimated_word_count (1500-2200), and covered_keywords.
Outline Rules:
Intro: 1 hook (30-40 words) + 2 intro paragraphs (50-60 words each).
5-8 H2 sections, each with 2-3 H3s.
Each H2 ≈ 250-300 words, mixed content (paragraphs, lists, tables, blockquotes).
Vary section format and tone; no bullets or lists at start.
Tables have columns; blockquotes = expert POV or data insight.
Use depth, examples, and real context.
Avoid repetitive structure.
Tone: Professional editorial flow. No generic phrasing. Use varied sentence openings and realistic examples.
Output JSON Example:
{
"ideas": [
{
"title": "Best Organic Cotton Duvet Covers for All Seasons",
"description": {
"introduction": {
"hook": "Transform your sleep with organic cotton that blends comfort and sustainability.",
"paragraphs": [
{"format": "paragraph", "details": "Overview of organic cotton's rise in bedding industry."},
{"format": "paragraph", "details": "Why consumers prefer organic bedding over synthetic alternatives."}
]
},
"H2": [
{
"heading": "Why Choose Organic Cotton for Bedding?",
"subsections": [
{"subheading": "Health and Skin Benefits", "format": "paragraph", "details": "Discuss hypoallergenic and chemical-free aspects."},
{"subheading": "Environmental Sustainability", "format": "list", "details": "Eco benefits like low water use, no pesticides."},
{"subheading": "Long-Term Cost Savings", "format": "table", "details": "Compare durability and pricing over time."}
]
}
]
},
"content_type": "post",
"content_structure": "review",
"cluster_id": 12,
"estimated_word_count": 1800,
"covered_keywords": "organic duvet covers, eco-friendly bedding, sustainable sheets"
}
]
}
Valid content_type values: post, page, product, taxonomy
Valid content_structure by type:
- post: article, guide, comparison, review, listicle
- page: landing_page, business_page, service_page, general, cluster_hub
- product: product_page
- taxonomy: category_archive, tag_archive, attribute_archive""",
'content_generation': """You are an editorial content strategist. Your task is to generate a complete JSON response object based on the provided content idea, keyword cluster, keyword list, and metadata context.
==================
Generate a complete JSON response object matching this structure:
==================
{
"title": "[Article title using target keywords — full sentence case]",
"content": "[HTML content — full editorial structure with <p>, <h2>, <h3>, <ul>, <ol>, <table>]"
}
===========================
CONTENT FLOW RULES
===========================
**INTRODUCTION:**
- Start with 1 italicized hook (30-40 words)
- Follow with 2 narrative paragraphs (each 50-60 words; 2-3 sentences max)
- No headings allowed in intro
**H2 SECTIONS (5-8 total):**
Each section should be 250-300 words and follow this format:
1. Two narrative paragraphs (80-120 words each, 2-3 sentences)
2. One list or table (must come *after* a paragraph)
3. Optional closing paragraph (40-60 words)
4. Insert 2-3 subsections naturally after main paragraphs
**Formatting Rules:**
- Vary use of unordered lists, ordered lists, and tables across sections
- Never begin any section or sub-section with a list or table
===========================
STYLE & QUALITY RULES
===========================
- **Keyword Usage:**
- Use keywords naturally in title, introduction, and headings
- Prioritize readability over keyword density
- **Tone & style guidelines:**
- No robotic or passive voice
- Avoid generic intros like "In today's world…"
- Don't repeat heading in opening sentence
- Vary sentence structure and length
===========================
STAGE 3: METADATA CONTEXT (NEW)
===========================
**Content Structure:**
[IGNY8_CONTENT_STRUCTURE]
- If structure is "cluster_hub": Create comprehensive, authoritative content that serves as the main resource for this topic cluster. Include overview sections, key concepts, and links to related topics.
- If structure is "article" or "guide": Create detailed, focused content that dives deep into the topic with actionable insights.
- Other structures: Follow the appropriate format (listicle, comparison, review, landing_page, service_page, product_page, category_archive, tag_archive, attribute_archive).
**Taxonomy Context:**
[IGNY8_TAXONOMY]
- Use taxonomy information to structure categories and tags appropriately.
- Align content with taxonomy hierarchy and relationships.
- Ensure content fits within the defined taxonomy structure.
**Product/Service Attributes:**
[IGNY8_ATTRIBUTES]
- If attributes are provided (e.g., product specs, service modifiers), incorporate them naturally into the content.
- For product content: Include specifications, features, dimensions, materials, etc. as relevant.
- For service content: Include service tiers, pricing modifiers, availability, etc. as relevant.
- Present attributes in a user-friendly format (tables, lists, or integrated into narrative).
===========================
INPUT VARIABLES
===========================
CONTENT IDEA DETAILS:
[IGNY8_IDEA]
KEYWORD CLUSTER:
[IGNY8_CLUSTER]
ASSOCIATED KEYWORDS:
[IGNY8_KEYWORDS]
===========================
OUTPUT FORMAT
===========================
Return ONLY the final JSON object.
Do NOT include any comments, formatting, or explanations.""",
'site_structure_generation': """You are a senior UX architect and information designer. Use the business brief, objectives, style references, and existing site info to propose a complete multi-page marketing website structure.
INPUT CONTEXT
==============
BUSINESS BRIEF:
[IGNY8_BUSINESS_BRIEF]
PRIMARY OBJECTIVES:
[IGNY8_OBJECTIVES]
STYLE & BRAND NOTES:
[IGNY8_STYLE]
SITE INFO / CURRENT STRUCTURE:
[IGNY8_SITE_INFO]
OUTPUT REQUIREMENTS
====================
Return ONE JSON object with the following keys:
{
"site": {
"name": "...",
"primary_navigation": ["home", "services", "about", "contact"],
"secondary_navigation": ["blog", "faq"],
"hero_message": "High level value statement",
"tone": "voice + tone summary"
},
"pages": [
{
"slug": "home",
"title": "Home",
"type": "home | about | services | products | blog | contact | custom",
"status": "draft",
"objective": "Explain the core brand promise and primary CTA",
"primary_cta": "Book a strategy call",
"seo": {
"meta_title": "...",
"meta_description": "..."
},
"blocks": [
{
"type": "hero | features | services | stats | testimonials | faq | contact | custom",
"heading": "Section headline",
"subheading": "Support copy",
"layout": "full-width | two-column | cards | carousel",
"content": [
"Bullet or short paragraph describing what to render in this block"
]
}
]
}
]
}
RULES
=====
- Include 5-8 pages covering the complete buyer journey (awareness → evaluation → conversion → trust).
- Every page must have at least 3 blocks with concrete guidance (no placeholders like "Lorem ipsum").
- Use consistent slug naming, all lowercase with hyphens.
- Type must match the allowed enum and reflect page intent.
- Ensure the navigation arrays align with the page list.
- Focus on practical descriptions that an engineering team can hand off directly to the Site Builder.
Return ONLY valid JSON. No commentary, explanations, or Markdown.
""",
'image_prompt_extraction': """Extract image prompts from the following article content.
ARTICLE TITLE: {title}
ARTICLE CONTENT:
{content}
Extract image prompts for:
1. Featured Image: One main image that represents the article topic
2. In-Article Images: Up to {max_images} images that would be useful within the article content
Return a JSON object with this structure:
{{
"featured_prompt": "Detailed description of the featured image",
"in_article_prompts": [
"Description of first in-article image",
"Description of second in-article image",
...
]
}}
Make sure each prompt is detailed enough for image generation, describing the visual elements, style, mood, and composition.""",
'image_prompt_template': 'Create a high-quality {image_type} image to use as a featured photo for a blog post titled "{post_title}". The image should visually represent the theme, mood, and subject implied by the image prompt: {image_prompt}. Focus on a realistic, well-composed scene that naturally communicates the topic without text or logos. Use balanced lighting, pleasing composition, and photographic detail suitable for lifestyle or editorial web content. Avoid adding any visible or readable text, brand names, or illustrative effects. **And make sure image is not blurry.**',
'negative_prompt': 'text, watermark, logo, overlay, title, caption, writing on walls, writing on objects, UI, infographic elements, post title',
'optimize_content': """You are an expert content optimizer specializing in SEO, readability, and engagement.
Your task is to optimize the provided content to improve its SEO score, readability, and engagement metrics.
CURRENT CONTENT:
Title: {CONTENT_TITLE}
Word Count: {WORD_COUNT}
Source: {SOURCE}
Primary Keyword: {PRIMARY_KEYWORD}
Internal Links: {INTERNAL_LINKS_COUNT}
CURRENT META DATA:
Meta Title: {META_TITLE}
Meta Description: {META_DESCRIPTION}
CURRENT SCORES:
{CURRENT_SCORES}
HTML CONTENT:
{HTML_CONTENT}
OPTIMIZATION REQUIREMENTS:
1. SEO Optimization:
- Ensure meta title is 30-60 characters (if provided)
- Ensure meta description is 120-160 characters (if provided)
- Optimize primary keyword usage (natural, not keyword stuffing)
- Improve heading structure (H1, H2, H3 hierarchy)
- Add internal links where relevant (maintain existing links)
2. Readability:
- Average sentence length: 15-20 words
- Use clear, concise language
- Break up long paragraphs
- Use bullet points and lists where appropriate
- Ensure proper paragraph structure
3. Engagement:
- Add compelling headings
- Include relevant images placeholders (alt text)
- Use engaging language
- Create clear call-to-action sections
- Improve content flow and structure
OUTPUT FORMAT:
Return ONLY a JSON object in this format:
{{
"html_content": "[Optimized HTML content]",
"meta_title": "[Optimized meta title, 30-60 chars]",
"meta_description": "[Optimized meta description, 120-160 chars]",
"optimization_notes": "[Brief notes on what was optimized]"
}}
Do not include any explanations, text, or commentary outside the JSON output.
""",
# Phase 8: Universal Content Types
'product_generation': """You are a product content specialist. Generate comprehensive product content that includes detailed descriptions, features, specifications, pricing, and benefits.
INPUT:
Product Name: [IGNY8_PRODUCT_NAME]
Product Description: [IGNY8_PRODUCT_DESCRIPTION]
Product Features: [IGNY8_PRODUCT_FEATURES]
Target Audience: [IGNY8_TARGET_AUDIENCE]
Primary Keyword: [IGNY8_PRIMARY_KEYWORD]
OUTPUT FORMAT:
Return ONLY a JSON object in this format:
{
"title": "[Product name and key benefit]",
"meta_title": "[SEO-optimized meta title, 30-60 chars]",
"meta_description": "[Compelling meta description, 120-160 chars]",
"html_content": "[Complete HTML product page content]",
"word_count": [Integer word count],
"primary_keyword": "[Primary keyword]",
"secondary_keywords": ["keyword1", "keyword2", "keyword3"],
"tags": ["tag1", "tag2", "tag3"],
"categories": ["Category > Subcategory"],
"json_blocks": [
{
"type": "product_overview",
"heading": "Product Overview",
"content": "Detailed product description"
},
{
"type": "features",
"heading": "Key Features",
"items": ["Feature 1", "Feature 2", "Feature 3"]
},
{
"type": "specifications",
"heading": "Specifications",
"data": {"Spec 1": "Value 1", "Spec 2": "Value 2"}
},
{
"type": "pricing",
"heading": "Pricing",
"content": "Pricing information"
},
{
"type": "benefits",
"heading": "Benefits",
"items": ["Benefit 1", "Benefit 2", "Benefit 3"]
}
],
"structure_data": {
"product_type": "[Product type]",
"price_range": "[Price range]",
"target_market": "[Target market]"
}
}
CONTENT REQUIREMENTS:
- Include compelling product overview
- List key features with benefits
- Provide detailed specifications
- Include pricing information (if available)
- Highlight unique selling points
- Use SEO-optimized headings
- Include call-to-action sections
- Ensure natural keyword usage
""",
'service_generation': """You are a service page content specialist. Generate comprehensive service page content that explains services, benefits, process, and pricing.
INPUT:
Service Name: [IGNY8_SERVICE_NAME]
Service Description: [IGNY8_SERVICE_DESCRIPTION]
Service Benefits: [IGNY8_SERVICE_BENEFITS]
Target Audience: [IGNY8_TARGET_AUDIENCE]
Primary Keyword: [IGNY8_PRIMARY_KEYWORD]
OUTPUT FORMAT:
Return ONLY a JSON object in this format:
{
"title": "[Service name and value proposition]",
"meta_title": "[SEO-optimized meta title, 30-60 chars]",
"meta_description": "[Compelling meta description, 120-160 chars]",
"html_content": "[Complete HTML service page content]",
"word_count": [Integer word count],
"primary_keyword": "[Primary keyword]",
"secondary_keywords": ["keyword1", "keyword2", "keyword3"],
"tags": ["tag1", "tag2", "tag3"],
"categories": ["Category > Subcategory"],
"json_blocks": [
{
"type": "service_overview",
"heading": "Service Overview",
"content": "Detailed service description"
},
{
"type": "benefits",
"heading": "Benefits",
"items": ["Benefit 1", "Benefit 2", "Benefit 3"]
},
{
"type": "process",
"heading": "Our Process",
"steps": ["Step 1", "Step 2", "Step 3"]
},
{
"type": "pricing",
"heading": "Pricing",
"content": "Pricing information"
},
{
"type": "faq",
"heading": "Frequently Asked Questions",
"items": [{"question": "Q1", "answer": "A1"}]
}
],
"structure_data": {
"service_type": "[Service type]",
"duration": "[Service duration]",
"target_market": "[Target market]"
}
}
CONTENT REQUIREMENTS:
- Clear service overview and value proposition
- Detailed benefits and outcomes
- Step-by-step process explanation
- Pricing information (if available)
- FAQ section addressing common questions
- Include testimonials or case studies (if applicable)
- Use SEO-optimized headings
- Include call-to-action sections
""",
'taxonomy_generation': """You are a taxonomy and categorization specialist. Generate comprehensive taxonomy page content that organizes and explains categories, tags, and hierarchical structures.
INPUT:
Taxonomy Name: [IGNY8_TAXONOMY_NAME]
Taxonomy Description: [IGNY8_TAXONOMY_DESCRIPTION]
Taxonomy Items: [IGNY8_TAXONOMY_ITEMS]
Primary Keyword: [IGNY8_PRIMARY_KEYWORD]
OUTPUT FORMAT:
Return ONLY a JSON object in this format:
{{
"title": "[Taxonomy name and purpose]",
"meta_title": "[SEO-optimized meta title, 30-60 chars]",
"meta_description": "[Compelling meta description, 120-160 chars]",
"html_content": "[Complete HTML taxonomy page content]",
"word_count": [Integer word count],
"primary_keyword": "[Primary keyword]",
"secondary_keywords": ["keyword1", "keyword2", "keyword3"],
"tags": ["tag1", "tag2", "tag3"],
"categories": ["Category > Subcategory"],
"json_blocks": [
{{
"type": "taxonomy_overview",
"heading": "Taxonomy Overview",
"content": "Detailed taxonomy description"
}},
{{
"type": "categories",
"heading": "Categories",
"items": [
{{
"name": "Category 1",
"description": "Category description",
"subcategories": ["Subcat 1", "Subcat 2"]
}}
]
}},
{{
"type": "tags",
"heading": "Tags",
"items": ["Tag 1", "Tag 2", "Tag 3"]
}},
{{
"type": "hierarchy",
"heading": "Taxonomy Hierarchy",
"structure": {{"Level 1": {{"Level 2": ["Level 3"]}}}}
}}
],
"structure_data": {{
"taxonomy_type": "[Taxonomy type]",
"item_count": [Integer],
"hierarchy_levels": [Integer]
}}
}}
CONTENT REQUIREMENTS:
- Clear taxonomy overview and purpose
- Organized category structure
- Tag organization and relationships
- Hierarchical structure visualization
- SEO-optimized headings
- Include navigation and organization benefits
- Use clear, descriptive language
""",
}
# Mapping from function names to prompt types
FUNCTION_TO_PROMPT_TYPE = {
'auto_cluster': 'clustering',
@@ -607,7 +35,114 @@ CONTENT REQUIREMENTS:
'generate_service_page': 'service_generation',
'generate_taxonomy': 'taxonomy_generation',
}
# Mapping of prompt types to their prefix numbers and display names
# Format: {prompt_type: (number, display_name)}
# GP = Global Prompt, CP = Custom Prompt
PROMPT_PREFIX_MAP = {
'clustering': ('01', 'Clustering'),
'ideas': ('02', 'Ideas'),
'content_generation': ('03', 'ContentGen'),
'image_prompt_extraction': ('04', 'ImagePrompts'),
'site_structure_generation': ('05', 'SiteStructure'),
'optimize_content': ('06', 'OptimizeContent'),
'product_generation': ('07', 'ProductGen'),
'service_generation': ('08', 'ServiceGen'),
'taxonomy_generation': ('09', 'TaxonomyGen'),
'image_prompt_template': ('10', 'ImageTemplate'),
'negative_prompt': ('11', 'NegativePrompt'),
}
@classmethod
def get_prompt_prefix(cls, prompt_type: str, is_custom: bool) -> str:
"""
Generate prompt prefix for tracking.
Args:
prompt_type: The prompt type (e.g., 'clustering', 'ideas')
is_custom: True if using custom/account-specific prompt, False if global
Returns:
Prefix string like "##GP01-Clustering" or "##CP01-Clustering"
"""
prefix_info = cls.PROMPT_PREFIX_MAP.get(prompt_type, ('00', prompt_type.title()))
number, display_name = prefix_info
prefix_type = 'CP' if is_custom else 'GP'
return f"##{prefix_type}{number}-{display_name}"
@classmethod
def get_prompt_with_metadata(
cls,
function_name: str,
account: Optional[Any] = None,
task: Optional[Any] = None,
context: Optional[Dict[str, Any]] = None
) -> Tuple[str, bool, str]:
"""
Get prompt for a function with metadata about source.
Priority:
1. task.prompt_override (if task provided and has override)
2. DB prompt for (account, function) - marked as custom if is_customized=True
3. GlobalAIPrompt (REQUIRED - no hardcoded fallbacks)
Args:
function_name: AI function name (e.g., 'auto_cluster', 'generate_ideas')
account: Account object (optional)
task: Task object with optional prompt_override (optional)
context: Additional context for prompt rendering (optional)
Returns:
Tuple of (prompt_string, is_custom, prompt_type)
- prompt_string: The rendered prompt
- is_custom: True if using custom/account prompt, False if global
- prompt_type: The prompt type identifier
"""
# Step 1: Get prompt type
prompt_type = cls.FUNCTION_TO_PROMPT_TYPE.get(function_name, function_name)
# Step 2: Check task-level override (always considered custom)
if task and hasattr(task, 'prompt_override') and task.prompt_override:
logger.info(f"Using task-level prompt override for {function_name}")
prompt = task.prompt_override
return cls._render_prompt(prompt, context or {}), True, prompt_type
# Step 3: Try DB prompt (account-specific)
if account:
try:
from igny8_core.modules.system.models import AIPrompt
db_prompt = AIPrompt.objects.get(
account=account,
prompt_type=prompt_type,
is_active=True
)
# Check if prompt is customized
is_custom = db_prompt.is_customized
logger.info(f"Using {'customized' if is_custom else 'default'} account prompt for {function_name} (account {account.id})")
prompt = db_prompt.prompt_value
return cls._render_prompt(prompt, context or {}), is_custom, prompt_type
except Exception as e:
logger.debug(f"No account-specific prompt found for {function_name}: {e}")
# Step 4: Try GlobalAIPrompt (platform-wide default) - REQUIRED
try:
from igny8_core.modules.system.global_settings_models import GlobalAIPrompt
global_prompt = GlobalAIPrompt.objects.get(
prompt_type=prompt_type,
is_active=True
)
logger.info(f"Using global default prompt for {function_name} from GlobalAIPrompt")
prompt = global_prompt.prompt_value
return cls._render_prompt(prompt, context or {}), False, prompt_type
except Exception as e:
error_msg = (
f"ERROR: Global prompt '{prompt_type}' not found for function '{function_name}'. "
f"Please configure it in Django admin at: /admin/system/globalaiprompt/. "
f"Error: {e}"
)
logger.error(error_msg)
raise ValueError(error_msg)
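The three-tier resolution above can be sketched without the ORM; plain dicts stand in for the `AIPrompt` and `GlobalAIPrompt` tables queried in the real method:

```python
# Sketch of the prompt-resolution hierarchy; dicts stand in for the
# AIPrompt / GlobalAIPrompt tables, and rendering is omitted.
ACCOUNT_PROMPTS = {('acct-1', 'clustering'): ('account prompt', True)}  # (value, is_customized)
GLOBAL_PROMPTS = {'clustering': 'global prompt'}

def get_prompt_with_metadata(prompt_type, account=None, task_override=None):
    if task_override:                                   # 1. task override: always custom
        return task_override, True
    hit = ACCOUNT_PROMPTS.get((account, prompt_type))   # 2. account-specific prompt
    if hit:
        return hit
    if prompt_type in GLOBAL_PROMPTS:                   # 3. global default (required)
        return GLOBAL_PROMPTS[prompt_type], False
    raise ValueError(f"Global prompt '{prompt_type}' not found")

print(get_prompt_with_metadata('clustering', account='acct-1'))  # ('account prompt', True)
print(get_prompt_with_metadata('clustering'))                    # ('global prompt', False)
```

The key behavior is the final tier: a missing global prompt raises instead of silently falling back to a hardcoded default.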
@classmethod
def get_prompt(
cls,
@@ -618,51 +153,23 @@ CONTENT REQUIREMENTS:
) -> str:
"""
Get prompt for a function with hierarchical resolution.
Priority:
1. task.prompt_override (if task provided and has override)
2. DB prompt for (account, function)
3. GlobalAIPrompt (REQUIRED - no hardcoded fallbacks)
Args:
function_name: AI function name (e.g., 'auto_cluster', 'generate_ideas')
account: Account object (optional)
task: Task object with optional prompt_override (optional)
context: Additional context for prompt rendering (optional)
Returns:
Prompt string ready for formatting
"""
prompt, _, _ = cls.get_prompt_with_metadata(function_name, account, task, context)
return prompt
@classmethod
def _render_prompt(cls, prompt_template: str, context: Dict[str, Any]) -> str:
@@ -728,8 +235,17 @@ CONTENT REQUIREMENTS:
except Exception:
pass
# Try GlobalAIPrompt
try:
from igny8_core.modules.system.global_settings_models import GlobalAIPrompt
global_prompt = GlobalAIPrompt.objects.get(
prompt_type=prompt_type,
is_active=True
)
return global_prompt.prompt_value
except Exception:
# Fallback for image_prompt_template
return '{image_type} image for blog post titled "{post_title}": {image_prompt}'
@classmethod
def get_negative_prompt(cls, account: Optional[Any] = None) -> str:
@@ -752,8 +268,17 @@ CONTENT REQUIREMENTS:
except Exception:
pass
# Try GlobalAIPrompt
try:
from igny8_core.modules.system.global_settings_models import GlobalAIPrompt
global_prompt = GlobalAIPrompt.objects.get(
prompt_type=prompt_type,
is_active=True
)
return global_prompt.prompt_value
except Exception:
# Fallback for negative_prompt
return 'text, watermark, logo, overlay, title, caption, writing on walls, writing on objects, UI, infographic elements, post title'
# Convenience function for backward compatibility
@@ -761,3 +286,61 @@ def get_prompt(function_name: str, account=None, task=None, context=None) -> str
"""Get prompt using registry"""
return PromptRegistry.get_prompt(function_name, account=account, task=task, context=context)
def get_prompt_with_prefix(function_name: str, account=None, task=None, context=None) -> Tuple[str, str]:
"""
Get prompt with its tracking prefix.
Args:
function_name: AI function name
account: Account object (optional)
task: Task object with optional prompt_override (optional)
context: Additional context for prompt rendering (optional)
Returns:
Tuple of (prompt_string, prefix_string)
- prompt_string: The rendered prompt
- prefix_string: The tracking prefix (e.g., '##GP01-Clustering' or '##CP01-Clustering')
"""
prompt, is_custom, prompt_type = PromptRegistry.get_prompt_with_metadata(
function_name, account=account, task=task, context=context
)
prefix = PromptRegistry.get_prompt_prefix(prompt_type, is_custom)
return prompt, prefix
def get_prompt_prefix_for_function(function_name: str, account=None, task=None) -> str:
"""
Get just the prefix for a function without fetching the full prompt.
Useful when the prompt was already fetched elsewhere.
Args:
function_name: AI function name
account: Account object (optional)
task: Task object with optional prompt_override (optional)
Returns:
The tracking prefix (e.g., '##GP01-Clustering' or '##CP01-Clustering')
"""
prompt_type = PromptRegistry.FUNCTION_TO_PROMPT_TYPE.get(function_name, function_name)
# Check for task-level override (always custom)
if task and hasattr(task, 'prompt_override') and task.prompt_override:
return PromptRegistry.get_prompt_prefix(prompt_type, is_custom=True)
# Check for account-specific prompt
if account:
try:
from igny8_core.modules.system.models import AIPrompt
db_prompt = AIPrompt.objects.get(
account=account,
prompt_type=prompt_type,
is_active=True
)
return PromptRegistry.get_prompt_prefix(prompt_type, is_custom=db_prompt.is_customized)
except Exception:
pass
# Fallback to global (not custom)
return PromptRegistry.get_prompt_prefix(prompt_type, is_custom=False)


@@ -1,6 +1,7 @@
"""
AI Settings - Centralized model configurations and limits
Uses AISettings (system defaults) with optional per-account overrides via AccountSettings.
API keys are stored in IntegrationProvider.
"""
from typing import Dict, Any
import logging
@@ -19,18 +20,22 @@ FUNCTION_ALIASES = {
def get_model_config(function_name: str, account) -> Dict[str, Any]:
"""
Get model configuration for AI function.
Architecture:
- API keys: From IntegrationProvider (centralized)
- Model: From AIModelConfig (is_default=True)
- Params: From AISettings with AccountSettings overrides
Args:
function_name: Name of the AI function
account: Account instance (required)
Returns:
dict: Model configuration with 'model', 'max_tokens', 'temperature', 'api_key'
Raises:
ValueError: If account not provided or settings not configured
"""
if not account:
raise ValueError("Account is required for model configuration")
@@ -38,71 +43,60 @@ def get_model_config(function_name: str, account) -> Dict[str, Any]:
# Resolve function alias
actual_name = FUNCTION_ALIASES.get(function_name, function_name)
from igny8_core.modules.system.ai_settings import AISettings
from igny8_core.ai.model_registry import ModelRegistry
# Get API key from IntegrationProvider
api_key = ModelRegistry.get_api_key('openai')
if not api_key:
raise ValueError(
"Platform OpenAI API key not configured. "
"Please configure IntegrationProvider in Django admin."
)
# Get default text model from AIModelConfig
default_model = ModelRegistry.get_default_model('text')
if not default_model:
default_model = 'gpt-4o-mini' # Ultimate fallback
model = default_model
# Get settings with account overrides
temperature = AISettings.get_effective_temperature(account)
max_tokens = AISettings.get_effective_max_tokens(account)
# Get max_tokens from AIModelConfig if available
try:
from igny8_core.business.billing.models import AIModelConfig
model_config = AIModelConfig.objects.filter(
model_name=model,
is_active=True
).first()
if model_config and model_config.max_output_tokens:
max_tokens = model_config.max_output_tokens
except Exception as e:
logger.warning(f"Could not load max_tokens from AIModelConfig for {model}: {e}")
except Exception as e:
logger.error(f"Could not load OpenAI settings for account {account.id}: {e}")
raise ValueError(
f"Could not load OpenAI configuration for account {account.id}. "
f"Please configure IntegrationProvider and AISettings."
)
# Validate model is in our supported list using ModelRegistry (database-driven)
try:
if not ModelRegistry.validate_model(model):
supported_models = [m.model_name for m in ModelRegistry.list_models(model_type='text')]
logger.warning(
f"Model '{model}' for account {account.id} is not in supported list. "
f"Supported models: {supported_models}"
)
except Exception:
pass
# Build response format based on model (JSON mode for supported models)
response_format = None
try:
@@ -110,7 +104,6 @@ def get_model_config(function_name: str, account) -> Dict[str, Any]:
if model in JSON_MODE_MODELS:
response_format = {"type": "json_object"}
except ImportError:
pass
return {
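The "system default with optional per-account override" resolution that `AISettings.get_effective_temperature` / `get_effective_max_tokens` perform can be sketched as follows (the dict-based storage is an assumption for illustration only; the real classes query Django models):

```python
# Sketch of the settings-resolution pattern: system-wide defaults
# (AISettings) with optional per-account overrides (AccountSettings).
SYSTEM_DEFAULTS = {'temperature': 0.7, 'max_tokens': 4000}
ACCOUNT_OVERRIDES = {'acct-1': {'temperature': 0.2}}

def get_effective(setting: str, account: str):
    # An account override wins; otherwise fall back to the system default.
    return ACCOUNT_OVERRIDES.get(account, {}).get(setting, SYSTEM_DEFAULTS[setting])

print(get_effective('temperature', 'acct-1'))  # 0.2  (account override)
print(get_effective('max_tokens', 'acct-1'))   # 4000 (system default)
```

Accounts without any override row simply resolve every setting to the system default.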


@@ -157,6 +157,7 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
from igny8_core.modules.system.models import IntegrationSettings
from igny8_core.ai.ai_core import AICore
from igny8_core.ai.prompts import PromptRegistry
from igny8_core.business.billing.services.credit_service import CreditService
logger.info("=" * 80)
logger.info(f"process_image_generation_queue STARTED")
@@ -181,99 +182,85 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
failed = 0
results = []
# Get image generation settings from AISettings (with account overrides)
logger.info("[process_image_generation_queue] Step 1: Loading image generation settings")
from igny8_core.modules.system.ai_settings import AISettings
from igny8_core.ai.model_registry import ModelRegistry
# Get effective settings
image_type = AISettings.get_effective_image_style(account)
image_format = 'webp' # Default format
# Get default image model from database
default_model = ModelRegistry.get_default_model('image')
if default_model:
model_config = ModelRegistry.get_model(default_model)
provider = model_config.provider if model_config else 'openai'
model = default_model
else:
provider = 'openai'
model = 'dall-e-3'
logger.info(f"[process_image_generation_queue] Using PROVIDER: {provider}, MODEL: {model} from settings")
desktop_enabled = config.get('desktop_enabled', True)
mobile_enabled = config.get('mobile_enabled', True)
# Style to prompt enhancement mapping
# These style descriptors are added to the image prompt for better results
STYLE_PROMPT_MAP = {
# Runware styles
'photorealistic': 'ultra realistic photography, natural lighting, real world look, photorealistic',
'illustration': 'digital illustration, clean lines, artistic style, modern illustration',
'3d_render': 'computer generated 3D render, modern polished 3D style, depth and dramatic lighting',
'minimal_flat': 'minimal flat design, simple shapes, flat colors, modern graphic design aesthetic',
'artistic': 'artistic painterly style, expressive brushstrokes, hand painted aesthetic',
'cartoon': 'cartoon stylized illustration, playful exaggerated forms, animated character style',
# DALL-E styles (mapped from OpenAI API style parameter)
'natural': 'natural realistic style',
'vivid': 'vivid dramatic hyper-realistic style',
# Legacy fallbacks
'realistic': 'ultra realistic photography, natural lighting, photorealistic',
}
# Get the style description for prompt enhancement
style_description = STYLE_PROMPT_MAP.get(image_type, STYLE_PROMPT_MAP.get('photorealistic'))
logger.info(f"[process_image_generation_queue] Style: {image_type} -> prompt enhancement: {style_description[:50]}...")
# Model-specific landscape sizes (square is always 1024x1024)
# For Runware models - based on Runware documentation for optimal results per model
# For OpenAI DALL-E 3 - uses 1792x1024 for landscape
MODEL_LANDSCAPE_SIZES = {
'runware:97@1': '1280x768', # Hi Dream Full landscape
'bria:10@1': '1344x768', # Bria 3.2 landscape (16:9)
'google:4@2': '1376x768', # Nano Banana landscape (16:9)
'dall-e-3': '1792x1024', # DALL-E 3 landscape
'dall-e-2': '1024x1024', # DALL-E 2 only supports square
}
DEFAULT_SQUARE_SIZE = '1024x1024'
# Get model-specific landscape size for featured images
model_landscape_size = MODEL_LANDSCAPE_SIZES.get(model, '1792x1024' if provider == 'openai' else '1280x768')
# Featured image always uses model-specific landscape size
featured_image_size = model_landscape_size
# In-article images: alternating square/landscape based on position (handled in image loop)
in_article_square_size = DEFAULT_SQUARE_SIZE
in_article_landscape_size = model_landscape_size
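The model-specific landscape lookup above reduces to a dict `get` with a provider-dependent fallback; this standalone sketch mirrors the table entries from the task:

```python
# Sketch of the landscape-size selection above: known models get their
# documented landscape resolution, unknown models fall back per provider.
MODEL_LANDSCAPE_SIZES = {
    'runware:97@1': '1280x768',   # Hi Dream Full
    'bria:10@1': '1344x768',      # Bria 3.2 (16:9)
    'google:4@2': '1376x768',     # Nano Banana (16:9)
    'dall-e-3': '1792x1024',
    'dall-e-2': '1024x1024',      # DALL-E 2 only supports square
}

def landscape_size(model: str, provider: str) -> str:
    return MODEL_LANDSCAPE_SIZES.get(model, '1792x1024' if provider == 'openai' else '1280x768')

print(landscape_size('dall-e-3', 'openai'))   # 1792x1024
print(landscape_size('unknown', 'runware'))   # 1280x768 (provider fallback)
```

Square sizes never vary: every model uses `1024x1024` for square renders, so only the landscape edge needs a lookup.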
logger.info(f"[process_image_generation_queue] Settings loaded:")
logger.info(f" - Provider: {provider}")
logger.info(f" - Model: {model}")
logger.info(f" - Image type: {image_type}")
logger.info(f" - Image format: {image_format}")
logger.info(f" - Desktop enabled: {desktop_enabled}")
logger.info(f" - Mobile enabled: {mobile_enabled}")
logger.info(f" - Featured image size: {featured_image_size}")
logger.info(f" - In-article square: {in_article_square_size}, landscape: {in_article_landscape_size}")
# Get provider API key from IntegrationProvider (centralized)
logger.info(f"[process_image_generation_queue] Step 2: Loading {provider.upper()} API key from IntegrationProvider")
api_key = ModelRegistry.get_api_key(provider)
if not api_key:
logger.error(f"[process_image_generation_queue] {provider.upper()} API key not configured in IntegrationProvider")
return {'success': False, 'error': f'{provider.upper()} API key not configured'}
# Log API key presence (but not the actual key for security)
@@ -285,7 +272,7 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
image_prompt_template = PromptRegistry.get_image_prompt_template(account)
except Exception as e:
logger.warning(f"Failed to get image prompt template: {e}, using fallback")
image_prompt_template = '{image_type} image for blog post titled "{post_title}": {image_prompt}'
# Get negative prompt for Runware (only needed for Runware provider)
negative_prompt = None
@@ -413,7 +400,7 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
# Calculate actual template length with placeholders filled
# Format template with dummy values to measure actual length
template_with_dummies = image_prompt_template.format(
image_type=style_description, # Use actual style description length
post_title='X' * len(post_title), # Use same length as actual post_title
image_prompt='' # Empty to measure template overhead
)
@@ -440,7 +427,7 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
image_prompt = image_prompt[:max_image_prompt_length - 3] + "..."
formatted_prompt = image_prompt_template.format(
image_type=style_description, # Use full style description instead of raw value
post_title=post_title,
image_prompt=image_prompt
)
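The length-budget trick above (format the template with same-length dummies to measure overhead, then truncate only the image prompt) can be shown end to end; `MAX_LEN` and the sample strings are assumed values for illustration:

```python
# Sketch of the prompt-budget measurement: fill the template with dummy
# values of the real lengths to find the template overhead, then truncate
# the image prompt to fit. MAX_LEN is an assumed provider limit.
MAX_LEN = 120
template = '{image_type} image for blog post titled "{post_title}": {image_prompt}'
style = 'ultra realistic photography'
title = 'Ten Ways to Brew Better Coffee'

overhead = len(template.format(image_type=style,
                               post_title='X' * len(title),  # same length as real title
                               image_prompt=''))             # empty: measure overhead only
budget = MAX_LEN - overhead

prompt = 'a pour-over kettle on a wooden counter, morning light, steam rising'
if len(prompt) > budget:
    prompt = prompt[:budget - 3] + '...'

final = template.format(image_type=style, post_title=title, image_prompt=prompt)
print(len(final) <= MAX_LEN)  # True
```

Measuring with dummies of the real lengths matters because the placeholders themselves disappear when formatted, so `len(template)` alone would misstate the overhead.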
@@ -505,15 +492,40 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
}
)
# Use appropriate size based on image type and position
# Featured: Always landscape (model-specific)
# In-article: Alternating square/landscape based on position
# Position 0: Square (1024x1024)
# Position 1: Landscape (model-specific)
# Position 2: Square (1024x1024)
# Position 3: Landscape (model-specific)
if image.image_type == 'featured':
image_size = featured_image_size # Model-specific landscape
elif image.image_type == 'in_article':
# Alternate based on position: even=square, odd=landscape
position = image.position or 0
if position % 2 == 0: # Position 0, 2: Square
image_size = in_article_square_size
else: # Position 1, 3: Landscape
image_size = in_article_landscape_size
logger.info(f"[process_image_generation_queue] In-article image position {position}: using {'square' if position % 2 == 0 else 'landscape'} size {image_size}")
else: # desktop or other (legacy)
image_size = in_article_square_size # Default to square
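The alternating in-article rule reduces to a parity check on the image position; the default sizes below are the ones used in this task:

```python
# Sketch of the alternating in-article sizing: even positions are square,
# odd positions use the model-specific landscape size.
def in_article_size(position, square='1024x1024', landscape='1792x1024'):
    return square if position % 2 == 0 else landscape

print([in_article_size(p) for p in range(4)])
# ['1024x1024', '1792x1024', '1024x1024', '1792x1024']
```

A missing position falls back to 0 in the task (`image.position or 0`), so unpositioned images get the square size.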
# For DALL-E, convert image_type to style parameter
# image_type is from user settings (e.g., 'vivid', 'natural', 'realistic')
# DALL-E accepts 'vivid' or 'natural' - map accordingly
dalle_style = None
if provider == 'openai':
# Map image_type to DALL-E style
# 'natural' = more realistic photos (default)
# 'vivid' = hyper-real, dramatic images
if image_type in ['vivid']:
dalle_style = 'vivid'
else:
# Default to 'natural' for realistic photos
dalle_style = 'natural'
logger.info(f"[process_image_generation_queue] DALL-E style: {dalle_style} (from image_type: {image_type})")
result = ai_core.generate_image(
prompt=formatted_prompt,
@@ -522,7 +534,8 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
size=image_size,
api_key=api_key,
negative_prompt=negative_prompt,
function_name='generate_images_from_prompts',
style=dalle_style
)
# Update progress: Image generation complete (50%)
@@ -697,6 +710,33 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
})
failed += 1
else:
# Deduct credits for successful image generation
credits_deducted = 0
cost_usd = result.get('cost_usd', 0)
if account:
try:
credits_deducted = CreditService.deduct_credits_for_image(
account=account,
model_name=model,
num_images=1,
description=f"Image generation: {content.title[:50] if content else 'Image'}" if content else f"Image {image_id}",
metadata={
'image_id': image_id,
'content_id': content_id,
'provider': provider,
'model': model,
'image_type': image.image_type if image else 'unknown',
'size': image_size,
},
cost_usd=cost_usd,
related_object_type='image',
related_object_id=image_id
)
logger.info(f"[process_image_generation_queue] Credits deducted for image {image_id}: {credits_deducted}")
except Exception as credit_error:
logger.error(f"[process_image_generation_queue] Failed to deduct credits for image {image_id}: {credit_error}")
# Don't fail the image generation if credit deduction fails
# Update progress: Complete (100%)
self.update_state(
state='PROGRESS',


@@ -5,6 +5,7 @@ import time
import logging
from typing import List, Dict, Any, Optional, Callable
from datetime import datetime
from decimal import Decimal
from igny8_core.ai.constants import DEBUG_MODE
logger = logging.getLogger(__name__)
@@ -195,24 +196,35 @@ class CostTracker:
"""Tracks API costs and token usage"""
def __init__(self):
self.total_cost = Decimal('0.0')
self.total_tokens = 0
self.operations = []
def record(self, function_name: str, cost, tokens: int, model: str = None):
"""Record an API call cost
Args:
function_name: Name of the AI function
cost: Cost value (can be float or Decimal)
tokens: Number of tokens used
model: Model name
"""
# Convert cost to Decimal if it's a float to avoid type mixing
if not isinstance(cost, Decimal):
cost = Decimal(str(cost))
self.total_cost += cost
self.total_tokens += tokens
self.operations.append({
'function': function_name,
'cost': float(cost), # Store as float for JSON serialization
'tokens': tokens,
'model': model
})
def get_total(self):
"""Get total cost (returns float for JSON serialization)"""
return float(self.total_cost)
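The reason for the `Decimal(str(cost))` conversion in `record` is twofold: mixing `float` and `Decimal` in arithmetic raises `TypeError`, and plain float accumulation leaks binary rounding into money totals. A minimal demonstration:

```python
from decimal import Decimal

# Accumulate mixed float/Decimal costs the way CostTracker does:
# convert through str() so the Decimal matches the printed value exactly.
total = Decimal('0.0')
for cost in (0.1, 0.2, Decimal('0.05')):
    if not isinstance(cost, Decimal):
        cost = Decimal(str(cost))
    total += cost

print(total)        # 0.35
print(0.1 + 0.2)    # 0.30000000000000004 (plain float accumulation drifts)
print(float(total)) # 0.35 — safe to hand to JSON serialization
```

`Decimal(str(0.1))` gives exactly `Decimal('0.1')`, whereas `Decimal(0.1)` would capture the full binary expansion of the float.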
def get_total_tokens(self) -> int:
"""Get total tokens"""


@@ -135,7 +135,7 @@ def validate_api_key(api_key: Optional[str], integration_type: str = 'openai') -
def validate_model(model: str, model_type: str = 'text') -> Dict[str, Any]:
"""
Validate that model is in supported list using database.
Args:
model: Model name to validate
@@ -144,27 +144,50 @@ def validate_model(model: str, model_type: str = 'text') -> Dict[str, Any]:
Returns:
Dict with 'valid' (bool) and optional 'error' (str)
"""
try:
# Use database-driven validation via AIModelConfig
from igny8_core.business.billing.models import AIModelConfig
exists = AIModelConfig.objects.filter(
model_name=model,
model_type=model_type,
is_active=True
).exists()
if not exists:
# Get available models for better error message
available = list(AIModelConfig.objects.filter(
model_type=model_type,
is_active=True
).values_list('model_name', flat=True))
if available:
return {
'valid': False,
'error': f'Model "{model}" is not active or not found. Available {model_type} models: {", ".join(available)}'
}
else:
return {
'valid': False,
'error': f'No {model_type} models configured in database'
}
return {'valid': True}
except Exception as e:
# Log error but don't fallback to constants - DB is authoritative
import logging
logger = logging.getLogger(__name__)
logger.error(f"Error validating model {model}: {e}")
return {
'valid': False,
'error': f'Error validating model: {e}'
}
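The database-driven validation above can be sketched with a set standing in for the `AIModelConfig` table; the model names here are illustrative assumptions:

```python
# Standalone sketch of the DB-driven model validation; a set of
# (model_name, model_type) pairs stands in for the AIModelConfig table.
ACTIVE_MODELS = {('gpt-4o-mini', 'text'), ('dall-e-3', 'image')}

def validate_model(model: str, model_type: str = 'text') -> dict:
    if (model, model_type) in ACTIVE_MODELS:
        return {'valid': True}
    # Mirror the task's error shape: list what IS available for this type.
    available = sorted(m for m, t in ACTIVE_MODELS if t == model_type)
    if available:
        return {'valid': False,
                'error': f'Model "{model}" is not active or not found. '
                         f'Available {model_type} models: {", ".join(available)}'}
    return {'valid': False, 'error': f'No {model_type} models configured in database'}

print(validate_model('gpt-4o-mini'))            # {'valid': True}
print(validate_model('gpt-3.5')['valid'])       # False
```

Returning the available models in the error message is the main UX gain over the old constants-based check, which could only say "not in supported list".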
def validate_image_size(size: str, model: str) -> Dict[str, Any]:
"""
Validate that image size is valid for the selected model using database.
Args:
size: Image size (e.g., '1024x1024')
@@ -173,14 +196,40 @@ def validate_image_size(size: str, model: str) -> Dict[str, Any]:
Returns:
Dict with 'valid' (bool) and optional 'error' (str)
"""
try:
# Try database first
from igny8_core.business.billing.models import AIModelConfig
model_config = AIModelConfig.objects.filter(
model_name=model,
model_type='image',
is_active=True
).first()
if model_config:
if not model_config.validate_size(size):
valid_sizes = model_config.valid_sizes or []
return {
'valid': False,
'error': f'Image size "{size}" is not valid for model "{model}". Valid sizes are: {", ".join(valid_sizes)}'
}
return {'valid': True}
else:
return {
'valid': False,
'error': f'Image model "{model}" not found in database'
}
except Exception:
# Fallback to constants if database fails
from .constants import VALID_SIZES_BY_MODEL
valid_sizes = VALID_SIZES_BY_MODEL.get(model, [])
if size not in valid_sizes:
return {
'valid': False,
'error': f'Image size "{size}" is not valid for model "{model}". Valid sizes are: {", ".join(valid_sizes)}'
}
return {'valid': True}


@@ -5,7 +5,8 @@ from django.urls import path
from igny8_core.api.account_views import (
AccountSettingsViewSet,
TeamManagementViewSet,
UsageAnalyticsViewSet,
DashboardStatsViewSet
)
urlpatterns = [
@@ -28,4 +29,9 @@ urlpatterns = [
path('usage/analytics/', UsageAnalyticsViewSet.as_view({
'get': 'overview'
}), name='usage-analytics'),
# Dashboard Stats (real data for home page)
path('dashboard/stats/', DashboardStatsViewSet.as_view({
'get': 'stats'
}), name='dashboard-stats'),
]


@@ -10,6 +10,7 @@ from django.contrib.auth import get_user_model
from django.db.models import Q, Count, Sum
from django.utils import timezone
from datetime import timedelta
from decimal import Decimal
from drf_spectacular.utils import extend_schema, extend_schema_view
from igny8_core.auth.models import Account
@@ -131,6 +132,16 @@ class TeamManagementViewSet(viewsets.ViewSet):
status=status.HTTP_400_BAD_REQUEST
)
# Check hard limit for users BEFORE creating
from igny8_core.business.billing.services.limit_service import LimitService, HardLimitExceededError
try:
LimitService.check_hard_limit(account, 'users', additional_count=1)
except HardLimitExceededError as e:
return Response(
{'error': str(e)},
status=status.HTTP_400_BAD_REQUEST
)
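The pre-create hard-limit check follows a simple pattern: count what the account already has, add what the request would create, and raise before any row is written. A standalone sketch (the dict-backed counts are assumptions; `LimitService` normally consults the account's plan):

```python
# Sketch of the hard-limit check performed before user creation.
class HardLimitExceededError(Exception):
    pass

PLAN_LIMITS = {'users': 5}      # illustrative plan limit
CURRENT_COUNTS = {'users': 5}   # account already at the limit

def check_hard_limit(resource: str, additional_count: int = 1):
    if CURRENT_COUNTS[resource] + additional_count > PLAN_LIMITS[resource]:
        raise HardLimitExceededError(
            f"Hard limit reached for {resource} ({PLAN_LIMITS[resource]})"
        )

try:
    check_hard_limit('users')
except HardLimitExceededError as e:
    print(e)  # Hard limit reached for users (5)
```

Checking before the `create_user` call (rather than after) means the 400 response never leaves a half-created user behind.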
# Create user (simplified - in production, send invitation email)
user = User.objects.create_user(
email=email,
@@ -242,3 +253,216 @@ class UsageAnalyticsViewSet(viewsets.ViewSet):
'total_usage': abs(transactions.filter(amount__lt=0).aggregate(Sum('amount'))['amount__sum'] or 0),
'total_purchases': transactions.filter(amount__gt=0).aggregate(Sum('amount'))['amount__sum'] or 0,
})
@extend_schema_view(
stats=extend_schema(tags=['Account']),
)
class DashboardStatsViewSet(viewsets.ViewSet):
"""Dashboard statistics - real data for home page widgets"""
permission_classes = [IsAuthenticated]
@action(detail=False, methods=['get'])
def stats(self, request):
"""
Get dashboard statistics for the home page.
Query params:
- site_id: Filter by site (optional, defaults to all sites)
- days: Number of days for AI operations (default: 7)
Returns:
- ai_operations: Real credit usage by operation type
- recent_activity: Recent notifications
- content_velocity: Content created this week/month
- images_count: Actual total images count
- published_count: Actual published content count
"""
account = request.user.account
site_id = request.query_params.get('site_id')
days = int(request.query_params.get('days', 7))
# Import models here to avoid circular imports
from igny8_core.modules.writer.models import Images, Content
from igny8_core.modules.planner.models import Keywords, Clusters, ContentIdeas
from igny8_core.business.notifications.models import Notification
from igny8_core.business.billing.models import CreditUsageLog
from igny8_core.auth.models import Site
# Build base filter for site
site_filter = {}
if site_id:
try:
site_filter['site_id'] = int(site_id)
except (ValueError, TypeError):
pass
# ========== AI OPERATIONS (from CreditUsageLog) ==========
start_date = timezone.now() - timedelta(days=days)
usage_query = CreditUsageLog.objects.filter(
account=account,
created_at__gte=start_date
)
# Get operations grouped by type
operations_data = usage_query.values('operation_type').annotate(
count=Count('id'),
credits=Sum('credits_used')
).order_by('-credits')
# Calculate totals
total_ops = usage_query.count()
total_credits = usage_query.aggregate(total=Sum('credits_used'))['total'] or 0
# Format operations for frontend
operations = []
for op in operations_data:
op_type = op['operation_type'] or 'other'
operations.append({
'type': op_type,
'count': op['count'] or 0,
'credits': op['credits'] or 0,
})
ai_operations = {
'period': f'{days}d',
'operations': operations,
'totals': {
'count': total_ops,
'credits': total_credits,
'successRate': 98.5, # TODO: calculate from actual success/failure
'avgCreditsPerOp': round(total_credits / total_ops, 1) if total_ops > 0 else 0,
}
}
# ========== RECENT ACTIVITY (from Notifications) ==========
recent_notifications = Notification.objects.filter(
account=account
).order_by('-created_at')[:10]
recent_activity = []
for notif in recent_notifications:
# Map notification type to activity type
activity_type_map = {
'ai_clustering_complete': 'clustering',
'ai_ideas_complete': 'ideas',
'ai_content_complete': 'content',
'ai_images_complete': 'images',
'ai_prompts_complete': 'images',
'content_published': 'published',
'wp_sync_success': 'published',
}
activity_type = activity_type_map.get(notif.notification_type, 'system')
# Map notification type to href
href_map = {
'clustering': '/planner/clusters',
'ideas': '/planner/ideas',
'content': '/writer/content',
'images': '/writer/images',
'published': '/writer/published',
}
recent_activity.append({
'id': str(notif.id),
'type': activity_type,
'title': notif.title,
'description': notif.message[:100] if notif.message else '',
'timestamp': notif.created_at.isoformat(),
'href': href_map.get(activity_type, '/dashboard'),
})
# ========== CONTENT COUNTS ==========
content_base = Content.objects.filter(account=account)
if site_filter:
content_base = content_base.filter(**site_filter)
total_content = content_base.count()
draft_content = content_base.filter(status='draft').count()
review_content = content_base.filter(status='review').count()
published_content = content_base.filter(status='published').count()
# ========== IMAGES COUNT (actual images, not content with images) ==========
images_base = Images.objects.filter(account=account)
if site_filter:
images_base = images_base.filter(**site_filter)
total_images = images_base.count()
generated_images = images_base.filter(status='generated').count()
pending_images = images_base.filter(status='pending').count()
# ========== CONTENT VELOCITY ==========
now = timezone.now()
week_ago = now - timedelta(days=7)
month_ago = now - timedelta(days=30)
# This week's content
week_content = content_base.filter(created_at__gte=week_ago).count()
week_images = images_base.filter(created_at__gte=week_ago).count()
# This month's content
month_content = content_base.filter(created_at__gte=month_ago).count()
month_images = images_base.filter(created_at__gte=month_ago).count()
# Estimate words (avg 1500 per article)
content_velocity = {
'thisWeek': {
'articles': week_content,
'words': week_content * 1500,
'images': week_images,
},
'thisMonth': {
'articles': month_content,
'words': month_content * 1500,
'images': month_images,
},
'total': {
'articles': total_content,
'words': total_content * 1500,
'images': total_images,
},
'trend': 0, # TODO: calculate actual trend
}
# ========== PIPELINE COUNTS ==========
keywords_base = Keywords.objects.filter(account=account)
clusters_base = Clusters.objects.filter(account=account)
ideas_base = ContentIdeas.objects.filter(account=account)
if site_filter:
keywords_base = keywords_base.filter(**site_filter)
clusters_base = clusters_base.filter(**site_filter)
ideas_base = ideas_base.filter(**site_filter)
# Get site count
sites_count = Site.objects.filter(account=account, is_active=True).count()
pipeline = {
'sites': sites_count,
'keywords': keywords_base.count(),
'clusters': clusters_base.count(),
'ideas': ideas_base.count(),
'tasks': ideas_base.filter(status='queued').count() + ideas_base.filter(status='completed').count(),
'drafts': draft_content + review_content,
'published': published_content,
}
return Response({
'ai_operations': ai_operations,
'recent_activity': recent_activity,
'content_velocity': content_velocity,
'pipeline': pipeline,
'counts': {
'content': {
'total': total_content,
'draft': draft_content,
'review': review_content,
'published': published_content,
},
'images': {
'total': total_images,
'generated': generated_images,
'pending': pending_images,
},
}
})
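The totals arithmetic in the `ai_operations` payload — zeroed aggregates and the division guard behind `avgCreditsPerOp` — can be isolated into a small helper. The field names mirror the response above, but this standalone function is a sketch, not the view itself:

```python
def build_ai_operation_totals(operations: list, days: int = 7) -> dict:
    """Aggregate per-operation rows into the ai_operations payload shape."""
    total_ops = sum(op.get("count", 0) for op in operations)
    total_credits = sum(op.get("credits", 0) for op in operations)
    return {
        "period": f"{days}d",
        "operations": operations,
        "totals": {
            "count": total_ops,
            "credits": total_credits,
            # Guard against division by zero when no operations were logged
            "avgCreditsPerOp": round(total_credits / total_ops, 1) if total_ops else 0,
        },
    }
```

The ternary guard matters: a new account with zero logged operations would otherwise raise `ZeroDivisionError` on the dashboard's first load.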


@@ -21,21 +21,6 @@ class AccountModelViewSet(viewsets.ModelViewSet):
user = getattr(self.request, 'user', None)
if user and hasattr(user, 'is_authenticated') and user.is_authenticated:
# Bypass filtering for superusers - they can see everything
if getattr(user, 'is_superuser', False):
return queryset
# Bypass filtering for developers
if hasattr(user, 'role') and user.role == 'developer':
return queryset
# Bypass filtering for system account users
try:
if hasattr(user, 'is_system_account_user') and user.is_system_account_user():
return queryset
except Exception:
pass
try:
account = getattr(self.request, 'account', None)
if not account and hasattr(self.request, 'user') and self.request.user and hasattr(self.request.user, 'is_authenticated') and self.request.user.is_authenticated:
@@ -254,29 +239,6 @@ class SiteSectorModelViewSet(AccountModelViewSet):
# Check if user is authenticated and is a proper User instance (not AnonymousUser)
if user and hasattr(user, 'is_authenticated') and user.is_authenticated and hasattr(user, 'get_accessible_sites'):
# Bypass site filtering for superusers and developers
# They already got unfiltered queryset from parent AccountModelViewSet
if getattr(user, 'is_superuser', False) or (hasattr(user, 'role') and user.role == 'developer'):
# No site filtering for superuser/developer
# But still apply query param filters if provided
try:
query_params = getattr(self.request, 'query_params', None)
if query_params is None:
query_params = getattr(self.request, 'GET', {})
site_id = query_params.get('site_id') or query_params.get('site')
except AttributeError:
site_id = None
if site_id:
try:
site_id_int = int(site_id) if site_id else None
if site_id_int:
queryset = queryset.filter(site_id=site_id_int)
except (ValueError, TypeError):
pass
return queryset
try:
# Get user's accessible sites
accessible_sites = user.get_accessible_sites()


@@ -50,24 +50,6 @@ class HasTenantAccess(permissions.BasePermission):
logger.warning(f"[HasTenantAccess] DENIED: User not authenticated")
return False
# Bypass for superusers
if getattr(request.user, 'is_superuser', False):
logger.info(f"[HasTenantAccess] ALLOWED: User {request.user.email} is superuser")
return True
# Bypass for developers
if hasattr(request.user, 'role') and request.user.role == 'developer':
logger.info(f"[HasTenantAccess] ALLOWED: User {request.user.email} is developer")
return True
# Bypass for system account users
try:
if hasattr(request.user, 'is_system_account_user') and request.user.is_system_account_user():
logger.info(f"[HasTenantAccess] ALLOWED: User {request.user.email} is system account user")
return True
except Exception:
pass
# SIMPLIFIED LOGIC: Every authenticated user MUST have an account
# Middleware already set request.account from request.user.account
# Just verify it exists
@@ -95,7 +77,6 @@ class IsViewerOrAbove(permissions.BasePermission):
"""
Permission class that requires viewer, editor, admin, or owner role
For read-only operations
Superusers and developers bypass this check.
"""
def has_permission(self, request, view):
import logging
@@ -105,16 +86,6 @@ class IsViewerOrAbove(permissions.BasePermission):
logger.warning(f"[IsViewerOrAbove] DENIED: User not authenticated")
return False
# Bypass for superusers
if getattr(request.user, 'is_superuser', False):
logger.info(f"[IsViewerOrAbove] ALLOWED: User {request.user.email} is superuser")
return True
# Bypass for developers
if hasattr(request.user, 'role') and request.user.role == 'developer':
logger.info(f"[IsViewerOrAbove] ALLOWED: User {request.user.email} is developer")
return True
# Check user role
if hasattr(request.user, 'role'):
role = request.user.role
@@ -135,20 +106,11 @@ class IsEditorOrAbove(permissions.BasePermission):
"""
Permission class that requires editor, admin, or owner role
For content operations
Superusers and developers bypass this check.
"""
def has_permission(self, request, view):
if not request.user or not request.user.is_authenticated:
return False
# Bypass for superusers
if getattr(request.user, 'is_superuser', False):
return True
# Bypass for developers
if hasattr(request.user, 'role') and request.user.role == 'developer':
return True
# Check user role
if hasattr(request.user, 'role'):
role = request.user.role
@@ -162,20 +124,21 @@ class IsEditorOrAbove(permissions.BasePermission):
class IsAdminOrOwner(permissions.BasePermission):
"""
Permission class that requires admin or owner role only
OR user belongs to aws-admin account
For settings, keys, billing operations
Superusers and developers bypass this check.
"""
def has_permission(self, request, view):
if not request.user or not request.user.is_authenticated:
return False
# Bypass for superusers
if getattr(request.user, 'is_superuser', False):
return True
# Bypass for developers
if hasattr(request.user, 'role') and request.user.role == 'developer':
return True
# Check if user belongs to aws-admin account (case-insensitive)
if hasattr(request.user, 'account') and request.user.account:
account_name = getattr(request.user.account, 'name', None)
account_slug = getattr(request.user.account, 'slug', None)
if account_name and account_name.lower() == 'aws admin':
return True
if account_slug == 'aws-admin':
return True
# Check user role
if hasattr(request.user, 'role'):
@@ -185,23 +148,3 @@ class IsAdminOrOwner(permissions.BasePermission):
# If no role system, deny by default for security
return False
class IsSystemAccountOrDeveloper(permissions.BasePermission):
"""
Allow only system accounts (aws-admin/default-account/default) or developer role.
Use for sensitive, globally-scoped settings like integration API keys.
"""
def has_permission(self, request, view):
user = getattr(request, "user", None)
if not user or not user.is_authenticated:
return False
account_slug = getattr(getattr(user, "account", None), "slug", None)
if user.role == "developer":
return True
if account_slug in ["aws-admin", "default-account", "default"]:
return True
return False
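The decision logic of `IsSystemAccountOrDeveloper` reduces to two checks after authentication. A framework-free sketch, with plain objects standing in for the DRF request/user (the attribute names follow the snippet; everything else is assumed):

```python
SYSTEM_ACCOUNT_SLUGS = ("aws-admin", "default-account", "default")

def has_system_or_developer_access(user) -> bool:
    """Mirror of IsSystemAccountOrDeveloper.has_permission, without DRF."""
    if user is None or not getattr(user, "is_authenticated", False):
        return False
    # Developers are allowed regardless of their account
    if getattr(user, "role", None) == "developer":
        return True
    # Otherwise the user's account slug must be one of the system accounts
    account_slug = getattr(getattr(user, "account", None), "slug", None)
    return account_slug in SYSTEM_ACCOUNT_SLUGS
```

Note the sketch, like the original, uses `getattr` chains so users without an `account` relation fall through to a deny rather than raising `AttributeError`.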


@@ -140,7 +140,7 @@ class GetModelConfigTestCase(TestCase):
def test_get_model_config_json_mode_models(self):
"""Test get_model_config() sets response_format for JSON mode models"""
json_models = ['gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo-preview', 'gpt-5.1', 'gpt-5.2']
for model in json_models:
IntegrationSettings.objects.filter(account=self.account).delete()


@@ -79,7 +79,7 @@ class IntegrationTestBase(TestCase):
sector=self.industry_sector,
volume=1000,
difficulty=50,
country="US"
)
# Authenticate client


@@ -27,19 +27,6 @@ class DebugScopedRateThrottle(ScopedRateThrottle):
return True
# OLD CODE BELOW (DISABLED)
# Bypass for superusers and developers
if request.user and hasattr(request.user, 'is_authenticated') and request.user.is_authenticated:
if getattr(request.user, 'is_superuser', False):
return True
if hasattr(request.user, 'role') and request.user.role == 'developer':
return True
# Bypass for system account users
try:
if hasattr(request.user, 'is_system_account_user') and request.user.is_system_account_user():
return True
except Exception:
pass
# Check if throttling should be bypassed
debug_bypass = getattr(settings, 'DEBUG', False)
env_bypass = getattr(settings, 'IGNY8_DEBUG_THROTTLE', False)


@@ -6,8 +6,10 @@ from rest_framework.routers import DefaultRouter
from .account_views import (
AccountSettingsViewSet,
TeamManagementViewSet,
UsageAnalyticsViewSet,
DashboardStatsViewSet
)
from igny8_core.modules.system.settings_views import ContentGenerationSettingsViewSet
router = DefaultRouter()
@@ -15,6 +17,10 @@ urlpatterns = [
# Account settings (non-router endpoints for simplified access)
path('settings/', AccountSettingsViewSet.as_view({'get': 'retrieve', 'patch': 'partial_update'}), name='account-settings'),
# AI Settings - Content Generation Settings per the plan
# GET/POST /api/v1/account/settings/ai/
path('settings/ai/', ContentGenerationSettingsViewSet.as_view({'get': 'list', 'post': 'create', 'put': 'create'}), name='ai-settings'),
# Team management
path('team/', TeamManagementViewSet.as_view({'get': 'list', 'post': 'create'}), name='team-list'),
path('team/<int:pk>/', TeamManagementViewSet.as_view({'delete': 'destroy'}), name='team-detail'),
@@ -22,5 +28,8 @@ urlpatterns = [
# Usage analytics
path('usage/analytics/', UsageAnalyticsViewSet.as_view({'get': 'overview'}), name='usage-analytics'),
# Dashboard stats (real data for home page)
path('dashboard/stats/', DashboardStatsViewSet.as_view({'get': 'stats'}), name='dashboard-stats'),
path('', include(router.urls)),
]


@@ -8,7 +8,7 @@ from unfold.admin import ModelAdmin, TabularInline
from simple_history.admin import SimpleHistoryAdmin
from igny8_core.admin.base import AccountAdminMixin, Igny8ModelAdmin
from .models import User, Account, Plan, Subscription, Site, Sector, SiteUserAccess, Industry, IndustrySector, SeedKeyword, PasswordResetToken
from import_export.admin import ExportMixin, ImportExportMixin
from import_export import resources
@@ -112,13 +112,30 @@ class AccountAdminForm(forms.ModelForm):
return instance
class PlanResource(resources.ModelResource):
"""Resource class for importing/exporting Plans"""
class Meta:
model = Plan
fields = ('id', 'name', 'slug', 'price', 'billing_cycle', 'max_sites', 'max_users',
'max_keywords', 'max_ahrefs_queries', 'included_credits', 'is_active', 'is_featured')
export_order = fields
import_id_fields = ('id',)
skip_unchanged = True
@admin.register(Plan)
class PlanAdmin(ImportExportMixin, Igny8ModelAdmin):
"""Plan admin - Global, no account filtering needed"""
resource_class = PlanResource
list_display = ['name', 'slug', 'price', 'billing_cycle', 'max_sites', 'max_users', 'max_keywords', 'max_ahrefs_queries', 'included_credits', 'is_active', 'is_featured']
list_filter = ['is_active', 'billing_cycle', 'is_internal', 'is_featured']
search_fields = ['name', 'slug']
readonly_fields = ['created_at']
actions = [
'bulk_set_active',
'bulk_set_inactive',
'bulk_clone_plans',
]
fieldsets = (
('Plan Info', {
@@ -130,12 +147,12 @@ class PlanAdmin(Igny8ModelAdmin):
'description': 'Persistent limits for account-level resources'
}),
('Hard Limits (Persistent)', {
'fields': ('max_keywords',),
'description': 'Total allowed - never reset'
}),
('Monthly Limits (Reset on Billing Cycle)', {
'fields': ('max_ahrefs_queries',),
'description': 'Monthly Ahrefs keyword research queries (0 = disabled)'
}),
('Billing & Credits', {
'fields': ('included_credits', 'extra_credit_price', 'allow_credit_topup', 'auto_credit_topup_threshold', 'auto_credit_topup_amount', 'credits_per_month')
@@ -144,6 +161,32 @@ class PlanAdmin(Igny8ModelAdmin):
'fields': ('stripe_product_id', 'stripe_price_id')
}),
)
def bulk_set_active(self, request, queryset):
"""Set selected plans to active"""
updated = queryset.update(is_active=True)
self.message_user(request, f'{updated} plan(s) set to active.', messages.SUCCESS)
bulk_set_active.short_description = 'Set plans to Active'
def bulk_set_inactive(self, request, queryset):
"""Set selected plans to inactive"""
updated = queryset.update(is_active=False)
self.message_user(request, f'{updated} plan(s) set to inactive.', messages.SUCCESS)
bulk_set_inactive.short_description = 'Set plans to Inactive'
def bulk_clone_plans(self, request, queryset):
"""Clone selected plans"""
count = 0
for plan in queryset:
plan_copy = Plan.objects.get(pk=plan.pk)
plan_copy.pk = None
plan_copy.name = f"{plan.name} (Copy)"
plan_copy.slug = f"{plan.slug}-copy"
plan_copy.is_active = False
plan_copy.save()
count += 1
self.message_user(request, f'{count} plan(s) cloned.', messages.SUCCESS)
bulk_clone_plans.short_description = 'Clone selected plans'
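The clone action relies on the standard Django trick of refetching a row and clearing `pk`, so the next `save()` performs an INSERT rather than an UPDATE. The renaming rule can be sketched on its own; here a plain dict stands in for the model instance:

```python
def clone_plan_fields(plan: dict) -> dict:
    """Derive field values for a cloned plan: new name/slug, inactive by default."""
    copy = dict(plan)
    copy["pk"] = None                        # a None pk makes Django INSERT on save()
    copy["name"] = f"{plan['name']} (Copy)"  # human-visible marker
    copy["slug"] = f"{plan['slug']}-copy"    # keep slugs unique-ish
    copy["is_active"] = False                # clones start disabled for review
    return copy
```

One design caveat worth noting: the fixed `-copy` suffix collides if the same plan is cloned twice while `slug` is unique; appending a counter or timestamp would avoid that.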
class AccountResource(resources.ModelResource):
@@ -163,6 +206,16 @@ class AccountAdmin(ExportMixin, AccountAdminMixin, SimpleHistoryAdmin, Igny8Mode
list_filter = ['status', 'plan']
search_fields = ['name', 'slug']
readonly_fields = ['created_at', 'updated_at', 'health_indicator', 'health_details']
actions = [
'bulk_set_status_active',
'bulk_set_status_suspended',
'bulk_set_status_trial',
'bulk_set_status_cancelled',
'bulk_add_credits',
'bulk_subtract_credits',
'bulk_soft_delete',
'bulk_hard_delete',
]
def get_queryset(self, request):
"""Override to filter by account for non-superusers"""
@@ -317,14 +370,196 @@ class AccountAdmin(ExportMixin, AccountAdminMixin, SimpleHistoryAdmin, Igny8Mode
if obj and getattr(obj, 'slug', '') == 'aws-admin':
return False
return super().has_delete_permission(request, obj)
# Bulk Actions
def bulk_set_status_active(self, request, queryset):
"""Set selected accounts to active status"""
updated = queryset.update(status='active')
self.message_user(request, f'{updated} account(s) set to active.', messages.SUCCESS)
bulk_set_status_active.short_description = 'Set status to Active'
def bulk_set_status_suspended(self, request, queryset):
"""Set selected accounts to suspended status"""
updated = queryset.update(status='suspended')
self.message_user(request, f'{updated} account(s) set to suspended.', messages.SUCCESS)
bulk_set_status_suspended.short_description = 'Set status to Suspended'
def bulk_set_status_trial(self, request, queryset):
"""Set selected accounts to trial status"""
updated = queryset.update(status='trial')
self.message_user(request, f'{updated} account(s) set to trial.', messages.SUCCESS)
bulk_set_status_trial.short_description = 'Set status to Trial'
def bulk_set_status_cancelled(self, request, queryset):
"""Set selected accounts to cancelled status"""
updated = queryset.update(status='cancelled')
self.message_user(request, f'{updated} account(s) set to cancelled.', messages.SUCCESS)
bulk_set_status_cancelled.short_description = 'Set status to Cancelled'
def bulk_add_credits(self, request, queryset):
"""Add credits to selected accounts"""
from django import forms
if 'apply' in request.POST:
amount = int(request.POST.get('credits', 0))
if amount > 0:
for account in queryset:
account.credits += amount
account.save()
self.message_user(request, f'Added {amount} credits to {queryset.count()} account(s).', messages.SUCCESS)
return
class CreditForm(forms.Form):
credits = forms.IntegerField(
min_value=1,
label="Credits to Add",
help_text=f"Add credits to {queryset.count()} selected account(s)"
)
from django.shortcuts import render
return render(request, 'admin/bulk_action_form.html', {
'title': 'Add Credits to Accounts',
'queryset': queryset,
'form': CreditForm(),
'action': 'bulk_add_credits',
})
bulk_add_credits.short_description = 'Add credits to accounts'
def bulk_subtract_credits(self, request, queryset):
"""Subtract credits from selected accounts"""
from django import forms
if 'apply' in request.POST:
amount = int(request.POST.get('credits', 0))
if amount > 0:
for account in queryset:
account.credits = max(0, account.credits - amount)
account.save()
self.message_user(request, f'Subtracted {amount} credits from {queryset.count()} account(s).', messages.SUCCESS)
return
class CreditForm(forms.Form):
credits = forms.IntegerField(
min_value=1,
label="Credits to Subtract",
help_text=f"Subtract credits from {queryset.count()} selected account(s)"
)
from django.shortcuts import render
return render(request, 'admin/bulk_action_form.html', {
'title': 'Subtract Credits from Accounts',
'queryset': queryset,
'form': CreditForm(),
'action': 'bulk_subtract_credits',
})
bulk_subtract_credits.short_description = 'Subtract credits from accounts'
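Both credit actions share the same arithmetic: additions are unbounded, subtractions floor at zero via `max(0, …)`. A minimal helper capturing just that rule (the form-handling and rendering around it are unchanged):

```python
def adjust_credits(balance: int, amount: int, *, subtract: bool = False) -> int:
    """Apply one bulk credit adjustment; balances never go negative."""
    if amount <= 0:
        return balance  # the admin form enforces min_value=1, so this is a no-op guard
    return max(0, balance - amount) if subtract else balance + amount
```

Keeping the floor in one place avoids the subtle bug of a negative `credits` balance leaking into billing checks elsewhere.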
def bulk_soft_delete(self, request, queryset):
"""Soft delete selected accounts and all related data"""
count = 0
for account in queryset:
if account.slug != 'aws-admin': # Protect admin account
account.delete() # Soft delete via SoftDeletableModel (now cascades)
count += 1
self.message_user(request, f'{count} account(s) and all related data soft deleted.', messages.SUCCESS)
bulk_soft_delete.short_description = 'Soft delete accounts (with cascade)'
def bulk_hard_delete(self, request, queryset):
"""PERMANENTLY delete selected accounts and ALL related data - cannot be undone!"""
import traceback
count = 0
errors = []
for account in queryset:
if account.slug == 'aws-admin': # Protect admin account
errors.append(f'{account.name}: Protected system account')
continue
try:
account.hard_delete_with_cascade() # Permanently delete everything
count += 1
except Exception as e:
# Log full traceback for debugging
import logging
logger = logging.getLogger(__name__)
logger.error(f'Hard delete failed for account {account.pk} ({account.name}): {traceback.format_exc()}')
errors.append(f'{account.name}: {str(e)}')
if count > 0:
self.message_user(request, f'{count} account(s) and ALL related data permanently deleted.', messages.SUCCESS)
if errors:
self.message_user(request, f'Errors: {"; ".join(errors)}', messages.ERROR)
bulk_hard_delete.short_description = '⚠️ PERMANENTLY delete accounts (irreversible!)'
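The hard-delete loop above generalizes to a small batch pattern: skip protected slugs, attempt each deletion independently, and collect per-item errors instead of aborting the whole batch on the first failure. A framework-free sketch, where the `delete` callable stands in for `hard_delete_with_cascade`:

```python
PROTECTED_SLUGS = {"aws-admin"}

def bulk_delete(accounts, delete):
    """Delete each account via `delete`; skip protected ones, never abort the batch."""
    count, errors = 0, []
    for account in accounts:
        if account["slug"] in PROTECTED_SLUGS:
            errors.append(f"{account['name']}: Protected system account")
            continue
        try:
            delete(account)
            count += 1
        except Exception as e:
            # Record the failure and keep going with the rest of the batch
            errors.append(f"{account['name']}: {e}")
    return count, errors
```

Returning `(count, errors)` mirrors the admin action's two `message_user` calls: one success summary, one combined error message.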
class SubscriptionResource(resources.ModelResource):
"""Resource class for exporting Subscriptions"""
class Meta:
model = Subscription
fields = ('id', 'account__name', 'status', 'current_period_start', 'current_period_end',
'stripe_subscription_id', 'created_at')
export_order = fields
@admin.register(Subscription)
class SubscriptionAdmin(ExportMixin, AccountAdminMixin, Igny8ModelAdmin):
resource_class = SubscriptionResource
list_display = ['account', 'status', 'current_period_start', 'current_period_end']
list_filter = ['status']
search_fields = ['account__name', 'stripe_subscription_id']
readonly_fields = ['created_at', 'updated_at']
actions = [
'bulk_set_status_active',
'bulk_set_status_cancelled',
'bulk_set_status_suspended',
'bulk_set_status_trialing',
'bulk_renew',
]
def bulk_set_status_active(self, request, queryset):
"""Set subscriptions to active"""
updated = queryset.update(status='active')
self.message_user(request, f'{updated} subscription(s) set to active.', messages.SUCCESS)
bulk_set_status_active.short_description = 'Set status to Active'
def bulk_set_status_cancelled(self, request, queryset):
"""Set subscriptions to cancelled"""
updated = queryset.update(status='cancelled')
self.message_user(request, f'{updated} subscription(s) set to cancelled.', messages.SUCCESS)
bulk_set_status_cancelled.short_description = 'Set status to Cancelled'
def bulk_set_status_suspended(self, request, queryset):
"""Set subscriptions to suspended"""
updated = queryset.update(status='suspended')
self.message_user(request, f'{updated} subscription(s) set to suspended.', messages.SUCCESS)
bulk_set_status_suspended.short_description = 'Set status to Suspended'
def bulk_set_status_trialing(self, request, queryset):
"""Set subscriptions to trialing"""
updated = queryset.update(status='trialing')
self.message_user(request, f'{updated} subscription(s) set to trialing.', messages.SUCCESS)
bulk_set_status_trialing.short_description = 'Set status to Trialing'
def bulk_renew(self, request, queryset):
"""Renew selected subscriptions"""
from django.utils import timezone
from datetime import timedelta
count = 0
for subscription in queryset:
# Extend subscription by one billing period
if subscription.current_period_end:
subscription.current_period_end = subscription.current_period_end + timedelta(days=30)
subscription.status = 'active'
subscription.save()
count += 1
self.message_user(request, f'{count} subscription(s) renewed for 30 days.', messages.SUCCESS)
bulk_renew.short_description = 'Renew subscriptions'
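The renew action extends `current_period_end` by a fixed 30 days; sketched with plain datetimes:

```python
from datetime import datetime, timedelta

def renew_period_end(current_period_end, days: int = 30):
    """Extend a subscription period end by `days`; None ends are left untouched."""
    if current_period_end is None:
        return None
    return current_period_end + timedelta(days=days)
```

Worth flagging as a design choice: a flat 30 days only approximates "one billing period" — calendar months run 28 to 31 days, so repeated renewals will drift from the original billing anchor date.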
@admin.register(PasswordResetToken)
@@ -354,35 +589,49 @@ class SectorInline(TabularInline):
def get_keywords_count(self, obj):
if obj.pk:
try:
return obj.keywords_set.count()
except Exception:
return 0
return 0
get_keywords_count.short_description = 'Keywords'
def get_clusters_count(self, obj):
if obj.pk:
try:
return obj.clusters_set.count()
except Exception:
return 0
return 0
get_clusters_count.short_description = 'Clusters'
class SiteResource(resources.ModelResource):
"""Resource class for importing/exporting Sites"""
class Meta:
model = Site
fields = ('id', 'name', 'slug', 'account__name', 'industry__name', 'domain',
'status', 'is_active', 'site_type', 'hosting_type', 'description', 'created_at')
export_order = fields
import_id_fields = ('id',)
skip_unchanged = True
@admin.register(Site)
class SiteAdmin(ImportExportMixin, AccountAdminMixin, Igny8ModelAdmin):
resource_class = SiteResource
list_display = ['name', 'slug', 'account', 'industry', 'domain', 'status', 'is_active', 'get_api_key_status', 'get_sectors_count']
list_filter = ['status', 'is_active', 'account', 'industry', 'hosting_type']
search_fields = ['name', 'slug', 'domain', 'industry__name']
readonly_fields = ['created_at', 'updated_at', 'get_api_key_display']
inlines = [SectorInline]
actions = [
'generate_api_keys',
'bulk_set_status_active',
'bulk_set_status_inactive',
'bulk_set_status_maintenance',
'bulk_soft_delete',
]
fieldsets = (
('Site Info', {
@@ -404,8 +653,8 @@ class SiteAdmin(ExportMixin, AccountAdminMixin, Igny8ModelAdmin):
def get_api_key_display(self, obj):
"""Display API key with copy button"""
from django.utils.html import format_html
if obj.wp_api_key:
return format_html(
'<div style="display:flex; align-items:center; gap:10px;">'
'<code style="background:#f0f0f0; padding:5px 10px; border-radius:3px;">{}</code>'
@@ -438,6 +687,33 @@ class SiteAdmin(ExportMixin, AccountAdminMixin, Igny8ModelAdmin):
self.message_user(request, f'Generated API keys for {updated_count} site(s). Sites with existing keys were skipped.')
generate_api_keys.short_description = 'Generate WordPress API Keys'
def bulk_set_status_active(self, request, queryset):
"""Set selected sites to active status"""
updated = queryset.update(status='active', is_active=True)
self.message_user(request, f'{updated} site(s) set to active.', messages.SUCCESS)
bulk_set_status_active.short_description = 'Set status to Active'
def bulk_set_status_inactive(self, request, queryset):
"""Set selected sites to inactive status"""
updated = queryset.update(status='inactive', is_active=False)
self.message_user(request, f'{updated} site(s) set to inactive.', messages.SUCCESS)
bulk_set_status_inactive.short_description = 'Set status to Inactive'
def bulk_set_status_maintenance(self, request, queryset):
"""Set selected sites to maintenance status"""
updated = queryset.update(status='maintenance')
self.message_user(request, f'{updated} site(s) set to maintenance mode.', messages.SUCCESS)
bulk_set_status_maintenance.short_description = 'Set status to Maintenance'
def bulk_soft_delete(self, request, queryset):
"""Soft delete selected sites"""
count = 0
for site in queryset:
site.delete() # Soft delete via SoftDeletableModel
count += 1
self.message_user(request, f'{count} site(s) soft deleted.', messages.SUCCESS)
bulk_soft_delete.short_description = 'Soft delete selected sites'
def get_sectors_count(self, obj):
try:
return obj.get_active_sectors_count()
@@ -454,12 +730,27 @@ class SiteAdmin(ExportMixin, AccountAdminMixin, Igny8ModelAdmin):
get_industry_display.short_description = 'Industry'
class SectorResource(resources.ModelResource):
"""Resource class for exporting Sectors"""
class Meta:
model = Sector
fields = ('id', 'name', 'slug', 'site__name', 'industry_sector__name', 'status',
'is_active', 'created_at')
export_order = fields
@admin.register(Sector)
class SectorAdmin(ExportMixin, AccountAdminMixin, Igny8ModelAdmin):
resource_class = SectorResource
list_display = ['name', 'slug', 'site', 'industry_sector', 'get_industry', 'status', 'is_active', 'get_keywords_count', 'get_clusters_count']
list_filter = ['status', 'is_active', 'site', 'industry_sector__industry']
search_fields = ['name', 'slug', 'site__name', 'industry_sector__name']
readonly_fields = ['created_at', 'updated_at']
actions = [
'bulk_set_status_active',
'bulk_set_status_inactive',
'bulk_soft_delete',
]
def get_industry(self, obj):
"""Safely get industry name"""
@@ -490,6 +781,27 @@ class SectorAdmin(AccountAdminMixin, Igny8ModelAdmin):
pass
return 0
get_clusters_count.short_description = 'Clusters'
def bulk_set_status_active(self, request, queryset):
"""Set selected sectors to active status"""
updated = queryset.update(status='active', is_active=True)
self.message_user(request, f'{updated} sector(s) set to active.', messages.SUCCESS)
bulk_set_status_active.short_description = 'Set status to Active'
def bulk_set_status_inactive(self, request, queryset):
"""Set selected sectors to inactive status"""
updated = queryset.update(status='inactive', is_active=False)
self.message_user(request, f'{updated} sector(s) set to inactive.', messages.SUCCESS)
bulk_set_status_inactive.short_description = 'Set status to Inactive'
def bulk_soft_delete(self, request, queryset):
"""Soft delete selected sectors"""
count = 0
for sector in queryset:
sector.delete() # Soft delete via SoftDeletableModel
count += 1
self.message_user(request, f'{count} sector(s) soft deleted.', messages.SUCCESS)
bulk_soft_delete.short_description = 'Soft delete selected sectors'
@admin.register(SiteUserAccess)
@@ -508,14 +820,29 @@ class IndustrySectorInline(TabularInline):
readonly_fields = []
class IndustryResource(resources.ModelResource):
"""Resource class for importing/exporting Industries"""
class Meta:
model = Industry
fields = ('id', 'name', 'slug', 'description', 'is_active', 'created_at')
export_order = fields
import_id_fields = ('id',)
skip_unchanged = True
@admin.register(Industry)
class IndustryAdmin(Igny8ModelAdmin):
class IndustryAdmin(ImportExportMixin, Igny8ModelAdmin):
resource_class = IndustryResource
list_display = ['name', 'slug', 'is_active', 'get_sectors_count', 'created_at']
list_filter = ['is_active']
search_fields = ['name', 'slug', 'description']
readonly_fields = ['created_at', 'updated_at']
inlines = [IndustrySectorInline]
actions = ['delete_selected'] # Enable bulk delete
actions = [
'delete_selected',
'bulk_activate',
'bulk_deactivate',
] # Enable bulk delete
def get_sectors_count(self, obj):
return obj.sectors.filter(is_active=True).count()
@@ -524,36 +851,88 @@ class IndustryAdmin(Igny8ModelAdmin):
def has_delete_permission(self, request, obj=None):
"""Allow deletion for superusers and developers"""
return request.user.is_superuser or (hasattr(request.user, 'is_developer') and request.user.is_developer())
def bulk_activate(self, request, queryset):
updated = queryset.update(is_active=True)
self.message_user(request, f'{updated} industry/industries activated.', messages.SUCCESS)
bulk_activate.short_description = 'Activate selected industries'
def bulk_deactivate(self, request, queryset):
updated = queryset.update(is_active=False)
self.message_user(request, f'{updated} industry/industries deactivated.', messages.SUCCESS)
bulk_deactivate.short_description = 'Deactivate selected industries'
class IndustrySectorResource(resources.ModelResource):
"""Resource class for importing/exporting Industry Sectors"""
class Meta:
model = IndustrySector
fields = ('id', 'name', 'slug', 'industry__name', 'description', 'is_active', 'created_at')
export_order = fields
import_id_fields = ('id',)
skip_unchanged = True
@admin.register(IndustrySector)
class IndustrySectorAdmin(Igny8ModelAdmin):
class IndustrySectorAdmin(ImportExportMixin, Igny8ModelAdmin):
resource_class = IndustrySectorResource
list_display = ['name', 'slug', 'industry', 'is_active']
list_filter = ['is_active', 'industry']
search_fields = ['name', 'slug', 'description']
readonly_fields = ['created_at', 'updated_at']
actions = ['delete_selected'] # Enable bulk delete
actions = [
'delete_selected',
'bulk_activate',
'bulk_deactivate',
] # Enable bulk delete
def has_delete_permission(self, request, obj=None):
"""Allow deletion for superusers and developers"""
return request.user.is_superuser or (hasattr(request.user, 'is_developer') and request.user.is_developer())
def bulk_activate(self, request, queryset):
updated = queryset.update(is_active=True)
self.message_user(request, f'{updated} sector(s) activated.', messages.SUCCESS)
bulk_activate.short_description = 'Activate selected sectors'
def bulk_deactivate(self, request, queryset):
updated = queryset.update(is_active=False)
self.message_user(request, f'{updated} sector(s) deactivated.', messages.SUCCESS)
bulk_deactivate.short_description = 'Deactivate selected sectors'
class SeedKeywordResource(resources.ModelResource):
"""Resource class for importing/exporting Seed Keywords"""
class Meta:
model = SeedKeyword
fields = ('id', 'keyword', 'industry__name', 'sector__name', 'volume',
'difficulty', 'country', 'is_active', 'created_at')
export_order = fields
import_id_fields = ('id',)
skip_unchanged = True
@admin.register(SeedKeyword)
class SeedKeywordAdmin(Igny8ModelAdmin):
class SeedKeywordAdmin(ImportExportMixin, Igny8ModelAdmin):
resource_class = SeedKeywordResource
"""SeedKeyword admin - Global reference data, no account filtering"""
list_display = ['keyword', 'industry', 'sector', 'volume', 'difficulty', 'intent', 'is_active', 'created_at']
list_filter = ['is_active', 'industry', 'sector', 'intent']
list_display = ['keyword', 'industry', 'sector', 'volume', 'difficulty', 'country', 'is_active', 'created_at']
list_filter = ['is_active', 'industry', 'sector', 'country']
search_fields = ['keyword']
readonly_fields = ['created_at', 'updated_at']
actions = ['delete_selected'] # Enable bulk delete
actions = [
'delete_selected',
'bulk_activate',
'bulk_deactivate',
'bulk_update_country',
] # Enable bulk delete
fieldsets = (
('Keyword Info', {
'fields': ('keyword', 'industry', 'sector', 'is_active')
'fields': ('keyword', 'industry', 'sector', 'country', 'is_active')
}),
('SEO Metrics', {
'fields': ('volume', 'difficulty', 'intent')
'fields': ('volume', 'difficulty')
}),
('Timestamps', {
'fields': ('created_at', 'updated_at')
@@ -563,6 +942,50 @@ class SeedKeywordAdmin(Igny8ModelAdmin):
def has_delete_permission(self, request, obj=None):
"""Allow deletion for superusers and developers"""
return request.user.is_superuser or (hasattr(request.user, 'is_developer') and request.user.is_developer())
def bulk_activate(self, request, queryset):
updated = queryset.update(is_active=True)
self.message_user(request, f'{updated} seed keyword(s) activated.', messages.SUCCESS)
bulk_activate.short_description = 'Activate selected keywords'
def bulk_deactivate(self, request, queryset):
updated = queryset.update(is_active=False)
self.message_user(request, f'{updated} seed keyword(s) deactivated.', messages.SUCCESS)
bulk_deactivate.short_description = 'Deactivate selected keywords'
def bulk_update_country(self, request, queryset):
from django import forms
if 'apply' in request.POST:
country = request.POST.get('country')
if country:
updated = queryset.update(country=country)
self.message_user(request, f'{updated} seed keyword(s) country updated to: {country}', messages.SUCCESS)
return
COUNTRY_CHOICES = [
('US', 'United States'),
('GB', 'United Kingdom'),
('CA', 'Canada'),
('AU', 'Australia'),
('IN', 'India'),
]
class CountryForm(forms.Form):
country = forms.ChoiceField(
choices=COUNTRY_CHOICES,
label="Select Country",
help_text=f"Update country for {queryset.count()} seed keyword(s)"
)
from django.shortcuts import render
return render(request, 'admin/bulk_action_form.html', {
'title': 'Update Country',
'queryset': queryset,
'form': CountryForm(),
'action': 'bulk_update_country',
})
bulk_update_country.short_description = 'Update country'
class UserResource(resources.ModelResource):
@@ -584,7 +1007,7 @@ class UserAdmin(ExportMixin, BaseUserAdmin, Igny8ModelAdmin):
list_display = ['email', 'username', 'account', 'role', 'is_active', 'is_staff', 'created_at']
list_filter = ['role', 'account', 'is_active', 'is_staff']
search_fields = ['email', 'username']
readonly_fields = ['created_at', 'updated_at']
readonly_fields = ['created_at', 'updated_at', 'password_display']
fieldsets = BaseUserAdmin.fieldsets + (
('IGNY8 Info', {'fields': ('account', 'role')}),
@@ -594,6 +1017,52 @@ class UserAdmin(ExportMixin, BaseUserAdmin, Igny8ModelAdmin):
add_fieldsets = BaseUserAdmin.add_fieldsets + (
('IGNY8 Info', {'fields': ('account', 'role')}),
)
actions = [
'bulk_set_role_owner',
'bulk_set_role_admin',
'bulk_set_role_editor',
'bulk_set_role_viewer',
'bulk_activate',
'bulk_deactivate',
'bulk_send_password_reset',
'bulk_set_temporary_password',
]
def password_display(self, obj):
"""Show a truncated password hash (read-only debugging aid)"""
if obj.password:
return f'Hash: {obj.password[:50]}...'
return 'No password set'
password_display.short_description = 'Password Hash'
def bulk_set_temporary_password(self, request, queryset):
"""Set a temporary password for selected users and display it"""
import secrets
import string
# Generate a secure random password
alphabet = string.ascii_letters + string.digits
temp_password = ''.join(secrets.choice(alphabet) for _ in range(12))
users_updated = []
for user in queryset:
user.set_password(temp_password)
user.save(update_fields=['password'])
users_updated.append(user.email)
if users_updated:
# Display the password in the message (only visible to admin)
self.message_user(
request,
f'Temporary password set for {len(users_updated)} user(s): "{temp_password}" (same password for all selected users)',
messages.SUCCESS
)
self.message_user(
request,
f'Users updated: {", ".join(users_updated)}',
messages.INFO
)
bulk_set_temporary_password.short_description = '🔑 Set temporary password (will display)'
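The temporary-password generation above can be sketched as a standalone helper. This is a minimal sketch mirroring the `secrets`-based approach in `bulk_set_temporary_password`; the function name is illustrative, not part of the codebase:

```python
import secrets
import string

def generate_temp_password(length: int = 12) -> str:
    """Generate a cryptographically random alphanumeric password,
    as done in the bulk admin action above."""
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))
```

Note the design trade-off in the action itself: a single password is generated once and applied to every selected user, which keeps the admin message short but means all selected users temporarily share credentials.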
def get_queryset(self, request):
"""Filter users by account for non-superusers"""
@@ -613,4 +1082,44 @@ class UserAdmin(ExportMixin, BaseUserAdmin, Igny8ModelAdmin):
except:
return '-'
get_account_display.short_description = 'Account'
def bulk_set_role_owner(self, request, queryset):
updated = queryset.update(role='owner')
self.message_user(request, f'{updated} user(s) role set to Owner.', messages.SUCCESS)
bulk_set_role_owner.short_description = 'Set role to Owner'
def bulk_set_role_admin(self, request, queryset):
updated = queryset.update(role='admin')
self.message_user(request, f'{updated} user(s) role set to Admin.', messages.SUCCESS)
bulk_set_role_admin.short_description = 'Set role to Admin'
def bulk_set_role_editor(self, request, queryset):
updated = queryset.update(role='editor')
self.message_user(request, f'{updated} user(s) role set to Editor.', messages.SUCCESS)
bulk_set_role_editor.short_description = 'Set role to Editor'
def bulk_set_role_viewer(self, request, queryset):
updated = queryset.update(role='viewer')
self.message_user(request, f'{updated} user(s) role set to Viewer.', messages.SUCCESS)
bulk_set_role_viewer.short_description = 'Set role to Viewer'
def bulk_activate(self, request, queryset):
updated = queryset.update(is_active=True)
self.message_user(request, f'{updated} user(s) activated.', messages.SUCCESS)
bulk_activate.short_description = 'Activate users'
def bulk_deactivate(self, request, queryset):
updated = queryset.update(is_active=False)
self.message_user(request, f'{updated} user(s) deactivated.', messages.SUCCESS)
bulk_deactivate.short_description = 'Deactivate users'
def bulk_send_password_reset(self, request, queryset):
# TODO: Implement password reset email sending
count = queryset.count()
self.message_user(
request,
f'{count} password reset email(s) queued for sending. (Email integration required)',
messages.INFO
)
bulk_send_password_reset.short_description = 'Send password reset email'


@@ -25,18 +25,7 @@ class Command(BaseCommand):
'max_users': 999999,
'max_sites': 999999,
'max_keywords': 999999,
'max_clusters': 999999,
'max_content_ideas': 999999,
'monthly_word_count_limit': 999999999,
'daily_content_tasks': 999999,
'daily_ai_requests': 999999,
'daily_ai_request_limit': 999999,
'monthly_ai_credit_limit': 999999,
'monthly_image_count': 999999,
'daily_image_generation_limit': 999999,
'monthly_cluster_ai_credits': 999999,
'monthly_content_ai_credits': 999999,
'monthly_image_ai_credits': 999999,
'max_ahrefs_queries': 999999,
'included_credits': 999999,
'is_active': True,
'features': ['ai_writer', 'image_gen', 'auto_publish', 'custom_prompts', 'unlimited'],


@@ -7,9 +7,23 @@ from django.utils.deprecation import MiddlewareMixin
from django.http import JsonResponse
from django.contrib.auth import logout
from rest_framework import status
import json
from datetime import datetime
logger = logging.getLogger('auth.middleware')
# Logout reason codes for precise tracking
LOGOUT_REASONS = {
'SESSION_ACCOUNT_MISMATCH': 'Session contamination: account ID mismatch',
'SESSION_USER_MISMATCH': 'Session contamination: user ID mismatch',
'ACCOUNT_MISSING': 'Account not configured for this user',
'ACCOUNT_SUSPENDED': 'Account is suspended',
'ACCOUNT_CANCELLED': 'Account is cancelled',
'PLAN_MISSING': 'No subscription plan assigned',
'PLAN_INACTIVE': 'Subscription plan is inactive',
'USER_INACTIVE': 'User account is inactive',
}
try:
import jwt
JWT_AVAILABLE = True
@@ -47,39 +61,8 @@ class AccountContextMiddleware(MiddlewareMixin):
# This is already loaded, no need to query DB again
request.account = getattr(request.user, 'account', None)
# CRITICAL: Add account ID to session to prevent cross-contamination
# This ensures each session is tied to a specific account
if request.account:
request.session['_account_id'] = request.account.id
request.session['_user_id'] = request.user.id
# Verify session integrity - if stored IDs don't match, logout
stored_account_id = request.session.get('_account_id')
stored_user_id = request.session.get('_user_id')
if stored_account_id and stored_account_id != request.account.id:
# Session contamination detected - force logout
logger.warning(
f"[AUTO-LOGOUT] Session contamination: account_id mismatch. "
f"Session={stored_account_id}, Current={request.account.id}, "
f"User={request.user.id}, Path={request.path}, IP={request.META.get('REMOTE_ADDR')}"
)
logout(request)
return JsonResponse(
{'success': False, 'error': 'Session integrity violation detected. Please login again.'},
status=status.HTTP_401_UNAUTHORIZED
)
if stored_user_id and stored_user_id != request.user.id:
# Session contamination detected - force logout
logger.warning(
f"[AUTO-LOGOUT] Session contamination: user_id mismatch. "
f"Session={stored_user_id}, Current={request.user.id}, "
f"Account={request.account.id if request.account else None}, "
f"Path={request.path}, IP={request.META.get('REMOTE_ADDR')}"
)
logout(request)
return JsonResponse(
{'success': False, 'error': 'Session integrity violation detected. Please login again.'},
status=status.HTTP_401_UNAUTHORIZED
)
# REMOVED: Session contamination checks on every request
# These were causing random logouts - session integrity handled by Django
return None
except (AttributeError, Exception):
@@ -157,23 +140,7 @@ class AccountContextMiddleware(MiddlewareMixin):
"""
Ensure the authenticated user has an account and an active plan.
Uses shared validation helper for consistency.
Bypasses validation for superusers, developers, and system accounts.
"""
# Bypass validation for superusers
if getattr(user, 'is_superuser', False):
return None
# Bypass validation for developers
if hasattr(user, 'role') and user.role == 'developer':
return None
# Bypass validation for system account users
try:
if hasattr(user, 'is_system_account_user') and user.is_system_account_user():
return None
except Exception:
pass
from .utils import validate_account_and_plan
is_valid, error_message, http_status = validate_account_and_plan(user)
@@ -184,13 +151,29 @@ class AccountContextMiddleware(MiddlewareMixin):
return None
def _deny_request(self, request, error, status_code):
"""Logout session users (if any) and return a consistent JSON error."""
"""Logout session users (if any) and return a consistent JSON error with detailed tracking."""
# Determine logout reason code based on error message
reason_code = 'UNKNOWN'
if 'Account not configured' in error or 'Account not found' in error:
reason_code = 'ACCOUNT_MISSING'
elif 'suspended' in error.lower():
reason_code = 'ACCOUNT_SUSPENDED'
elif 'cancelled' in error.lower():
reason_code = 'ACCOUNT_CANCELLED'
elif 'No subscription plan' in error or 'plan assigned' in error.lower():
reason_code = 'PLAN_MISSING'
elif 'plan is inactive' in error.lower() or 'Active subscription required' in error:
reason_code = 'PLAN_INACTIVE'
elif 'inactive' in error.lower():
reason_code = 'USER_INACTIVE'
try:
if hasattr(request, 'user') and request.user and request.user.is_authenticated:
logger.warning(
f"[AUTO-LOGOUT] Account/plan validation failed: {error}. "
f"[AUTO-LOGOUT] {reason_code}: {error}. "
f"User={request.user.id}, Account={getattr(request, 'account', None)}, "
f"Path={request.path}, IP={request.META.get('REMOTE_ADDR')}"
f"Path={request.path}, IP={request.META.get('REMOTE_ADDR')}, "
f"Status={status_code}, Timestamp={datetime.now().isoformat()}"
)
logout(request)
except Exception as e:
@@ -200,6 +183,14 @@ class AccountContextMiddleware(MiddlewareMixin):
{
'success': False,
'error': error,
'logout_reason': reason_code,
'logout_message': LOGOUT_REASONS.get(reason_code, error),
'logout_path': request.path,
'logout_context': {
'user_id': request.user.id if hasattr(request, 'user') and request.user and request.user.is_authenticated else None,
'account_id': getattr(request, 'account', None).id if hasattr(request, 'account') and getattr(request, 'account', None) else None,
'status_code': status_code,
}
},
status=status_code,
)
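The reason-code classification inside `_deny_request` is string matching that can be extracted into a pure, testable helper. A minimal sketch under the assumption that the error strings match those produced by `validate_account_and_plan` (the helper name is illustrative):

```python
def classify_logout_reason(error: str) -> str:
    """Map a validation error message to a logout reason code,
    mirroring the branch order in _deny_request above."""
    if 'Account not configured' in error or 'Account not found' in error:
        return 'ACCOUNT_MISSING'
    if 'suspended' in error.lower():
        return 'ACCOUNT_SUSPENDED'
    if 'cancelled' in error.lower():
        return 'ACCOUNT_CANCELLED'
    if 'No subscription plan' in error or 'plan assigned' in error.lower():
        return 'PLAN_MISSING'
    if 'plan is inactive' in error.lower() or 'Active subscription required' in error:
        return 'PLAN_INACTIVE'
    if 'inactive' in error.lower():
        return 'USER_INACTIVE'
    return 'UNKNOWN'
```

Keeping the mapping order identical to the middleware matters: the generic `'inactive'` check must come last, or it would shadow the more specific plan-inactive branch.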


@@ -0,0 +1,30 @@
# Generated by Django 5.2.9 on 2025-12-17 06:04
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0017_add_history_tracking'),
]
operations = [
migrations.RemoveIndex(
model_name='seedkeyword',
name='igny8_seed__intent_15020d_idx',
),
migrations.RemoveField(
model_name='seedkeyword',
name='intent',
),
migrations.AddField(
model_name='seedkeyword',
name='country',
field=models.CharField(choices=[('US', 'United States'), ('CA', 'Canada'), ('GB', 'United Kingdom'), ('AE', 'United Arab Emirates'), ('AU', 'Australia'), ('IN', 'India'), ('PK', 'Pakistan')], default='US', help_text='Target country for this keyword', max_length=2),
),
migrations.AddIndex(
model_name='seedkeyword',
index=models.Index(fields=['country'], name='igny8_seed__country_4127a5_idx'),
),
]


@@ -0,0 +1,100 @@
# Generated by IGNY8 Phase 1: Simplify Credits & Limits
# Migration: Remove unused limit fields, add Ahrefs query tracking
# Date: January 5, 2026
from django.db import migrations, models
import django.core.validators
class Migration(migrations.Migration):
"""
Simplify the credits and limits system:
PLAN MODEL:
- REMOVE: max_clusters, max_content_ideas, max_content_words,
max_images_basic, max_images_premium, max_image_prompts
- ADD: max_ahrefs_queries (monthly keyword research queries)
ACCOUNT MODEL:
- REMOVE: usage_content_ideas, usage_content_words, usage_images_basic,
usage_images_premium, usage_image_prompts
- ADD: usage_ahrefs_queries
RATIONALE:
All consumption is now controlled by credits only. The only non-credit
limits are: sites, users, keywords (hard limits) and ahrefs_queries (monthly).
"""
dependencies = [
('igny8_core_auth', '0018_add_country_remove_intent_seedkeyword'),
]
operations = [
# STEP 1: Add new Ahrefs fields FIRST (before removing old ones)
migrations.AddField(
model_name='plan',
name='max_ahrefs_queries',
field=models.IntegerField(
default=0,
validators=[django.core.validators.MinValueValidator(0)],
help_text='Monthly Ahrefs keyword research queries (0 = disabled)'
),
),
migrations.AddField(
model_name='account',
name='usage_ahrefs_queries',
field=models.IntegerField(
default=0,
validators=[django.core.validators.MinValueValidator(0)],
help_text='Ahrefs queries used this month'
),
),
# STEP 2: Remove unused Plan fields
migrations.RemoveField(
model_name='plan',
name='max_clusters',
),
migrations.RemoveField(
model_name='plan',
name='max_content_ideas',
),
migrations.RemoveField(
model_name='plan',
name='max_content_words',
),
migrations.RemoveField(
model_name='plan',
name='max_images_basic',
),
migrations.RemoveField(
model_name='plan',
name='max_images_premium',
),
migrations.RemoveField(
model_name='plan',
name='max_image_prompts',
),
# STEP 3: Remove unused Account fields
migrations.RemoveField(
model_name='account',
name='usage_content_ideas',
),
migrations.RemoveField(
model_name='account',
name='usage_content_words',
),
migrations.RemoveField(
model_name='account',
name='usage_images_basic',
),
migrations.RemoveField(
model_name='account',
name='usage_images_premium',
),
migrations.RemoveField(
model_name='account',
name='usage_image_prompts',
),
]
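The add-before-remove ordering this migration documents (STEP 1 before STEPS 2-3) can be sanity-checked with a framework-free sketch. The operation tuples below are illustrative stand-ins for Django's `AddField`/`RemoveField` classes, not the real migration objects:

```python
# Stand-in for the migration's operations list: each tuple is
# (operation kind, model, field). Adds must precede removes so no
# intermediate schema state lacks a usable counter column.
operations = [
    ('AddField', 'plan', 'max_ahrefs_queries'),
    ('AddField', 'account', 'usage_ahrefs_queries'),
    ('RemoveField', 'plan', 'max_clusters'),
    ('RemoveField', 'account', 'usage_content_ideas'),
]

def adds_precede_removes(ops):
    """Return True if every AddField appears before the first RemoveField."""
    first_remove = next(
        (i for i, (kind, *_) in enumerate(ops) if kind == 'RemoveField'),
        len(ops),
    )
    return all(i < first_remove for i, (kind, *_) in enumerate(ops)
               if kind == 'AddField')
```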


@@ -0,0 +1,39 @@
# Generated by Django 5.2.9 on 2026-01-06 00:11
import django.core.validators
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0019_simplify_credits_limits'),
]
operations = [
migrations.RemoveField(
model_name='historicalaccount',
name='usage_content_ideas',
),
migrations.RemoveField(
model_name='historicalaccount',
name='usage_content_words',
),
migrations.RemoveField(
model_name='historicalaccount',
name='usage_image_prompts',
),
migrations.RemoveField(
model_name='historicalaccount',
name='usage_images_basic',
),
migrations.RemoveField(
model_name='historicalaccount',
name='usage_images_premium',
),
migrations.AddField(
model_name='historicalaccount',
name='usage_ahrefs_queries',
field=models.IntegerField(default=0, help_text='Ahrefs queries used this month', validators=[django.core.validators.MinValueValidator(0)]),
),
]


@@ -108,11 +108,7 @@ class Account(SoftDeletableModel):
tax_id = models.CharField(max_length=100, blank=True, help_text="VAT/Tax ID number")
# Monthly usage tracking (reset on billing cycle)
usage_content_ideas = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Content ideas generated this month")
usage_content_words = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Content words generated this month")
usage_images_basic = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Basic AI images this month")
usage_images_premium = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Premium AI images this month")
usage_image_prompts = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Image prompts this month")
usage_ahrefs_queries = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Ahrefs queries used this month")
usage_period_start = models.DateTimeField(null=True, blank=True, help_text="Current billing period start")
usage_period_end = models.DateTimeField(null=True, blank=True, help_text="Current billing period end")
@@ -157,12 +153,152 @@ class Account(SoftDeletableModel):
# System accounts bypass all filtering restrictions
return self.slug in ['aws-admin', 'default-account', 'default']
def soft_delete(self, user=None, reason=None, retention_days=None):
def soft_delete(self, user=None, reason=None, retention_days=None, cascade=True):
"""
Soft delete the account and optionally cascade to all related objects.
Args:
user: User performing the deletion
reason: Reason for deletion
retention_days: Days before permanent deletion
cascade: If True, also soft-delete related objects that support soft delete,
and hard-delete objects that don't support soft delete
"""
if self.is_system_account():
from django.core.exceptions import PermissionDenied
raise PermissionDenied("System account cannot be deleted.")
if cascade:
self._cascade_delete_related(user=user, reason=reason, retention_days=retention_days, hard_delete=False)
return super().soft_delete(user=user, reason=reason, retention_days=retention_days)
def _cascade_delete_related(self, user=None, reason=None, retention_days=None, hard_delete=False):
"""
Delete all related objects when account is deleted.
For soft delete: soft-deletes objects with SoftDeletableModel, hard-deletes others
For hard delete: hard-deletes everything
"""
from igny8_core.common.soft_delete import SoftDeletableModel
# List of related objects to delete (in order to avoid FK constraint issues)
# Related names from Account reverse relations
related_names = [
# Content & Planning related (delete first due to dependencies)
'contentclustermap_set',
'contentattribute_set',
'contenttaxonomy_set',
'content_set',
'images_set',
'contentideas_set',
'tasks_set',
'keywords_set',
'clusters_set',
'strategy_set',
# Automation
'automation_runs',
'automation_configs',
# Publishing & Integration
'syncevent_set',
'publishingsettings_set',
'publishingrecord_set',
'deploymentrecord_set',
'siteintegration_set',
# Notifications & Optimization
'notification_set',
'optimizationtask_set',
# AI & Settings
'aitasklog_set',
'aiprompt_set',
'aisettings_set',
'authorprofile_set',
# Billing (preserve invoices/payments for audit, delete others)
'planlimitusage_set',
'creditusagelog_set',
'credittransaction_set',
'accountpaymentmethod_set',
'payment_set',
'invoice_set',
# Settings
'modulesettings_set',
'moduleenablesettings_set',
'integrationsettings_set',
'user_settings',
'accountsettings_set',
# Core (last due to dependencies)
'sector_set',
'site_set',
# Users (delete after sites to avoid FK issues, owner is SET_NULL)
'users',
# Subscription (OneToOne)
'subscription',
]
for related_name in related_names:
try:
related = getattr(self, related_name, None)
if related is None:
continue
# Handle OneToOne fields (subscription)
if hasattr(related, 'pk'):
# It's a single object (OneToOneField)
if hard_delete:
related.hard_delete() if hasattr(related, 'hard_delete') else related.delete()
elif isinstance(related, SoftDeletableModel):
related.soft_delete(user=user, reason=reason, retention_days=retention_days)
else:
# Non-soft-deletable single object - hard delete
related.delete()
else:
# It's a RelatedManager (ForeignKey)
queryset = related.all()
if queryset.exists():
if hard_delete:
# Hard delete all
if hasattr(queryset, 'hard_delete'):
queryset.hard_delete()
else:
for obj in queryset:
if hasattr(obj, 'hard_delete'):
obj.hard_delete()
else:
obj.delete()
else:
# Soft delete if supported, otherwise hard delete
model = queryset.model
if issubclass(model, SoftDeletableModel):
for obj in queryset:
obj.soft_delete(user=user, reason=reason, retention_days=retention_days)
else:
queryset.delete()
except Exception as e:
# Log but don't fail - some relations may not exist
import logging
logger = logging.getLogger(__name__)
logger.warning(f"Failed to delete related {related_name} for account {self.pk}: {e}")
def hard_delete_with_cascade(self, using=None, keep_parents=False):
"""
Permanently delete the account and ALL related objects.
This bypasses soft-delete and removes everything from the database.
USE WITH CAUTION - this cannot be undone!
"""
if self.is_system_account():
from django.core.exceptions import PermissionDenied
raise PermissionDenied("System account cannot be deleted.")
# Clear owner reference first to avoid FK constraint issues
# (owner is SET_NULL but we're deleting the user who is the owner)
if self.owner:
self.owner = None
self.save(update_fields=['owner'])
# Cascade hard-delete all related objects first
self._cascade_delete_related(hard_delete=True)
# Finally hard-delete the account itself
return super().hard_delete(using=using, keep_parents=keep_parents)
def delete(self, using=None, keep_parents=False):
return self.soft_delete()
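The soft-vs-hard dispatch inside `_cascade_delete_related` can be illustrated without Django. This is a minimal sketch with stand-in classes (the real `SoftDeletableModel` lives in `igny8_core.common.soft_delete`; everything here is hypothetical scaffolding):

```python
class SoftDeletableModel:
    """Stand-in for the project's soft-delete base class."""
    def __init__(self):
        self.is_deleted = False
    def soft_delete(self):
        self.is_deleted = True
    def delete(self):
        self.is_deleted = True  # hard path falls back to delete()

class PlainModel:
    """A related model with no soft-delete support."""
    def __init__(self):
        self.hard_deleted = False
    def delete(self):
        self.hard_deleted = True

def cascade_delete(objects, hard_delete=False):
    """Soft-delete objects that support it, hard-delete the rest,
    mirroring the branch logic in Account._cascade_delete_related."""
    for obj in objects:
        if not hard_delete and isinstance(obj, SoftDeletableModel):
            obj.soft_delete()
        else:
            obj.delete()
```

The ordering of `related_names` in the real method matters for the same reason as in SQL: children are deleted before parents so foreign-key constraints are never violated mid-cascade.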
@@ -216,37 +352,12 @@ class Plan(models.Model):
validators=[MinValueValidator(1)],
help_text="Maximum total keywords allowed (hard limit)"
)
max_clusters = models.IntegerField(
default=100,
validators=[MinValueValidator(1)],
help_text="Maximum AI keyword clusters allowed (hard limit)"
)
# Monthly Limits (Reset on billing cycle)
max_content_ideas = models.IntegerField(
default=300,
validators=[MinValueValidator(1)],
help_text="Maximum AI content ideas per month"
)
max_content_words = models.IntegerField(
default=100000,
validators=[MinValueValidator(1)],
help_text="Maximum content words per month (e.g., 100000 = 100K words)"
)
max_images_basic = models.IntegerField(
default=300,
max_ahrefs_queries = models.IntegerField(
default=0,
validators=[MinValueValidator(0)],
help_text="Maximum basic AI images per month"
)
max_images_premium = models.IntegerField(
default=60,
validators=[MinValueValidator(0)],
help_text="Maximum premium AI images per month (DALL-E)"
)
max_image_prompts = models.IntegerField(
default=300,
validators=[MinValueValidator(0)],
help_text="Maximum image prompts per month"
help_text="Monthly Ahrefs keyword research queries (0 = disabled)"
)
# Billing & Credits (Phase 0: Credit-only system)
@@ -517,11 +628,14 @@ class SeedKeyword(models.Model):
These are canonical keywords that can be imported into account-specific Keywords.
Non-deletable global reference data.
"""
INTENT_CHOICES = [
('informational', 'Informational'),
('navigational', 'Navigational'),
('commercial', 'Commercial'),
('transactional', 'Transactional'),
COUNTRY_CHOICES = [
('US', 'United States'),
('CA', 'Canada'),
('GB', 'United Kingdom'),
('AE', 'United Arab Emirates'),
('AU', 'Australia'),
('IN', 'India'),
('PK', 'Pakistan'),
]
keyword = models.CharField(max_length=255, db_index=True)
@@ -533,7 +647,7 @@ class SeedKeyword(models.Model):
validators=[MinValueValidator(0), MaxValueValidator(100)],
help_text='Keyword difficulty (0-100)'
)
intent = models.CharField(max_length=50, choices=INTENT_CHOICES, default='informational')
country = models.CharField(max_length=2, choices=COUNTRY_CHOICES, default='US', help_text='Target country for this keyword')
is_active = models.BooleanField(default=True, db_index=True)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
@@ -547,7 +661,7 @@ class SeedKeyword(models.Model):
models.Index(fields=['keyword']),
models.Index(fields=['industry', 'sector']),
models.Index(fields=['industry', 'sector', 'is_active']),
models.Index(fields=['intent']),
models.Index(fields=['country']),
]
ordering = ['keyword']
@@ -746,7 +860,7 @@ class User(AbstractUser):
if not self.account:
return Site.objects.none()
base_sites = Site.objects.filter(account=self.account, is_active=True)
base_sites = Site.objects.filter(account=self.account)
if self.role in ['owner', 'admin', 'developer'] or self.is_superuser or self.is_system_account_user():
return base_sites


@@ -13,9 +13,7 @@ class PlanSerializer(serializers.ModelSerializer):
'id', 'name', 'slug', 'price', 'original_price', 'billing_cycle', 'annual_discount_percent',
'is_featured', 'features', 'is_active',
'max_users', 'max_sites', 'max_industries', 'max_author_profiles',
'max_keywords', 'max_clusters',
'max_content_ideas', 'max_content_words',
'max_images_basic', 'max_images_premium', 'max_image_prompts',
'max_keywords', 'max_ahrefs_queries',
'included_credits', 'extra_credit_price', 'allow_credit_topup',
'auto_credit_topup_threshold', 'auto_credit_topup_amount',
'stripe_product_id', 'stripe_price_id', 'credits_per_month'
@@ -55,7 +53,7 @@ class AccountSerializer(serializers.ModelSerializer):
fields = [
'id', 'name', 'slug', 'owner', 'plan', 'plan_id',
'credits', 'status', 'payment_method',
'subscription', 'created_at'
'subscription', 'billing_country', 'created_at'
]
read_only_fields = ['owner', 'created_at']
@@ -66,6 +64,8 @@ class SiteSerializer(serializers.ModelSerializer):
active_sectors_count = serializers.SerializerMethodField()
selected_sectors = serializers.SerializerMethodField()
can_add_sectors = serializers.SerializerMethodField()
keywords_count = serializers.SerializerMethodField()
has_integration = serializers.SerializerMethodField()
industry_name = serializers.CharField(source='industry.name', read_only=True)
industry_slug = serializers.CharField(source='industry.slug', read_only=True)
# Override domain field to use CharField instead of URLField to avoid premature validation
@@ -79,7 +79,7 @@ class SiteSerializer(serializers.ModelSerializer):
'is_active', 'status',
'site_type', 'hosting_type', 'seo_metadata',
'sectors_count', 'active_sectors_count', 'selected_sectors',
'can_add_sectors',
'can_add_sectors', 'keywords_count', 'has_integration',
'created_at', 'updated_at'
]
read_only_fields = ['created_at', 'updated_at', 'account']
@@ -161,6 +161,20 @@ class SiteSerializer(serializers.ModelSerializer):
"""Check if site can add more sectors (max 5)."""
return obj.can_add_sector()
def get_keywords_count(self, obj):
"""Get total keywords count for the site across all sectors."""
from igny8_core.modules.planner.models import Keywords
return Keywords.objects.filter(site=obj).count()
def get_has_integration(self, obj):
"""Check if site has an active WordPress integration."""
from igny8_core.business.integration.models import SiteIntegration
return SiteIntegration.objects.filter(
site=obj,
platform='wordpress',
is_active=True
).exists() or bool(obj.wp_url)
class IndustrySectorSerializer(serializers.ModelSerializer):
"""Serializer for IndustrySector model."""
@@ -392,11 +406,20 @@ class RegisterSerializer(serializers.Serializer):
)
# Generate unique slug for account
base_slug = account_name.lower().replace(' ', '-').replace('_', '-')[:50] or 'account'
slug = base_slug
# Clean the base slug: lowercase, replace spaces and underscores with hyphens
import re
import random
import string
base_slug = re.sub(r'[^a-z0-9-]', '', account_name.lower().replace(' ', '-').replace('_', '-'))[:40] or 'account'
# Add random suffix to prevent collisions (especially during concurrent registrations)
random_suffix = ''.join(random.choices(string.ascii_lowercase + string.digits, k=6))
slug = f"{base_slug}-{random_suffix}"
# Ensure uniqueness with fallback counter
counter = 1
while Account.objects.filter(slug=slug).exists():
slug = f"{base_slug}-{counter}"
slug = f"{base_slug}-{random_suffix}-{counter}"
counter += 1
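The collision-resistant slug scheme added above (sanitize, random suffix, counter fallback) can be sketched in isolation. This is an illustrative helper, not code from the repo; `existing_slugs` stands in for the `Account.objects.filter(slug=...)` lookup.

```python
import random
import re
import string

def make_account_slug(account_name, existing_slugs):
    """Sketch of the slug scheme above: sanitize the name, append a
    random suffix, then fall back to a counter on collision."""
    base = re.sub(r'[^a-z0-9-]', '',
                  account_name.lower().replace(' ', '-').replace('_', '-'))[:40] or 'account'
    suffix = ''.join(random.choices(string.ascii_lowercase + string.digits, k=6))
    slug = f"{base}-{suffix}"
    counter = 1
    while slug in existing_slugs:
        slug = f"{base}-{suffix}-{counter}"
        counter += 1
    return slug
```

The random suffix makes concurrent registrations unlikely to collide in the first place; the counter loop is only a last-resort guard.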
# Create account with status and credits seeded (0 for paid pending)
@@ -481,6 +504,7 @@ class LoginSerializer(serializers.Serializer):
"""Serializer for user login."""
email = serializers.EmailField()
password = serializers.CharField(write_only=True)
remember_me = serializers.BooleanField(required=False, default=False)
class ChangePasswordSerializer(serializers.Serializer):
@@ -523,14 +547,14 @@ class SeedKeywordSerializer(serializers.ModelSerializer):
industry_slug = serializers.CharField(source='industry.slug', read_only=True)
sector_name = serializers.CharField(source='sector.name', read_only=True)
sector_slug = serializers.CharField(source='sector.slug', read_only=True)
intent_display = serializers.CharField(source='get_intent_display', read_only=True)
country_display = serializers.CharField(source='get_country_display', read_only=True)
class Meta:
model = SeedKeyword
fields = [
'id', 'keyword', 'industry', 'industry_name', 'industry_slug',
'sector', 'sector_name', 'sector_slug',
'volume', 'difficulty', 'intent', 'intent_display',
'volume', 'difficulty', 'country', 'country_display',
'is_active', 'created_at', 'updated_at'
]
read_only_fields = ['created_at', 'updated_at']


@@ -46,13 +46,56 @@ class RegisterView(APIView):
permission_classes = [permissions.AllowAny]
def post(self, request):
from .utils import generate_access_token, generate_refresh_token, get_token_expiry
from django.contrib.auth import login
from .utils import generate_access_token, generate_refresh_token, get_access_token_expiry, get_refresh_token_expiry
from django.contrib.auth import login, logout
from django.utils import timezone
force_logout = request.data.get('force_logout', False)
serializer = RegisterSerializer(data=request.data)
if serializer.is_valid():
user = serializer.save()
# SECURITY: Check for session contamination before login
# If there's an existing session from a different user, handle it
if request.session.session_key:
existing_user_id = request.session.get('_auth_user_id')
if existing_user_id and str(existing_user_id) != str(user.id):
# Get existing user details
try:
existing_user = User.objects.get(id=existing_user_id)
existing_email = existing_user.email
existing_username = existing_user.username or existing_email.split('@')[0]
except User.DoesNotExist:
existing_email = 'Unknown user'
existing_username = 'Unknown'
# If not forcing logout, return conflict info
if not force_logout:
return Response(
{
'status': 'error',
'error': 'session_conflict',
'message': f'You have an active session for another account ({existing_email}). Please logout first or choose to continue.',
'existing_user': {
'email': existing_email,
'username': existing_username,
'id': existing_user_id
},
'requested_user': {
'email': user.email,
'username': user.username or user.email.split('@')[0],
'id': user.id
}
},
status=status.HTTP_409_CONFLICT
)
# Force logout - clean existing session completely
logout(request)
# Clear all session data
request.session.flush()
# Log the user in (create session for session authentication)
login(request, user)
@@ -62,20 +105,42 @@ class RegisterView(APIView):
# Generate JWT tokens
access_token = generate_access_token(user, account)
refresh_token = generate_refresh_token(user, account)
access_expires_at = get_token_expiry('access')
refresh_expires_at = get_token_expiry('refresh')
access_expires_at = timezone.now() + get_access_token_expiry()
refresh_expires_at = timezone.now() + get_refresh_token_expiry()
user_serializer = UserSerializer(user)
# Build response data
response_data = {
'user': user_serializer.data,
'tokens': {
'access': access_token,
'refresh': refresh_token,
'access_expires_at': access_expires_at.isoformat(),
'refresh_expires_at': refresh_expires_at.isoformat(),
}
}
# NOTE: Payment checkout is NO LONGER created at registration
# User will complete payment on /account/plans after signup
# This simplifies the signup flow and consolidates all payment handling
# Send welcome email (if enabled in settings)
try:
from igny8_core.modules.system.email_models import EmailSettings
from igny8_core.business.billing.services.email_service import send_welcome_email
email_settings = EmailSettings.get_settings()
if email_settings.send_welcome_emails and account:
send_welcome_email(user, account)
except Exception as e:
# Don't fail registration if email fails
import logging
logger = logging.getLogger(__name__)
logger.error(f"Failed to send welcome email for user {user.id}: {e}")
return success_response(
data={
'user': user_serializer.data,
'tokens': {
'access': access_token,
'refresh': refresh_token,
'access_expires_at': access_expires_at.isoformat(),
'refresh_expires_at': refresh_expires_at.isoformat(),
}
},
data=response_data,
message='Registration successful',
status_code=status.HTTP_201_CREATED,
request=request
@@ -102,6 +167,8 @@ class LoginView(APIView):
if serializer.is_valid():
email = serializer.validated_data['email']
password = serializer.validated_data['password']
remember_me = serializer.validated_data.get('remember_me', False)
force_logout = request.data.get('force_logout', False)
try:
user = User.objects.select_related('account', 'account__plan').get(email=email)
@@ -113,6 +180,47 @@ class LoginView(APIView):
)
if user.check_password(password):
# SECURITY: Check for session contamination before login
# If user has a session cookie from a different user, handle it
if request.session.session_key:
existing_user_id = request.session.get('_auth_user_id')
if existing_user_id and str(existing_user_id) != str(user.id):
# Get existing user details
try:
existing_user = User.objects.get(id=existing_user_id)
existing_email = existing_user.email
existing_username = existing_user.username or existing_email.split('@')[0]
except User.DoesNotExist:
existing_email = 'Unknown user'
existing_username = 'Unknown'
# If not forcing logout, return conflict info
if not force_logout:
return Response(
{
'status': 'error',
'error': 'session_conflict',
'message': f'You have an active session for another account ({existing_email}). Please logout first or choose to continue.',
'existing_user': {
'email': existing_email,
'username': existing_username,
'id': existing_user_id
},
'requested_user': {
'email': user.email,
'username': user.username or user.email.split('@')[0],
'id': user.id
}
},
status=status.HTTP_409_CONFLICT
)
# Force logout - clean existing session completely
from django.contrib.auth import logout
logout(request)
# Clear all session data
request.session.flush()
# Log the user in (create session for session authentication)
from django.contrib.auth import login
login(request, user)
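The session-contamination check that both RegisterView and LoginView now perform reduces to a three-way decision. A minimal sketch of that decision table, outside Django (the function and return labels are illustrative, not part of the codebase):

```python
def resolve_session_conflict(session_user_id, login_user_id, force_logout):
    """Sketch of the check above. Returns the action the view takes:
    'conflict' -> respond 409 with both identities,
    'flush'    -> logout() + session.flush() before login,
    'proceed'  -> no contaminated session, log in directly."""
    if session_user_id is not None and str(session_user_id) != str(login_user_id):
        return 'flush' if force_logout else 'conflict'
    return 'proceed'
```

The string comparison mirrors the views, where `_auth_user_id` comes back from the session as a string while `user.id` is an int.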
@@ -121,11 +229,12 @@ class LoginView(APIView):
account = getattr(user, 'account', None)
# Generate JWT tokens
from .utils import generate_access_token, generate_refresh_token, get_token_expiry
access_token = generate_access_token(user, account)
from .utils import generate_access_token, generate_refresh_token, get_access_token_expiry, get_refresh_token_expiry
from django.utils import timezone
access_token = generate_access_token(user, account, remember_me=remember_me)
refresh_token = generate_refresh_token(user, account)
access_expires_at = get_token_expiry('access')
refresh_expires_at = get_token_expiry('refresh')
access_expires_at = timezone.now() + get_access_token_expiry(remember_me=remember_me)
refresh_expires_at = timezone.now() + get_refresh_token_expiry()
# Serialize user data safely, handling missing account relationship
try:
@@ -176,6 +285,128 @@ class LoginView(APIView):
)
@extend_schema(
tags=['Authentication'],
summary='Request Password Reset',
description='Request password reset email'
)
class PasswordResetRequestView(APIView):
"""Request password reset endpoint - sends email with reset token."""
permission_classes = [permissions.AllowAny]
def post(self, request):
from .serializers import RequestPasswordResetSerializer
from .models import PasswordResetToken
serializer = RequestPasswordResetSerializer(data=request.data)
if not serializer.is_valid():
return error_response(
error='Validation failed',
errors=serializer.errors,
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
email = serializer.validated_data['email']
try:
user = User.objects.get(email=email)
except User.DoesNotExist:
# Don't reveal if email exists - return success anyway
return success_response(
message='If an account with that email exists, a password reset link has been sent.',
request=request
)
# Generate secure token
import secrets
token = secrets.token_urlsafe(32)
# Create reset token (expires in 1 hour)
from django.utils import timezone
from datetime import timedelta
expires_at = timezone.now() + timedelta(hours=1)
PasswordResetToken.objects.create(
user=user,
token=token,
expires_at=expires_at
)
# Send password reset email
import logging
logger = logging.getLogger(__name__)
logger.info(f"[PASSWORD_RESET] Attempting to send reset email to: {email}")
try:
from igny8_core.business.billing.services.email_service import send_password_reset_email
result = send_password_reset_email(user, token)
logger.info(f"[PASSWORD_RESET] Email send result: {result}")
print(f"[PASSWORD_RESET] Email send result: {result}") # Console output
except Exception as e:
logger.error(f"[PASSWORD_RESET] Failed to send password reset email: {e}", exc_info=True)
print(f"[PASSWORD_RESET] ERROR: {e}") # Console output
return success_response(
message='If an account with that email exists, a password reset link has been sent.',
request=request
)
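The reset-token lifecycle above (opaque secret, one-hour expiry, single use) can be sketched without the model layer. `record` here is a plain dict standing in for a `PasswordResetToken` row:

```python
import secrets
from datetime import datetime, timedelta, timezone

def issue_reset_token(ttl_hours=1):
    """Sketch of token issuance above: a URL-safe secret plus an
    absolute expiry, to be stored server-side."""
    token = secrets.token_urlsafe(32)
    expires_at = datetime.now(timezone.utc) + timedelta(hours=ttl_hours)
    return token, expires_at

def token_is_valid(record, now=None):
    """Mirror of the confirm view's lookup: unused and not expired."""
    now = now or datetime.now(timezone.utc)
    return not record['used'] and record['expires_at'] > now
```

Note the request view deliberately returns the same success message whether or not the email exists, so the endpoint cannot be used to enumerate accounts.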
@extend_schema(
tags=['Authentication'],
summary='Reset Password',
description='Reset password using token from email'
)
class PasswordResetConfirmView(APIView):
"""Confirm password reset with token."""
permission_classes = [permissions.AllowAny]
def post(self, request):
from .serializers import ResetPasswordSerializer
from .models import PasswordResetToken
from django.utils import timezone
serializer = ResetPasswordSerializer(data=request.data)
if not serializer.is_valid():
return error_response(
error='Validation failed',
errors=serializer.errors,
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
token = serializer.validated_data['token']
new_password = serializer.validated_data['new_password']
try:
reset_token = PasswordResetToken.objects.get(
token=token,
used=False,
expires_at__gt=timezone.now()
)
except PasswordResetToken.DoesNotExist:
return error_response(
error='Invalid or expired reset token',
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
# Reset password
user = reset_token.user
user.set_password(new_password)
user.save()
# Mark token as used
reset_token.used = True
reset_token.save()
return success_response(
message='Password reset successfully. You can now log in with your new password.',
request=request
)
@extend_schema(
tags=['Authentication'],
summary='Change Password',
@@ -271,6 +502,7 @@ class RefreshTokenView(APIView):
account = getattr(user, 'account', None)
# Generate new access token
from .utils import get_token_expiry
access_token = generate_access_token(user, account)
access_expires_at = get_token_expiry('access')
@@ -290,6 +522,77 @@ class RefreshTokenView(APIView):
)
@extend_schema(
tags=['Authentication'],
summary='Get Country List',
description='Returns list of countries for registration country selection'
)
class CountryListView(APIView):
"""Returns list of countries for signup dropdown"""
permission_classes = [permissions.AllowAny] # Public endpoint
def get(self, request):
"""Get list of countries with codes and names"""
# Comprehensive list of countries for billing purposes
countries = [
{'code': 'US', 'name': 'United States'},
{'code': 'GB', 'name': 'United Kingdom'},
{'code': 'CA', 'name': 'Canada'},
{'code': 'AU', 'name': 'Australia'},
{'code': 'DE', 'name': 'Germany'},
{'code': 'FR', 'name': 'France'},
{'code': 'ES', 'name': 'Spain'},
{'code': 'IT', 'name': 'Italy'},
{'code': 'NL', 'name': 'Netherlands'},
{'code': 'BE', 'name': 'Belgium'},
{'code': 'CH', 'name': 'Switzerland'},
{'code': 'AT', 'name': 'Austria'},
{'code': 'SE', 'name': 'Sweden'},
{'code': 'NO', 'name': 'Norway'},
{'code': 'DK', 'name': 'Denmark'},
{'code': 'FI', 'name': 'Finland'},
{'code': 'IE', 'name': 'Ireland'},
{'code': 'PT', 'name': 'Portugal'},
{'code': 'PL', 'name': 'Poland'},
{'code': 'CZ', 'name': 'Czech Republic'},
{'code': 'NZ', 'name': 'New Zealand'},
{'code': 'SG', 'name': 'Singapore'},
{'code': 'HK', 'name': 'Hong Kong'},
{'code': 'JP', 'name': 'Japan'},
{'code': 'KR', 'name': 'South Korea'},
{'code': 'IN', 'name': 'India'},
{'code': 'PK', 'name': 'Pakistan'},
{'code': 'BD', 'name': 'Bangladesh'},
{'code': 'AE', 'name': 'United Arab Emirates'},
{'code': 'SA', 'name': 'Saudi Arabia'},
{'code': 'ZA', 'name': 'South Africa'},
{'code': 'NG', 'name': 'Nigeria'},
{'code': 'EG', 'name': 'Egypt'},
{'code': 'KE', 'name': 'Kenya'},
{'code': 'BR', 'name': 'Brazil'},
{'code': 'MX', 'name': 'Mexico'},
{'code': 'AR', 'name': 'Argentina'},
{'code': 'CL', 'name': 'Chile'},
{'code': 'CO', 'name': 'Colombia'},
{'code': 'PE', 'name': 'Peru'},
{'code': 'MY', 'name': 'Malaysia'},
{'code': 'TH', 'name': 'Thailand'},
{'code': 'VN', 'name': 'Vietnam'},
{'code': 'PH', 'name': 'Philippines'},
{'code': 'ID', 'name': 'Indonesia'},
{'code': 'TR', 'name': 'Turkey'},
{'code': 'RU', 'name': 'Russia'},
{'code': 'UA', 'name': 'Ukraine'},
{'code': 'RO', 'name': 'Romania'},
{'code': 'GR', 'name': 'Greece'},
{'code': 'IL', 'name': 'Israel'},
{'code': 'TW', 'name': 'Taiwan'},
]
# Sort alphabetically by name
countries.sort(key=lambda x: x['name'])
return Response({'countries': countries})
@extend_schema(exclude=True) # Exclude from public API documentation - internal authenticated endpoint
class MeView(APIView):
"""Get current user information."""
@@ -307,12 +610,86 @@ class MeView(APIView):
)
@extend_schema(
tags=['Authentication'],
summary='Unsubscribe from Emails',
description='Unsubscribe a user from marketing, billing, or all email notifications'
)
class UnsubscribeView(APIView):
"""Handle email unsubscribe requests with signed URLs."""
permission_classes = [permissions.AllowAny]
def post(self, request):
"""
Process unsubscribe request.
Expected payload:
- email: The email address to unsubscribe
- type: Type of emails to unsubscribe from (marketing, billing, all)
- ts: Timestamp from signed URL
- sig: HMAC signature from signed URL
"""
from igny8_core.business.billing.services.email_service import verify_unsubscribe_signature
import logging
logger = logging.getLogger(__name__)
email = request.data.get('email')
email_type = request.data.get('type', 'all')
timestamp = request.data.get('ts')
signature = request.data.get('sig')
# Validate required fields
if not email or not timestamp or not signature:
return error_response(
error='Missing required parameters',
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
try:
timestamp = int(timestamp)
except (ValueError, TypeError):
return error_response(
error='Invalid timestamp',
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
# Verify signature
if not verify_unsubscribe_signature(email, email_type, timestamp, signature):
return error_response(
error='Invalid or expired unsubscribe link',
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
# Log the unsubscribe request
# In production, update user preferences or use email provider's suppression list
logger.info(f'Unsubscribe request processed: email={email}, type={email_type}')
# TODO: Implement preference storage
# Options:
# 1. Add email preference fields to User model
# 2. Use Resend's suppression list API
# 3. Create EmailPreferences model
return success_response(
message=f'Successfully unsubscribed from {email_type} emails',
request=request
)
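The signed-URL scheme that UnsubscribeView relies on pairs an HMAC over the email/type/timestamp with a freshness window. The real `verify_unsubscribe_signature` lives in the email service and is not shown here; this is a plausible stdlib sketch of the pattern (the secret, message layout, and 7-day window are assumptions):

```python
import hashlib
import hmac
import time

SECRET = b'example-secret'  # assumption: server-side signing key

def sign_unsubscribe(email, email_type, ts):
    # Message layout is illustrative; the service may differ.
    msg = f'{email}:{email_type}:{ts}'.encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_unsubscribe(email, email_type, ts, sig, max_age=7 * 24 * 3600, now=None):
    """Constant-time compare plus a freshness window so old links expire."""
    now = now if now is not None else int(time.time())
    if now - ts > max_age:
        return False
    expected = sign_unsubscribe(email, email_type, ts)
    return hmac.compare_digest(expected, sig)
```

`hmac.compare_digest` matters here: a naive `==` on the hex digests would leak timing information about how many leading characters match.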
urlpatterns = [
path('', include(router.urls)),
path('register/', csrf_exempt(RegisterView.as_view()), name='auth-register'),
path('login/', csrf_exempt(LoginView.as_view()), name='auth-login'),
path('refresh/', csrf_exempt(RefreshTokenView.as_view()), name='auth-refresh'),
path('change-password/', ChangePasswordView.as_view(), name='auth-change-password'),
path('password-reset/', csrf_exempt(PasswordResetRequestView.as_view()), name='auth-password-reset-request'),
path('password-reset/confirm/', csrf_exempt(PasswordResetConfirmView.as_view()), name='auth-password-reset-confirm'),
path('me/', MeView.as_view(), name='auth-me'),
path('countries/', CountryListView.as_view(), name='auth-countries'),
path('unsubscribe/', csrf_exempt(UnsubscribeView.as_view()), name='auth-unsubscribe'),
]


@@ -17,23 +17,26 @@ def get_jwt_algorithm():
return getattr(settings, 'JWT_ALGORITHM', 'HS256')
def get_access_token_expiry():
def get_access_token_expiry(remember_me=False):
"""Get access token expiry time from settings"""
return getattr(settings, 'JWT_ACCESS_TOKEN_EXPIRY', timedelta(minutes=15))
if remember_me:
return getattr(settings, 'JWT_ACCESS_TOKEN_EXPIRY_REMEMBER_ME', timedelta(days=20))
return getattr(settings, 'JWT_ACCESS_TOKEN_EXPIRY', timedelta(hours=1))
def get_refresh_token_expiry():
"""Get refresh token expiry time from settings"""
return getattr(settings, 'JWT_REFRESH_TOKEN_EXPIRY', timedelta(days=7))
return getattr(settings, 'JWT_REFRESH_TOKEN_EXPIRY', timedelta(days=30))
def generate_access_token(user, account=None):
def generate_access_token(user, account=None, remember_me=False):
"""
Generate JWT access token for user
Args:
user: User instance
account: Account instance (optional, will use user.account if not provided)
remember_me: bool - If True, use extended expiry (20 days)
Returns:
str: JWT access token
@@ -42,7 +45,7 @@ def generate_access_token(user, account=None):
account = getattr(user, 'account', None)
now = timezone.now()
expiry = now + get_access_token_expiry()
expiry = now + get_access_token_expiry(remember_me=remember_me)
payload = {
'user_id': user.id,
@@ -51,6 +54,7 @@ def generate_access_token(user, account=None):
'exp': int(expiry.timestamp()),
'iat': int(now.timestamp()),
'type': 'access',
'remember_me': remember_me,
}
token = jwt.encode(payload, get_jwt_secret_key(), algorithm=get_jwt_algorithm())
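The utils above use PyJWT (`jwt.encode`) with HS256. A stdlib-only sketch of the token shape and the `remember_me` expiry arithmetic, for readers without PyJWT installed (field names and the 1h/20d defaults come from the diff; the secret is illustrative):

```python
import base64
import hashlib
import hmac
import json
from datetime import datetime, timedelta, timezone

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b'=').decode()

def encode_hs256(payload: dict, secret: str) -> str:
    """Stdlib sketch of what jwt.encode(..., algorithm='HS256')
    produces: header.payload.signature, each base64url-encoded."""
    header = _b64url(json.dumps({'alg': 'HS256', 'typ': 'JWT'}).encode())
    body = _b64url(json.dumps(payload).encode())
    sig = _b64url(hmac.new(secret.encode(), f'{header}.{body}'.encode(),
                           hashlib.sha256).digest())
    return f'{header}.{body}.{sig}'

def access_payload(user_id, remember_me=False, now=None):
    """Mirrors the expiry choice above: 1 hour by default, 20 days
    when remember_me is set (both overridable via settings)."""
    now = now or datetime.now(timezone.utc)
    expiry = now + (timedelta(days=20) if remember_me else timedelta(hours=1))
    return {'user_id': user_id, 'exp': int(expiry.timestamp()),
            'iat': int(now.timestamp()), 'type': 'access',
            'remember_me': remember_me}
```

Storing `remember_me` in the payload lets the refresh path re-issue an access token with the same extended lifetime.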
@@ -146,22 +150,6 @@ def validate_account_and_plan(user_or_account):
from rest_framework import status
from .models import User, Account
# Bypass validation for superusers
if isinstance(user_or_account, User):
if getattr(user_or_account, 'is_superuser', False):
return (True, None, None)
# Bypass validation for developers
if hasattr(user_or_account, 'role') and user_or_account.role == 'developer':
return (True, None, None)
# Bypass validation for system account users
try:
if hasattr(user_or_account, 'is_system_account_user') and user_or_account.is_system_account_user():
return (True, None, None)
except Exception:
pass
# Extract account from user or use directly
if isinstance(user_or_account, User):
try:


@@ -839,7 +839,7 @@ class SeedKeywordViewSet(viewsets.ReadOnlyModelViewSet):
search_fields = ['keyword']
ordering_fields = ['keyword', 'volume', 'difficulty', 'created_at']
ordering = ['keyword']
filterset_fields = ['industry', 'sector', 'intent', 'is_active']
filterset_fields = ['industry', 'sector', 'country', 'is_active']
def retrieve(self, request, *args, **kwargs):
"""Override retrieve to return unified format"""
@@ -877,7 +877,7 @@ class SeedKeywordViewSet(viewsets.ReadOnlyModelViewSet):
def import_seed_keywords(self, request):
"""
Import seed keywords from CSV (Admin/Superuser only).
Expected columns: keyword, industry_name, sector_name, volume, difficulty, intent
Expected columns: keyword, industry_name, sector_name, volume, difficulty, country
"""
import csv
from django.db import transaction
@@ -960,7 +960,7 @@ class SeedKeywordViewSet(viewsets.ReadOnlyModelViewSet):
sector=sector,
volume=int(row.get('volume', 0) or 0),
difficulty=int(row.get('difficulty', 0) or 0),
intent=row.get('intent', 'informational') or 'informational',
country=row.get('country', 'US') or 'US',
is_active=True
)
imported_count += 1
@@ -1267,16 +1267,21 @@ class AuthViewSet(viewsets.GenericViewSet):
expires_at=expires_at
)
# Send email (async via Celery if available, otherwise sync)
# Send password reset email using the email service
try:
from igny8_core.modules.system.tasks import send_password_reset_email
send_password_reset_email.delay(user.id, token)
except:
# Fallback to sync email sending
from igny8_core.business.billing.services.email_service import send_password_reset_email
send_password_reset_email(user, token)
except Exception as e:
# Fallback to Django's send_mail if email service fails
import logging
logger = logging.getLogger(__name__)
logger.error(f"Failed to send password reset email via email service: {e}")
from django.core.mail import send_mail
from django.conf import settings
reset_url = f"{request.scheme}://{request.get_host()}/reset-password?token={token}"
frontend_url = getattr(settings, 'FRONTEND_URL', 'https://app.igny8.com')
reset_url = f"{frontend_url}/reset-password?token={token}"
send_mail(
subject='Reset Your IGNY8 Password',
@@ -1487,9 +1492,9 @@ def seedkeyword_csv_template(request):
response['Content-Disposition'] = 'attachment; filename="seedkeyword_template.csv"'
writer = csv.writer(response)
writer.writerow(['keyword', 'industry', 'sector', 'volume', 'difficulty', 'intent', 'is_active'])
writer.writerow(['python programming', 'Technology', 'Software Development', '10000', '45', 'Informational', 'true'])
writer.writerow(['medical software', 'Healthcare', 'Healthcare IT', '5000', '60', 'Commercial', 'true'])
writer.writerow(['keyword', 'industry', 'sector', 'volume', 'difficulty', 'country', 'is_active'])
writer.writerow(['python programming', 'Technology', 'Software Development', '10000', '45', 'US', 'true'])
writer.writerow(['medical software', 'Healthcare', 'Healthcare IT', '5000', '60', 'CA', 'true'])
return response
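The import side of this CSV flow leans on defensive coercions (`int(row.get('volume', 0) or 0)`) so blank cells do not crash `int()`. A self-contained sketch of that parsing step, with a dict standing in for the SeedKeyword row:

```python
import csv
import io

def parse_seed_rows(csv_text):
    """Sketch of the import path above: DictReader plus the
    defensive fallbacks (empty cells become 0 / 'US')."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        rows.append({
            'keyword': row['keyword'].strip(),
            'volume': int(row.get('volume', 0) or 0),
            'difficulty': int(row.get('difficulty', 0) or 0),
            'country': row.get('country', 'US') or 'US',
        })
    return rows
```

The `or 0` matters because `row.get('volume', 0)` returns `''` (not the default) when the column exists but the cell is empty.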
@@ -1534,7 +1539,7 @@ def seedkeyword_csv_import(request):
defaults={
'volume': int(row.get('volume', 0)),
'difficulty': int(row.get('difficulty', 0)),
'intent': row.get('intent', 'Informational'),
'country': row.get('country', 'US'),
'is_active': is_active
}
)


@@ -8,12 +8,31 @@ from igny8_core.admin.base import AccountAdminMixin, Igny8ModelAdmin
from .models import AutomationConfig, AutomationRun
from import_export.admin import ExportMixin
from import_export import resources
class AutomationConfigResource(resources.ModelResource):
"""Resource class for exporting Automation Configs"""
class Meta:
model = AutomationConfig
fields = ('id', 'site__domain', 'is_enabled', 'frequency', 'scheduled_time',
'within_stage_delay', 'between_stage_delay', 'last_run_at', 'created_at')
export_order = fields
@admin.register(AutomationConfig)
class AutomationConfigAdmin(AccountAdminMixin, Igny8ModelAdmin):
class AutomationConfigAdmin(ExportMixin, AccountAdminMixin, Igny8ModelAdmin):
resource_class = AutomationConfigResource
list_display = ('site', 'is_enabled', 'frequency', 'scheduled_time', 'within_stage_delay', 'between_stage_delay', 'last_run_at')
list_filter = ('is_enabled', 'frequency')
search_fields = ('site__domain',)
actions = ['bulk_enable', 'bulk_disable']
actions = [
'bulk_enable',
'bulk_disable',
'bulk_update_frequency',
'bulk_update_delays',
]
def bulk_enable(self, request, queryset):
"""Enable selected automation configs"""
@@ -26,10 +45,122 @@ class AutomationConfigAdmin(AccountAdminMixin, Igny8ModelAdmin):
updated = queryset.update(is_enabled=False)
self.message_user(request, f'{updated} automation config(s) disabled.', messages.SUCCESS)
bulk_disable.short_description = 'Disable selected automations'
def bulk_update_frequency(self, request, queryset):
"""Update frequency for selected automation configs"""
from django import forms
if 'apply' in request.POST:
frequency = request.POST.get('frequency')
if frequency:
updated = queryset.update(frequency=frequency)
self.message_user(request, f'{updated} automation config(s) updated to frequency: {frequency}', messages.SUCCESS)
return
FREQUENCY_CHOICES = [
('hourly', 'Hourly'),
('daily', 'Daily'),
('weekly', 'Weekly'),
]
class FrequencyForm(forms.Form):
frequency = forms.ChoiceField(
choices=FREQUENCY_CHOICES,
label="Select Frequency",
help_text=f"Update frequency for {queryset.count()} automation config(s)"
)
from django.shortcuts import render
return render(request, 'admin/bulk_action_form.html', {
'title': 'Update Automation Frequency',
'queryset': queryset,
'form': FrequencyForm(),
'action': 'bulk_update_frequency',
})
bulk_update_frequency.short_description = 'Update frequency'
def bulk_update_delays(self, request, queryset):
"""Update delay settings for selected automation configs"""
from django import forms
if 'apply' in request.POST:
within_delay = int(request.POST.get('within_stage_delay') or 0)
between_delay = int(request.POST.get('between_stage_delay') or 0)
updated = queryset.update(
within_stage_delay=within_delay,
between_stage_delay=between_delay
)
self.message_user(request, f'{updated} automation config(s) delay settings updated.', messages.SUCCESS)
return
class DelayForm(forms.Form):
within_stage_delay = forms.IntegerField(
min_value=0,
initial=10,
label="Within Stage Delay (minutes)",
help_text="Delay between operations within the same stage"
)
between_stage_delay = forms.IntegerField(
min_value=0,
initial=60,
label="Between Stage Delay (minutes)",
help_text="Delay between different stages"
)
from django.shortcuts import render
return render(request, 'admin/bulk_action_form.html', {
'title': 'Update Automation Delays',
'queryset': queryset,
'form': DelayForm(),
'action': 'bulk_update_delays',
})
bulk_update_delays.short_description = 'Update delay settings'
class AutomationRunResource(resources.ModelResource):
"""Resource class for exporting Automation Runs"""
class Meta:
model = AutomationRun
fields = ('id', 'run_id', 'site__domain', 'status', 'current_stage',
'started_at', 'completed_at', 'created_at')
export_order = fields
@admin.register(AutomationRun)
class AutomationRunAdmin(AccountAdminMixin, Igny8ModelAdmin):
class AutomationRunAdmin(ExportMixin, AccountAdminMixin, Igny8ModelAdmin):
resource_class = AutomationRunResource
list_display = ('run_id', 'site', 'status', 'current_stage', 'started_at', 'completed_at')
list_filter = ('status', 'current_stage')
search_fields = ('run_id', 'site__domain')
actions = [
'bulk_retry_failed',
'bulk_cancel_running',
'bulk_delete_old_runs',
]
def bulk_retry_failed(self, request, queryset):
"""Retry failed automation runs"""
failed_runs = queryset.filter(status='failed')
count = failed_runs.update(status='pending', current_stage='keyword_research')
self.message_user(request, f'{count} failed run(s) marked for retry.', messages.SUCCESS)
bulk_retry_failed.short_description = 'Retry failed runs'
def bulk_cancel_running(self, request, queryset):
"""Cancel running automation runs"""
running = queryset.filter(status__in=['pending', 'running'])
count = running.update(status='failed')
self.message_user(request, f'{count} running automation(s) cancelled.', messages.SUCCESS)
bulk_cancel_running.short_description = 'Cancel running automations'
def bulk_delete_old_runs(self, request, queryset):
"""Delete automation runs older than 30 days"""
from django.utils import timezone
from datetime import timedelta
cutoff_date = timezone.now() - timedelta(days=30)
old_runs = queryset.filter(created_at__lt=cutoff_date)
count = old_runs.count()
old_runs.delete()
self.message_user(request, f'{count} old automation run(s) deleted (older than 30 days).', messages.SUCCESS)
bulk_delete_old_runs.short_description = 'Delete old runs (>30 days)'
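The retention rule in `bulk_delete_old_runs` is a simple cutoff filter. A standalone sketch, with dicts standing in for the queryset's `created_at__lt=cutoff_date` lookup:

```python
from datetime import datetime, timedelta, timezone

def select_old_runs(runs, days=30, now=None):
    """Sketch of the retention filter above: keep only runs whose
    created_at falls before the cutoff."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    return [r for r in runs if r['created_at'] < cutoff]
```

Note the admin action only deletes old runs *within the selected queryset*, so an admin can scope the cleanup to one site before applying it.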


@@ -0,0 +1,18 @@
# Generated by Django 5.2.9 on 2025-12-20 15:13
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('automation', '0004_add_pause_resume_cancel_fields'),
]
operations = [
migrations.AlterField(
model_name='automationconfig',
name='stage_1_batch_size',
field=models.IntegerField(default=50, help_text='Keywords per batch'),
),
]


@@ -0,0 +1,22 @@
# Generated migration for adding initial_snapshot field to AutomationRun
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('automation', '0005_add_default_image_service'),
]
operations = [
migrations.AddField(
model_name='automationrun',
name='initial_snapshot',
field=models.JSONField(
blank=True,
default=dict,
help_text='Snapshot of initial queue sizes: {stage_1_initial, stage_2_initial, ..., total_initial_items}'
),
),
]


@@ -24,7 +24,7 @@ class AutomationConfig(models.Model):
scheduled_time = models.TimeField(default='02:00', help_text="Time to run (e.g., 02:00)")
# Batch sizes per stage
stage_1_batch_size = models.IntegerField(default=20, help_text="Keywords per batch")
stage_1_batch_size = models.IntegerField(default=50, help_text="Keywords per batch")
stage_2_batch_size = models.IntegerField(default=1, help_text="Clusters at a time")
stage_3_batch_size = models.IntegerField(default=20, help_text="Ideas per batch")
stage_4_batch_size = models.IntegerField(default=1, help_text="Tasks - sequential")
@@ -88,6 +88,13 @@ class AutomationRun(models.Model):
total_credits_used = models.IntegerField(default=0)
# Initial queue snapshot - captured at run start for accurate progress tracking
initial_snapshot = models.JSONField(
default=dict,
blank=True,
help_text="Snapshot of initial queue sizes: {stage_1_initial, stage_2_initial, ..., total_initial_items}"
)
# JSON results per stage
stage_1_result = models.JSONField(null=True, blank=True, help_text="{keywords_processed, clusters_created, batches}")
stage_2_result = models.JSONField(null=True, blank=True, help_text="{clusters_processed, ideas_created}")


@@ -82,7 +82,7 @@ class AutomationViewSet(viewsets.ViewSet):
"is_enabled": true,
"frequency": "daily",
"scheduled_time": "02:00",
"stage_1_batch_size": 20,
"stage_1_batch_size": 50,
...
}
"""
@@ -387,16 +387,17 @@ class AutomationViewSet(viewsets.ViewSet):
return counts, total
# Stage 1: Keywords pending clustering (keep previous "pending" semantics but also return status breakdown)
# Stage 1: Keywords pending clustering
stage_1_counts, stage_1_total = _counts_by_status(
Keywords,
extra_filter={'disabled': False}
)
# pending definition used by the UI previously (new & not clustered)
# FIXED: Stage 1 pending = all keywords with status='new' (ready for clustering)
# This should match the "New" count shown in Keywords metric card
# Previously filtered by cluster__isnull=True which caused mismatch
stage_1_pending = Keywords.objects.filter(
site=site,
status='new',
cluster__isnull=True,
disabled=False
).count()
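The fix above redefines Stage 1 pending as "status is new and not disabled", dropping the `cluster__isnull=True` condition that made it disagree with the Keywords metric card. A sketch of the corrected counting rule alongside the `_counts_by_status` breakdown (dicts stand in for Keywords rows; illustrative only):

```python
from collections import Counter

def stage_counts(keywords):
    """Sketch of _counts_by_status plus the fixed pending rule:
    pending = active keywords with status 'new', regardless of
    whether a cluster has been assigned."""
    active = [k for k in keywords if not k.get('disabled')]
    counts = Counter(k['status'] for k in active)
    pending = sum(1 for k in active if k['status'] == 'new')
    return dict(counts), len(active), pending
```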
@@ -714,3 +715,237 @@ class AutomationViewSet(viewsets.ViewSet):
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['get'], url_path='run_progress')
def run_progress(self, request):
"""
GET /api/v1/automation/run_progress/?site_id=123&run_id=abc
Unified endpoint for ALL run progress data - global + per-stage.
Replaces multiple separate API calls with single comprehensive response.
Response includes:
- run: Current run status and metadata
- global_progress: Overall pipeline progress percentage
- stages: Per-stage progress with input/output/processed counts
- metrics: Credits used, duration, errors
"""
site_id = request.query_params.get('site_id')
run_id = request.query_params.get('run_id')
if not site_id:
return Response(
{'error': 'site_id required'},
status=status.HTTP_400_BAD_REQUEST
)
try:
site = get_object_or_404(Site, id=site_id, account=request.user.account)
# If no run_id, get current run
if run_id:
run = AutomationRun.objects.get(run_id=run_id, site=site)
else:
run = AutomationRun.objects.filter(
site=site,
status__in=['running', 'paused']
).order_by('-started_at').first()
if not run:
return Response({
'run': None,
'global_progress': None,
'stages': [],
'metrics': None
})
# Build unified response
response = self._build_run_progress_response(site, run)
return Response(response)
except AutomationRun.DoesNotExist:
return Response(
{'error': 'Run not found'},
status=status.HTTP_404_NOT_FOUND
)
except Exception as e:
return Response(
{'error': str(e)},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
def _build_run_progress_response(self, site, run):
"""Build comprehensive progress response for a run"""
from igny8_core.business.planning.models import Keywords, Clusters, ContentIdeas
from igny8_core.business.content.models import Tasks, Content, Images
from django.db.models import Count
from django.utils import timezone
initial_snapshot = run.initial_snapshot or {}
# Helper to get processed count from result
def get_processed(result, key):
if not result:
return 0
return result.get(key, 0)
# Helper to get output count from result
def get_output(result, key):
if not result:
return 0
return result.get(key, 0)
# Stage-specific key mapping for processed counts
processed_keys = {
1: 'keywords_processed',
2: 'clusters_processed',
3: 'ideas_processed',
4: 'tasks_processed',
5: 'content_processed',
6: 'images_processed',
7: 'ready_for_review'
}
# Stage-specific key mapping for output counts
output_keys = {
1: 'clusters_created',
2: 'ideas_created',
3: 'tasks_created',
4: 'content_created',
5: 'prompts_created',
6: 'images_generated',
7: 'ready_for_review'
}
# Build stages array
stages = []
total_processed = 0
total_initial = initial_snapshot.get('total_initial_items', 0)
stage_names = {
1: 'Keywords → Clusters',
2: 'Clusters → Ideas',
3: 'Ideas → Tasks',
4: 'Tasks → Content',
5: 'Content → Image Prompts',
6: 'Image Prompts → Images',
7: 'Manual Review Gate'
}
stage_types = {
1: 'AI', 2: 'AI', 3: 'Local', 4: 'AI', 5: 'AI', 6: 'AI', 7: 'Manual'
}
for stage_num in range(1, 8):
result = getattr(run, f'stage_{stage_num}_result', None)
initial_count = initial_snapshot.get(f'stage_{stage_num}_initial', 0)
processed = get_processed(result, processed_keys[stage_num])
output = get_output(result, output_keys[stage_num])
total_processed += processed
# Determine stage status
if run.current_stage > stage_num:
stage_status = 'completed'
elif run.current_stage == stage_num:
stage_status = 'active'
else:
stage_status = 'pending'
# Calculate progress percentage for this stage
progress = 0
if initial_count > 0:
progress = round((processed / initial_count) * 100)
elif run.current_stage > stage_num:
progress = 100
stage_data = {
'number': stage_num,
'name': stage_names[stage_num],
'type': stage_types[stage_num],
'status': stage_status,
'input_count': initial_count,
'output_count': output,
'processed_count': processed,
'progress_percentage': min(progress, 100),
'credits_used': result.get('credits_used', 0) if result else 0,
'time_elapsed': result.get('time_elapsed', '') if result else '',
}
# Add currently_processing for active stage
if stage_status == 'active':
try:
service = AutomationService.from_run_id(run.run_id)
processing_state = service.get_current_processing_state()
if processing_state:
stage_data['currently_processing'] = processing_state.get('currently_processing', [])
stage_data['up_next'] = processing_state.get('up_next', [])
stage_data['remaining_count'] = processing_state.get('remaining_count', 0)
except Exception:
pass
stages.append(stage_data)
# Calculate global progress
# Stages 1-6 are automation stages, Stage 7 is manual review (not counted)
# Progress = weighted average of stages 1-6 completion
global_percentage = 0
if run.status == 'completed':
# If run is completed (after Stage 6), show 100%
global_percentage = 100
elif run.status in ('cancelled', 'failed'):
# Keep current progress for cancelled/failed
if total_initial > 0:
global_percentage = round((total_processed / total_initial) * 100)
else:
# Calculate based on completed stages (1-6 only)
# Each of the 6 automation stages contributes ~16.67% to total
completed_stages = min(max(run.current_stage - 1, 0), 6)
stage_weight = 100 / 6 # Each stage is ~16.67%
# Base progress from completed stages
base_progress = completed_stages * stage_weight
# Add partial progress from current stage
current_stage_progress = 0
if run.current_stage <= 6:
current_result = getattr(run, f'stage_{run.current_stage}_result', None)
current_initial = initial_snapshot.get(f'stage_{run.current_stage}_initial', 0)
if current_initial > 0 and current_result:
processed_key = processed_keys.get(run.current_stage, '')
current_processed = current_result.get(processed_key, 0)
current_stage_progress = (current_processed / current_initial) * stage_weight
global_percentage = round(base_progress + current_stage_progress)
# Calculate duration
duration_seconds = 0
if run.started_at:
end_time = run.completed_at or timezone.now()
duration_seconds = int((end_time - run.started_at).total_seconds())
return {
'run': {
'run_id': run.run_id,
'status': run.status,
'current_stage': run.current_stage,
'trigger_type': run.trigger_type,
'started_at': run.started_at,
'completed_at': run.completed_at,
'paused_at': run.paused_at,
},
'global_progress': {
'total_items': total_initial,
'completed_items': total_processed,
'percentage': min(global_percentage, 100),
'current_stage': run.current_stage,
'total_stages': 7
},
'stages': stages,
'metrics': {
'credits_used': run.total_credits_used,
'duration_seconds': duration_seconds,
'errors': []
},
'initial_snapshot': initial_snapshot
}
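The weighted global-progress math above (stages 1–6 each contribute one sixth; stage 7, the manual review gate, is excluded) can be sketched as a standalone helper. This is a minimal sketch: `completed_stages`, `current_processed`, and `current_initial` stand in for the run's `current_stage` and the `initial_snapshot` fields.

```python
def global_percentage(completed_stages, current_processed, current_initial):
    """Stages 1-6 each contribute ~16.67% to overall progress; stage 7 is not counted.

    completed_stages: number of fully completed automation stages (clamped to 0..6)
    current_processed / current_initial: progress of the currently active stage
    """
    stage_weight = 100 / 6
    base = min(max(completed_stages, 0), 6) * stage_weight
    partial = 0.0
    if current_initial > 0:
        # Partial credit for the active stage, scaled to one stage's weight
        partial = (current_processed / current_initial) * stage_weight
    return min(round(base + partial), 100)
```

With two stages done and the third half-finished, this yields `round(2 * 16.67 + 8.33) = 42`; six completed stages pin the result at 100.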

View File

@@ -5,17 +5,22 @@ NOTE: Most billing models are registered in modules/billing/admin.py
with full workflow functionality. This file contains legacy/minimal registrations.
"""
from django.contrib import admin
from django.contrib import messages
from django.utils.html import format_html
from unfold.admin import ModelAdmin
from igny8_core.admin.base import AccountAdminMixin, Igny8ModelAdmin
from .models import (
CreditCostConfig,
AccountPaymentMethod,
Invoice,
Payment,
CreditPackage,
PaymentMethodConfig,
)
# NOTE: Most billing models are now registered in modules/billing/admin.py
# This file is kept for reference but all registrations are commented out
# to avoid AlreadyRegistered errors
# from .models import (
# CreditCostConfig,
# AccountPaymentMethod,
# Invoice,
# Payment,
# CreditPackage,
# PaymentMethodConfig,
# )
# CreditCostConfig - DUPLICATE - Registered in modules/billing/admin.py with better features
@@ -46,33 +51,21 @@ from .models import (
# ...existing implementation...
# PaymentMethodConfig and AccountPaymentMethod are kept here as they're not duplicated
# or have minimal implementations that don't conflict
# AccountPaymentMethod - DUPLICATE - Registered in modules/billing/admin.py with AccountAdminMixin
# Commenting out to avoid AlreadyRegistered error
# The version in modules/billing/admin.py is preferred as it includes AccountAdminMixin
@admin.register(AccountPaymentMethod)
class AccountPaymentMethodAdmin(Igny8ModelAdmin):
list_display = [
'display_name',
'type',
'account',
'is_default',
'is_enabled',
'country_code',
'is_verified',
'updated_at',
]
list_filter = ['type', 'is_default', 'is_enabled', 'is_verified', 'country_code']
search_fields = ['display_name', 'account__name', 'account__id']
readonly_fields = ['created_at', 'updated_at']
fieldsets = (
('Payment Method', {
'fields': ('account', 'type', 'display_name', 'is_default', 'is_enabled', 'is_verified', 'country_code')
}),
('Instructions / Metadata', {
'fields': ('instructions', 'metadata')
}),
('Timestamps', {
'fields': ('created_at', 'updated_at'),
'classes': ('collapse',)
}),
)
# from import_export.admin import ExportMixin
# from import_export import resources
#
# class AccountPaymentMethodResource(resources.ModelResource):
# """Resource class for exporting Account Payment Methods"""
# class Meta:
# model = AccountPaymentMethod
# fields = ('id', 'display_name', 'type', 'account__name', 'is_default',
# 'is_enabled', 'is_verified', 'country_code', 'created_at')
# export_order = fields
#
# @admin.register(AccountPaymentMethod)
# class AccountPaymentMethodAdmin(ExportMixin, Igny8ModelAdmin):
# ... (see modules/billing/admin.py for active registration)

View File

@@ -192,22 +192,32 @@ class BillingViewSet(viewsets.GenericViewSet):
@action(detail=False, methods=['get'], url_path='payment-methods', permission_classes=[AllowAny])
def list_payment_methods(self, request):
"""
Get available payment methods for a specific country.
Get available payment methods filtered by country code.
Public endpoint - only returns enabled payment methods.
Does not expose sensitive configuration details.
Query params:
country: ISO 2-letter country code (default: 'US')
Query Parameters:
- country_code: ISO 2-letter country code (e.g., 'US', 'PK')
Returns payment methods filtered by country.
Returns methods for:
1. Specified country (country_code=XX)
2. Global methods (country_code='*')
"""
country = request.GET.get('country', 'US').upper()
country_code = request.query_params.get('country_code', '').upper()
# Get country-specific methods
methods = PaymentMethodConfig.objects.filter(
country_code=country,
is_enabled=True
).order_by('sort_order')
if country_code:
# Filter by specific country OR global methods
methods = PaymentMethodConfig.objects.filter(
is_enabled=True
).filter(
Q(country_code=country_code) | Q(country_code='*')
).order_by('sort_order')
else:
# No country specified - return only global methods
methods = PaymentMethodConfig.objects.filter(
is_enabled=True,
country_code='*'
).order_by('sort_order')
# Serialize using the proper serializer
serializer = PaymentMethodConfigSerializer(methods, many=True)
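The country filter above reduces to a simple predicate: with a country code, return enabled methods matching that country or the global `'*'` wildcard; without one, only global methods. A sketch of that selection over plain dicts (hypothetical records, not the actual `PaymentMethodConfig` model):

```python
def select_methods(methods, country_code=''):
    """Mirror the view's filtering: enabled AND (matching country OR global '*')."""
    country_code = country_code.upper()
    if country_code:
        keep = lambda m: m['is_enabled'] and m['country_code'] in (country_code, '*')
    else:
        keep = lambda m: m['is_enabled'] and m['country_code'] == '*'
    # Order mirrors .order_by('sort_order')
    return sorted((m for m in methods if keep(m)), key=lambda m: m['sort_order'])
```

For `country_code='PK'` this returns global methods alongside Pakistan-specific ones, which is why a country-specific list never loses the universally available options.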
@@ -609,7 +619,7 @@ class BillingViewSet(viewsets.GenericViewSet):
class InvoiceViewSet(AccountModelViewSet):
"""ViewSet for user-facing invoices"""
queryset = Invoice.objects.all().select_related('account')
queryset = Invoice.objects.all().select_related('account', 'subscription', 'subscription__plan')
permission_classes = [IsAuthenticatedAndActive, HasTenantAccess]
pagination_class = CustomPageNumberPagination
@@ -620,6 +630,43 @@ class InvoiceViewSet(AccountModelViewSet):
queryset = queryset.filter(account=self.request.account)
return queryset.order_by('-invoice_date', '-created_at')
def _serialize_invoice(self, invoice):
"""Serialize an invoice with all needed fields"""
# Build subscription data if exists
subscription_data = None
if invoice.subscription:
plan_data = None
if invoice.subscription.plan:
plan_data = {
'id': invoice.subscription.plan.id,
'name': invoice.subscription.plan.name,
'slug': invoice.subscription.plan.slug,
}
subscription_data = {
'id': invoice.subscription.id,
'plan': plan_data,
}
return {
'id': invoice.id,
'invoice_number': invoice.invoice_number,
'status': invoice.status,
'total': str(invoice.total), # Alias for compatibility
'total_amount': str(invoice.total),
'subtotal': str(invoice.subtotal),
'tax_amount': str(invoice.tax),
'currency': invoice.currency,
'invoice_date': invoice.invoice_date.isoformat(),
'due_date': invoice.due_date.isoformat(),
'paid_at': invoice.paid_at.isoformat() if invoice.paid_at else None,
'line_items': invoice.line_items,
'billing_email': invoice.billing_email,
'notes': invoice.notes,
'payment_method': invoice.payment_method,
'subscription': subscription_data,
'created_at': invoice.created_at.isoformat(),
}
def list(self, request):
"""List invoices for current account"""
queryset = self.get_queryset()
@@ -633,25 +680,7 @@ class InvoiceViewSet(AccountModelViewSet):
page = paginator.paginate_queryset(queryset, request)
# Serialize invoice data
results = []
for invoice in (page if page is not None else []):
results.append({
'id': invoice.id,
'invoice_number': invoice.invoice_number,
'status': invoice.status,
'total': str(invoice.total), # Alias for compatibility
'total_amount': str(invoice.total),
'subtotal': str(invoice.subtotal),
'tax_amount': str(invoice.tax),
'currency': invoice.currency,
'invoice_date': invoice.invoice_date.isoformat(),
'due_date': invoice.due_date.isoformat(),
'paid_at': invoice.paid_at.isoformat() if invoice.paid_at else None,
'line_items': invoice.line_items,
'billing_email': invoice.billing_email,
'notes': invoice.notes,
'created_at': invoice.created_at.isoformat(),
})
results = [self._serialize_invoice(invoice) for invoice in (page if page is not None else [])]
return paginated_response(
{'count': paginator.page.paginator.count, 'next': paginator.get_next_link(), 'previous': paginator.get_previous_link(), 'results': results},
@@ -662,24 +691,7 @@ class InvoiceViewSet(AccountModelViewSet):
"""Get invoice detail"""
try:
invoice = self.get_queryset().get(pk=pk)
data = {
'id': invoice.id,
'invoice_number': invoice.invoice_number,
'status': invoice.status,
'total': str(invoice.total), # Alias for compatibility
'total_amount': str(invoice.total),
'subtotal': str(invoice.subtotal),
'tax_amount': str(invoice.tax),
'currency': invoice.currency,
'invoice_date': invoice.invoice_date.isoformat(),
'due_date': invoice.due_date.isoformat(),
'paid_at': invoice.paid_at.isoformat() if invoice.paid_at else None,
'line_items': invoice.line_items,
'billing_email': invoice.billing_email,
'notes': invoice.notes,
'created_at': invoice.created_at.isoformat(),
}
return success_response(data=data, request=request)
return success_response(data=self._serialize_invoice(invoice), request=request)
except Invoice.DoesNotExist:
return error_response(error='Invoice not found', status_code=404, request=request)
@@ -687,14 +699,38 @@ class InvoiceViewSet(AccountModelViewSet):
def download_pdf(self, request, pk=None):
"""Download invoice PDF"""
try:
invoice = self.get_queryset().get(pk=pk)
invoice = self.get_queryset().select_related(
'account', 'account__owner', 'subscription', 'subscription__plan'
).get(pk=pk)
pdf_bytes = InvoiceService.generate_pdf(invoice)
# Build descriptive filename
plan_name = ''
if invoice.subscription and invoice.subscription.plan:
plan_name = invoice.subscription.plan.name.replace(' ', '-')
elif invoice.metadata and 'plan_name' in invoice.metadata:
plan_name = invoice.metadata.get('plan_name', '').replace(' ', '-')
date_str = invoice.invoice_date.strftime('%Y-%m-%d') if invoice.invoice_date else ''
filename_parts = ['IGNY8', 'Invoice', invoice.invoice_number]
if plan_name:
filename_parts.append(plan_name)
if date_str:
filename_parts.append(date_str)
filename = '-'.join(filename_parts) + '.pdf'
response = HttpResponse(pdf_bytes, content_type='application/pdf')
response['Content-Disposition'] = f'attachment; filename="invoice-{invoice.invoice_number}.pdf"'
response['Content-Disposition'] = f'attachment; filename="{filename}"'
return response
except Invoice.DoesNotExist:
return error_response(error='Invoice not found', status_code=404, request=request)
except Exception as e:
import logging
logger = logging.getLogger(__name__)
logger.error(f'PDF generation failed for invoice {pk}: {str(e)}', exc_info=True)
return error_response(error=f'Failed to generate PDF: {str(e)}', status_code=500, request=request)
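The filename assembly above joins the non-empty parts with hyphens; a minimal sketch of that logic (the sample invoice number, plan, and date are hypothetical):

```python
def invoice_filename(invoice_number, plan_name='', date_str=''):
    """Build 'IGNY8-Invoice-<number>[-<plan>][-<date>].pdf', skipping empty parts."""
    parts = ['IGNY8', 'Invoice', invoice_number]
    if plan_name:
        # Spaces in the plan name would break the hyphen-delimited scheme
        parts.append(plan_name.replace(' ', '-'))
    if date_str:
        parts.append(date_str)
    return '-'.join(parts) + '.pdf'
```

For example, `invoice_filename('INV-0042', 'Pro Plan', '2026-01-09')` produces `'IGNY8-Invoice-INV-0042-Pro-Plan-2026-01-09.pdf'`, while an invoice with no subscription falls back to the bare `'IGNY8-Invoice-<number>.pdf'` form.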
class PaymentViewSet(AccountModelViewSet):
@@ -769,6 +805,7 @@ class PaymentViewSet(AccountModelViewSet):
payment_method = request.data.get('payment_method', 'bank_transfer')
reference = request.data.get('reference', '')
notes = request.data.get('notes', '')
currency = request.data.get('currency', 'USD')
if not amount:
return error_response(error='Amount is required', status_code=400, request=request)
@@ -778,18 +815,30 @@ class PaymentViewSet(AccountModelViewSet):
invoice = None
if invoice_id:
invoice = Invoice.objects.get(id=invoice_id, account=account)
# Use invoice currency if not explicitly provided
if not request.data.get('currency') and invoice:
currency = invoice.currency
payment = Payment.objects.create(
account=account,
invoice=invoice,
amount=amount,
currency='USD',
currency=currency,
payment_method=payment_method,
status='pending_approval',
manual_reference=reference,
manual_notes=notes,
)
# Send payment confirmation email
try:
from igny8_core.business.billing.services.email_service import BillingEmailService
BillingEmailService.send_payment_confirmation_email(payment, account)
except Exception as e:
import logging
logger = logging.getLogger(__name__)
logger.error(f'Failed to send payment confirmation email: {str(e)}')
return success_response(
data={'id': payment.id, 'status': payment.status},
message='Manual payment submitted for approval',
@@ -833,11 +882,16 @@ class CreditPackageViewSet(viewsets.ReadOnlyModelViewSet):
class AccountPaymentMethodViewSet(AccountModelViewSet):
"""ViewSet for account payment methods"""
"""ViewSet for account payment methods - Full CRUD support"""
queryset = AccountPaymentMethod.objects.all()
permission_classes = [IsAuthenticatedAndActive, HasTenantAccess]
pagination_class = CustomPageNumberPagination
def get_serializer_class(self):
"""Return serializer class"""
from igny8_core.modules.billing.serializers import AccountPaymentMethodSerializer
return AccountPaymentMethodSerializer
def get_queryset(self):
"""Filter payment methods by account"""
queryset = super().get_queryset()
@@ -845,6 +899,15 @@ class AccountPaymentMethodViewSet(AccountModelViewSet):
queryset = queryset.filter(account=self.request.account)
return queryset.order_by('-is_default', 'type')
def get_serializer_context(self):
"""Add account to serializer context"""
context = super().get_serializer_context()
account = getattr(self.request, 'account', None)
if not account and hasattr(self.request, 'user') and self.request.user:
account = getattr(self.request.user, 'account', None)
context['account'] = account
return context
def list(self, request):
"""List payment methods for current account"""
queryset = self.get_queryset()
@@ -854,18 +917,108 @@ class AccountPaymentMethodViewSet(AccountModelViewSet):
results = []
for method in (page if page is not None else []):
results.append({
'id': str(method.id),
'id': method.id,
'type': method.type,
'display_name': method.display_name,
'is_default': method.is_default,
'is_enabled': method.is_enabled if hasattr(method, 'is_enabled') else True,
'is_enabled': method.is_enabled,
'is_verified': method.is_verified,
'instructions': method.instructions,
'metadata': method.metadata,
'created_at': method.created_at.isoformat() if method.created_at else None,
'updated_at': method.updated_at.isoformat() if method.updated_at else None,
})
return paginated_response(
{'count': paginator.page.paginator.count, 'next': paginator.get_next_link(), 'previous': paginator.get_previous_link(), 'results': results},
request=request
)
def create(self, request, *args, **kwargs):
"""Create a new payment method"""
serializer = self.get_serializer(data=request.data)
try:
serializer.is_valid(raise_exception=True)
instance = serializer.save()
return success_response(
data={
'id': instance.id,
'type': instance.type,
'display_name': instance.display_name,
'is_default': instance.is_default,
'is_enabled': instance.is_enabled,
'is_verified': instance.is_verified,
'instructions': instance.instructions,
},
message='Payment method created successfully',
request=request,
status_code=status.HTTP_201_CREATED
)
except Exception as e:
return error_response(
error=str(e),
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
def update(self, request, *args, **kwargs):
"""Update a payment method"""
partial = kwargs.pop('partial', False)
instance = self.get_object()
serializer = self.get_serializer(instance, data=request.data, partial=partial)
try:
serializer.is_valid(raise_exception=True)
instance = serializer.save()
return success_response(
data={
'id': instance.id,
'type': instance.type,
'display_name': instance.display_name,
'is_default': instance.is_default,
'is_enabled': instance.is_enabled,
'is_verified': instance.is_verified,
'instructions': instance.instructions,
},
message='Payment method updated successfully',
request=request
)
except Exception as e:
return error_response(
error=str(e),
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
def destroy(self, request, *args, **kwargs):
"""Delete a payment method"""
try:
instance = self.get_object()
# Don't allow deleting the only default payment method
if instance.is_default:
other_methods = AccountPaymentMethod.objects.filter(
account=instance.account
).exclude(pk=instance.pk).count()
if other_methods == 0:
return error_response(
error='Cannot delete the only payment method',
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
instance.delete()
return success_response(
data=None,
message='Payment method deleted successfully',
request=request,
status_code=status.HTTP_204_NO_CONTENT
)
except Exception as e:
return error_response(
error=str(e),
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
# ============================================================================

View File

@@ -1,6 +1,9 @@
"""
Management command to backfill usage tracking for existing content.
Usage: python manage.py backfill_usage [account_id]
NOTE: Since the simplification of limits (Jan 2026), this command only
tracks Ahrefs queries. All other usage is tracked via CreditUsageLog.
"""
from django.core.management.base import BaseCommand
from django.apps import apps
@@ -9,7 +12,7 @@ from igny8_core.auth.models import Account
class Command(BaseCommand):
help = 'Backfill usage tracking for existing content'
help = 'Backfill usage tracking for existing content (Ahrefs queries only)'
def add_arguments(self, parser):
parser.add_argument(
@@ -30,10 +33,6 @@ class Command(BaseCommand):
else:
accounts = Account.objects.filter(plan__isnull=False).select_related('plan')
ContentIdeas = apps.get_model('planner', 'ContentIdeas')
Content = apps.get_model('writer', 'Content')
Images = apps.get_model('writer', 'Images')
total_accounts = accounts.count()
self.stdout.write(f'Processing {total_accounts} account(s)...\n')
@@ -43,45 +42,14 @@ class Command(BaseCommand):
self.stdout.write(f'Plan: {account.plan.name if account.plan else "No Plan"}')
self.stdout.write('=' * 60)
# Count content ideas
ideas_count = ContentIdeas.objects.filter(account=account).count()
self.stdout.write(f'Content Ideas: {ideas_count}')
# Ahrefs queries are tracked in CreditUsageLog with operation_type='ahrefs_query'
# We don't backfill these as they should be tracked in real-time going forward
# This command is primarily for verification
# Count content words
from django.db.models import Sum
total_words = Content.objects.filter(account=account).aggregate(
total=Sum('word_count')
)['total'] or 0
self.stdout.write(f'Content Words: {total_words}')
# Count images
total_images = Images.objects.filter(account=account).count()
images_with_prompts = Images.objects.filter(
account=account, prompt__isnull=False
).exclude(prompt='').count()
self.stdout.write(f'Total Images: {total_images}')
self.stdout.write(f'Images with Prompts: {images_with_prompts}')
# Update account usage fields
with transaction.atomic():
account.usage_content_ideas = ideas_count
account.usage_content_words = total_words
account.usage_images_basic = total_images
account.usage_images_premium = 0 # Premium not implemented yet
account.usage_image_prompts = images_with_prompts
account.save(update_fields=[
'usage_content_ideas', 'usage_content_words',
'usage_images_basic', 'usage_images_premium', 'usage_image_prompts',
'updated_at'
])
self.stdout.write(self.style.SUCCESS('\n✅ Updated usage tracking:'))
self.stdout.write(f' usage_content_ideas: {account.usage_content_ideas}')
self.stdout.write(f' usage_content_words: {account.usage_content_words}')
self.stdout.write(f' usage_images_basic: {account.usage_images_basic}')
self.stdout.write(f' usage_images_premium: {account.usage_images_premium}')
self.stdout.write(f' usage_image_prompts: {account.usage_image_prompts}\n')
self.stdout.write(f'Ahrefs queries used this month: {account.usage_ahrefs_queries}')
self.stdout.write(self.style.SUCCESS('\n✅ Verified usage tracking'))
self.stdout.write(f' usage_ahrefs_queries: {account.usage_ahrefs_queries}\n')
self.stdout.write('=' * 60)
self.stdout.write(self.style.SUCCESS('Backfill complete!'))
self.stdout.write(self.style.SUCCESS('Verification complete!'))
self.stdout.write('=' * 60)

View File

@@ -0,0 +1,48 @@
"""
Migration: Simplify payment methods to global (remove country-specific filtering)
This migration:
1. Updates existing PaymentMethodConfig records to use country_code='*' (global)
2. Removes duplicate payment methods per country, keeping only one global config per method
"""
from django.db import migrations
def migrate_to_global_payment_methods(apps, schema_editor):
"""
Convert country-specific payment methods to global.
For each payment_method type, keep only one configuration with country_code='*'
"""
PaymentMethodConfig = apps.get_model('billing', 'PaymentMethodConfig')
# Get all unique payment methods
payment_methods = PaymentMethodConfig.objects.values_list('payment_method', flat=True).distinct()
for method in payment_methods:
# Get all configs for this payment method
configs = PaymentMethodConfig.objects.filter(payment_method=method).order_by('sort_order', 'id')
if configs.exists():
# Keep the first one and make it global
first_config = configs.first()
first_config.country_code = '*'
first_config.save(update_fields=['country_code'])
# Delete duplicates (other country-specific versions)
configs.exclude(id=first_config.id).delete()
def reverse_migration(apps, schema_editor):
"""Reverse is a no-op - can't restore original country codes"""
pass
class Migration(migrations.Migration):
dependencies = [
('billing', '0007_simplify_payment_statuses'),
]
operations = [
migrations.RunPython(migrate_to_global_payment_methods, reverse_migration),
]
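The data migration's keep-first-and-globalize step can be sketched over plain dicts (hypothetical records standing in for `PaymentMethodConfig` rows): for each `payment_method`, the first config by `(sort_order, id)` survives with `country_code='*'` and the country-specific duplicates are dropped.

```python
def globalize_payment_methods(configs):
    """Keep one config per payment_method (first by sort_order, then id),
    rewrite its country_code to the global '*' wildcard, drop the rest."""
    kept = {}
    for cfg in sorted(configs, key=lambda c: (c['payment_method'], c['sort_order'], c['id'])):
        method = cfg['payment_method']
        if method not in kept:
            # Copy so the caller's input records are left untouched
            kept[method] = dict(cfg, country_code='*')
    return list(kept.values())
```

As in the migration, the reverse direction is a no-op: once duplicates are deleted, the original per-country rows cannot be reconstructed.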

View File

@@ -0,0 +1,359 @@
"""
Migration: Seed AIModelConfig from constants.py
This migration populates the AIModelConfig table with the current models
from ai/constants.py, enabling database-driven model configuration.
"""
from decimal import Decimal
from django.db import migrations
def seed_ai_models(apps, schema_editor):
"""
Seed AIModelConfig with models from constants.py
"""
AIModelConfig = apps.get_model('billing', 'AIModelConfig')
# Text Models (from MODEL_RATES)
text_models = [
{
'model_name': 'gpt-4.1',
'display_name': 'GPT-4.1 - Balanced Performance',
'model_type': 'text',
'provider': 'openai',
'input_cost_per_1m': Decimal('2.00'),
'output_cost_per_1m': Decimal('8.00'),
'context_window': 128000,
'max_output_tokens': 16384,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': True, # Default text model
'sort_order': 1,
'description': 'Default model - good balance of cost and capability',
},
{
'model_name': 'gpt-4o-mini',
'display_name': 'GPT-4o Mini - Fast & Affordable',
'model_type': 'text',
'provider': 'openai',
'input_cost_per_1m': Decimal('0.15'),
'output_cost_per_1m': Decimal('0.60'),
'context_window': 128000,
'max_output_tokens': 16384,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 2,
'description': 'Best for high-volume tasks where cost matters',
},
{
'model_name': 'gpt-4o',
'display_name': 'GPT-4o - High Quality',
'model_type': 'text',
'provider': 'openai',
'input_cost_per_1m': Decimal('2.50'),
'output_cost_per_1m': Decimal('10.00'),
'context_window': 128000,
'max_output_tokens': 16384,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 3,
'description': 'Premium model for complex tasks requiring best quality',
},
{
'model_name': 'gpt-5.1',
'display_name': 'GPT-5.1 - Latest Generation',
'model_type': 'text',
'provider': 'openai',
'input_cost_per_1m': Decimal('1.25'),
'output_cost_per_1m': Decimal('10.00'),
'context_window': 200000,
'max_output_tokens': 32768,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 4,
'description': 'Next-gen model with improved reasoning',
},
{
'model_name': 'gpt-5.2',
'display_name': 'GPT-5.2 - Most Advanced',
'model_type': 'text',
'provider': 'openai',
'input_cost_per_1m': Decimal('1.75'),
'output_cost_per_1m': Decimal('14.00'),
'context_window': 200000,
'max_output_tokens': 65536,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 5,
'description': 'Most capable model for enterprise-grade tasks',
},
]
# Image Models (from IMAGE_MODEL_RATES)
image_models = [
{
'model_name': 'dall-e-3',
'display_name': 'DALL-E 3 - Premium Images',
'model_type': 'image',
'provider': 'openai',
'cost_per_image': Decimal('0.040'),
'valid_sizes': ['1024x1024', '1024x1792', '1792x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': True, # Default image model
'sort_order': 1,
'description': 'Best quality image generation, good for hero images and marketing',
},
{
'model_name': 'dall-e-2',
'display_name': 'DALL-E 2 - Standard Images',
'model_type': 'image',
'provider': 'openai',
'cost_per_image': Decimal('0.020'),
'valid_sizes': ['256x256', '512x512', '1024x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 2,
'description': 'Lower cost option for bulk image generation',
},
{
'model_name': 'gpt-image-1',
'display_name': 'GPT Image 1 - Advanced',
'model_type': 'image',
'provider': 'openai',
'cost_per_image': Decimal('0.042'),
'valid_sizes': ['1024x1024', '1024x1792', '1792x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 3,
'description': 'Advanced image model with enhanced capabilities',
},
{
'model_name': 'gpt-image-1-mini',
'display_name': 'GPT Image 1 Mini - Fast',
'model_type': 'image',
'provider': 'openai',
'cost_per_image': Decimal('0.011'),
'valid_sizes': ['1024x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 4,
'description': 'Fastest and most affordable image model',
},
]
# Runware Image Models (from existing integration)
runware_models = [
{
'model_name': 'runware:100@1',
'display_name': 'Runware Standard',
'model_type': 'image',
'provider': 'runware',
'cost_per_image': Decimal('0.008'),
'valid_sizes': ['512x512', '768x768', '1024x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 10,
'description': 'Runware image generation - most affordable',
},
]
# Bria AI Image Models
bria_models = [
{
'model_name': 'bria-2.3',
'display_name': 'Bria 2.3 High Quality',
'model_type': 'image',
'provider': 'bria',
'cost_per_image': Decimal('0.015'),
'valid_sizes': ['512x512', '768x768', '1024x1024', '1024x1792', '1792x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 11,
'description': 'Bria 2.3 - High quality image generation',
},
{
'model_name': 'bria-2.3-fast',
'display_name': 'Bria 2.3 Fast',
'model_type': 'image',
'provider': 'bria',
'cost_per_image': Decimal('0.010'),
'valid_sizes': ['512x512', '768x768', '1024x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 12,
'description': 'Bria 2.3 Fast - Quick generation, lower cost',
},
{
'model_name': 'bria-2.2',
'display_name': 'Bria 2.2 Standard',
'model_type': 'image',
'provider': 'bria',
'cost_per_image': Decimal('0.012'),
'valid_sizes': ['512x512', '768x768', '1024x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 13,
'description': 'Bria 2.2 - Standard image generation',
},
]
# Anthropic Claude Text Models
anthropic_models = [
{
'model_name': 'claude-3-5-sonnet-20241022',
'display_name': 'Claude 3.5 Sonnet (Latest)',
'model_type': 'text',
'provider': 'anthropic',
'input_cost_per_1m': Decimal('3.00'),
'output_cost_per_1m': Decimal('15.00'),
'context_window': 200000,
'max_output_tokens': 8192,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 20,
'description': 'Claude 3.5 Sonnet - Best for most tasks, excellent reasoning',
},
{
'model_name': 'claude-3-5-haiku-20241022',
'display_name': 'Claude 3.5 Haiku (Fast)',
'model_type': 'text',
'provider': 'anthropic',
'input_cost_per_1m': Decimal('1.00'),
'output_cost_per_1m': Decimal('5.00'),
'context_window': 200000,
'max_output_tokens': 8192,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 21,
'description': 'Claude 3.5 Haiku - Fast and affordable',
},
{
'model_name': 'claude-3-opus-20240229',
'display_name': 'Claude 3 Opus',
'model_type': 'text',
'provider': 'anthropic',
'input_cost_per_1m': Decimal('15.00'),
'output_cost_per_1m': Decimal('75.00'),
'context_window': 200000,
'max_output_tokens': 4096,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 22,
'description': 'Claude 3 Opus - Most capable Claude model',
},
{
'model_name': 'claude-3-sonnet-20240229',
'display_name': 'Claude 3 Sonnet',
'model_type': 'text',
'provider': 'anthropic',
'input_cost_per_1m': Decimal('3.00'),
'output_cost_per_1m': Decimal('15.00'),
'context_window': 200000,
'max_output_tokens': 4096,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 23,
'description': 'Claude 3 Sonnet - Balanced performance and cost',
},
{
'model_name': 'claude-3-haiku-20240307',
'display_name': 'Claude 3 Haiku',
'model_type': 'text',
'provider': 'anthropic',
'input_cost_per_1m': Decimal('0.25'),
'output_cost_per_1m': Decimal('1.25'),
'context_window': 200000,
'max_output_tokens': 4096,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 24,
'description': 'Claude 3 Haiku - Most affordable Claude model',
},
]
# Create all models
all_models = text_models + image_models + runware_models + bria_models + anthropic_models
for model_data in all_models:
AIModelConfig.objects.update_or_create(
model_name=model_data['model_name'],
defaults=model_data
)
def reverse_migration(apps, schema_editor):
"""Remove seeded models"""
AIModelConfig = apps.get_model('billing', 'AIModelConfig')
seeded_models = [
'gpt-4.1', 'gpt-4o-mini', 'gpt-4o', 'gpt-5.1', 'gpt-5.2',
'dall-e-3', 'dall-e-2', 'gpt-image-1', 'gpt-image-1-mini',
'runware:100@1',
'bria-2.3', 'bria-2.3-fast', 'bria-2.2',
'claude-3-5-sonnet-20241022', 'claude-3-5-haiku-20241022',
'claude-3-opus-20240229', 'claude-3-sonnet-20240229', 'claude-3-haiku-20240307'
]
AIModelConfig.objects.filter(model_name__in=seeded_models).delete()
class Migration(migrations.Migration):
dependencies = [
('billing', '0008_global_payment_methods'),
]
operations = [
migrations.RunPython(seed_ai_models, reverse_migration),
]
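The seed migration above is idempotent because `update_or_create` is keyed on the unique `model_name`: re-running the migration refreshes each row's defaults instead of inserting duplicates. A minimal stand-in for that upsert behavior, using an in-memory dict instead of the ORM (names and sample rows here are illustrative, not from the migration):

```python
# Hypothetical stand-in for AIModelConfig.objects.update_or_create:
# an in-memory table keyed on the unique model_name, so re-running the
# seed never creates duplicates -- it only refreshes the defaults.
registry = {}

def update_or_create(model_name, defaults):
    created = model_name not in registry
    row = registry.setdefault(model_name, {'model_name': model_name})
    row.update(defaults)  # refresh fields on every run (idempotent)
    return row, created

seed = [
    {'model_name': 'dall-e-3', 'provider': 'openai', 'cost_per_image': 0.040},
    {'model_name': 'bria-2.3', 'provider': 'bria', 'cost_per_image': 0.015},
]

for data in seed:
    update_or_create(data['model_name'], data)
for data in seed:          # second run: updates only, no duplicates
    update_or_create(data['model_name'], data)

print(len(registry))  # -> 2
```

The paired `reverse_migration` then only needs to delete the known `model_name` list, which is why both directions are safe to repeat.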

View File

@@ -75,7 +75,12 @@ class CreditUsageLog(AccountBaseModel):
('idea_generation', 'Content Ideas Generation'),
('content_generation', 'Content Generation'),
('image_generation', 'Image Generation'),
('image_prompt_extraction', 'Image Prompt Extraction'),
('linking', 'Internal Linking'),
('optimization', 'Content Optimization'),
('reparse', 'Content Reparse'),
('site_page_generation', 'Site Page Generation'),
('site_structure_generation', 'Site Structure Generation'),
('ideas', 'Content Ideas Generation'), # Legacy
('content', 'Content Generation'), # Legacy
('images', 'Image Generation'), # Legacy
@@ -109,63 +114,48 @@ class CreditUsageLog(AccountBaseModel):
class CreditCostConfig(models.Model):
"""
Configurable credit costs per AI function
Admin-editable alternative to hardcoded constants
Fixed credit costs per operation type.
Per final-model-schemas.md:
| Field | Type | Required | Notes |
|-------|------|----------|-------|
| operation_type | CharField(50) PK | Yes | Unique operation ID |
| display_name | CharField(100) | Yes | Human-readable |
| base_credits | IntegerField | Yes | Fixed credits per operation |
| is_active | BooleanField | Yes | Enable/disable |
| description | TextField | No | Admin notes |
"""
# Operation identification
# Operation identification (Primary Key)
operation_type = models.CharField(
max_length=50,
unique=True,
choices=CreditUsageLog.OPERATION_TYPE_CHOICES,
help_text="AI operation type"
primary_key=True,
help_text="Unique operation ID (e.g., 'article_generation', 'image_generation')"
)
# Cost configuration
credits_cost = models.IntegerField(
# Human-readable name
display_name = models.CharField(
max_length=100,
help_text="Human-readable name"
)
# Fixed credits per operation
base_credits = models.IntegerField(
default=1,
validators=[MinValueValidator(0)],
help_text="Credits required for this operation"
help_text="Fixed credits per operation"
)
# Unit of measurement
UNIT_CHOICES = [
('per_request', 'Per Request'),
('per_100_words', 'Per 100 Words'),
('per_200_words', 'Per 200 Words'),
('per_item', 'Per Item'),
('per_image', 'Per Image'),
]
unit = models.CharField(
max_length=50,
default='per_request',
choices=UNIT_CHOICES,
help_text="What the cost applies to"
)
# Metadata
display_name = models.CharField(max_length=100, help_text="Human-readable name")
description = models.TextField(blank=True, help_text="What this operation does")
# Status
is_active = models.BooleanField(default=True, help_text="Enable/disable this operation")
# Audit fields
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
updated_by = models.ForeignKey(
settings.AUTH_USER_MODEL,
null=True,
blank=True,
on_delete=models.SET_NULL,
related_name='credit_cost_updates',
help_text="Admin who last updated"
is_active = models.BooleanField(
default=True,
help_text="Enable/disable this operation"
)
# Change tracking
previous_cost = models.IntegerField(
null=True,
# Admin notes
description = models.TextField(
blank=True,
help_text="Cost before last update (for audit trail)"
help_text="Admin notes about this operation"
)
# History tracking
@@ -179,18 +169,79 @@ class CreditCostConfig(models.Model):
ordering = ['operation_type']
def __str__(self):
-        return f"{self.display_name} - {self.credits_cost} credits {self.unit}"
+        return f"{self.display_name} - {self.base_credits} credits"
class BillingConfiguration(models.Model):
"""
System-wide billing configuration (Singleton).
Global settings for token-credit pricing.
"""
# Default token-to-credit ratio
default_tokens_per_credit = models.IntegerField(
default=100,
validators=[MinValueValidator(1)],
help_text="Default: How many tokens equal 1 credit (e.g., 100)"
)
# Credit pricing
default_credit_price_usd = models.DecimalField(
max_digits=10,
decimal_places=4,
default=Decimal('0.01'),
validators=[MinValueValidator(Decimal('0.0001'))],
help_text="Default price per credit in USD"
)
# Reporting settings
enable_token_based_reporting = models.BooleanField(
default=True,
help_text="Show token metrics in all reports"
)
# Rounding settings
ROUNDING_CHOICES = [
('up', 'Round Up'),
('down', 'Round Down'),
('nearest', 'Round to Nearest'),
]
credit_rounding_mode = models.CharField(
max_length=10,
default='up',
choices=ROUNDING_CHOICES,
help_text="How to round fractional credits"
)
# Audit fields
updated_at = models.DateTimeField(auto_now=True)
updated_by = models.ForeignKey(
settings.AUTH_USER_MODEL,
on_delete=models.SET_NULL,
null=True,
blank=True,
help_text="Admin who last updated"
)
class Meta:
app_label = 'billing'
db_table = 'igny8_billing_configuration'
verbose_name = 'Billing Configuration'
verbose_name_plural = 'Billing Configuration'
    def save(self, *args, **kwargs):
-        # Track cost changes
-        if self.pk:
-            try:
-                old = CreditCostConfig.objects.get(pk=self.pk)
-                if old.credits_cost != self.credits_cost:
-                    self.previous_cost = old.credits_cost
-            except CreditCostConfig.DoesNotExist:
-                pass
+        """Enforce singleton pattern"""
+        self.pk = 1
        super().save(*args, **kwargs)
@classmethod
def get_config(cls):
"""Get or create the singleton config"""
config, created = cls.objects.get_or_create(pk=1)
return config
def __str__(self):
return f"Billing Configuration (1 credit = {self.default_tokens_per_credit} tokens)"
class PlanLimitUsage(AccountBaseModel):
@@ -347,6 +398,20 @@ class Invoice(AccountBaseModel):
def tax_amount(self):
return self.tax
@property
def tax_rate(self):
"""Get tax rate from metadata if stored"""
if self.metadata and 'tax_rate' in self.metadata:
return self.metadata['tax_rate']
return 0
@property
def discount_amount(self):
"""Get discount amount from metadata if stored"""
if self.metadata and 'discount_amount' in self.metadata:
return self.metadata['discount_amount']
return 0
@property
def total_amount(self):
return self.total
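The new `tax_rate` and `discount_amount` properties read optional values out of the invoice's JSON `metadata` blob and fall back to 0 when absent, so callers never need to guard against missing keys. A self-contained sketch of that pattern (plain class, no ORM):

```python
# Sketch of metadata-backed properties with a zero fallback,
# as on the Invoice model above.
class Invoice:
    def __init__(self, total, metadata=None):
        self.total = total
        self.metadata = metadata or {}

    @property
    def tax_rate(self):
        if self.metadata and 'tax_rate' in self.metadata:
            return self.metadata['tax_rate']
        return 0

    @property
    def discount_amount(self):
        if self.metadata and 'discount_amount' in self.metadata:
            return self.metadata['discount_amount']
        return 0

print(Invoice(100).tax_rate)                    # -> 0
print(Invoice(100, {'tax_rate': 17}).tax_rate)  # -> 17
```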
@@ -436,6 +501,7 @@ class Payment(AccountBaseModel):
manual_reference = models.CharField(
max_length=255,
blank=True,
null=True,
help_text="Bank transfer reference, wallet transaction ID, etc."
)
manual_notes = models.TextField(blank=True, help_text="Admin notes for manual payments")
@@ -475,9 +541,24 @@ class Payment(AccountBaseModel):
models.Index(fields=['account', 'payment_method']),
models.Index(fields=['invoice', 'status']),
]
constraints = [
# Ensure manual_reference is unique when not null/empty
# This prevents duplicate bank transfer references
models.UniqueConstraint(
fields=['manual_reference'],
name='unique_manual_reference_when_not_null',
condition=models.Q(manual_reference__isnull=False) & ~models.Q(manual_reference='')
),
]
def __str__(self):
return f"Payment {self.id} - {self.get_payment_method_display()} - {self.amount} {self.currency}"
def save(self, *args, **kwargs):
"""Normalize empty manual_reference to NULL for proper uniqueness handling"""
if self.manual_reference == '':
self.manual_reference = None
super().save(*args, **kwargs)
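The `save()` override above exists because the conditional `UniqueConstraint` excludes NULL and empty values: two card payments with no reference must never collide, while two identical bank-transfer references must. Normalizing `''` to `None` keeps empty references out of the uniqueness check. A sketch of that logic with a set standing in for the database index (the set and helper are illustrative):

```python
# Why Payment.save() turns '' into None: NULL references never collide,
# duplicate real references do. A set stands in for the unique index.
seen_refs = set()

def save_payment(manual_reference):
    if manual_reference == '':
        manual_reference = None      # normalize empty string to NULL
    if manual_reference is not None:
        if manual_reference in seen_refs:
            raise ValueError(f"duplicate manual_reference: {manual_reference}")
        seen_refs.add(manual_reference)
    return manual_reference

save_payment('')         # ok
save_payment('')         # still ok: NULLs are exempt from uniqueness
save_payment('TRX-001')  # ok
try:
    save_payment('TRX-001')
except ValueError as e:
    print(e)  # -> duplicate manual_reference: TRX-001
```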
class CreditPackage(models.Model):
@@ -527,8 +608,10 @@ class CreditPackage(models.Model):
class PaymentMethodConfig(models.Model):
"""
-    Configure payment methods availability per country
-    Allows enabling/disabling manual payments by region
+    Configure payment methods availability per country.
+    For online payments (stripe, paypal): Credentials stored in IntegrationProvider.
+    For manual payments (bank_transfer, local_wallet): Bank/wallet details stored here.
"""
# Use centralized choices
PAYMENT_METHOD_CHOICES = PAYMENT_METHOD_CHOICES
@@ -536,7 +619,7 @@ class PaymentMethodConfig(models.Model):
country_code = models.CharField(
max_length=2,
db_index=True,
-        help_text="ISO 2-letter country code (e.g., US, GB, IN)"
+        help_text="ISO 2-letter country code (e.g., US, GB, PK) or '*' for global"
)
payment_method = models.CharField(max_length=50, choices=PAYMENT_METHOD_CHOICES)
is_enabled = models.BooleanField(default=True)
@@ -545,21 +628,17 @@ class PaymentMethodConfig(models.Model):
display_name = models.CharField(max_length=100, blank=True)
instructions = models.TextField(blank=True, help_text="Payment instructions for users")
-    # Manual payment details (for bank_transfer/local_wallet)
+    # Manual payment details (for bank_transfer only)
    bank_name = models.CharField(max_length=255, blank=True)
    account_number = models.CharField(max_length=255, blank=True)
-    routing_number = models.CharField(max_length=255, blank=True)
-    swift_code = models.CharField(max_length=255, blank=True)
+    account_title = models.CharField(max_length=255, blank=True, help_text="Account holder name")
+    routing_number = models.CharField(max_length=255, blank=True, help_text="Routing/Sort code")
+    swift_code = models.CharField(max_length=255, blank=True, help_text="SWIFT/BIC code for international")
+    iban = models.CharField(max_length=255, blank=True, help_text="IBAN for international transfers")
    # Additional fields for local wallets
-    wallet_type = models.CharField(max_length=100, blank=True, help_text="E.g., PayTM, PhonePe, etc.")
-    wallet_id = models.CharField(max_length=255, blank=True)
-    # Webhook configuration (Stripe/PayPal)
-    webhook_url = models.URLField(blank=True, help_text="Webhook URL for payment gateway callbacks")
-    webhook_secret = models.CharField(max_length=255, blank=True, help_text="Webhook secret for signature verification")
-    api_key = models.CharField(max_length=255, blank=True, help_text="API key for payment gateway integration")
-    api_secret = models.CharField(max_length=255, blank=True, help_text="API secret for payment gateway integration")
+    wallet_type = models.CharField(max_length=100, blank=True, help_text="E.g., JazzCash, EasyPaisa, etc.")
+    wallet_id = models.CharField(max_length=255, blank=True, help_text="Mobile number or wallet ID")
# Order/priority
sort_order = models.IntegerField(default=0)
@@ -613,3 +692,307 @@ class AccountPaymentMethod(AccountBaseModel):
def __str__(self):
return f"{self.account_id} - {self.display_name} ({self.type})"
class AIModelConfig(models.Model):
"""
All AI models (text + image) with pricing and credit configuration.
Single Source of Truth for Models.
Per final-model-schemas.md:
| Field | Type | Required | Notes |
|-------|------|----------|-------|
| id | AutoField PK | Auto | |
| model_name | CharField(100) | Yes | gpt-5.1, dall-e-3, runware:97@1 |
| model_type | CharField(20) | Yes | text / image |
| provider | CharField(50) | Yes | Links to IntegrationProvider |
| display_name | CharField(200) | Yes | Human-readable |
| is_default | BooleanField | Yes | One default per type |
| is_active | BooleanField | Yes | Enable/disable |
| cost_per_1k_input | DecimalField | No | Provider cost (USD) - text models |
| cost_per_1k_output | DecimalField | No | Provider cost (USD) - text models |
| tokens_per_credit | IntegerField | No | Text: tokens per 1 credit (e.g., 1000) |
| credits_per_image | IntegerField | No | Image: credits per image (e.g., 1, 5, 15) |
| quality_tier | CharField(20) | No | basic / quality / premium |
| max_tokens | IntegerField | No | Model token limit |
| context_window | IntegerField | No | Model context size |
| capabilities | JSONField | No | vision, function_calling, etc. |
| created_at | DateTime | Auto | |
| updated_at | DateTime | Auto | |
"""
MODEL_TYPE_CHOICES = [
('text', 'Text Generation'),
('image', 'Image Generation'),
]
PROVIDER_CHOICES = [
('openai', 'OpenAI'),
('anthropic', 'Anthropic'),
('runware', 'Runware'),
('google', 'Google'),
]
QUALITY_TIER_CHOICES = [
('basic', 'Basic'),
('quality', 'Quality'),
('premium', 'Premium'),
]
# Basic Information
model_name = models.CharField(
max_length=100,
unique=True,
db_index=True,
help_text="Model identifier (e.g., 'gpt-5.1', 'dall-e-3', 'runware:97@1')"
)
model_type = models.CharField(
max_length=20,
choices=MODEL_TYPE_CHOICES,
db_index=True,
help_text="text / image"
)
provider = models.CharField(
max_length=50,
choices=PROVIDER_CHOICES,
db_index=True,
help_text="Links to IntegrationProvider"
)
display_name = models.CharField(
max_length=200,
help_text="Human-readable name"
)
is_default = models.BooleanField(
default=False,
db_index=True,
help_text="One default per type"
)
is_active = models.BooleanField(
default=True,
db_index=True,
help_text="Enable/disable"
)
# Text Model Pricing (cost per 1K tokens)
cost_per_1k_input = models.DecimalField(
max_digits=10,
decimal_places=6,
null=True,
blank=True,
help_text="Provider cost per 1K input tokens (USD) - text models"
)
cost_per_1k_output = models.DecimalField(
max_digits=10,
decimal_places=6,
null=True,
blank=True,
help_text="Provider cost per 1K output tokens (USD) - text models"
)
# Credit Configuration
tokens_per_credit = models.IntegerField(
null=True,
blank=True,
help_text="Text: tokens per 1 credit (e.g., 1000, 10000)"
)
credits_per_image = models.IntegerField(
null=True,
blank=True,
help_text="Image: credits per image (e.g., 1, 5, 15)"
)
quality_tier = models.CharField(
max_length=20,
choices=QUALITY_TIER_CHOICES,
null=True,
blank=True,
help_text="basic / quality / premium - for image models"
)
# Model Limits
max_tokens = models.IntegerField(
null=True,
blank=True,
help_text="Model token limit"
)
context_window = models.IntegerField(
null=True,
blank=True,
help_text="Model context size"
)
# Capabilities
capabilities = models.JSONField(
default=dict,
blank=True,
help_text="Capabilities: vision, function_calling, json_mode, etc."
)
# Timestamps
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
# History tracking
history = HistoricalRecords()
class Meta:
app_label = 'billing'
db_table = 'igny8_ai_model_config'
verbose_name = 'AI Model Configuration'
verbose_name_plural = 'AI Model Configurations'
ordering = ['model_type', 'model_name']
indexes = [
models.Index(fields=['model_type', 'is_active']),
models.Index(fields=['provider', 'is_active']),
models.Index(fields=['is_default', 'model_type']),
]
def __str__(self):
return self.display_name
def save(self, *args, **kwargs):
"""Ensure only one is_default per model_type"""
if self.is_default:
AIModelConfig.objects.filter(
model_type=self.model_type,
is_default=True
).exclude(pk=self.pk).update(is_default=False)
super().save(*args, **kwargs)
@classmethod
def get_default_text_model(cls):
"""Get the default text generation model"""
return cls.objects.filter(model_type='text', is_default=True, is_active=True).first()
@classmethod
def get_default_image_model(cls):
"""Get the default image generation model"""
return cls.objects.filter(model_type='image', is_default=True, is_active=True).first()
@classmethod
def get_image_models_by_tier(cls):
"""Get all active image models grouped by quality tier"""
return cls.objects.filter(
model_type='image',
is_active=True
).order_by('quality_tier', 'model_name')
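`AIModelConfig.save()` keeps the "one default per `model_type`" invariant by clearing `is_default` on every other row of the same type before saving. The same invariant on plain dicts, without the ORM (the sample rows are illustrative):

```python
# Sketch of the one-default-per-type invariant from AIModelConfig.save():
# promoting a model demotes its same-type siblings, other types untouched.
models = [
    {'name': 'gpt-4o',   'type': 'text',  'is_default': True},
    {'name': 'dall-e-3', 'type': 'image', 'is_default': True},
    {'name': 'dall-e-2', 'type': 'image', 'is_default': False},
]

def set_default(name):
    target = next(m for m in models if m['name'] == name)
    for m in models:  # mirrors .exclude(pk=self.pk).update(is_default=False)
        if m['type'] == target['type'] and m is not target:
            m['is_default'] = False
    target['is_default'] = True

set_default('dall-e-2')
defaults = sorted(m['name'] for m in models if m['is_default'])
print(defaults)  # -> ['dall-e-2', 'gpt-4o']
```

Note the clear-then-set runs inside the same `save()` call, so `get_default_text_model()` and `get_default_image_model()` can rely on `.first()` returning at most one row.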
class WebhookEvent(models.Model):
"""
Store all incoming webhook events for audit and replay capability.
This model provides:
- Audit trail of all webhook events
- Idempotency verification (via event_id)
- Ability to replay failed events
- Debugging and monitoring
"""
PROVIDER_CHOICES = [
('stripe', 'Stripe'),
('paypal', 'PayPal'),
]
# Unique identifier from the payment provider
event_id = models.CharField(
max_length=255,
unique=True,
db_index=True,
help_text="Unique event ID from the payment provider"
)
# Payment provider
provider = models.CharField(
max_length=20,
choices=PROVIDER_CHOICES,
db_index=True,
help_text="Payment provider (stripe or paypal)"
)
# Event type (e.g., 'checkout.session.completed', 'PAYMENT.CAPTURE.COMPLETED')
event_type = models.CharField(
max_length=100,
db_index=True,
help_text="Event type from the provider"
)
# Full payload for debugging and replay
payload = models.JSONField(
help_text="Full webhook payload"
)
# Processing status
processed = models.BooleanField(
default=False,
db_index=True,
help_text="Whether this event has been successfully processed"
)
processed_at = models.DateTimeField(
null=True,
blank=True,
help_text="When the event was processed"
)
# Error tracking
error_message = models.TextField(
blank=True,
help_text="Error message if processing failed"
)
retry_count = models.IntegerField(
default=0,
help_text="Number of processing attempts"
)
# Timestamps
created_at = models.DateTimeField(auto_now_add=True)
class Meta:
app_label = 'billing'
db_table = 'igny8_webhook_events'
verbose_name = 'Webhook Event'
verbose_name_plural = 'Webhook Events'
ordering = ['-created_at']
indexes = [
models.Index(fields=['provider', 'event_type']),
models.Index(fields=['processed', 'created_at']),
models.Index(fields=['provider', 'processed']),
]
def __str__(self):
return f"{self.provider}:{self.event_type} - {self.event_id[:20]}..."
@classmethod
def record_event(cls, event_id: str, provider: str, event_type: str, payload: dict):
"""
Record a webhook event. Returns (event, created) tuple.
If the event already exists, returns the existing event.
"""
return cls.objects.get_or_create(
event_id=event_id,
defaults={
'provider': provider,
'event_type': event_type,
'payload': payload,
}
)
def mark_processed(self):
"""Mark the event as successfully processed"""
from django.utils import timezone
self.processed = True
self.processed_at = timezone.now()
self.save(update_fields=['processed', 'processed_at'])
def mark_failed(self, error_message: str):
"""Mark the event as failed with error message"""
self.error_message = error_message
self.retry_count += 1
self.save(update_fields=['error_message', 'retry_count'])
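`WebhookEvent.record_event` gets its idempotency from `get_or_create` keyed on the provider's `event_id`: a redelivered webhook returns the existing row with `created=False`, so the handler runs at most once per event. The same contract on an in-memory store (no Django; the payload values are illustrative):

```python
# Sketch of WebhookEvent.record_event idempotency: duplicate deliveries
# are detected by event_id and skipped.
events = {}

def record_event(event_id, provider, event_type, payload):
    if event_id in events:
        return events[event_id], False     # duplicate delivery
    events[event_id] = {
        'provider': provider, 'event_type': event_type,
        'payload': payload, 'processed': False,
    }
    return events[event_id], True

evt, created = record_event('evt_123', 'stripe', 'checkout.session.completed', {})
if created:
    evt['processed'] = True               # run the handler exactly once

_, created_again = record_event('evt_123', 'stripe', 'checkout.session.completed', {})
print(created_again)  # -> False: replayed delivery is ignored
```

Failed events stay in the store with `processed=False` plus an error message and retry count, which is what makes replay and monitoring possible.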

View File

@@ -1,6 +1,8 @@
"""
Credit Service for managing credit transactions and deductions
"""
import math
import logging
from django.db import transaction
from django.utils import timezone
from igny8_core.business.billing.models import CreditTransaction, CreditUsageLog
@@ -8,90 +10,252 @@ from igny8_core.business.billing.constants import CREDIT_COSTS
from igny8_core.business.billing.exceptions import InsufficientCreditsError, CreditCalculationError
from igny8_core.auth.models import Account
logger = logging.getLogger(__name__)
def _check_low_credits_warning(account, previous_balance):
"""
Check if credits have fallen below threshold and send warning email.
Only sends if this is the first time falling below threshold.
"""
try:
from igny8_core.modules.system.email_models import EmailSettings
from .email_service import BillingEmailService
settings = EmailSettings.get_settings()
if not settings.send_low_credit_warnings:
return
threshold = settings.low_credit_threshold
# Only send if we CROSSED below the threshold (wasn't already below)
if account.credits < threshold <= previous_balance:
logger.info(f"Credits fell below threshold for account {account.id}: {account.credits} < {threshold}")
BillingEmailService.send_low_credits_warning(
account=account,
current_credits=account.credits,
threshold=threshold
)
except Exception as e:
logger.error(f"Failed to check/send low credits warning: {e}")
class CreditService:
-    """Service for managing credits"""
+    """Service for managing credits - Token-based only"""
@staticmethod
-    def get_credit_cost(operation_type, amount=None):
+    def calculate_credits_for_image(model_name: str, num_images: int = 1) -> int:
        """
-        Get credit cost for operation.
-        Now checks database config first, falls back to constants.
+        Calculate credits for image generation based on AIModelConfig.credits_per_image.
        Args:
-            operation_type: Type of operation (from CREDIT_COSTS)
-            amount: Optional amount (word count, image count, etc.)
+            model_name: The AI model name (e.g., 'dall-e-3', 'flux-1-1-pro')
+            num_images: Number of images to generate
        Returns:
-            int: Number of credits required
+            int: Credits required
        Raises:
-            CreditCalculationError: If operation type is unknown
+            CreditCalculationError: If model not found or has no credits_per_image
        """
-        import logging
-        logger = logging.getLogger(__name__)
-        # Try to get from database config first
-        try:
-            from igny8_core.business.billing.models import CreditCostConfig
-            config = CreditCostConfig.objects.filter(
-                operation_type=operation_type,
-                is_active=True
-            ).first()
-            if config:
-                base_cost = config.credits_cost
-                # Apply unit-based calculation
-                if config.unit == 'per_100_words' and amount:
-                    return max(1, int(base_cost * (amount / 100)))
-                elif config.unit == 'per_200_words' and amount:
-                    return max(1, int(base_cost * (amount / 200)))
-                elif config.unit in ['per_item', 'per_image'] and amount:
-                    return base_cost * amount
-                else:
-                    return base_cost
-        except Exception as e:
-            logger.warning(f"Failed to get cost from database, using constants: {e}")
-        # Fallback to hardcoded constants
-        base_cost = CREDIT_COSTS.get(operation_type, 0)
-        if base_cost == 0:
-            raise CreditCalculationError(f"Unknown operation type: {operation_type}")
-        # Variable cost operations (legacy logic)
-        if operation_type == 'content_generation' and amount:
-            # Per 100 words
-            return max(1, int(base_cost * (amount / 100)))
-        elif operation_type == 'optimization' and amount:
-            # Per 200 words
-            return max(1, int(base_cost * (amount / 200)))
-        elif operation_type == 'image_generation' and amount:
-            # Per image
-            return base_cost * amount
-        elif operation_type == 'idea_generation' and amount:
-            # Per idea
-            return base_cost * amount
-        # Fixed cost operations
-        return base_cost
+        from igny8_core.business.billing.models import AIModelConfig
+        try:
+            model = AIModelConfig.objects.filter(
+                model_name=model_name,
+                is_active=True
+            ).first()
+            if not model:
+                raise CreditCalculationError(f"Model {model_name} not found or inactive")
+            if model.credits_per_image is None:
+                raise CreditCalculationError(
+                    f"Model {model_name} has no credits_per_image configured"
+                )
+            credits = model.credits_per_image * num_images
+            logger.info(
+                f"Calculated credits for {model_name}: "
+                f"{num_images} images × {model.credits_per_image} = {credits} credits"
+            )
+            return credits
+        except AIModelConfig.DoesNotExist:
+            raise CreditCalculationError(f"Model {model_name} not found")
@staticmethod
-    def check_credits(account, operation_type, amount=None):
+    def calculate_credits_from_tokens_by_model(model_name: str, total_tokens: int) -> int:
"""
Calculate credits from token usage based on AIModelConfig.tokens_per_credit.
This is the model-specific version that uses the model's configured rate.
For operation-based calculation, use calculate_credits_from_tokens().
Args:
model_name: The AI model name (e.g., 'gpt-4o', 'claude-3-5-sonnet')
total_tokens: Total tokens used (input + output)
Returns:
int: Credits required (minimum 1)
Raises:
CreditCalculationError: If model not found
"""
from igny8_core.business.billing.models import AIModelConfig, BillingConfiguration
try:
model = AIModelConfig.objects.filter(
model_name=model_name,
is_active=True
).first()
if model and model.tokens_per_credit:
tokens_per_credit = model.tokens_per_credit
else:
# Fallback to global default
billing_config = BillingConfiguration.get_config()
tokens_per_credit = billing_config.default_tokens_per_credit
logger.info(
f"Model {model_name} has no tokens_per_credit, "
f"using default: {tokens_per_credit}"
)
if tokens_per_credit <= 0:
raise CreditCalculationError(
f"Invalid tokens_per_credit for {model_name}: {tokens_per_credit}"
)
# Get rounding mode
billing_config = BillingConfiguration.get_config()
rounding_mode = billing_config.credit_rounding_mode
credits_float = total_tokens / tokens_per_credit
if rounding_mode == 'up':
credits = math.ceil(credits_float)
elif rounding_mode == 'down':
credits = math.floor(credits_float)
else: # nearest
credits = round(credits_float)
# Minimum 1 credit
credits = max(credits, 1)
logger.info(
f"Calculated credits for {model_name}: "
f"{total_tokens} tokens ÷ {tokens_per_credit} = {credits} credits"
)
return credits
except Exception as e:
logger.error(f"Error calculating credits for {model_name}: {e}")
raise CreditCalculationError(f"Error calculating credits: {e}")
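The token-to-credit arithmetic used above reduces to one pure function: divide tokens by the model's (or global default) `tokens_per_credit`, round per the configured mode, and never charge less than 1 credit. Extracted as standalone Python for clarity:

```python
import math

# credits = tokens / tokens_per_credit, rounded per the configured mode,
# floored at a minimum of 1 credit (as in calculate_credits_from_tokens_by_model).
def credits_for_tokens(total_tokens, tokens_per_credit, rounding_mode='up'):
    if tokens_per_credit <= 0:
        raise ValueError(f"Invalid tokens_per_credit: {tokens_per_credit}")
    credits_float = total_tokens / tokens_per_credit
    if rounding_mode == 'up':
        credits = math.ceil(credits_float)
    elif rounding_mode == 'down':
        credits = math.floor(credits_float)
    else:  # 'nearest'
        credits = round(credits_float)
    return max(credits, 1)

print(credits_for_tokens(2500, 1000))          # -> 3 (2.5 rounded up)
print(credits_for_tokens(2500, 1000, 'down'))  # -> 2
print(credits_for_tokens(10, 1000))            # -> 1 (minimum applies)
```

Note that `'nearest'` uses Python's `round()`, which applies banker's rounding (2.5 rounds to 2, not 3); the default `'up'` mode avoids that surprise and always favors the platform.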
@staticmethod
def calculate_credits_from_tokens(operation_type, tokens_input, tokens_output):
"""
Calculate credits from actual token usage using configured ratio.
This is the ONLY way credits are calculated in the system.
Args:
operation_type: Type of operation
tokens_input: Input tokens used
tokens_output: Output tokens used
Returns:
int: Credits to deduct
Raises:
CreditCalculationError: If configuration error
"""
import logging
import math
from igny8_core.business.billing.models import CreditCostConfig, BillingConfiguration
logger = logging.getLogger(__name__)
# Get operation config (use global default if not found)
config = CreditCostConfig.objects.filter(
operation_type=operation_type,
is_active=True
).first()
if not config:
# Use global billing config as fallback
billing_config = BillingConfiguration.get_config()
tokens_per_credit = billing_config.default_tokens_per_credit
min_credits = 1
logger.info(f"No config for {operation_type}, using default: {tokens_per_credit} tokens/credit")
else:
tokens_per_credit = config.tokens_per_credit
min_credits = config.min_credits
# Calculate total tokens
total_tokens = (tokens_input or 0) + (tokens_output or 0)
# Calculate credits (fractional)
if tokens_per_credit <= 0:
raise CreditCalculationError(f"Invalid tokens_per_credit: {tokens_per_credit}")
credits_float = total_tokens / tokens_per_credit
# Get rounding mode from global config
billing_config = BillingConfiguration.get_config()
rounding_mode = billing_config.credit_rounding_mode
if rounding_mode == 'up':
credits = math.ceil(credits_float)
elif rounding_mode == 'down':
credits = math.floor(credits_float)
else: # nearest
credits = round(credits_float)
# Apply minimum
credits = max(credits, min_credits)
logger.info(
f"Calculated credits for {operation_type}: "
f"{total_tokens} tokens ({tokens_input} in, {tokens_output} out) "
f"÷ {tokens_per_credit} = {credits} credits"
)
return credits
@staticmethod
def check_credits(account, operation_type, estimated_amount=None):
"""
Check if account has sufficient credits for an operation.
For token-based operations, this is an estimate check only.
Actual deduction happens after AI call with real token usage.
Args:
account: Account instance
operation_type: Type of operation
-            amount: Optional amount (word count, image count, etc.)
+            estimated_amount: Optional estimated amount (for non-token operations)
        Raises:
            InsufficientCreditsError: If account doesn't have enough credits
        """
-        required = CreditService.get_credit_cost(operation_type, amount)
+        from igny8_core.business.billing.models import CreditCostConfig
+        from igny8_core.business.billing.constants import CREDIT_COSTS
+        # Get operation config
+        config = CreditCostConfig.objects.filter(
+            operation_type=operation_type,
+            is_active=True
+        ).first()
+        if config:
+            # Use minimum credits as estimate for token-based operations
+            required = config.min_credits
+        else:
+            # Fallback to constants
+            required = CREDIT_COSTS.get(operation_type, 1)
if account.credits < required:
raise InsufficientCreditsError(
f"Insufficient credits. Required: {required}, Available: {account.credits}"
@@ -99,21 +263,46 @@ class CreditService:
return True
@staticmethod
-    def check_credits_legacy(account, required_credits):
+    def check_credits_legacy(account, amount):
        """
-        Legacy method: Check if account has enough credits (for backward compatibility).
+        Legacy method to check credits for a known amount.
+        Used internally by deduct_credits.
        Args:
            account: Account instance
-            required_credits: Number of credits required
+            amount: Required credits amount
        Raises:
            InsufficientCreditsError: If account doesn't have enough credits
        """
-        if account.credits < required_credits:
+        if account.credits < amount:
            raise InsufficientCreditsError(
-                f"Insufficient credits. Required: {required_credits}, Available: {account.credits}"
+                f"Insufficient credits. Required: {amount}, Available: {account.credits}"
            )
)
return True
@staticmethod
def check_credits_for_tokens(account, operation_type, estimated_tokens_input, estimated_tokens_output):
"""
Check if account has sufficient credits based on estimated token usage.
Args:
account: Account instance
operation_type: Type of operation
estimated_tokens_input: Estimated input tokens
estimated_tokens_output: Estimated output tokens
Raises:
InsufficientCreditsError: If account doesn't have enough credits
"""
required = CreditService.calculate_credits_from_tokens(
operation_type, estimated_tokens_input, estimated_tokens_output
)
if account.credits < required:
raise InsufficientCreditsError(
f"Insufficient credits. Required: {required}, Available: {account.credits}"
)
return True
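The check above delegates to `CreditService.calculate_credits_from_tokens`, which is defined outside this hunk. A minimal sketch of the token-to-credit idea, assuming hypothetical per-token rates (`RATE_INPUT`/`RATE_OUTPUT` are placeholders for values that really come from `CreditCostConfig`):

```python
import math

# Hypothetical per-token rates; the real values come from CreditCostConfig.
RATE_INPUT = 0.001   # credits per input token (assumed)
RATE_OUTPUT = 0.002  # credits per output token (assumed)

def calculate_credits_from_tokens(tokens_input: int, tokens_output: int) -> int:
    """Round up so fractional usage never deducts zero credits."""
    raw = tokens_input * RATE_INPUT + tokens_output * RATE_OUTPUT
    return max(1, math.ceil(raw))

def check_credits_for_tokens(balance: int, tokens_input: int, tokens_output: int) -> bool:
    required = calculate_credits_from_tokens(tokens_input, tokens_output)
    if balance < required:
        raise ValueError(
            f"Insufficient credits. Required: {required}, Available: {balance}"
        )
    return True
```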
@staticmethod
@transaction.atomic
@@ -140,6 +329,9 @@ class CreditService:
# Check sufficient credits (legacy: amount is already calculated)
CreditService.check_credits_legacy(account, amount)
# Store previous balance for low credits check
previous_balance = account.credits
# Deduct from account.credits
account.credits -= amount
account.save(update_fields=['credits'])
@@ -168,48 +360,72 @@ class CreditService:
metadata=metadata or {}
)
# Check and send low credits warning if applicable
_check_low_credits_warning(account, previous_balance)
return account.credits
@staticmethod
@transaction.atomic
def deduct_credits_for_operation(
account,
operation_type,
tokens_input,
tokens_output,
description=None,
metadata=None,
cost_usd=None,
model_used=None,
related_object_type=None,
related_object_id=None
):
"""
Deduct credits for an operation based on actual token usage.
This is the ONLY way to deduct credits in the token-based system.
Args:
account: Account instance
operation_type: Type of operation
tokens_input: REQUIRED - Actual input tokens used
tokens_output: REQUIRED - Actual output tokens used
description: Optional description (auto-generated if not provided)
metadata: Optional metadata dict
cost_usd: Optional cost in USD
model_used: Optional AI model used
related_object_type: Optional related object type
related_object_id: Optional related object ID
Returns:
int: New credit balance
Raises:
ValueError: If tokens_input or tokens_output not provided
"""
# Validate token inputs
if tokens_input is None or tokens_output is None:
raise ValueError(
f"tokens_input and tokens_output are REQUIRED for credit deduction. "
f"Got: tokens_input={tokens_input}, tokens_output={tokens_output}"
)
# Calculate credits from actual token usage
credits_required = CreditService.calculate_credits_from_tokens(
operation_type, tokens_input, tokens_output
)
# Check sufficient credits
if account.credits < credits_required:
raise InsufficientCreditsError(
f"Insufficient credits. Required: {credits_required}, Available: {account.credits}"
)
# Auto-generate description if not provided
if not description:
total_tokens = tokens_input + tokens_output
description = (
f"{operation_type}: {total_tokens} tokens "
f"({tokens_input} in, {tokens_output} out) = {credits_required} credits"
)
return CreditService.deduct_credits(
account=account,
@@ -258,37 +474,54 @@ class CreditService:
return account.credits
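The auto-generated description built in `deduct_credits_for_operation` can be expressed as a pure helper (a sketch that mirrors the f-string in the method above):

```python
def build_deduction_description(operation_type: str, tokens_input: int,
                                tokens_output: int, credits_required: int) -> str:
    """Format the audit-trail description used when none is supplied."""
    total_tokens = tokens_input + tokens_output
    return (
        f"{operation_type}: {total_tokens} tokens "
        f"({tokens_input} in, {tokens_output} out) = {credits_required} credits"
    )
```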
@staticmethod
@transaction.atomic
def deduct_credits_for_image(
account,
model_name: str,
num_images: int = 1,
description: str = None,
metadata: dict = None,
cost_usd: float = None,
related_object_type: str = None,
related_object_id: int = None
):
"""
Deduct credits for image generation based on model's credits_per_image.
Args:
account: Account instance
model_name: AI model used (e.g., 'dall-e-3', 'flux-1-1-pro')
num_images: Number of images generated
description: Optional description
metadata: Optional metadata dict
cost_usd: Optional cost in USD
related_object_type: Optional related object type
related_object_id: Optional related object ID
Returns:
int: New credit balance
"""
credits_required = CreditService.calculate_credits_for_image(model_name, num_images)
if account.credits < credits_required:
raise InsufficientCreditsError(
f"Insufficient credits. Required: {credits_required}, Available: {account.credits}"
)
if not description:
description = f"Image generation: {num_images} images with {model_name} = {credits_required} credits"
return CreditService.deduct_credits(
account=account,
amount=credits_required,
operation_type='image_generation',
description=description,
metadata=metadata,
cost_usd=cost_usd,
model_used=model_name,
tokens_input=None,
tokens_output=None,
related_object_type=related_object_type,
related_object_id=related_object_id
)
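Image deductions hinge on `calculate_credits_for_image`, which is defined elsewhere. A sketch using a hypothetical per-model rate table (the real `credits_per_image` values live on the AI model records, not in a literal dict):

```python
# Hypothetical credits-per-image rates; real values come from the AI model config.
CREDITS_PER_IMAGE = {'dall-e-3': 4, 'flux-1-1-pro': 6}

def calculate_credits_for_image(model_name: str, num_images: int = 1) -> int:
    """Flat per-image pricing: model rate multiplied by image count."""
    if num_images < 1:
        raise ValueError("num_images must be at least 1")
    try:
        per_image = CREDITS_PER_IMAGE[model_name]
    except KeyError:
        raise ValueError(f"Unknown image model: {model_name}")
    return per_image * num_images
```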

File diff suppressed because it is too large


@@ -14,32 +14,65 @@ from ....auth.models import Account, Subscription
class InvoiceService:
"""Service for managing invoices"""
@staticmethod
def get_pending_invoice(subscription: Subscription) -> Optional[Invoice]:
"""
Get pending invoice for a subscription.
Used to find existing invoice during payment processing instead of creating duplicates.
"""
return Invoice.objects.filter(
subscription=subscription,
status='pending'
).order_by('-created_at').first()
@staticmethod
def get_or_create_subscription_invoice(
subscription: Subscription,
billing_period_start: datetime,
billing_period_end: datetime
) -> tuple[Invoice, bool]:
"""
Get existing pending invoice or create new one.
Returns tuple of (invoice, created) where created is True if new invoice was created.
"""
# First try to find existing pending invoice for this subscription
existing = InvoiceService.get_pending_invoice(subscription)
if existing:
return existing, False
# Create new invoice if none exists
invoice = InvoiceService.create_subscription_invoice(
subscription=subscription,
billing_period_start=billing_period_start,
billing_period_end=billing_period_end
)
return invoice, True
@staticmethod
def generate_invoice_number(account: Account) -> str:
"""
Generate unique invoice number with atomic locking to prevent duplicates
Format: INV-{YY}{MM}{COUNTER} (e.g., INV-26010001)
"""
from django.db import transaction
now = timezone.now()
prefix = f"INV-{now.year % 100:02d}{now.month:02d}"
# Use atomic transaction with SELECT FOR UPDATE to prevent race conditions
with transaction.atomic():
# Lock the invoice table for this month to get accurate count
count = Invoice.objects.select_for_update().filter(
created_at__year=now.year,
created_at__month=now.month
).count()
invoice_number = f"{prefix}{count + 1:04d}"
# Double-check uniqueness (should not happen with lock, but safety check)
while Invoice.objects.filter(invoice_number=invoice_number).exists():
count += 1
invoice_number = f"{prefix}{count + 1:04d}"
return invoice_number
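The numbering scheme above reduces to a pure function (a sketch of the format only; the real method wraps it in `select_for_update` locking and a uniqueness re-check):

```python
from datetime import date

def build_invoice_number(today: date, monthly_count: int) -> str:
    """INV-{YY}{MM}{COUNTER}: e.g. the first invoice of Jan 2026 is INV-26010001."""
    prefix = f"INV-{today.year % 100:02d}{today.month:02d}"
    return f"{prefix}{monthly_count + 1:04d}"
```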
@@ -52,6 +85,11 @@ class InvoiceService:
) -> Invoice:
"""
Create invoice for subscription billing period
SIMPLIFIED CURRENCY LOGIC:
- ALL invoices are in USD (consistent for accounting)
- PKR equivalent is calculated and stored in metadata for display purposes
- Bank transfer users see PKR equivalent but invoice is technically USD
"""
account = subscription.account
plan = subscription.plan
@@ -74,12 +112,15 @@ class InvoiceService:
invoice_date = timezone.now().date()
due_date = invoice_date + timedelta(days=INVOICE_DUE_DATE_OFFSET)
# ALWAYS use USD for invoices (simplified accounting)
from igny8_core.business.billing.utils.currency import get_currency_for_country, convert_usd_to_local
currency = 'USD'
usd_price = float(plan.price)
# Calculate local equivalent for display purposes (if applicable)
local_currency = get_currency_for_country(account.billing_country) if account.billing_country else 'USD'
local_equivalent = convert_usd_to_local(usd_price, account.billing_country) if local_currency != 'USD' else usd_price
invoice = Invoice.objects.create(
account=account,
@@ -95,16 +136,19 @@ class InvoiceService:
'billing_period_end': billing_period_end.isoformat(),
'subscription_id': subscription.id, # Keep in metadata for backward compatibility
'usd_price': str(plan.price), # Store original USD price
'local_currency': local_currency, # Store local currency code for display
'local_equivalent': str(round(local_equivalent, 2)), # Store local equivalent for display
'exchange_rate': str(local_equivalent / usd_price if usd_price > 0 else 1.0),
'payment_method': account.payment_method
}
)
# Add line item for subscription in USD
invoice.add_line_item(
description=f"{plan.name} Plan - {billing_period_start.strftime('%b %Y')}",
quantity=1,
unit_price=Decimal(str(usd_price)),
amount=Decimal(str(usd_price))
)
invoice.calculate_totals()
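The USD-plus-display-equivalent metadata built above can be sketched as a pure function; `rate_to_local` here is a stand-in for whatever `convert_usd_to_local` derives from the live exchange rate (an assumption, not that helper's real signature):

```python
def invoice_currency_metadata(usd_price: float, local_currency: str,
                              rate_to_local: float) -> dict:
    """Invoices stay in USD; the local figure is stored for display only."""
    local_equivalent = usd_price if local_currency == 'USD' else usd_price * rate_to_local
    return {
        'usd_price': str(usd_price),
        'local_currency': local_currency,
        'local_equivalent': str(round(local_equivalent, 2)),
        'exchange_rate': str(local_equivalent / usd_price if usd_price > 0 else 1.0),
    }
```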
@@ -120,16 +164,23 @@ class InvoiceService:
) -> Invoice:
"""
Create invoice for credit package purchase
SIMPLIFIED CURRENCY LOGIC:
- ALL invoices are in USD (consistent for accounting)
- PKR equivalent is calculated and stored in metadata for display purposes
"""
from igny8_core.business.billing.config import INVOICE_DUE_DATE_OFFSET
invoice_date = timezone.now().date()
# ALWAYS use USD for invoices (simplified accounting)
from igny8_core.business.billing.utils.currency import get_currency_for_country, convert_usd_to_local
currency = 'USD'
usd_price = float(credit_package.price)
# Calculate local equivalent for display purposes (if applicable)
local_currency = get_currency_for_country(account.billing_country) if account.billing_country else 'USD'
local_equivalent = convert_usd_to_local(usd_price, account.billing_country) if local_currency != 'USD' else usd_price
invoice = Invoice.objects.create(
account=account,
@@ -143,16 +194,19 @@ class InvoiceService:
'credit_package_id': credit_package.id,
'credit_amount': credit_package.credits,
'usd_price': str(credit_package.price), # Store original USD price
'local_currency': local_currency, # Store local currency code for display
'local_equivalent': str(round(local_equivalent, 2)), # Store local equivalent for display
'exchange_rate': str(local_equivalent / usd_price if usd_price > 0 else 1.0),
'payment_method': account.payment_method
},
)
# Add line item for credit package in USD
invoice.add_line_item(
description=f"{credit_package.name} - {credit_package.credits:,} Credits",
quantity=1,
unit_price=Decimal(str(usd_price)),
amount=Decimal(str(usd_price))
)
invoice.calculate_totals()
@@ -212,10 +266,21 @@ class InvoiceService:
transaction_id: Optional[str] = None
) -> Invoice:
"""
Mark invoice as paid and record payment details
Args:
invoice: Invoice to mark as paid
payment_method: Payment method used ('stripe', 'paypal', 'bank_transfer', etc.)
transaction_id: External transaction ID (Stripe payment intent, PayPal capture ID, etc.)
"""
invoice.status = 'paid'
invoice.paid_at = timezone.now()
invoice.payment_method = payment_method
# For Stripe payments, store the transaction ID in stripe_invoice_id field
if payment_method == 'stripe' and transaction_id:
invoice.stripe_invoice_id = transaction_id
invoice.save()
return invoice
@@ -239,43 +304,13 @@ class InvoiceService:
@staticmethod
def generate_pdf(invoice: Invoice) -> bytes:
"""
Generate professional PDF invoice using ReportLab
"""
from igny8_core.business.billing.services.pdf_service import InvoicePDFGenerator
# Use the professional PDF generator
pdf_buffer = InvoicePDFGenerator.generate_invoice_pdf(invoice)
return pdf_buffer.getvalue()
@staticmethod
def get_account_invoices(


@@ -1,6 +1,6 @@
"""
Limit Service for Plan Limit Enforcement
Manages hard limits (sites, users, keywords) and monthly limits (ahrefs_queries)
"""
from django.db import transaction
from django.utils import timezone
@@ -18,12 +18,12 @@ class LimitExceededError(Exception):
class HardLimitExceededError(LimitExceededError):
"""Raised when a hard limit (sites, users, keywords) is exceeded"""
pass
class MonthlyLimitExceededError(LimitExceededError):
"""Raised when a monthly limit (ahrefs_queries) is exceeded"""
pass
@@ -31,6 +31,7 @@ class LimitService:
"""Service for managing and enforcing plan limits"""
# Map limit types to model/field names
# Simplified to only 3 hard limits: sites, users, keywords
HARD_LIMIT_MAPPINGS = {
'sites': {
'model': 'igny8_core_auth.Site',
@@ -39,10 +40,10 @@ class LimitService:
'filter_field': 'account',
},
'users': {
'model': 'igny8_core_auth.User',
'plan_field': 'max_users',
'display_name': 'Team Members',
'filter_field': 'account',
},
'keywords': {
'model': 'planner.Keywords',
@@ -50,39 +51,15 @@ class LimitService:
'display_name': 'Keywords',
'filter_field': 'account',
},
}
# Simplified to only 1 monthly limit: ahrefs_queries
# All other consumption is controlled by credits only
MONTHLY_LIMIT_MAPPINGS = {
'ahrefs_queries': {
'plan_field': 'max_ahrefs_queries',
'usage_field': 'usage_ahrefs_queries',
'display_name': 'Keyword Research Queries',
},
}
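Enforcement against these mappings amounts to comparing a live count with the plan's maximum. A minimal sketch with illustrative plan values (the numbers are examples, not real plan limits, and the real service resolves counts via the mapped models):

```python
# Illustrative plan limits; real values come from the Plan model fields.
PLAN_LIMITS = {'sites': 3, 'users': 5, 'keywords': 500}

class HardLimitExceededError(Exception):
    """Raised when a hard limit (sites, users, keywords) is exceeded."""

def check_hard_limit(limit_type: str, current_count: int) -> bool:
    maximum = PLAN_LIMITS[limit_type]
    if current_count >= maximum:
        raise HardLimitExceededError(
            f"{limit_type} limit reached ({current_count}/{maximum})"
        )
    return True
```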
@@ -318,11 +295,8 @@ class LimitService:
Returns:
dict: Summary of reset operation
"""
# Reset only ahrefs_queries (the only monthly limit now)
account.usage_ahrefs_queries = 0
old_period_end = account.usage_period_end
@@ -341,8 +315,7 @@ class LimitService:
account.usage_period_end = new_period_end
account.save(update_fields=[
'usage_ahrefs_queries',
'usage_period_start', 'usage_period_end', 'updated_at'
])
@@ -353,5 +326,5 @@ class LimitService:
'old_period_end': old_period_end.isoformat() if old_period_end else None,
'new_period_start': new_period_start.isoformat(),
'new_period_end': new_period_end.isoformat(),
'limits_reset': 1,
}


@@ -105,11 +105,15 @@ class PaymentService:
) -> Payment:
"""
Mark payment as completed and update invoice
For automatic payments (Stripe/PayPal), sets approved_at but leaves approved_by as None
"""
from .invoice_service import InvoiceService
payment.status = 'succeeded'
payment.processed_at = timezone.now()
# For automatic payments, set approved_at to indicate when payment was verified
# approved_by stays None to indicate it was automated, not manual approval
payment.approved_at = timezone.now()
if transaction_id:
payment.transaction_reference = transaction_id


@@ -0,0 +1,679 @@
"""
PayPal Service - REST API v2 integration
Handles:
- Order creation and capture for one-time payments
- Subscription management
- Webhook verification
Configuration stored in IntegrationProvider model (provider_id='paypal')
Endpoints:
- Sandbox: https://api-m.sandbox.paypal.com
- Production: https://api-m.paypal.com
"""
import requests
import base64
import logging
from typing import Optional, Dict, Any
from django.conf import settings
from igny8_core.modules.system.models import IntegrationProvider
logger = logging.getLogger(__name__)
class PayPalConfigurationError(Exception):
"""Raised when PayPal is not properly configured"""
pass
class PayPalAPIError(Exception):
"""Raised when PayPal API returns an error"""
def __init__(self, message: str, status_code: int = None, response: dict = None):
super().__init__(message)
self.status_code = status_code
self.response = response
class PayPalService:
"""Service for PayPal payment operations using REST API v2"""
SANDBOX_URL = 'https://api-m.sandbox.paypal.com'
PRODUCTION_URL = 'https://api-m.paypal.com'
def __init__(self):
"""
Initialize PayPal service with credentials from IntegrationProvider.
Raises:
PayPalConfigurationError: If PayPal provider not configured or missing credentials
"""
provider = IntegrationProvider.get_provider('paypal')
if not provider:
raise PayPalConfigurationError(
"PayPal provider not configured. Add 'paypal' provider in admin."
)
if not provider.api_key or not provider.api_secret:
raise PayPalConfigurationError(
"PayPal client credentials not configured. "
"Set api_key (Client ID) and api_secret (Client Secret) in provider."
)
self.client_id = provider.api_key
self.client_secret = provider.api_secret
self.is_sandbox = provider.is_sandbox
self.provider = provider
self.config = provider.config or {}
# Set base URL
if provider.api_endpoint:
self.base_url = provider.api_endpoint.rstrip('/')
else:
self.base_url = self.SANDBOX_URL if self.is_sandbox else self.PRODUCTION_URL
# Cache access token
self._access_token = None
self._token_expires_at = None
# Configuration
self.currency = self.config.get('currency', 'USD')
self.webhook_id = self.config.get('webhook_id', '')
logger.info(
f"PayPal service initialized (sandbox={self.is_sandbox}, "
f"base_url={self.base_url})"
)
@property
def frontend_url(self) -> str:
"""Get frontend URL from Django settings"""
return getattr(settings, 'FRONTEND_URL', 'http://localhost:3000')
@property
def return_url(self) -> str:
"""Get return URL for PayPal redirects"""
return self.config.get(
'return_url',
f'{self.frontend_url}/account/plans?paypal=success'
)
@property
def cancel_url(self) -> str:
"""Get cancel URL for PayPal redirects"""
return self.config.get(
'cancel_url',
f'{self.frontend_url}/account/plans?paypal=cancel'
)
# ========== Authentication ==========
def _get_access_token(self) -> str:
"""
Get OAuth 2.0 access token from PayPal.
Returns:
str: Access token
Raises:
PayPalAPIError: If token request fails
"""
import time
# Return cached token if still valid
if self._access_token and self._token_expires_at:
if time.time() < self._token_expires_at - 60: # 60 second buffer
return self._access_token
# Create Basic auth header
auth_string = f'{self.client_id}:{self.client_secret}'
auth_bytes = base64.b64encode(auth_string.encode()).decode()
response = requests.post(
f'{self.base_url}/v1/oauth2/token',
headers={
'Authorization': f'Basic {auth_bytes}',
'Content-Type': 'application/x-www-form-urlencoded',
},
data='grant_type=client_credentials',
timeout=30,
)
if response.status_code != 200:
logger.error(f"PayPal token request failed: {response.text}")
raise PayPalAPIError(
"Failed to obtain PayPal access token",
status_code=response.status_code,
response=response.json() if response.text else None
)
data = response.json()
self._access_token = data['access_token']
self._token_expires_at = time.time() + data.get('expires_in', 32400)
logger.debug("PayPal access token obtained successfully")
return self._access_token
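The caching pattern in `_get_access_token` (reuse the token until 60 seconds before expiry) generalizes to a small helper; `fetch` here is a stand-in for the real OAuth request:

```python
import time

class TokenCache:
    """Cache a bearer token, refreshing 60 s before it expires."""

    def __init__(self, fetch):
        self._fetch = fetch          # callable returning (token, expires_in_seconds)
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Refresh if missing or inside the 60-second safety buffer.
        if self._token is None or time.time() >= self._expires_at - 60:
            token, expires_in = self._fetch()
            self._token = token
            self._expires_at = time.time() + expires_in
        return self._token
```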
def _make_request(
self,
method: str,
endpoint: str,
json_data: dict = None,
params: dict = None,
timeout: int = 30,
) -> dict:
"""
Make authenticated API request to PayPal.
Args:
method: HTTP method (GET, POST, etc.)
endpoint: API endpoint (e.g., '/v2/checkout/orders')
json_data: JSON body data
params: Query parameters
timeout: Request timeout in seconds
Returns:
dict: Response JSON
Raises:
PayPalAPIError: If request fails
"""
token = self._get_access_token()
headers = {
'Authorization': f'Bearer {token}',
'Content-Type': 'application/json',
}
url = f'{self.base_url}{endpoint}'
response = requests.request(
method=method,
url=url,
headers=headers,
json=json_data,
params=params,
timeout=timeout,
)
# Handle no content response
if response.status_code == 204:
return {}
# Parse JSON response
try:
response_data = response.json() if response.text else {}
except Exception:
response_data = {'raw': response.text}
# Check for errors
if response.status_code >= 400:
error_msg = response_data.get('message', str(response_data))
logger.error(f"PayPal API error: {error_msg}")
raise PayPalAPIError(
f"PayPal API error: {error_msg}",
status_code=response.status_code,
response=response_data
)
return response_data
# ========== Order Operations ==========
def create_order(
self,
account,
amount: float,
currency: str = None,
description: str = '',
return_url: str = None,
cancel_url: str = None,
metadata: dict = None,
) -> Dict[str, Any]:
"""
Create PayPal order for one-time payment.
Args:
account: Account model instance
amount: Payment amount
currency: Currency code (default from config)
description: Payment description
return_url: URL to redirect after approval
cancel_url: URL to redirect on cancellation
metadata: Additional metadata to store
Returns:
dict: Order data including order_id and approval_url
"""
currency = currency or self.currency
return_url = return_url or self.return_url
cancel_url = cancel_url or self.cancel_url
# Build order payload
order_data = {
'intent': 'CAPTURE',
'purchase_units': [{
'amount': {
'currency_code': currency,
'value': f'{amount:.2f}',
},
'description': description or 'IGNY8 Payment',
'custom_id': str(account.id),
'reference_id': str(account.id),
}],
'application_context': {
'return_url': return_url,
'cancel_url': cancel_url,
'brand_name': 'IGNY8',
'landing_page': 'BILLING',
'user_action': 'PAY_NOW',
'shipping_preference': 'NO_SHIPPING',
}
}
# Create order
response = self._make_request('POST', '/v2/checkout/orders', json_data=order_data)
# Extract approval URL
approval_url = None
for link in response.get('links', []):
if link.get('rel') == 'approve':
approval_url = link.get('href')
break
logger.info(
f"Created PayPal order {response.get('id')} for account {account.id}, "
f"amount {currency} {amount}"
)
return {
'order_id': response.get('id'),
'status': response.get('status'),
'approval_url': approval_url,
'links': response.get('links', []),
}
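PayPal responses carry HATEOAS `links`; the redirect target is the entry whose `rel` is `approve`. The extraction loop used in `create_order` as a standalone helper:

```python
from typing import Optional

def extract_approval_url(order_response: dict) -> Optional[str]:
    """Return the payer-approval URL from a PayPal order response, if present."""
    for link in order_response.get('links', []):
        if link.get('rel') == 'approve':
            return link.get('href')
    return None
```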
def create_credit_order(
self,
account,
credit_package,
return_url: str = None,
cancel_url: str = None,
) -> Dict[str, Any]:
"""
Create PayPal order for credit package purchase.
Args:
account: Account model instance
credit_package: CreditPackage model instance
return_url: URL to redirect after approval
cancel_url: URL to redirect on cancellation
Returns:
dict: Order data including order_id and approval_url
"""
return_url = return_url or f'{self.frontend_url}/account/usage?paypal=success'
cancel_url = cancel_url or f'{self.frontend_url}/account/usage?paypal=cancel'
# Add credit package info to custom_id for webhook processing
order = self.create_order(
account=account,
amount=float(credit_package.price),
description=f'{credit_package.name} - {credit_package.credits} credits',
return_url=f'{return_url}&package_id={credit_package.id}',
cancel_url=cancel_url,
)
# Store package info in order
order['credit_package_id'] = str(credit_package.id)
order['credit_amount'] = credit_package.credits
return order
def capture_order(self, order_id: str) -> Dict[str, Any]:
"""
Capture payment for approved order.
Call this after customer approves the order at PayPal.
Args:
order_id: PayPal order ID
Returns:
dict: Capture result with payment details
"""
response = self._make_request(
'POST',
f'/v2/checkout/orders/{order_id}/capture'
)
# Extract capture details
capture_id = None
amount = None
currency = None
if response.get('purchase_units'):
captures = response['purchase_units'][0].get('payments', {}).get('captures', [])
if captures:
capture = captures[0]
capture_id = capture.get('id')
amount = capture.get('amount', {}).get('value')
currency = capture.get('amount', {}).get('currency_code')
logger.info(
f"Captured PayPal order {order_id}, capture_id={capture_id}, "
f"amount={currency} {amount}"
)
return {
'order_id': response.get('id'),
'status': response.get('status'),
'capture_id': capture_id,
'amount': amount,
'currency': currency,
'payer': response.get('payer', {}),
'custom_id': response.get('purchase_units', [{}])[0].get('custom_id'),
}
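The capture-detail extraction above walks a nested response shape (`purchase_units[0].payments.captures[0]`). The same logic as a defensive standalone helper:

```python
def parse_capture(response: dict) -> dict:
    """Pull capture id/amount/currency out of a PayPal capture response."""
    units = response.get('purchase_units', [])
    captures = units[0].get('payments', {}).get('captures', []) if units else []
    capture = captures[0] if captures else {}
    return {
        'capture_id': capture.get('id'),
        'amount': capture.get('amount', {}).get('value'),
        'currency': capture.get('amount', {}).get('currency_code'),
    }
```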
def get_order(self, order_id: str) -> Dict[str, Any]:
"""
Get order details.
Args:
order_id: PayPal order ID
Returns:
dict: Order details
"""
response = self._make_request('GET', f'/v2/checkout/orders/{order_id}')
return {
'order_id': response.get('id'),
'status': response.get('status'),
'intent': response.get('intent'),
'payer': response.get('payer', {}),
'purchase_units': response.get('purchase_units', []),
'create_time': response.get('create_time'),
'update_time': response.get('update_time'),
}
# ========== Subscription Operations ==========
def create_subscription(
self,
account,
plan_id: str,
return_url: str = None,
cancel_url: str = None,
) -> Dict[str, Any]:
"""
Create PayPal subscription.
Requires plan to be created in PayPal dashboard first.
Args:
account: Account model instance
plan_id: PayPal Plan ID (created in PayPal dashboard)
return_url: URL to redirect after approval
cancel_url: URL to redirect on cancellation
Returns:
dict: Subscription data including approval_url
"""
return_url = return_url or self.return_url
cancel_url = cancel_url or self.cancel_url
subscription_data = {
'plan_id': plan_id,
'custom_id': str(account.id),
'application_context': {
'return_url': return_url,
'cancel_url': cancel_url,
'brand_name': 'IGNY8',
'locale': 'en-US',
'shipping_preference': 'NO_SHIPPING',
'user_action': 'SUBSCRIBE_NOW',
'payment_method': {
'payer_selected': 'PAYPAL',
'payee_preferred': 'IMMEDIATE_PAYMENT_REQUIRED',
}
}
}
response = self._make_request(
'POST',
'/v1/billing/subscriptions',
json_data=subscription_data
)
# Extract approval URL
approval_url = None
for link in response.get('links', []):
if link.get('rel') == 'approve':
approval_url = link.get('href')
break
logger.info(
f"Created PayPal subscription {response.get('id')} for account {account.id}"
)
return {
'subscription_id': response.get('id'),
'status': response.get('status'),
'approval_url': approval_url,
'links': response.get('links', []),
}
def get_subscription(self, subscription_id: str) -> Dict[str, Any]:
"""
Get subscription details.
Args:
subscription_id: PayPal subscription ID
Returns:
dict: Subscription details
"""
response = self._make_request(
'GET',
f'/v1/billing/subscriptions/{subscription_id}'
)
return {
'subscription_id': response.get('id'),
'status': response.get('status'),
'plan_id': response.get('plan_id'),
'start_time': response.get('start_time'),
'billing_info': response.get('billing_info', {}),
'custom_id': response.get('custom_id'),
}
def cancel_subscription(
self,
subscription_id: str,
reason: str = 'Customer requested cancellation'
) -> Dict[str, Any]:
"""
Cancel PayPal subscription.
Args:
subscription_id: PayPal subscription ID
reason: Reason for cancellation
Returns:
dict: Cancellation result
"""
self._make_request(
'POST',
f'/v1/billing/subscriptions/{subscription_id}/cancel',
json_data={'reason': reason}
)
logger.info(f"Cancelled PayPal subscription {subscription_id}")
return {
'subscription_id': subscription_id,
'status': 'CANCELLED',
}
def suspend_subscription(self, subscription_id: str, reason: str = '') -> Dict[str, Any]:
"""
Suspend PayPal subscription.
Args:
subscription_id: PayPal subscription ID
reason: Reason for suspension
Returns:
dict: Suspension result
"""
self._make_request(
'POST',
f'/v1/billing/subscriptions/{subscription_id}/suspend',
json_data={'reason': reason}
)
logger.info(f"Suspended PayPal subscription {subscription_id}")
return {
'subscription_id': subscription_id,
'status': 'SUSPENDED',
}
def activate_subscription(self, subscription_id: str, reason: str = '') -> Dict[str, Any]:
"""
Activate/reactivate PayPal subscription.
Args:
subscription_id: PayPal subscription ID
reason: Reason for activation
Returns:
dict: Activation result
"""
self._make_request(
'POST',
f'/v1/billing/subscriptions/{subscription_id}/activate',
json_data={'reason': reason}
)
logger.info(f"Activated PayPal subscription {subscription_id}")
return {
'subscription_id': subscription_id,
'status': 'ACTIVE',
}
# ========== Webhook Verification ==========
def verify_webhook_signature(
self,
headers: dict,
body: dict,
) -> bool:
"""
Verify webhook signature from PayPal.
Args:
headers: Request headers (dict-like)
body: Request body (parsed JSON dict)
Returns:
bool: True if signature is valid
"""
if not self.webhook_id:
logger.warning("PayPal webhook_id not configured, skipping verification")
return True  # Fails open when unconfigured; return False here if policy requires failing closed
verification_data = {
'auth_algo': headers.get('PAYPAL-AUTH-ALGO'),
'cert_url': headers.get('PAYPAL-CERT-URL'),
'transmission_id': headers.get('PAYPAL-TRANSMISSION-ID'),
'transmission_sig': headers.get('PAYPAL-TRANSMISSION-SIG'),
'transmission_time': headers.get('PAYPAL-TRANSMISSION-TIME'),
'webhook_id': self.webhook_id,
'webhook_event': body,
}
try:
response = self._make_request(
'POST',
'/v1/notifications/verify-webhook-signature',
json_data=verification_data
)
is_valid = response.get('verification_status') == 'SUCCESS'
if not is_valid:
logger.warning(
f"PayPal webhook verification failed: {response.get('verification_status')}"
)
return is_valid
except PayPalAPIError as e:
logger.error(f"PayPal webhook verification error: {e}")
return False
# ========== Refunds ==========
def refund_capture(
self,
capture_id: str,
amount: Optional[float] = None,
currency: Optional[str] = None,
note: Optional[str] = None,
) -> Dict[str, Any]:
"""
Refund a captured payment.
Args:
capture_id: PayPal capture ID
amount: Amount to refund (None for full refund)
currency: Currency code
note: Note to payer
Returns:
dict: Refund details
"""
refund_data = {}
if amount:
refund_data['amount'] = {
'value': f'{amount:.2f}',
'currency_code': currency or self.currency,
}
if note:
refund_data['note_to_payer'] = note
response = self._make_request(
'POST',
f'/v2/payments/captures/{capture_id}/refund',
json_data=refund_data if refund_data else None
)
logger.info(
f"Refunded PayPal capture {capture_id}, refund_id={response.get('id')}"
)
return {
'refund_id': response.get('id'),
'status': response.get('status'),
'amount': response.get('amount', {}).get('value'),
'currency': response.get('amount', {}).get('currency_code'),
}
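The refund payload above serializes the amount with `f'{amount:.2f}'`, which can misround binary floats (e.g. `0.615` formats as `0.61`). A minimal sketch of a safer formatter — the helper name `money_str` is ours, not part of the service:

```python
from decimal import Decimal, ROUND_HALF_UP

def money_str(amount) -> str:
    """Format an amount as PayPal's two-decimal string value,
    quantizing via Decimal to avoid binary-float rounding surprises."""
    return str(Decimal(str(amount)).quantize(Decimal('0.01'),
                                             rounding=ROUND_HALF_UP))
```

Passing the amount through `str()` first keeps the decimal digits the caller wrote rather than the nearest binary float.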
# Convenience function
def get_paypal_service() -> PayPalService:
"""
Get PayPalService instance.
Returns:
PayPalService: Initialized service
Raises:
PayPalConfigurationError: If PayPal not configured
"""
return PayPalService()
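`verify_webhook_signature` reads raw PayPal header names, but a Django view sees them in `request.META` as `HTTP_`-prefixed, underscore-separated keys. A minimal sketch of the translation a caller might perform (the helper name is ours):

```python
def paypal_headers_from_meta(meta: dict) -> dict:
    """Map Django's request.META keys (HTTP_PAYPAL_...) back to the
    dash-separated header names that verify_webhook_signature reads."""
    wanted = [
        'PAYPAL-AUTH-ALGO',
        'PAYPAL-CERT-URL',
        'PAYPAL-TRANSMISSION-ID',
        'PAYPAL-TRANSMISSION-SIG',
        'PAYPAL-TRANSMISSION-TIME',
    ]
    return {name: meta.get('HTTP_' + name.replace('-', '_'))
            for name in wanted}
```

Missing headers come back as `None`, which the verification request will then reject rather than raising a KeyError in the view.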

View File

@@ -9,17 +9,32 @@ from reportlab.lib import colors
from reportlab.lib.pagesizes import letter
from reportlab.lib.styles import getSampleStyleSheet, ParagraphStyle
from reportlab.lib.units import inch
from reportlab.platypus import SimpleDocTemplate, Table, TableStyle, Paragraph, Spacer, Image
from reportlab.platypus import SimpleDocTemplate, Table, TableStyle, Paragraph, Spacer, Image, HRFlowable
from reportlab.lib.enums import TA_LEFT, TA_RIGHT, TA_CENTER
from django.conf import settings
import os
import logging
logger = logging.getLogger(__name__)
# Logo path - check multiple possible locations
LOGO_PATHS = [
'/data/app/igny8/frontend/public/images/logo/IGNY8_LIGHT_LOGO.png',
'/app/static/images/logo/IGNY8_LIGHT_LOGO.png',
]
class InvoicePDFGenerator:
"""Generate PDF invoices"""
@staticmethod
def get_logo_path():
"""Find the logo file from possible locations"""
for path in LOGO_PATHS:
if os.path.exists(path):
return path
return None
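The first-existing-path probe in `get_logo_path` is easy to exercise in isolation. A self-contained sketch of the same logic, written so it can be tested against a temporary file instead of the hard-coded `LOGO_PATHS`:

```python
import os
import tempfile

def first_existing(paths):
    """Return the first path that exists on disk, else None --
    the same probe get_logo_path performs over LOGO_PATHS."""
    for path in paths:
        if os.path.exists(path):
            return path
    return None
```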
@staticmethod
def generate_invoice_pdf(invoice):
"""
@@ -39,8 +54,8 @@ class InvoicePDFGenerator:
pagesize=letter,
rightMargin=0.75*inch,
leftMargin=0.75*inch,
topMargin=0.75*inch,
bottomMargin=0.75*inch
topMargin=0.5*inch,
bottomMargin=0.5*inch
)
# Container for PDF elements
@@ -51,17 +66,19 @@ class InvoicePDFGenerator:
title_style = ParagraphStyle(
'CustomTitle',
parent=styles['Heading1'],
fontSize=24,
fontSize=28,
textColor=colors.HexColor('#1f2937'),
spaceAfter=30,
spaceAfter=0,
fontName='Helvetica-Bold',
)
heading_style = ParagraphStyle(
'CustomHeading',
parent=styles['Heading2'],
fontSize=14,
textColor=colors.HexColor('#374151'),
spaceAfter=12,
fontSize=12,
textColor=colors.HexColor('#1f2937'),
spaceAfter=8,
fontName='Helvetica-Bold',
)
normal_style = ParagraphStyle(
@@ -69,145 +86,292 @@ class InvoicePDFGenerator:
parent=styles['Normal'],
fontSize=10,
textColor=colors.HexColor('#4b5563'),
fontName='Helvetica',
)
# Header
elements.append(Paragraph('INVOICE', title_style))
elements.append(Spacer(1, 0.2*inch))
label_style = ParagraphStyle(
'LabelStyle',
parent=styles['Normal'],
fontSize=9,
textColor=colors.HexColor('#6b7280'),
fontName='Helvetica',
)
# Company info and invoice details side by side
company_data = [
['<b>From:</b>', f'<b>Invoice #:</b> {invoice.invoice_number}'],
[getattr(settings, 'COMPANY_NAME', 'Igny8'), f'<b>Date:</b> {invoice.created_at.strftime("%B %d, %Y")}'],
[getattr(settings, 'COMPANY_ADDRESS', ''), f'<b>Due Date:</b> {invoice.due_date.strftime("%B %d, %Y")}'],
[getattr(settings, 'COMPANY_EMAIL', settings.DEFAULT_FROM_EMAIL), f'<b>Status:</b> {invoice.status.upper()}'],
]
value_style = ParagraphStyle(
'ValueStyle',
parent=styles['Normal'],
fontSize=10,
textColor=colors.HexColor('#1f2937'),
fontName='Helvetica-Bold',
)
company_table = Table(company_data, colWidths=[3.5*inch, 3*inch])
company_table.setStyle(TableStyle([
('FONTNAME', (0, 0), (-1, -1), 'Helvetica'),
('FONTSIZE', (0, 0), (-1, -1), 10),
('TEXTCOLOR', (0, 0), (-1, -1), colors.HexColor('#4b5563')),
('VALIGN', (0, 0), (-1, -1), 'TOP'),
('ALIGN', (1, 0), (1, -1), 'RIGHT'),
right_align_style = ParagraphStyle(
'RightAlign',
parent=styles['Normal'],
fontSize=10,
textColor=colors.HexColor('#4b5563'),
alignment=TA_RIGHT,
fontName='Helvetica',
)
right_bold_style = ParagraphStyle(
'RightBold',
parent=styles['Normal'],
fontSize=10,
textColor=colors.HexColor('#1f2937'),
alignment=TA_RIGHT,
fontName='Helvetica-Bold',
)
# Header with Logo and Invoice title
logo_path = InvoicePDFGenerator.get_logo_path()
header_data = []
if logo_path:
try:
logo = Image(logo_path, width=1.5*inch, height=0.5*inch)
logo.hAlign = 'LEFT'
header_data = [[logo, Paragraph('INVOICE', title_style)]]
except Exception as e:
logger.warning(f"Could not load logo: {e}")
header_data = [[Paragraph('IGNY8', title_style), Paragraph('INVOICE', title_style)]]
else:
header_data = [[Paragraph('IGNY8', title_style), Paragraph('INVOICE', title_style)]]
header_table = Table(header_data, colWidths=[3.5*inch, 3*inch])
header_table.setStyle(TableStyle([
('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
('ALIGN', (0, 0), (0, 0), 'LEFT'),
('ALIGN', (1, 0), (1, 0), 'RIGHT'),
]))
elements.append(company_table)
elements.append(header_table)
elements.append(Spacer(1, 0.3*inch))
# Bill to section
elements.append(Paragraph('<b>Bill To:</b>', heading_style))
bill_to_data = [
[invoice.account.name],
[invoice.account.owner.email],
# Divider line
elements.append(HRFlowable(width="100%", thickness=1, color=colors.HexColor('#e5e7eb'), spaceAfter=20))
# Invoice details section (right side info)
invoice_info = [
[Paragraph('Invoice Number:', label_style), Paragraph(invoice.invoice_number, value_style)],
[Paragraph('Date:', label_style), Paragraph(invoice.created_at.strftime("%B %d, %Y"), value_style)],
[Paragraph('Due Date:', label_style), Paragraph(invoice.due_date.strftime("%B %d, %Y"), value_style)],
[Paragraph('Status:', label_style), Paragraph(invoice.status.upper(), value_style)],
]
if hasattr(invoice.account, 'billing_email') and invoice.account.billing_email:
bill_to_data.append([f'Billing: {invoice.account.billing_email}'])
invoice_info_table = Table(invoice_info, colWidths=[1.2*inch, 2*inch])
invoice_info_table.setStyle(TableStyle([
('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
('BOTTOMPADDING', (0, 0), (-1, -1), 4),
('TOPPADDING', (0, 0), (-1, -1), 4),
]))
for line in bill_to_data:
elements.append(Paragraph(line[0], normal_style))
# From and To section
company_name = getattr(settings, 'COMPANY_NAME', 'Igny8')
company_email = getattr(settings, 'COMPANY_EMAIL', settings.DEFAULT_FROM_EMAIL)
elements.append(Spacer(1, 0.3*inch))
from_section = [
Paragraph('FROM', heading_style),
Paragraph(company_name, value_style),
Paragraph(company_email, normal_style),
]
customer_name = invoice.account.name if invoice.account else 'N/A'
customer_email = invoice.account.owner.email if invoice.account and invoice.account.owner else invoice.account.billing_email if invoice.account else 'N/A'
billing_email = invoice.account.billing_email if invoice.account and hasattr(invoice.account, 'billing_email') and invoice.account.billing_email else None
to_section = [
Paragraph('BILL TO', heading_style),
Paragraph(customer_name, value_style),
Paragraph(customer_email, normal_style),
]
if billing_email and billing_email != customer_email:
to_section.append(Paragraph(f'Billing: {billing_email}', normal_style))
# Create from/to layout
from_content = []
for item in from_section:
from_content.append([item])
from_table = Table(from_content, colWidths=[3*inch])
to_content = []
for item in to_section:
to_content.append([item])
to_table = Table(to_content, colWidths=[3*inch])
# Main info layout with From, To, and Invoice details
main_info = [[from_table, to_table, invoice_info_table]]
main_info_table = Table(main_info, colWidths=[2.3*inch, 2.3*inch, 2.4*inch])
main_info_table.setStyle(TableStyle([
('VALIGN', (0, 0), (-1, -1), 'TOP'),
]))
elements.append(main_info_table)
elements.append(Spacer(1, 0.4*inch))
# Line items table
elements.append(Paragraph('<b>Items:</b>', heading_style))
elements.append(Paragraph('ITEMS', heading_style))
elements.append(Spacer(1, 0.1*inch))
# Table header
# Table header - use Paragraph for proper rendering
line_items_data = [
['Description', 'Quantity', 'Unit Price', 'Amount']
[
Paragraph('Description', ParagraphStyle('Header', fontName='Helvetica-Bold', fontSize=10, textColor=colors.HexColor('#374151'))),
Paragraph('Qty', ParagraphStyle('Header', fontName='Helvetica-Bold', fontSize=10, textColor=colors.HexColor('#374151'), alignment=TA_CENTER)),
Paragraph('Unit Price', ParagraphStyle('Header', fontName='Helvetica-Bold', fontSize=10, textColor=colors.HexColor('#374151'), alignment=TA_RIGHT)),
Paragraph('Amount', ParagraphStyle('Header', fontName='Helvetica-Bold', fontSize=10, textColor=colors.HexColor('#374151'), alignment=TA_RIGHT)),
]
]
# Get line items
for item in invoice.line_items.all():
# Get line items - line_items is a JSON field (list of dicts)
items = invoice.line_items or []
for item in items:
unit_price = float(item.get('unit_price', 0))
amount = float(item.get('amount', 0))
line_items_data.append([
item.description,
str(item.quantity),
f'{invoice.currency} {item.unit_price:.2f}',
f'{invoice.currency} {item.total_price:.2f}'
Paragraph(item.get('description', ''), normal_style),
Paragraph(str(item.get('quantity', 1)), ParagraphStyle('Center', parent=normal_style, alignment=TA_CENTER)),
Paragraph(f'{invoice.currency} {unit_price:.2f}', right_align_style),
Paragraph(f'{invoice.currency} {amount:.2f}', right_align_style),
])
# Add subtotal, tax, total rows
line_items_data.append(['', '', '<b>Subtotal:</b>', f'<b>{invoice.currency} {invoice.subtotal:.2f}</b>'])
if invoice.tax_amount and invoice.tax_amount > 0:
line_items_data.append(['', '', f'Tax ({invoice.tax_rate}%):', f'{invoice.currency} {invoice.tax_amount:.2f}'])
if invoice.discount_amount and invoice.discount_amount > 0:
line_items_data.append(['', '', 'Discount:', f'-{invoice.currency} {invoice.discount_amount:.2f}'])
line_items_data.append(['', '', '<b>Total:</b>', f'<b>{invoice.currency} {invoice.total_amount:.2f}</b>'])
# Add empty row for spacing before totals
line_items_data.append(['', '', '', ''])
# Create table
line_items_table = Table(
line_items_data,
colWidths=[3*inch, 1*inch, 1.25*inch, 1.25*inch]
colWidths=[3.2*inch, 0.8*inch, 1.25*inch, 1.25*inch]
)
num_items = len(items)
line_items_table.setStyle(TableStyle([
# Header row
('BACKGROUND', (0, 0), (-1, 0), colors.HexColor('#f3f4f6')),
('TEXTCOLOR', (0, 0), (-1, 0), colors.HexColor('#1f2937')),
('FONTNAME', (0, 0), (-1, 0), 'Helvetica-Bold'),
('FONTSIZE', (0, 0), (-1, 0), 10),
('BOTTOMPADDING', (0, 0), (-1, 0), 12),
('TOPPADDING', (0, 0), (-1, 0), 12),
# Body rows
('FONTNAME', (0, 1), (-1, -4), 'Helvetica'),
('FONTSIZE', (0, 1), (-1, -4), 9),
('TEXTCOLOR', (0, 1), (-1, -4), colors.HexColor('#4b5563')),
('ROWBACKGROUNDS', (0, 1), (-1, -4), [colors.white, colors.HexColor('#f9fafb')]),
('ROWBACKGROUNDS', (0, 1), (-1, num_items), [colors.white, colors.HexColor('#f9fafb')]),
# Summary rows (last 3-4 rows)
('FONTNAME', (0, -4), (-1, -1), 'Helvetica'),
('FONTSIZE', (0, -4), (-1, -1), 9),
('ALIGN', (2, 0), (2, -1), 'RIGHT'),
('ALIGN', (3, 0), (3, -1), 'RIGHT'),
# Alignment
('ALIGN', (1, 0), (1, -1), 'CENTER'),
('ALIGN', (2, 0), (-1, -1), 'RIGHT'),
('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
# Grid
('GRID', (0, 0), (-1, -4), 0.5, colors.HexColor('#e5e7eb')),
('LINEABOVE', (2, -4), (-1, -4), 1, colors.HexColor('#d1d5db')),
('LINEABOVE', (2, -1), (-1, -1), 2, colors.HexColor('#1f2937')),
# Grid for items only
('LINEBELOW', (0, 0), (-1, 0), 1, colors.HexColor('#d1d5db')),
('LINEBELOW', (0, num_items), (-1, num_items), 1, colors.HexColor('#e5e7eb')),
# Padding
('TOPPADDING', (0, 0), (-1, -1), 8),
('BOTTOMPADDING', (0, 0), (-1, -1), 8),
('LEFTPADDING', (0, 0), (-1, -1), 10),
('RIGHTPADDING', (0, 0), (-1, -1), 10),
('TOPPADDING', (0, 1), (-1, -1), 10),
('BOTTOMPADDING', (0, 1), (-1, -1), 10),
('LEFTPADDING', (0, 0), (-1, -1), 8),
('RIGHTPADDING', (0, 0), (-1, -1), 8),
]))
elements.append(line_items_table)
elements.append(Spacer(1, 0.2*inch))
# Totals section - right aligned
totals_data = [
[Paragraph('Subtotal:', right_align_style), Paragraph(f'{invoice.currency} {float(invoice.subtotal):.2f}', right_bold_style)],
]
tax_amount = float(invoice.tax or 0)
if tax_amount > 0:
tax_rate = invoice.metadata.get('tax_rate', 0) if invoice.metadata else 0
totals_data.append([
Paragraph(f'Tax ({tax_rate}%):', right_align_style),
Paragraph(f'{invoice.currency} {tax_amount:.2f}', right_align_style)
])
discount_amount = float(invoice.metadata.get('discount_amount', 0)) if invoice.metadata else 0
if discount_amount > 0:
totals_data.append([
Paragraph('Discount:', right_align_style),
Paragraph(f'-{invoice.currency} {discount_amount:.2f}', right_align_style)
])
totals_data.append([
Paragraph('Total:', ParagraphStyle('TotalLabel', fontName='Helvetica-Bold', fontSize=12, textColor=colors.HexColor('#1f2937'), alignment=TA_RIGHT)),
Paragraph(f'{invoice.currency} {float(invoice.total):.2f}', ParagraphStyle('TotalValue', fontName='Helvetica-Bold', fontSize=12, textColor=colors.HexColor('#1f2937'), alignment=TA_RIGHT))
])
totals_table = Table(totals_data, colWidths=[1.5*inch, 1.5*inch])
totals_table.setStyle(TableStyle([
('ALIGN', (0, 0), (-1, -1), 'RIGHT'),
('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
('TOPPADDING', (0, 0), (-1, -1), 6),
('BOTTOMPADDING', (0, 0), (-1, -1), 6),
('LINEABOVE', (0, -1), (-1, -1), 2, colors.HexColor('#1f2937')),
]))
# Right-align the totals table
totals_wrapper = Table([[totals_table]], colWidths=[6.5*inch])
totals_wrapper.setStyle(TableStyle([
('ALIGN', (0, 0), (0, 0), 'RIGHT'),
]))
elements.append(totals_wrapper)
elements.append(Spacer(1, 0.4*inch))
# Payment information
if invoice.status == 'paid':
elements.append(Paragraph('<b>Payment Information:</b>', heading_style))
elements.append(HRFlowable(width="100%", thickness=1, color=colors.HexColor('#e5e7eb'), spaceBefore=10, spaceAfter=15))
elements.append(Paragraph('PAYMENT INFORMATION', heading_style))
payment = invoice.payments.filter(status='succeeded').first()
if payment:
payment_method = payment.get_payment_method_display() if hasattr(payment, 'get_payment_method_display') else str(payment.payment_method)
payment_date = payment.processed_at.strftime("%B %d, %Y") if payment.processed_at else 'N/A'
payment_info = [
f'Payment Method: {payment.get_payment_method_display()}',
f'Paid On: {payment.processed_at.strftime("%B %d, %Y")}',
[Paragraph('Payment Method:', label_style), Paragraph(payment_method, value_style)],
[Paragraph('Paid On:', label_style), Paragraph(payment_date, value_style)],
]
if payment.manual_reference:
payment_info.append(f'Reference: {payment.manual_reference}')
for line in payment_info:
elements.append(Paragraph(line, normal_style))
payment_info.append([Paragraph('Reference:', label_style), Paragraph(payment.manual_reference, value_style)])
payment_table = Table(payment_info, colWidths=[1.5*inch, 3*inch])
payment_table.setStyle(TableStyle([
('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
('BOTTOMPADDING', (0, 0), (-1, -1), 4),
('TOPPADDING', (0, 0), (-1, -1), 4),
]))
elements.append(payment_table)
elements.append(Spacer(1, 0.2*inch))
# Footer / Notes
if invoice.notes:
elements.append(Spacer(1, 0.2*inch))
elements.append(Paragraph('<b>Notes:</b>', heading_style))
elements.append(Paragraph('NOTES', heading_style))
elements.append(Paragraph(invoice.notes, normal_style))
# Terms
elements.append(Spacer(1, 0.3*inch))
elements.append(Paragraph('<b>Terms & Conditions:</b>', heading_style))
terms = getattr(settings, 'INVOICE_TERMS', 'Payment is due within 7 days of invoice date.')
elements.append(Paragraph(terms, normal_style))
elements.append(HRFlowable(width="100%", thickness=1, color=colors.HexColor('#e5e7eb'), spaceAfter=15))
terms_style = ParagraphStyle(
'Terms',
parent=styles['Normal'],
fontSize=8,
textColor=colors.HexColor('#9ca3af'),
fontName='Helvetica',
)
terms = getattr(settings, 'INVOICE_TERMS', 'Payment is due within 7 days of invoice date. Thank you for your business!')
elements.append(Paragraph(f'Terms & Conditions: {terms}', terms_style))
# Footer with company info
elements.append(Spacer(1, 0.2*inch))
footer_style = ParagraphStyle(
'Footer',
parent=styles['Normal'],
fontSize=8,
textColor=colors.HexColor('#9ca3af'),
fontName='Helvetica',
alignment=TA_CENTER,
)
elements.append(Paragraph(f'Generated by IGNY8 • {company_email}', footer_style))
# Build PDF
doc.build(elements)
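The totals section above renders subtotal, tax, discount, and total as separate rows. A sketch of the underlying arithmetic over the JSON line items — the field names mirror the invoice model used above, but the helper itself is ours, and whether `total` is stored or derived depends on the model:

```python
def compute_totals(line_items, tax=0.0, discount=0.0):
    """Sum JSON line items (each a dict with an 'amount' key) and apply
    tax and discount the way the totals block lays them out."""
    subtotal = sum(float(item.get('amount', 0)) for item in line_items)
    total = subtotal + float(tax) - float(discount)
    return {'subtotal': subtotal, 'tax': float(tax),
            'discount': float(discount), 'total': total}
```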

View File

@@ -0,0 +1,627 @@
"""
Stripe Service - Wrapper for Stripe API operations
Handles:
- Checkout sessions for subscriptions and credit packages
- Billing portal sessions for subscription management
- Webhook event construction and verification
- Customer management
Configuration stored in IntegrationProvider model (provider_id='stripe')
"""
import stripe
import logging
from typing import Optional, Dict, Any
from django.conf import settings
from django.utils import timezone
from igny8_core.modules.system.models import IntegrationProvider
logger = logging.getLogger(__name__)
class StripeConfigurationError(Exception):
"""Raised when Stripe is not properly configured"""
pass
class StripeService:
"""Service for Stripe payment operations"""
def __init__(self):
"""
Initialize Stripe service with credentials from IntegrationProvider.
Raises:
StripeConfigurationError: If Stripe provider not configured or missing credentials
"""
provider = IntegrationProvider.get_provider('stripe')
if not provider:
raise StripeConfigurationError(
"Stripe provider not configured. Add 'stripe' provider in admin."
)
if not provider.api_secret:
raise StripeConfigurationError(
"Stripe secret key not configured. Set api_secret in provider."
)
self.is_sandbox = provider.is_sandbox
self.provider = provider
# Set Stripe API key
stripe.api_key = provider.api_secret
# Store keys for reference
self.publishable_key = provider.api_key
self.webhook_secret = provider.webhook_secret
self.config = provider.config or {}
# Default currency from config
self.currency = self.config.get('currency', 'usd')
logger.info(
f"Stripe service initialized (sandbox={self.is_sandbox}, "
f"currency={self.currency})"
)
@property
def frontend_url(self) -> str:
"""Get frontend URL from Django settings"""
return getattr(settings, 'FRONTEND_URL', 'http://localhost:3000')
def get_publishable_key(self) -> str:
"""Return publishable key for frontend use"""
return self.publishable_key
# ========== Customer Management ==========
def _get_or_create_customer(self, account) -> str:
"""
Get existing Stripe customer or create new one.
Args:
account: Account model instance
Returns:
str: Stripe customer ID
"""
# Return existing customer if available
if account.stripe_customer_id:
try:
# Verify customer still exists in Stripe
stripe.Customer.retrieve(account.stripe_customer_id)
return account.stripe_customer_id
except stripe.error.InvalidRequestError:
# Customer was deleted, create new one
logger.warning(
f"Stripe customer {account.stripe_customer_id} not found, creating new"
)
# Create new customer
customer = stripe.Customer.create(
email=account.billing_email or account.owner.email,
name=account.name,
metadata={
'account_id': str(account.id),
'environment': 'sandbox' if self.is_sandbox else 'production'
},
)
# Save customer ID to account
account.stripe_customer_id = customer.id
account.save(update_fields=['stripe_customer_id', 'updated_at'])
logger.info(f"Created Stripe customer {customer.id} for account {account.id}")
return customer.id
def get_customer(self, account) -> Optional[Dict]:
"""
Get Stripe customer details.
Args:
account: Account model instance
Returns:
dict: Customer data or None if not found
"""
if not account.stripe_customer_id:
return None
try:
customer = stripe.Customer.retrieve(account.stripe_customer_id)
return {
'id': customer.id,
'email': customer.email,
'name': customer.name,
'created': customer.created,
'default_source': customer.default_source,
}
except stripe.error.InvalidRequestError:
return None
# ========== Checkout Sessions ==========
def create_checkout_session(
self,
account,
plan,
success_url: Optional[str] = None,
cancel_url: Optional[str] = None,
allow_promotion_codes: bool = True,
trial_period_days: Optional[int] = None,
) -> Dict[str, Any]:
"""
Create Stripe Checkout session for new subscription.
Args:
account: Account model instance
plan: Plan model instance with stripe_price_id
success_url: URL to redirect after successful payment
cancel_url: URL to redirect if payment is canceled
allow_promotion_codes: Allow discount codes in checkout
trial_period_days: Optional trial period (overrides plan default)
Returns:
dict: Session data with checkout_url and session_id
Raises:
ValueError: If plan has no stripe_price_id
"""
if not plan.stripe_price_id:
raise ValueError(
f"Plan '{plan.name}' (id={plan.id}) has no stripe_price_id configured"
)
# Get or create customer
customer_id = self._get_or_create_customer(account)
# Build URLs
if not success_url:
success_url = f'{self.frontend_url}/account/plans?success=true&session_id={{CHECKOUT_SESSION_ID}}'
if not cancel_url:
cancel_url = f'{self.frontend_url}/account/plans?canceled=true'
# Build subscription data
subscription_data = {
'metadata': {
'account_id': str(account.id),
'plan_id': str(plan.id),
}
}
if trial_period_days:
subscription_data['trial_period_days'] = trial_period_days
# Create checkout session
session = stripe.checkout.Session.create(
customer=customer_id,
payment_method_types=self.config.get('payment_methods', ['card']),
mode='subscription',
line_items=[{
'price': plan.stripe_price_id,
'quantity': 1,
}],
success_url=success_url,
cancel_url=cancel_url,
allow_promotion_codes=allow_promotion_codes,
metadata={
'account_id': str(account.id),
'plan_id': str(plan.id),
'type': 'subscription',
},
subscription_data=subscription_data,
)
logger.info(
f"Created Stripe checkout session {session.id} for account {account.id}, "
f"plan {plan.name}"
)
return {
'checkout_url': session.url,
'session_id': session.id,
}
def create_credit_checkout_session(
self,
account,
credit_package,
success_url: Optional[str] = None,
cancel_url: Optional[str] = None,
) -> Dict[str, Any]:
"""
Create Stripe Checkout session for one-time credit purchase.
Args:
account: Account model instance
credit_package: CreditPackage model instance
success_url: URL to redirect after successful payment
cancel_url: URL to redirect if payment is canceled
Returns:
dict: Session data with checkout_url and session_id
"""
# Get or create customer
customer_id = self._get_or_create_customer(account)
# Build URLs
if not success_url:
success_url = f'{self.frontend_url}/account/usage?purchase=success&session_id={{CHECKOUT_SESSION_ID}}'
if not cancel_url:
cancel_url = f'{self.frontend_url}/account/usage?purchase=canceled'
# Use existing Stripe price if available, otherwise create price_data
if credit_package.stripe_price_id:
line_items = [{
'price': credit_package.stripe_price_id,
'quantity': 1,
}]
else:
# Create price_data for dynamic pricing
line_items = [{
'price_data': {
'currency': self.currency,
'product_data': {
'name': credit_package.name,
'description': f'{credit_package.credits} credits',
},
'unit_amount': int(credit_package.price * 100), # Convert to cents
},
'quantity': 1,
}]
# Create checkout session
session = stripe.checkout.Session.create(
customer=customer_id,
payment_method_types=self.config.get('payment_methods', ['card']),
mode='payment',
line_items=line_items,
success_url=success_url,
cancel_url=cancel_url,
metadata={
'account_id': str(account.id),
'credit_package_id': str(credit_package.id),
'credit_amount': str(credit_package.credits),
'type': 'credit_purchase',
},
)
logger.info(
f"Created Stripe credit checkout session {session.id} for account {account.id}, "
f"package {credit_package.name} ({credit_package.credits} credits)"
)
return {
'checkout_url': session.url,
'session_id': session.id,
}
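If `credit_package.price` is a Django `DecimalField`, the `int(credit_package.price * 100)` conversion above is exact; with a binary float it silently truncates (`int(19.99 * 100)` is `1998`). A defensive sketch (the helper name is ours):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_cents(price) -> int:
    """Convert a decimal price to integer cents without float truncation."""
    quantized = Decimal(str(price)).quantize(Decimal('0.01'),
                                             rounding=ROUND_HALF_UP)
    return int(quantized * 100)
```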
def get_checkout_session(self, session_id: str) -> Optional[Dict]:
"""
Retrieve checkout session details.
Args:
session_id: Stripe checkout session ID
Returns:
dict: Session data or None if not found
"""
try:
session = stripe.checkout.Session.retrieve(session_id)
return {
'id': session.id,
'status': session.status,
'payment_status': session.payment_status,
'customer': session.customer,
'subscription': session.subscription,
'metadata': session.metadata,
'amount_total': session.amount_total,
'currency': session.currency,
}
except stripe.error.InvalidRequestError as e:
logger.error(f"Failed to retrieve checkout session {session_id}: {e}")
return None
# ========== Billing Portal ==========
def create_billing_portal_session(
self,
account,
return_url: Optional[str] = None,
) -> Dict[str, Any]:
"""
Create Stripe Billing Portal session for subscription management.
Allows customers to:
- Update payment method
- View billing history
- Cancel subscription
- Update billing info
Args:
account: Account model instance
return_url: URL to return to after portal session
Returns:
dict: Portal session data with portal_url
Raises:
ValueError: If account has no Stripe customer
"""
if not self.config.get('billing_portal_enabled', True):
raise ValueError("Billing portal is disabled in configuration")
# Get or create customer
customer_id = self._get_or_create_customer(account)
if not return_url:
return_url = f'{self.frontend_url}/account/plans'
# Create billing portal session
session = stripe.billing_portal.Session.create(
customer=customer_id,
return_url=return_url,
)
logger.info(
f"Created Stripe billing portal session for account {account.id}"
)
return {
'portal_url': session.url,
}
# ========== Subscription Management ==========
def get_subscription(self, subscription_id: str) -> Optional[Dict]:
"""
Get subscription details from Stripe.
Args:
subscription_id: Stripe subscription ID
Returns:
dict: Subscription data or None if not found
"""
try:
sub = stripe.Subscription.retrieve(subscription_id)
return {
'id': sub.id,
'status': sub.status,
'current_period_start': sub.current_period_start,
'current_period_end': sub.current_period_end,
'cancel_at_period_end': sub.cancel_at_period_end,
'canceled_at': sub.canceled_at,
'ended_at': sub.ended_at,
'customer': sub.customer,
'items': [{
'id': item.id,
'price_id': item.price.id,
'quantity': item.quantity,
} for item in sub['items'].data],
'metadata': sub.metadata,
}
except stripe.error.InvalidRequestError as e:
logger.error(f"Failed to retrieve subscription {subscription_id}: {e}")
return None
def cancel_subscription(
self,
subscription_id: str,
at_period_end: bool = True
) -> Dict[str, Any]:
"""
Cancel a Stripe subscription.
Args:
subscription_id: Stripe subscription ID
at_period_end: If True, cancel at end of billing period
Returns:
dict: Updated subscription data
"""
if at_period_end:
sub = stripe.Subscription.modify(
subscription_id,
cancel_at_period_end=True
)
logger.info(f"Subscription {subscription_id} marked for cancellation at period end")
else:
sub = stripe.Subscription.delete(subscription_id)
logger.info(f"Subscription {subscription_id} canceled immediately")
return {
'id': sub.id,
'status': sub.status,
'cancel_at_period_end': sub.cancel_at_period_end,
}
def update_subscription(
self,
subscription_id: str,
new_price_id: str,
proration_behavior: str = 'create_prorations'
) -> Dict[str, Any]:
"""
Update subscription to a new plan/price.
Args:
subscription_id: Stripe subscription ID
new_price_id: New Stripe price ID
proration_behavior: How to handle proration
- 'create_prorations': Prorate the change
- 'none': No proration
- 'always_invoice': Invoice immediately
Returns:
dict: Updated subscription data
"""
# Get current subscription
sub = stripe.Subscription.retrieve(subscription_id)
# Update the subscription item
updated = stripe.Subscription.modify(
subscription_id,
items=[{
'id': sub['items'].data[0].id,
'price': new_price_id,
}],
proration_behavior=proration_behavior,
)
logger.info(
f"Updated subscription {subscription_id} to price {new_price_id}"
)
return {
'id': updated.id,
'status': updated.status,
'current_period_end': updated.current_period_end,
}
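With `proration_behavior='create_prorations'`, Stripe credits the unused fraction of the old price and charges the same fraction of the new one. A back-of-the-envelope sketch of that math — Stripe's own calculation is second-precise, so this only illustrates the idea:

```python
def proration_amount(old_price_cents, new_price_cents, fraction_remaining):
    """Approximate net proration in cents: charge for the remaining
    fraction of the new price minus credit for the old one."""
    credit = round(old_price_cents * fraction_remaining)
    charge = round(new_price_cents * fraction_remaining)
    return charge - credit
```

A negative result corresponds to a net credit, which Stripe applies against the next invoice rather than refunding automatically.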
# ========== Webhook Handling ==========
def construct_webhook_event(
self,
payload: bytes,
sig_header: str
) -> stripe.Event:
"""
Verify and construct webhook event from Stripe.
Args:
payload: Raw request body
sig_header: Stripe-Signature header value
Returns:
stripe.Event: Verified event object
Raises:
stripe.error.SignatureVerificationError: If signature is invalid
"""
if not self.webhook_secret:
raise StripeConfigurationError(
"Webhook secret not configured. Set webhook_secret in provider."
)
return stripe.Webhook.construct_event(
payload, sig_header, self.webhook_secret
)
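For unit-testing the webhook endpoint without real Stripe traffic, a valid `Stripe-Signature` header can be forged from a known secret. Per Stripe's documented scheme, the `v1` signature is HMAC-SHA256 over `"<timestamp>.<payload>"`; the helper name here is ours:

```python
import hmac
import hashlib

def make_stripe_signature(payload: bytes, secret: str, timestamp: int) -> str:
    """Build a Stripe-Signature header value for test payloads:
    HMAC-SHA256 over '<t>.<payload>' keyed with the endpoint secret."""
    signed = f'{timestamp}.{payload.decode()}'.encode()
    sig = hmac.new(secret.encode(), signed, hashlib.sha256).hexdigest()
    return f't={timestamp},v1={sig}'
```

A header built this way should pass `construct_webhook_event` when the same secret is configured, subject to the library's timestamp tolerance.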
# ========== Invoice Operations ==========
def get_invoice(self, invoice_id: str) -> Optional[Dict]:
"""
Get invoice details from Stripe.
Args:
invoice_id: Stripe invoice ID
Returns:
dict: Invoice data or None if not found
"""
try:
invoice = stripe.Invoice.retrieve(invoice_id)
return {
'id': invoice.id,
'status': invoice.status,
'amount_due': invoice.amount_due,
'amount_paid': invoice.amount_paid,
'currency': invoice.currency,
'customer': invoice.customer,
'subscription': invoice.subscription,
'invoice_pdf': invoice.invoice_pdf,
'hosted_invoice_url': invoice.hosted_invoice_url,
}
except stripe.error.InvalidRequestError as e:
logger.error(f"Failed to retrieve invoice {invoice_id}: {e}")
return None
def get_upcoming_invoice(self, customer_id: str) -> Optional[Dict]:
"""
Get upcoming invoice for a customer.
Args:
customer_id: Stripe customer ID
Returns:
dict: Upcoming invoice preview or None
"""
try:
invoice = stripe.Invoice.upcoming(customer=customer_id)
return {
'amount_due': invoice.amount_due,
'currency': invoice.currency,
'next_payment_attempt': invoice.next_payment_attempt,
'lines': [{
'description': line.description,
'amount': line.amount,
} for line in invoice.lines.data],
}
except stripe.error.InvalidRequestError:
return None
# ========== Refunds ==========
def create_refund(
self,
payment_intent_id: Optional[str] = None,
charge_id: Optional[str] = None,
amount: Optional[int] = None,
reason: Optional[str] = None,
) -> Dict[str, Any]:
"""
Create a refund for a payment.
Args:
payment_intent_id: Stripe PaymentIntent ID
charge_id: Stripe Charge ID (alternative to payment_intent_id)
amount: Amount to refund in cents (None for full refund)
reason: Reason for refund ('duplicate', 'fraudulent', 'requested_by_customer')
Returns:
dict: Refund data
"""
params = {}
if payment_intent_id:
params['payment_intent'] = payment_intent_id
elif charge_id:
params['charge'] = charge_id
else:
raise ValueError("Either payment_intent_id or charge_id required")
if amount:
params['amount'] = amount
if reason:
params['reason'] = reason
refund = stripe.Refund.create(**params)
logger.info(
f"Created refund {refund.id} for "
f"{'payment_intent ' + payment_intent_id if payment_intent_id else 'charge ' + charge_id}"
)
return {
'id': refund.id,
'amount': refund.amount,
'status': refund.status,
'reason': refund.reason,
}
# Convenience function
def get_stripe_service() -> StripeService:
"""
Get StripeService instance.
Returns:
StripeService: Initialized service
Raises:
StripeConfigurationError: If Stripe not configured
"""
return StripeService()

View File

@@ -172,7 +172,7 @@ def _attempt_stripe_renewal(subscription: Subscription, invoice: Invoice) -> boo
payment_method='stripe',
status='processing',
stripe_payment_intent_id=intent.id,
metadata={'renewal': True}
metadata={'renewal': True, 'auto_approved': True}
)
return True
@@ -210,7 +210,7 @@ def _attempt_paypal_renewal(subscription: Subscription, invoice: Invoice) -> boo
payment_method='paypal',
status='processing',
paypal_order_id=subscription.metadata['paypal_subscription_id'],
metadata={'renewal': True}
metadata={'renewal': True, 'auto_approved': True}
)
return True
else:

View File

@@ -1,7 +1,7 @@
"""Billing routes including bank transfer confirmation and credit endpoints."""
from django.urls import path, include
from rest_framework.routers import DefaultRouter
from .views import (
from .billing_views import (
BillingViewSet,
InvoiceViewSet,
PaymentViewSet,
@@ -13,6 +13,25 @@ from igny8_core.modules.billing.views import (
CreditBalanceViewSet,
CreditUsageViewSet,
CreditTransactionViewSet,
AIModelConfigViewSet,
)
# Payment gateway views
from .views.stripe_views import (
StripeConfigView,
StripeCheckoutView,
StripeCreditCheckoutView,
StripeBillingPortalView,
StripeReturnVerificationView,
stripe_webhook,
)
from .views.paypal_views import (
PayPalConfigView,
PayPalCreateOrderView,
PayPalCreateSubscriptionOrderView,
PayPalCaptureOrderView,
PayPalCreateSubscriptionView,
PayPalReturnVerificationView,
paypal_webhook,
)
router = DefaultRouter()
@@ -21,6 +40,8 @@ router.register(r'admin', BillingViewSet, basename='billing-admin')
router.register(r'credits/balance', CreditBalanceViewSet, basename='credit-balance')
router.register(r'credits/usage', CreditUsageViewSet, basename='credit-usage')
router.register(r'credits/transactions', CreditTransactionViewSet, basename='credit-transactions')
# AI Models endpoint
router.register(r'ai/models', AIModelConfigViewSet, basename='ai-models')
# User-facing billing endpoints
router.register(r'invoices', InvoiceViewSet, basename='invoices')
router.register(r'payments', PaymentViewSet, basename='payments')
@@ -32,4 +53,21 @@ urlpatterns = [
path('', include(router.urls)),
# User-facing usage summary endpoint for plan limits
path('usage-summary/', get_usage_summary, name='usage-summary'),
# Stripe endpoints
path('stripe/config/', StripeConfigView.as_view(), name='stripe-config'),
path('stripe/checkout/', StripeCheckoutView.as_view(), name='stripe-checkout'),
path('stripe/credit-checkout/', StripeCreditCheckoutView.as_view(), name='stripe-credit-checkout'),
path('stripe/billing-portal/', StripeBillingPortalView.as_view(), name='stripe-billing-portal'),
path('stripe/verify-return/', StripeReturnVerificationView.as_view(), name='stripe-verify-return'),
path('webhooks/stripe/', stripe_webhook, name='stripe-webhook'),
# PayPal endpoints
path('paypal/config/', PayPalConfigView.as_view(), name='paypal-config'),
path('paypal/create-order/', PayPalCreateOrderView.as_view(), name='paypal-create-order'),
path('paypal/create-subscription-order/', PayPalCreateSubscriptionOrderView.as_view(), name='paypal-create-subscription-order'),
path('paypal/capture-order/', PayPalCaptureOrderView.as_view(), name='paypal-capture-order'),
path('paypal/create-subscription/', PayPalCreateSubscriptionView.as_view(), name='paypal-create-subscription'),
path('paypal/verify-return/', PayPalReturnVerificationView.as_view(), name='paypal-verify-return'),
path('webhooks/paypal/', paypal_webhook, name='paypal-webhook'),
]

View File

@@ -5,6 +5,8 @@ API endpoints for generating and downloading invoice PDFs
from django.http import HttpResponse
from rest_framework.decorators import api_view, permission_classes
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework import status
from igny8_core.business.billing.models import Invoice
from igny8_core.business.billing.services.pdf_service import InvoicePDFGenerator
from igny8_core.business.billing.utils.errors import not_found_response
@@ -22,20 +24,46 @@ def download_invoice_pdf(request, invoice_id):
GET /api/v1/billing/invoices/<id>/pdf/
"""
try:
invoice = Invoice.objects.prefetch_related('line_items').get(
# Note: line_items is a JSONField, not a related model - no prefetch needed
invoice = Invoice.objects.select_related('account', 'account__owner', 'subscription', 'subscription__plan').get(
id=invoice_id,
account=request.user.account
)
except Invoice.DoesNotExist:
return not_found_response('Invoice', invoice_id)
# Generate PDF
pdf_buffer = InvoicePDFGenerator.generate_invoice_pdf(invoice)
# Return PDF response
response = HttpResponse(pdf_buffer.read(), content_type='application/pdf')
response['Content-Disposition'] = f'attachment; filename="invoice_{invoice.invoice_number}.pdf"'
logger.info(f'Invoice PDF downloaded: {invoice.invoice_number} by user {request.user.id}')
return response
try:
# Generate PDF
pdf_buffer = InvoicePDFGenerator.generate_invoice_pdf(invoice)
# Build descriptive filename: IGNY8-Invoice-INV123456-Growth-2026-01-08.pdf
plan_name = ''
if invoice.subscription and invoice.subscription.plan:
plan_name = invoice.subscription.plan.name.replace(' ', '-')
elif invoice.metadata and 'plan_name' in invoice.metadata:
plan_name = invoice.metadata['plan_name'].replace(' ', '-')
date_str = invoice.invoice_date.strftime('%Y-%m-%d') if invoice.invoice_date else ''
filename_parts = ['IGNY8', 'Invoice', invoice.invoice_number]
if plan_name:
filename_parts.append(plan_name)
if date_str:
filename_parts.append(date_str)
filename = '-'.join(filename_parts) + '.pdf'
# Return PDF response
response = HttpResponse(pdf_buffer.read(), content_type='application/pdf')
response['Content-Disposition'] = f'attachment; filename="{filename}"'
logger.info(f'Invoice PDF downloaded: {invoice.invoice_number} by user {request.user.id}')
return response
except Exception as e:
logger.error(f'Failed to generate PDF for invoice {invoice_id}: {str(e)}', exc_info=True)
return Response(
{'error': 'Failed to generate PDF', 'detail': str(e)},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
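The descriptive-filename logic above is easy to verify in isolation; a minimal sketch (helper name is illustrative):

```python
def build_invoice_filename(invoice_number: str, plan_name: str = '', date_str: str = '') -> str:
    """Mirror of the filename assembly in download_invoice_pdf."""
    parts = ['IGNY8', 'Invoice', invoice_number]
    if plan_name:
        parts.append(plan_name.replace(' ', '-'))  # spaces become hyphens
    if date_str:
        parts.append(date_str)  # YYYY-MM-DD from invoice_date
    return '-'.join(parts) + '.pdf'
```

Missing plan or date simply drops that segment, so the filename degrades gracefully to `IGNY8-Invoice-<number>.pdf`.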

File diff suppressed because it is too large

View File

@@ -160,20 +160,18 @@ def initiate_refund(request, payment_id):
def _process_stripe_refund(payment: Payment, amount: Decimal, reason: str) -> bool:
"""Process Stripe refund"""
try:
import stripe
from igny8_core.business.billing.utils.payment_gateways import get_stripe_client
from igny8_core.business.billing.services.stripe_service import StripeService
stripe_client = get_stripe_client()
stripe_service = StripeService()
refund = stripe_client.Refund.create(
payment_intent=payment.stripe_payment_intent_id,
refund = stripe_service.create_refund(
payment_intent_id=payment.stripe_payment_intent_id,
amount=int(amount * 100), # Convert to cents
reason='requested_by_customer',
metadata={'reason': reason}
)
payment.metadata['stripe_refund_id'] = refund.id
return refund.status == 'succeeded'
payment.metadata['stripe_refund_id'] = refund.get('id')
return refund.get('status') == 'succeeded'
except Exception as e:
logger.exception(f"Stripe refund failed for payment {payment.id}: {str(e)}")
@@ -183,25 +181,19 @@ def _process_stripe_refund(payment: Payment, amount: Decimal, reason: str) -> bo
def _process_paypal_refund(payment: Payment, amount: Decimal, reason: str) -> bool:
"""Process PayPal refund"""
try:
from igny8_core.business.billing.utils.payment_gateways import get_paypal_client
from igny8_core.business.billing.services.paypal_service import PayPalService
paypal_client = get_paypal_client()
paypal_service = PayPalService()
refund_request = {
'amount': {
'value': str(amount),
'currency_code': payment.currency
},
'note_to_payer': reason
}
refund = paypal_client.payments.captures.refund(
payment.paypal_capture_id,
refund_request
refund = paypal_service.refund_capture(
capture_id=payment.paypal_capture_id,
amount=float(amount),
currency=payment.currency,
note=reason,
)
payment.metadata['paypal_refund_id'] = refund.id
return refund.status == 'COMPLETED'
payment.metadata['paypal_refund_id'] = refund.get('id')
return refund.get('status') == 'COMPLETED'
except Exception as e:
logger.exception(f"PayPal refund failed for payment {payment.id}: {str(e)}")
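Both refund paths convert the same `Decimal` amount differently: Stripe wants integer minor units (`int(amount * 100)`), PayPal a major-unit string. The sketch below shows one way to express both conversions; the explicit `quantize` calls are our addition, not in the diff, but match the `int(amount * 100)` behavior for well-formed two-decimal amounts.

```python
from decimal import Decimal, ROUND_HALF_UP

def to_stripe_cents(amount: Decimal) -> int:
    """Stripe refunds take minor units; quantize before int() to avoid truncation surprises."""
    return int((amount * 100).quantize(Decimal('1'), rounding=ROUND_HALF_UP))

def to_paypal_value(amount: Decimal) -> str:
    """PayPal refunds take a two-decimal string in major units."""
    return str(amount.quantize(Decimal('0.01')))
```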

File diff suppressed because it is too large

View File

@@ -119,10 +119,40 @@ class Tasks(SoftDeletableModel, SiteSectorBaseModel):
objects = SoftDeleteManager()
all_objects = models.Manager()
def __str__(self):
return self.title
def soft_delete(self, user=None, reason=None, retention_days=None):
"""
Override soft_delete to cascade to related models.
This ensures Images and ContentClusterMap are also deleted when a Task is deleted.
"""
import logging
logger = logging.getLogger(__name__)
# Soft-delete related Images (which are also SoftDeletable)
related_images = self.images.filter(is_deleted=False)
images_count = related_images.count()
for image in related_images:
image.soft_delete(user=user, reason=f"Parent task deleted: {reason or 'No reason'}")
# Hard-delete ContentClusterMap (not soft-deletable)
cluster_maps_count = self.cluster_mappings.count()
self.cluster_mappings.all().delete()
# Hard-delete ContentAttribute (not soft-deletable)
attributes_count = self.attribute_mappings.count()
self.attribute_mappings.all().delete()
logger.info(
f"[Tasks.soft_delete] Task {self.id} '{self.title}' cascade delete: "
f"{images_count} images, {cluster_maps_count} cluster maps, {attributes_count} attributes"
)
# Call parent soft_delete
super().soft_delete(user=user, reason=reason, retention_days=retention_days)
class ContentTaxonomyRelation(models.Model):
"""Through model for Content-Taxonomy many-to-many relationship"""
@@ -241,7 +271,8 @@ class Content(SoftDeletableModel, SiteSectorBaseModel):
STATUS_CHOICES = [
('draft', 'Draft'),
('review', 'Review'),
('published', 'Published'),
('approved', 'Approved'), # Ready for publishing to external site
('published', 'Published'), # Actually published on external site
]
status = models.CharField(
max_length=50,
@@ -251,6 +282,33 @@ class Content(SoftDeletableModel, SiteSectorBaseModel):
help_text="Content status"
)
# Publishing scheduler fields
SITE_STATUS_CHOICES = [
('not_published', 'Not Published'),
('scheduled', 'Scheduled'),
('publishing', 'Publishing'),
('published', 'Published'),
('failed', 'Failed'),
]
site_status = models.CharField(
max_length=50,
choices=SITE_STATUS_CHOICES,
default='not_published',
db_index=True,
help_text="External site publishing status"
)
scheduled_publish_at = models.DateTimeField(
null=True,
blank=True,
db_index=True,
help_text="Scheduled time for publishing to external site"
)
site_status_updated_at = models.DateTimeField(
null=True,
blank=True,
help_text="Last time site_status was changed"
)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
@@ -326,6 +384,61 @@ class Content(SoftDeletableModel, SiteSectorBaseModel):
logger = logging.getLogger(__name__)
logger.error(f"Error incrementing word usage for content {self.id}: {str(e)}")
def soft_delete(self, user=None, reason=None, retention_days=None):
"""
Override soft_delete to cascade to related models.
This ensures Images, ContentClusterMap, ContentAttribute are also deleted.
"""
import logging
logger = logging.getLogger(__name__)
# Soft-delete related Images (which are also SoftDeletable)
related_images = self.images.filter(is_deleted=False)
images_count = related_images.count()
for image in related_images:
image.soft_delete(user=user, reason=f"Parent content deleted: {reason or 'No reason'}")
# Hard-delete ContentClusterMap (not soft-deletable)
cluster_maps_count = self.cluster_mappings.count()
self.cluster_mappings.all().delete()
# Hard-delete ContentAttribute (not soft-deletable)
attributes_count = self.attributes.count()
self.attributes.all().delete()
# Hard-delete ContentTaxonomyRelation (through model for many-to-many)
taxonomy_relations_count = ContentTaxonomyRelation.objects.filter(content=self).count()
ContentTaxonomyRelation.objects.filter(content=self).delete()
logger.info(
f"[Content.soft_delete] Content {self.id} '{self.title}' cascade delete: "
f"{images_count} images, {cluster_maps_count} cluster maps, "
f"{attributes_count} attributes, {taxonomy_relations_count} taxonomy relations"
)
# Call parent soft_delete
super().soft_delete(user=user, reason=reason, retention_days=retention_days)
def hard_delete(self, using=None, keep_parents=False):
"""
Override hard_delete to cascade to related models.
Django CASCADE should handle this, but we explicitly clean up for safety.
"""
import logging
logger = logging.getLogger(__name__)
# Hard-delete related Images (including soft-deleted ones)
images_count = Images.all_objects.filter(content=self).count()
Images.all_objects.filter(content=self).delete()
logger.info(
f"[Content.hard_delete] Content {self.id} '{self.title}' hard delete: "
f"{images_count} images removed"
)
# Call parent hard_delete (Django CASCADE will handle the rest)
return super().hard_delete(using=using, keep_parents=keep_parents)
class ContentTaxonomy(SiteSectorBaseModel):
"""
@@ -436,6 +549,7 @@ class Images(SoftDeletableModel, SiteSectorBaseModel):
image_url = models.CharField(max_length=500, blank=True, null=True, help_text="URL of the generated/stored image")
image_path = models.CharField(max_length=500, blank=True, null=True, help_text="Local path if stored locally")
prompt = models.TextField(blank=True, null=True, help_text="Image generation prompt used")
caption = models.TextField(blank=True, null=True, help_text="Image caption (40-60 words) to display with the image")
status = models.CharField(max_length=50, default='pending', help_text="Status: pending, generated, failed")
position = models.IntegerField(default=0, help_text="Position for in-article images ordering")
created_at = models.DateTimeField(auto_now_add=True)
@@ -454,10 +568,33 @@ class Images(SoftDeletableModel, SiteSectorBaseModel):
models.Index(fields=['content', 'position']),
models.Index(fields=['task', 'position']),
]
# Ensure unique position per content+image_type combination
constraints = [
models.UniqueConstraint(
fields=['content', 'image_type', 'position'],
name='unique_content_image_type_position',
condition=models.Q(is_deleted=False)
),
]
objects = SoftDeleteManager()
all_objects = models.Manager()
@property
def aspect_ratio(self):
"""
Determine aspect ratio based on position for layout rendering.
Position 0, 2: square (1:1)
Position 1, 3: landscape (16:9 or similar)
Featured: always landscape
"""
if self.image_type == 'featured':
return 'landscape'
elif self.image_type == 'in_article':
# Even positions are square, odd positions are landscape
return 'square' if (self.position or 0) % 2 == 0 else 'landscape'
return 'square' # Default
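The `aspect_ratio` property is pure logic on `image_type` and `position`, so it can be sanity-checked as a plain function (this standalone mirror is for illustration only):

```python
def aspect_ratio(image_type: str, position: int = 0) -> str:
    """Standalone mirror of Images.aspect_ratio."""
    if image_type == 'featured':
        return 'landscape'          # featured images are always landscape
    if image_type == 'in_article':
        # even positions square, odd positions landscape
        return 'square' if (position or 0) % 2 == 0 else 'landscape'
    return 'square'                 # default for any other type
```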
def save(self, *args, **kwargs):
"""Track image usage when creating new images"""
is_new = self.pk is None
@@ -674,3 +811,14 @@ class ContentAttribute(SiteSectorBaseModel):
# Backward compatibility alias
ContentAttributeMap = ContentAttribute
class ImagePrompts(Images):
"""
Proxy model for Images to provide a separate admin interface focused on prompts.
This allows a dedicated "Image Prompts" view in the admin sidebar.
"""
class Meta:
proxy = True
verbose_name = 'Image Prompt'
verbose_name_plural = 'Image Prompts'
app_label = 'writer'

View File

@@ -26,17 +26,7 @@ class ContentValidationService:
"""
errors = []
# Stage 3: Enforce "no cluster, no task" rule when feature flag enabled
from django.conf import settings
if getattr(settings, 'USE_SITE_BUILDER_REFACTOR', False):
if not task.cluster:
errors.append({
'field': 'cluster',
'code': 'missing_cluster',
'message': 'Task must be associated with a cluster before content generation',
})
# Stage 3: Validate entity_type is set
# Validate entity_type is set
if not task.content_type:
errors.append({
'field': 'content_type',

View File

@@ -16,8 +16,18 @@ class SyncEventResource(resources.ModelResource):
export_order = fields
class SiteIntegrationResource(resources.ModelResource):
"""Resource class for exporting Site Integrations"""
class Meta:
model = SiteIntegration
fields = ('id', 'site__name', 'platform', 'platform_type', 'is_active',
'sync_enabled', 'sync_status', 'last_sync_at', 'created_at')
export_order = fields
@admin.register(SiteIntegration)
class SiteIntegrationAdmin(AccountAdminMixin, Igny8ModelAdmin):
class SiteIntegrationAdmin(ExportMixin, AccountAdminMixin, Igny8ModelAdmin):
resource_class = SiteIntegrationResource
list_display = [
'site',
'platform',
@@ -30,7 +40,13 @@ class SiteIntegrationAdmin(AccountAdminMixin, Igny8ModelAdmin):
list_filter = ['platform', 'platform_type', 'is_active', 'sync_enabled', 'sync_status']
search_fields = ['site__name', 'site__domain', 'platform']
readonly_fields = ['created_at', 'updated_at']
actions = ['bulk_enable_sync', 'bulk_disable_sync', 'bulk_trigger_sync']
actions = [
'bulk_enable_sync',
'bulk_disable_sync',
'bulk_trigger_sync',
'bulk_test_connection',
'bulk_delete_integrations',
]
def bulk_enable_sync(self, request, queryset):
"""Enable sync for selected integrations"""
@@ -52,6 +68,29 @@ class SiteIntegrationAdmin(AccountAdminMixin, Igny8ModelAdmin):
count += 1
self.message_user(request, f'{count} integration(s) queued for sync.', messages.INFO)
bulk_trigger_sync.short_description = 'Trigger sync now'
def bulk_test_connection(self, request, queryset):
"""Test connection for selected integrations"""
tested = 0
successful = 0
for integration in queryset.filter(is_active=True):
# TODO: Implement actual connection test logic
tested += 1
successful += 1 # Placeholder
self.message_user(
request,
f'Tested {tested} integration(s). {successful} successful. (Connection test logic to be implemented)',
messages.INFO
)
bulk_test_connection.short_description = 'Test connections'
def bulk_delete_integrations(self, request, queryset):
"""Delete selected integrations"""
count = queryset.count()
queryset.delete()
self.message_user(request, f'{count} integration(s) deleted.', messages.SUCCESS)
bulk_delete_integrations.short_description = 'Delete selected integrations'
@admin.register(SyncEvent)
@@ -69,7 +108,10 @@ class SyncEventAdmin(ExportMixin, AccountAdminMixin, Igny8ModelAdmin):
list_filter = ['event_type', 'action', 'success', 'created_at']
search_fields = ['integration__site__name', 'site__name', 'description', 'external_id']
readonly_fields = ['created_at']
actions = ['bulk_mark_reviewed']
actions = [
'bulk_mark_reviewed',
'bulk_delete_old_events',
]
def bulk_mark_reviewed(self, request, queryset):
"""Mark selected sync events as reviewed"""
@@ -77,4 +119,16 @@ class SyncEventAdmin(ExportMixin, AccountAdminMixin, Igny8ModelAdmin):
count = queryset.count()
self.message_user(request, f'{count} sync event(s) marked as reviewed.', messages.SUCCESS)
bulk_mark_reviewed.short_description = 'Mark as reviewed'
def bulk_delete_old_events(self, request, queryset):
"""Delete sync events older than 30 days"""
from django.utils import timezone
from datetime import timedelta
cutoff_date = timezone.now() - timedelta(days=30)
old_events = queryset.filter(created_at__lt=cutoff_date)
count = old_events.count()
old_events.delete()
self.message_user(request, f'{count} old sync event(s) deleted (older than 30 days).', messages.SUCCESS)
bulk_delete_old_events.short_description = 'Delete old events (>30 days)'
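The `created_at__lt=cutoff_date` filter in `bulk_delete_old_events` reduces to a strict comparison; expressed as a plain predicate (illustrative, with `now` injected so it is testable):

```python
from datetime import datetime, timedelta

def is_old_event(created_at: datetime, now: datetime, days: int = 30) -> bool:
    """True when the event falls before the cutoff, i.e. created_at__lt=cutoff_date."""
    return created_at < now - timedelta(days=days)
```

An event created exactly 30 days ago is not deleted, since the lookup is strictly less-than.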

View File

@@ -0,0 +1,38 @@
# Generated by Django 5.2.9 on 2026-01-01 06:37
import django.core.validators
import django.db.models.deletion
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0018_add_country_remove_intent_seedkeyword'),
('integration', '0002_add_sync_event_model'),
]
operations = [
migrations.CreateModel(
name='PublishingSettings',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('auto_approval_enabled', models.BooleanField(default=True, help_text="Automatically approve content after review (moves to 'approved' status)")),
('auto_publish_enabled', models.BooleanField(default=True, help_text='Automatically publish approved content to the external site')),
('daily_publish_limit', models.PositiveIntegerField(default=3, help_text='Maximum number of articles to publish per day', validators=[django.core.validators.MinValueValidator(1)])),
('weekly_publish_limit', models.PositiveIntegerField(default=15, help_text='Maximum number of articles to publish per week', validators=[django.core.validators.MinValueValidator(1)])),
('monthly_publish_limit', models.PositiveIntegerField(default=50, help_text='Maximum number of articles to publish per month', validators=[django.core.validators.MinValueValidator(1)])),
('publish_days', models.JSONField(default=list, help_text='Days of the week to publish (mon, tue, wed, thu, fri, sat, sun)')),
('publish_time_slots', models.JSONField(default=list, help_text="Times of day to publish (HH:MM format, e.g., ['09:00', '14:00', '18:00'])")),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('account', models.ForeignKey(db_column='tenant_id', on_delete=django.db.models.deletion.CASCADE, related_name='%(class)s_set', to='igny8_core_auth.account')),
('site', models.OneToOneField(help_text='Site these publishing settings belong to', on_delete=django.db.models.deletion.CASCADE, related_name='publishing_settings', to='igny8_core_auth.site')),
],
options={
'verbose_name': 'Publishing Settings',
'verbose_name_plural': 'Publishing Settings',
'db_table': 'igny8_publishing_settings',
},
),
]

View File

@@ -244,3 +244,100 @@ class SyncEvent(AccountBaseModel):
def __str__(self):
return f"{self.get_event_type_display()} - {self.description[:50]}"
class PublishingSettings(AccountBaseModel):
"""
Site-level publishing configuration settings.
Controls automatic approval, publishing limits, and scheduling.
"""
DEFAULT_PUBLISH_DAYS = ['mon', 'tue', 'wed', 'thu', 'fri']
DEFAULT_TIME_SLOTS = ['09:00', '14:00', '18:00']
site = models.OneToOneField(
'igny8_core_auth.Site',
on_delete=models.CASCADE,
related_name='publishing_settings',
help_text="Site these publishing settings belong to"
)
# Auto-approval settings
auto_approval_enabled = models.BooleanField(
default=True,
help_text="Automatically approve content after review (moves to 'approved' status)"
)
# Auto-publish settings
auto_publish_enabled = models.BooleanField(
default=True,
help_text="Automatically publish approved content to the external site"
)
# Publishing limits
daily_publish_limit = models.PositiveIntegerField(
default=3,
validators=[MinValueValidator(1)],
help_text="Maximum number of articles to publish per day"
)
weekly_publish_limit = models.PositiveIntegerField(
default=15,
validators=[MinValueValidator(1)],
help_text="Maximum number of articles to publish per week"
)
monthly_publish_limit = models.PositiveIntegerField(
default=50,
validators=[MinValueValidator(1)],
help_text="Maximum number of articles to publish per month"
)
# Publishing schedule
publish_days = models.JSONField(
default=list,
help_text="Days of the week to publish (mon, tue, wed, thu, fri, sat, sun)"
)
publish_time_slots = models.JSONField(
default=list,
help_text="Times of day to publish (HH:MM format, e.g., ['09:00', '14:00', '18:00'])"
)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
app_label = 'integration'
db_table = 'igny8_publishing_settings'
verbose_name = 'Publishing Settings'
verbose_name_plural = 'Publishing Settings'
def __str__(self):
return f"Publishing Settings for {self.site.name}"
def save(self, *args, **kwargs):
"""Set defaults for JSON fields if empty"""
if not self.publish_days:
self.publish_days = self.DEFAULT_PUBLISH_DAYS
if not self.publish_time_slots:
self.publish_time_slots = self.DEFAULT_TIME_SLOTS
super().save(*args, **kwargs)
@classmethod
def get_or_create_for_site(cls, site):
"""Get or create publishing settings for a site with defaults"""
settings, created = cls.objects.get_or_create(
site=site,
defaults={
'account': site.account,
'auto_approval_enabled': True,
'auto_publish_enabled': True,
'daily_publish_limit': 3,
'weekly_publish_limit': 15,
'monthly_publish_limit': 50,
'publish_days': cls.DEFAULT_PUBLISH_DAYS,
'publish_time_slots': cls.DEFAULT_TIME_SLOTS,
}
)
return settings, created

View File

@@ -0,0 +1,259 @@
"""
Defaults Service
Creates sites with default settings for simplified onboarding.
"""
import logging
from typing import Dict, Any, Tuple, Optional
from django.db import transaction
from django.utils import timezone
from igny8_core.auth.models import Account, Site
from igny8_core.business.integration.models import PublishingSettings
from igny8_core.business.automation.models import AutomationConfig
logger = logging.getLogger(__name__)
# Default settings for new sites
DEFAULT_PUBLISHING_SETTINGS = {
'auto_approval_enabled': True,
'auto_publish_enabled': True,
'daily_publish_limit': 3,
'weekly_publish_limit': 15,
'monthly_publish_limit': 50,
'publish_days': ['mon', 'tue', 'wed', 'thu', 'fri'],
'publish_time_slots': ['09:00', '14:00', '18:00'],
}
DEFAULT_AUTOMATION_SETTINGS = {
'is_enabled': True,
'frequency': 'daily',
'scheduled_time': '02:00',
'stage_1_batch_size': 50,
'stage_2_batch_size': 1,
'stage_3_batch_size': 20,
'stage_4_batch_size': 1,
'stage_5_batch_size': 1,
'stage_6_batch_size': 1,
'within_stage_delay': 3,
'between_stage_delay': 5,
}
class DefaultsService:
"""
Service for creating sites with sensible defaults.
Used during onboarding for a simplified first-run experience.
"""
def __init__(self, account: Account):
self.account = account
@transaction.atomic
def create_site_with_defaults(
self,
site_data: Dict[str, Any],
publishing_overrides: Optional[Dict[str, Any]] = None,
automation_overrides: Optional[Dict[str, Any]] = None,
) -> Tuple[Site, PublishingSettings, AutomationConfig]:
"""
Create a new site with default publishing and automation settings.
Args:
site_data: Dict with site fields (name, domain, etc.)
publishing_overrides: Optional overrides for publishing settings
automation_overrides: Optional overrides for automation settings
Returns:
Tuple of (Site, PublishingSettings, AutomationConfig)
"""
# Check hard limit for sites BEFORE creating
from igny8_core.business.billing.services.limit_service import LimitService, HardLimitExceededError
LimitService.check_hard_limit(self.account, 'sites', additional_count=1)
# Create the site
site = Site.objects.create(
account=self.account,
name=site_data.get('name', 'My Site'),
domain=site_data.get('domain', ''),
base_url=site_data.get('base_url', ''),
hosting_type=site_data.get('hosting_type', 'wordpress'),
is_active=site_data.get('is_active', True),
)
logger.info(f"Created site: {site.name} (id={site.id}) for account {self.account.id}")
# Create publishing settings with defaults
publishing_settings = self._create_publishing_settings(
site,
overrides=publishing_overrides
)
# Create automation config with defaults
automation_config = self._create_automation_config(
site,
overrides=automation_overrides
)
return site, publishing_settings, automation_config
def _create_publishing_settings(
self,
site: Site,
overrides: Optional[Dict[str, Any]] = None
) -> PublishingSettings:
"""Create publishing settings with defaults, applying any overrides."""
settings_data = {**DEFAULT_PUBLISHING_SETTINGS}
if overrides:
settings_data.update(overrides)
publishing_settings = PublishingSettings.objects.create(
account=self.account,
site=site,
**settings_data
)
logger.info(
f"Created publishing settings for site {site.id}: "
f"auto_approval={publishing_settings.auto_approval_enabled}, "
f"auto_publish={publishing_settings.auto_publish_enabled}"
)
return publishing_settings
def _create_automation_config(
self,
site: Site,
overrides: Optional[Dict[str, Any]] = None
) -> AutomationConfig:
"""Create automation config with defaults, applying any overrides."""
config_data = {**DEFAULT_AUTOMATION_SETTINGS}
if overrides:
config_data.update(overrides)
# Calculate next run time (tomorrow at scheduled time)
scheduled_time = config_data.pop('scheduled_time', '02:00')
automation_config = AutomationConfig.objects.create(
account=self.account,
site=site,
scheduled_time=scheduled_time,
**config_data
)
# Set next run to tomorrow at scheduled time if enabled
if automation_config.is_enabled:
next_run = self._calculate_initial_next_run(scheduled_time)
automation_config.next_run_at = next_run
automation_config.save(update_fields=['next_run_at'])
logger.info(
f"Created automation config for site {site.id}: "
f"enabled={automation_config.is_enabled}, "
f"frequency={automation_config.frequency}, "
f"next_run={automation_config.next_run_at}"
)
return automation_config
def _calculate_initial_next_run(self, scheduled_time: str) -> timezone.datetime:
"""Calculate the initial next run datetime (tomorrow at scheduled time)."""
now = timezone.now()
# Parse time
try:
hour, minute = map(int, scheduled_time.split(':'))
except (ValueError, AttributeError):
hour, minute = 2, 0 # Default to 2:00 AM
# Set to tomorrow at the scheduled time
next_run = now.replace(
hour=hour,
minute=minute,
second=0,
microsecond=0
)
# If the time has passed today, schedule for tomorrow
if next_run <= now:
next_run += timezone.timedelta(days=1)
return next_run
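`_calculate_initial_next_run` is worth tracing: despite the docstring saying "tomorrow", it schedules for today when the slot has not yet passed, and only rolls to the next day otherwise. A standalone sketch with `now` injected for testability (our refactor, not the service's signature):

```python
from datetime import datetime, timedelta

def initial_next_run(now: datetime, scheduled_time: str) -> datetime:
    """Mirror of _calculate_initial_next_run with an injected clock."""
    try:
        hour, minute = map(int, scheduled_time.split(':'))
    except (ValueError, AttributeError):
        hour, minute = 2, 0  # fall back to 02:00 on malformed input
    next_run = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if next_run <= now:
        next_run += timedelta(days=1)  # slot already passed today
    return next_run
```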
@transaction.atomic
def apply_defaults_to_existing_site(
self,
site: Site,
force_overwrite: bool = False
) -> Tuple[PublishingSettings, AutomationConfig]:
"""
Apply default settings to an existing site.
Args:
site: Existing Site instance
force_overwrite: If True, overwrite existing settings. If False, only create if missing.
Returns:
Tuple of (PublishingSettings, AutomationConfig)
"""
# Handle publishing settings
if force_overwrite:
PublishingSettings.objects.filter(site=site).delete()
publishing_settings = self._create_publishing_settings(site)
else:
publishing_settings, created = PublishingSettings.objects.get_or_create(
site=site,
defaults={
'account': self.account,
**DEFAULT_PUBLISHING_SETTINGS
}
)
if not created:
logger.info(f"Publishing settings already exist for site {site.id}")
# Handle automation config
if force_overwrite:
AutomationConfig.objects.filter(site=site).delete()
automation_config = self._create_automation_config(site)
else:
try:
automation_config = AutomationConfig.objects.get(site=site)
logger.info(f"Automation config already exists for site {site.id}")
except AutomationConfig.DoesNotExist:
automation_config = self._create_automation_config(site)
return publishing_settings, automation_config
def create_site_with_defaults(
account: Account,
site_data: Dict[str, Any],
publishing_overrides: Optional[Dict[str, Any]] = None,
automation_overrides: Optional[Dict[str, Any]] = None,
) -> Tuple[Site, PublishingSettings, AutomationConfig]:
"""
Convenience function to create a site with default settings.
This is the main entry point for the onboarding flow.
Usage:
from igny8_core.business.integration.services.defaults_service import create_site_with_defaults
site, pub_settings, auto_config = create_site_with_defaults(
account=request.user.account,
site_data={
'name': 'My Blog',
'domain': 'myblog.com',
'hosting_type': 'wordpress',
}
)
"""
service = DefaultsService(account)
return service.create_site_with_defaults(
site_data,
publishing_overrides=publishing_overrides,
automation_overrides=automation_overrides,
)

View File

@@ -63,10 +63,14 @@ class LinkerService:
content.linker_version += 1
content.save(update_fields=['html_content', 'internal_links', 'linker_version'])
# Deduct credits
# Deduct credits (non-AI operation - use fixed token estimate)
# Estimate: 1 token per 4 characters of HTML content
estimated_tokens = len(content.html_content or '') // 4
self.credit_service.deduct_credits_for_operation(
account=account,
operation_type='linking',
tokens_input=estimated_tokens,
tokens_output=0, # No output tokens for linking operation
description=f"Internal linking for content: {content.title or 'Untitled'}",
related_object_type='content',
related_object_id=content.id
@@ -139,10 +143,14 @@ class LinkerService:
content.linker_version += 1
content.save(update_fields=['html_content', 'internal_links', 'linker_version'])
# Deduct credits (non-AI operation - use fixed token estimate)
# Estimate: 1 token per 4 characters of HTML content
estimated_tokens = len(content.html_content or '') // 4
self.credit_service.deduct_credits_for_operation(
account=account,
operation_type='linking',
tokens_input=estimated_tokens,
tokens_output=0,
description=f"Product linking for: {content.title or 'Untitled'}",
related_object_type='content',
related_object_id=content.id
@@ -193,10 +201,14 @@ class LinkerService:
content.linker_version += 1
content.save(update_fields=['html_content', 'internal_links', 'linker_version'])
# Deduct credits (non-AI operation - use fixed token estimate)
# Estimate: 1 token per 4 characters of HTML content
estimated_tokens = len(content.html_content or '') // 4
self.credit_service.deduct_credits_for_operation(
account=account,
operation_type='linking',
tokens_input=estimated_tokens,
tokens_output=0,
description=f"Taxonomy linking for: {content.title or 'Untitled'}",
related_object_type='content',
related_object_id=content.id
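The "1 token per 4 characters" heuristic used in each hunk above can be sketched as a standalone helper (illustrative only; `estimate_tokens` is a hypothetical name, not part of the codebase):

```python
def estimate_tokens(html_content):
    """Rough token estimate for non-AI operations: ~1 token per 4 characters.

    Mirrors the inline estimate in LinkerService; None is treated as empty.
    """
    return len(html_content or '') // 4

print(estimate_tokens('<p>Hello, world!</p>'))  # 20 chars -> 5 tokens
print(estimate_tokens(None))                    # 0
```

Integer floor division means content under 4 characters rounds down to zero tokens, so very small fragments deduct nothing.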

View File

@@ -0,0 +1 @@
# Notifications module

View File

@@ -0,0 +1,40 @@
"""
Notification Admin Configuration
"""
from django.contrib import admin
from unfold.admin import ModelAdmin
from .models import Notification
@admin.register(Notification)
class NotificationAdmin(ModelAdmin):
list_display = ['title', 'notification_type', 'severity', 'account', 'user', 'is_read', 'created_at']
list_filter = ['notification_type', 'severity', 'is_read', 'created_at']
search_fields = ['title', 'message', 'account__name', 'user__email']
readonly_fields = ['created_at', 'updated_at', 'read_at']
ordering = ['-created_at']
fieldsets = (
('Notification', {
'fields': ('account', 'user', 'notification_type', 'severity')
}),
('Content', {
'fields': ('title', 'message', 'site')
}),
('Action', {
'fields': ('action_url', 'action_label')
}),
('Status', {
'fields': ('is_read', 'read_at')
}),
('Metadata', {
'fields': ('metadata',),
'classes': ('collapse',)
}),
('Timestamps', {
'fields': ('created_at', 'updated_at'),
'classes': ('collapse',)
}),
)

View File

@@ -0,0 +1,13 @@
"""
Notifications App Configuration
"""
from django.apps import AppConfig
class NotificationsConfig(AppConfig):
"""Configuration for the notifications app."""
default_auto_field = 'django.db.models.BigAutoField'
name = 'igny8_core.business.notifications'
label = 'notifications'
verbose_name = 'Notifications'

View File

@@ -0,0 +1,45 @@
# Generated by Django 5.2.9 on 2025-12-27 22:02
import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
('contenttypes', '0002_remove_content_type_name'),
('igny8_core_auth', '0018_add_country_remove_intent_seedkeyword'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Notification',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True, db_index=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('notification_type', models.CharField(choices=[('ai_cluster_complete', 'Clustering Complete'), ('ai_cluster_failed', 'Clustering Failed'), ('ai_ideas_complete', 'Ideas Generated'), ('ai_ideas_failed', 'Idea Generation Failed'), ('ai_content_complete', 'Content Generated'), ('ai_content_failed', 'Content Generation Failed'), ('ai_images_complete', 'Images Generated'), ('ai_images_failed', 'Image Generation Failed'), ('ai_prompts_complete', 'Image Prompts Created'), ('ai_prompts_failed', 'Image Prompts Failed'), ('content_ready_review', 'Content Ready for Review'), ('content_published', 'Content Published'), ('content_publish_failed', 'Publishing Failed'), ('wordpress_sync_success', 'WordPress Sync Complete'), ('wordpress_sync_failed', 'WordPress Sync Failed'), ('credits_low', 'Credits Running Low'), ('credits_depleted', 'Credits Depleted'), ('site_setup_complete', 'Site Setup Complete'), ('keywords_imported', 'Keywords Imported'), ('system_info', 'System Information')], default='system_info', max_length=50)),
('title', models.CharField(max_length=200)),
('message', models.TextField()),
('severity', models.CharField(choices=[('info', 'Info'), ('success', 'Success'), ('warning', 'Warning'), ('error', 'Error')], default='info', max_length=20)),
('object_id', models.PositiveIntegerField(blank=True, null=True)),
('action_url', models.CharField(blank=True, max_length=500, null=True)),
('action_label', models.CharField(blank=True, max_length=50, null=True)),
('is_read', models.BooleanField(default=False)),
('read_at', models.DateTimeField(blank=True, null=True)),
('metadata', models.JSONField(blank=True, default=dict)),
('account', models.ForeignKey(db_column='tenant_id', on_delete=django.db.models.deletion.CASCADE, related_name='%(class)s_set', to='igny8_core_auth.account')),
('content_type', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='contenttypes.contenttype')),
('site', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='notifications', to='igny8_core_auth.site')),
('user', models.ForeignKey(blank=True, help_text='If null, notification is visible to all account users', null=True, on_delete=django.db.models.deletion.CASCADE, related_name='notifications', to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ['-created_at'],
'indexes': [models.Index(fields=['account', '-created_at'], name='notificatio_tenant__3b20a7_idx'), models.Index(fields=['account', 'is_read', '-created_at'], name='notificatio_tenant__9a5521_idx'), models.Index(fields=['user', '-created_at'], name='notificatio_user_id_05b4bc_idx')],
},
),
]

View File

@@ -0,0 +1,191 @@
"""
Notification Models for IGNY8
This module provides a notification system for tracking AI operations,
workflow events, and system alerts.
"""
from django.db import models
from django.conf import settings
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from igny8_core.auth.models import AccountBaseModel
class NotificationType(models.TextChoices):
"""Notification type choices"""
# AI Operations
AI_CLUSTER_COMPLETE = 'ai_cluster_complete', 'Clustering Complete'
AI_CLUSTER_FAILED = 'ai_cluster_failed', 'Clustering Failed'
AI_IDEAS_COMPLETE = 'ai_ideas_complete', 'Ideas Generated'
AI_IDEAS_FAILED = 'ai_ideas_failed', 'Idea Generation Failed'
AI_CONTENT_COMPLETE = 'ai_content_complete', 'Content Generated'
AI_CONTENT_FAILED = 'ai_content_failed', 'Content Generation Failed'
AI_IMAGES_COMPLETE = 'ai_images_complete', 'Images Generated'
AI_IMAGES_FAILED = 'ai_images_failed', 'Image Generation Failed'
AI_PROMPTS_COMPLETE = 'ai_prompts_complete', 'Image Prompts Created'
AI_PROMPTS_FAILED = 'ai_prompts_failed', 'Image Prompts Failed'
# Workflow
CONTENT_READY_REVIEW = 'content_ready_review', 'Content Ready for Review'
CONTENT_PUBLISHED = 'content_published', 'Content Published'
CONTENT_PUBLISH_FAILED = 'content_publish_failed', 'Publishing Failed'
# WordPress Sync
WORDPRESS_SYNC_SUCCESS = 'wordpress_sync_success', 'WordPress Sync Complete'
WORDPRESS_SYNC_FAILED = 'wordpress_sync_failed', 'WordPress Sync Failed'
# Credits/Billing
CREDITS_LOW = 'credits_low', 'Credits Running Low'
CREDITS_DEPLETED = 'credits_depleted', 'Credits Depleted'
# Setup
SITE_SETUP_COMPLETE = 'site_setup_complete', 'Site Setup Complete'
KEYWORDS_IMPORTED = 'keywords_imported', 'Keywords Imported'
# System
SYSTEM_INFO = 'system_info', 'System Information'
class NotificationSeverity(models.TextChoices):
"""Notification severity choices"""
INFO = 'info', 'Info'
SUCCESS = 'success', 'Success'
WARNING = 'warning', 'Warning'
ERROR = 'error', 'Error'
class Notification(AccountBaseModel):
"""
Notification model for tracking events and alerts
Notifications are account-scoped (via AccountBaseModel) and can optionally target specific users.
They support generic relations to link to any related object.
"""
user = models.ForeignKey(
settings.AUTH_USER_MODEL,
on_delete=models.CASCADE,
null=True,
blank=True,
related_name='notifications',
help_text='If null, notification is visible to all account users'
)
# Notification content
notification_type = models.CharField(
max_length=50,
choices=NotificationType.choices,
default=NotificationType.SYSTEM_INFO
)
title = models.CharField(max_length=200)
message = models.TextField()
severity = models.CharField(
max_length=20,
choices=NotificationSeverity.choices,
default=NotificationSeverity.INFO
)
# Related site (optional)
site = models.ForeignKey(
'igny8_core_auth.Site',
on_delete=models.CASCADE,
null=True,
blank=True,
related_name='notifications'
)
# Generic relation to any object
content_type = models.ForeignKey(
ContentType,
on_delete=models.CASCADE,
null=True,
blank=True
)
object_id = models.PositiveIntegerField(null=True, blank=True)
content_object = GenericForeignKey('content_type', 'object_id')
# Action
action_url = models.CharField(max_length=500, null=True, blank=True)
action_label = models.CharField(max_length=50, null=True, blank=True)
# Status
is_read = models.BooleanField(default=False)
read_at = models.DateTimeField(null=True, blank=True)
# Metadata for counts/details
metadata = models.JSONField(default=dict, blank=True)
class Meta:
ordering = ['-created_at']
indexes = [
models.Index(fields=['account', '-created_at']),
models.Index(fields=['account', 'is_read', '-created_at']),
models.Index(fields=['user', '-created_at']),
]
def __str__(self):
return f"{self.title} ({self.notification_type})"
def mark_as_read(self):
"""Mark notification as read"""
if not self.is_read:
from django.utils import timezone
self.is_read = True
self.read_at = timezone.now()
self.save(update_fields=['is_read', 'read_at', 'updated_at'])
@classmethod
def create_notification(
cls,
account,
notification_type: str,
title: str,
message: str,
severity: str = NotificationSeverity.INFO,
user=None,
site=None,
content_object=None,
action_url: str = None,
action_label: str = None,
metadata: dict = None
):
"""
Factory method to create notifications
Args:
account: The account this notification belongs to
notification_type: Type from NotificationType choices
title: Notification title
message: Notification message body
severity: Severity level from NotificationSeverity choices
user: Optional specific user (if None, visible to all account users)
site: Optional related site
content_object: Optional related object (using GenericForeignKey)
action_url: Optional URL for action button
action_label: Optional label for action button
metadata: Optional dict with additional data (counts, etc.)
Returns:
Created Notification instance
"""
notification = cls(
account=account,
user=user,
notification_type=notification_type,
title=title,
message=message,
severity=severity,
site=site,
action_url=action_url,
action_label=action_label,
metadata=metadata or {}
)
if content_object:
notification.content_type = ContentType.objects.get_for_model(content_object)
notification.object_id = content_object.pk
notification.save()
return notification

View File

@@ -0,0 +1,90 @@
"""
Notification Serializers
"""
from rest_framework import serializers
from .models import Notification
class NotificationSerializer(serializers.ModelSerializer):
"""Serializer for Notification model"""
site_name = serializers.CharField(source='site.name', read_only=True, default=None)
class Meta:
model = Notification
fields = [
'id',
'notification_type',
'title',
'message',
'severity',
'site',
'site_name',
'action_url',
'action_label',
'is_read',
'read_at',
'metadata',
'created_at',
]
read_only_fields = ['id', 'created_at', 'read_at']
class NotificationListSerializer(serializers.ModelSerializer):
"""Lightweight serializer for notification lists"""
site_name = serializers.CharField(source='site.name', read_only=True, default=None)
time_ago = serializers.SerializerMethodField()
class Meta:
model = Notification
fields = [
'id',
'notification_type',
'title',
'message',
'severity',
'site_name',
'action_url',
'action_label',
'is_read',
'created_at',
'time_ago',
'metadata',
]
def get_time_ago(self, obj):
"""Return human-readable time since notification"""
from django.utils import timezone
from datetime import timedelta
now = timezone.now()
diff = now - obj.created_at
if diff < timedelta(minutes=1):
return 'Just now'
elif diff < timedelta(hours=1):
minutes = int(diff.total_seconds() / 60)
return f'{minutes} minute{"s" if minutes != 1 else ""} ago'
elif diff < timedelta(days=1):
hours = int(diff.total_seconds() / 3600)
return f'{hours} hour{"s" if hours != 1 else ""} ago'
elif diff < timedelta(days=7):
days = diff.days
if days == 1:
return 'Yesterday'
return f'{days} days ago'
else:
return obj.created_at.strftime('%b %d, %Y')
class MarkReadSerializer(serializers.Serializer):
"""Serializer for marking notifications as read"""
notification_ids = serializers.ListField(
child=serializers.IntegerField(),
required=False,
help_text='List of notification IDs to mark as read. If empty, marks all as read.'
)
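The branching in `get_time_ago` above can be exercised outside Django with plain datetimes (a minimal sketch; the standalone `time_ago` function is illustrative, not part of the serializer):

```python
from datetime import datetime, timedelta

def time_ago(created_at, now):
    """Mirror of NotificationListSerializer.get_time_ago, minus the ORM object."""
    diff = now - created_at
    if diff < timedelta(minutes=1):
        return 'Just now'
    if diff < timedelta(hours=1):
        minutes = int(diff.total_seconds() / 60)
        return f'{minutes} minute{"s" if minutes != 1 else ""} ago'
    if diff < timedelta(days=1):
        hours = int(diff.total_seconds() / 3600)
        return f'{hours} hour{"s" if hours != 1 else ""} ago'
    if diff < timedelta(days=7):
        return 'Yesterday' if diff.days == 1 else f'{diff.days} days ago'
    return created_at.strftime('%b %d, %Y')

now = datetime(2026, 1, 9, 12, 0)
print(time_ago(now - timedelta(seconds=30), now))       # Just now
print(time_ago(now - timedelta(minutes=5), now))        # 5 minutes ago
print(time_ago(now - timedelta(days=1, hours=2), now))  # Yesterday
```

Note the boundaries are exclusive: a diff of exactly one minute falls into the "minutes ago" branch, and exactly one day reads as "Yesterday".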

View File

@@ -0,0 +1,306 @@
"""
Notification Service
Provides methods to create notifications for various events in the system.
"""
from .models import Notification, NotificationType, NotificationSeverity
class NotificationService:
"""Service for creating notifications"""
@staticmethod
def notify_clustering_complete(account, site=None, cluster_count=0, keyword_count=0, user=None):
"""Create notification when keyword clustering completes"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_CLUSTER_COMPLETE,
title='Clustering Complete',
message=f'Created {cluster_count} clusters from {keyword_count} keywords',
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
action_url='/planner/clusters',
action_label='View Clusters',
metadata={'cluster_count': cluster_count, 'keyword_count': keyword_count}
)
@staticmethod
def notify_clustering_failed(account, site=None, error=None, user=None):
"""Create notification when keyword clustering fails"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_CLUSTER_FAILED,
title='Clustering Failed',
message=f'Failed to cluster keywords: {error}' if error else 'Failed to cluster keywords',
severity=NotificationSeverity.ERROR,
user=user,
site=site,
action_url='/planner/keywords',
action_label='View Keywords',
metadata={'error': str(error) if error else None}
)
@staticmethod
def notify_ideas_complete(account, site=None, idea_count=0, cluster_count=0, user=None):
"""Create notification when idea generation completes"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_IDEAS_COMPLETE,
title='Ideas Generated',
message=f'Generated {idea_count} content ideas from {cluster_count} clusters',
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
action_url='/planner/ideas',
action_label='View Ideas',
metadata={'idea_count': idea_count, 'cluster_count': cluster_count}
)
@staticmethod
def notify_ideas_failed(account, site=None, error=None, user=None):
"""Create notification when idea generation fails"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_IDEAS_FAILED,
title='Idea Generation Failed',
message=f'Failed to generate ideas: {error}' if error else 'Failed to generate ideas',
severity=NotificationSeverity.ERROR,
user=user,
site=site,
action_url='/planner/clusters',
action_label='View Clusters',
metadata={'error': str(error) if error else None}
)
@staticmethod
def notify_content_complete(account, site=None, article_count=0, word_count=0, user=None):
"""Create notification when content generation completes"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_CONTENT_COMPLETE,
title='Content Generated',
message=f'Generated {article_count} article{"s" if article_count != 1 else ""} ({word_count:,} words)',
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
action_url='/writer/content',
action_label='View Content',
metadata={'article_count': article_count, 'word_count': word_count}
)
@staticmethod
def notify_content_failed(account, site=None, error=None, user=None):
"""Create notification when content generation fails"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_CONTENT_FAILED,
title='Content Generation Failed',
message=f'Failed to generate content: {error}' if error else 'Failed to generate content',
severity=NotificationSeverity.ERROR,
user=user,
site=site,
action_url='/writer/tasks',
action_label='View Tasks',
metadata={'error': str(error) if error else None}
)
@staticmethod
def notify_images_complete(account, site=None, image_count=0, user=None):
"""Create notification when image generation completes"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_IMAGES_COMPLETE,
title='Images Generated',
message=f'Generated {image_count} image{"s" if image_count != 1 else ""}',
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
action_url='/writer/images',
action_label='View Images',
metadata={'image_count': image_count}
)
@staticmethod
def notify_images_failed(account, site=None, error=None, image_count=0, user=None):
"""Create notification when image generation fails"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_IMAGES_FAILED,
title='Image Generation Failed',
message=f'Failed to generate {image_count} image{"s" if image_count != 1 else ""}: {error}' if error else 'Failed to generate images',
severity=NotificationSeverity.ERROR,
user=user,
site=site,
action_url='/writer/images',
action_label='View Images',
metadata={'error': str(error) if error else None, 'image_count': image_count}
)
@staticmethod
def notify_prompts_complete(account, site=None, prompt_count=0, user=None):
"""Create notification when image prompt generation completes"""
in_article_count = prompt_count - 1 if prompt_count > 1 else 0
message = f'{prompt_count} image prompts ready (1 featured + {in_article_count} in-article)' if in_article_count > 0 else '1 image prompt ready'
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_PROMPTS_COMPLETE,
title='Image Prompts Created',
message=message,
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
action_url='/writer/images',
action_label='Generate Images',
metadata={'prompt_count': prompt_count, 'in_article_count': in_article_count}
)
@staticmethod
def notify_prompts_failed(account, site=None, error=None, user=None):
"""Create notification when image prompt generation fails"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_PROMPTS_FAILED,
title='Image Prompts Failed',
message=f'Failed to create image prompts: {error}' if error else 'Failed to create image prompts',
severity=NotificationSeverity.ERROR,
user=user,
site=site,
action_url='/writer/content',
action_label='View Content',
metadata={'error': str(error) if error else None}
)
@staticmethod
def notify_content_published(account, site=None, title='', content_object=None, user=None):
"""Create notification when content is published"""
site_name = site.name if site else 'site'
return Notification.create_notification(
account=account,
notification_type=NotificationType.CONTENT_PUBLISHED,
title='Content Published',
message=f'"{title}" published to {site_name}',
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
content_object=content_object,
action_url='/writer/published',
action_label='View Published',
metadata={'content_title': title}
)
@staticmethod
def notify_publish_failed(account, site=None, title='', error=None, user=None):
"""Create notification when publishing fails"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.CONTENT_PUBLISH_FAILED,
title='Publishing Failed',
message=f'Failed to publish "{title}": {error}' if error else f'Failed to publish "{title}"',
severity=NotificationSeverity.ERROR,
user=user,
site=site,
action_url='/writer/review',
action_label='View Review',
metadata={'content_title': title, 'error': str(error) if error else None}
)
@staticmethod
def notify_wordpress_sync_success(account, site=None, count=0, user=None):
"""Create notification when WordPress sync succeeds"""
site_name = site.name if site else 'site'
return Notification.create_notification(
account=account,
notification_type=NotificationType.WORDPRESS_SYNC_SUCCESS,
title='WordPress Synced',
message=f'Synced {count} item{"s" if count != 1 else ""} with {site_name}',
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
action_url='/writer/published',
action_label='View Published',
metadata={'sync_count': count}
)
@staticmethod
def notify_wordpress_sync_failed(account, site=None, error=None, user=None):
"""Create notification when WordPress sync fails"""
site_name = site.name if site else 'site'
return Notification.create_notification(
account=account,
notification_type=NotificationType.WORDPRESS_SYNC_FAILED,
title='Sync Failed',
message=f'WordPress sync failed for {site_name}: {error}' if error else f'WordPress sync failed for {site_name}',
severity=NotificationSeverity.ERROR,
user=user,
site=site,
action_url=f'/sites/{site.id}/integrations' if site else '/sites',
action_label='Check Integration',
metadata={'error': str(error) if error else None}
)
@staticmethod
def notify_credits_low(account, percentage_used=80, credits_remaining=0, user=None):
"""Create notification when credits are running low"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.CREDITS_LOW,
title='Credits Running Low',
message=f"You've used {percentage_used}% of your credits. {credits_remaining} credits remaining.",
severity=NotificationSeverity.WARNING,
user=user,
action_url='/account/billing',
action_label='Upgrade Plan',
metadata={'percentage_used': percentage_used, 'credits_remaining': credits_remaining}
)
@staticmethod
def notify_credits_depleted(account, user=None):
"""Create notification when credits are depleted"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.CREDITS_DEPLETED,
title='Credits Depleted',
message='Your credits are exhausted. Upgrade to continue using AI features.',
severity=NotificationSeverity.ERROR,
user=user,
action_url='/account/billing',
action_label='Upgrade Now',
metadata={}
)
@staticmethod
def notify_site_setup_complete(account, site=None, user=None):
"""Create notification when site setup is complete"""
site_name = site.name if site else 'Site'
return Notification.create_notification(
account=account,
notification_type=NotificationType.SITE_SETUP_COMPLETE,
title='Site Ready',
message=f'{site_name} is fully configured and ready!',
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
action_url=f'/sites/{site.id}' if site else '/sites',
action_label='View Site',
metadata={}
)
@staticmethod
def notify_keywords_imported(account, site=None, count=0, user=None):
"""Create notification when keywords are imported"""
site_name = site.name if site else 'site'
return Notification.create_notification(
account=account,
notification_type=NotificationType.KEYWORDS_IMPORTED,
title='Keywords Imported',
message=f'Added {count} keyword{"s" if count != 1 else ""} to {site_name}',
severity=NotificationSeverity.INFO,
user=user,
site=site,
action_url='/planner/keywords',
action_label='View Keywords',
metadata={'keyword_count': count}
)
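The trickiest message formatting in the service above is the featured/in-article split in `notify_prompts_complete`. That logic can be isolated as a plain function (a sketch; `prompts_message` is a hypothetical name used only for illustration):

```python
def prompts_message(prompt_count):
    """Mirror of the message logic in notify_prompts_complete.

    The first prompt is always the featured image; any remainder are
    in-article prompts.
    """
    in_article_count = prompt_count - 1 if prompt_count > 1 else 0
    if in_article_count > 0:
        return f'{prompt_count} image prompts ready (1 featured + {in_article_count} in-article)'
    return '1 image prompt ready'

print(prompts_message(5))  # 5 image prompts ready (1 featured + 4 in-article)
print(prompts_message(1))  # 1 image prompt ready
```

As written, a count of zero also yields "1 image prompt ready", matching the service's behavior; the completion notification is presumably never fired with zero prompts.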

View File

@@ -0,0 +1,15 @@
"""
Notification URL Configuration
"""
from django.urls import path, include
from rest_framework.routers import DefaultRouter
from .views import NotificationViewSet
router = DefaultRouter()
router.register(r'notifications', NotificationViewSet, basename='notification')
urlpatterns = [
path('', include(router.urls)),
]

View File

@@ -0,0 +1,132 @@
"""
Notification Views
"""
from rest_framework import viewsets, status
from rest_framework.decorators import action
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated
from django.utils import timezone
from igny8_core.api.pagination import CustomPageNumberPagination
from igny8_core.api.base import AccountModelViewSet
from .models import Notification
from .serializers import NotificationSerializer, NotificationListSerializer, MarkReadSerializer
class NotificationViewSet(AccountModelViewSet):
"""
ViewSet for managing notifications
Endpoints:
- GET /api/v1/notifications/ - List notifications
- GET /api/v1/notifications/{id}/ - Get notification detail
- DELETE /api/v1/notifications/{id}/ - Delete notification
- POST /api/v1/notifications/{id}/read/ - Mark single notification as read
- POST /api/v1/notifications/read-all/ - Mark all notifications as read
- GET /api/v1/notifications/unread-count/ - Get unread notification count
"""
serializer_class = NotificationSerializer
pagination_class = CustomPageNumberPagination
permission_classes = [IsAuthenticated]
def get_queryset(self):
"""Filter notifications for current account and user"""
from django.db.models import Q
user = self.request.user
account = getattr(user, 'account', None)
if not account:
return Notification.objects.none()
# Get notifications for this account that are either:
# - For all users (user=None)
# - For this specific user
queryset = Notification.objects.filter(
Q(account=account, user__isnull=True) |
Q(account=account, user=user)
).select_related('site').order_by('-created_at')
# Optional filters
is_read = self.request.query_params.get('is_read')
if is_read is not None:
queryset = queryset.filter(is_read=is_read.lower() == 'true')
notification_type = self.request.query_params.get('type')
if notification_type:
queryset = queryset.filter(notification_type=notification_type)
severity = self.request.query_params.get('severity')
if severity:
queryset = queryset.filter(severity=severity)
return queryset
def get_serializer_class(self):
"""Use list serializer for list action"""
if self.action == 'list':
return NotificationListSerializer
return NotificationSerializer
def list(self, request, *args, **kwargs):
"""List notifications with unread count"""
queryset = self.filter_queryset(self.get_queryset())
# Get unread count
unread_count = queryset.filter(is_read=False).count()
page = self.paginate_queryset(queryset)
if page is not None:
serializer = self.get_serializer(page, many=True)
response = self.get_paginated_response(serializer.data)
response.data['unread_count'] = unread_count
return response
serializer = self.get_serializer(queryset, many=True)
return Response({
'results': serializer.data,
'unread_count': unread_count
})
@action(detail=True, methods=['post'])
def read(self, request, pk=None):
"""Mark a single notification as read"""
notification = self.get_object()
notification.mark_as_read()
serializer = self.get_serializer(notification)
return Response(serializer.data)
@action(detail=False, methods=['post'], url_path='read-all')
def read_all(self, request):
"""Mark all notifications as read"""
serializer = MarkReadSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
notification_ids = serializer.validated_data.get('notification_ids', [])
queryset = self.get_queryset().filter(is_read=False)
if notification_ids:
queryset = queryset.filter(id__in=notification_ids)
now = timezone.now()
count = queryset.update(is_read=True, read_at=now, updated_at=now)
return Response({
'status': 'success',
'marked_read': count
})
@action(detail=False, methods=['get'], url_path='unread-count')
def unread_count(self, request):
"""Get count of unread notifications"""
count = self.get_queryset().filter(is_read=False).count()
return Response({'unread_count': count})
def destroy(self, request, *args, **kwargs):
"""Delete a notification"""
instance = self.get_object()
self.perform_destroy(instance)
return Response(status=status.HTTP_204_NO_CONTENT)
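The read-all semantics above (an empty `notification_ids` list marks everything unread as read, and already-read items are never touched) can be modeled in memory with plain dicts (a sketch, not the DRF view):

```python
from datetime import datetime, timezone

def mark_read(notifications, ids=None):
    """Mark unread notifications as read; an empty/None ids list means 'all'.

    Returns the number of notifications updated, mirroring the
    'marked_read' field in the read-all endpoint's response.
    """
    now = datetime.now(timezone.utc)
    count = 0
    for n in notifications:
        if n['is_read']:
            continue  # already read: skipped, exactly like filter(is_read=False)
        if ids and n['id'] not in ids:
            continue  # explicit id list given and this one is not in it
        n['is_read'] = True
        n['read_at'] = now
        count += 1
    return count

inbox = [
    {'id': 1, 'is_read': False, 'read_at': None},
    {'id': 2, 'is_read': True, 'read_at': None},
    {'id': 3, 'is_read': False, 'read_at': None},
]
print(mark_read(inbox, ids=[1]))  # 1
print(mark_read(inbox))           # 1 (only id 3 was still unread)
```

Marking is idempotent: a second call over the same ids reports zero updates.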

View File

@@ -1,13 +1,45 @@
from django.contrib import admin
from django.contrib import messages
from unfold.admin import ModelAdmin
from igny8_core.admin.base import AccountAdminMixin, Igny8ModelAdmin
from .models import OptimizationTask
from import_export.admin import ExportMixin
from import_export import resources
class OptimizationTaskResource(resources.ModelResource):
"""Resource class for exporting Optimization Tasks"""
class Meta:
model = OptimizationTask
fields = ('id', 'content__title', 'account__name', 'status',
'credits_used', 'created_at')
export_order = fields
@admin.register(OptimizationTask)
class OptimizationTaskAdmin(ExportMixin, AccountAdminMixin, Igny8ModelAdmin):
resource_class = OptimizationTaskResource
list_display = ['content', 'account', 'status', 'credits_used', 'created_at']
list_filter = ['status', 'created_at']
search_fields = ['content__title', 'account__name']
readonly_fields = ['created_at', 'updated_at']
actions = [
'bulk_mark_completed',
'bulk_mark_failed',
'bulk_retry',
]
def bulk_mark_completed(self, request, queryset):
updated = queryset.update(status='completed')
self.message_user(request, f'{updated} optimization task(s) marked as completed.', messages.SUCCESS)
bulk_mark_completed.short_description = 'Mark as completed'
def bulk_mark_failed(self, request, queryset):
updated = queryset.update(status='failed')
self.message_user(request, f'{updated} optimization task(s) marked as failed.', messages.SUCCESS)
bulk_mark_failed.short_description = 'Mark as failed'
def bulk_retry(self, request, queryset):
updated = queryset.filter(status='failed').update(status='pending')
self.message_user(request, f'{updated} failed optimization task(s) queued for retry.', messages.SUCCESS)
bulk_retry.short_description = 'Retry failed tasks'

View File

@@ -133,7 +133,10 @@ class OptimizerService:
scores_after = self.analyzer.analyze(optimized_content)
# Calculate credits used
estimated_tokens = len(content.html_content or '') // 4
credits_used = self.credit_service.calculate_credits_from_tokens(
'optimization', estimated_tokens, 0
)
# Update optimization task
task.scores_after = scores_after
@@ -148,18 +151,22 @@ class OptimizerService:
content.optimization_scores = scores_after
content.save(update_fields=['html_content', 'optimizer_version', 'optimization_scores'])
# Deduct credits (non-AI operation - use fixed token estimate based on content size)
# Estimate: 1 token per 4 characters of HTML content
estimated_tokens = len(content.html_content or '') // 4
self.credit_service.deduct_credits_for_operation(
account=account,
operation_type='optimization',
tokens_input=estimated_tokens,
tokens_output=0,
description=f"Content optimization: {content.title or 'Untitled'}",
related_object_type='content',
related_object_id=content.id,
metadata={
'scores_before': scores_before,
'scores_after': scores_after,
'improvement': scores_after.get('overall_score', 0) - scores_before.get('overall_score', 0),
'word_count': word_count
}
)
@@ -279,7 +286,10 @@ class OptimizerService:
scores_after = self._enhance_product_scores(scores_after, optimized_content)
# Calculate credits used
estimated_tokens = len(content.html_content or '') // 4
credits_used = self.credit_service.calculate_credits_from_tokens(
'optimization', estimated_tokens, 0
)
# Update optimization task
task.scores_after = scores_after
@@ -294,11 +304,14 @@ class OptimizerService:
content.optimization_scores = scores_after
content.save(update_fields=['html_content', 'optimizer_version', 'optimization_scores'])
# Deduct credits (non-AI operation - use fixed token estimate based on content size)
# Estimate: 1 token per 4 characters of HTML content
estimated_tokens = len(content.html_content or '') // 4
self.credit_service.deduct_credits_for_operation(
account=account,
operation_type='optimization',
amount=word_count,
tokens_input=estimated_tokens,
tokens_output=0,
description=f"Product optimization: {content.title or 'Untitled'}",
related_object_type='content',
related_object_id=content.id,
@@ -306,6 +319,7 @@ class OptimizerService:
'scores_before': scores_before,
'scores_after': scores_after,
'improvement': scores_after.get('overall_score', 0) - scores_before.get('overall_score', 0),
'word_count': word_count,
'entity_type': 'product'
}
)
@@ -372,7 +386,11 @@ class OptimizerService:
scores_after = self._enhance_taxonomy_scores(scores_after, optimized_content)
# Calculate credits used
-credits_used = self.credit_service.get_credit_cost('optimization', word_count)
+# Calculate estimated credits for task tracking
+estimated_tokens = len(content.html_content or '') // 4
+credits_used = self.credit_service.calculate_credits_from_tokens(
+'optimization', estimated_tokens, 0
+)
# Update optimization task
task.scores_after = scores_after
@@ -387,17 +405,20 @@ class OptimizerService:
content.optimization_scores = scores_after
content.save(update_fields=['html_content', 'optimizer_version', 'optimization_scores'])
-# Deduct credits
+# Deduct credits (non-AI operation - use fixed token estimate based on content size)
+# Estimate: 1 token per 4 characters of HTML content
self.credit_service.deduct_credits_for_operation(
account=account,
operation_type='optimization',
amount=word_count,
tokens_input=estimated_tokens,
tokens_output=0,
description=f"Taxonomy optimization: {content.title or 'Untitled'}",
related_object_type='content',
related_object_id=content.id,
metadata={
'scores_before': scores_before,
'scores_after': scores_after,
'word_count': word_count,
'improvement': scores_after.get('overall_score', 0) - scores_before.get('overall_score', 0),
'entity_type': 'taxonomy'
}
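The hunks above all make the same change: credits for the optimizer (a non-AI operation) are now derived from a rough token estimate of about one token per four characters of HTML, rather than from word count. A minimal sketch of that estimate; the helper name and constant are illustrative, not from the codebase:

```python
# Rough token estimate for non-AI operations: ~1 token per 4 characters.
# estimate_tokens() and CHARS_PER_TOKEN are illustrative names.
CHARS_PER_TOKEN = 4

def estimate_tokens(html_content):
    """Estimate input tokens from raw HTML length; None counts as empty."""
    return len(html_content or '') // CHARS_PER_TOKEN

print(estimate_tokens(None))       # 0
print(estimate_tokens('x' * 400))  # 100
```

The estimate is passed as `tokens_input` with `tokens_output=0`, so credit math stays on a single token-based code path for both AI and non-AI operations.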

View File

@@ -1,4 +1,5 @@
"""
Planning business logic - Keywords, Clusters, ContentIdeas models and services
"""
# Import signals to register cascade handlers
from . import signals # noqa: F401

View File

@@ -1,6 +1,9 @@
from django.db import models
from igny8_core.auth.models import SiteSectorBaseModel, SeedKeyword
from igny8_core.common.soft_delete import SoftDeletableModel, SoftDeleteManager
import logging
logger = logging.getLogger(__name__)
class Clusters(SoftDeletableModel, SiteSectorBaseModel):
@@ -39,6 +42,27 @@ class Clusters(SoftDeletableModel, SiteSectorBaseModel):
def __str__(self):
return self.name
def soft_delete(self, user=None, reason=None, retention_days=None):
"""
Override soft_delete to cascade status reset to related Keywords.
When a cluster is deleted, its keywords should:
- Have their cluster FK set to NULL (handled by SET_NULL)
- Have their status reset to 'new' (orphaned keywords)
"""
# Reset related keywords status to 'new' and clear cluster FK
keywords_count = self.keywords.filter(is_deleted=False).update(
cluster=None,
status='new'
)
logger.info(
f"[Clusters.soft_delete] Cluster {self.id} '{self.name}' cascade: "
f"reset {keywords_count} keywords to status='new'"
)
# Call parent soft_delete
super().soft_delete(user=user, reason=reason, retention_days=retention_days)
class Keywords(SoftDeletableModel, SiteSectorBaseModel):
@@ -108,6 +132,13 @@ class Keywords(SoftDeletableModel, SiteSectorBaseModel):
objects = SoftDeleteManager()
all_objects = models.Manager()
def soft_delete(self, user=None, reason=None, retention_days=None):
"""Override soft_delete to clear seed_keyword FK to prevent PROTECT issues"""
# Clear the seed_keyword FK before soft delete to prevent cascade protection issues
# This allows SeedKeywords to be managed independently after Keywords are deleted
self.seed_keyword = None
super().soft_delete(user=user, reason=reason, retention_days=retention_days)
@property
def keyword(self):
"""Get keyword text from seed_keyword"""
@@ -124,9 +155,9 @@ class Keywords(SoftDeletableModel, SiteSectorBaseModel):
return self.difficulty_override if self.difficulty_override is not None else (self.seed_keyword.difficulty if self.seed_keyword else 0)
@property
def intent(self):
"""Get intent from seed_keyword"""
return self.seed_keyword.intent if self.seed_keyword else 'informational'
@property
def country(self):
"""Get country from seed_keyword"""
return self.seed_keyword.country if self.seed_keyword else 'US'
def save(self, *args, **kwargs):
"""Validate that seed_keyword's industry/sector matches site's industry/sector"""

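The `soft_delete` override above orphans a cluster's keywords (cluster FK cleared, status reset to 'new') before delegating to the parent. A stand-in sketch of that cascade using plain Python objects rather than the Django models (the real code performs it as a single queryset `.update()`):

```python
# Stand-in classes, not the Django models.
from dataclasses import dataclass, field

@dataclass
class Keyword:
    text: str
    cluster: object = None
    status: str = 'clustered'

@dataclass
class Cluster:
    name: str
    keywords: list = field(default_factory=list)
    is_deleted: bool = False

    def soft_delete(self):
        # Cascade: orphan the keywords and reset their workflow status
        for kw in self.keywords:
            kw.cluster = None
            kw.status = 'new'
        self.keywords = []
        self.is_deleted = True

cluster = Cluster('seo basics')
kw = Keyword('import keywords', cluster=cluster)
cluster.keywords.append(kw)
cluster.soft_delete()
print(kw.status, kw.cluster is None)  # new True
```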
View File

@@ -38,10 +38,10 @@ class ClusteringService:
'error': 'No keyword IDs provided'
}
-if len(keyword_ids) > 20:
+if len(keyword_ids) > 50:
return {
'success': False,
-'error': 'Maximum 20 keywords allowed for clustering'
+'error': 'Maximum 50 keywords allowed for clustering'
}
# Check credits (fixed cost per clustering operation)
@@ -52,26 +52,12 @@ class ClusteringService:
# Delegate to AI task
from igny8_core.ai.tasks import run_ai_task
from django.conf import settings
payload = {
'ids': keyword_ids,
'sector_id': sector_id
}
# Stage 1: When USE_SITE_BUILDER_REFACTOR is enabled, payload can include
# taxonomy hints and dimension metadata for enhanced clustering.
# TODO (Stage 2/3): Enhance clustering to collect and use:
# - Taxonomy hints from SiteBlueprintTaxonomy
# - Dimension metadata (context_type, dimension_meta) for clusters
# - Attribute values from Keywords.attribute_values
if getattr(settings, 'USE_SITE_BUILDER_REFACTOR', False):
logger.info(
f"Clustering with refactor enabled: {len(keyword_ids)} keywords, "
f"sector_id={sector_id}, account_id={account.id}"
)
# Future: Add taxonomy hints and dimension metadata to payload
try:
if hasattr(run_ai_task, 'delay'):
# Celery available - queue async

View File

@@ -0,0 +1,130 @@
"""
Cascade signals for Planning models
Handles status updates and relationship cleanup when parent records are deleted
"""
import logging
from django.db.models.signals import pre_delete, post_save
from django.dispatch import receiver
logger = logging.getLogger(__name__)
@receiver(pre_delete, sender='planner.Clusters')
def handle_cluster_soft_delete(sender, instance, **kwargs):
"""
When a Cluster is deleted:
- Set Keywords.cluster = NULL
- Reset Keywords.status to 'new'
- Set ContentIdeas.keyword_cluster = NULL
- Reset ContentIdeas.status to 'new'
"""
from igny8_core.business.planning.models import Keywords, ContentIdeas
# Check if this is a soft delete (is_deleted=True) vs hard delete
# Soft deletes trigger delete() which calls soft_delete()
if hasattr(instance, 'is_deleted') and instance.is_deleted:
return # Skip if already soft-deleted
try:
# Update related Keywords - clear cluster FK and reset status
updated_keywords = Keywords.objects.filter(cluster=instance).update(
cluster=None,
status='new'
)
if updated_keywords:
logger.info(
f"[Cascade] Cluster '{instance.name}' (ID: {instance.id}) deleted: "
f"Reset {updated_keywords} keywords to status='new', cluster=NULL"
)
# Update related ContentIdeas - clear cluster FK and reset status
updated_ideas = ContentIdeas.objects.filter(keyword_cluster=instance).update(
keyword_cluster=None,
status='new'
)
if updated_ideas:
logger.info(
f"[Cascade] Cluster '{instance.name}' (ID: {instance.id}) deleted: "
f"Reset {updated_ideas} content ideas to status='new', keyword_cluster=NULL"
)
except Exception as e:
logger.error(f"[Cascade] Error handling cluster deletion cascade: {e}", exc_info=True)
@receiver(pre_delete, sender='planner.ContentIdeas')
def handle_idea_soft_delete(sender, instance, **kwargs):
"""
When a ContentIdea is deleted:
- Set Tasks.idea = NULL (don't delete tasks, they may have content)
- Log orphaned tasks
"""
from igny8_core.business.content.models import Tasks
if hasattr(instance, 'is_deleted') and instance.is_deleted:
return
try:
# Update related Tasks - clear idea FK
updated_tasks = Tasks.objects.filter(idea=instance).update(idea=None)
if updated_tasks:
logger.info(
f"[Cascade] ContentIdea '{instance.idea_title}' (ID: {instance.id}) deleted: "
f"Cleared idea reference from {updated_tasks} tasks"
)
except Exception as e:
logger.error(f"[Cascade] Error handling content idea deletion cascade: {e}", exc_info=True)
@receiver(pre_delete, sender='writer.Tasks')
def handle_task_soft_delete(sender, instance, **kwargs):
"""
When a Task is deleted:
- Set Content.task = NULL
"""
from igny8_core.business.content.models import Content
if hasattr(instance, 'is_deleted') and instance.is_deleted:
return
try:
# Update related Content - clear task FK
updated_content = Content.objects.filter(task=instance).update(task=None)
if updated_content:
logger.info(
f"[Cascade] Task '{instance.title}' (ID: {instance.id}) deleted: "
f"Cleared task reference from {updated_content} content items"
)
except Exception as e:
logger.error(f"[Cascade] Error handling task deletion cascade: {e}", exc_info=True)
@receiver(pre_delete, sender='writer.Content')
def handle_content_soft_delete(sender, instance, **kwargs):
"""
When Content is deleted:
- Soft delete related Images (cascade soft delete)
- Clear PublishingRecord references
"""
from igny8_core.business.content.models import Images
if hasattr(instance, 'is_deleted') and instance.is_deleted:
return
try:
# Soft delete related Images
# Materialize the queryset first: soft-deleted rows drop out of the default
# manager, so calling count() after the loop would report 0
related_images = list(Images.objects.filter(content=instance))
for image in related_images:
image.soft_delete(reason='cascade_from_content')
count = len(related_images)
if count:
logger.info(
f"[Cascade] Content '{instance.title}' (ID: {instance.id}) deleted: "
f"Soft deleted {count} related images"
)
except Exception as e:
logger.error(f"[Cascade] Error handling content deletion cascade: {e}", exc_info=True)

View File

@@ -31,7 +31,11 @@ class PublishingRecordAdmin(ExportMixin, SiteSectorAdminMixin, Igny8ModelAdmin):
list_filter = ['destination', 'status', 'site']
search_fields = ['content__title', 'destination', 'destination_url']
readonly_fields = ['created_at', 'updated_at']
-actions = ['bulk_retry_failed']
+actions = [
+'bulk_retry_failed',
+'bulk_cancel_pending',
+'bulk_mark_published',
+]
def bulk_retry_failed(self, request, queryset):
"""Retry failed publishing records"""
@@ -39,10 +43,34 @@ class PublishingRecordAdmin(ExportMixin, SiteSectorAdminMixin, Igny8ModelAdmin):
count = failed_records.update(status='pending')
self.message_user(request, f'{count} failed record(s) marked for retry.', messages.SUCCESS)
bulk_retry_failed.short_description = 'Retry failed publishes'
def bulk_cancel_pending(self, request, queryset):
"""Cancel pending publishing records"""
pending = queryset.filter(status__in=['pending', 'publishing'])
count = pending.update(status='failed', error_message='Cancelled by admin')
self.message_user(request, f'{count} publishing record(s) cancelled.', messages.SUCCESS)
bulk_cancel_pending.short_description = 'Cancel pending publishes'
def bulk_mark_published(self, request, queryset):
"""Mark selected records as published"""
from django.utils import timezone
count = queryset.update(status='published', published_at=timezone.now())
self.message_user(request, f'{count} record(s) marked as published.', messages.SUCCESS)
bulk_mark_published.short_description = 'Mark as published'
class DeploymentRecordResource(resources.ModelResource):
"""Resource class for exporting Deployment Records"""
class Meta:
model = DeploymentRecord
fields = ('id', 'site__name', 'sector__name', 'version', 'deployed_version',
'status', 'deployment_url', 'deployed_at', 'created_at')
export_order = fields
@admin.register(DeploymentRecord)
-class DeploymentRecordAdmin(SiteSectorAdminMixin, Igny8ModelAdmin):
+class DeploymentRecordAdmin(ExportMixin, SiteSectorAdminMixin, Igny8ModelAdmin):
resource_class = DeploymentRecordResource
list_display = [
'site',
'sector',
@@ -55,4 +83,35 @@ class DeploymentRecordAdmin(SiteSectorAdminMixin, Igny8ModelAdmin):
list_filter = ['status', 'site']
search_fields = ['site__name', 'deployment_url']
readonly_fields = ['created_at', 'updated_at']
actions = [
'bulk_retry_failed',
'bulk_rollback',
'bulk_cancel_pending',
]
def bulk_retry_failed(self, request, queryset):
"""Retry failed deployments"""
failed = queryset.filter(status='failed')
count = failed.update(status='pending', error_message='')
self.message_user(request, f'{count} failed deployment(s) marked for retry.', messages.SUCCESS)
bulk_retry_failed.short_description = 'Retry failed deployments'
def bulk_rollback(self, request, queryset):
"""Rollback selected deployments"""
deployed = queryset.filter(status='deployed')
count = deployed.update(status='rolled_back')
self.message_user(request, f'{count} deployment(s) marked for rollback.', messages.SUCCESS)
bulk_rollback.short_description = 'Rollback deployments'
def bulk_cancel_pending(self, request, queryset):
"""Cancel pending deployments"""
pending = queryset.filter(status__in=['pending', 'deploying'])
count = pending.update(status='failed', error_message='Cancelled by admin')
self.message_user(request, f'{count} deployment(s) cancelled.', messages.SUCCESS)
bulk_cancel_pending.short_description = 'Cancel pending deployments'

View File

@@ -19,6 +19,9 @@ app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django apps.
app.autodiscover_tasks()
# Explicitly import tasks from igny8_core/tasks directory
app.autodiscover_tasks(['igny8_core.tasks'])
# Celery Beat schedule for periodic tasks
app.conf.beat_schedule = {
'replenish-monthly-credits': {
@@ -39,6 +42,15 @@ app.conf.beat_schedule = {
'task': 'automation.check_scheduled_automations',
'schedule': crontab(minute=0), # Every hour at :00
},
# Publishing Scheduler Tasks
'schedule-approved-content': {
'task': 'publishing.schedule_approved_content',
'schedule': crontab(minute=0), # Every hour at :00
},
'process-scheduled-publications': {
'task': 'publishing.process_scheduled_publications',
'schedule': crontab(minute='*/5'), # Every 5 minutes
},
# Maintenance: purge expired soft-deleted records daily at 3:15 AM
'purge-soft-deleted-records': {
'task': 'igny8_core.purge_soft_deleted',

View File

@@ -0,0 +1,62 @@
"""
Backfill token data from AITaskLog to CreditUsageLog
"""
from django.core.management.base import BaseCommand
from igny8_core.ai.models import AITaskLog
from igny8_core.modules.billing.models import CreditUsageLog
from datetime import timedelta
class Command(BaseCommand):
help = 'Backfill token data from AITaskLog to CreditUsageLog'
def handle(self, *args, **options):
self.stdout.write("=== Token Data Backfill ===\n")
# Get AITaskLog entries with token data
ai_logs = AITaskLog.objects.filter(
tokens__gt=0,
status='success'
).select_related('account').order_by('created_at')
self.stdout.write(f"Found {ai_logs.count()} AITaskLog entries with tokens\n")
updated_count = 0
no_match_count = 0
for ai_log in ai_logs:
# Find matching CreditUsageLog within 10 second window
time_start = ai_log.created_at - timedelta(seconds=10)
time_end = ai_log.created_at + timedelta(seconds=10)
# Try to find exact match
credit_log = CreditUsageLog.objects.filter(
account=ai_log.account,
created_at__gte=time_start,
created_at__lte=time_end,
tokens_input=0,
tokens_output=0
).order_by('created_at').first()
if credit_log:
# AITaskLog has total tokens, estimate 40/60 split for input/output
# This is approximate but better than 0
total_tokens = ai_log.tokens
estimated_input = int(total_tokens * 0.4)
estimated_output = total_tokens - estimated_input
credit_log.tokens_input = estimated_input
credit_log.tokens_output = estimated_output
credit_log.save(update_fields=['tokens_input', 'tokens_output'])
updated_count += 1
if updated_count % 50 == 0:
self.stdout.write(f" Updated {updated_count} records...")
else:
no_match_count += 1
self.stdout.write(self.style.SUCCESS(f"\n✅ Backfill complete!"))
self.stdout.write(f" Updated: {updated_count}")
self.stdout.write(f" No match: {no_match_count}")
self.stdout.write(f" Total processed: {ai_logs.count()}")

View File

@@ -0,0 +1,152 @@
"""
Management command to clean up all user-generated data (DESTRUCTIVE).
This is used before V1.0 production launch to start with a clean database.
⚠️ WARNING: This permanently deletes ALL user data!
Usage:
# DRY RUN (recommended first):
python manage.py cleanup_user_data --dry-run
# ACTUAL CLEANUP (after reviewing dry-run):
python manage.py cleanup_user_data --confirm
"""
from django.core.management.base import BaseCommand
from django.db import transaction
from django.conf import settings
class Command(BaseCommand):
help = 'Clean up all user-generated data (DESTRUCTIVE - for pre-launch cleanup)'
def add_arguments(self, parser):
parser.add_argument(
'--confirm',
action='store_true',
help='Confirm you want to delete all user data'
)
parser.add_argument(
'--dry-run',
action='store_true',
help='Show what would be deleted without actually deleting'
)
def handle(self, *args, **options):
if not options['confirm'] and not options['dry_run']:
self.stdout.write(
self.style.ERROR('\n⚠️ ERROR: Must use --confirm or --dry-run flag\n')
)
self.stdout.write('Usage:')
self.stdout.write(' python manage.py cleanup_user_data --dry-run # See what will be deleted')
self.stdout.write(' python manage.py cleanup_user_data --confirm # Actually delete data\n')
return
# Safety check: Prevent running in production unless explicitly allowed
if getattr(settings, 'ENVIRONMENT', 'production') == 'production' and options['confirm']:
self.stdout.write(
self.style.ERROR('\n⚠️ BLOCKED: Cannot run cleanup in PRODUCTION environment!\n')
)
self.stdout.write('To allow this, temporarily set ENVIRONMENT to "staging" in settings.\n')
return
# Import models
from igny8_core.auth.models import Site, CustomUser
from igny8_core.business.planning.models import Keywords, Clusters
from igny8_core.business.content.models import ContentIdea, Tasks, Content, Images
from igny8_core.modules.publisher.models import PublishingRecord
from igny8_core.business.integration.models import WordPressSyncEvent
from igny8_core.modules.billing.models import CreditTransaction, CreditUsageLog, Order
from igny8_core.modules.system.models import Notification
from igny8_core.modules.writer.models import AutomationRun
# Define models to clear (ORDER MATTERS - foreign keys)
# Delete child records before parent records
models_to_clear = [
('Notifications', Notification),
('Credit Usage Logs', CreditUsageLog),
('Credit Transactions', CreditTransaction),
('Orders', Order),
('WordPress Sync Events', WordPressSyncEvent),
('Publishing Records', PublishingRecord),
('Automation Runs', AutomationRun),
('Images', Images),
('Content', Content),
('Tasks', Tasks),
('Content Ideas', ContentIdea),
('Clusters', Clusters),
('Keywords', Keywords),
('Sites', Site), # Sites should be near last (many foreign keys)
# Note: We do NOT delete CustomUser - keep admin users
]
if options['dry_run']:
self.stdout.write(self.style.WARNING('\n' + '=' * 70))
self.stdout.write(self.style.WARNING('DRY RUN - No data will be deleted'))
self.stdout.write(self.style.WARNING('=' * 70 + '\n'))
total_records = 0
for name, model in models_to_clear:
count = model.objects.count()
total_records += count
status = '' if count > 0 else '·'
self.stdout.write(f' {status} Would delete {count:6d} {name}')
# Count users (not deleted)
user_count = CustomUser.objects.count()
self.stdout.write(f'\n → Keeping {user_count:6d} Users (not deleted)')
self.stdout.write(f'\n Total records to delete: {total_records:,}')
self.stdout.write('\n' + '=' * 70)
self.stdout.write(self.style.SUCCESS('\nTo proceed with actual deletion, run:'))
self.stdout.write(' python manage.py cleanup_user_data --confirm\n')
return
# ACTUAL DELETION
self.stdout.write(self.style.ERROR('\n' + '=' * 70))
self.stdout.write(self.style.ERROR('⚠️ DELETING ALL USER DATA - THIS CANNOT BE UNDONE!'))
self.stdout.write(self.style.ERROR('=' * 70 + '\n'))
# Final confirmation prompt
confirm_text = input('Type "DELETE ALL DATA" to proceed: ')
if confirm_text != 'DELETE ALL DATA':
self.stdout.write(self.style.WARNING('\nAborted. Data was NOT deleted.\n'))
return
self.stdout.write('\nProceeding with deletion...\n')
deleted_counts = {}
failed_deletions = []
with transaction.atomic():
for name, model in models_to_clear:
try:
count = model.objects.count()
if count > 0:
model.objects.all().delete()
deleted_counts[name] = count
self.stdout.write(
self.style.SUCCESS(f'✓ Deleted {count:6d} {name}')
)
else:
self.stdout.write(
self.style.WARNING(f'· Skipped {count:6d} {name} (already empty)')
)
except Exception as e:
failed_deletions.append((name, str(e)))
self.stdout.write(
self.style.ERROR(f'✗ Failed to delete {name}: {str(e)}')
)
# Summary
total_deleted = sum(deleted_counts.values())
self.stdout.write('\n' + '=' * 70)
self.stdout.write(self.style.SUCCESS(f'\nUser Data Cleanup Complete!\n'))
self.stdout.write(f' Total records deleted: {total_deleted:,}')
self.stdout.write(f' Failed deletions: {len(failed_deletions)}')
if failed_deletions:
self.stdout.write(self.style.WARNING('\nFailed deletions:'))
for name, error in failed_deletions:
self.stdout.write(f' - {name}: {error}')
self.stdout.write('\n' + '=' * 70 + '\n')
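The command's safety model layers three gates: a required `--dry-run`/`--confirm` flag, an environment check, and a typed confirmation phrase. The flag gate, sketched with stdlib argparse rather than Django's BaseCommand argument handling:

```python
import argparse

def parse_cleanup_args(argv):
    """Refuse to proceed unless --dry-run or --confirm is passed."""
    parser = argparse.ArgumentParser(prog='cleanup_user_data')
    parser.add_argument('--confirm', action='store_true',
                        help='Actually delete all user data')
    parser.add_argument('--dry-run', action='store_true',
                        help='Show what would be deleted without deleting')
    args = parser.parse_args(argv)
    if not (args.confirm or args.dry_run):
        parser.error('Must use --confirm or --dry-run')  # exits with status 2
    return args

print(parse_cleanup_args(['--dry-run']).dry_run)  # True
```

Defaulting to "do nothing" unless a flag is explicitly supplied is the key property; the dry run path can then be recommended first, exactly as the docstring suggests.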

View File

@@ -0,0 +1,122 @@
"""
Management command to export system configuration data to JSON files.
This exports Plans, Credit Costs, AI Models, Industries, Sectors, Seed Keywords, etc.
Usage:
python manage.py export_system_config --output-dir=backups/config
"""
from django.core.management.base import BaseCommand
from django.core import serializers
import json
import os
from datetime import datetime
class Command(BaseCommand):
help = 'Export system configuration data to JSON files for V1.0 backup'
def add_arguments(self, parser):
parser.add_argument(
'--output-dir',
default='backups/config',
help='Output directory for config files (relative to project root)'
)
def handle(self, *args, **options):
output_dir = options['output_dir']
# Make output_dir absolute if it's relative
if not os.path.isabs(output_dir):
# Get project root (parent of manage.py)
import sys
project_root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
output_dir = os.path.join(project_root, '..', output_dir)
os.makedirs(output_dir, exist_ok=True)
self.stdout.write(self.style.SUCCESS(f'\nExporting system configuration to: {output_dir}\n'))
# Import models
from igny8_core.modules.billing.models import Plan, CreditCostConfig
from igny8_core.modules.system.models import AIModelConfig, GlobalIntegrationSettings
from igny8_core.auth.models import Industry, Sector, SeedKeyword, AuthorProfile
from igny8_core.ai.models import Prompt, PromptVariable
# Define what to export
exports = {
'plans': (Plan.objects.all(), 'Subscription Plans'),
'credit_costs': (CreditCostConfig.objects.all(), 'Credit Cost Configurations'),
'ai_models': (AIModelConfig.objects.all(), 'AI Model Configurations'),
'global_integrations': (GlobalIntegrationSettings.objects.all(), 'Global Integration Settings'),
'industries': (Industry.objects.all(), 'Industries'),
'sectors': (Sector.objects.all(), 'Sectors'),
'seed_keywords': (SeedKeyword.objects.all(), 'Seed Keywords'),
'author_profiles': (AuthorProfile.objects.all(), 'Author Profiles'),
'prompts': (Prompt.objects.all(), 'AI Prompts'),
'prompt_variables': (PromptVariable.objects.all(), 'Prompt Variables'),
}
successful_exports = []
failed_exports = []
for name, (queryset, description) in exports.items():
try:
count = queryset.count()
data = serializers.serialize('json', queryset, indent=2)
filepath = os.path.join(output_dir, f'{name}.json')
with open(filepath, 'w') as f:
f.write(data)
self.stdout.write(
self.style.SUCCESS(f'✓ Exported {count:4d} {description:30s}{name}.json')
)
successful_exports.append(name)
except Exception as e:
self.stdout.write(
self.style.ERROR(f'✗ Failed to export {description}: {str(e)}')
)
failed_exports.append((name, str(e)))
# Export metadata
metadata = {
'exported_at': datetime.now().isoformat(),
'django_version': self.get_django_version(),
'database': self.get_database_info(),
'successful_exports': successful_exports,
'failed_exports': failed_exports,
'export_count': len(successful_exports),
}
metadata_path = os.path.join(output_dir, 'export_metadata.json')
with open(metadata_path, 'w') as f:
json.dump(metadata, f, indent=2)
self.stdout.write(self.style.SUCCESS(f'\n✓ Metadata saved to export_metadata.json'))
# Summary
self.stdout.write('\n' + '=' * 70)
self.stdout.write(self.style.SUCCESS(f'\nSystem Configuration Export Complete!\n'))
self.stdout.write(f' Successful: {len(successful_exports)} exports')
self.stdout.write(f' Failed: {len(failed_exports)} exports')
self.stdout.write(f' Location: {output_dir}\n')
if failed_exports:
self.stdout.write(self.style.WARNING('\nFailed exports:'))
for name, error in failed_exports:
self.stdout.write(f' - {name}: {error}')
self.stdout.write('=' * 70 + '\n')
def get_django_version(self):
import django
return django.get_version()
def get_database_info(self):
from django.conf import settings
db_config = settings.DATABASES.get('default', {})
return {
'engine': db_config.get('ENGINE', '').split('.')[-1],
'name': db_config.get('NAME', ''),
}
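Alongside the per-model dumps, each run writes an `export_metadata.json` manifest. Its shape, with illustrative values standing in for the real Django/database lookups:

```python
import json
from datetime import datetime

metadata = {
    'exported_at': datetime.now().isoformat(),
    'django_version': '4.2',                                # illustrative
    'database': {'engine': 'postgresql', 'name': 'igny8'},  # illustrative
    'successful_exports': ['plans', 'credit_costs'],
    'failed_exports': [],
    'export_count': 2,
}
payload = json.dumps(metadata, indent=2)
print(json.loads(payload)['export_count'])  # 2
```

Recording both the successes and the failures in the manifest makes a partial export self-describing: a later restore can verify exactly which configuration sets are present before loading them.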

View File

@@ -0,0 +1,238 @@
"""
Management command to populate GlobalAIPrompt entries with default templates
"""
from django.core.management.base import BaseCommand
from igny8_core.modules.system.global_settings_models import GlobalAIPrompt
class Command(BaseCommand):
help = 'Populate GlobalAIPrompt entries with default prompt templates'
def handle(self, *args, **options):
prompts_data = [
{
'prompt_type': 'clustering',
'prompt_value': '''Analyze the following keywords and group them into clusters based on semantic similarity and topical relevance:
Keywords: {keywords}
Instructions:
1. Group keywords that share similar intent or topic
2. Each cluster should have 3-10 related keywords
3. Create meaningful cluster names that capture the essence
4. Prioritize high-value, commercially-relevant groupings
Return a JSON array of clusters with this structure:
[
{
"cluster_name": "Descriptive name",
"keywords": ["keyword1", "keyword2", ...],
"primary_intent": "informational|commercial|transactional"
}
]'''
},
{
'prompt_type': 'ideas',
'prompt_value': '''Generate content ideas for the following topic cluster:
Topic: {topic}
Keywords: {keywords}
Target Audience: {audience}
Instructions:
1. Create {count} unique content ideas
2. Each idea should target different angles or subtopics
3. Consider various content formats (how-to, comparison, list, guide)
4. Focus on search intent and user value
Return a JSON array:
[
{
"title": "Engaging title",
"angle": "Unique perspective or approach",
"target_keywords": ["keyword1", "keyword2"],
"content_type": "how-to|comparison|list|guide|analysis"
}
]'''
},
{
'prompt_type': 'content_generation',
'prompt_value': '''Write comprehensive, SEO-optimized content for the following:
Title: {title}
Target Keywords: {keywords}
Word Count: {word_count}
Tone: {tone}
Content Brief:
{brief}
Instructions:
1. Create engaging, informative content that fully addresses the topic
2. Naturally incorporate target keywords (avoid keyword stuffing)
3. Use clear headings (H2, H3) to structure the content
4. Include actionable insights and examples where relevant
5. Write for both readers and search engines
6. Maintain the specified tone throughout
Return well-structured HTML content with proper heading tags.'''
},
{
'prompt_type': 'image_prompt_extraction',
'prompt_value': '''Analyze this article content and extract key visual concepts for image generation:
Content: {content}
Instructions:
1. Identify {count} main concepts that would benefit from visual representation
2. Focus on concrete, visualizable elements (not abstract concepts)
3. Consider what would add value for readers
4. Prioritize scenes, objects, or scenarios that can be depicted
Return a JSON array:
[
{
"concept": "Brief description of what to visualize",
"placement": "header|section1|section2|...",
"priority": "high|medium|low"
}
]'''
},
{
'prompt_type': 'image_prompt_template',
'prompt_value': '''Create a detailed image generation prompt for:
Concept: {concept}
Style: {style}
Context: {context}
Generate a prompt that:
1. Describes the scene or subject clearly
2. Specifies composition, lighting, and perspective
3. Matches the {style} aesthetic
4. Is optimized for AI image generation
Return only the image prompt (no explanations).'''
},
{
'prompt_type': 'negative_prompt',
'prompt_value': '''text, watermark, logo, signature, username, artist name, blurry, low quality, pixelated, distorted, deformed, duplicate, cropped, out of frame, bad anatomy, bad proportions, extra limbs, missing limbs, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation, cartoon, anime'''
},
{
'prompt_type': 'site_structure_generation',
'prompt_value': '''Design a comprehensive site structure for:
Business Type: {business_type}
Primary Keywords: {keywords}
Target Audience: {audience}
Goals: {goals}
Instructions:
1. Create a logical, user-friendly navigation hierarchy
2. Include essential pages (Home, About, Services/Products, Contact)
3. Design category pages for primary keywords
4. Plan supporting content pages
5. Consider user journey and conversion paths
Return a JSON structure:
{
"navigation": [
{
"page": "Page name",
"slug": "url-slug",
"type": "home|category|product|service|content|utility",
"children": []
}
]
}'''
},
{
'prompt_type': 'product_generation',
'prompt_value': '''Create comprehensive product content for:
Product Name: {product_name}
Category: {category}
Features: {features}
Target Audience: {audience}
Generate:
1. Compelling product description (200-300 words)
2. Key features and benefits (bullet points)
3. Technical specifications
4. Use cases or applications
5. SEO-optimized meta description
Return structured JSON with all elements.'''
},
{
'prompt_type': 'service_generation',
'prompt_value': '''Create detailed service page content for:
Service Name: {service_name}
Category: {category}
Key Benefits: {benefits}
Target Audience: {audience}
Generate:
1. Overview section (150-200 words)
2. Process or methodology (step-by-step)
3. Benefits and outcomes
4. Why choose us / differentiators
5. FAQ section (5-7 questions)
6. Call-to-action suggestions
Return structured HTML content.'''
},
{
'prompt_type': 'taxonomy_generation',
'prompt_value': '''Create a logical taxonomy structure for:
Content Type: {content_type}
Domain: {domain}
Existing Keywords: {keywords}
Instructions:
1. Design parent categories that organize content logically
2. Create subcategories for detailed organization
3. Ensure balanced hierarchy (not too deep or flat)
4. Use clear, descriptive category names
5. Consider SEO and user navigation
Return a JSON structure:
{
"categories": [
{
"name": "Category Name",
"slug": "category-slug",
"description": "Brief description",
"subcategories": []
}
]
}'''
},
]
created_count = 0
updated_count = 0
for prompt_data in prompts_data:
prompt, created = GlobalAIPrompt.objects.update_or_create(
prompt_type=prompt_data['prompt_type'],
defaults={'prompt_value': prompt_data['prompt_value']}
)
if created:
created_count += 1
self.stdout.write(
self.style.SUCCESS(f'Created: {prompt.get_prompt_type_display()}')
)
else:
updated_count += 1
self.stdout.write(
self.style.WARNING(f'Updated: {prompt.get_prompt_type_display()}')
)
self.stdout.write(
self.style.SUCCESS(
f'\nCompleted: {created_count} created, {updated_count} updated'
)
)
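The populate command is idempotent because `update_or_create` keys on `prompt_type`: re-running updates existing rows in place instead of duplicating them. The semantics, sketched with a dict-backed store instead of the ORM:

```python
def update_or_create(store, prompt_type, prompt_value):
    """Return True if a new entry was created, False if an existing one was updated."""
    created = prompt_type not in store
    store[prompt_type] = prompt_value
    return created

store = {}
print(update_or_create(store, 'clustering', 'v1'))  # True  (created)
print(update_or_create(store, 'clustering', 'v2'))  # False (updated)
print(store['clustering'])                          # v2
```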

Some files were not shown because too many files have changed in this diff.