4 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| IGNY8 VPS (Salman) | 042e5c6735 | sections verifications moved | 2025-12-27 16:34:51 +00:00 |
| IGNY8 VPS (Salman) | 3ea7d4f933 | final polish 3 | 2025-12-27 15:55:54 +00:00 |
| IGNY8 VPS (Salman) | b9e4b6f7e2 | final plolish phase 2 | 2025-12-27 15:25:05 +00:00 |
| IGNY8 VPS (Salman) | 99982eb4fb | Final Polish phase 1 | 2025-12-27 13:27:02 +00:00 |
592 changed files with 30565 additions and 73801 deletions

.rules

@@ -1,6 +1,6 @@
# IGNY8 AI Agent Rules
**Version:** 1.2.0 | **Updated:** January 2, 2026
**Version:** 1.1.3 | **Updated:** December 27, 2025
---
@@ -8,10 +8,8 @@
**BEFORE any change, read these docs in order:**
1. [docs/INDEX.md](docs/INDEX.md) - Quick navigation to any module/feature
2. [docs/30-FRONTEND/COMPONENT-SYSTEM.md](docs/30-FRONTEND/COMPONENT-SYSTEM.md) - **REQUIRED** for any frontend work
3. [docs/30-FRONTEND/DESIGN-TOKENS.md](docs/30-FRONTEND/DESIGN-TOKENS.md) - Color tokens and styling rules
4. Module doc for the feature you're modifying (see INDEX.md for paths)
5. [CHANGELOG.md](CHANGELOG.md) - Recent changes and version history
2. Module doc for the feature you're modifying (see INDEX.md for paths)
3. [CHANGELOG.md](CHANGELOG.md) - Recent changes and version history
---
@@ -23,10 +21,6 @@
| Frontend | `frontend/src/` | React + TypeScript SPA |
| Docs | `docs/` | Technical documentation |
| AI Engine | `backend/igny8_core/ai/` | AI functions (use this, NOT `utils/ai_processor.py`) |
| Design Tokens | `frontend/src/styles/design-system.css` | **Single source** for colors, shadows, typography |
| UI Components | `frontend/src/components/ui/` | Button, Badge, Card, Modal, etc. |
| Form Components | `frontend/src/components/form/` | InputField, Select, Checkbox, Switch |
| Icons | `frontend/src/icons/` | All SVG icons (import from `../../icons`) |
**Module → File Quick Reference:** See [docs/INDEX.md](docs/INDEX.md#module--file-quick-reference)
@@ -51,104 +45,6 @@
---
## 🎨 DESIGN SYSTEM RULES (CRITICAL!)
> **🔒 STYLE LOCKED** - All UI must use the design system. ESLint enforces these rules.
### Color System (Only 6 Base Colors!)
All colors in the system derive from 6 primary hex values in `design-system.css`:
- `--color-primary` (#0077B6) - Brand Blue
- `--color-success` (#2CA18E) - Success Green
- `--color-warning` (#D9A12C) - Warning Amber
- `--color-danger` (#A12C40) - Danger Red
- `--color-purple` (#2C40A1) - Purple accent
- `--color-gray-base` (#667085) - Neutral gray
### Tailwind Color Classes
**✅ USE ONLY THESE** (Tailwind defaults are DISABLED):
```
brand-* (50-950) - Primary blue scale
gray-* (25-950) - Neutral scale
success-* (25-950) - Green scale
error-* (25-950) - Red scale
warning-* (25-950) - Amber scale
purple-* (25-950) - Purple scale
```
**❌ BANNED** (These will NOT work):
```
blue-*, red-*, green-*, emerald-*, amber-*, indigo-*,
pink-*, rose-*, sky-*, teal-*, cyan-*, etc.
```
### Styling Rules
| ✅ DO | ❌ DON'T |
|-------|---------|
| `className="bg-brand-500"` | `className="bg-blue-500"` |
| `className="text-gray-700"` | `className="text-[#333]"` |
| `<Button variant="primary">` | `<button className="...">` |
| Import from `../../icons` | Import from `@heroicons/*` |
| Use CSS variables `var(--color-primary)` | Hardcode hex values |
---
## 🧩 COMPONENT RULES (ESLint Enforced!)
> **Never use raw HTML elements** - Use design system components.
### Required Component Mappings
| HTML Element | Required Component | Import Path |
|--------------|-------------------|-------------|
| `<button>` | `Button` or `IconButton` | `components/ui/button/Button` |
| `<input type="text/email/password">` | `InputField` | `components/form/input/InputField` |
| `<input type="checkbox">` | `Checkbox` | `components/form/input/Checkbox` |
| `<input type="radio">` | `Radio` | `components/form/input/Radio` |
| `<select>` | `Select` or `SelectDropdown` | `components/form/Select` |
| `<textarea>` | `TextArea` | `components/form/input/TextArea` |
### Component Quick Reference
```tsx
// Buttons
<Button variant="primary" tone="brand">Save</Button>
<Button variant="outline" tone="danger">Delete</Button>
<IconButton icon={<CloseIcon />} variant="ghost" title="Close" />
// Form Inputs
<InputField type="text" label="Name" value={val} onChange={setVal} />
<Select options={opts} onChange={setVal} />
<Checkbox label="Accept" checked={val} onChange={setVal} />
<Switch label="Enable" checked={val} onChange={setVal} />
// Display
<Badge tone="success" variant="soft">Active</Badge>
<Alert variant="error" title="Error" message="Failed" />
<Spinner size="md" />
```
### Icon Rules
**Always import from central location:**
```tsx
// ✅ CORRECT
import { PlusIcon, CloseIcon, CheckCircleIcon } from '../../icons';
// ❌ BANNED - External icon libraries
import { XIcon } from '@heroicons/react/24/outline';
import { Trash } from 'lucide-react';
```
**Icon sizing:**
- `className="w-4 h-4"` - In buttons, badges
- `className="w-5 h-5"` - Standalone
- `className="w-6 h-6"` - Headers, features
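Bans like the icon-library one above can be enforced mechanically with ESLint's built-in `no-restricted-imports` rule. A minimal sketch of such a config fragment — the repo's actual ESLint setup is not shown here and may use different rules:

```typescript
// Hypothetical ESLint rules fragment; the project's real config may differ.
const iconImportRules = {
  'no-restricted-imports': ['error', {
    patterns: [
      {
        // Block the external icon libraries named in the rules above.
        group: ['@heroicons/*', 'lucide-react', '@mui/icons-material'],
        message: 'Import icons from ../../icons instead.',
      },
    ],
  }],
};
```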
---
## 🐳 Docker Commands (IMPORTANT!)
**Container Names:**
@@ -169,7 +65,6 @@ docker exec -it igny8_backend python manage.py shell
# ✅ CORRECT - Run npm commands
docker exec -it igny8_frontend npm install
docker exec -it igny8_frontend npm run build
docker exec -it igny8_frontend npm run lint # Check design system violations
# ✅ CORRECT - View logs
docker logs igny8_backend -f
@@ -202,31 +97,23 @@ docker logs igny8_celery_worker -f
### Before Coding
1. **Read docs first** - Always read the relevant module doc from `docs/10-MODULES/` before changing code
2. **Read COMPONENT-SYSTEM.md** - **REQUIRED** before any frontend changes
3. **Check existing patterns** - Search codebase for similar implementations before creating new ones
4. **Use existing components** - Never duplicate; reuse components from `frontend/src/components/`
5. **Check data scope** - Know if your model is Global, Account, or Site/Sector scoped (see table above)
2. **Check existing patterns** - Search codebase for similar implementations before creating new ones
3. **Use existing components** - Never duplicate; reuse components from `frontend/src/components/`
4. **Check data scope** - Know if your model is Global, Account, or Site/Sector scoped (see table above)
### During Coding - Backend
6. **Use correct base class** - Global: `models.Model`, Account: `AccountBaseModel`, Site: `SiteSectorBaseModel`
7. **Use AI framework** - Use `backend/igny8_core/ai/` for AI operations, NOT legacy `utils/ai_processor.py`
8. **Follow service pattern** - Business logic in `backend/igny8_core/business/*/services/`
9. **Check permissions** - Use `IsAuthenticatedAndActive`, `HasTenantAccess` in views
### During Coding - Frontend (DESIGN SYSTEM)
10. **Use design system components** - Button, InputField, Select, Badge, Card - never raw HTML
11. **Use only design system colors** - `brand-*`, `gray-*`, `success-*`, `error-*`, `warning-*`, `purple-*`
12. **Import icons from central location** - `import { Icon } from '../../icons'` - never external libraries
13. **No inline styles** - Use Tailwind utilities or CSS variables only
14. **No hardcoded colors** - No hex values, no `blue-500`, `red-500` (Tailwind defaults disabled)
15. **Use TypeScript types** - All frontend code must be typed
### During Coding
5. **Use correct base class** - Global: `models.Model`, Account: `AccountBaseModel`, Site: `SiteSectorBaseModel`
6. **Use AI framework** - Use `backend/igny8_core/ai/` for AI operations, NOT legacy `utils/ai_processor.py`
7. **Follow service pattern** - Business logic in `backend/igny8_core/business/*/services/`
8. **Check permissions** - Use `IsAuthenticatedAndActive`, `HasTenantAccess` in views
9. **Use TypeScript types** - All frontend code must be typed
10. **Use TailwindCSS** - No inline styles; follow `frontend/DESIGN_SYSTEM.md`
### After Coding
16. **Run ESLint** - `docker exec -it igny8_frontend npm run lint` to check design system violations
17. **Update CHANGELOG.md** - Every commit needs a changelog entry with git reference
18. **Increment version** - PATCH for fixes, MINOR for features, MAJOR for breaking changes
19. **Update docs** - If you changed APIs or architecture, update relevant docs in `docs/`
20. **Run migrations** - After model changes: `docker exec -it igny8_backend python manage.py makemigrations`
11. **Update CHANGELOG.md** - Every commit needs a changelog entry with git reference
12. **Increment version** - PATCH for fixes, MINOR for features, MAJOR for breaking changes
13. **Update docs** - If you changed APIs or architecture, update relevant docs in `docs/`
14. **Run migrations** - After model changes: `docker exec -it igny8_backend python manage.py makemigrations`
---
@@ -252,22 +139,17 @@ docker logs igny8_celery_worker -f
| I want to... | Go to |
|--------------|-------|
| Find any module | [docs/INDEX.md](docs/INDEX.md) |
| **Use UI components** | [docs/30-FRONTEND/COMPONENT-SYSTEM.md](docs/30-FRONTEND/COMPONENT-SYSTEM.md) |
| **Check design tokens** | [docs/30-FRONTEND/DESIGN-TOKENS.md](docs/30-FRONTEND/DESIGN-TOKENS.md) |
| **Design guide** | [docs/30-FRONTEND/DESIGN-GUIDE.md](docs/30-FRONTEND/DESIGN-GUIDE.md) |
| Understand architecture | [docs/00-SYSTEM/ARCHITECTURE.md](docs/00-SYSTEM/ARCHITECTURE.md) |
| Find an API endpoint | [docs/20-API/ENDPOINTS.md](docs/20-API/ENDPOINTS.md) |
| See all models | [docs/90-REFERENCE/MODELS.md](docs/90-REFERENCE/MODELS.md) |
| Understand AI functions | [docs/90-REFERENCE/AI-FUNCTIONS.md](docs/90-REFERENCE/AI-FUNCTIONS.md) |
| See frontend pages | [docs/30-FRONTEND/PAGES.md](docs/30-FRONTEND/PAGES.md) |
| See recent changes | [CHANGELOG.md](CHANGELOG.md) |
| View component demos | App route: `/ui-elements` |
---
## 🚫 Don't Do
### General
- ❌ Skip reading docs before coding
- ❌ Create duplicate components
- ❌ Use `docker-compose` for exec commands (use `docker exec`)
@@ -275,21 +157,11 @@ docker logs igny8_celery_worker -f
- ❌ Add account filtering to Global models (they're platform-wide!)
- ❌ Forget site/sector filtering on content models
- ❌ Forget to update CHANGELOG
- ❌ Use inline styles (use TailwindCSS)
- ❌ Hardcode values (use settings/constants)
- ❌ Work on Linker/Optimizer (inactive modules - Phase 2)
- ❌ Use any SiteBuilder code (deprecated - mark for removal)
### Frontend - DESIGN SYSTEM VIOLATIONS
- ❌ Use raw `<button>` - use `Button` or `IconButton`
- ❌ Use raw `<input>` - use `InputField`, `Checkbox`, `Radio`
- ❌ Use raw `<select>` - use `Select` or `SelectDropdown`
- ❌ Use raw `<textarea>` - use `TextArea`
- ❌ Use inline `style={}` attributes
- ❌ Hardcode hex colors (`#0693e3`, `#ff0000`)
- ❌ Use Tailwind default colors (`blue-500`, `red-500`, `green-500`)
- ❌ Import from `@heroicons/*`, `lucide-react`, `@mui/icons-material`
- ❌ Create new CSS files (use `design-system.css` only)
---
## 📊 API Base URLs
@@ -311,22 +183,22 @@ docker logs igny8_celery_worker -f
## 📄 Documentation Rules
**Root folder MD files allowed (ONLY these):**
- `.rules` - AI agent rules (this file)
**Root folder MD files allowed:**
- `CHANGELOG.md` - Version history
- `README.md` - Project quickstart
- `README.md` - Project quickstart
- `IGNY8-APP.md` - Executive summary
- `TODOS.md` - Cleanup tracking
**All other docs go in `/docs/` folder:**
```
docs/
├── INDEX.md # Master navigation
├── 00-SYSTEM/ # Architecture, auth, tenancy, IGNY8-APP.md
├── 00-SYSTEM/ # Architecture, auth, tenancy
├── 10-MODULES/ # One file per module
├── 20-API/ # API endpoints
├── 30-FRONTEND/ # Pages, stores, DESIGN-GUIDE, DESIGN-TOKENS, COMPONENT-SYSTEM
├── 30-FRONTEND/ # Pages, stores
├── 40-WORKFLOWS/ # Cross-module flows
├── 90-REFERENCE/ # Models, AI functions, FIXES-KB
└── plans/ # FINAL-PRELAUNCH, implementation plans
└── 90-REFERENCE/ # Models, AI functions
```
**When updating docs:**
@@ -343,20 +215,10 @@ docs/
## 🎯 Quick Checklist Before Commit
### Backend Changes
- [ ] Read relevant module docs
- [ ] Used existing components/patterns
- [ ] Correct data scope (Global/Account/Site)
- [ ] Ran migrations if model changed
### Frontend Changes
- [ ] Read COMPONENT-SYSTEM.md
- [ ] Used design system components (not raw HTML)
- [ ] Used design system colors (brand-*, gray-*, success-*, error-*, warning-*, purple-*)
- [ ] Icons imported from `../../icons`
- [ ] No inline styles or hardcoded hex colors
- [ ] Ran `npm run lint` - no design system violations
### All Changes
- [ ] Updated CHANGELOG.md with git reference
- [ ] Incremented version number
- [ ] Updated version number
- [ ] Ran migrations if model changed
- [ ] Tested locally

*(File diff suppressed because it is too large.)*

@@ -0,0 +1,962 @@
# IGNY8 Comprehensive UX Audit & Recommendations
**Date:** December 27, 2025
**Scope:** Complete application audit for optimal user experience
**Note:** Plans, billing, credits, and usage sections are excluded; they will be handled in a separate phase
**Status:** ✅ IMPLEMENTED & INTEGRATED
---
## Implementation Status
| Section | Status | Files Modified |
|---------|--------|----------------|
| 1. Site & Sector Selector | ✅ | Already implemented per guidelines |
| 2. Tooltip Improvements | ✅ | `config/pages/*.config.tsx` (all 8 page configs updated with actionable tooltips) |
| 3. Footer 3-Widget Layout | ✅ | `components/dashboard/ThreeWidgetFooter.tsx` |
| 4. Progress Modal Steps | ✅ | `backend/igny8_core/ai/engine.py` |
| 5. Dashboard Redesign | ✅ | `components/dashboard/CompactDashboard.tsx` |
| 6. Site Setup Checklist | ✅ | `components/common/SiteCard.tsx`, `backend/auth/serializers.py`, `services/api.ts` |
| 7. To-Do-s Audit | ✅ | Documentation only |
| 8. Notification System | ✅ | `store/notificationStore.ts`, `components/header/NotificationDropdownNew.tsx`, `hooks/useProgressModal.ts` |
### Integration Complete
| Integration | Status | Details |
|-------------|--------|---------|
| NotificationDropdown → AppHeader | ✅ | `layout/AppHeader.tsx`, `components/header/Header.tsx` now use `NotificationDropdownNew` |
| AI Task → Notifications | ✅ | `hooks/useProgressModal.ts` automatically adds notifications on success/failure |
| Dashboard exports | ✅ | `components/dashboard/index.ts` barrel export created |
| NeedsAttentionBar → Home | ✅ | `pages/Dashboard/Home.tsx` shows attention items at top |
| ThreeWidgetFooter hook | ✅ | `hooks/useThreeWidgetFooter.ts` helper for easy integration |
---
## Table of Contents
1. [Site & Sector Selector Placement](#1-site--sector-selector-placement)
2. [Table Action Row Metrics - Tooltip Improvements](#2-table-action-row-metrics---tooltip-improvements)
3. [Footer Metrics - 3-Widget Layout](#3-footer-metrics---3-widget-layout)
4. [Progress Modal Steps Audit](#4-progress-modal-steps-audit)
5. [Dashboard Redesign Plan](#5-dashboard-redesign-plan)
6. [Site Setup Checklist Implementation](#6-site-setup-checklist-implementation)
7. [To-Do-s Completion Audit](#7-to-do-s-completion-audit)
8. [Notification System Plan](#8-notification-system-plan)
---
## 1. Site & Sector Selector Placement
### Rationale
- **Site Selector**: Required when data is scoped to a specific site
- **Sector Selector**: Required when data can be further filtered by content category/niche
- **Both**: When user needs precise data filtering at granular level
- **None**: When page is not site-specific or shows account-level data
### Recommendations by Page
| Page | Site Selector | Sector Selector | Reason |
|------|:-------------:|:---------------:|--------|
| **DASHBOARD** |
| Home | ✅ All Sites option | ❌ | Overview across sites - sector too granular for dashboard |
| **SETUP** |
| Add Keywords | ✅ | ✅ | Keywords are site+sector specific |
| Content Settings | ✅ | ❌ | Settings are site-level, not sector-level |
| Sites List | ❌ | ❌ | Managing sites themselves |
| Site Dashboard | ❌ (context) | ❌ | Already in specific site context |
| Site Settings tabs | ❌ (context) | ❌ | Already in specific site context |
| **PLANNER** |
| Keywords | ✅ | ✅ | Keywords organized by site+sector |
| Clusters | ✅ | ✅ | Clusters organized by site+sector |
| Cluster Detail | ❌ (context) | ❌ (context) | Already in cluster context |
| Ideas | ✅ | ✅ | Ideas organized by site+sector |
| **WRITER** |
| Tasks/Queue | ✅ | ✅ | Tasks organized by site+sector |
| Content/Drafts | ✅ | ✅ | Content organized by site+sector |
| Content View | ❌ (context) | ❌ (context) | Viewing specific content |
| Images | ✅ | ✅ | Images tied to content by site+sector |
| Review | ✅ | ✅ | Review queue by site+sector |
| Published | ✅ | ✅ | Published content by site+sector |
| **AUTOMATION** |
| Automation | ✅ | ❌ | Automation runs at site level |
| **LINKER** (if enabled) |
| Content List | ✅ | ✅ | Linking is content-specific |
| **OPTIMIZER** (if enabled) |
| Content Selector | ✅ | ✅ | Optimization is content-specific |
| Analysis Preview | ❌ (context) | ❌ (context) | Already in analysis context |
| **THINKER** (Admin) |
| All Thinker pages | ❌ | ❌ | System-wide prompts/profiles |
| **BILLING** |
| All Billing pages | ❌ | ❌ | Account-level billing data |
| **ACCOUNT** |
| Account Settings | ❌ | ❌ | Account-level settings |
| Profile | ❌ | ❌ | User profile |
| Team | ❌ | ❌ | Account-wide team |
| Plans | ❌ | ❌ | Account-level plans |
| Usage | ❌ | ❌ | Account-level usage |
| **HELP** |
| Help Page | ❌ | ❌ | Documentation |
### Implementation Priority
1. **High**: Ensure Planner & Writer pages show both selectors
2. **Medium**: Automation shows site only
3. **Low**: Account/Billing/Thinker show none
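The per-page decisions in the table above can be captured as data so each page declares its own selector visibility. A minimal sketch — the keys and config shape are assumptions for illustration, not the app's actual routing config:

```typescript
// Hypothetical per-page selector config derived from the table above.
interface SelectorConfig {
  site: boolean;   // show the Site selector
  sector: boolean; // show the Sector selector
}

const selectorByPage: Record<string, SelectorConfig> = {
  'dashboard/home':          { site: true,  sector: false }, // "All Sites" option
  'planner/keywords':        { site: true,  sector: true },
  'setup/content-settings':  { site: true,  sector: false },
  'automation':              { site: true,  sector: false },
  'account/billing':         { site: false, sector: false },
};
```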
---
## 2. Table Action Row Metrics - Tooltip Improvements
### Current State
The metrics in the table action row are already implemented (as shown in the screenshot):
- Keywords page: `Keywords 46 | Clustered 10 | Unmapped 0 | Volume 13.6K`
**NO additional metrics should be added to the App Header** - only Credits remains there.
### Improvement: Better Actionable Tooltips
The current tooltips are basic. Improve them with **actionable context and next-step guidance**:
#### Keywords Page Metrics Tooltips
| Metric | Current Tooltip | Improved Tooltip |
|--------|----------------|------------------|
| **Keywords** | "Total keywords" | "46 keywords ready for clustering. Select unclustered keywords and click 'Auto Cluster' to organize them into topic groups." |
| **Clustered** | "Keywords in clusters" | "10 clusters created. Clusters with 3-7 keywords are optimal. Click on a cluster to generate content ideas from it." |
| **Unmapped** | "Unclustered keywords" | "All keywords are clustered! New keywords you add will appear here until clustered." |
| **Volume** | "Total search volume" | "13.6K combined monthly searches. Higher volume keywords should be prioritized for content creation." |
#### Clusters Page Metrics Tooltips
| Metric | Current Tooltip | Improved Tooltip |
|--------|----------------|------------------|
| **Clusters** | "Total clusters" | "12 topic clusters available. Each cluster groups related keywords for focused content creation." |
| **With Ideas** | "Clusters with ideas" | "8 clusters have content ideas. Click 'Generate Ideas' on clusters without ideas to plan new content." |
| **Keywords** | "Total keywords" | "46 keywords organized across clusters. Well-balanced clusters have 3-7 keywords each." |
| **Ready** | "Ready for ideas" | "4 clusters are ready for idea generation. Select them and click 'Generate Ideas' to create content outlines." |
#### Ideas Page Metrics Tooltips
| Metric | Current Tooltip | Improved Tooltip |
|--------|----------------|------------------|
| **Ideas** | "Total ideas" | "34 content ideas generated. Review each idea's outline, then click 'Create Task' to begin content generation." |
| **Pending** | "Not yet tasks" | "12 ideas haven't been converted to tasks yet. Convert ideas to tasks to start the content writing process." |
| **In Tasks** | "Converted to tasks" | "22 ideas are now writing tasks. View their progress in Writer → Tasks queue." |
#### Tasks Page Metrics Tooltips
| Metric | Current Tooltip | Improved Tooltip |
|--------|----------------|------------------|
| **Queue** | "Pending tasks" | "15 tasks waiting for content generation. Select tasks and click 'Generate Content' to write articles." |
| **Processing** | "In progress" | "2 tasks are being written by AI. Content will appear in Drafts when complete (~2-3 min each)." |
| **Complete** | "Finished tasks" | "28 tasks have generated content. Review articles in Writer → Content before publishing." |
#### Content Page Metrics Tooltips
| Metric | Current Tooltip | Improved Tooltip |
|--------|----------------|------------------|
| **Drafts** | "Draft articles" | "25 articles in draft status. Add images and review before sending to the approval queue." |
| **Has Images** | "With images" | "17 articles have images attached. Articles with images get 94% more engagement." |
| **Needs Images** | "Missing images" | "8 articles need images. Select them and click 'Generate Images' to create featured & in-article visuals." |
#### Images Page Metrics Tooltips
| Metric | Current Tooltip | Improved Tooltip |
|--------|----------------|------------------|
| **Total** | "Total images" | "127 images in your library. Each article can have 1 featured image + multiple in-article images." |
| **Generated** | "AI generated" | "112 images created by AI. Review generated images and regenerate any that don't match your brand." |
| **Pending** | "Awaiting generation" | "15 image prompts ready. Click 'Generate Images' to create visuals from your approved prompts." |
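Since the improved tooltips embed live counts, they work best as functions of the current metrics rather than static strings. A minimal sketch of how this might look in a page config — the `MetricTooltip` shape is an assumption; the real structure in `config/pages/*.config.tsx` may differ:

```typescript
// Hypothetical metric-tooltip shape; the actual page-config types may differ.
interface MetricTooltip {
  label: string;
  // Build the tooltip from live counts so the guidance stays accurate.
  tooltip: (counts: { total: number; unmapped: number }) => string;
}

const keywordsMetric: MetricTooltip = {
  label: 'Keywords',
  tooltip: ({ total, unmapped }) =>
    unmapped === 0
      ? `All ${total} keywords are clustered! New keywords you add will appear here until clustered.`
      : `${total} keywords ready for clustering. Select the ${unmapped} unclustered keywords and click 'Auto Cluster' to organize them into topic groups.`,
};
```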
---
## 3. Footer Metrics - 3-Widget Layout
### Design: Three-Column Widget Layout
Replace the current single metric cards with a **3-widget horizontal layout** (33.3% each):
```
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ WIDGET 1: PAGE METRICS │ WIDGET 2: MODULE STATS │ WIDGET 3: COMPLETION │
│ (Current Page Progress) │ (Full Module Overview) │ (Both Modules Stats) │
│ ~33.3% width │ ~33.3% width │ ~33.3% width │
└─────────────────────────────────────────────────────────────────────────────────────┘
```
### Widget 1: Current Page Metrics (with Combined Progress Bar)
Shows metrics specific to the current page with a single combined progress bar.
#### Keywords Page - Widget 1
```
┌──────────────────────────────────────────────────┐
│ PAGE PROGRESS │
│ │
│ Keywords 46 Clustered 42 (91%) │
│ Unmapped 4 Volume 13.6K │
│ │
│ ████████████████████░░░ 91% Clustered │
│ │
│ 💡 4 keywords ready to cluster │
└──────────────────────────────────────────────────┘
```
#### Clusters Page - Widget 1
```
┌──────────────────────────────────────────────────┐
│ PAGE PROGRESS │
│ │
│ Clusters 12 With Ideas 8 (67%) │
│ Keywords 46 Ready 4 │
│ │
│ ██████████████░░░░░░░ 67% Have Ideas │
│ │
│ 💡 4 clusters ready for idea generation │
└──────────────────────────────────────────────────┘
```
#### Ideas Page - Widget 1
```
┌──────────────────────────────────────────────────┐
│ PAGE PROGRESS │
│ │
│ Ideas 34 In Tasks 22 (65%) │
│ Pending 12 From Clusters 8 │
│ │
│ █████████████░░░░░░░░ 65% Converted │
│ │
│ 💡 12 ideas ready to become tasks │
└──────────────────────────────────────────────────┘
```
#### Tasks Page - Widget 1
```
┌──────────────────────────────────────────────────┐
│ PAGE PROGRESS │
│ │
│ Total 45 Complete 28 (62%) │
│ Queue 15 Processing 2 │
│ │
│ ████████████░░░░░░░░░ 62% Generated │
│ │
│ 💡 15 tasks in queue for content generation │
└──────────────────────────────────────────────────┘
```
#### Content Page - Widget 1
```
┌──────────────────────────────────────────────────┐
│ PAGE PROGRESS │
│ │
│ Drafts 25 Has Images 17 (68%) │
│ Total Words 12.5K Ready 17 │
│ │
│ █████████████░░░░░░░░ 68% Have Images │
│ │
│ 💡 8 drafts need images before review │
└──────────────────────────────────────────────────┘
```
### Widget 2: Module Stats (Same Widget Across Module Pages)
Shows the **complete module overview** with actionable links. Same widget appears on all pages within a module.
#### Planner Module - Widget 2 (shown on Keywords, Clusters, Ideas pages)
```
┌──────────────────────────────────────────────────┐
│ PLANNER MODULE │
│ │
│ Keywords ─────────────────────────────► Clusters │
│ 46 Auto Cluster 12 │
│ ████████████████████░░░ 91% │
│ │
│ Clusters ─────────────────────────────► Ideas │
│ 12 Generate Ideas 34 │
│ █████████████░░░░░░░░░ 67% │
│ │
│ Ideas ────────────────────────────────► Tasks │
│ 34 Create Tasks 22 │
│ █████████████░░░░░░░░░ 65% │
│ │
│ [→ Keywords] [→ Clusters] [→ Ideas] │
└──────────────────────────────────────────────────┘
```
#### Writer Module - Widget 2 (shown on Tasks, Content, Images, Review, Published pages)
```
┌──────────────────────────────────────────────────┐
│ WRITER MODULE │
│ │
│ Tasks ───────────────────────────────► Drafts │
│ 45 Generate Content 28 │
│ ████████████░░░░░░░░░ 62% │
│ │
│ Drafts ──────────────────────────────► Images │
│ 28 Generate Images 17 │
│ █████████████░░░░░░░░ 68% │
│ │
│ Ready ───────────────────────────────► Published │
│ 17 Review & Publish 45 │
│ ████████████████░░░░ 73% │
│ │
│ [→ Tasks] [→ Content] [→ Images] [→ Published] │
└──────────────────────────────────────────────────┘
```
### Widget 3: Both Modules Completion Stats
Shows **completed items from both Planner and Writer** with time filter (7/30/90 days).
```
┌──────────────────────────────────────────────────┐
│ WORKFLOW COMPLETION [7d] [30d] [90d] │
│ │
│ PLANNER │
│ ├─ Keywords Clustered 42 ████████ │
│ ├─ Clusters Created 12 ███ │
│ └─ Ideas Generated 34 ███████ │
│ │
│ WRITER │
│ ├─ Content Generated 28 ██████ │
│ ├─ Images Created 127 █████████ │
│ └─ Articles Published 45 █████████ │
│ │
│ Credits Used: 2,450 │ Operations: 156 │
│ │
│ [View Full Analytics →] │
└──────────────────────────────────────────────────┘
```
### Implementation Notes
- Use existing `Card` component from `components/ui/card`
- Use existing `ProgressBar` component from `components/ui/progress`
- Use standard CSS tokens from `styles/tokens.css`:
- `--color-primary` for primary progress bars
- `--color-success` for completion indicators
- `--color-warning` for attention items
- Grid layout: `grid grid-cols-1 lg:grid-cols-3 gap-4`
- Compact padding: `p-4` instead of `p-6`
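Widget 1's combined progress bar and hint line can be derived from the same counts the action row already shows. A minimal sketch for the Keywords page (the `Card`/`ProgressBar` wiring is omitted; the function and field names here are assumptions):

```typescript
// Derived numbers for the Keywords page's Widget 1 (hypothetical helper).
interface PageProgress {
  percent: number; // drives the single combined progress bar
  hint: string;    // the 💡 line under the bar
}

function keywordsPageProgress(keywords: number, clustered: number): PageProgress {
  // Guard against division by zero when a site has no keywords yet.
  const percent = keywords === 0 ? 0 : Math.round((clustered / keywords) * 100);
  return {
    percent,
    hint: `${keywords - clustered} keywords ready to cluster`,
  };
}
```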
---
## 4. Progress Modal Steps Audit
### Current Issues
- Generic messages lacking context
- Missing counts where data is available
- Inconsistent terminology
- Not professional/polished
### Recommended Progress Step Text
#### Auto Cluster Keywords
| Phase | Current | Recommended |
|-------|---------|-------------|
| INIT | Validating keywords | Validating {count} keywords for clustering |
| PREP | Loading keyword data | Analyzing keyword relationships |
| AI_CALL | Generating clusters with Igny8 Semantic SEO Model | Grouping keywords by search intent ({count} keywords) |
| PARSE | Organizing clusters | Organizing {cluster_count} semantic clusters |
| SAVE | Saving clusters | Saving {cluster_count} clusters with {keyword_count} keywords |
| DONE | Clustering complete! | ✓ Created {cluster_count} clusters from {keyword_count} keywords |
#### Generate Ideas
| Phase | Current | Recommended |
|-------|---------|-------------|
| INIT | Verifying cluster integrity | Analyzing {count} clusters for content opportunities |
| PREP | Loading cluster keywords | Mapping {keyword_count} keywords to topic briefs |
| AI_CALL | Generating ideas with Igny8 Semantic AI | Generating content ideas for {cluster_count} clusters |
| PARSE | High-opportunity ideas generated | Structuring {idea_count} article outlines |
| SAVE | Content Outline for Ideas generated | Saving {idea_count} content ideas with outlines |
| DONE | Ideas generated! | ✓ Generated {idea_count} content ideas from {cluster_count} clusters |
#### Generate Content
| Phase | Current | Recommended |
|-------|---------|-------------|
| INIT | Validating task | Preparing {count} article{s} for generation |
| PREP | Preparing content idea | Building content brief with {keyword_count} target keywords |
| AI_CALL | Writing article with Igny8 Semantic AI | Writing {count} article{s} (~{word_target} words each) |
| PARSE | Formatting content | Formatting HTML content and metadata |
| SAVE | Saving article | Saving {count} article{s} ({total_words} words) |
| DONE | Content generated! | ✓ {count} article{s} generated ({total_words} words total) |
#### Generate Image Prompts
| Phase | Current | Recommended |
|-------|---------|-------------|
| INIT | Checking content and image slots | Analyzing content for {count} image opportunities |
| PREP | Mapping content for image prompts | Identifying featured image and {in_article_count} in-article image slots |
| AI_CALL | Writing Featured Image Prompts | Creating optimized prompts for {count} images |
| PARSE | Writing Inarticle Image Prompts | Refining {in_article_count} contextual image descriptions |
| SAVE | Assigning Prompts to Dedicated Slots | Assigning {count} prompts to image slots |
| DONE | Prompts generated! | ✓ {count} image prompts ready (1 featured + {in_article_count} in-article) |
#### Generate Images from Prompts
| Phase | Current | Recommended |
|-------|---------|-------------|
| INIT | Validating image prompts | Queuing {count} images for generation |
| PREP | Preparing image generation queue | Preparing AI image generation ({count} images) |
| AI_CALL | Generating images with AI | Generating image {current}/{count}... |
| PARSE | Processing image URLs | Processing {count} generated images |
| SAVE | Saving image URLs | Uploading {count} images to media library |
| DONE | Images generated! | ✓ {count} images generated and saved |
### Success Message Templates (with counts)
```typescript
// Clustering
`✓ Organized ${keywordCount} keywords into ${clusterCount} semantic clusters`
// Ideas
`✓ Created ${ideaCount} content ideas with detailed outlines`
// Content
`✓ Generated ${articleCount} articles (${totalWords.toLocaleString()} words)`
// Image Prompts
`✓ Created prompts for ${imageCount} images (1 featured + ${inArticleCount} in-article)`
// Image Generation
`✓ Generated and saved ${imageCount} AI images`
```
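The recommended step texts above use `{count}`-style placeholders, so a single interpolation helper can fill them from task metadata at render time. A minimal sketch — the helper name is hypothetical, not an existing function in the codebase:

```typescript
// Hypothetical helper: fill {placeholder} tokens in a progress-step template.
function formatStep(template: string, values: Record<string, number | string>): string {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    // Leave unknown tokens intact so missing data degrades gracefully.
    key in values ? String(values[key]) : match);
}

formatStep('Saving {cluster_count} clusters with {keyword_count} keywords',
  { cluster_count: 12, keyword_count: 46 });
// → 'Saving 12 clusters with 46 keywords'
```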
---
## 5. Dashboard Redesign Plan
### Current Issues
- Too much whitespace and large headings
- Repeating same counts/metrics without different dimensions
- Missing actionable insights
- No AI operations analytics
- Missing "needs attention" items
### New Dashboard Design: Multi-Dimension Compact Widgets
Based on Django admin reports analysis, the dashboard should show **different data dimensions** instead of repeating counts:
### Dashboard Layout (Compact, Information-Dense)
```
┌─────────────────────────────────────────────────────────────────────────────────────┐
│ ⚠ NEEDS ATTENTION (collapsible, only shows if items exist) │
│ ┌────────────────────┐ ┌────────────────────┐ ┌────────────────────┐ │
│ │ 3 pending review │ │ WP sync failed │ │ Setup incomplete │ │
│ │ [Review →] │ │ [Retry] [Fix →] │ │ [Complete →] │ │
│ └────────────────────┘ └────────────────────┘ └────────────────────┘ │
├─────────────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────┐ ┌─────────────────────────────────────────┐ │
│ │ WORKFLOW PIPELINE │ │ QUICK ACTIONS │ │
│ │ │ │ │ │
│ │ Sites → KWs → Clusters → Ideas │ │ [+ Keywords] [⚡ Cluster] [📝 Content] │ │
│ │ 2 156 23 67 │ │ [🖼 Images] [✓ Review] [🚀 Publish] │ │
│ │ ↓ │ │ │ │
│ │ Tasks → Drafts → Published │ │ WORKFLOW GUIDE │ │
│ │ 45 28 45 │ │ 1. Add Keywords 5. Generate Content │ │
│ │ │ │ 2. Auto Cluster 6. Generate Images │ │
│ │ ████████████░░░ 72% Complete │ │ 3. Generate Ideas 7. Review & Approve │ │
│ │ │ │ 4. Create Tasks 8. Publish to WP │ │
│ └─────────────────────────────────┘ │ [Full Help →] │ │
│ └─────────────────────────────────────────┘ │
│ │
├─────────────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────┐ ┌─────────────────────────────────────────┐ │
│ │ AI OPERATIONS (7d) [▼ 30d] │ │ RECENT ACTIVITY │ │
│ │ │ │ │ │
│ │ Operation Count Credits │ │ • Clustered 45 keywords → 8 clusters │ │
│ │ ───────────────────────────────│ │ 2 hours ago │ │
│ │ Clustering 8 80 │ │ • Generated 5 articles (4.2K words) │ │
│ │ Ideas 12 24 │ │ 4 hours ago │ │
│ │ Content 28 1,400 │ │ • Created 15 image prompts │ │
│ │ Images 45 225 │ │ Yesterday │ │
│ │ ───────────────────────────────│ │ • Published "Best Running Shoes" to WP │ │
│ │ Total 93 1,729 │ │ Yesterday │ │
│ │ │ │ • Added 23 keywords from seed DB │ │
│ │ Success Rate: 98.5% │ │ 2 days ago │ │
│ │ Avg Credits/Op: 18.6 │ │ │ │
│ └─────────────────────────────────┘ │ [View All Activity →] │ │
│ └─────────────────────────────────────────┘ │
│ │
├─────────────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────┐ ┌─────────────────────────────────────────┐ │
│ │ CONTENT VELOCITY │ │ AUTOMATION STATUS │ │
│ │ │ │ │ │
│ │ This Week This Month Total │ │ ● Active │ Schedule: Daily 9 AM │ │
│ │ │ │ │ │
│ │ Articles 5 28 156 │ │ Last Run: Dec 27, 7:00 AM │ │
│ │ Words 4.2K 24K 156K │ │ ├─ Clustered: 12 keywords │ │
│ │ Images 12 67 340 │ │ ├─ Ideas: 8 generated │ │
│ │ │ │ ├─ Content: 5 articles │ │
│ │ 📈 +23% vs last week │ │ └─ Images: 15 created │ │
│ │ │ │ │ │
│ │ [View Analytics →] │ │ Next Run: Dec 28, 9:00 AM │ │
│ └─────────────────────────────────┘ │ [Configure →] [Run Now →] │ │
│ └─────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────────────┘
```
### Widget Specifications
#### 1. Needs Attention Bar
- Collapsible, only visible when items exist
- Types: `pending_review`, `sync_failed`, `setup_incomplete`, `automation_failed`
- Compact horizontal cards with action buttons
#### 2. Workflow Pipeline Widget
- Visual flow: Sites → Keywords → Clusters → Ideas → Tasks → Drafts → Published
- Shows counts at each stage
- Single progress bar for overall completion
- Clickable stage names link to respective pages
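The mockup's 72% "Complete" bar implies some aggregation over the pipeline stages. The real formula isn't specified here, so the following is only one plausible sketch: average the stage-to-stage conversion ratios, capping each at 100% (stage names are taken from the flow above).

```python
PIPELINE_ORDER = ["keywords", "clusters", "ideas", "tasks", "drafts", "published"]

def pipeline_completion(counts: dict) -> int:
    """Overall completion as the mean of capped stage-to-stage conversion ratios."""
    ratios = []
    for prev, cur in zip(PIPELINE_ORDER, PIPELINE_ORDER[1:]):
        if counts.get(prev, 0) > 0:
            # Cap at 1.0 so a later stage can't exceed 100% of its predecessor
            ratios.append(min(counts.get(cur, 0) / counts[prev], 1.0))
    return round(100 * sum(ratios) / len(ratios)) if ratios else 0
```

Whatever formula the backend settles on should be computed server-side in the summary endpoint so the widget stays a dumb renderer.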
#### 3. Quick Actions + Workflow Guide Widget
- 2x3 grid of action buttons (use existing icons)
- Compact numbered workflow guide (1-8 steps)
- "Full Help" link to help page
#### 4. AI Operations Widget (NEW - from Django Admin Reports)
Shows data from `CreditUsageLog` model:
```typescript
interface AIOperationsData {
period: '7d' | '30d' | '90d';
operations: Array<{
type: 'clustering' | 'ideas' | 'content' | 'images';
count: number;
credits: number;
}>;
totals: {
count: number;
credits: number;
success_rate: number;
avg_credits_per_op: number;
};
}
```
- Time period filter (7d/30d/90d dropdown)
- Table with operation type, count, credits
- Success rate percentage
- Average credits per operation
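The totals row can be derived from the per-operation rows; success rate cannot (it needs attempted/succeeded run counts from `CreditUsageLog`, passed in separately here as an assumption). A minimal sketch:

```python
def summarize_operations(operations, attempted=0, succeeded=0):
    """Build the totals block for the AI Operations widget."""
    total_count = sum(op["count"] for op in operations)
    total_credits = sum(op["credits"] for op in operations)
    return {
        "count": total_count,
        "credits": total_credits,
        # Success rate comes from run-level logs, not the per-type rows
        "success_rate": round(100 * succeeded / attempted, 1) if attempted else 0.0,
        "avg_credits_per_op": round(total_credits / total_count, 1) if total_count else 0.0,
    }
```

With the mockup's numbers (93 operations, 1,729 credits) this yields the 18.6 average shown above.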
#### 5. Recent Activity Widget
Shows data from `AITaskLog` and `CreditUsageLog`:
- Last 5 significant operations
- Timestamp relative (2 hours ago, Yesterday)
- Clickable to navigate to relevant content
- "View All Activity" link
#### 6. Content Velocity Widget (NEW)
Shows content production rates:
```typescript
interface ContentVelocityData {
this_week: { articles: number; words: number; images: number };
this_month: { articles: number; words: number; images: number };
total: { articles: number; words: number; images: number };
trend: number; // percentage vs previous period
}
```
- Three time columns: This Week, This Month, Total
- Rows: Articles, Words, Images
- Trend indicator vs previous period
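The `trend` field is a period-over-period percentage; a sketch of the computation (returning `None` when there is no baseline is an assumption):

```python
def trend_percent(current: float, previous: float):
    """Change vs the previous period, as a rounded percentage for the 📈 indicator."""
    if previous == 0:
        return None  # no baseline to compare against
    return round(100 * (current - previous) / previous)
```

The widget would render `None` as a neutral dash rather than "+0%".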
#### 7. Automation Status Widget
Shows automation run status:
- Current status indicator (Active/Paused/Failed)
- Schedule display
- Last run details with stage breakdown
- Next scheduled run
- Configure and Run Now buttons
### API Endpoint Required
```python
# GET /api/v1/dashboard/summary/
{
"needs_attention": [...],
"pipeline": {
"sites": 2, "keywords": 156, "clusters": 23,
"ideas": 67, "tasks": 45, "drafts": 28, "published": 45,
"completion_percentage": 72
},
"ai_operations": {
"period": "7d",
"operations": [...],
"totals": {...}
},
"recent_activity": [...],
"content_velocity": {...},
"automation": {...}
}
```
### Implementation Notes
- Use existing components from `components/ui/`
- Use CSS tokens from `styles/tokens.css`
- Grid layout: `grid grid-cols-1 lg:grid-cols-2 gap-4`
- Compact widget padding: `p-4`
- No large headings - use subtle section labels
---
## 6. Site Setup Checklist Implementation
### Current Status
- ✅ `SiteSetupChecklist.tsx` component EXISTS
- ✅ Integrated in `Site Dashboard` page (full mode)
- ❌ **NOT integrated in `SiteCard.tsx`** (compact mode not used)
### Missing Implementation
The component exposes a `compact` prop, but the site cards list does not use it.
### Recommended Fix
**File:** `frontend/src/components/sites/SiteCard.tsx`
Add compact checklist to each site card:
```tsx
// In SiteCard component, add after the status badges:
<SiteSetupChecklist
siteId={site.id}
siteName={site.name}
hasIndustry={!!site.industry}
hasSectors={site.sectors_count > 0}
hasWordPressIntegration={!!site.wordpress_site_url}
hasKeywords={site.keywords_count > 0}
compact={true}
/>
```
**Visual Result:**
```
┌─────────────────────────────────────────┐
│ My Website [Active] │
│ example.com │
│ Industry: Tech │ 3 Sectors │
│ ●●●○ 3/4 Setup Steps Complete │ ← NEW compact checklist
│ [Manage →] │
└─────────────────────────────────────────┘
```
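The "3/4 Setup Steps Complete" label maps directly onto the four boolean props passed above. A trivial sketch (the step names are illustrative, not the component's actual keys):

```python
SETUP_STEPS = ["industry", "sectors", "wordpress", "keywords"]  # assumed step names

def setup_progress(flags: dict) -> str:
    """Render the compact checklist summary line, e.g. '3/4 Setup Steps Complete'."""
    done = sum(1 for step in SETUP_STEPS if flags.get(step))
    return f"{done}/{len(SETUP_STEPS)} Setup Steps Complete"
```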
---
## 7. To-Dos Completion Audit
### Summary by Section
| Section | File | Status | Remaining Items |
|---------|------|--------|-----------------|
| Section 1 | `dashboard_mods.md` | 📋 Planned (do LAST) | Dashboard revamp, aggregated API |
| Section 2 | `SECTION_2_FINAL_MODS.md` | ✅ Done | - |
| Section 3 | `SECTION_3_FINAL_MODS.md` | ✅ Done | - |
| Section 4 | `SECTION_4_FINAL_MODS.md` | ✅ Done | - |
| Section 5 | `SECTION_5_FINAL_MODS.md` | ✅ Done | - |
| Section 6 | `SECTION_6_FINAL_MODS.md` | ✅ Done | - |
**Note:** Plans, billing, credits, usage improvements moved to separate phase.
### Remaining Items Detail
#### Dashboard (Section 1) - Major Work
- [ ] Aggregated API endpoint `/v1/dashboard/summary/`
- [ ] NeedsAttention widget
- [ ] Real Recent Activity log (replace hardcoded)
- [ ] AI Operations widget (from CreditUsageLog)
- [ ] Content Velocity widget
- [ ] Automation Status display
- [ ] Contextual Quick Actions
#### Cross-Module
- [ ] Notification bell dropdown with AI run logging
- [ ] 3-widget footer layout for Planner/Writer pages
- [ ] Improved tooltips for table action row metrics
- [ ] Site Setup Checklist on site cards (compact mode)
---
## 8. Notification System Plan
### Current State
- Bell icon exists with placeholder/mock notifications
- No real notification system or API
- No notification persistence
### Comprehensive Notification System Design
#### A. Notification Data Model
```python
# backend/igny8_core/business/notifications/models.py
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.db import models


class Notification(BaseModel):
    # Choices must be defined before the fields that reference them
    NOTIFICATION_TYPES = [
        # AI Operations
        ('ai_cluster_complete', 'Clustering Complete'),
        ('ai_cluster_failed', 'Clustering Failed'),
        ('ai_ideas_complete', 'Ideas Generated'),
        ('ai_ideas_failed', 'Idea Generation Failed'),
        ('ai_content_complete', 'Content Generated'),
        ('ai_content_failed', 'Content Generation Failed'),
        ('ai_images_complete', 'Images Generated'),
        ('ai_images_failed', 'Image Generation Failed'),
        # Workflow
        ('content_ready_review', 'Content Ready for Review'),
        ('content_published', 'Content Published'),
        ('content_publish_failed', 'Publishing Failed'),
        # WordPress Sync
        ('wordpress_sync_success', 'WordPress Sync Complete'),
        ('wordpress_sync_failed', 'WordPress Sync Failed'),
        # Credits/Billing
        ('credits_low', 'Credits Running Low'),
        ('credits_depleted', 'Credits Depleted'),
        ('plan_upgraded', 'Plan Upgraded'),
        # Setup
        ('site_setup_complete', 'Site Setup Complete'),
        ('keywords_imported', 'Keywords Imported'),
    ]

    SEVERITY_CHOICES = [
        ('info', 'Info'),
        ('success', 'Success'),
        ('warning', 'Warning'),
        ('error', 'Error'),
    ]

    account = models.ForeignKey('Account', on_delete=models.CASCADE)
    user = models.ForeignKey('User', on_delete=models.CASCADE, null=True)  # null = all users

    # Notification content
    type = models.CharField(max_length=50, choices=NOTIFICATION_TYPES)
    title = models.CharField(max_length=200)
    message = models.TextField()
    severity = models.CharField(max_length=20, choices=SEVERITY_CHOICES)

    # Related objects (ContentType FK requires an explicit on_delete)
    site = models.ForeignKey('Site', null=True, on_delete=models.CASCADE)
    content_type = models.ForeignKey(ContentType, null=True, on_delete=models.SET_NULL)
    object_id = models.PositiveIntegerField(null=True)
    content_object = GenericForeignKey('content_type', 'object_id')

    # Action
    action_url = models.CharField(max_length=500, null=True)
    action_label = models.CharField(max_length=50, null=True)

    # Status
    is_read = models.BooleanField(default=False)
    read_at = models.DateTimeField(null=True)
    created_at = models.DateTimeField(auto_now_add=True)
```
#### B. Notification Creation Points
| Trigger Event | Notification Type | Severity | Title | Message Template |
|---------------|-------------------|----------|-------|------------------|
| Clustering completes | `ai_cluster_complete` | success | Clustering Complete | Created {count} clusters from {keyword_count} keywords |
| Clustering fails | `ai_cluster_failed` | error | Clustering Failed | Failed to cluster keywords: {error} |
| Ideas generated | `ai_ideas_complete` | success | Ideas Generated | Generated {count} content ideas from {cluster_count} clusters |
| Ideas failed | `ai_ideas_failed` | error | Idea Generation Failed | Failed to generate ideas: {error} |
| Content generated | `ai_content_complete` | success | Content Generated | Generated {count} articles ({word_count} words) |
| Content failed | `ai_content_failed` | error | Content Generation Failed | Failed to generate content: {error} |
| Images generated | `ai_images_complete` | success | Images Generated | Generated {count} images for your content |
| Images failed | `ai_images_failed` | error | Image Generation Failed | Failed to generate {count} images: {error} |
| Content published | `content_published` | success | Content Published | "{title}" published to {site_name} |
| Publish failed | `content_publish_failed` | error | Publishing Failed | Failed to publish "{title}": {error} |
| WP sync success | `wordpress_sync_success` | success | WordPress Synced | Synced {count} items with {site_name} |
| WP sync failed | `wordpress_sync_failed` | error | Sync Failed | WordPress sync failed for {site_name}: {error} |
| Credits at 80% | `credits_low` | warning | Credits Running Low | You've used 80% of your credits. Consider upgrading. |
| Credits at 90% | `credits_low` | warning | Credits Almost Depleted | Only 10% of credits remaining. Upgrade to continue. |
| Credits depleted | `credits_depleted` | error | Credits Depleted | Your credits are exhausted. Upgrade to continue. |
| Site setup done | `site_setup_complete` | success | Site Ready | {site_name} is fully configured and ready! |
| Keywords imported | `keywords_imported` | info | Keywords Imported | Added {count} keywords to {site_name} |
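Each row in the table above is (type, severity, title, message template), so creation points can share one small factory instead of hand-building notifications at every trigger. A sketch with a hypothetical subset of the table:

```python
# Hypothetical subset of the trigger table; the full dict would cover every row
TEMPLATES = {
    "ai_cluster_complete": ("success", "Clustering Complete",
                            "Created {count} clusters from {keyword_count} keywords"),
    "wordpress_sync_failed": ("error", "Sync Failed",
                              "WordPress sync failed for {site_name}: {error}"),
    "keywords_imported": ("info", "Keywords Imported",
                          "Added {count} keywords to {site_name}"),
}

def build_notification(event_type: str, **context) -> dict:
    """Fill a notification payload from the template table."""
    severity, title, template = TEMPLATES[event_type]
    return {"type": event_type, "severity": severity, "title": title,
            "message": template.format(**context)}
```

The real implementation would call `Notification.objects.create(**payload, ...)` with the account/site attached.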
#### C. API Endpoints
```python
# GET /api/v1/notifications/
# Returns paginated list, most recent first
{
"count": 45,
"unread_count": 3,
"results": [
{
"id": 123,
"type": "ai_content_complete",
"title": "Content Generated",
"message": "Generated 5 articles (4,500 words)",
"severity": "success",
"site": {"id": 1, "name": "My Blog"},
"action_url": "/writer/content",
"action_label": "View Content",
"is_read": false,
"created_at": "2025-12-27T10:30:00Z"
}
]
}
# POST /api/v1/notifications/{id}/read/
# Mark single notification as read
# POST /api/v1/notifications/read-all/
# Mark all notifications as read
# DELETE /api/v1/notifications/{id}/
# Delete notification
```
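The `read/` and `read-all/` endpoints share the same semantics: flip `is_read` on one or all rows and hand back the new unread count for the bell badge. Sketched here over plain dicts rather than the ORM:

```python
def mark_read(notifications: list, note_id=None) -> int:
    """Mark one notification (by id) or all of them read; return new unread count."""
    for n in notifications:
        if note_id is None or n["id"] == note_id:
            n["is_read"] = True
    return sum(1 for n in notifications if not n["is_read"])
```

Returning the unread count in the response lets the frontend update the badge without a second fetch.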
#### D. Frontend Integration
##### NotificationDropdown Component Updates
```tsx
// frontend/src/components/header/NotificationDropdown.tsx
interface Notification {
id: number;
type: string;
title: string;
message: string;
severity: 'info' | 'success' | 'warning' | 'error';
site?: { id: number; name: string };
action_url?: string;
action_label?: string;
is_read: boolean;
created_at: string;
}
// Features:
// - Fetch real notifications on mount
// - Poll every 30 seconds for new notifications
// - Show unread count badge on bell icon
// - Mark as read on click
// - Navigate to action_url on click
// - "Mark all read" button
// - "View all" link to full notifications page
```
##### Full Notifications Page
Create `/account/notifications` page with:
- Full list of all notifications (paginated)
- Filter by type, severity, site
- Bulk actions (mark read, delete)
- Date range filtering
#### E. Implementation Priority
**Phase 1 (Core):**
1. Create Notification model
2. Create API endpoints
3. Hook AI functions to create notifications on complete/fail
4. Update NotificationDropdown to fetch real data
**Phase 2 (Enhanced):**
1. Credit threshold notifications
2. WordPress sync notifications
3. Full notifications page
4. Email notifications (optional)
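The Phase 2 credit-threshold trigger needs to fire once per threshold crossing, not on every request. One way to sketch that dedup logic (threshold tiers taken from the table above; the `sent_thresholds` bookkeeping is an assumption):

```python
# Highest tier first so a big jump emits the most severe notification
THRESHOLDS = [(100, "credits_depleted"), (90, "credits_low"), (80, "credits_low")]

def credit_alert(used: int, total: int, sent_thresholds: set):
    """Return (threshold, notification_type) to emit, or None if nothing new crossed."""
    pct = used / total * 100 if total else 100
    for threshold, note_type in THRESHOLDS:
        if pct >= threshold and threshold not in sent_thresholds:
            return threshold, note_type
    return None
```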
**Phase 3 (Polish):**
1. Notification preferences
2. Push notifications
3. Real-time updates (WebSocket)
---
## Implementation Roadmap
### Priority Order
1. **Site Setup Checklist on Cards** - Quick win, already built
2. **Table Action Row Tooltip Improvements** - Quick improvement
3. **Footer 3-Widget Layout** - Better workflow visibility
4. **Notification System** - High user value
5. **Progress Modal Text** - Polish
6. **Dashboard Redesign** - Major effort, do last
### Estimated Effort
| Item | Backend | Frontend | Total |
|------|---------|----------|-------|
| Site checklist on cards | 0h | 2h | 2h |
| Tooltip improvements | 0h | 4h | 4h |
| Footer 3-widget layout | 2h | 12h | 14h |
| Notification system | 8h | 8h | 16h |
| Progress modal text | 4h | 4h | 8h |
| Dashboard redesign | 8h | 16h | 24h |
| **Total** | **22h** | **46h** | **68h** |
---
## Appendix: Current vs Recommended Comparison
### Table Action Row Metrics (Already Implemented)
**Current:** Shows metrics in table action row
```
| Keywords 46 | Clustered 10 | Unmapped 0 | Volume 13.6K |
```
**Improvement:** Better actionable tooltips with guidance on next steps
### Footer Metrics Example (Keywords Page)
**Current:** Large cards with minimal info
**Recommended:** 3-widget layout:
- Widget 1: Page metrics with combined progress bar
- Widget 2: Full Planner module stats with links
- Widget 3: Both modules completion stats with 7/30/90d filter
### Dashboard Example
**Current:** Hero banner + large sections + much whitespace + repeating counts
**Recommended:** Compact info-dense layout with:
- Needs Attention bar (only if items exist)
- Workflow Pipeline + Quick Actions/Guide (2 columns)
- AI Operations stats + Recent Activity (2 columns)
- Content Velocity + Automation Status (2 columns)
- All visible without scrolling, different data dimensions
---
## Technical Notes
### Standard Components to Use
From existing codebase:
- `components/ui/card` - Card component
- `components/ui/progress` - ProgressBar component
- `components/ui/button/Button` - Button component
- `components/ui/tooltip/Tooltip` - Tooltip component
- `components/ui/dropdown/Dropdown` - Dropdown component
### Standard CSS Tokens
From `styles/tokens.css`:
```css
--color-primary: #0693e3; /* Primary brand blue */
--color-success: #0bbf87; /* Success green */
--color-warning: #ff7a00; /* Warning orange */
--color-danger: #ef4444; /* Danger red */
--color-purple: #5d4ae3; /* Purple accent */
```
### Do NOT Create
- Inline duplicate styles
- New color variables outside tokens.css
- Duplicate component implementations
- Styles in igny8-colors.css (use tokens.css)

View File

@@ -246,21 +246,8 @@ Stage 7: Generate images → Review queue
| **Usage** | ✅ Active | `/account/usage` |
| **AI Models** | ✅ Active (Admin) | `/settings/integration` |
| **Help** | ✅ Active | `/help` |
| **SiteBuilder** | ❌ Deprecated | Removed - was for site structure generation |
| **Linker** | ⏸️ Phase 2 | Internal linking suggestions (disabled by default) |
| **Optimizer** | ⏸️ Phase 2 | Content optimization (disabled by default) |
### Module Status Details
| Module | Status | Notes |
|--------|--------|-------|
| **SiteBuilder** | ❌ Deprecated | Code exists but feature is removed. Marked for cleanup. |
| **Linker** | ⏸️ Phase 2 | Feature flag: `linker_enabled`. Available but disabled by default. |
| **Optimizer** | ⏸️ Phase 2 | Feature flag: `optimizer_enabled`. Available but disabled by default. |
To enable Phase 2 modules, update via Django Admin:
- `GlobalModuleSettings` (pk=1) for platform-wide settings
- `ModuleEnableSettings` for per-account settings
| **Linker** | ⏸️ Disabled | Internal linking (available, disabled by default) |
| **Optimizer** | ⏸️ Disabled | Content optimization (available, disabled by default) |
---

View File

@@ -10,10 +10,10 @@
| Document | Description |
|----------|-------------|
| [docs/00-SYSTEM/IGNY8-APP.md](docs/00-SYSTEM/IGNY8-APP.md) | Executive summary (non-technical) |
| [IGNY8-APP.md](IGNY8-APP.md) | Executive summary (non-technical) |
| [docs/INDEX.md](docs/INDEX.md) | Full documentation index |
| [CHANGELOG.md](CHANGELOG.md) | Version history |
| [.rules](.rules) | AI agent rules |
| [RULES.md](RULES.md) | Documentation maintenance rules |
---
@@ -40,19 +40,19 @@ IGNY8 is a full-stack SaaS platform that combines AI-powered content generation
igny8/
├── README.md # This file
├── CHANGELOG.md # Version history
├── .rules # AI agent rules
├── IGNY8-APP.md # Executive summary
├── RULES.md # Documentation rules
├── backend/ # Django REST API + Celery
├── frontend/ # React + Vite SPA
├── docs/ # Full documentation
│ ├── INDEX.md # Documentation navigation
│ ├── 00-SYSTEM/ # Architecture, auth, IGNY8-APP
│ ├── 00-SYSTEM/ # Architecture & auth
│ ├── 10-MODULES/ # Module documentation
│ ├── 20-API/ # API endpoints
│ ├── 30-FRONTEND/ # Frontend pages, stores, design system
│ ├── 30-FRONTEND/ # Frontend pages & stores
│ ├── 40-WORKFLOWS/ # Cross-module workflows
│ ├── 50-DEPLOYMENT/ # Deployment guides
│ ├── 90-REFERENCE/ # Models, AI functions, fixes
│ └── plans/ # Implementation plans
│ └── 90-REFERENCE/ # Models & AI functions
└── docker-compose.app.yml
```

View File

@@ -41,11 +41,6 @@ class Igny8AdminConfig(AdminConfig):
admin_site._actions = old_site._actions.copy()
admin_site._global_actions = old_site._global_actions.copy()
# CRITICAL: Update each ModelAdmin's admin_site attribute to point to our custom site
# Otherwise, each_context() will use the wrong admin site and miss our customizations
for model, model_admin in admin_site._registry.items():
model_admin.admin_site = admin_site
# Now replace the default site
admin_module.site = admin_site
admin_module.sites.site = admin_site

View File

@@ -145,16 +145,7 @@ class Igny8ModelAdmin(UnfoldModelAdmin):
for group in sidebar_navigation:
group_is_active = False
for item in group.get('items', []):
# Unfold stores resolved link in 'link_callback', original lambda in 'link'
item_link = item.get('link_callback') or item.get('link', '')
# Convert to string (handles lazy proxy objects and ensures it's a string)
try:
item_link = str(item_link) if item_link else ''
except:
item_link = ''
# Skip if it's a function representation (e.g., "<function ...>")
if item_link.startswith('<'):
continue
item_link = item.get('link', '')
# Check if current path matches this item's link
if item_link and current_path.startswith(item_link):
item['active'] = True

View File

@@ -1,30 +1,28 @@
"""
Custom AdminSite for IGNY8 using Unfold theme.
SIMPLIFIED VERSION - Navigation is now handled via UNFOLD settings in settings.py
This file only handles:
1. Custom URLs for dashboard, reports, and monitoring pages
2. Index redirect to dashboard
All sidebar navigation is configured in settings.py under UNFOLD["SIDEBAR"]["navigation"]
Custom AdminSite for IGNY8 to organize models into proper groups using Unfold
NO EMOJIS - Unfold handles all icons via Material Design
"""
from django.contrib import admin
from django.urls import path
from django.contrib.admin.apps import AdminConfig
from django.apps import apps
from django.urls import path, reverse_lazy
from django.shortcuts import redirect
from django.contrib.admin import sites
from unfold.admin import ModelAdmin as UnfoldModelAdmin
from unfold.sites import UnfoldAdminSite
class Igny8AdminSite(UnfoldAdminSite):
"""
Custom AdminSite based on Unfold.
Navigation is handled via UNFOLD settings - this just adds custom URLs.
Custom AdminSite based on Unfold that organizes models into the planned groups
"""
site_header = 'IGNY8 Administration'
site_title = 'IGNY8 Admin'
index_title = 'IGNY8 Administration'
def get_urls(self):
"""Add custom URLs for dashboard, reports, and monitoring pages"""
"""Get admin URLs with dashboard, reports, and monitoring pages available"""
from django.urls import path
from .dashboard import admin_dashboard
from .reports import (
revenue_report, usage_report, content_report, data_quality_report,
@@ -33,12 +31,12 @@ class Igny8AdminSite(UnfoldAdminSite):
from .monitoring import (
system_health_dashboard, api_monitor_dashboard, debug_console
)
urls = super().get_urls()
custom_urls = [
# Dashboard
path('dashboard/', self.admin_view(admin_dashboard), name='dashboard'),
# Reports
path('reports/revenue/', self.admin_view(revenue_report), name='report_revenue'),
path('reports/usage/', self.admin_view(usage_report), name='report_usage'),
@@ -46,17 +44,308 @@ class Igny8AdminSite(UnfoldAdminSite):
path('reports/data-quality/', self.admin_view(data_quality_report), name='report_data_quality'),
path('reports/token-usage/', self.admin_view(token_usage_report), name='report_token_usage'),
path('reports/ai-cost-analysis/', self.admin_view(ai_cost_analysis), name='report_ai_cost_analysis'),
# Monitoring
# Monitoring (NEW)
path('monitoring/system-health/', self.admin_view(system_health_dashboard), name='monitoring_system_health'),
path('monitoring/api-monitor/', self.admin_view(api_monitor_dashboard), name='monitoring_api_monitor'),
path('monitoring/debug-console/', self.admin_view(debug_console), name='monitoring_debug_console'),
]
return custom_urls + urls
def index(self, request, extra_context=None):
"""Redirect admin index to custom dashboard"""
"""Redirect to custom dashboard"""
from django.shortcuts import redirect
return redirect('admin:dashboard')
def get_sidebar_list(self, request):
"""
Override Unfold's get_sidebar_list to return our custom app groups
Convert Django app_list format to Unfold sidebar navigation format
"""
# Get our custom Django app list
django_apps = self.get_app_list(request, app_label=None)
# Convert to Unfold navigation format: {title, items: [{title, link, icon}]}
sidebar_groups = []
for app in django_apps:
group = {
'title': app['name'],
'collapsible': True,
'items': []
}
# Convert each model to navigation item
for model in app.get('models', []):
if model.get('perms', {}).get('view', False) or model.get('perms', {}).get('change', False):
item = {
'title': model['name'],
'link': model['admin_url'],
'icon': None, # Unfold will use default
'has_permission': True, # CRITICAL: Template checks this
}
group['items'].append(item)
# Only add groups that have items
if group['items']:
sidebar_groups.append(group)
return sidebar_groups
def each_context(self, request):
"""
Override context to ensure our custom app_list is always used
This is called by all admin templates for sidebar rendering
CRITICAL FIX: Force custom sidebar on ALL pages including model detail/list views
"""
# CRITICAL: Must call parent to get sidebar_navigation set
context = super().each_context(request)
# DEBUGGING: Print to console what parent returned
print(f"\n=== DEBUG each_context for {request.path} ===")
print(f"sidebar_navigation length from parent: {len(context.get('sidebar_navigation', []))}")
if context.get('sidebar_navigation'):
print(f"First sidebar group: {context['sidebar_navigation'][0].get('title', 'NO TITLE')}")
# Force our custom app list to be used everywhere - IGNORE app_label parameter
custom_apps = self.get_app_list(request, app_label=None)
context['available_apps'] = custom_apps
context['app_list'] = custom_apps # Also set app_list for compatibility
# CRITICAL FIX: Ensure sidebar_navigation is using our custom sidebar
# Parent's each_context already called get_sidebar_list(), which returns our custom sidebar
# So sidebar_navigation should already be correct, but let's verify
if not context.get('sidebar_navigation') or len(context.get('sidebar_navigation', [])) == 0:
# If sidebar_navigation is empty, force it
print("WARNING: sidebar_navigation was empty, forcing it!")
context['sidebar_navigation'] = self.get_sidebar_list(request)
print(f"Final sidebar_navigation length: {len(context['sidebar_navigation'])}")
print("=== END DEBUG ===\n")
return context
def get_app_list(self, request, app_label=None):
"""
Customize the app list to organize models into logical groups
NO EMOJIS - Unfold handles all icons via Material Design
Args:
request: The HTTP request
app_label: IGNORED - Always return full custom sidebar for consistency
"""
# CRITICAL: Always build full app_dict (ignore app_label) for consistent sidebar
app_dict = self._build_app_dict(request, None)
# Define our custom groups with their models (using object_name)
# Organized by business function - Material icons configured in Unfold
custom_groups = {
'Accounts & Tenancy': {
'models': [
('igny8_core_auth', 'Account'),
('igny8_core_auth', 'User'),
('igny8_core_auth', 'Site'),
('igny8_core_auth', 'Sector'),
('igny8_core_auth', 'SiteUserAccess'),
],
},
'Global Resources': {
'models': [
('igny8_core_auth', 'Industry'),
('igny8_core_auth', 'IndustrySector'),
('igny8_core_auth', 'SeedKeyword'),
],
},
'Global Settings': {
'models': [
('system', 'GlobalIntegrationSettings'),
('system', 'GlobalModuleSettings'),
('system', 'GlobalAIPrompt'),
('system', 'GlobalAuthorProfile'),
('system', 'GlobalStrategy'),
],
},
'Plans and Billing': {
'models': [
('igny8_core_auth', 'Plan'),
('igny8_core_auth', 'Subscription'),
('billing', 'Invoice'),
('billing', 'Payment'),
('billing', 'CreditPackage'),
('billing', 'PaymentMethodConfig'),
('billing', 'AccountPaymentMethod'),
],
},
'Credits': {
'models': [
('billing', 'CreditTransaction'),
('billing', 'CreditUsageLog'),
('billing', 'CreditCostConfig'),
('billing', 'PlanLimitUsage'),
],
},
'Content Planning': {
'models': [
('planner', 'Keywords'),
('planner', 'Clusters'),
('planner', 'ContentIdeas'),
],
},
'Content Generation': {
'models': [
('writer', 'Tasks'),
('writer', 'Content'),
('writer', 'Images'),
],
},
'Taxonomy & Organization': {
'models': [
('writer', 'ContentTaxonomy'),
('writer', 'ContentTaxonomyRelation'),
('writer', 'ContentClusterMap'),
('writer', 'ContentAttribute'),
],
},
'Publishing & Integration': {
'models': [
('integration', 'SiteIntegration'),
('integration', 'SyncEvent'),
('publishing', 'PublishingRecord'),
('system', 'PublishingChannel'),
('publishing', 'DeploymentRecord'),
],
},
'AI & Automation': {
'models': [
('system', 'IntegrationSettings'),
('system', 'AIPrompt'),
('system', 'Strategy'),
('system', 'AuthorProfile'),
('system', 'APIKey'),
('system', 'WebhookConfig'),
('automation', 'AutomationConfig'),
('automation', 'AutomationRun'),
],
},
'System Settings': {
'models': [
('contenttypes', 'ContentType'),
('system', 'ContentTemplate'),
('system', 'TaxonomyConfig'),
('system', 'SystemSetting'),
('system', 'ContentTypeConfig'),
('system', 'NotificationConfig'),
],
},
'Django Admin': {
'models': [
('auth', 'Group'),
('auth', 'Permission'),
('igny8_core_auth', 'PasswordResetToken'),
('sessions', 'Session'),
],
},
'Tasks & Logging': {
'models': [
('ai', 'AITaskLog'),
('system', 'AuditLog'),
('admin', 'LogEntry'),
('django_celery_results', 'TaskResult'),
('django_celery_results', 'GroupResult'),
],
},
}
# ALWAYS build and return our custom organized app list
# regardless of app_label parameter (for consistent sidebar on all pages)
organized_apps = []
# Add Dashboard link as first item
organized_apps.append({
'name': '📊 Dashboard',
'app_label': '_dashboard',
'app_url': '/admin/dashboard/',
'has_module_perms': True,
'models': [],
})
# Add Reports section with links to all reports
organized_apps.append({
'name': 'Reports & Analytics',
'app_label': '_reports',
'app_url': '#',
'has_module_perms': True,
'models': [
{
'name': 'Revenue Report',
'object_name': 'RevenueReport',
'admin_url': '/admin/reports/revenue/',
'view_only': True,
'perms': {'view': True},
},
{
'name': 'Usage Report',
'object_name': 'UsageReport',
'admin_url': '/admin/reports/usage/',
'view_only': True,
'perms': {'view': True},
},
{
'name': 'Content Report',
'object_name': 'ContentReport',
'admin_url': '/admin/reports/content/',
'view_only': True,
'perms': {'view': True},
},
{
'name': 'Data Quality Report',
'object_name': 'DataQualityReport',
'admin_url': '/admin/reports/data-quality/',
'view_only': True,
'perms': {'view': True},
},
{
'name': 'Token Usage Report',
'object_name': 'TokenUsageReport',
'admin_url': '/admin/reports/token-usage/',
'view_only': True,
'perms': {'view': True},
},
{
'name': 'AI Cost Analysis',
'object_name': 'AICostAnalysis',
'admin_url': '/admin/reports/ai-cost-analysis/',
'view_only': True,
'perms': {'view': True},
},
],
})
for group_name, group_config in custom_groups.items():
group_models = []
for app_label, model_name in group_config['models']:
# Find the model in app_dict
for app in app_dict.values():
if app['app_label'] == app_label:
for model in app.get('models', []):
if model['object_name'] == model_name:
group_models.append(model)
break
if group_models:
# Get the first model's app_label to use as the real app_label
first_model_app_label = group_config['models'][0][0]
organized_apps.append({
'name': group_name,
'app_label': first_model_app_label, # Use real app_label, not fake one
'app_url': f'/admin/{first_model_app_label}/', # Real URL, not '#'
'has_module_perms': True,
'models': group_models,
})
return organized_apps
# Instantiate custom admin site

View File

@@ -13,6 +13,8 @@ from django.conf import settings
from .constants import (
DEFAULT_AI_MODEL,
JSON_MODE_MODELS,
MODEL_RATES,
IMAGE_MODEL_RATES,
VALID_OPENAI_IMAGE_MODELS,
VALID_SIZES_BY_MODEL,
DEBUG_MODE,
@@ -38,27 +40,24 @@ class AICore:
self.account = account
self._openai_api_key = None
self._runware_api_key = None
self._bria_api_key = None
self._anthropic_api_key = None
self._load_account_settings()
def _load_account_settings(self):
"""Load API keys from IntegrationProvider (centralized provider config)"""
"""Load API keys from GlobalIntegrationSettings (platform-wide, used by ALL accounts)"""
try:
from igny8_core.ai.model_registry import ModelRegistry
from igny8_core.modules.system.global_settings_models import GlobalIntegrationSettings
# Load API keys from IntegrationProvider (centralized, platform-wide)
self._openai_api_key = ModelRegistry.get_api_key('openai')
self._runware_api_key = ModelRegistry.get_api_key('runware')
self._bria_api_key = ModelRegistry.get_api_key('bria')
self._anthropic_api_key = ModelRegistry.get_api_key('anthropic')
# Get global settings - single instance used by ALL accounts
global_settings = GlobalIntegrationSettings.get_instance()
# Load API keys from global settings (platform-wide)
self._openai_api_key = global_settings.openai_api_key
self._runware_api_key = global_settings.runware_api_key
except Exception as e:
logger.error(f"Could not load API keys from IntegrationProvider: {e}", exc_info=True)
logger.error(f"Could not load GlobalIntegrationSettings: {e}", exc_info=True)
self._openai_api_key = None
self._runware_api_key = None
self._bria_api_key = None
self._anthropic_api_key = None
def get_api_key(self, integration_type: str = 'openai') -> Optional[str]:
"""Get API key for integration type"""
@@ -66,10 +65,6 @@ class AICore:
return self._openai_api_key
elif integration_type == 'runware':
return self._runware_api_key
elif integration_type == 'bria':
return self._bria_api_key
elif integration_type == 'anthropic':
return self._anthropic_api_key
return None
def get_model(self, integration_type: str = 'openai') -> str:
@@ -92,13 +87,13 @@ class AICore:
response_format: Optional[Dict] = None,
api_key: Optional[str] = None,
function_name: str = 'ai_request',
prompt_prefix: Optional[str] = None,
function_id: Optional[str] = None,
tracker: Optional[ConsoleStepTracker] = None
) -> Dict[str, Any]:
"""
Centralized AI request handler with console logging.
All AI text generation requests go through this method.
Args:
prompt: Prompt text
model: Model name (required - must be provided from IntegrationSettings)
@@ -107,13 +102,12 @@ class AICore:
response_format: Optional response format dict (for JSON mode)
api_key: Optional API key override
function_name: Function name for logging (e.g., 'cluster_keywords')
prompt_prefix: Optional prefix to add before prompt (e.g., '##GP01-Clustering')
tracker: Optional ConsoleStepTracker instance for logging
Returns:
Dict with 'content', 'input_tokens', 'output_tokens', 'total_tokens',
'model', 'cost', 'error', 'api_id'
Raises:
ValueError: If model is not provided
"""
@@ -164,12 +158,8 @@ class AICore:
logger.info(f" - Model used in request: {active_model}")
tracker.ai_call(f"Using model: {active_model}")
# Use ModelRegistry for validation (database-driven)
from igny8_core.ai.model_registry import ModelRegistry
if not ModelRegistry.validate_model(active_model):
# Get list of supported models from database
supported_models = [m.model_name for m in ModelRegistry.list_models(model_type='text')]
error_msg = f"Model '{active_model}' is not supported. Supported models: {supported_models}"
if active_model not in MODEL_RATES:
error_msg = f"Model '{active_model}' is not supported. Supported models: {list(MODEL_RATES.keys())}"
logger.error(f"[AICore] {error_msg}")
tracker.error('ConfigurationError', error_msg)
return {
@@ -194,16 +184,16 @@ class AICore:
else:
tracker.ai_call("Using text response format")
# Step 4: Validate prompt length and add prompt_prefix
# Step 4: Validate prompt length and add function_id
prompt_length = len(prompt)
tracker.ai_call(f"Prompt length: {prompt_length} characters")
# Add prompt_prefix to prompt if provided (for tracking)
# Format: ##GP01-Clustering or ##CP01-Clustering
# Add function_id to prompt if provided (for tracking)
final_prompt = prompt
if prompt_prefix:
final_prompt = f'{prompt_prefix}\n\n{prompt}'
tracker.ai_call(f"Added prompt prefix: {prompt_prefix}")
if function_id:
function_id_prefix = f'function_id: "{function_id}"\n\n'
final_prompt = function_id_prefix + prompt
tracker.ai_call(f"Added function_id to prompt: {function_id}")
# Step 5: Build request payload
url = 'https://api.openai.com/v1/chat/completions'
@@ -300,13 +290,9 @@ class AICore:
tracker.parse(f"Received {total_tokens} tokens (input: {input_tokens}, output: {output_tokens})")
tracker.parse(f"Content length: {len(content)} characters")
# Step 10: Calculate cost using ModelRegistry (database-driven)
from igny8_core.ai.model_registry import ModelRegistry
cost = float(ModelRegistry.calculate_cost(
active_model,
input_tokens=input_tokens,
output_tokens=output_tokens
))
# Step 10: Calculate cost
rates = MODEL_RATES.get(active_model, {'input': 2.00, 'output': 8.00})
cost = (input_tokens * rates['input'] + output_tokens * rates['output']) / 1_000_000
tracker.parse(f"Cost calculated: ${cost:.6f}")
tracker.done("Request completed successfully")
@@ -381,289 +367,6 @@ class AICore:
'api_id': None,
}
def run_anthropic_request(
self,
prompt: str,
model: str,
max_tokens: int = 8192,
temperature: float = 0.7,
api_key: Optional[str] = None,
function_name: str = 'anthropic_request',
prompt_prefix: Optional[str] = None,
tracker: Optional[ConsoleStepTracker] = None,
system_prompt: Optional[str] = None,
) -> Dict[str, Any]:
"""
Anthropic (Claude) AI request handler with console logging.
Alternative to OpenAI for text generation.
Args:
prompt: Prompt text
model: Claude model name (required - must be provided from IntegrationSettings)
max_tokens: Maximum tokens
temperature: Temperature (0-1)
api_key: Optional API key override
function_name: Function name for logging (e.g., 'cluster_keywords')
prompt_prefix: Optional prefix to add before prompt
tracker: Optional ConsoleStepTracker instance for logging
system_prompt: Optional system prompt for Claude
Returns:
Dict with 'content', 'input_tokens', 'output_tokens', 'total_tokens',
'model', 'cost', 'error', 'api_id'
Raises:
ValueError: If model is not provided
"""
# Use provided tracker or create a new one
if tracker is None:
tracker = ConsoleStepTracker(function_name)
tracker.ai_call("Preparing Anthropic request...")
# Step 1: Validate model is provided
if not model:
error_msg = "Model is required. Ensure IntegrationSettings is configured for the account."
tracker.error('ConfigurationError', error_msg)
logger.error(f"[AICore][Anthropic] {error_msg}")
return {
'content': None,
'error': error_msg,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': None,
'cost': 0.0,
'api_id': None,
}
# Step 2: Validate API key
api_key = api_key or self._anthropic_api_key
if not api_key:
error_msg = 'Anthropic API key not configured'
tracker.error('ConfigurationError', error_msg)
return {
'content': None,
'error': error_msg,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': model,
'cost': 0.0,
'api_id': None,
}
active_model = model
# Debug logging: Show model used
logger.info(f"[AICore][Anthropic] Model Configuration:")
logger.info(f" - Model parameter passed: {model}")
logger.info(f" - Model used in request: {active_model}")
tracker.ai_call(f"Using Anthropic model: {active_model}")
# Add prompt_prefix to prompt if provided (for tracking)
final_prompt = prompt
if prompt_prefix:
final_prompt = f'{prompt_prefix}\n\n{prompt}'
tracker.ai_call(f"Added prompt prefix: {prompt_prefix}")
# Step 5: Build request payload using Anthropic Messages API
url = 'https://api.anthropic.com/v1/messages'
headers = {
'x-api-key': api_key,
'anthropic-version': '2023-06-01',
'Content-Type': 'application/json',
}
body_data = {
'model': active_model,
'max_tokens': max_tokens,
'messages': [{'role': 'user', 'content': final_prompt}],
}
# Only add temperature if it's less than 1.0 (Claude's default)
if temperature < 1.0:
body_data['temperature'] = temperature
# Add system prompt if provided
if system_prompt:
body_data['system'] = system_prompt
tracker.ai_call(f"Request payload prepared (model={active_model}, max_tokens={max_tokens}, temp={temperature})")
# Step 6: Send request
tracker.ai_call("Sending request to Anthropic API...")
request_start = time.time()
try:
response = requests.post(url, headers=headers, json=body_data, timeout=180)
request_duration = time.time() - request_start
tracker.ai_call(f"Received response in {request_duration:.2f}s (status={response.status_code})")
# Step 7: Validate HTTP response
if response.status_code != 200:
error_data = response.json() if response.headers.get('content-type', '').startswith('application/json') else {}
error_message = f"HTTP {response.status_code} error"
if isinstance(error_data, dict) and 'error' in error_data:
if isinstance(error_data['error'], dict) and 'message' in error_data['error']:
error_message += f": {error_data['error']['message']}"
# Check for rate limit
if response.status_code == 429:
retry_after = response.headers.get('retry-after', '60')
tracker.rate_limit(retry_after)
error_message += f" (Rate limit - retry after {retry_after}s)"
else:
tracker.error('HTTPError', error_message)
logger.error(f"Anthropic API HTTP error {response.status_code}: {error_message}")
return {
'content': None,
'error': error_message,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': active_model,
'cost': 0.0,
'api_id': None,
}
# Step 8: Parse response JSON
try:
data = response.json()
except json.JSONDecodeError as e:
error_msg = f'Failed to parse JSON response: {str(e)}'
tracker.malformed_json(str(e))
logger.error(error_msg)
return {
'content': None,
'error': error_msg,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': active_model,
'cost': 0.0,
'api_id': None,
}
api_id = data.get('id')
# Step 9: Extract content (Anthropic format)
# Claude returns content as array: [{"type": "text", "text": "..."}]
if 'content' in data and len(data['content']) > 0:
# Extract text from first content block
content_blocks = data['content']
content = ''
for block in content_blocks:
if block.get('type') == 'text':
content += block.get('text', '')
usage = data.get('usage', {})
input_tokens = usage.get('input_tokens', 0)
output_tokens = usage.get('output_tokens', 0)
total_tokens = input_tokens + output_tokens
tracker.parse(f"Received {total_tokens} tokens (input: {input_tokens}, output: {output_tokens})")
tracker.parse(f"Content length: {len(content)} characters")
# Step 10: Calculate cost using ModelRegistry (with fallback)
# Claude pricing as of 2024:
# claude-3-5-sonnet: $3/1M input, $15/1M output
# claude-3-opus: $15/1M input, $75/1M output
# claude-3-haiku: $0.25/1M input, $1.25/1M output
from igny8_core.ai.model_registry import ModelRegistry
cost = float(ModelRegistry.calculate_cost(
active_model,
input_tokens=input_tokens,
output_tokens=output_tokens
))
# Fallback to hardcoded rates if ModelRegistry returns 0
if cost == 0:
anthropic_rates = {
'claude-3-5-sonnet-20241022': {'input': 3.00, 'output': 15.00},
'claude-3-5-haiku-20241022': {'input': 1.00, 'output': 5.00},
'claude-3-opus-20240229': {'input': 15.00, 'output': 75.00},
'claude-3-sonnet-20240229': {'input': 3.00, 'output': 15.00},
'claude-3-haiku-20240307': {'input': 0.25, 'output': 1.25},
}
rates = anthropic_rates.get(active_model, {'input': 3.00, 'output': 15.00})
cost = (input_tokens * rates['input'] + output_tokens * rates['output']) / 1_000_000
tracker.parse(f"Cost calculated: ${cost:.6f}")
tracker.done("Anthropic request completed successfully")
return {
'content': content,
'input_tokens': input_tokens,
'output_tokens': output_tokens,
'total_tokens': total_tokens,
'model': active_model,
'cost': cost,
'error': None,
'api_id': api_id,
'duration': request_duration,
}
else:
error_msg = 'No content in Anthropic response'
tracker.error('EmptyResponse', error_msg)
logger.error(error_msg)
return {
'content': None,
'error': error_msg,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': active_model,
'cost': 0.0,
'api_id': api_id,
}
except requests.exceptions.Timeout:
error_msg = 'Request timeout (180s exceeded)'
tracker.timeout(180)
logger.error(error_msg)
return {
'content': None,
'error': error_msg,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': active_model,
'cost': 0.0,
'api_id': None,
}
except requests.exceptions.RequestException as e:
error_msg = f'Request exception: {str(e)}'
tracker.error('RequestException', error_msg, e)
logger.error(f"Anthropic API error: {error_msg}", exc_info=True)
return {
'content': None,
'error': error_msg,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': active_model,
'cost': 0.0,
'api_id': None,
}
except Exception as e:
error_msg = f'Unexpected error: {str(e)}'
logger.error(f"[AI][{function_name}][Anthropic][Error] {error_msg}", exc_info=True)
if tracker:
tracker.error('UnexpectedError', error_msg, e)
return {
'content': None,
'error': error_msg,
'input_tokens': 0,
'output_tokens': 0,
'total_tokens': 0,
'model': active_model,
'cost': 0.0,
'api_id': None,
}
def extract_json(self, response_text: str) -> Optional[Dict]:
"""
Extract JSON from response text.
@@ -713,8 +416,7 @@ class AICore:
n: int = 1,
api_key: Optional[str] = None,
negative_prompt: Optional[str] = None,
function_name: str = 'generate_image',
style: Optional[str] = None
function_name: str = 'generate_image'
) -> Dict[str, Any]:
"""
Generate image using AI with console logging.
@@ -735,11 +437,9 @@ class AICore:
print(f"[AI][{function_name}] Step 1: Preparing image generation request...")
if provider == 'openai':
return self._generate_image_openai(prompt, model, size, n, api_key, negative_prompt, function_name, style)
return self._generate_image_openai(prompt, model, size, n, api_key, negative_prompt, function_name)
elif provider == 'runware':
return self._generate_image_runware(prompt, model, size, n, api_key, negative_prompt, function_name)
elif provider == 'bria':
return self._generate_image_bria(prompt, model, size, n, api_key, negative_prompt, function_name)
else:
error_msg = f'Unknown provider: {provider}'
print(f"[AI][{function_name}][Error] {error_msg}")
@@ -759,15 +459,9 @@ class AICore:
n: int,
api_key: Optional[str],
negative_prompt: Optional[str],
function_name: str,
style: Optional[str] = None
function_name: str
) -> Dict[str, Any]:
"""Generate image using OpenAI DALL-E
Args:
style: For DALL-E 3 only. 'vivid' (hyper-real/dramatic) or 'natural' (more realistic).
Default is 'natural' for realistic photos.
"""
"""Generate image using OpenAI DALL-E"""
print(f"[AI][{function_name}] Provider: OpenAI")
# Determine character limit based on model
@@ -852,15 +546,6 @@ class AICore:
'size': size
}
# For DALL-E 3, add style parameter
# 'natural' = more realistic photos, 'vivid' = hyper-real/dramatic
if model == 'dall-e-3':
# Default to 'natural' for realistic images, but respect user preference
dalle_style = style if style in ['vivid', 'natural'] else 'natural'
data['style'] = dalle_style
data['quality'] = 'hd' # Always use HD quality for best results
print(f"[AI][{function_name}] DALL-E 3 style: {dalle_style}, quality: hd")
if negative_prompt:
# Note: OpenAI DALL-E doesn't support negative_prompt in API, but we log it
print(f"[AI][{function_name}] Note: Negative prompt provided but OpenAI DALL-E doesn't support it")
@@ -893,9 +578,7 @@ class AICore:
image_url = image_data.get('url')
revised_prompt = image_data.get('revised_prompt')
# Use ModelRegistry for image cost (database-driven)
from igny8_core.ai.model_registry import ModelRegistry
cost = float(ModelRegistry.calculate_cost(model, num_images=n))
cost = IMAGE_MODEL_RATES.get(model, 0.040) * n
print(f"[AI][{function_name}] Step 5: Image generated successfully")
print(f"[AI][{function_name}] Step 6: Cost: ${cost:.4f}")
print(f"[AI][{function_name}][Success] Image generation completed")
@@ -987,57 +670,24 @@ class AICore:
# Runware uses array payload with authentication task first, then imageInference
# Reference: image-generation.php lines 79-97
import uuid
# Build base inference task
inference_task = {
'taskType': 'imageInference',
'taskUUID': str(uuid.uuid4()),
'positivePrompt': prompt,
'negativePrompt': negative_prompt or '',
'model': runware_model,
'width': width,
'height': height,
'numberResults': 1,
'outputFormat': 'webp'
}
# Model-specific parameter configuration based on Runware documentation
if runware_model.startswith('bria:'):
# Bria 3.2 (bria:10@1) - Commercial-ready, steps 20-50 (API requires minimum 20)
inference_task['steps'] = 20
# Enhanced negative prompt for Bria to prevent disfigured images
enhanced_negative = (negative_prompt or '') + ', disfigured, deformed, bad anatomy, wrong anatomy, extra limbs, missing limbs, floating limbs, mutated hands, extra fingers, missing fingers, fused fingers, poorly drawn hands, poorly drawn face, mutation, ugly, blurry, low quality, worst quality, jpeg artifacts, watermark, text, signature'
inference_task['negativePrompt'] = enhanced_negative
# Bria provider settings for enhanced quality
inference_task['providerSettings'] = {
'bria': {
'promptEnhancement': True,
'enhanceImage': True,
'medium': 'photography',
'contentModeration': True
}
}
print(f"[AI][{function_name}] Using Bria 3.2 config: steps=20, enhanced negative prompt, providerSettings enabled")
elif runware_model.startswith('google:'):
# Nano Banana (google:4@2) - Premium quality
# Google models use 'resolution' parameter INSTEAD of width/height
# Remove width/height and use resolution only
del inference_task['width']
del inference_task['height']
inference_task['resolution'] = '1k' # Use 1K tier for optimal speed/quality
print(f"[AI][{function_name}] Using Nano Banana config: resolution=1k (no width/height)")
else:
# Hi Dream Full (runware:97@1) - General diffusion, steps 20, CFGScale 7
inference_task['steps'] = 20
inference_task['CFGScale'] = 7
print(f"[AI][{function_name}] Using Hi Dream Full config: steps=20, CFGScale=7")
payload = [
{
'taskType': 'authentication',
'apiKey': api_key
},
inference_task
{
'taskType': 'imageInference',
'taskUUID': str(uuid.uuid4()),
'positivePrompt': prompt,
'negativePrompt': negative_prompt or '',
'model': runware_model,
'width': width,
'height': height,
'steps': 30,
'CFGScale': 7.5,
'numberResults': 1,
'outputFormat': 'webp'
}
]
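The two-task array payload built above can be sketched in isolation. `build_runware_payload` and its 1024x1024/webp defaults are illustrative assumptions taken from the surrounding code, not from the Runware docs:

```python
import uuid

# Sketch of the Runware array payload shape used above: an
# authentication task first, then the imageInference task.
def build_runware_payload(api_key: str, prompt: str, model: str,
                          width: int = 1024, height: int = 1024) -> list:
    return [
        {'taskType': 'authentication', 'apiKey': api_key},
        {
            'taskType': 'imageInference',
            'taskUUID': str(uuid.uuid4()),
            'positivePrompt': prompt,
            'negativePrompt': '',
            'model': model,
            'width': width,
            'height': height,
            'numberResults': 1,
            'outputFormat': 'webp',
        },
    ]

payload = build_runware_payload('key', 'a red fox', 'runware:97@1')
assert payload[0]['taskType'] == 'authentication'
```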
request_start = time.time()
@@ -1047,29 +697,7 @@ class AICore:
print(f"[AI][{function_name}] Step 4: Received response in {request_duration:.2f}s (status={response.status_code})")
if response.status_code != 200:
# Log the full error response for debugging
try:
error_body = response.json()
print(f"[AI][{function_name}][Error] Runware error response: {error_body}")
logger.error(f"[AI][{function_name}] Runware HTTP {response.status_code} error body: {error_body}")
# Extract specific error message from Runware response
error_detail = None
if isinstance(error_body, list):
for item in error_body:
if isinstance(item, dict) and 'errors' in item:
errors = item['errors']
if isinstance(errors, list) and len(errors) > 0:
err = errors[0]
error_detail = err.get('message') or err.get('error') or str(err)
break
elif isinstance(error_body, dict):
error_detail = error_body.get('message') or error_body.get('error') or str(error_body)
error_msg = f"HTTP {response.status_code}: {error_detail}" if error_detail else f"HTTP {response.status_code} error"
except Exception as e:
error_msg = f"HTTP {response.status_code} error (could not parse response: {e})"
error_msg = f"HTTP {response.status_code} error"
print(f"[AI][{function_name}][Error] {error_msg}")
return {
'url': None,
@@ -1185,178 +813,16 @@ class AICore:
'error': error_msg,
}
def _generate_image_bria(
self,
prompt: str,
model: Optional[str],
size: str,
n: int,
api_key: Optional[str],
negative_prompt: Optional[str],
function_name: str
) -> Dict[str, Any]:
"""
Generate image using Bria AI.
Bria API Reference: https://docs.bria.ai/reference/text-to-image
"""
print(f"[AI][{function_name}] Provider: Bria AI")
api_key = api_key or self._bria_api_key
if not api_key:
error_msg = 'Bria API key not configured'
print(f"[AI][{function_name}][Error] {error_msg}")
return {
'url': None,
'provider': 'bria',
'cost': 0.0,
'error': error_msg,
}
bria_model = model or 'bria-2.3'
print(f"[AI][{function_name}] Step 2: Using model: {bria_model}, size: {size}")
# Parse size
try:
width, height = map(int, size.split('x'))
except ValueError:
error_msg = f"Invalid size format: {size}. Expected format: WIDTHxHEIGHT"
print(f"[AI][{function_name}][Error] {error_msg}")
return {
'url': None,
'provider': 'bria',
'cost': 0.0,
'error': error_msg,
}
# Bria API endpoint
url = 'https://engine.prod.bria-api.com/v1/text-to-image/base'
headers = {
'api_token': api_key,
'Content-Type': 'application/json'
}
payload = {
'prompt': prompt,
'num_results': n,
'sync': True, # Wait for result
'model_version': bria_model.replace('bria-', ''), # e.g., '2.3'
}
# Add negative prompt if provided
if negative_prompt:
payload['negative_prompt'] = negative_prompt
# Add size constraints if not default
if width and height:
# Bria uses aspect ratio or fixed sizes
payload['width'] = width
payload['height'] = height
print(f"[AI][{function_name}] Step 3: Sending request to Bria API...")
request_start = time.time()
try:
response = requests.post(url, json=payload, headers=headers, timeout=150)
request_duration = time.time() - request_start
print(f"[AI][{function_name}] Step 4: Received response in {request_duration:.2f}s (status={response.status_code})")
if response.status_code != 200:
error_msg = f"HTTP {response.status_code} error: {response.text[:200]}"
print(f"[AI][{function_name}][Error] {error_msg}")
return {
'url': None,
'provider': 'bria',
'cost': 0.0,
'error': error_msg,
}
body = response.json()
print(f"[AI][{function_name}] Bria response keys: {list(body.keys()) if isinstance(body, dict) else type(body)}")
# Bria returns { "result": [ { "urls": ["..."] } ] }
image_url = None
error_msg = None
if isinstance(body, dict):
if 'result' in body and isinstance(body['result'], list) and len(body['result']) > 0:
first_result = body['result'][0]
if 'urls' in first_result and isinstance(first_result['urls'], list) and len(first_result['urls']) > 0:
image_url = first_result['urls'][0]
elif 'url' in first_result:
image_url = first_result['url']
elif 'error' in body:
error_msg = body['error']
elif 'message' in body:
error_msg = body['message']
if error_msg:
print(f"[AI][{function_name}][Error] Bria API error: {error_msg}")
return {
'url': None,
'provider': 'bria',
'cost': 0.0,
'error': error_msg,
}
if image_url:
# Cost based on model
cost_per_image = {
'bria-2.3': 0.015,
'bria-2.3-fast': 0.010,
'bria-2.2': 0.012,
}.get(bria_model, 0.015)
cost = cost_per_image * n
print(f"[AI][{function_name}] Step 5: Image generated successfully")
print(f"[AI][{function_name}] Step 6: Cost: ${cost:.4f}")
print(f"[AI][{function_name}][Success] Image generation completed")
return {
'url': image_url,
'provider': 'bria',
'cost': cost,
'error': None,
}
else:
error_msg = 'No image data in Bria response'
print(f"[AI][{function_name}][Error] {error_msg}")
logger.error(f"[AI][{function_name}] Full Bria response: {json.dumps(body, indent=2) if isinstance(body, dict) else str(body)}")
return {
'url': None,
'provider': 'bria',
'cost': 0.0,
'error': error_msg,
}
except requests.exceptions.Timeout:
error_msg = 'Request timeout (150s exceeded)'
print(f"[AI][{function_name}][Error] {error_msg}")
return {
'url': None,
'provider': 'bria',
'cost': 0.0,
'error': error_msg,
}
except Exception as e:
error_msg = f'Unexpected error: {str(e)}'
print(f"[AI][{function_name}][Error] {error_msg}")
logger.error(error_msg, exc_info=True)
return {
'url': None,
'provider': 'bria',
'cost': 0.0,
'error': error_msg,
}
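The response parsing in the removed method above handles the Bria shape `{"result": [{"urls": ["..."]}]}` with a `"url"` fallback. A minimal standalone sketch (`extract_bria_url` is illustrative, not part of AICore):

```python
from typing import Optional

# Hedged sketch of pulling the first image URL out of a Bria response,
# mirroring the branching in the code above.
def extract_bria_url(body) -> Optional[str]:
    if isinstance(body, dict):
        results = body.get('result')
        if isinstance(results, list) and results:
            first = results[0]
            urls = first.get('urls')
            if isinstance(urls, list) and urls:
                return urls[0]
            return first.get('url')  # single-url fallback
    return None

assert extract_bria_url({'result': [{'urls': ['https://x/img.webp']}]}) == 'https://x/img.webp'
```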
def calculate_cost(self, model: str, input_tokens: int, output_tokens: int, model_type: str = 'text') -> float:
"""Calculate cost for API call using ModelRegistry (database-driven)"""
from igny8_core.ai.model_registry import ModelRegistry
"""Calculate cost for API call"""
if model_type == 'text':
return float(ModelRegistry.calculate_cost(model, input_tokens=input_tokens, output_tokens=output_tokens))
rates = MODEL_RATES.get(model, {'input': 2.00, 'output': 8.00})
input_cost = (input_tokens / 1_000_000) * rates['input']
output_cost = (output_tokens / 1_000_000) * rates['output']
return input_cost + output_cost
elif model_type == 'image':
return float(ModelRegistry.calculate_cost(model, num_images=1))
rate = IMAGE_MODEL_RATES.get(model, 0.040)
return rate * 1
return 0.0
# Legacy method names for backward compatibility


@@ -1,17 +1,7 @@
"""
AI Constants - Configuration constants for AI operations
NOTE: Model pricing (MODEL_RATES, IMAGE_MODEL_RATES) has been moved to the database
via AIModelConfig. Use ModelRegistry to get model pricing:
from igny8_core.ai.model_registry import ModelRegistry
cost = ModelRegistry.calculate_cost(model_id, input_tokens=N, output_tokens=N)
The constants below are DEPRECATED and kept only for reference/backward compatibility.
Do NOT use MODEL_RATES or IMAGE_MODEL_RATES in new code.
AI Constants - Model pricing, valid models, and configuration constants
"""
# DEPRECATED - Use AIModelConfig database table instead
# Model pricing (per 1M tokens) - kept for reference only
# Model pricing (per 1M tokens) - EXACT from reference plugin model-rates-config.php
MODEL_RATES = {
'gpt-4.1': {'input': 2.00, 'output': 8.00},
'gpt-4o-mini': {'input': 0.15, 'output': 0.60},
@@ -20,8 +10,7 @@ MODEL_RATES = {
'gpt-5.2': {'input': 1.75, 'output': 14.00},
}
# DEPRECATED - Use AIModelConfig database table instead
# Image model pricing (per image) - kept for reference only
# Image model pricing (per image) - EXACT from reference plugin
IMAGE_MODEL_RATES = {
'dall-e-3': 0.040,
'dall-e-2': 0.020,
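The deprecated tables above encode per-1M-token rates for text and flat per-image rates; the arithmetic they imply, worked through with values copied from the tables:

```python
# Worked example of the deprecated rate math (per 1M tokens for text,
# flat per-image for images).
MODEL_RATES = {'gpt-4.1': {'input': 2.00, 'output': 8.00}}
IMAGE_MODEL_RATES = {'dall-e-3': 0.040}

rates = MODEL_RATES['gpt-4.1']
text_cost = (1000 * rates['input'] + 500 * rates['output']) / 1_000_000
image_cost = IMAGE_MODEL_RATES['dall-e-3'] * 2  # two images

# 1000 * $2/1M + 500 * $8/1M = $0.002 + $0.004 = $0.006
assert abs(text_cost - 0.006) < 1e-12
assert abs(image_cost - 0.080) < 1e-12
```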


@@ -306,13 +306,12 @@ class AIEngine:
ai_core = AICore(account=self.account)
function_name = fn.get_name()
# Generate prompt prefix for tracking (e.g., ##GP01-Clustering or ##CP01-Clustering)
# This replaces function_id and indicates whether prompt is global or custom
from igny8_core.ai.prompts import get_prompt_prefix_for_function
prompt_prefix = get_prompt_prefix_for_function(function_name, account=self.account)
logger.info(f"[AIEngine] Using prompt prefix: {prompt_prefix}")
# Generate function_id for tracking (ai-{function_name}-01)
# Normalize underscores to hyphens to match frontend tracking IDs
function_id_base = function_name.replace('_', '-')
function_id = f"ai-{function_id_base}-01-desktop"
# Get model config from settings (requires account)
# This will raise ValueError if IntegrationSettings not configured
try:
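The function_id construction added in the hunk above reduces to a one-line transform, sketched here as a hypothetical standalone helper:

```python
# Hypothetical standalone version of the function_id lines above:
# underscores become hyphens so the id matches frontend tracking ids.
def build_function_id(function_name: str) -> str:
    return f"ai-{function_name.replace('_', '-')}-01-desktop"

assert build_function_id('auto_cluster') == 'ai-auto-cluster-01-desktop'
```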
@@ -350,7 +349,7 @@ class AIEngine:
temperature=model_config.get('temperature'),
response_format=model_config.get('response_format'),
function_name=function_name,
prompt_prefix=prompt_prefix # Pass prompt prefix for tracking (replaces function_id)
function_id=function_id # Pass function_id for tracking
)
except Exception as e:
error_msg = f"AI call failed: {str(e)}"
@@ -482,9 +481,6 @@ class AIEngine:
# Log to database
self._log_to_database(fn, payload, parsed, save_result)
# Create notification for successful completion
self._create_success_notification(function_name, save_result, payload)
return {
'success': True,
**save_result,
@@ -528,9 +524,6 @@ class AIEngine:
self._log_to_database(fn, None, None, None, error=error)
# Create notification for failure
self._create_failure_notification(function_name, error)
return {
'success': False,
'error': error,
@@ -658,104 +651,4 @@ class AIEngine:
'generate_site_structure': 'site_blueprint',
}
return mapping.get(function_name, 'unknown')
def _create_success_notification(self, function_name: str, save_result: dict, payload: dict):
"""Create notification for successful AI task completion"""
if not self.account:
return
# Lazy import to avoid circular dependency and Django app loading issues
from igny8_core.business.notifications.services import NotificationService
# Get site from payload if available
site = None
site_id = payload.get('site_id')
if site_id:
try:
from igny8_core.auth.models import Site
site = Site.objects.get(id=site_id, account=self.account)
except Exception:
pass
try:
# Map function to appropriate notification method
if function_name == 'auto_cluster':
NotificationService.notify_clustering_complete(
account=self.account,
site=site,
cluster_count=save_result.get('clusters_created', 0),
keyword_count=save_result.get('keywords_updated', 0)
)
elif function_name == 'generate_ideas':
NotificationService.notify_ideas_complete(
account=self.account,
site=site,
idea_count=save_result.get('count', 0),
cluster_count=len(payload.get('ids', []))
)
elif function_name == 'generate_content':
NotificationService.notify_content_complete(
account=self.account,
site=site,
article_count=save_result.get('count', 0),
word_count=save_result.get('word_count', 0)
)
elif function_name == 'generate_image_prompts':
NotificationService.notify_prompts_complete(
account=self.account,
site=site,
prompt_count=save_result.get('count', 0)
)
elif function_name == 'generate_images':
NotificationService.notify_images_complete(
account=self.account,
site=site,
image_count=save_result.get('count', 0)
)
logger.info(f"[AIEngine] Created success notification for {function_name}")
except Exception as e:
# Don't fail the task if notification creation fails
logger.warning(f"[AIEngine] Failed to create success notification: {e}", exc_info=True)
def _create_failure_notification(self, function_name: str, error: str):
"""Create notification for failed AI task"""
if not self.account:
return
# Lazy import to avoid circular dependency and Django app loading issues
from igny8_core.business.notifications.services import NotificationService
try:
# Map function to appropriate failure notification method
if function_name == 'auto_cluster':
NotificationService.notify_clustering_failed(
account=self.account,
error=error
)
elif function_name == 'generate_ideas':
NotificationService.notify_ideas_failed(
account=self.account,
error=error
)
elif function_name == 'generate_content':
NotificationService.notify_content_failed(
account=self.account,
error=error
)
elif function_name == 'generate_image_prompts':
NotificationService.notify_prompts_failed(
account=self.account,
error=error
)
elif function_name == 'generate_images':
NotificationService.notify_images_failed(
account=self.account,
error=error
)
logger.info(f"[AIEngine] Created failure notification for {function_name}")
except Exception as e:
# Don't fail the task if notification creation fails
logger.warning(f"[AIEngine] Failed to create failure notification: {e}", exc_info=True)


@@ -219,12 +219,32 @@ class GenerateImagePromptsFunction(BaseAIFunction):
# Helper methods
def _get_max_in_article_images(self, account) -> int:
"""
Get max_in_article_images from AISettings (with account override).
Get max_in_article_images from settings.
Uses account's IntegrationSettings override, or GlobalIntegrationSettings.
"""
from igny8_core.modules.system.ai_settings import AISettings
from igny8_core.modules.system.models import IntegrationSettings
from igny8_core.modules.system.global_settings_models import GlobalIntegrationSettings
max_images = AISettings.get_effective_max_images(account)
logger.info(f"Using max_in_article_images={max_images} for account {account.id}")
# Try account-specific override first
try:
settings = IntegrationSettings.objects.get(
account=account,
integration_type='image_generation',
is_active=True
)
max_images = settings.config.get('max_in_article_images')
if max_images is not None:
max_images = int(max_images)
logger.info(f"Using max_in_article_images={max_images} from account {account.id} IntegrationSettings override")
return max_images
except IntegrationSettings.DoesNotExist:
logger.debug(f"No IntegrationSettings override for account {account.id}, using GlobalIntegrationSettings")
# Use GlobalIntegrationSettings default
global_settings = GlobalIntegrationSettings.get_instance()
max_images = global_settings.max_in_article_images
logger.info(f"Using max_in_article_images={max_images} from GlobalIntegrationSettings (account {account.id})")
return max_images
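The override-then-global resolution pattern in the hunk above can be sketched without the Django models. `resolve_setting` and `GLOBAL_DEFAULTS` are illustrative stand-ins for IntegrationSettings config and GlobalIntegrationSettings:

```python
# Hedged sketch of the settings-resolution pattern above: a per-account
# override wins when present and non-None, otherwise the platform default.
GLOBAL_DEFAULTS = {'max_in_article_images': 5}

def resolve_setting(account_overrides: dict, key: str) -> int:
    value = account_overrides.get(key)
    if value is not None:
        return int(value)  # config values may arrive as strings
    return GLOBAL_DEFAULTS[key]

assert resolve_setting({'max_in_article_images': '3'}, 'max_in_article_images') == 3
assert resolve_setting({}, 'max_in_article_images') == 5
```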
def _extract_content_elements(self, content: Content, max_images: int) -> Dict:


@@ -67,33 +67,42 @@ class GenerateImagesFunction(BaseAIFunction):
if not tasks:
raise ValueError("No tasks found")
# Get image generation settings from AISettings (with account overrides)
from igny8_core.modules.system.ai_settings import AISettings
from igny8_core.ai.model_registry import ModelRegistry
# Get image generation settings
# Try account-specific override, otherwise use GlobalIntegrationSettings
from igny8_core.modules.system.models import IntegrationSettings
from igny8_core.modules.system.global_settings_models import GlobalIntegrationSettings
# Get effective settings (AISettings + AccountSettings overrides)
image_style = AISettings.get_effective_image_style(account)
max_images = AISettings.get_effective_max_images(account)
image_settings = {}
try:
integration = IntegrationSettings.objects.get(
account=account,
integration_type='image_generation',
is_active=True
)
image_settings = integration.config or {}
logger.info(f"Using image settings from account {account.id} IntegrationSettings override")
except IntegrationSettings.DoesNotExist:
logger.info(f"No IntegrationSettings override for account {account.id}, using GlobalIntegrationSettings")
# Get default image model and provider from database
default_model = ModelRegistry.get_default_model('image')
if default_model:
model_config = ModelRegistry.get_model(default_model)
provider = model_config.provider if model_config else 'openai'
model = default_model
# Use GlobalIntegrationSettings for missing values
global_settings = GlobalIntegrationSettings.get_instance()
# Extract settings with defaults from global settings
provider = image_settings.get('provider') or image_settings.get('service') or global_settings.default_image_service
if provider == 'runware':
model = image_settings.get('model') or image_settings.get('runwareModel') or global_settings.runware_model
else:
provider = 'openai'
model = 'dall-e-3'
logger.info(f"Using image settings: provider={provider}, model={model}, style={image_style}, max={max_images}")
model = image_settings.get('model') or global_settings.dalle_model
return {
'tasks': tasks,
'account': account,
'provider': provider,
'model': model,
'image_type': image_style,
'max_in_article_images': max_images,
'image_type': image_settings.get('image_type') or global_settings.image_style,
'max_in_article_images': int(image_settings.get('max_in_article_images') or global_settings.max_in_article_images),
'desktop_enabled': image_settings.get('desktop_enabled', True),
'mobile_enabled': image_settings.get('mobile_enabled', True),
}
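The provider/model selection above resolves through a chain of `or` fallbacks where the first non-empty value wins. A minimal sketch of that chain; the setting keys mirror the code above, but the values and global defaults here are illustrative only:

```python
# Hypothetical account override config and global defaults (illustrative values).
image_settings = {"provider": None, "runwareModel": "runware:97@1"}
GLOBAL_DEFAULT_SERVICE = "runware"
GLOBAL_RUNWARE_MODEL = "runware:100@1"

# First truthy value wins: explicit provider -> legacy 'service' key -> global default.
provider = (image_settings.get("provider")
            or image_settings.get("service")
            or GLOBAL_DEFAULT_SERVICE)

if provider == "runware":
    # Same idea for the model: explicit key -> legacy key -> global default.
    model = (image_settings.get("model")
             or image_settings.get("runwareModel")
             or GLOBAL_RUNWARE_MODEL)
else:
    # Anything unrecognized collapses to the OpenAI defaults.
    provider, model = "openai", "dall-e-3"

print(provider, model)  # runware runware:97@1
```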
def build_prompt(self, data: Dict, account=None) -> Dict:

View File

@@ -1,377 +0,0 @@
"""
Model Registry Service
Central registry for AI model configurations with caching.
This service provides:
- Database-driven model configuration (from AIModelConfig)
- Integration provider API key retrieval (from IntegrationProvider)
- Caching for performance
- Cost calculation methods
Usage:
from igny8_core.ai.model_registry import ModelRegistry
# Get model config
model = ModelRegistry.get_model('gpt-4o-mini')
# Get rate
input_rate = ModelRegistry.get_rate('gpt-4o-mini', 'input')
# Calculate cost
cost = ModelRegistry.calculate_cost('gpt-4o-mini', input_tokens=1000, output_tokens=500)
# Get API key for a provider
api_key = ModelRegistry.get_api_key('openai')
"""
import logging
from decimal import Decimal
from typing import Optional, Dict, Any
from django.core.cache import cache
logger = logging.getLogger(__name__)
# Cache TTL in seconds (5 minutes)
MODEL_CACHE_TTL = 300
# Cache key prefix
CACHE_KEY_PREFIX = 'ai_model_'
PROVIDER_CACHE_PREFIX = 'provider_'
class ModelRegistry:
"""
Central registry for AI model configurations with caching.
Uses AIModelConfig from database for model configs.
Uses IntegrationProvider for API keys.
"""
@classmethod
def _get_cache_key(cls, model_id: str) -> str:
"""Generate cache key for model"""
return f"{CACHE_KEY_PREFIX}{model_id}"
@classmethod
def _get_provider_cache_key(cls, provider_id: str) -> str:
"""Generate cache key for provider"""
return f"{PROVIDER_CACHE_PREFIX}{provider_id}"
@classmethod
def _get_from_db(cls, model_id: str) -> Optional[Any]:
"""Get model config from database"""
try:
from igny8_core.business.billing.models import AIModelConfig
return AIModelConfig.objects.filter(
model_name=model_id,
is_active=True
).first()
except Exception as e:
logger.debug(f"Could not fetch model {model_id} from DB: {e}")
return None
@classmethod
def get_model(cls, model_id: str) -> Optional[Any]:
"""
Get model configuration by model_id.
Order of lookup:
1. Cache
2. Database (AIModelConfig)
Args:
model_id: The model identifier (e.g., 'gpt-4o-mini', 'dall-e-3')
Returns:
AIModelConfig instance, None if not found
"""
cache_key = cls._get_cache_key(model_id)
# Try cache first
cached = cache.get(cache_key)
if cached is not None:
return cached
# Try database
model_config = cls._get_from_db(model_id)
if model_config:
cache.set(cache_key, model_config, MODEL_CACHE_TTL)
return model_config
logger.warning(f"Model {model_id} not found in database")
return None
@classmethod
def get_rate(cls, model_id: str, rate_type: str) -> Decimal:
"""
Get specific rate for a model.
Args:
model_id: The model identifier
rate_type: 'input', 'output' (for text models) or 'image' (for image models)
Returns:
Decimal rate value, 0 if not found
"""
model = cls.get_model(model_id)
if not model:
return Decimal('0')
# Handle AIModelConfig instance
if rate_type == 'input':
return model.input_cost_per_1m or Decimal('0')
elif rate_type == 'output':
return model.output_cost_per_1m or Decimal('0')
elif rate_type == 'image':
return model.cost_per_image or Decimal('0')
return Decimal('0')
@classmethod
def calculate_cost(cls, model_id: str, input_tokens: int = 0, output_tokens: int = 0, num_images: int = 0) -> Decimal:
"""
Calculate cost for model usage.
For text models: Uses input/output token counts
For image models: Uses num_images
Args:
model_id: The model identifier
input_tokens: Number of input tokens (for text models)
output_tokens: Number of output tokens (for text models)
num_images: Number of images (for image models)
Returns:
Decimal cost in USD
"""
model = cls.get_model(model_id)
if not model:
return Decimal('0')
# Get model type from AIModelConfig
model_type = model.model_type
if model_type == 'text':
input_rate = cls.get_rate(model_id, 'input')
output_rate = cls.get_rate(model_id, 'output')
cost = (
(Decimal(input_tokens) * input_rate) +
(Decimal(output_tokens) * output_rate)
) / Decimal('1000000')
return cost
elif model_type == 'image':
image_rate = cls.get_rate(model_id, 'image')
return image_rate * Decimal(num_images)
return Decimal('0')
@classmethod
def get_default_model(cls, model_type: str = 'text') -> Optional[str]:
"""
Get the default model for a given type from database.
Args:
model_type: 'text' or 'image'
Returns:
model_id string or None
"""
try:
from igny8_core.business.billing.models import AIModelConfig
default = AIModelConfig.objects.filter(
model_type=model_type,
is_active=True,
is_default=True
).first()
if default:
return default.model_name
# If no default is set, return first active model of this type
first_active = AIModelConfig.objects.filter(
model_type=model_type,
is_active=True
).order_by('model_name').first()
if first_active:
return first_active.model_name
except Exception as e:
logger.error(f"Could not get default {model_type} model from DB: {e}")
return None
@classmethod
def list_models(cls, model_type: Optional[str] = None, provider: Optional[str] = None) -> list:
"""
List all available models from database, optionally filtered by type or provider.
Args:
model_type: Filter by 'text', 'image', or 'embedding'
provider: Filter by 'openai', 'anthropic', 'runware', etc.
Returns:
List of AIModelConfig instances
"""
try:
from igny8_core.business.billing.models import AIModelConfig
queryset = AIModelConfig.objects.filter(is_active=True)
if model_type:
queryset = queryset.filter(model_type=model_type)
if provider:
queryset = queryset.filter(provider=provider)
return list(queryset.order_by('model_name'))
except Exception as e:
logger.error(f"Could not list models from DB: {e}")
return []
@classmethod
def clear_cache(cls, model_id: Optional[str] = None):
"""
Clear model cache.
Args:
model_id: Clear specific model cache, or all if None
"""
if model_id:
cache.delete(cls._get_cache_key(model_id))
else:
# Clear all model caches - use pattern if available
try:
from django.core.cache import caches
default_cache = caches['default']
if hasattr(default_cache, 'delete_pattern'):
default_cache.delete_pattern(f"{CACHE_KEY_PREFIX}*")
else:
# Fallback: clear all known models from DB
from igny8_core.business.billing.models import AIModelConfig
for model in AIModelConfig.objects.values_list('model_name', flat=True):
cache.delete(cls._get_cache_key(model))
except Exception as e:
logger.warning(f"Could not clear all model caches: {e}")
@classmethod
def validate_model(cls, model_id: str) -> bool:
"""
Check if a model ID is valid and active.
Args:
model_id: The model identifier to validate
Returns:
True if model exists and is active, False otherwise
"""
model = cls.get_model(model_id)
if not model:
return False
return model.is_active
# ========== IntegrationProvider methods ==========
@classmethod
def get_provider(cls, provider_id: str) -> Optional[Any]:
"""
Get IntegrationProvider by provider_id.
Args:
provider_id: The provider identifier (e.g., 'openai', 'stripe', 'resend')
Returns:
IntegrationProvider instance, None if not found
"""
cache_key = cls._get_provider_cache_key(provider_id)
# Try cache first
cached = cache.get(cache_key)
if cached is not None:
return cached
try:
from igny8_core.modules.system.models import IntegrationProvider
provider = IntegrationProvider.objects.filter(
provider_id=provider_id,
is_active=True
).first()
if provider:
cache.set(cache_key, provider, MODEL_CACHE_TTL)
return provider
except Exception as e:
logger.error(f"Could not fetch provider {provider_id} from DB: {e}")
return None
@classmethod
def get_api_key(cls, provider_id: str) -> Optional[str]:
"""
Get API key for a provider.
Args:
provider_id: The provider identifier (e.g., 'openai', 'anthropic', 'runware')
Returns:
API key string, None if not found or provider is inactive
"""
provider = cls.get_provider(provider_id)
if provider and provider.api_key:
return provider.api_key
return None
@classmethod
def get_api_secret(cls, provider_id: str) -> Optional[str]:
"""
Get API secret for a provider (for OAuth, Stripe secret key, etc.).
Args:
provider_id: The provider identifier
Returns:
API secret string, None if not found
"""
provider = cls.get_provider(provider_id)
if provider and provider.api_secret:
return provider.api_secret
return None
@classmethod
def get_webhook_secret(cls, provider_id: str) -> Optional[str]:
"""
Get webhook secret for a provider (for Stripe, PayPal webhooks).
Args:
provider_id: The provider identifier
Returns:
Webhook secret string, None if not found
"""
provider = cls.get_provider(provider_id)
if provider and provider.webhook_secret:
return provider.webhook_secret
return None
@classmethod
def clear_provider_cache(cls, provider_id: Optional[str] = None):
"""
Clear provider cache.
Args:
provider_id: Clear specific provider cache, or all if None
"""
if provider_id:
cache.delete(cls._get_provider_cache_key(provider_id))
else:
try:
from django.core.cache import caches
default_cache = caches['default']
if hasattr(default_cache, 'delete_pattern'):
default_cache.delete_pattern(f"{PROVIDER_CACHE_PREFIX}*")
else:
from igny8_core.modules.system.models import IntegrationProvider
for pid in IntegrationProvider.objects.values_list('provider_id', flat=True):
cache.delete(cls._get_provider_cache_key(pid))
except Exception as e:
logger.warning(f"Could not clear provider caches: {e}")
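The text-model branch of `calculate_cost` divides by one million because `input_cost_per_1m` and `output_cost_per_1m` are priced per 1M tokens. A standalone sketch of the same arithmetic, using hypothetical rates rather than real model pricing:

```python
from decimal import Decimal

def text_model_cost(input_tokens: int, output_tokens: int,
                    input_rate_per_1m: Decimal, output_rate_per_1m: Decimal) -> Decimal:
    # Mirrors ModelRegistry.calculate_cost for model_type == 'text':
    # sum the per-token costs, then scale down from per-1M pricing.
    return ((Decimal(input_tokens) * input_rate_per_1m)
            + (Decimal(output_tokens) * output_rate_per_1m)) / Decimal("1000000")

# Hypothetical rates: $0.15 per 1M input tokens, $0.60 per 1M output tokens.
print(text_model_cost(1000, 500, Decimal("0.15"), Decimal("0.60")))  # 0.00045
```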

View File

@@ -3,7 +3,7 @@ Prompt Registry - Centralized prompt management with override hierarchy
Supports: task-level overrides → DB prompts → GlobalAIPrompt (REQUIRED)
"""
import logging
from typing import Dict, Any, Optional, Tuple
from typing import Dict, Any, Optional
from django.db import models
logger = logging.getLogger(__name__)
@@ -16,10 +16,10 @@ class PromptRegistry:
2. DB prompt for (account, function)
3. GlobalAIPrompt (REQUIRED - no hardcoded fallbacks)
"""
# Removed ALL hardcoded prompts - GlobalAIPrompt is now the ONLY source of default prompts
# To add/modify prompts, use Django admin: /admin/system/globalaiprompt/
# Mapping from function names to prompt types
FUNCTION_TO_PROMPT_TYPE = {
'auto_cluster': 'clustering',
@@ -35,114 +35,7 @@ class PromptRegistry:
'generate_service_page': 'service_generation',
'generate_taxonomy': 'taxonomy_generation',
}
# Mapping of prompt types to their prefix numbers and display names
# Format: {prompt_type: (number, display_name)}
# GP = Global Prompt, CP = Custom Prompt
PROMPT_PREFIX_MAP = {
'clustering': ('01', 'Clustering'),
'ideas': ('02', 'Ideas'),
'content_generation': ('03', 'ContentGen'),
'image_prompt_extraction': ('04', 'ImagePrompts'),
'site_structure_generation': ('05', 'SiteStructure'),
'optimize_content': ('06', 'OptimizeContent'),
'product_generation': ('07', 'ProductGen'),
'service_generation': ('08', 'ServiceGen'),
'taxonomy_generation': ('09', 'TaxonomyGen'),
'image_prompt_template': ('10', 'ImageTemplate'),
'negative_prompt': ('11', 'NegativePrompt'),
}
@classmethod
def get_prompt_prefix(cls, prompt_type: str, is_custom: bool) -> str:
"""
Generate prompt prefix for tracking.
Args:
prompt_type: The prompt type (e.g., 'clustering', 'ideas')
is_custom: True if using custom/account-specific prompt, False if global
Returns:
Prefix string like "##GP01-Clustering" or "##CP01-Clustering"
"""
prefix_info = cls.PROMPT_PREFIX_MAP.get(prompt_type, ('00', prompt_type.title()))
number, display_name = prefix_info
prefix_type = 'CP' if is_custom else 'GP'
return f"##{prefix_type}{number}-{display_name}"
@classmethod
def get_prompt_with_metadata(
cls,
function_name: str,
account: Optional[Any] = None,
task: Optional[Any] = None,
context: Optional[Dict[str, Any]] = None
) -> Tuple[str, bool, str]:
"""
Get prompt for a function with metadata about source.
Priority:
1. task.prompt_override (if task provided and has override)
2. DB prompt for (account, function) - marked as custom if is_customized=True
3. GlobalAIPrompt (REQUIRED - no hardcoded fallbacks)
Args:
function_name: AI function name (e.g., 'auto_cluster', 'generate_ideas')
account: Account object (optional)
task: Task object with optional prompt_override (optional)
context: Additional context for prompt rendering (optional)
Returns:
Tuple of (prompt_string, is_custom, prompt_type)
- prompt_string: The rendered prompt
- is_custom: True if using custom/account prompt, False if global
- prompt_type: The prompt type identifier
"""
# Step 1: Get prompt type
prompt_type = cls.FUNCTION_TO_PROMPT_TYPE.get(function_name, function_name)
# Step 2: Check task-level override (always considered custom)
if task and hasattr(task, 'prompt_override') and task.prompt_override:
logger.info(f"Using task-level prompt override for {function_name}")
prompt = task.prompt_override
return cls._render_prompt(prompt, context or {}), True, prompt_type
# Step 3: Try DB prompt (account-specific)
if account:
try:
from igny8_core.modules.system.models import AIPrompt
db_prompt = AIPrompt.objects.get(
account=account,
prompt_type=prompt_type,
is_active=True
)
# Check if prompt is customized
is_custom = db_prompt.is_customized
logger.info(f"Using {'customized' if is_custom else 'default'} account prompt for {function_name} (account {account.id})")
prompt = db_prompt.prompt_value
return cls._render_prompt(prompt, context or {}), is_custom, prompt_type
except Exception as e:
logger.debug(f"No account-specific prompt found for {function_name}: {e}")
# Step 4: Try GlobalAIPrompt (platform-wide default) - REQUIRED
try:
from igny8_core.modules.system.global_settings_models import GlobalAIPrompt
global_prompt = GlobalAIPrompt.objects.get(
prompt_type=prompt_type,
is_active=True
)
logger.info(f"Using global default prompt for {function_name} from GlobalAIPrompt")
prompt = global_prompt.prompt_value
return cls._render_prompt(prompt, context or {}), False, prompt_type
except Exception as e:
error_msg = (
f"ERROR: Global prompt '{prompt_type}' not found for function '{function_name}'. "
f"Please configure it in Django admin at: /admin/system/globalaiprompt/. "
f"Error: {e}"
)
logger.error(error_msg)
raise ValueError(error_msg)
@classmethod
def get_prompt(
cls,
@@ -153,23 +46,63 @@ class PromptRegistry:
) -> str:
"""
Get prompt for a function with hierarchical resolution.
Priority:
1. task.prompt_override (if task provided and has override)
2. DB prompt for (account, function)
3. GlobalAIPrompt (REQUIRED - no hardcoded fallbacks)
Args:
function_name: AI function name (e.g., 'auto_cluster', 'generate_ideas')
account: Account object (optional)
task: Task object with optional prompt_override (optional)
context: Additional context for prompt rendering (optional)
Returns:
Prompt string ready for formatting
"""
prompt, _, _ = cls.get_prompt_with_metadata(function_name, account, task, context)
return prompt
# Step 1: Check task-level override
if task and hasattr(task, 'prompt_override') and task.prompt_override:
logger.info(f"Using task-level prompt override for {function_name}")
prompt = task.prompt_override
return cls._render_prompt(prompt, context or {})
# Step 2: Get prompt type
prompt_type = cls.FUNCTION_TO_PROMPT_TYPE.get(function_name, function_name)
# Step 3: Try DB prompt (account-specific)
if account:
try:
from igny8_core.modules.system.models import AIPrompt
db_prompt = AIPrompt.objects.get(
account=account,
prompt_type=prompt_type,
is_active=True
)
logger.info(f"Using account-specific prompt for {function_name} (account {account.id})")
prompt = db_prompt.prompt_value
return cls._render_prompt(prompt, context or {})
except Exception as e:
logger.debug(f"No account-specific prompt found for {function_name}: {e}")
# Step 4: Try GlobalAIPrompt (platform-wide default) - REQUIRED
try:
from igny8_core.modules.system.global_settings_models import GlobalAIPrompt
global_prompt = GlobalAIPrompt.objects.get(
prompt_type=prompt_type,
is_active=True
)
logger.info(f"Using global default prompt for {function_name} from GlobalAIPrompt")
prompt = global_prompt.prompt_value
return cls._render_prompt(prompt, context or {})
except Exception as e:
error_msg = (
f"ERROR: Global prompt '{prompt_type}' not found for function '{function_name}'. "
f"Please configure it in Django admin at: /admin/system/globalaiprompt/. "
f"Error: {e}"
)
logger.error(error_msg)
raise ValueError(error_msg)
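The three-level resolution above (task override → account prompt → global prompt, with no hardcoded fallback) can be sketched as a plain function. The parameter names here are illustrative, not the actual model fields:

```python
from typing import Optional

def resolve_prompt(task_override: Optional[str],
                   account_prompt: Optional[str],
                   global_prompt: Optional[str]) -> str:
    # Priority 1: explicit per-task override always wins.
    if task_override:
        return task_override
    # Priority 2: account-specific prompt from the database.
    if account_prompt:
        return account_prompt
    # Priority 3: platform-wide default; REQUIRED, so a miss is an error.
    if global_prompt:
        return global_prompt
    raise ValueError("Global prompt not configured - set it in Django admin")

print(resolve_prompt(None, None, "global default"))  # global default
```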
@classmethod
def _render_prompt(cls, prompt_template: str, context: Dict[str, Any]) -> str:
@@ -286,61 +219,3 @@ def get_prompt(function_name: str, account=None, task=None, context=None) -> str
"""Get prompt using registry"""
return PromptRegistry.get_prompt(function_name, account=account, task=task, context=context)
def get_prompt_with_prefix(function_name: str, account=None, task=None, context=None) -> Tuple[str, str]:
"""
Get prompt with its tracking prefix.
Args:
function_name: AI function name
account: Account object (optional)
task: Task object with optional prompt_override (optional)
context: Additional context for prompt rendering (optional)
Returns:
Tuple of (prompt_string, prefix_string)
- prompt_string: The rendered prompt
- prefix_string: The tracking prefix (e.g., '##GP01-Clustering' or '##CP01-Clustering')
"""
prompt, is_custom, prompt_type = PromptRegistry.get_prompt_with_metadata(
function_name, account=account, task=task, context=context
)
prefix = PromptRegistry.get_prompt_prefix(prompt_type, is_custom)
return prompt, prefix
def get_prompt_prefix_for_function(function_name: str, account=None, task=None) -> str:
"""
Get just the prefix for a function without fetching the full prompt.
Useful when the prompt was already fetched elsewhere.
Args:
function_name: AI function name
account: Account object (optional)
task: Task object with optional prompt_override (optional)
Returns:
The tracking prefix (e.g., '##GP01-Clustering' or '##CP01-Clustering')
"""
prompt_type = PromptRegistry.FUNCTION_TO_PROMPT_TYPE.get(function_name, function_name)
# Check for task-level override (always custom)
if task and hasattr(task, 'prompt_override') and task.prompt_override:
return PromptRegistry.get_prompt_prefix(prompt_type, is_custom=True)
# Check for account-specific prompt
if account:
try:
from igny8_core.modules.system.models import AIPrompt
db_prompt = AIPrompt.objects.get(
account=account,
prompt_type=prompt_type,
is_active=True
)
return PromptRegistry.get_prompt_prefix(prompt_type, is_custom=db_prompt.is_customized)
except Exception:
pass
# Fallback to global (not custom)
return PromptRegistry.get_prompt_prefix(prompt_type, is_custom=False)
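The tracking prefixes follow the `##GP<NN>-<Name>` / `##CP<NN>-<Name>` scheme defined by `PROMPT_PREFIX_MAP`. A self-contained sketch of the same formatting, using a small subset of the map:

```python
# Subset of PROMPT_PREFIX_MAP from the registry above.
PROMPT_PREFIX_MAP = {
    "clustering": ("01", "Clustering"),
    "ideas": ("02", "Ideas"),
}

def prompt_prefix(prompt_type: str, is_custom: bool) -> str:
    # Unknown types fall back to ('00', Title-cased type name).
    number, display_name = PROMPT_PREFIX_MAP.get(prompt_type, ("00", prompt_type.title()))
    # GP = Global Prompt, CP = Custom Prompt.
    return f"##{'CP' if is_custom else 'GP'}{number}-{display_name}"

print(prompt_prefix("clustering", False))  # ##GP01-Clustering
print(prompt_prefix("clustering", True))   # ##CP01-Clustering
```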

View File

@@ -1,7 +1,6 @@
"""
AI Settings - Centralized model configurations and limits
Uses AISettings (system defaults) with optional per-account overrides via AccountSettings.
API keys are stored in IntegrationProvider.
Uses global settings with optional per-account overrides.
"""
from typing import Dict, Any
import logging
@@ -23,9 +22,10 @@ def get_model_config(function_name: str, account) -> Dict[str, Any]:
Get model configuration for AI function.
Architecture:
- API keys: From IntegrationProvider (centralized)
- Model: From AIModelConfig (is_default=True)
- Params: From AISettings with AccountSettings overrides
- API keys: ALWAYS from GlobalIntegrationSettings (platform-wide)
- Model/params: From IntegrationSettings if account has override, else from global
- Free plan: Cannot override, uses global defaults
- Starter/Growth/Scale: Can override model, temperature, max_tokens, etc.
Args:
function_name: Name of the AI function
@@ -44,57 +44,67 @@ def get_model_config(function_name: str, account) -> Dict[str, Any]:
actual_name = FUNCTION_ALIASES.get(function_name, function_name)
try:
from igny8_core.modules.system.ai_settings import AISettings
from igny8_core.ai.model_registry import ModelRegistry
from igny8_core.modules.system.global_settings_models import GlobalIntegrationSettings
from igny8_core.modules.system.models import IntegrationSettings
# Get API key from IntegrationProvider
api_key = ModelRegistry.get_api_key('openai')
# Get global settings (for API keys and defaults)
global_settings = GlobalIntegrationSettings.get_instance()
if not api_key:
if not global_settings.openai_api_key:
raise ValueError(
"Platform OpenAI API key not configured. "
"Please configure IntegrationProvider in Django admin."
"Please configure GlobalIntegrationSettings in Django admin."
)
# Get default text model from AIModelConfig
default_model = ModelRegistry.get_default_model('text')
if not default_model:
default_model = 'gpt-4o-mini' # Ultimate fallback
# Start with global defaults
model = global_settings.openai_model
temperature = global_settings.openai_temperature
max_tokens = global_settings.openai_max_tokens
api_key = global_settings.openai_api_key # ALWAYS from global
model = default_model
# Get settings with account overrides
temperature = AISettings.get_effective_temperature(account)
max_tokens = AISettings.get_effective_max_tokens(account)
# Get max_tokens from AIModelConfig if available
# Check if account has overrides (only for Starter/Growth/Scale plans)
# Free plan users cannot create IntegrationSettings records
try:
from igny8_core.business.billing.models import AIModelConfig
model_config = AIModelConfig.objects.filter(
model_name=model,
account_settings = IntegrationSettings.objects.get(
account=account,
integration_type='openai',
is_active=True
).first()
if model_config and model_config.max_output_tokens:
max_tokens = model_config.max_output_tokens
except Exception as e:
logger.warning(f"Could not load max_tokens from AIModelConfig for {model}: {e}")
)
config = account_settings.config or {}
# Override model if specified (NULL = use global)
if config.get('model'):
model = config['model']
# Override temperature if specified
if config.get('temperature') is not None:
temperature = config['temperature']
# Override max_tokens if specified
if config.get('max_tokens'):
max_tokens = config['max_tokens']
except IntegrationSettings.DoesNotExist:
# No account override, use global defaults (already set above)
pass
except Exception as e:
logger.error(f"Could not load OpenAI settings for account {account.id}: {e}")
raise ValueError(
f"Could not load OpenAI configuration for account {account.id}. "
f"Please configure IntegrationProvider and AISettings."
f"Please configure GlobalIntegrationSettings."
)
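The override pattern above treats a missing or `NULL` config value as "use the global default"; only explicitly set keys win. A simplified sketch of that merge rule (the real code checks some keys for truthiness rather than `is not None`; keys and values here are hypothetical):

```python
def effective_settings(account_config: dict, global_defaults: dict) -> dict:
    # Start from platform defaults; an account key overrides only when it
    # is explicitly set (None means "fall back to global").
    merged = dict(global_defaults)
    for key, value in account_config.items():
        if value is not None:
            merged[key] = value
    return merged

defaults = {"model": "gpt-4o-mini", "temperature": 0.7, "max_tokens": 4096}
override = {"model": None, "temperature": 0.2}
print(effective_settings(override, defaults))
# {'model': 'gpt-4o-mini', 'temperature': 0.2, 'max_tokens': 4096}
```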
# Validate model is in our supported list using ModelRegistry (database-driven)
# Validate model is in our supported list (optional validation)
try:
if not ModelRegistry.validate_model(model):
supported_models = [m.model_name for m in ModelRegistry.list_models(model_type='text')]
from igny8_core.utils.ai_processor import MODEL_RATES
if model not in MODEL_RATES:
logger.warning(
f"Model '{model}' for account {account.id} is not in supported list. "
f"Supported models: {supported_models}"
f"Supported models: {list(MODEL_RATES.keys())}"
)
except Exception:
except ImportError:
pass
# Build response format based on model (JSON mode for supported models)

View File

@@ -157,7 +157,6 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
from igny8_core.modules.system.models import IntegrationSettings
from igny8_core.ai.ai_core import AICore
from igny8_core.ai.prompts import PromptRegistry
from igny8_core.business.billing.services.credit_service import CreditService
logger.info("=" * 80)
logger.info("process_image_generation_queue STARTED")

@@ -182,86 +181,73 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
failed = 0
results = []
# Get image generation settings from AISettings (with account overrides)
# Get image generation settings
# Try account-specific override, otherwise use GlobalIntegrationSettings
logger.info("[process_image_generation_queue] Step 1: Loading image generation settings")
from igny8_core.modules.system.ai_settings import AISettings
from igny8_core.ai.model_registry import ModelRegistry
from igny8_core.modules.system.global_settings_models import GlobalIntegrationSettings
# Get effective settings
image_type = AISettings.get_effective_image_style(account)
image_format = 'webp' # Default format
config = {}
try:
image_settings = IntegrationSettings.objects.get(
account=account,
integration_type='image_generation',
is_active=True
)
logger.info(f"[process_image_generation_queue] Using account {account.id} IntegrationSettings override")
config = image_settings.config or {}
except IntegrationSettings.DoesNotExist:
logger.info(f"[process_image_generation_queue] No IntegrationSettings override for account {account.id}, using GlobalIntegrationSettings")
except Exception as e:
logger.error(f"[process_image_generation_queue] ERROR loading image generation settings: {e}", exc_info=True)
return {'success': False, 'error': f'Error loading image generation settings: {str(e)}'}
# Get default image model from database
default_model = ModelRegistry.get_default_model('image')
if default_model:
model_config = ModelRegistry.get_model(default_model)
provider = model_config.provider if model_config else 'openai'
model = default_model
# Use GlobalIntegrationSettings for missing values
global_settings = GlobalIntegrationSettings.get_instance()
logger.info(f"[process_image_generation_queue] Image generation settings loaded. Config keys: {list(config.keys())}")
logger.info(f"[process_image_generation_queue] Full config: {config}")
# Get provider and model from config with global fallbacks
provider = config.get('provider') or global_settings.default_image_service
if provider == 'runware':
model = config.get('model') or config.get('imageModel') or global_settings.runware_model
else:
provider = 'openai'
model = 'dall-e-3'
model = config.get('model') or config.get('imageModel') or global_settings.dalle_model
logger.info(f"[process_image_generation_queue] Using PROVIDER: {provider}, MODEL: {model} from settings")
# Style to prompt enhancement mapping
# These style descriptors are added to the image prompt for better results
STYLE_PROMPT_MAP = {
# Runware styles
'photorealistic': 'ultra realistic photography, natural lighting, real world look, photorealistic',
'illustration': 'digital illustration, clean lines, artistic style, modern illustration',
'3d_render': 'computer generated 3D render, modern polished 3D style, depth and dramatic lighting',
'minimal_flat': 'minimal flat design, simple shapes, flat colors, modern graphic design aesthetic',
'artistic': 'artistic painterly style, expressive brushstrokes, hand painted aesthetic',
'cartoon': 'cartoon stylized illustration, playful exaggerated forms, animated character style',
# DALL-E styles (mapped from OpenAI API style parameter)
'natural': 'natural realistic style',
'vivid': 'vivid dramatic hyper-realistic style',
# Legacy fallbacks
'realistic': 'ultra realistic photography, natural lighting, photorealistic',
}
# Get the style description for prompt enhancement
style_description = STYLE_PROMPT_MAP.get(image_type, STYLE_PROMPT_MAP.get('photorealistic'))
logger.info(f"[process_image_generation_queue] Style: {image_type} -> prompt enhancement: {style_description[:50]}...")
# Model-specific landscape sizes (square is always 1024x1024)
# For Runware models - based on Runware documentation for optimal results per model
# For OpenAI DALL-E 3 - uses 1792x1024 for landscape
MODEL_LANDSCAPE_SIZES = {
'runware:97@1': '1280x768', # Hi Dream Full landscape
'bria:10@1': '1344x768', # Bria 3.2 landscape (16:9)
'google:4@2': '1376x768', # Nano Banana landscape (16:9)
'dall-e-3': '1792x1024', # DALL-E 3 landscape
'dall-e-2': '1024x1024', # DALL-E 2 only supports square
}
DEFAULT_SQUARE_SIZE = '1024x1024'
# Get model-specific landscape size for featured images
model_landscape_size = MODEL_LANDSCAPE_SIZES.get(model, '1792x1024' if provider == 'openai' else '1280x768')
# Featured image always uses model-specific landscape size
featured_image_size = model_landscape_size
# In-article images: alternating square/landscape based on position (handled in image loop)
in_article_square_size = DEFAULT_SQUARE_SIZE
in_article_landscape_size = model_landscape_size
image_type = config.get('image_type') or global_settings.image_style
image_format = config.get('image_format', 'webp')
desktop_enabled = config.get('desktop_enabled', True)
mobile_enabled = config.get('mobile_enabled', True)
# Get image sizes from config, with fallback defaults
featured_image_size = config.get('featured_image_size') or ('1280x832' if provider == 'runware' else '1024x1024')
desktop_image_size = config.get('desktop_image_size') or global_settings.desktop_image_size
in_article_image_size = config.get('in_article_image_size') or '512x512' # Default to 512x512
logger.info("[process_image_generation_queue] Settings loaded:")
logger.info(f" - Provider: {provider}")
logger.info(f" - Model: {model}")
logger.info(f" - Image type: {image_type}")
logger.info(f" - Image format: {image_format}")
logger.info(f" - Featured image size: {featured_image_size}")
logger.info(f" - In-article square: {in_article_square_size}, landscape: {in_article_landscape_size}")
logger.info(f" - Desktop enabled: {desktop_enabled}")
logger.info(f" - Mobile enabled: {mobile_enabled}")
# Get provider API key from IntegrationProvider (centralized)
logger.info(f"[process_image_generation_queue] Step 2: Loading {provider.upper()} API key from IntegrationProvider")
# Get provider API key
# API keys are ALWAYS from GlobalIntegrationSettings (accounts cannot override API keys)
# Account IntegrationSettings only store provider preference, NOT API keys
logger.info(f"[process_image_generation_queue] Step 2: Loading {provider.upper()} API key from GlobalIntegrationSettings")
# Get API key from IntegrationProvider (centralized)
api_key = ModelRegistry.get_api_key(provider)
# Get API key from GlobalIntegrationSettings
if provider == 'runware':
api_key = global_settings.runware_api_key
elif provider == 'openai':
api_key = global_settings.dalle_api_key or global_settings.openai_api_key
else:
api_key = None
if not api_key:
logger.error(f"[process_image_generation_queue] {provider.upper()} API key not configured in IntegrationProvider")
return {'success': False, 'error': f'{provider.upper()} API key not configured'}
logger.error(f"[process_image_generation_queue] {provider.upper()} API key not configured in GlobalIntegrationSettings")
return {'success': False, 'error': f'{provider.upper()} API key not configured in GlobalIntegrationSettings'}
# Log API key presence (but not the actual key for security)
api_key_preview = f"{api_key[:10]}...{api_key[-4:]}" if len(api_key) > 14 else "***"
@@ -400,7 +386,7 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
# Calculate actual template length with placeholders filled
# Format template with dummy values to measure actual length
template_with_dummies = image_prompt_template.format(
image_type=style_description, # Use actual style description length
image_type=image_type,
post_title='X' * len(post_title), # Use same length as actual post_title
image_prompt='' # Empty to measure template overhead
)
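The dummy-fill trick above measures how many characters the template itself consumes, so the remaining budget can go to the actual image prompt. A standalone sketch with a hypothetical template and provider limit:

```python
# Hypothetical template; the real one comes from the prompt registry.
template = "{image_type}, featured image for '{post_title}': {image_prompt}"
post_title = "Example Post"

# Fill with same-length dummy values, leave the prompt slot empty, and measure.
overhead = len(template.format(
    image_type="photorealistic",
    post_title="X" * len(post_title),
    image_prompt="",
))

MAX_PROMPT_LENGTH = 120  # hypothetical provider limit
budget = MAX_PROMPT_LENGTH - overhead  # characters left for the image prompt
print(overhead, budget)  # 51 69
```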
@@ -427,7 +413,7 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
image_prompt = image_prompt[:max_image_prompt_length - 3] + "..."
formatted_prompt = image_prompt_template.format(
image_type=style_description, # Use full style description instead of raw value
image_type=image_type,
post_title=post_title,
image_prompt=image_prompt
)
@@ -492,40 +478,15 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
}
)
- # Use appropriate size based on image type and position
- # Featured: Always landscape (model-specific)
- # In-article: Alternating square/landscape based on position
- # Position 0: Square (1024x1024)
- # Position 1: Landscape (model-specific)
- # Position 2: Square (1024x1024)
- # Position 3: Landscape (model-specific)
+ # Use appropriate size based on image type
if image.image_type == 'featured':
- image_size = featured_image_size # Model-specific landscape
- elif image.image_type == 'in_article':
- # Alternate based on position: even=square, odd=landscape
- position = image.position or 0
- if position % 2 == 0: # Position 0, 2: Square
- image_size = in_article_square_size
- else: # Position 1, 3: Landscape
- image_size = in_article_landscape_size
- logger.info(f"[process_image_generation_queue] In-article image position {position}: using {'square' if position % 2 == 0 else 'landscape'} size {image_size}")
- else: # desktop or other (legacy)
- image_size = in_article_square_size # Default to square
- # For DALL-E, convert image_type to style parameter
- # image_type is from user settings (e.g., 'vivid', 'natural', 'realistic')
- # DALL-E accepts 'vivid' or 'natural' - map accordingly
- dalle_style = None
- if provider == 'openai':
- # Map image_type to DALL-E style
- # 'natural' = more realistic photos (default)
- # 'vivid' = hyper-real, dramatic images
- if image_type in ['vivid']:
- dalle_style = 'vivid'
- else:
- # Default to 'natural' for realistic photos
- dalle_style = 'natural'
- logger.info(f"[process_image_generation_queue] DALL-E style: {dalle_style} (from image_type: {image_type})")
+ image_size = featured_image_size # Read from config
+ elif image.image_type == 'desktop':
+ image_size = desktop_image_size
+ elif image.image_type == 'mobile':
+ image_size = '512x512' # Fixed mobile size
+ else: # in_article or other
+ image_size = in_article_image_size # Read from config, default 512x512
result = ai_core.generate_image(
prompt=formatted_prompt,
@@ -534,8 +495,7 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
size=image_size,
api_key=api_key,
negative_prompt=negative_prompt,
- function_name='generate_images_from_prompts',
- style=dalle_style
+ function_name='generate_images_from_prompts'
)
# Update progress: Image generation complete (50%)
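The alternating-size and DALL-E-style logic in this hunk reduces to two pure functions. A sketch under assumptions: the concrete sizes here are placeholders, since the real values come from per-model config:

```python
def pick_image_size(image_type, position=0,
                    featured="1792x1024",
                    square="1024x1024",
                    landscape="1792x1024"):
    """Featured is always landscape; in-article alternates square (even
    positions) and landscape (odd); anything else falls back to square."""
    if image_type == "featured":
        return featured
    if image_type == "in_article":
        return square if (position or 0) % 2 == 0 else landscape
    return square  # desktop / legacy types

def dalle_style(image_type):
    """DALL-E 3 accepts only 'vivid' or 'natural'; anything that is not
    explicitly 'vivid' maps to the more realistic 'natural'."""
    return "vivid" if image_type == "vivid" else "natural"
```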
@@ -710,33 +670,6 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
})
failed += 1
else:
# Deduct credits for successful image generation
credits_deducted = 0
cost_usd = result.get('cost_usd', 0)
if account:
try:
credits_deducted = CreditService.deduct_credits_for_image(
account=account,
model_name=model,
num_images=1,
description=f"Image generation: {content.title[:50] if content else 'Image'}" if content else f"Image {image_id}",
metadata={
'image_id': image_id,
'content_id': content_id,
'provider': provider,
'model': model,
'image_type': image.image_type if image else 'unknown',
'size': image_size,
},
cost_usd=cost_usd,
related_object_type='image',
related_object_id=image_id
)
logger.info(f"[process_image_generation_queue] Credits deducted for image {image_id}: account balance now {credits_deducted}")
except Exception as credit_error:
logger.error(f"[process_image_generation_queue] Failed to deduct credits for image {image_id}: {credit_error}")
# Don't fail the image generation if credit deduction fails
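The try/except around the deduction encodes a deliberate policy: a billing failure is logged but never fails an already-generated image. A minimal sketch, with the `deduct` callable standing in for `CreditService.deduct_credits_for_image`:

```python
import logging

logger = logging.getLogger(__name__)

def deduct_safely(deduct, image_id):
    """Best-effort billing: log and swallow deduction errors so a
    successful generation is never marked failed over accounting."""
    try:
        return deduct()
    except Exception as exc:  # deliberate broad catch, mirroring the task
        logger.error(f"Failed to deduct credits for image {image_id}: {exc}")
        return None
```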
# Update progress: Complete (100%)
self.update_state(
state='PROGRESS',

View File

@@ -145,7 +145,7 @@ def validate_model(model: str, model_type: str = 'text') -> Dict[str, Any]:
Dict with 'valid' (bool) and optional 'error' (str)
"""
try:
- # Use database-driven validation via AIModelConfig
+ # Try database first
from igny8_core.business.billing.models import AIModelConfig
exists = AIModelConfig.objects.filter(
@@ -169,20 +169,29 @@ def validate_model(model: str, model_type: str = 'text') -> Dict[str, Any]:
else:
return {
'valid': False,
- 'error': f'No {model_type} models configured in database'
+ 'error': f'Model "{model}" is not found in database'
}
return {'valid': True}
- except Exception as e:
- # Log error but don't fallback to constants - DB is authoritative
- import logging
- logger = logging.getLogger(__name__)
- logger.error(f"Error validating model {model}: {e}")
- return {
- 'valid': False,
- 'error': f'Error validating model: {e}'
- }
+ except Exception:
+ # Fallback to constants if database fails
+ from .constants import MODEL_RATES, VALID_OPENAI_IMAGE_MODELS
+ if model_type == 'text':
+ if model not in MODEL_RATES:
+ return {
+ 'valid': False,
+ 'error': f'Model "{model}" is not in supported models list'
+ }
+ elif model_type == 'image':
+ if model not in VALID_OPENAI_IMAGE_MODELS:
+ return {
+ 'valid': False,
+ 'error': f'Model "{model}" is not valid for OpenAI image generation. Only {", ".join(VALID_OPENAI_IMAGE_MODELS)} are supported.'
+ }
+ return {'valid': True}
def validate_image_size(size: str, model: str) -> Dict[str, Any]:
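Both sides of the hunk above share the same return shape. A sketch of the database-authoritative variant, with `available` standing in for the names in the `AIModelConfig` queryset:

```python
def validate_model(model, available, model_type="text"):
    """DB-authoritative check: an empty registry, a missing model, and a
    lookup error all return valid=False with a descriptive message."""
    try:
        names = set(available)
        if not names:
            return {"valid": False,
                    "error": f"No {model_type} models configured in database"}
        if model not in names:
            return {"valid": False,
                    "error": f'Model "{model}" is not found in database'}
        return {"valid": True}
    except Exception as exc:
        return {"valid": False, "error": f"Error validating model: {exc}"}
```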

View File

@@ -5,8 +5,7 @@ from django.urls import path
from igny8_core.api.account_views import (
AccountSettingsViewSet,
TeamManagementViewSet,
- UsageAnalyticsViewSet,
- DashboardStatsViewSet
+ UsageAnalyticsViewSet
)
urlpatterns = [
@@ -29,9 +28,4 @@ urlpatterns = [
path('usage/analytics/', UsageAnalyticsViewSet.as_view({
'get': 'overview'
}), name='usage-analytics'),
- # Dashboard Stats (real data for home page)
- path('dashboard/stats/', DashboardStatsViewSet.as_view({
- 'get': 'stats'
- }), name='dashboard-stats'),
]

View File

@@ -10,7 +10,6 @@ from django.contrib.auth import get_user_model
from django.db.models import Q, Count, Sum
from django.utils import timezone
from datetime import timedelta
from decimal import Decimal
from drf_spectacular.utils import extend_schema, extend_schema_view
from igny8_core.auth.models import Account
@@ -132,16 +131,6 @@ class TeamManagementViewSet(viewsets.ViewSet):
status=status.HTTP_400_BAD_REQUEST
)
# Check hard limit for users BEFORE creating
from igny8_core.business.billing.services.limit_service import LimitService, HardLimitExceededError
try:
LimitService.check_hard_limit(account, 'users', additional_count=1)
except HardLimitExceededError as e:
return Response(
{'error': str(e)},
status=status.HTTP_400_BAD_REQUEST
)
# Create user (simplified - in production, send invitation email)
user = User.objects.create_user(
email=email,
@@ -253,216 +242,3 @@ class UsageAnalyticsViewSet(viewsets.ViewSet):
'total_usage': abs(transactions.filter(amount__lt=0).aggregate(Sum('amount'))['amount__sum'] or 0),
'total_purchases': transactions.filter(amount__gt=0).aggregate(Sum('amount'))['amount__sum'] or 0,
})
@extend_schema_view(
stats=extend_schema(tags=['Account']),
)
class DashboardStatsViewSet(viewsets.ViewSet):
"""Dashboard statistics - real data for home page widgets"""
permission_classes = [IsAuthenticated]
@action(detail=False, methods=['get'])
def stats(self, request):
"""
Get dashboard statistics for the home page.
Query params:
- site_id: Filter by site (optional, defaults to all sites)
- days: Number of days for AI operations (default: 7)
Returns:
- ai_operations: Real credit usage by operation type
- recent_activity: Recent notifications
- content_velocity: Content created this week/month
- images_count: Actual total images count
- published_count: Actual published content count
"""
account = request.user.account
site_id = request.query_params.get('site_id')
days = int(request.query_params.get('days', 7))
# Import models here to avoid circular imports
from igny8_core.modules.writer.models import Images, Content
from igny8_core.modules.planner.models import Keywords, Clusters, ContentIdeas
from igny8_core.business.notifications.models import Notification
from igny8_core.business.billing.models import CreditUsageLog
from igny8_core.auth.models import Site
# Build base filter for site
site_filter = {}
if site_id:
try:
site_filter['site_id'] = int(site_id)
except (ValueError, TypeError):
pass
# ========== AI OPERATIONS (from CreditUsageLog) ==========
start_date = timezone.now() - timedelta(days=days)
usage_query = CreditUsageLog.objects.filter(
account=account,
created_at__gte=start_date
)
# Get operations grouped by type
operations_data = usage_query.values('operation_type').annotate(
count=Count('id'),
credits=Sum('credits_used')
).order_by('-credits')
# Calculate totals
total_ops = usage_query.count()
total_credits = usage_query.aggregate(total=Sum('credits_used'))['total'] or 0
# Format operations for frontend
operations = []
for op in operations_data:
op_type = op['operation_type'] or 'other'
operations.append({
'type': op_type,
'count': op['count'] or 0,
'credits': op['credits'] or 0,
})
ai_operations = {
'period': f'{days}d',
'operations': operations,
'totals': {
'count': total_ops,
'credits': total_credits,
'successRate': 98.5, # TODO: calculate from actual success/failure
'avgCreditsPerOp': round(total_credits / total_ops, 1) if total_ops > 0 else 0,
}
}
# ========== RECENT ACTIVITY (from Notifications) ==========
recent_notifications = Notification.objects.filter(
account=account
).order_by('-created_at')[:10]
recent_activity = []
for notif in recent_notifications:
# Map notification type to activity type
activity_type_map = {
'ai_clustering_complete': 'clustering',
'ai_ideas_complete': 'ideas',
'ai_content_complete': 'content',
'ai_images_complete': 'images',
'ai_prompts_complete': 'images',
'content_published': 'published',
'wp_sync_success': 'published',
}
activity_type = activity_type_map.get(notif.notification_type, 'system')
# Map notification type to href
href_map = {
'clustering': '/planner/clusters',
'ideas': '/planner/ideas',
'content': '/writer/content',
'images': '/writer/images',
'published': '/writer/published',
}
recent_activity.append({
'id': str(notif.id),
'type': activity_type,
'title': notif.title,
'description': notif.message[:100] if notif.message else '',
'timestamp': notif.created_at.isoformat(),
'href': href_map.get(activity_type, '/dashboard'),
})
# ========== CONTENT COUNTS ==========
content_base = Content.objects.filter(account=account)
if site_filter:
content_base = content_base.filter(**site_filter)
total_content = content_base.count()
draft_content = content_base.filter(status='draft').count()
review_content = content_base.filter(status='review').count()
published_content = content_base.filter(status='published').count()
# ========== IMAGES COUNT (actual images, not content with images) ==========
images_base = Images.objects.filter(account=account)
if site_filter:
images_base = images_base.filter(**site_filter)
total_images = images_base.count()
generated_images = images_base.filter(status='generated').count()
pending_images = images_base.filter(status='pending').count()
# ========== CONTENT VELOCITY ==========
now = timezone.now()
week_ago = now - timedelta(days=7)
month_ago = now - timedelta(days=30)
# This week's content
week_content = content_base.filter(created_at__gte=week_ago).count()
week_images = images_base.filter(created_at__gte=week_ago).count()
# This month's content
month_content = content_base.filter(created_at__gte=month_ago).count()
month_images = images_base.filter(created_at__gte=month_ago).count()
# Estimate words (avg 1500 per article)
content_velocity = {
'thisWeek': {
'articles': week_content,
'words': week_content * 1500,
'images': week_images,
},
'thisMonth': {
'articles': month_content,
'words': month_content * 1500,
'images': month_images,
},
'total': {
'articles': total_content,
'words': total_content * 1500,
'images': total_images,
},
'trend': 0, # TODO: calculate actual trend
}
# ========== PIPELINE COUNTS ==========
keywords_base = Keywords.objects.filter(account=account)
clusters_base = Clusters.objects.filter(account=account)
ideas_base = ContentIdeas.objects.filter(account=account)
if site_filter:
keywords_base = keywords_base.filter(**site_filter)
clusters_base = clusters_base.filter(**site_filter)
ideas_base = ideas_base.filter(**site_filter)
# Get site count
sites_count = Site.objects.filter(account=account, is_active=True).count()
pipeline = {
'sites': sites_count,
'keywords': keywords_base.count(),
'clusters': clusters_base.count(),
'ideas': ideas_base.count(),
'tasks': ideas_base.filter(status='queued').count() + ideas_base.filter(status='completed').count(),
'drafts': draft_content + review_content,
'published': published_content,
}
return Response({
'ai_operations': ai_operations,
'recent_activity': recent_activity,
'content_velocity': content_velocity,
'pipeline': pipeline,
'counts': {
'content': {
'total': total_content,
'draft': draft_content,
'review': review_content,
'published': published_content,
},
'images': {
'total': total_images,
'generated': generated_images,
'pending': pending_images,
},
}
})
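The `values('operation_type').annotate(...)` grouping that builds `ai_operations` behaves like this pure-Python equivalent, with plain dicts standing in for `CreditUsageLog` rows:

```python
from collections import defaultdict

def summarize_operations(rows):
    """Group usage rows by operation type with per-type count and credit
    sums, sorted by credits descending, as the dashboard endpoint does."""
    grouped = defaultdict(lambda: {"count": 0, "credits": 0})
    for row in rows:
        op = row.get("operation_type") or "other"   # NULL types bucket as 'other'
        grouped[op]["count"] += 1
        grouped[op]["credits"] += row.get("credits_used") or 0
    operations = sorted(
        ({"type": op, **stats} for op, stats in grouped.items()),
        key=lambda o: -o["credits"],
    )
    total = sum(o["credits"] for o in operations)
    return {"operations": operations, "total_credits": total}
```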

View File

@@ -0,0 +1,381 @@
"""
Dashboard API Views
Provides aggregated data for the frontend dashboard in a single call.
Replaces multiple sequential API calls for better performance.
"""
from rest_framework import viewsets, status
from rest_framework.decorators import action
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated
from django.db.models import Count, Sum, Q, F
from django.utils import timezone
from datetime import timedelta
from drf_spectacular.utils import extend_schema, extend_schema_view, OpenApiParameter
from igny8_core.auth.models import Site, Sector
from igny8_core.business.planning.models import Keywords, Clusters, ContentIdeas
from igny8_core.business.content.models import Tasks, Content
from igny8_core.business.billing.models import CreditUsageLog
from igny8_core.ai.models import AITaskLog
@extend_schema_view(
summary=extend_schema(
tags=['Dashboard'],
summary='Get dashboard summary',
description='Returns aggregated dashboard data including pipeline counts, AI operations, recent activity, and items needing attention.',
parameters=[
OpenApiParameter(
name='site_id',
description='Filter by specific site ID',
required=False,
type=int
),
OpenApiParameter(
name='days',
description='Number of days for recent activity and AI operations (default: 7)',
required=False,
type=int
),
]
),
)
class DashboardSummaryViewSet(viewsets.ViewSet):
"""Dashboard summary providing aggregated data for the main dashboard."""
permission_classes = [IsAuthenticated]
@action(detail=False, methods=['get'])
def summary(self, request):
"""
Get comprehensive dashboard summary in a single API call.
Returns:
- needs_attention: Items requiring user action
- pipeline: Workflow pipeline counts (keywords → published)
- ai_operations: Recent AI usage stats
- recent_activity: Latest activity log
- content_velocity: Content creation trends
- automation: Automation status summary
"""
account = request.user.account
site_id = request.query_params.get('site_id')
days = int(request.query_params.get('days', 7))
start_date = timezone.now() - timedelta(days=days)
# Build base filters
site_filter = Q(site__account=account)
if site_id:
site_filter &= Q(site_id=site_id)
# ==========================================
# 1. PIPELINE COUNTS
# ==========================================
keywords_count = Keywords.objects.filter(site_filter).count()
clusters_count = Clusters.objects.filter(site_filter).count()
ideas_count = ContentIdeas.objects.filter(site_filter).count()
tasks_count = Tasks.objects.filter(site_filter).count()
content_filter = site_filter
drafts_count = Content.objects.filter(content_filter, status='draft').count()
review_count = Content.objects.filter(content_filter, status='review').count()
published_count = Content.objects.filter(content_filter, status='published').count()
total_content = drafts_count + review_count + published_count
# Calculate completion percentage based on workflow milestones
milestones = [
keywords_count > 0,
clusters_count > 0,
ideas_count > 0,
tasks_count > 0,
total_content > 0,
published_count > 0,
]
completion_percentage = int((sum(milestones) / len(milestones)) * 100) if milestones else 0
pipeline = {
'keywords': keywords_count,
'clusters': clusters_count,
'ideas': ideas_count,
'tasks': tasks_count,
'drafts': drafts_count,
'review': review_count,
'published': published_count,
'total_content': total_content,
'completion_percentage': completion_percentage,
}
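The milestone arithmetic above reduces to a one-liner over the six pipeline stage counts; a sketch:

```python
def completion_percentage(stage_counts):
    """Share of workflow milestones (keywords through published) that have
    at least one item, rounded down to a whole percent."""
    milestones = [count > 0 for count in stage_counts]
    if not milestones:
        return 0
    return int(sum(milestones) / len(milestones) * 100)
```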
# ==========================================
# 2. NEEDS ATTENTION
# ==========================================
needs_attention = []
# Content pending review
if review_count > 0:
needs_attention.append({
'id': 'pending-review',
'type': 'pending_review',
'title': 'pending review',
'count': review_count,
'action_label': 'Review',
'action_url': '/writer/review',
'severity': 'warning',
})
# Sites without keywords (incomplete setup)
sites = Site.objects.filter(account=account, is_active=True)
sites_without_keywords = []
for site in sites:
kw_count = Keywords.objects.filter(site=site).count()
if kw_count == 0:
sites_without_keywords.append(site)
if sites_without_keywords:
if len(sites_without_keywords) == 1:
needs_attention.append({
'id': 'setup-incomplete',
'type': 'setup_incomplete',
'title': f'{sites_without_keywords[0].name} needs setup',
'action_label': 'Complete',
'action_url': f'/sites/{sites_without_keywords[0].id}',
'severity': 'info',
})
else:
needs_attention.append({
'id': 'setup-incomplete',
'type': 'setup_incomplete',
'title': f'{len(sites_without_keywords)} sites need setup',
'action_label': 'Complete',
'action_url': '/sites',
'severity': 'info',
})
# Sites without integrations
sites_without_integration = sites.filter(has_integration=False).count()
if sites_without_integration > 0:
needs_attention.append({
'id': 'no-integration',
'type': 'no_integration',
'title': f'{sites_without_integration} site{"s" if sites_without_integration > 1 else ""} without WordPress',
'action_label': 'Connect',
'action_url': '/integrations',
'severity': 'info',
})
# Low credits warning
if account.credits < 100:
needs_attention.append({
'id': 'credits-low',
'type': 'credits_low',
'title': f'Credits running low ({account.credits} remaining)',
'action_label': 'Upgrade',
'action_url': '/billing/plans',
'severity': 'warning' if account.credits > 20 else 'error',
})
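The low-credit rule above as a standalone predicate, with the 100/20 thresholds taken straight from the view:

```python
def credit_alert(credits, low=100, critical=20):
    """Warn below 100 credits; escalate to error at 20 or fewer;
    return None when the balance is healthy."""
    if credits >= low:
        return None
    return {
        "id": "credits-low",
        "type": "credits_low",
        "title": f"Credits running low ({credits} remaining)",
        "severity": "warning" if credits > critical else "error",
    }
```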
# Queued tasks not processed
queued_tasks = Tasks.objects.filter(site_filter, status='queued').count()
if queued_tasks > 10:
needs_attention.append({
'id': 'queued-tasks',
'type': 'queued_tasks',
'title': f'{queued_tasks} tasks waiting to be generated',
'action_label': 'Generate',
'action_url': '/writer/tasks',
'severity': 'info',
})
# ==========================================
# 3. AI OPERATIONS (last N days)
# ==========================================
ai_usage = CreditUsageLog.objects.filter(
account=account,
created_at__gte=start_date
)
# Group by operation type
operations_by_type = ai_usage.values('operation_type').annotate(
count=Count('id'),
credits=Sum('credits_used'),
tokens=Sum('tokens_input') + Sum('tokens_output')
).order_by('-count')
# Format operation names
operation_display = {
'clustering': 'Clustering',
'idea_generation': 'Ideas',
'content_generation': 'Content',
'image_generation': 'Images',
'image_prompt_extraction': 'Image Prompts',
'linking': 'Linking',
'optimization': 'Optimization',
'reparse': 'Reparse',
'site_page_generation': 'Site Pages',
'site_structure_generation': 'Site Structure',
'ideas': 'Ideas',
'content': 'Content',
'images': 'Images',
}
operations = []
for op in operations_by_type[:5]: # Top 5 operations
operations.append({
'type': op['operation_type'],
'label': operation_display.get(op['operation_type'], op['operation_type'].replace('_', ' ').title()),
'count': op['count'],
'credits': op['credits'] or 0,
'tokens': op['tokens'] or 0,
})
total_credits_used = ai_usage.aggregate(total=Sum('credits_used'))['total'] or 0
total_operations = ai_usage.count()
ai_operations = {
'period_days': days,
'operations': operations,
'totals': {
'credits': total_credits_used,
'operations': total_operations,
}
}
# ==========================================
# 4. RECENT ACTIVITY
# ==========================================
recent_logs = AITaskLog.objects.filter(
account=account,
status='success',
created_at__gte=start_date
).order_by('-created_at')[:10]
activity_icons = {
'run_clustering': 'group',
'generate_content_ideas': 'bolt',
'generate_content': 'file-text',
'generate_images': 'image',
'publish_content': 'paper-plane',
'optimize_content': 'sparkles',
'link_content': 'link',
}
activity_colors = {
'run_clustering': 'purple',
'generate_content_ideas': 'orange',
'generate_content': 'blue',
'generate_images': 'pink',
'publish_content': 'green',
'optimize_content': 'cyan',
'link_content': 'indigo',
}
recent_activity = []
for log in recent_logs:
# Parse friendly message from the log
message = log.message or f'{log.function_name} completed'
recent_activity.append({
'id': log.id,
'type': log.function_name,
'description': message,
'timestamp': log.created_at.isoformat(),
'icon': activity_icons.get(log.function_name, 'bolt'),
'color': activity_colors.get(log.function_name, 'gray'),
'credits': float(log.cost) if log.cost else 0,
})
# ==========================================
# 5. CONTENT VELOCITY
# ==========================================
# Content created in different periods
now = timezone.now()
content_today = Content.objects.filter(
content_filter,
created_at__date=now.date()
).count()
content_this_week = Content.objects.filter(
content_filter,
created_at__gte=now - timedelta(days=7)
).count()
content_this_month = Content.objects.filter(
content_filter,
created_at__gte=now - timedelta(days=30)
).count()
# Daily breakdown for last 7 days
daily_content = []
for i in range(7):
day = now - timedelta(days=6-i)
count = Content.objects.filter(
content_filter,
created_at__date=day.date()
).count()
daily_content.append({
'date': day.date().isoformat(),
'count': count,
})
content_velocity = {
'today': content_today,
'this_week': content_this_week,
'this_month': content_this_month,
'daily': daily_content,
'average_per_day': round(content_this_week / 7, 1) if content_this_week else 0,
}
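The 7-day loop above, isolated: day `i` is `today - (6 - i)` days, so the list runs oldest first. Here `created_dates` stands in for the `Content` rows' creation dates:

```python
from datetime import date, timedelta

def daily_breakdown(created_dates, today):
    """Per-day counts for the trailing week, oldest day first, matching
    the velocity widget's daily series."""
    out = []
    for i in range(7):
        day = today - timedelta(days=6 - i)
        out.append({"date": day.isoformat(),
                    "count": sum(1 for d in created_dates if d == day)})
    return out
```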
# ==========================================
# 6. AUTOMATION STATUS
# ==========================================
# Check automation settings
from igny8_core.business.automation.models import AutomationSettings
automation_enabled = AutomationSettings.objects.filter(
account=account,
enabled=True
).exists()
active_automations = AutomationSettings.objects.filter(
account=account,
enabled=True
).count()
automation = {
'enabled': automation_enabled,
'active_count': active_automations,
'status': 'active' if automation_enabled else 'inactive',
}
# ==========================================
# 7. SITES SUMMARY
# ==========================================
sites_data = []
for site in sites[:5]: # Top 5 sites
site_keywords = Keywords.objects.filter(site=site).count()
site_content = Content.objects.filter(site=site).count()
site_published = Content.objects.filter(site=site, status='published').count()
sites_data.append({
'id': site.id,
'name': site.name,
'domain': site.url,
'keywords': site_keywords,
'content': site_content,
'published': site_published,
'has_integration': site.has_integration,
'sectors_count': site.sectors.filter(is_active=True).count(),
})
return Response({
'needs_attention': needs_attention,
'pipeline': pipeline,
'ai_operations': ai_operations,
'recent_activity': recent_activity,
'content_velocity': content_velocity,
'automation': automation,
'sites': sites_data,
'account': {
'credits': account.credits,
'name': account.name,
},
'generated_at': timezone.now().isoformat(),
})

View File

@@ -124,22 +124,12 @@ class IsEditorOrAbove(permissions.BasePermission):
class IsAdminOrOwner(permissions.BasePermission):
"""
Permission class that requires admin or owner role only
- OR user belongs to aws-admin account
For settings, keys, billing operations
"""
def has_permission(self, request, view):
if not request.user or not request.user.is_authenticated:
return False
- # Check if user belongs to aws-admin account (case-insensitive)
- if hasattr(request.user, 'account') and request.user.account:
- account_name = getattr(request.user.account, 'name', None)
- account_slug = getattr(request.user.account, 'slug', None)
- if account_name and account_name.lower() == 'aws admin':
- return True
- if account_slug == 'aws-admin':
- return True
# Check user role
if hasattr(request.user, 'role'):
role = request.user.role
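The aws-admin bypass in this hunk, sketched as a pure function. The final role comparison is an assumption, since the role check is truncated here; `("admin", "owner")` follows from the class name and docstring:

```python
def is_admin_or_owner(role, account_name=None, account_slug=None):
    """Users in the 'AWS Admin' account (matched case-insensitively by
    name, or exactly by slug) bypass the role requirement entirely."""
    if account_name and account_name.lower() == "aws admin":
        return True
    if account_slug == "aws-admin":
        return True
    return role in ("admin", "owner")  # assumed role set
```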

View File

@@ -6,10 +6,9 @@ from rest_framework.routers import DefaultRouter
from .account_views import (
AccountSettingsViewSet,
TeamManagementViewSet,
- UsageAnalyticsViewSet,
- DashboardStatsViewSet
+ UsageAnalyticsViewSet
)
from igny8_core.modules.system.settings_views import ContentGenerationSettingsViewSet
from .dashboard_views import DashboardSummaryViewSet
router = DefaultRouter()
@@ -17,10 +16,6 @@ urlpatterns = [
# Account settings (non-router endpoints for simplified access)
path('settings/', AccountSettingsViewSet.as_view({'get': 'retrieve', 'patch': 'partial_update'}), name='account-settings'),
# AI Settings - Content Generation Settings per the plan
# GET/POST /api/v1/account/settings/ai/
path('settings/ai/', ContentGenerationSettingsViewSet.as_view({'get': 'list', 'post': 'create', 'put': 'create'}), name='ai-settings'),
# Team management
path('team/', TeamManagementViewSet.as_view({'get': 'list', 'post': 'create'}), name='team-list'),
path('team/<int:pk>/', TeamManagementViewSet.as_view({'delete': 'destroy'}), name='team-detail'),
@@ -28,8 +23,8 @@ urlpatterns = [
# Usage analytics
path('usage/analytics/', UsageAnalyticsViewSet.as_view({'get': 'overview'}), name='usage-analytics'),
- # Dashboard stats (real data for home page)
- path('dashboard/stats/', DashboardStatsViewSet.as_view({'get': 'stats'}), name='dashboard-stats'),
+ # Dashboard summary (aggregated data for main dashboard)
+ path('dashboard/summary/', DashboardSummaryViewSet.as_view({'get': 'summary'}), name='dashboard-summary'),
path('', include(router.urls)),
]
]

View File

@@ -117,7 +117,7 @@ class PlanResource(resources.ModelResource):
class Meta:
model = Plan
fields = ('id', 'name', 'slug', 'price', 'billing_cycle', 'max_sites', 'max_users',
- 'max_keywords', 'max_ahrefs_queries', 'included_credits', 'is_active', 'is_featured')
+ 'max_keywords', 'max_content_words', 'included_credits', 'is_active', 'is_featured')
export_order = fields
import_id_fields = ('id',)
skip_unchanged = True
@@ -127,7 +127,7 @@ class PlanResource(resources.ModelResource):
class PlanAdmin(ImportExportMixin, Igny8ModelAdmin):
resource_class = PlanResource
"""Plan admin - Global, no account filtering needed"""
- list_display = ['name', 'slug', 'price', 'billing_cycle', 'max_sites', 'max_users', 'max_keywords', 'max_ahrefs_queries', 'included_credits', 'is_active', 'is_featured']
+ list_display = ['name', 'slug', 'price', 'billing_cycle', 'max_sites', 'max_users', 'max_keywords', 'max_content_words', 'included_credits', 'is_active', 'is_featured']
list_filter = ['is_active', 'billing_cycle', 'is_internal', 'is_featured']
search_fields = ['name', 'slug']
readonly_fields = ['created_at']
@@ -147,12 +147,12 @@ class PlanAdmin(ImportExportMixin, Igny8ModelAdmin):
'description': 'Persistent limits for account-level resources'
}),
('Hard Limits (Persistent)', {
- 'fields': ('max_keywords',),
+ 'fields': ('max_keywords', 'max_clusters'),
'description': 'Total allowed - never reset'
}),
('Monthly Limits (Reset on Billing Cycle)', {
- 'fields': ('max_ahrefs_queries',),
- 'description': 'Monthly Ahrefs keyword research queries (0 = disabled)'
+ 'fields': ('max_content_ideas', 'max_content_words', 'max_images_basic', 'max_images_premium', 'max_image_prompts'),
+ 'description': 'Monthly allowances - reset at billing cycle'
}),
('Billing & Credits', {
'fields': ('included_credits', 'extra_credit_price', 'allow_credit_topup', 'auto_credit_topup_threshold', 'auto_credit_topup_amount', 'credits_per_month')
@@ -214,7 +214,6 @@ class AccountAdmin(ExportMixin, AccountAdminMixin, SimpleHistoryAdmin, Igny8Mode
'bulk_add_credits',
'bulk_subtract_credits',
'bulk_soft_delete',
'bulk_hard_delete',
]
def get_queryset(self, request):
@@ -455,39 +454,14 @@ class AccountAdmin(ExportMixin, AccountAdminMixin, SimpleHistoryAdmin, Igny8Mode
bulk_subtract_credits.short_description = 'Subtract credits from accounts'
def bulk_soft_delete(self, request, queryset):
"""Soft delete selected accounts and all related data"""
"""Soft delete selected accounts"""
count = 0
for account in queryset:
if account.slug != 'aws-admin': # Protect admin account
- account.delete() # Soft delete via SoftDeletableModel (now cascades)
+ account.delete() # Soft delete via SoftDeletableModel
count += 1
- self.message_user(request, f'{count} account(s) and all related data soft deleted.', messages.SUCCESS)
- bulk_soft_delete.short_description = 'Soft delete accounts (with cascade)'
- def bulk_hard_delete(self, request, queryset):
- """PERMANENTLY delete selected accounts and ALL related data - cannot be undone!"""
- import traceback
- count = 0
- errors = []
- for account in queryset:
- if account.slug == 'aws-admin': # Protect admin account
- errors.append(f'{account.name}: Protected system account')
- continue
- try:
- account.hard_delete_with_cascade() # Permanently delete everything
- count += 1
- except Exception as e:
- # Log full traceback for debugging
- import logging
- logger = logging.getLogger(__name__)
- logger.error(f'Hard delete failed for account {account.pk} ({account.name}): {traceback.format_exc()}')
- errors.append(f'{account.name}: {str(e)}')
- if count > 0:
- self.message_user(request, f'{count} account(s) and ALL related data permanently deleted.', messages.SUCCESS)
- if errors:
- self.message_user(request, f'Errors: {"; ".join(errors)}', messages.ERROR)
- bulk_hard_delete.short_description = '⚠️ PERMANENTLY delete accounts (irreversible!)'
+ self.message_user(request, f'{count} account(s) soft deleted.', messages.SUCCESS)
+ bulk_soft_delete.short_description = 'Soft delete selected accounts'
class SubscriptionResource(resources.ModelResource):
@@ -1007,7 +981,7 @@ class UserAdmin(ExportMixin, BaseUserAdmin, Igny8ModelAdmin):
list_display = ['email', 'username', 'account', 'role', 'is_active', 'is_staff', 'created_at']
list_filter = ['role', 'account', 'is_active', 'is_staff']
search_fields = ['email', 'username']
- readonly_fields = ['created_at', 'updated_at', 'password_display']
+ readonly_fields = ['created_at', 'updated_at']
fieldsets = BaseUserAdmin.fieldsets + (
('IGNY8 Info', {'fields': ('account', 'role')}),
@@ -1025,45 +999,8 @@ class UserAdmin(ExportMixin, BaseUserAdmin, Igny8ModelAdmin):
'bulk_activate',
'bulk_deactivate',
'bulk_send_password_reset',
'bulk_set_temporary_password',
]
def password_display(self, obj):
"""Show password hash with copy button (for debugging only)"""
if obj.password:
return f'Hash: {obj.password[:50]}...'
return 'No password set'
password_display.short_description = 'Password Hash'
def bulk_set_temporary_password(self, request, queryset):
"""Set a temporary password for selected users and display it"""
import secrets
import string
# Generate a secure random password
alphabet = string.ascii_letters + string.digits
temp_password = ''.join(secrets.choice(alphabet) for _ in range(12))
users_updated = []
for user in queryset:
user.set_password(temp_password)
user.save(update_fields=['password'])
users_updated.append(user.email)
if users_updated:
# Display the password in the message (only visible to admin)
self.message_user(
request,
f'Temporary password set for {len(users_updated)} user(s): "{temp_password}" (same password for all selected users)',
messages.SUCCESS
)
self.message_user(
request,
f'Users updated: {", ".join(users_updated)}',
messages.INFO
)
bulk_set_temporary_password.short_description = '🔑 Set temporary password (will display)'
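The password recipe in `bulk_set_temporary_password` above, isolated for reference:

```python
import secrets
import string

def make_temp_password(length=12):
    """Cryptographically random mix of ASCII letters and digits via the
    secrets module, as the bulk admin action generates it."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Using `secrets.choice` rather than `random.choice` matters here: the `random` module is predictable and unsuitable for credentials.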
def get_queryset(self, request):
"""Filter users by account for non-superusers"""
qs = super().get_queryset(request)

View File

@@ -25,7 +25,18 @@ class Command(BaseCommand):
'max_users': 999999,
'max_sites': 999999,
'max_keywords': 999999,
- 'max_ahrefs_queries': 999999,
+ 'max_clusters': 999999,
+ 'max_content_ideas': 999999,
+ 'monthly_word_count_limit': 999999999,
+ 'daily_content_tasks': 999999,
+ 'daily_ai_requests': 999999,
+ 'daily_ai_request_limit': 999999,
+ 'monthly_ai_credit_limit': 999999,
+ 'monthly_image_count': 999999,
+ 'daily_image_generation_limit': 999999,
+ 'monthly_cluster_ai_credits': 999999,
+ 'monthly_content_ai_credits': 999999,
+ 'monthly_image_ai_credits': 999999,
'included_credits': 999999,
'is_active': True,
'features': ['ai_writer', 'image_gen', 'auto_publish', 'custom_prompts', 'unlimited'],

View File

@@ -1,100 +0,0 @@
# Generated by IGNY8 Phase 1: Simplify Credits & Limits
# Migration: Remove unused limit fields, add Ahrefs query tracking
# Date: January 5, 2026
from django.db import migrations, models
import django.core.validators
class Migration(migrations.Migration):
"""
Simplify the credits and limits system:
PLAN MODEL:
- REMOVE: max_clusters, max_content_ideas, max_content_words,
max_images_basic, max_images_premium, max_image_prompts
- ADD: max_ahrefs_queries (monthly keyword research queries)
ACCOUNT MODEL:
- REMOVE: usage_content_ideas, usage_content_words, usage_images_basic,
usage_images_premium, usage_image_prompts
- ADD: usage_ahrefs_queries
RATIONALE:
All consumption is now controlled by credits only. The only non-credit
limits are: sites, users, keywords (hard limits) and ahrefs_queries (monthly).
"""
dependencies = [
('igny8_core_auth', '0018_add_country_remove_intent_seedkeyword'),
]
operations = [
# STEP 1: Add new Ahrefs fields FIRST (before removing old ones)
migrations.AddField(
model_name='plan',
name='max_ahrefs_queries',
field=models.IntegerField(
default=0,
validators=[django.core.validators.MinValueValidator(0)],
help_text='Monthly Ahrefs keyword research queries (0 = disabled)'
),
),
migrations.AddField(
model_name='account',
name='usage_ahrefs_queries',
field=models.IntegerField(
default=0,
validators=[django.core.validators.MinValueValidator(0)],
help_text='Ahrefs queries used this month'
),
),
# STEP 2: Remove unused Plan fields
migrations.RemoveField(
model_name='plan',
name='max_clusters',
),
migrations.RemoveField(
model_name='plan',
name='max_content_ideas',
),
migrations.RemoveField(
model_name='plan',
name='max_content_words',
),
migrations.RemoveField(
model_name='plan',
name='max_images_basic',
),
migrations.RemoveField(
model_name='plan',
name='max_images_premium',
),
migrations.RemoveField(
model_name='plan',
name='max_image_prompts',
),
# STEP 3: Remove unused Account fields
migrations.RemoveField(
model_name='account',
name='usage_content_ideas',
),
migrations.RemoveField(
model_name='account',
name='usage_content_words',
),
migrations.RemoveField(
model_name='account',
name='usage_images_basic',
),
migrations.RemoveField(
model_name='account',
name='usage_images_premium',
),
migrations.RemoveField(
model_name='account',
name='usage_image_prompts',
),
]

View File

@@ -1,39 +0,0 @@
# Generated by Django 5.2.9 on 2026-01-06 00:11
import django.core.validators
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0019_simplify_credits_limits'),
]
operations = [
migrations.RemoveField(
model_name='historicalaccount',
name='usage_content_ideas',
),
migrations.RemoveField(
model_name='historicalaccount',
name='usage_content_words',
),
migrations.RemoveField(
model_name='historicalaccount',
name='usage_image_prompts',
),
migrations.RemoveField(
model_name='historicalaccount',
name='usage_images_basic',
),
migrations.RemoveField(
model_name='historicalaccount',
name='usage_images_premium',
),
migrations.AddField(
model_name='historicalaccount',
name='usage_ahrefs_queries',
field=models.IntegerField(default=0, help_text='Ahrefs queries used this month', validators=[django.core.validators.MinValueValidator(0)]),
),
]

View File

@@ -108,7 +108,11 @@ class Account(SoftDeletableModel):
tax_id = models.CharField(max_length=100, blank=True, help_text="VAT/Tax ID number")
# Monthly usage tracking (reset on billing cycle)
usage_ahrefs_queries = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Ahrefs queries used this month")
usage_content_ideas = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Content ideas generated this month")
usage_content_words = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Content words generated this month")
usage_images_basic = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Basic AI images this month")
usage_images_premium = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Premium AI images this month")
usage_image_prompts = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Image prompts this month")
usage_period_start = models.DateTimeField(null=True, blank=True, help_text="Current billing period start")
usage_period_end = models.DateTimeField(null=True, blank=True, help_text="Current billing period end")
@@ -153,152 +157,12 @@ class Account(SoftDeletableModel):
# System accounts bypass all filtering restrictions
return self.slug in ['aws-admin', 'default-account', 'default']
def soft_delete(self, user=None, reason=None, retention_days=None, cascade=True):
"""
Soft delete the account and optionally cascade to all related objects.
Args:
user: User performing the deletion
reason: Reason for deletion
retention_days: Days before permanent deletion
cascade: If True, also soft-delete related objects that support soft delete,
and hard-delete objects that don't support soft delete
"""
def soft_delete(self, user=None, reason=None, retention_days=None):
if self.is_system_account():
from django.core.exceptions import PermissionDenied
raise PermissionDenied("System account cannot be deleted.")
if cascade:
self._cascade_delete_related(user=user, reason=reason, retention_days=retention_days, hard_delete=False)
return super().soft_delete(user=user, reason=reason, retention_days=retention_days)
def _cascade_delete_related(self, user=None, reason=None, retention_days=None, hard_delete=False):
"""
Delete all related objects when account is deleted.
For soft delete: soft-deletes objects with SoftDeletableModel, hard-deletes others
For hard delete: hard-deletes everything
"""
from igny8_core.common.soft_delete import SoftDeletableModel
# List of related objects to delete (in order to avoid FK constraint issues)
# Related names from Account reverse relations
related_names = [
# Content & Planning related (delete first due to dependencies)
'contentclustermap_set',
'contentattribute_set',
'contenttaxonomy_set',
'content_set',
'images_set',
'contentideas_set',
'tasks_set',
'keywords_set',
'clusters_set',
'strategy_set',
# Automation
'automation_runs',
'automation_configs',
# Publishing & Integration
'syncevent_set',
'publishingsettings_set',
'publishingrecord_set',
'deploymentrecord_set',
'siteintegration_set',
# Notifications & Optimization
'notification_set',
'optimizationtask_set',
# AI & Settings
'aitasklog_set',
'aiprompt_set',
'aisettings_set',
'authorprofile_set',
# Billing (preserve invoices/payments for audit, delete others)
'planlimitusage_set',
'creditusagelog_set',
'credittransaction_set',
'accountpaymentmethod_set',
'payment_set',
'invoice_set',
# Settings
'modulesettings_set',
'moduleenablesettings_set',
'integrationsettings_set',
'user_settings',
'accountsettings_set',
# Core (last due to dependencies)
'sector_set',
'site_set',
# Users (delete after sites to avoid FK issues, owner is SET_NULL)
'users',
# Subscription (OneToOne)
'subscription',
]
for related_name in related_names:
try:
related = getattr(self, related_name, None)
if related is None:
continue
# Handle OneToOne fields (subscription)
if hasattr(related, 'pk'):
# It's a single object (OneToOneField)
if hard_delete:
if hasattr(related, 'hard_delete'):
related.hard_delete()
else:
related.delete()
elif isinstance(related, SoftDeletableModel):
related.soft_delete(user=user, reason=reason, retention_days=retention_days)
else:
# Non-soft-deletable single object - hard delete
related.delete()
else:
# It's a RelatedManager (ForeignKey)
queryset = related.all()
if queryset.exists():
if hard_delete:
# Hard delete all
if hasattr(queryset, 'hard_delete'):
queryset.hard_delete()
else:
for obj in queryset:
if hasattr(obj, 'hard_delete'):
obj.hard_delete()
else:
obj.delete()
else:
# Soft delete if supported, otherwise hard delete
model = queryset.model
if issubclass(model, SoftDeletableModel):
for obj in queryset:
obj.soft_delete(user=user, reason=reason, retention_days=retention_days)
else:
queryset.delete()
except Exception as e:
# Log but don't fail - some relations may not exist
import logging
logger = logging.getLogger(__name__)
logger.warning(f"Failed to delete related {related_name} for account {self.pk}: {e}")
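The per-object branch in `_cascade_delete_related` — soft-delete when the model supports it, hard-delete otherwise — can be sketched independently of Django (the class names below are minimal stand-ins for `SoftDeletableModel` and a plain model, not the project's real classes):

```python
class SoftDeletable:
    """Stand-in for the project's SoftDeletableModel mixin."""
    def __init__(self):
        self.is_deleted = False
        self.exists = True

    def soft_delete(self, **kwargs):
        self.is_deleted = True

    def delete(self):
        self.exists = False


class PlainModel:
    """Stand-in for a model without soft-delete support."""
    def __init__(self):
        self.exists = True

    def delete(self):
        self.exists = False


def cascade_delete(objs, hard_delete=False):
    """Mirror of the dispatch: soft-delete if supported, else hard-delete."""
    for obj in objs:
        if not hard_delete and isinstance(obj, SoftDeletable):
            obj.soft_delete()
        else:
            obj.delete()
```

This is why the account's soft delete can still remove non-soft-deletable rows permanently: anything without the mixin falls through to a hard `delete()`.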
def hard_delete_with_cascade(self, using=None, keep_parents=False):
"""
Permanently delete the account and ALL related objects.
This bypasses soft-delete and removes everything from the database.
USE WITH CAUTION - this cannot be undone!
"""
if self.is_system_account():
from django.core.exceptions import PermissionDenied
raise PermissionDenied("System account cannot be deleted.")
# Clear owner reference first to avoid FK constraint issues
# (owner is SET_NULL but we're deleting the user who is the owner)
if self.owner:
self.owner = None
self.save(update_fields=['owner'])
# Cascade hard-delete all related objects first
self._cascade_delete_related(hard_delete=True)
# Finally hard-delete the account itself
return super().hard_delete(using=using, keep_parents=keep_parents)
def delete(self, using=None, keep_parents=False):
return self.soft_delete()
@@ -352,12 +216,37 @@ class Plan(models.Model):
validators=[MinValueValidator(1)],
help_text="Maximum total keywords allowed (hard limit)"
)
max_clusters = models.IntegerField(
default=100,
validators=[MinValueValidator(1)],
help_text="Maximum AI keyword clusters allowed (hard limit)"
)
# Monthly Limits (Reset on billing cycle)
max_ahrefs_queries = models.IntegerField(
default=0,
max_content_ideas = models.IntegerField(
default=300,
validators=[MinValueValidator(1)],
help_text="Maximum AI content ideas per month"
)
max_content_words = models.IntegerField(
default=100000,
validators=[MinValueValidator(1)],
help_text="Maximum content words per month (e.g., 100000 = 100K words)"
)
max_images_basic = models.IntegerField(
default=300,
validators=[MinValueValidator(0)],
help_text="Monthly Ahrefs keyword research queries (0 = disabled)"
help_text="Maximum basic AI images per month"
)
max_images_premium = models.IntegerField(
default=60,
validators=[MinValueValidator(0)],
help_text="Maximum premium AI images per month (DALL-E)"
)
max_image_prompts = models.IntegerField(
default=300,
validators=[MinValueValidator(0)],
help_text="Maximum image prompts per month"
)
# Billing & Credits (Phase 0: Credit-only system)

View File

@@ -13,7 +13,9 @@ class PlanSerializer(serializers.ModelSerializer):
'id', 'name', 'slug', 'price', 'original_price', 'billing_cycle', 'annual_discount_percent',
'is_featured', 'features', 'is_active',
'max_users', 'max_sites', 'max_industries', 'max_author_profiles',
'max_keywords', 'max_ahrefs_queries',
'max_keywords', 'max_clusters',
'max_content_ideas', 'max_content_words',
'max_images_basic', 'max_images_premium', 'max_image_prompts',
'included_credits', 'extra_credit_price', 'allow_credit_topup',
'auto_credit_topup_threshold', 'auto_credit_topup_amount',
'stripe_product_id', 'stripe_price_id', 'credits_per_month'
@@ -53,7 +55,7 @@ class AccountSerializer(serializers.ModelSerializer):
fields = [
'id', 'name', 'slug', 'owner', 'plan', 'plan_id',
'credits', 'status', 'payment_method',
'subscription', 'billing_country', 'created_at'
'subscription', 'created_at'
]
read_only_fields = ['owner', 'created_at']
@@ -171,7 +173,7 @@ class SiteSerializer(serializers.ModelSerializer):
from igny8_core.business.integration.models import SiteIntegration
return SiteIntegration.objects.filter(
site=obj,
platform='wordpress',
integration_type='wordpress',
is_active=True
).exists() or bool(obj.wp_url)
@@ -406,20 +408,11 @@ class RegisterSerializer(serializers.Serializer):
)
# Generate unique slug for account
# Clean the base slug: lowercase, replace spaces and underscores with hyphens
import re
import random
import string
base_slug = re.sub(r'[^a-z0-9-]', '', account_name.lower().replace(' ', '-').replace('_', '-'))[:40] or 'account'
# Add random suffix to prevent collisions (especially during concurrent registrations)
random_suffix = ''.join(random.choices(string.ascii_lowercase + string.digits, k=6))
slug = f"{base_slug}-{random_suffix}"
# Ensure uniqueness with fallback counter
base_slug = account_name.lower().replace(' ', '-').replace('_', '-')[:50] or 'account'
slug = base_slug
counter = 1
while Account.objects.filter(slug=slug).exists():
slug = f"{base_slug}-{random_suffix}-{counter}"
slug = f"{base_slug}-{counter}"
counter += 1
# Create account with status and credits seeded (0 for paid pending)
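The slug logic in the hunk above combines sanitization, a random suffix against concurrent-registration collisions, and a counter fallback. A self-contained sketch of that scheme (the `exists` callback is a hypothetical stand-in for `Account.objects.filter(slug=slug).exists()`):

```python
import random
import re
import string

def make_account_slug(name: str, exists=lambda slug: False) -> str:
    """Build a URL-safe account slug that is unlikely to collide.

    1. Sanitize: lowercase, spaces/underscores -> hyphens, strip the rest.
    2. Append a 6-char random suffix so concurrent signups rarely clash.
    3. Fall back to a counter if the DB check still reports a collision.
    """
    base = re.sub(r'[^a-z0-9-]', '',
                  name.lower().replace(' ', '-').replace('_', '-'))[:40] or 'account'
    suffix = ''.join(random.choices(string.ascii_lowercase + string.digits, k=6))
    slug = f"{base}-{suffix}"
    counter = 1
    while exists(slug):
        slug = f"{base}-{suffix}-{counter}"
        counter += 1
    return slug
```

Note the design choice: the random suffix does the heavy lifting, so the counter loop almost never iterates — unlike a counter-only scheme, which races under concurrent registrations.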

View File

@@ -109,38 +109,16 @@ class RegisterView(APIView):
refresh_expires_at = timezone.now() + get_refresh_token_expiry()
user_serializer = UserSerializer(user)
# Build response data
response_data = {
'user': user_serializer.data,
'tokens': {
'access': access_token,
'refresh': refresh_token,
'access_expires_at': access_expires_at.isoformat(),
'refresh_expires_at': refresh_expires_at.isoformat(),
}
}
# NOTE: Payment checkout is NO LONGER created at registration
# User will complete payment on /account/plans after signup
# This simplifies the signup flow and consolidates all payment handling
# Send welcome email (if enabled in settings)
try:
from igny8_core.modules.system.email_models import EmailSettings
from igny8_core.business.billing.services.email_service import send_welcome_email
email_settings = EmailSettings.get_settings()
if email_settings.send_welcome_emails and account:
send_welcome_email(user, account)
except Exception as e:
# Don't fail registration if email fails
import logging
logger = logging.getLogger(__name__)
logger.error(f"Failed to send welcome email for user {user.id}: {e}")
return success_response(
data=response_data,
data={
'user': user_serializer.data,
'tokens': {
'access': access_token,
'refresh': refresh_token,
'access_expires_at': access_expires_at.isoformat(),
'refresh_expires_at': refresh_expires_at.isoformat(),
}
},
message='Registration successful',
status_code=status.HTTP_201_CREATED,
request=request
@@ -285,128 +263,6 @@ class LoginView(APIView):
)
@extend_schema(
tags=['Authentication'],
summary='Request Password Reset',
description='Request password reset email'
)
class PasswordResetRequestView(APIView):
"""Request password reset endpoint - sends email with reset token."""
permission_classes = [permissions.AllowAny]
def post(self, request):
from .serializers import RequestPasswordResetSerializer
from .models import PasswordResetToken
serializer = RequestPasswordResetSerializer(data=request.data)
if not serializer.is_valid():
return error_response(
error='Validation failed',
errors=serializer.errors,
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
email = serializer.validated_data['email']
try:
user = User.objects.get(email=email)
except User.DoesNotExist:
# Don't reveal if email exists - return success anyway
return success_response(
message='If an account with that email exists, a password reset link has been sent.',
request=request
)
# Generate secure token
import secrets
token = secrets.token_urlsafe(32)
# Create reset token (expires in 1 hour)
from django.utils import timezone
from datetime import timedelta
expires_at = timezone.now() + timedelta(hours=1)
PasswordResetToken.objects.create(
user=user,
token=token,
expires_at=expires_at
)
# Send password reset email
import logging
logger = logging.getLogger(__name__)
logger.info(f"[PASSWORD_RESET] Attempting to send reset email to: {email}")
try:
from igny8_core.business.billing.services.email_service import send_password_reset_email
result = send_password_reset_email(user, token)
logger.info(f"[PASSWORD_RESET] Email send result: {result}")
print(f"[PASSWORD_RESET] Email send result: {result}") # Console output
except Exception as e:
logger.error(f"[PASSWORD_RESET] Failed to send password reset email: {e}", exc_info=True)
print(f"[PASSWORD_RESET] ERROR: {e}") # Console output
return success_response(
message='If an account with that email exists, a password reset link has been sent.',
request=request
)
@extend_schema(
tags=['Authentication'],
summary='Reset Password',
description='Reset password using token from email'
)
class PasswordResetConfirmView(APIView):
"""Confirm password reset with token."""
permission_classes = [permissions.AllowAny]
def post(self, request):
from .serializers import ResetPasswordSerializer
from .models import PasswordResetToken
from django.utils import timezone
serializer = ResetPasswordSerializer(data=request.data)
if not serializer.is_valid():
return error_response(
error='Validation failed',
errors=serializer.errors,
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
token = serializer.validated_data['token']
new_password = serializer.validated_data['new_password']
try:
reset_token = PasswordResetToken.objects.get(
token=token,
used=False,
expires_at__gt=timezone.now()
)
except PasswordResetToken.DoesNotExist:
return error_response(
error='Invalid or expired reset token',
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
# Reset password
user = reset_token.user
user.set_password(new_password)
user.save()
# Mark token as used
reset_token.used = True
reset_token.save()
return success_response(
message='Password reset successfully. You can now log in with your new password.',
request=request
)
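The two views above implement a standard token-based reset flow: issue an unguessable token with a short TTL, then accept it only if unexpired and unused. The core of that flow, minus the Django models (function names here are illustrative, not the project's API):

```python
import secrets
from datetime import datetime, timedelta, timezone

def issue_reset_token(ttl_hours: int = 1) -> tuple[str, datetime]:
    """Create an unguessable URL-safe token and its expiry timestamp."""
    token = secrets.token_urlsafe(32)  # ~43 chars of URL-safe entropy
    expires_at = datetime.now(timezone.utc) + timedelta(hours=ttl_hours)
    return token, expires_at

def token_is_valid(expires_at: datetime, used: bool) -> bool:
    """A token is usable only if it is unexpired and single-use."""
    return not used and expires_at > datetime.now(timezone.utc)
```

Marking the token `used=True` after a successful reset (as `PasswordResetConfirmView` does) is what makes it single-use; without that, a leaked link stays live for the full TTL.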
@extend_schema(
tags=['Authentication'],
summary='Change Password',
@@ -522,77 +378,6 @@ class RefreshTokenView(APIView):
)
@extend_schema(
tags=['Authentication'],
summary='Get Country List',
description='Returns list of countries for registration country selection'
)
class CountryListView(APIView):
"""Returns list of countries for signup dropdown"""
permission_classes = [permissions.AllowAny] # Public endpoint
def get(self, request):
"""Get list of countries with codes and names"""
# Comprehensive list of countries for billing purposes
countries = [
{'code': 'US', 'name': 'United States'},
{'code': 'GB', 'name': 'United Kingdom'},
{'code': 'CA', 'name': 'Canada'},
{'code': 'AU', 'name': 'Australia'},
{'code': 'DE', 'name': 'Germany'},
{'code': 'FR', 'name': 'France'},
{'code': 'ES', 'name': 'Spain'},
{'code': 'IT', 'name': 'Italy'},
{'code': 'NL', 'name': 'Netherlands'},
{'code': 'BE', 'name': 'Belgium'},
{'code': 'CH', 'name': 'Switzerland'},
{'code': 'AT', 'name': 'Austria'},
{'code': 'SE', 'name': 'Sweden'},
{'code': 'NO', 'name': 'Norway'},
{'code': 'DK', 'name': 'Denmark'},
{'code': 'FI', 'name': 'Finland'},
{'code': 'IE', 'name': 'Ireland'},
{'code': 'PT', 'name': 'Portugal'},
{'code': 'PL', 'name': 'Poland'},
{'code': 'CZ', 'name': 'Czech Republic'},
{'code': 'NZ', 'name': 'New Zealand'},
{'code': 'SG', 'name': 'Singapore'},
{'code': 'HK', 'name': 'Hong Kong'},
{'code': 'JP', 'name': 'Japan'},
{'code': 'KR', 'name': 'South Korea'},
{'code': 'IN', 'name': 'India'},
{'code': 'PK', 'name': 'Pakistan'},
{'code': 'BD', 'name': 'Bangladesh'},
{'code': 'AE', 'name': 'United Arab Emirates'},
{'code': 'SA', 'name': 'Saudi Arabia'},
{'code': 'ZA', 'name': 'South Africa'},
{'code': 'NG', 'name': 'Nigeria'},
{'code': 'EG', 'name': 'Egypt'},
{'code': 'KE', 'name': 'Kenya'},
{'code': 'BR', 'name': 'Brazil'},
{'code': 'MX', 'name': 'Mexico'},
{'code': 'AR', 'name': 'Argentina'},
{'code': 'CL', 'name': 'Chile'},
{'code': 'CO', 'name': 'Colombia'},
{'code': 'PE', 'name': 'Peru'},
{'code': 'MY', 'name': 'Malaysia'},
{'code': 'TH', 'name': 'Thailand'},
{'code': 'VN', 'name': 'Vietnam'},
{'code': 'PH', 'name': 'Philippines'},
{'code': 'ID', 'name': 'Indonesia'},
{'code': 'TR', 'name': 'Turkey'},
{'code': 'RU', 'name': 'Russia'},
{'code': 'UA', 'name': 'Ukraine'},
{'code': 'RO', 'name': 'Romania'},
{'code': 'GR', 'name': 'Greece'},
{'code': 'IL', 'name': 'Israel'},
{'code': 'TW', 'name': 'Taiwan'},
]
# Sort alphabetically by name
countries.sort(key=lambda x: x['name'])
return Response({'countries': countries})
@extend_schema(exclude=True) # Exclude from public API documentation - internal authenticated endpoint
class MeView(APIView):
"""Get current user information."""
@@ -610,86 +395,12 @@ class MeView(APIView):
)
@extend_schema(
tags=['Authentication'],
summary='Unsubscribe from Emails',
description='Unsubscribe a user from marketing, billing, or all email notifications'
)
class UnsubscribeView(APIView):
"""Handle email unsubscribe requests with signed URLs."""
permission_classes = [permissions.AllowAny]
def post(self, request):
"""
Process unsubscribe request.
Expected payload:
- email: The email address to unsubscribe
- type: Type of emails to unsubscribe from (marketing, billing, all)
- ts: Timestamp from signed URL
- sig: HMAC signature from signed URL
"""
from igny8_core.business.billing.services.email_service import verify_unsubscribe_signature
import logging
logger = logging.getLogger(__name__)
email = request.data.get('email')
email_type = request.data.get('type', 'all')
timestamp = request.data.get('ts')
signature = request.data.get('sig')
# Validate required fields
if not email or not timestamp or not signature:
return error_response(
error='Missing required parameters',
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
try:
timestamp = int(timestamp)
except (ValueError, TypeError):
return error_response(
error='Invalid timestamp',
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
# Verify signature
if not verify_unsubscribe_signature(email, email_type, timestamp, signature):
return error_response(
error='Invalid or expired unsubscribe link',
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
# Log the unsubscribe request
# In production, update user preferences or use email provider's suppression list
logger.info(f'Unsubscribe request processed: email={email}, type={email_type}')
# TODO: Implement preference storage
# Options:
# 1. Add email preference fields to User model
# 2. Use Resend's suppression list API
# 3. Create EmailPreferences model
return success_response(
message=f'Successfully unsubscribed from {email_type} emails',
request=request
)
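`verify_unsubscribe_signature` is imported from the email service and its implementation is not shown in this diff, but the view's payload (`email`, `type`, `ts`, `sig`) implies an HMAC-signed, time-limited link. A plausible sketch under those assumptions (the secret and function names are stand-ins, not the project's code):

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # assumption: stands in for settings.SECRET_KEY

def sign_unsubscribe(email: str, email_type: str, ts: int) -> str:
    """Sign the unsubscribe parameters so links cannot be forged."""
    msg = f"{email}:{email_type}:{ts}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_unsubscribe(email: str, email_type: str, ts: int,
                       sig: str, max_age: int = 7 * 86400) -> bool:
    """Reject stale or tampered links; compare_digest avoids timing leaks."""
    if time.time() - ts > max_age:
        return False
    expected = sign_unsubscribe(email, email_type, ts)
    return hmac.compare_digest(expected, sig)
```

Because the timestamp is part of the signed message, an attacker cannot extend a link's lifetime by editing `ts` without invalidating `sig`.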
urlpatterns = [
path('', include(router.urls)),
path('register/', csrf_exempt(RegisterView.as_view()), name='auth-register'),
path('login/', csrf_exempt(LoginView.as_view()), name='auth-login'),
path('refresh/', csrf_exempt(RefreshTokenView.as_view()), name='auth-refresh'),
path('change-password/', ChangePasswordView.as_view(), name='auth-change-password'),
path('password-reset/', csrf_exempt(PasswordResetRequestView.as_view()), name='auth-password-reset-request'),
path('password-reset/confirm/', csrf_exempt(PasswordResetConfirmView.as_view()), name='auth-password-reset-confirm'),
path('me/', MeView.as_view(), name='auth-me'),
path('countries/', CountryListView.as_view(), name='auth-countries'),
path('unsubscribe/', csrf_exempt(UnsubscribeView.as_view()), name='auth-unsubscribe'),
]

View File

@@ -1267,21 +1267,16 @@ class AuthViewSet(viewsets.GenericViewSet):
expires_at=expires_at
)
# Send password reset email using the email service
# Send email (async via Celery if available, otherwise sync)
try:
from igny8_core.business.billing.services.email_service import send_password_reset_email
send_password_reset_email(user, token)
except Exception as e:
# Fallback to Django's send_mail if email service fails
import logging
logger = logging.getLogger(__name__)
logger.error(f"Failed to send password reset email via email service: {e}")
from igny8_core.modules.system.tasks import send_password_reset_email
send_password_reset_email.delay(user.id, token)
except:
# Fallback to sync email sending
from django.core.mail import send_mail
from django.conf import settings
frontend_url = getattr(settings, 'FRONTEND_URL', 'https://app.igny8.com')
reset_url = f"{frontend_url}/reset-password?token={token}"
reset_url = f"{request.scheme}://{request.get_host()}/reset-password?token={token}"
send_mail(
subject='Reset Your IGNY8 Password',

View File

@@ -1,22 +0,0 @@
# Generated migration for adding initial_snapshot field to AutomationRun
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('automation', '0005_add_default_image_service'),
]
operations = [
migrations.AddField(
model_name='automationrun',
name='initial_snapshot',
field=models.JSONField(
blank=True,
default=dict,
help_text='Snapshot of initial queue sizes: {stage_1_initial, stage_2_initial, ..., total_initial_items}'
),
),
]

View File

@@ -88,13 +88,6 @@ class AutomationRun(models.Model):
total_credits_used = models.IntegerField(default=0)
# Initial queue snapshot - captured at run start for accurate progress tracking
initial_snapshot = models.JSONField(
default=dict,
blank=True,
help_text="Snapshot of initial queue sizes: {stage_1_initial, stage_2_initial, ..., total_initial_items}"
)
# JSON results per stage
stage_1_result = models.JSONField(null=True, blank=True, help_text="{keywords_processed, clusters_created, batches}")
stage_2_result = models.JSONField(null=True, blank=True, help_text="{clusters_processed, ideas_created}")

View File

@@ -98,18 +98,6 @@ class AutomationService:
raise ValueError("Automation already running for this site (cache lock)")
try:
# Capture initial queue snapshot for accurate progress tracking
# Do this BEFORE credit check to validate there's work to do
initial_snapshot = self._capture_initial_snapshot()
# Check if there are any items to process across all stages
total_pending = initial_snapshot.get('total_initial_items', 0)
if total_pending == 0:
raise ValueError(
"No items available to process. Add keywords, clusters, ideas, tasks, "
"or content to the pipeline before running automation."
)
# Estimate credits needed
estimated_credits = self.estimate_credits()
@@ -129,7 +117,6 @@ class AutomationService:
trigger_type=trigger_type,
status='running',
current_stage=1,
initial_snapshot=initial_snapshot,
)
# Log start
@@ -137,10 +124,6 @@ class AutomationService:
run_id, self.account.id, self.site.id, 0,
f"Automation started (trigger: {trigger_type})"
)
self.logger.log_stage_progress(
run_id, self.account.id, self.site.id, 0,
f"Initial snapshot captured: {initial_snapshot['total_initial_items']} total items across all stages"
)
self.logger.log_stage_progress(
run_id, self.account.id, self.site.id, 0,
f"Credit check: Account has {self.account.credits} credits, estimated need: {estimated_credits} credits"
@@ -161,12 +144,10 @@ class AutomationService:
start_time = time.time()
# Query pending keywords
# FIXED: Match pipeline_overview query - use status='new' only
# Keywords with status='new' are ready for clustering, regardless of cluster FK
# (If cluster FK is set but status='new', it means the old cluster was deleted)
pending_keywords = Keywords.objects.filter(
site=self.site,
status='new',
cluster__isnull=True,
disabled=False
)
@@ -236,21 +217,9 @@ class AutomationService:
clusters_created = 0
batches_run = 0
credits_before = self._get_credits_used()
keyword_ids = list(pending_keywords.values_list('id', flat=True))
# INITIAL SAVE: Set keywords_total immediately so frontend shows accurate counts from start
self.run.stage_1_result = {
'keywords_processed': 0,
'keywords_total': len(keyword_ids),
'clusters_created': 0,
'batches_run': 0,
'credits_used': 0,
'time_elapsed': '0m 0s',
'in_progress': True
}
self.run.save(update_fields=['stage_1_result'])
for i in range(0, len(keyword_ids), actual_batch_size):
# Check if automation should stop (paused or cancelled)
should_stop, reason = self._check_should_stop()
@@ -285,21 +254,19 @@ class AutomationService:
stage_number, f"Processing batch {batch_num}/{total_batches} ({len(batch)} keywords)"
)
# Call AI function via AIEngine (runs synchronously - no Celery subtask)
# Call AI function via AIEngine
engine = AIEngine(account=self.account)
result = engine.execute(
fn=AutoClusterFunction(),
payload={'ids': batch}
)
# NOTE: AIEngine.execute() runs synchronously and returns immediately
# No Celery task polling needed
if not result.get('success'):
error_msg = result.get('error', 'Unknown error')
logger.warning(f"[AutomationService] Clustering failed for batch {batch_num}: {error_msg}")
# Continue to next batch
# Monitor task
task_id = result.get('task_id')
if task_id:
# FIXED: Pass continue_on_error=True to keep processing other batches on failure
self._wait_for_task(task_id, stage_number, f"Batch {batch_num}", continue_on_error=True)
keywords_processed += len(batch)
batches_run += 1
@@ -309,22 +276,6 @@ class AutomationService:
stage_number, f"Batch {batch_num} complete"
)
# INCREMENTAL SAVE: Update stage result after each batch for real-time UI progress
clusters_so_far = Clusters.objects.filter(
site=self.site,
created_at__gte=self.run.started_at
).count()
self.run.stage_1_result = {
'keywords_processed': keywords_processed,
'keywords_total': len(keyword_ids),
'clusters_created': clusters_so_far,
'batches_run': batches_run,
'credits_used': self._get_credits_used() - credits_before,
'time_elapsed': self._format_time_elapsed(start_time),
'in_progress': True
}
self.run.save(update_fields=['stage_1_result'])
# Emit per-item trace event for UI progress tracking
try:
self.logger.append_trace(self.account.id, self.site.id, self.run.run_id, {
@@ -399,10 +350,6 @@ class AutomationService:
}
self.run.current_stage = 2
self.run.total_credits_used += credits_used
# UPDATE SNAPSHOT: Record new items created for Stage 2
self._update_snapshot_after_stage(1, {'stage_2_initial': clusters_created})
self.run.save()
logger.info(f"[AutomationService] Stage 1 complete: {keywords_processed} keywords → {clusters_created} clusters")
@@ -422,10 +369,10 @@ class AutomationService:
start_time = time.time()
# ADDED: Pre-stage validation - verify Stage 1 completion
# FIXED: Match pipeline_overview query - use status='new' only
pending_keywords = Keywords.objects.filter(
site=self.site,
status='new',
cluster__isnull=True,
disabled=False
).count()
@@ -475,21 +422,8 @@ class AutomationService:
# Process one at a time
clusters_processed = 0
credits_before = self._get_credits_used()
# INITIAL SAVE: Set clusters_total immediately so frontend shows accurate counts from start
cluster_list = list(pending_clusters)
total_clusters = len(cluster_list)
self.run.stage_2_result = {
'clusters_processed': 0,
'clusters_total': total_clusters,
'ideas_created': 0,
'credits_used': 0,
'time_elapsed': '0m 0s',
'in_progress': True
}
self.run.save(update_fields=['stage_2_result'])
for cluster in cluster_list:
for cluster in pending_clusters:
# Check if automation should stop (paused or cancelled)
should_stop, reason = self._check_should_stop()
if should_stop:
@@ -523,42 +457,25 @@ class AutomationService:
stage_number, f"Generating ideas for cluster: {cluster.name}"
)
# Call AI function via AIEngine (runs synchronously - no Celery subtask)
# Call AI function via AIEngine
engine = AIEngine(account=self.account)
result = engine.execute(
fn=GenerateIdeasFunction(),
payload={'ids': [cluster.id]}
)
# NOTE: AIEngine.execute() runs synchronously and returns immediately
# No Celery task polling needed
if not result.get('success'):
error_msg = result.get('error', 'Unknown error')
logger.warning(f"[AutomationService] Idea generation failed for cluster '{cluster.name}': {error_msg}")
# Continue to next cluster
# Monitor task
task_id = result.get('task_id')
if task_id:
# FIXED: Pass continue_on_error=True to keep processing other clusters on failure
self._wait_for_task(task_id, stage_number, f"Cluster '{cluster.name}'", continue_on_error=True)
clusters_processed += 1
self.logger.log_stage_progress(
self.run.run_id, self.account.id, self.site.id,
stage_number, f"Cluster '{cluster.name}' complete"
)
# INCREMENTAL SAVE: Update stage result after each cluster for real-time UI progress
ideas_so_far = ContentIdeas.objects.filter(
site=self.site,
created_at__gte=self.run.started_at
).count()
self.run.stage_2_result = {
'clusters_processed': clusters_processed,
'clusters_total': total_clusters,
'ideas_created': ideas_so_far,
'credits_used': self._get_credits_used() - credits_before,
'time_elapsed': self._format_time_elapsed(start_time),
'in_progress': True
}
self.run.save(update_fields=['stage_2_result'])
except Exception as e:
# FIXED: Log error but continue processing remaining clusters
error_msg = f"Failed to generate ideas for cluster '{cluster.name}': {str(e)}"
@@ -600,10 +517,6 @@ class AutomationService:
}
self.run.current_stage = 3
self.run.total_credits_used += credits_used
# UPDATE SNAPSHOT: Record new items created for Stage 3
self._update_snapshot_after_stage(2, {'stage_3_initial': ideas_created})
self.run.save()
logger.info(f"[AutomationService] Stage 2 complete: {clusters_processed} clusters → {ideas_created} ideas")
@@ -766,10 +679,6 @@ class AutomationService:
'time_elapsed': time_elapsed
}
self.run.current_stage = 4
# UPDATE SNAPSHOT: Record new items created for Stage 4
self._update_snapshot_after_stage(3, {'stage_4_initial': tasks_created})
self.run.save()
logger.info(f"[AutomationService] Stage 3 complete: {ideas_processed} ideas → {tasks_created} tasks")
@@ -832,22 +741,11 @@ class AutomationService:
# Process one at a time
tasks_processed = 0
credits_before = self._get_credits_used()
# FIXED: Ensure ALL tasks are processed by iterating over queryset list
task_list = list(pending_tasks)
total_tasks = len(task_list)
# INITIAL SAVE: Set tasks_total immediately so the frontend shows accurate counts from the start
self.run.stage_4_result = {
'tasks_processed': 0,
'tasks_total': total_tasks,
'content_created': 0,
'credits_used': 0,
'time_elapsed': '0m 0s',
'in_progress': True
}
self.run.save(update_fields=['stage_4_result'])
for idx, task in enumerate(task_list, 1):
# Check if automation should stop (paused or cancelled)
should_stop, reason = self._check_should_stop()
@@ -882,22 +780,19 @@ class AutomationService:
stage_number, f"Generating content for task {idx}/{total_tasks}: {task.title}"
)
# Call AI function via AIEngine (runs synchronously - no Celery subtask)
# Call AI function via AIEngine
engine = AIEngine(account=self.account)
result = engine.execute(
fn=GenerateContentFunction(),
payload={'ids': [task.id]}
)
# NOTE: AIEngine.execute() runs synchronously and returns immediately
# The result contains 'task_id' which is the DB Task model ID, NOT a Celery task ID
# So we do NOT call _wait_for_task here
if not result.get('success'):
error_msg = result.get('error', 'Unknown error')
logger.warning(f"[AutomationService] Content generation failed for task '{task.title}': {error_msg}")
# Continue to next task
# Monitor task
task_id = result.get('task_id')
if task_id:
# FIXED: Pass continue_on_error=True to keep processing other tasks on failure
self._wait_for_task(task_id, stage_number, f"Task '{task.title}'", continue_on_error=True)
tasks_processed += 1
# Log progress
@@ -906,21 +801,6 @@ class AutomationService:
stage_number, f"Task '{task.title}' complete ({tasks_processed}/{total_tasks})"
)
# INCREMENTAL SAVE: Update stage result after each item for real-time UI progress
content_created_so_far = Content.objects.filter(
site=self.site,
created_at__gte=self.run.started_at
).count()
self.run.stage_4_result = {
'tasks_processed': tasks_processed,
'tasks_total': total_tasks,
'content_created': content_created_so_far,
'credits_used': self._get_credits_used() - credits_before,
'time_elapsed': self._format_time_elapsed(start_time),
'in_progress': True
}
self.run.save(update_fields=['stage_4_result'])
# Emit per-item trace event for UI progress tracking
try:
self.logger.append_trace(self.account.id, self.site.id, self.run.run_id, {
@@ -1079,21 +959,10 @@ class AutomationService:
# Process one at a time
content_processed = 0
credits_before = self._get_credits_used()
content_list = list(content_without_images)
total_content = len(content_list)
# INITIAL SAVE: Set content_total immediately so the frontend shows accurate counts from the start
self.run.stage_5_result = {
'content_processed': 0,
'content_total': total_content,
'prompts_created': 0,
'credits_used': 0,
'time_elapsed': '0m 0s',
'in_progress': True
}
self.run.save(update_fields=['stage_5_result'])
for idx, content in enumerate(content_list, 1):
# Check if automation should stop (paused or cancelled)
should_stop, reason = self._check_should_stop()
@@ -1128,47 +997,25 @@ class AutomationService:
stage_number, f"Extracting prompts {idx}/{total_content}: {content.title}"
)
# Call AI function via AIEngine (runs synchronously - no Celery subtask)
# Call AI function via AIEngine
engine = AIEngine(account=self.account)
result = engine.execute(
fn=GenerateImagePromptsFunction(),
payload={'ids': [content.id]}
)
# NOTE: AIEngine.execute() runs synchronously and returns immediately
# No Celery task polling needed
if not result.get('success'):
error_msg = result.get('error', 'Unknown error')
logger.warning(f"[AutomationService] Image prompt generation failed for content '{content.title}': {error_msg}")
# Continue to next content
# Monitor task
task_id = result.get('task_id')
if task_id:
# FIXED: Pass continue_on_error=True to keep processing other content on failure
self._wait_for_task(task_id, stage_number, f"Content '{content.title}'", continue_on_error=True)
content_processed += 1
self.logger.log_stage_progress(
self.run.run_id, self.account.id, self.site.id,
stage_number, f"Content '{content.title}' complete ({content_processed}/{total_content})"
)
# ADDED: Incremental save after each content piece for real-time frontend progress
# This allows the frontend to show accurate progress during Stage 5
current_prompts_created = Images.objects.filter(
site=self.site,
status='pending',
created_at__gte=self.run.started_at
).count()
current_credits_used = self._get_credits_used() - credits_before
current_time_elapsed = self._format_time_elapsed(start_time)
self.run.stage_5_result = {
'content_processed': content_processed,
'content_total': total_content,
'prompts_created': current_prompts_created,
'credits_used': current_credits_used,
'time_elapsed': current_time_elapsed,
'in_progress': True
}
self.run.save(update_fields=['stage_5_result'])
except Exception as e:
# FIXED: Log error but continue processing remaining content
error_msg = f"Failed to extract prompts for content '{content.title}': {str(e)}"
@@ -1286,22 +1133,10 @@ class AutomationService:
# Process one at a time
images_processed = 0
credits_before = self._get_credits_used()
image_list = list(pending_images)
total_images = len(image_list)
# INITIAL SAVE: Set images_total immediately so the frontend shows accurate counts from the start
self.run.stage_6_result = {
'images_processed': 0,
'images_total': total_images,
'images_generated': 0,
'content_moved_to_review': 0,
'credits_used': 0,
'time_elapsed': '0m 0s',
'in_progress': True
}
self.run.save(update_fields=['stage_6_result'])
for idx, image in enumerate(image_list, 1):
# Check if automation should stop (paused or cancelled)
should_stop, reason = self._check_should_stop()
@@ -1310,7 +1145,7 @@ class AutomationService:
self.run.run_id, self.account.id, self.site.id,
stage_number, f"Stage {reason} - saving progress ({images_processed} images processed)"
)
# Save current progress - FIXED: preserve images_total for accurate frontend display
# Save current progress
images_generated = Images.objects.filter(
site=self.site,
status='completed',
@@ -1326,14 +1161,12 @@ class AutomationService:
from django.utils import timezone
self.run.stage_6_result = {
'images_processed': images_processed,
'images_total': total_images, # FIXED: Preserve total for progress calculation
'images_generated': images_generated,
'content_moved_to_review': content_moved,
'credits_used': credits_used,
'time_elapsed': time_elapsed,
'partial': True,
'stopped_reason': reason,
'in_progress': False
'stopped_reason': reason
}
self.run.total_credits_used += credits_used
self.run.save()
@@ -1370,32 +1203,11 @@ class AutomationService:
self._wait_for_task(task_id, stage_number, f"Image for '{content_title}'", continue_on_error=True)
images_processed += 1
self.logger.log_stage_progress(
self.run.run_id, self.account.id, self.site.id,
stage_number, f"Image generated for '{content_title}' ({images_processed}/{total_images})"
)
# ADDED: Incremental save after each image for real-time frontend progress
# This allows the frontend to show accurate progress during Stage 6
current_images_generated = Images.objects.filter(
site=self.site,
status__in=['generated', 'completed'],
updated_at__gte=self.run.started_at
).count()
current_credits_used = self._get_credits_used() - credits_before
current_time_elapsed = self._format_time_elapsed(start_time)
self.run.stage_6_result = {
'images_processed': images_processed,
'images_total': total_images,
'images_generated': current_images_generated,
'content_moved_to_review': 0, # Updated at end
'credits_used': current_credits_used,
'time_elapsed': current_time_elapsed,
'in_progress': True
}
self.run.save(update_fields=['stage_6_result'])
except Exception as e:
# FIXED: Log error but continue processing remaining images
content_title = image.content.title if image.content else 'Unknown'
@@ -1407,7 +1219,7 @@ class AutomationService:
)
# Continue to next image
continue
# ADDED: Within-stage delay between images
if idx < total_images:
delay = self.config.within_stage_delay
@@ -1448,14 +1260,12 @@ class AutomationService:
stage_6_start = start_time # Capture stage start time
self.run.stage_6_result = {
'images_processed': images_processed,
'images_total': total_images, # ADDED: Include total for consistency
'images_generated': images_generated,
'content_moved_to_review': content_moved_to_review,
'credits_used': credits_used,
'started_at': self.run.started_at.isoformat(),
'completed_at': timezone.now().isoformat(),
'time_elapsed': time_elapsed,
'in_progress': False
'time_elapsed': time_elapsed
}
self.run.current_stage = 7
self.run.total_credits_used += credits_used
@@ -1472,39 +1282,9 @@ class AutomationService:
time.sleep(delay)
def run_stage_7(self):
"""Stage 7: Auto-Approve Review Content
This stage automatically approves content in 'review' status and
marks it as 'approved' (ready for publishing to WordPress).
Respects PublishingSettings:
- If auto_approval_enabled is False, skip approval and keep content in 'review'
"""
"""Stage 7: Manual Review Gate (Count Only)"""
stage_number = 7
stage_name = "Review → Approved"
start_time = time.time()
# Check publishing settings for auto-approval
from igny8_core.business.integration.models import PublishingSettings
publishing_settings, _ = PublishingSettings.get_or_create_for_site(self.site)
if not publishing_settings.auto_approval_enabled:
self.logger.log_stage_progress(
self.run.run_id, self.account.id, self.site.id,
stage_number, "Auto-approval is disabled for this site - skipping Stage 7"
)
self.run.stage_7_result = {
'ready_for_review': 0,
'approved_count': 0,
'content_ids': [],
'skipped': True,
'reason': 'auto_approval_disabled'
}
self.run.status = 'completed'
self.run.completed_at = datetime.now()
self.run.save()
cache.delete(f'automation_lock_{self.site.id}')
return
stage_name = "Manual Review Gate"
# Query content ready for review
ready_for_review = Content.objects.filter(
@@ -1513,6 +1293,7 @@ class AutomationService:
)
total_count = ready_for_review.count()
content_ids = list(ready_for_review.values_list('id', flat=True))
# Log stage start
self.logger.log_stage_start(
@@ -1520,162 +1301,21 @@ class AutomationService:
stage_number, stage_name, total_count
)
if total_count == 0:
self.logger.log_stage_progress(
self.run.run_id, self.account.id, self.site.id,
stage_number, "No content in review to approve - completing automation"
)
self.run.stage_7_result = {
'ready_for_review': 0,
'approved_count': 0,
'content_ids': []
}
self.run.status = 'completed'
self.run.completed_at = datetime.now()
self.run.save()
cache.delete(f'automation_lock_{self.site.id}')
return
content_list = list(ready_for_review)
approved_count = 0
# INITIAL SAVE: Set totals immediately
self.run.stage_7_result = {
'ready_for_review': total_count,
'review_total': total_count,
'approved_count': 0,
'content_ids': [],
'in_progress': True
}
self.run.save(update_fields=['stage_7_result'])
for idx, content in enumerate(content_list, 1):
# Check if automation should stop (paused or cancelled)
should_stop, reason = self._check_should_stop()
if should_stop:
self.logger.log_stage_progress(
self.run.run_id, self.account.id, self.site.id,
stage_number, f"Stage {reason} - saving progress ({approved_count} content approved)"
)
time_elapsed = self._format_time_elapsed(start_time)
self.run.stage_7_result = {
'ready_for_review': total_count,
'review_total': total_count,
'approved_count': approved_count,
'content_ids': list(Content.objects.filter(
site=self.site, status='approved', updated_at__gte=self.run.started_at
).values_list('id', flat=True)),
'partial': True,
'stopped_reason': reason,
'time_elapsed': time_elapsed
}
self.run.save()
return
try:
self.logger.log_stage_progress(
self.run.run_id, self.account.id, self.site.id,
stage_number, f"Approving content {idx}/{total_count}: {content.title}"
)
# Approve content by changing status to 'approved' (ready for publishing)
content.status = 'approved'
content.save(update_fields=['status', 'updated_at'])
approved_count += 1
self.logger.log_stage_progress(
self.run.run_id, self.account.id, self.site.id,
stage_number, f"Content '{content.title}' approved ({approved_count}/{total_count})"
)
# Incremental save for real-time frontend progress
current_time_elapsed = self._format_time_elapsed(start_time)
self.run.stage_7_result = {
'ready_for_review': total_count,
'review_total': total_count,
'approved_count': approved_count,
'content_ids': [], # Don't store full list during processing
'time_elapsed': current_time_elapsed,
'in_progress': True
}
self.run.save(update_fields=['stage_7_result'])
except Exception as e:
error_msg = f"Failed to approve content '{content.title}': {str(e)}"
logger.error(f"[AutomationService] {error_msg}", exc_info=True)
self.logger.log_stage_error(
self.run.run_id, self.account.id, self.site.id,
stage_number, error_msg
)
continue
# Small delay between approvals to prevent overwhelming the system
if idx < total_count:
time.sleep(0.5)
# Final results
time_elapsed = self._format_time_elapsed(start_time)
content_ids = list(Content.objects.filter(
site=self.site,
status='approved',
updated_at__gte=self.run.started_at
).values_list('id', flat=True))
self.logger.log_stage_complete(
self.logger.log_stage_progress(
self.run.run_id, self.account.id, self.site.id,
stage_number, approved_count, time_elapsed, 0
stage_number, f"Automation complete. {total_count} content pieces ready for review"
)
# Check if auto-publish is enabled and queue approved content for publishing
published_count = 0
if publishing_settings.auto_publish_enabled and approved_count > 0:
if content_ids:
self.logger.log_stage_progress(
self.run.run_id, self.account.id, self.site.id,
stage_number, f"Auto-publish enabled - queuing {len(content_ids)} content items for publishing"
stage_number, f"Content IDs ready: {content_ids[:10]}..." if len(content_ids) > 10 else f"Content IDs ready: {content_ids}"
)
# Get WordPress integration for this site
from igny8_core.business.integration.models import SiteIntegration
wp_integration = SiteIntegration.objects.filter(
site=self.site,
platform='wordpress',
is_active=True
).first()
if wp_integration:
from igny8_core.tasks.wordpress_publishing import publish_content_to_wordpress
for content_id in content_ids:
try:
# Queue publish task
publish_content_to_wordpress.delay(
content_id=content_id,
site_integration_id=wp_integration.id
)
published_count += 1
except Exception as e:
logger.error(f"[AutomationService] Failed to queue publish for content {content_id}: {str(e)}")
self.logger.log_stage_progress(
self.run.run_id, self.account.id, self.site.id,
stage_number, f"Queued {published_count} content items for WordPress publishing"
)
else:
self.logger.log_stage_progress(
self.run.run_id, self.account.id, self.site.id,
stage_number, "No active WordPress integration found - skipping auto-publish"
)
# Save results
self.run.stage_7_result = {
'ready_for_review': total_count,
'review_total': total_count,
'approved_count': approved_count,
'content_ids': content_ids,
'time_elapsed': time_elapsed,
'in_progress': False,
'auto_published_count': published_count if publishing_settings.auto_publish_enabled else 0,
'auto_publish_enabled': publishing_settings.auto_publish_enabled,
'content_ids': content_ids
}
self.run.status = 'completed'
self.run.completed_at = datetime.now()
@@ -1684,8 +1324,7 @@ class AutomationService:
# Release lock
cache.delete(f'automation_lock_{self.site.id}')
logger.info(f"[AutomationService] Stage 7 complete: {approved_count} content pieces approved" +
(f", {published_count} queued for publishing" if published_count > 0 else " (ready for publishing)"))
logger.info(f"[AutomationService] Stage 7 complete: Automation ended, {total_count} content ready for review")
def pause_automation(self):
"""Pause current automation run"""
@@ -1703,8 +1342,8 @@ class AutomationService:
def estimate_credits(self) -> int:
"""Estimate total credits needed for automation"""
# Count items - FIXED: Match pipeline_overview query
keywords_count = Keywords.objects.filter(site=self.site, status='new', disabled=False).count()
# Count items
keywords_count = Keywords.objects.filter(site=self.site, status='new', cluster__isnull=True, disabled=False).count()
clusters_count = Clusters.objects.filter(site=self.site, status='new').exclude(ideas__isnull=False).count()
ideas_count = ContentIdeas.objects.filter(site=self.site, status='new').count()
tasks_count = Tasks.objects.filter(site=self.site, status='queued').count()
@@ -1722,88 +1361,6 @@ class AutomationService:
logger.info(f"[AutomationService] Estimated credits: {total}")
return total
def _capture_initial_snapshot(self) -> dict:
"""
Capture initial queue sizes at run start for accurate progress tracking.
This snapshot is used to calculate global progress percentage correctly.
"""
# Stage 1: Keywords pending clustering
# FIXED: Match pipeline_overview query - use status='new' only
stage_1_initial = Keywords.objects.filter(
site=self.site, status='new', disabled=False
).count()
# Stage 2: Clusters needing ideas
stage_2_initial = Clusters.objects.filter(
site=self.site, status='new', disabled=False
).exclude(ideas__isnull=False).count()
# Stage 3: Ideas ready to be converted to tasks
stage_3_initial = ContentIdeas.objects.filter(
site=self.site, status='new'
).count()
# Stage 4: Tasks ready for content generation
stage_4_initial = Tasks.objects.filter(
site=self.site, status='queued'
).count()
# Stage 5: Content needing image prompts
stage_5_initial = Content.objects.filter(
site=self.site, status='draft'
).annotate(images_count=Count('images')).filter(images_count=0).count()
# Stage 6: Image prompts pending generation
stage_6_initial = Images.objects.filter(
site=self.site, status='pending'
).count()
# Stage 7: Content ready for review
stage_7_initial = Content.objects.filter(
site=self.site, status='review'
).count()
snapshot = {
'stage_1_initial': stage_1_initial,
'stage_2_initial': stage_2_initial,
'stage_3_initial': stage_3_initial,
'stage_4_initial': stage_4_initial,
'stage_5_initial': stage_5_initial,
'stage_6_initial': stage_6_initial,
'stage_7_initial': stage_7_initial,
'total_initial_items': stage_1_initial + stage_2_initial + stage_3_initial +
stage_4_initial + stage_5_initial + stage_6_initial + stage_7_initial,
}
logger.info(f"[AutomationService] Initial snapshot captured: {snapshot}")
return snapshot
def _update_snapshot_after_stage(self, completed_stage: int, updates: dict):
"""
Update snapshot after a stage completes with new items created.
This ensures accurate counts for cascading stages.
Args:
completed_stage: The stage number that just completed
updates: Dict of snapshot keys to update, e.g., {'stage_4_initial': 12}
"""
if not self.run or not self.run.initial_snapshot:
return
snapshot = self.run.initial_snapshot.copy()
old_total = snapshot.get('total_initial_items', 0)
for key, value in updates.items():
old_value = snapshot.get(key, 0)
snapshot[key] = value
# Adjust total
old_total = old_total - old_value + value
snapshot['total_initial_items'] = old_total
self.run.initial_snapshot = snapshot
logger.info(f"[AutomationService] Snapshot updated after Stage {completed_stage}: {updates}, new total: {old_total}")
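The adjustment rule above can be shown as a minimal standalone sketch (function and field names here are hypothetical stand-ins; the real method mutates `self.run.initial_snapshot` in place): replacing one stage's initial count shifts the running total by the delta between the new and old values.

```python
# Hypothetical stand-in for _update_snapshot_after_stage: replacing one
# stage's initial count shifts the running total by (new - old).
def update_snapshot(snapshot: dict, updates: dict) -> dict:
    result = snapshot.copy()
    total = result.get('total_initial_items', 0)
    for key, value in updates.items():
        old_value = result.get(key, 0)
        result[key] = value
        total = total - old_value + value  # adjust total by the delta
    result['total_initial_items'] = total
    return result

snapshot = {'stage_3_initial': 5, 'stage_4_initial': 0, 'total_initial_items': 5}
updated = update_snapshot(snapshot, {'stage_4_initial': 12})
# total_initial_items becomes 5 - 0 + 12 = 17
```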
# Helper methods
def _wait_for_task(self, task_id: str, stage_number: int, item_name: str, continue_on_error: bool = True):
@@ -1871,22 +1428,14 @@ class AutomationService:
raise
def _get_credits_used(self) -> int:
"""
Get total credits used by this run so far.
Uses CreditUsageLog (same source as /account/usage/credits endpoint) for accuracy.
"""
"""Get total credits used by this run so far"""
if not self.run:
return 0
# FIXED: Use CreditUsageLog instead of counting AITaskLog records
# This matches the source of truth used by /account/usage/credits endpoint
from igny8_core.business.billing.models import CreditUsageLog
from django.db.models import Sum
total = CreditUsageLog.objects.filter(
total = AITaskLog.objects.filter(
account=self.account,
created_at__gte=self.run.started_at
).aggregate(total=Sum('credits_used'))['total'] or 0
).aggregate(total=Count('id'))['total'] or 0
return total
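The difference this fix addresses can be illustrated with a plain-Python stand-in for the two aggregates (the dict rows and field name are illustrative, not the real models): counting `AITaskLog` rows implicitly assumes one credit per task, while summing `CreditUsageLog.credits_used` respects operations that consume multiple credits.

```python
# Each usage-log row may consume more than one credit, so a row count
# (old behaviour) understates spend compared with a column sum (the fix).
logs = [{'credits_used': 3}, {'credits_used': 1}, {'credits_used': 5}]

count_based = len(logs)                               # Count('id') analogue
sum_based = sum(row['credits_used'] for row in logs)  # Sum('credits_used') analogue
```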
@@ -1972,16 +1521,7 @@ class AutomationService:
).order_by('id')
processed = self._get_processed_count(1)
remaining = queue.count()
# Use keywords_total from incremental result if available
result = getattr(self.run, 'stage_1_result', None)
if result and result.get('keywords_total'):
total = result.get('keywords_total')
elif self.run and self.run.initial_snapshot:
total = self.run.initial_snapshot.get('stage_1_initial', remaining + processed)
else:
total = remaining + processed
total = queue.count() + processed
return {
'stage_number': 1,
@@ -1992,7 +1532,7 @@ class AutomationService:
'percentage': round((processed / total * 100) if total > 0 else 0),
'currently_processing': self._get_current_items(queue, 3),
'up_next': self._get_next_items(queue, 2, skip=3),
'remaining_count': remaining
'remaining_count': queue.count()
}
def _get_stage_2_state(self) -> dict:
@@ -2002,16 +1542,7 @@ class AutomationService:
).order_by('id')
processed = self._get_processed_count(2)
remaining = queue.count()
# Use clusters_total from incremental result if available
result = getattr(self.run, 'stage_2_result', None)
if result and result.get('clusters_total'):
total = result.get('clusters_total')
elif self.run and self.run.initial_snapshot:
total = self.run.initial_snapshot.get('stage_2_initial', remaining + processed)
else:
total = remaining + processed
total = queue.count() + processed
return {
'stage_number': 2,
@@ -2022,13 +1553,13 @@ class AutomationService:
'percentage': round((processed / total * 100) if total > 0 else 0),
'currently_processing': self._get_current_items(queue, 1),
'up_next': self._get_next_items(queue, 2, skip=1),
'remaining_count': remaining
'remaining_count': queue.count()
}
def _get_stage_3_state(self) -> dict:
"""Get processing state for Stage 3: Ideas → Tasks"""
queue = ContentIdeas.objects.filter(
site=self.site, status='new' # Fixed: Match pipeline_overview status
site=self.site, status='approved'
).order_by('id')
processed = self._get_processed_count(3)
@@ -2049,21 +1580,11 @@ class AutomationService:
def _get_stage_4_state(self) -> dict:
"""Get processing state for Stage 4: Tasks → Content"""
queue = Tasks.objects.filter(
site=self.site, status='queued' # Fixed: Match pipeline_overview status
site=self.site, status='ready'
).order_by('id')
processed = self._get_processed_count(4)
remaining = queue.count()
# Use tasks_total from incremental result if available (during active processing)
result = getattr(self.run, 'stage_4_result', None)
if result and result.get('tasks_total'):
total = result.get('tasks_total')
elif self.run and self.run.initial_snapshot:
# Fall back to snapshot (may be updated after Stage 3)
total = self.run.initial_snapshot.get('stage_4_initial', remaining + processed)
else:
total = remaining + processed
total = queue.count() + processed
return {
'stage_number': 4,
@@ -2074,7 +1595,7 @@ class AutomationService:
'percentage': round((processed / total * 100) if total > 0 else 0),
'currently_processing': self._get_current_items(queue, 1),
'up_next': self._get_next_items(queue, 2, skip=1),
'remaining_count': remaining
'remaining_count': queue.count()
}
def _get_stage_5_state(self) -> dict:
@@ -2145,30 +1666,51 @@ class AutomationService:
}
def _get_processed_count(self, stage: int) -> int:
"""
Get accurate processed count from stage result.
Uses stage-specific keys for correct counting instead of DB queries.
"""
"""Get count of items processed in current stage during this run"""
if not self.run:
return 0
# Get the stage result from the run
result = getattr(self.run, f'stage_{stage}_result', None)
if not result:
return 0
# Count items that were updated during this run and changed status from pending
if stage == 1:
# Keywords that changed status from 'new' during this run
return Keywords.objects.filter(
site=self.site,
updated_at__gte=self.run.started_at
).exclude(status='new').count()
elif stage == 2:
# Clusters that changed status from 'new' during this run
return Clusters.objects.filter(
site=self.site,
updated_at__gte=self.run.started_at
).exclude(status='new').count()
elif stage == 3:
# Ideas that changed status from 'approved' during this run
return ContentIdeas.objects.filter(
site=self.site,
updated_at__gte=self.run.started_at
).exclude(status='approved').count()
elif stage == 4:
# Tasks that changed status from 'ready'/'queued' during this run
return Tasks.objects.filter(
site=self.site,
updated_at__gte=self.run.started_at
).exclude(status__in=['ready', 'queued']).count()
elif stage == 5:
# Content processed for image prompts during this run
return Content.objects.filter(
site=self.site,
updated_at__gte=self.run.started_at,
images__isnull=False
).distinct().count()
elif stage == 6:
# Images completed during this run
return Images.objects.filter(
site=self.site,
updated_at__gte=self.run.started_at,
status='completed'
).count()
# Map stage to correct result key for processed count
key_map = {
1: 'keywords_processed',
2: 'clusters_processed',
3: 'ideas_processed',
4: 'tasks_processed',
5: 'content_processed',
6: 'images_processed',
7: 'ready_for_review'
}
return result.get(key_map.get(stage, ''), 0)
return 0
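As a quick sanity check on the lookup above, the key map degrades gracefully: a missing result or a stage absent from the map yields 0 (sketch with a trimmed map; the real map covers stages 1-7).

```python
# Trimmed copy of the stage -> result-key map and its fallback behaviour.
key_map = {1: 'keywords_processed', 2: 'clusters_processed', 4: 'tasks_processed'}

def processed_count(stage, result):
    if not result:
        return 0
    # key_map.get(stage, '') maps unknown stages to '', which the
    # result dict then resolves to the default of 0
    return result.get(key_map.get(stage, ''), 0)
```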
def _get_current_items(self, queryset, count: int) -> list:
"""Get currently processing items"""


@@ -387,17 +387,16 @@ class AutomationViewSet(viewsets.ViewSet):
return counts, total
# Stage 1: Keywords pending clustering
# Stage 1: Keywords pending clustering (keep previous "pending" semantics but also return status breakdown)
stage_1_counts, stage_1_total = _counts_by_status(
Keywords,
extra_filter={'disabled': False}
)
# FIXED: Stage 1 pending = all keywords with status='new' (ready for clustering)
# This should match the "New" count shown in the Keywords metric card
# Previously filtered by cluster__isnull=True, which caused a mismatch
# pending definition used by the UI previously (new & not clustered)
stage_1_pending = Keywords.objects.filter(
site=site,
status='new',
cluster__isnull=True,
disabled=False
).count()
@@ -715,237 +714,3 @@ class AutomationViewSet(viewsets.ViewSet):
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
@extend_schema(tags=['Automation'])
@action(detail=False, methods=['get'], url_path='run_progress')
def run_progress(self, request):
"""
GET /api/v1/automation/run_progress/?site_id=123&run_id=abc
Unified endpoint for ALL run progress data - global + per-stage.
Replaces multiple separate API calls with a single comprehensive response.
Response includes:
- run: Current run status and metadata
- global_progress: Overall pipeline progress percentage
- stages: Per-stage progress with input/output/processed counts
- metrics: Credits used, duration, errors
"""
site_id = request.query_params.get('site_id')
run_id = request.query_params.get('run_id')
if not site_id:
return Response(
{'error': 'site_id required'},
status=status.HTTP_400_BAD_REQUEST
)
try:
site = get_object_or_404(Site, id=site_id, account=request.user.account)
# If no run_id, get current run
if run_id:
run = AutomationRun.objects.get(run_id=run_id, site=site)
else:
run = AutomationRun.objects.filter(
site=site,
status__in=['running', 'paused']
).order_by('-started_at').first()
if not run:
return Response({
'run': None,
'global_progress': None,
'stages': [],
'metrics': None
})
# Build unified response
response = self._build_run_progress_response(site, run)
return Response(response)
except AutomationRun.DoesNotExist:
return Response(
{'error': 'Run not found'},
status=status.HTTP_404_NOT_FOUND
)
except Exception as e:
return Response(
{'error': str(e)},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
def _build_run_progress_response(self, site, run):
"""Build comprehensive progress response for a run"""
from igny8_core.business.planning.models import Keywords, Clusters, ContentIdeas
from igny8_core.business.content.models import Tasks, Content, Images
from django.db.models import Count
from django.utils import timezone
initial_snapshot = run.initial_snapshot or {}
# Helper to get processed count from result
def get_processed(result, key):
if not result:
return 0
return result.get(key, 0)
# Helper to get output count from result
def get_output(result, key):
if not result:
return 0
return result.get(key, 0)
# Stage-specific key mapping for processed counts
processed_keys = {
1: 'keywords_processed',
2: 'clusters_processed',
3: 'ideas_processed',
4: 'tasks_processed',
5: 'content_processed',
6: 'images_processed',
7: 'ready_for_review'
}
# Stage-specific key mapping for output counts
output_keys = {
1: 'clusters_created',
2: 'ideas_created',
3: 'tasks_created',
4: 'content_created',
5: 'prompts_created',
6: 'images_generated',
7: 'ready_for_review'
}
# Build stages array
stages = []
total_processed = 0
total_initial = initial_snapshot.get('total_initial_items', 0)
stage_names = {
1: 'Keywords → Clusters',
2: 'Clusters → Ideas',
3: 'Ideas → Tasks',
        4: 'Tasks → Content',
        5: 'Content → Image Prompts',
        6: 'Image Prompts → Images',
        7: 'Manual Review Gate'
    }
    stage_types = {
        1: 'AI', 2: 'AI', 3: 'Local', 4: 'AI', 5: 'AI', 6: 'AI', 7: 'Manual'
    }
    for stage_num in range(1, 8):
        result = getattr(run, f'stage_{stage_num}_result', None)
        initial_count = initial_snapshot.get(f'stage_{stage_num}_initial', 0)
        processed = get_processed(result, processed_keys[stage_num])
        output = get_output(result, output_keys[stage_num])
        total_processed += processed

        # Determine stage status
        if run.current_stage > stage_num:
            stage_status = 'completed'
        elif run.current_stage == stage_num:
            stage_status = 'active'
        else:
            stage_status = 'pending'

        # Calculate progress percentage for this stage
        progress = 0
        if initial_count > 0:
            progress = round((processed / initial_count) * 100)
        elif run.current_stage > stage_num:
            progress = 100

        stage_data = {
            'number': stage_num,
            'name': stage_names[stage_num],
            'type': stage_types[stage_num],
            'status': stage_status,
            'input_count': initial_count,
            'output_count': output,
            'processed_count': processed,
            'progress_percentage': min(progress, 100),
            'credits_used': result.get('credits_used', 0) if result else 0,
            'time_elapsed': result.get('time_elapsed', '') if result else '',
        }

        # Add currently_processing for active stage
        if stage_status == 'active':
            try:
                service = AutomationService.from_run_id(run.run_id)
                processing_state = service.get_current_processing_state()
                if processing_state:
                    stage_data['currently_processing'] = processing_state.get('currently_processing', [])
                    stage_data['up_next'] = processing_state.get('up_next', [])
                    stage_data['remaining_count'] = processing_state.get('remaining_count', 0)
            except Exception:
                pass

        stages.append(stage_data)

    # Calculate global progress
    # Stages 1-6 are automation stages, Stage 7 is manual review (not counted)
    # Progress = weighted average of stages 1-6 completion
    global_percentage = 0
    if run.status == 'completed':
        # If run is completed (after Stage 6), show 100%
        global_percentage = 100
    elif run.status in ('cancelled', 'failed'):
        # Keep current progress for cancelled/failed
        if total_initial > 0:
            global_percentage = round((total_processed / total_initial) * 100)
    else:
        # Calculate based on completed stages (1-6 only)
        # Each of the 6 automation stages contributes ~16.67% to total
        completed_stages = min(max(run.current_stage - 1, 0), 6)
        stage_weight = 100 / 6  # Each stage is ~16.67%

        # Base progress from completed stages
        base_progress = completed_stages * stage_weight

        # Add partial progress from current stage
        current_stage_progress = 0
        if run.current_stage <= 6:
            current_result = getattr(run, f'stage_{run.current_stage}_result', None)
            current_initial = initial_snapshot.get(f'stage_{run.current_stage}_initial', 0)
            if current_initial > 0 and current_result:
                processed_key = processed_keys.get(run.current_stage, '')
                current_processed = current_result.get(processed_key, 0)
                current_stage_progress = (current_processed / current_initial) * stage_weight

        global_percentage = round(base_progress + current_stage_progress)

    # Calculate duration
    duration_seconds = 0
    if run.started_at:
        end_time = run.completed_at or timezone.now()
        duration_seconds = int((end_time - run.started_at).total_seconds())

    return {
        'run': {
            'run_id': run.run_id,
            'status': run.status,
            'current_stage': run.current_stage,
            'trigger_type': run.trigger_type,
            'started_at': run.started_at,
            'completed_at': run.completed_at,
            'paused_at': run.paused_at,
        },
        'global_progress': {
            'total_items': total_initial,
            'completed_items': total_processed,
            'percentage': min(global_percentage, 100),
            'current_stage': run.current_stage,
            'total_stages': 7
        },
        'stages': stages,
        'metrics': {
            'credits_used': run.total_credits_used,
            'duration_seconds': duration_seconds,
            'errors': []
        },
        'initial_snapshot': initial_snapshot
    }
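
The stage-weighting scheme described in the comments above (six automation stages at ~16.67% each, plus partial credit for the currently active stage) can be sketched in isolation. This is an illustrative reimplementation for clarity, not the production code path:

```python
def global_progress(current_stage, processed, initial, status='running'):
    """Weighted progress across the 6 automation stages (stage 7 is manual review).

    current_stage is 1-based; processed/initial are counts for the active stage.
    """
    if status == 'completed':
        return 100
    stage_weight = 100 / 6  # each automation stage contributes ~16.67%
    base = min(max(current_stage - 1, 0), 6) * stage_weight
    partial = 0
    if current_stage <= 6 and initial > 0:
        partial = (processed / initial) * stage_weight
    return min(round(base + partial), 100)
```

For example, a run on stage 3 with 5 of 10 items processed reports two full stages (33.33%) plus half a stage (8.33%), i.e. 42% after rounding.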

View File

@@ -9,18 +9,14 @@ from django.contrib import messages
from django.utils.html import format_html
from unfold.admin import ModelAdmin
from igny8_core.admin.base import AccountAdminMixin, Igny8ModelAdmin
# NOTE: Most billing models are now registered in modules/billing/admin.py
# This file is kept for reference but all registrations are commented out
# to avoid AlreadyRegistered errors
# from .models import (
# CreditCostConfig,
# AccountPaymentMethod,
# Invoice,
# Payment,
# CreditPackage,
# PaymentMethodConfig,
# )
from .models import (
CreditCostConfig,
AccountPaymentMethod,
Invoice,
Payment,
CreditPackage,
PaymentMethodConfig,
)
# CreditCostConfig - DUPLICATE - Registered in modules/billing/admin.py with better features
@@ -51,21 +47,97 @@ from igny8_core.admin.base import AccountAdminMixin, Igny8ModelAdmin
# ...existing implementation...
# AccountPaymentMethod - DUPLICATE - Registered in modules/billing/admin.py with AccountAdminMixin
# Commenting out to avoid AlreadyRegistered error
# The version in modules/billing/admin.py is preferred as it includes AccountAdminMixin
# PaymentMethodConfig and AccountPaymentMethod are kept here because they are either
# not duplicated or have implementations minimal enough not to conflict
# from import_export.admin import ExportMixin
# from import_export import resources
#
# class AccountPaymentMethodResource(resources.ModelResource):
# """Resource class for exporting Account Payment Methods"""
# class Meta:
# model = AccountPaymentMethod
# fields = ('id', 'display_name', 'type', 'account__name', 'is_default',
# 'is_enabled', 'is_verified', 'country_code', 'created_at')
# export_order = fields
#
# @admin.register(AccountPaymentMethod)
# class AccountPaymentMethodAdmin(ExportMixin, Igny8ModelAdmin):
# ... (see modules/billing/admin.py for active registration)
from import_export.admin import ExportMixin
from import_export import resources


class AccountPaymentMethodResource(resources.ModelResource):
    """Resource class for exporting Account Payment Methods"""

    class Meta:
        model = AccountPaymentMethod
        fields = ('id', 'display_name', 'type', 'account__name', 'is_default',
                  'is_enabled', 'is_verified', 'country_code', 'created_at')
        export_order = fields


@admin.register(AccountPaymentMethod)
class AccountPaymentMethodAdmin(ExportMixin, Igny8ModelAdmin):
    resource_class = AccountPaymentMethodResource
    list_display = [
        'display_name',
        'type',
        'account',
        'is_default',
        'is_enabled',
        'country_code',
        'is_verified',
        'updated_at',
    ]
    list_filter = ['type', 'is_default', 'is_enabled', 'is_verified', 'country_code']
    search_fields = ['display_name', 'account__name', 'account__id']
    readonly_fields = ['created_at', 'updated_at']
    actions = [
        'bulk_enable',
        'bulk_disable',
        'bulk_set_default',
        'bulk_delete_methods',
    ]
    fieldsets = (
        ('Payment Method', {
            'fields': ('account', 'type', 'display_name', 'is_default', 'is_enabled', 'is_verified', 'country_code')
        }),
        ('Instructions / Metadata', {
            'fields': ('instructions', 'metadata')
        }),
        ('Timestamps', {
            'fields': ('created_at', 'updated_at'),
            'classes': ('collapse',)
        }),
    )

    def bulk_enable(self, request, queryset):
        updated = queryset.update(is_enabled=True)
        self.message_user(request, f'{updated} payment method(s) enabled.', messages.SUCCESS)
    bulk_enable.short_description = 'Enable selected payment methods'

    def bulk_disable(self, request, queryset):
        updated = queryset.update(is_enabled=False)
        self.message_user(request, f'{updated} payment method(s) disabled.', messages.SUCCESS)
    bulk_disable.short_description = 'Disable selected payment methods'

    def bulk_set_default(self, request, queryset):
        from django import forms
        if 'apply' in request.POST:
            method_id = request.POST.get('payment_method')
            if method_id:
                method = AccountPaymentMethod.objects.get(pk=method_id)
                # Unset all others for this account
                AccountPaymentMethod.objects.filter(account=method.account).update(is_default=False)
                method.is_default = True
                method.save()
                self.message_user(request, f'{method.display_name} set as default for {method.account.name}.', messages.SUCCESS)
            return

        class PaymentMethodForm(forms.Form):
            payment_method = forms.ModelChoiceField(
                queryset=queryset,
                label="Select Payment Method to Set as Default"
            )

        from django.shortcuts import render
        return render(request, 'admin/bulk_action_form.html', {
            'title': 'Set Default Payment Method',
            'queryset': queryset,
            'form': PaymentMethodForm(),
            'action': 'bulk_set_default',
        })
    bulk_set_default.short_description = 'Set as default'

    def bulk_delete_methods(self, request, queryset):
        count = queryset.count()
        queryset.delete()
        self.message_user(request, f'{count} payment method(s) deleted.', messages.SUCCESS)
    bulk_delete_methods.short_description = 'Delete selected payment methods'

View File

@@ -1,9 +1,6 @@
"""
Management command to backfill usage tracking for existing content.
Usage: python manage.py backfill_usage [account_id]
NOTE: Since the simplification of limits (Jan 2026), this command only
tracks Ahrefs queries. All other usage is tracked via CreditUsageLog.
"""
from django.core.management.base import BaseCommand
from django.apps import apps
@@ -12,7 +9,7 @@ from igny8_core.auth.models import Account
class Command(BaseCommand):
help = 'Backfill usage tracking for existing content (Ahrefs queries only)'
help = 'Backfill usage tracking for existing content'
def add_arguments(self, parser):
parser.add_argument(
@@ -33,6 +30,10 @@ class Command(BaseCommand):
else:
accounts = Account.objects.filter(plan__isnull=False).select_related('plan')
ContentIdeas = apps.get_model('planner', 'ContentIdeas')
Content = apps.get_model('writer', 'Content')
Images = apps.get_model('writer', 'Images')
total_accounts = accounts.count()
self.stdout.write(f'Processing {total_accounts} account(s)...\n')
@@ -42,14 +43,45 @@ class Command(BaseCommand):
self.stdout.write(f'Plan: {account.plan.name if account.plan else "No Plan"}')
self.stdout.write('=' * 60)
# Ahrefs queries are tracked in CreditUsageLog with operation_type='ahrefs_query'
# We don't backfill these as they should be tracked in real-time going forward
# This command is primarily for verification
# Count content ideas
ideas_count = ContentIdeas.objects.filter(account=account).count()
self.stdout.write(f'Content Ideas: {ideas_count}')
self.stdout.write(f'Ahrefs queries used this month: {account.usage_ahrefs_queries}')
self.stdout.write(self.style.SUCCESS('\n✅ Verified usage tracking'))
self.stdout.write(f' usage_ahrefs_queries: {account.usage_ahrefs_queries}\n')
# Count content words
from django.db.models import Sum
total_words = Content.objects.filter(account=account).aggregate(
total=Sum('word_count')
)['total'] or 0
self.stdout.write(f'Content Words: {total_words}')
# Count images
total_images = Images.objects.filter(account=account).count()
images_with_prompts = Images.objects.filter(
account=account, prompt__isnull=False
).exclude(prompt='').count()
self.stdout.write(f'Total Images: {total_images}')
self.stdout.write(f'Images with Prompts: {images_with_prompts}')
# Update account usage fields
with transaction.atomic():
account.usage_content_ideas = ideas_count
account.usage_content_words = total_words
account.usage_images_basic = total_images
account.usage_images_premium = 0 # Premium not implemented yet
account.usage_image_prompts = images_with_prompts
account.save(update_fields=[
'usage_content_ideas', 'usage_content_words',
'usage_images_basic', 'usage_images_premium', 'usage_image_prompts',
'updated_at'
])
self.stdout.write(self.style.SUCCESS('\n✅ Updated usage tracking:'))
self.stdout.write(f' usage_content_ideas: {account.usage_content_ideas}')
self.stdout.write(f' usage_content_words: {account.usage_content_words}')
self.stdout.write(f' usage_images_basic: {account.usage_images_basic}')
self.stdout.write(f' usage_images_premium: {account.usage_images_premium}')
self.stdout.write(f' usage_image_prompts: {account.usage_image_prompts}\n')
self.stdout.write('=' * 60)
self.stdout.write(self.style.SUCCESS('Verification complete!'))
self.stdout.write(self.style.SUCCESS('Backfill complete!'))
self.stdout.write('=' * 60)

View File

@@ -1,48 +0,0 @@
"""
Migration: Simplify payment methods to global (remove country-specific filtering)
This migration:
1. Updates existing PaymentMethodConfig records to use country_code='*' (global)
2. Removes duplicate payment methods per country, keeping only one global config per method
"""
from django.db import migrations
def migrate_to_global_payment_methods(apps, schema_editor):
"""
Convert country-specific payment methods to global.
For each payment_method type, keep only one configuration with country_code='*'
"""
PaymentMethodConfig = apps.get_model('billing', 'PaymentMethodConfig')
# Get all unique payment methods
payment_methods = PaymentMethodConfig.objects.values_list('payment_method', flat=True).distinct()
for method in payment_methods:
# Get all configs for this payment method
configs = PaymentMethodConfig.objects.filter(payment_method=method).order_by('sort_order', 'id')
if configs.exists():
# Keep the first one and make it global
first_config = configs.first()
first_config.country_code = '*'
first_config.save(update_fields=['country_code'])
# Delete duplicates (other country-specific versions)
configs.exclude(id=first_config.id).delete()
def reverse_migration(apps, schema_editor):
"""Reverse is a no-op - can't restore original country codes"""
pass
class Migration(migrations.Migration):
dependencies = [
('billing', '0007_simplify_payment_statuses'),
]
operations = [
migrations.RunPython(migrate_to_global_payment_methods, reverse_migration),
]
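
The keep-first/delete-duplicates pattern this migration applies can be illustrated with plain data, independent of Django. The dict-based records below are stand-ins for model rows; the real code operates on querysets ordered by `sort_order`, `id`:

```python
def collapse_to_global(configs):
    """Keep one config per payment_method, mark it global ('*'), drop the rest.

    `configs` must already be sorted the way the migration orders rows
    (sort_order, then id), so the first occurrence per method wins.
    """
    kept = {}
    for cfg in configs:
        method = cfg['payment_method']
        if method not in kept:  # first occurrence wins
            kept[method] = dict(cfg, country_code='*')
    return list(kept.values())
```

Given two `bank_transfer` rows (US, PK) and one `stripe` row (GB), this yields two rows, both with `country_code='*'`.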

View File

@@ -1,359 +0,0 @@
"""
Migration: Seed AIModelConfig from constants.py
This migration populates the AIModelConfig table with the current models
from ai/constants.py, enabling database-driven model configuration.
"""
from decimal import Decimal
from django.db import migrations
def seed_ai_models(apps, schema_editor):
"""
Seed AIModelConfig with models from constants.py
"""
AIModelConfig = apps.get_model('billing', 'AIModelConfig')
# Text Models (from MODEL_RATES)
text_models = [
{
'model_name': 'gpt-4.1',
'display_name': 'GPT-4.1 - Balanced Performance',
'model_type': 'text',
'provider': 'openai',
'input_cost_per_1m': Decimal('2.00'),
'output_cost_per_1m': Decimal('8.00'),
'context_window': 128000,
'max_output_tokens': 16384,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': True, # Default text model
'sort_order': 1,
'description': 'Default model - good balance of cost and capability',
},
{
'model_name': 'gpt-4o-mini',
'display_name': 'GPT-4o Mini - Fast & Affordable',
'model_type': 'text',
'provider': 'openai',
'input_cost_per_1m': Decimal('0.15'),
'output_cost_per_1m': Decimal('0.60'),
'context_window': 128000,
'max_output_tokens': 16384,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 2,
'description': 'Best for high-volume tasks where cost matters',
},
{
'model_name': 'gpt-4o',
'display_name': 'GPT-4o - High Quality',
'model_type': 'text',
'provider': 'openai',
'input_cost_per_1m': Decimal('2.50'),
'output_cost_per_1m': Decimal('10.00'),
'context_window': 128000,
'max_output_tokens': 16384,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 3,
'description': 'Premium model for complex tasks requiring best quality',
},
{
'model_name': 'gpt-5.1',
'display_name': 'GPT-5.1 - Latest Generation',
'model_type': 'text',
'provider': 'openai',
'input_cost_per_1m': Decimal('1.25'),
'output_cost_per_1m': Decimal('10.00'),
'context_window': 200000,
'max_output_tokens': 32768,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 4,
'description': 'Next-gen model with improved reasoning',
},
{
'model_name': 'gpt-5.2',
'display_name': 'GPT-5.2 - Most Advanced',
'model_type': 'text',
'provider': 'openai',
'input_cost_per_1m': Decimal('1.75'),
'output_cost_per_1m': Decimal('14.00'),
'context_window': 200000,
'max_output_tokens': 65536,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 5,
'description': 'Most capable model for enterprise-grade tasks',
},
]
# Image Models (from IMAGE_MODEL_RATES)
image_models = [
{
'model_name': 'dall-e-3',
'display_name': 'DALL-E 3 - Premium Images',
'model_type': 'image',
'provider': 'openai',
'cost_per_image': Decimal('0.040'),
'valid_sizes': ['1024x1024', '1024x1792', '1792x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': True, # Default image model
'sort_order': 1,
'description': 'Best quality image generation, good for hero images and marketing',
},
{
'model_name': 'dall-e-2',
'display_name': 'DALL-E 2 - Standard Images',
'model_type': 'image',
'provider': 'openai',
'cost_per_image': Decimal('0.020'),
'valid_sizes': ['256x256', '512x512', '1024x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 2,
'description': 'Lower cost option for bulk image generation',
},
{
'model_name': 'gpt-image-1',
'display_name': 'GPT Image 1 - Advanced',
'model_type': 'image',
'provider': 'openai',
'cost_per_image': Decimal('0.042'),
'valid_sizes': ['1024x1024', '1024x1792', '1792x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 3,
'description': 'Advanced image model with enhanced capabilities',
},
{
'model_name': 'gpt-image-1-mini',
'display_name': 'GPT Image 1 Mini - Fast',
'model_type': 'image',
'provider': 'openai',
'cost_per_image': Decimal('0.011'),
'valid_sizes': ['1024x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 4,
'description': 'Fastest and most affordable image model',
},
]
# Runware Image Models (from existing integration)
runware_models = [
{
'model_name': 'runware:100@1',
'display_name': 'Runware Standard',
'model_type': 'image',
'provider': 'runware',
'cost_per_image': Decimal('0.008'),
'valid_sizes': ['512x512', '768x768', '1024x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 10,
'description': 'Runware image generation - most affordable',
},
]
# Bria AI Image Models
bria_models = [
{
'model_name': 'bria-2.3',
'display_name': 'Bria 2.3 High Quality',
'model_type': 'image',
'provider': 'bria',
'cost_per_image': Decimal('0.015'),
'valid_sizes': ['512x512', '768x768', '1024x1024', '1024x1792', '1792x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 11,
'description': 'Bria 2.3 - High quality image generation',
},
{
'model_name': 'bria-2.3-fast',
'display_name': 'Bria 2.3 Fast',
'model_type': 'image',
'provider': 'bria',
'cost_per_image': Decimal('0.010'),
'valid_sizes': ['512x512', '768x768', '1024x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 12,
'description': 'Bria 2.3 Fast - Quick generation, lower cost',
},
{
'model_name': 'bria-2.2',
'display_name': 'Bria 2.2 Standard',
'model_type': 'image',
'provider': 'bria',
'cost_per_image': Decimal('0.012'),
'valid_sizes': ['512x512', '768x768', '1024x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 13,
'description': 'Bria 2.2 - Standard image generation',
},
]
# Anthropic Claude Text Models
anthropic_models = [
{
'model_name': 'claude-3-5-sonnet-20241022',
'display_name': 'Claude 3.5 Sonnet (Latest)',
'model_type': 'text',
'provider': 'anthropic',
'input_cost_per_1m': Decimal('3.00'),
'output_cost_per_1m': Decimal('15.00'),
'context_window': 200000,
'max_output_tokens': 8192,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 20,
'description': 'Claude 3.5 Sonnet - Best for most tasks, excellent reasoning',
},
{
'model_name': 'claude-3-5-haiku-20241022',
'display_name': 'Claude 3.5 Haiku (Fast)',
'model_type': 'text',
'provider': 'anthropic',
'input_cost_per_1m': Decimal('1.00'),
'output_cost_per_1m': Decimal('5.00'),
'context_window': 200000,
'max_output_tokens': 8192,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 21,
'description': 'Claude 3.5 Haiku - Fast and affordable',
},
{
'model_name': 'claude-3-opus-20240229',
'display_name': 'Claude 3 Opus',
'model_type': 'text',
'provider': 'anthropic',
'input_cost_per_1m': Decimal('15.00'),
'output_cost_per_1m': Decimal('75.00'),
'context_window': 200000,
'max_output_tokens': 4096,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 22,
'description': 'Claude 3 Opus - Most capable Claude model',
},
{
'model_name': 'claude-3-sonnet-20240229',
'display_name': 'Claude 3 Sonnet',
'model_type': 'text',
'provider': 'anthropic',
'input_cost_per_1m': Decimal('3.00'),
'output_cost_per_1m': Decimal('15.00'),
'context_window': 200000,
'max_output_tokens': 4096,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 23,
'description': 'Claude 3 Sonnet - Balanced performance and cost',
},
{
'model_name': 'claude-3-haiku-20240307',
'display_name': 'Claude 3 Haiku',
'model_type': 'text',
'provider': 'anthropic',
'input_cost_per_1m': Decimal('0.25'),
'output_cost_per_1m': Decimal('1.25'),
'context_window': 200000,
'max_output_tokens': 4096,
'supports_json_mode': True,
'supports_vision': True,
'supports_function_calling': True,
'is_active': True,
'is_default': False,
'sort_order': 24,
'description': 'Claude 3 Haiku - Most affordable Claude model',
},
]
# Create all models
all_models = text_models + image_models + runware_models + bria_models + anthropic_models
for model_data in all_models:
AIModelConfig.objects.update_or_create(
model_name=model_data['model_name'],
defaults=model_data
)
def reverse_migration(apps, schema_editor):
"""Remove seeded models"""
AIModelConfig = apps.get_model('billing', 'AIModelConfig')
seeded_models = [
'gpt-4.1', 'gpt-4o-mini', 'gpt-4o', 'gpt-5.1', 'gpt-5.2',
'dall-e-3', 'dall-e-2', 'gpt-image-1', 'gpt-image-1-mini',
'runware:100@1',
'bria-2.3', 'bria-2.3-fast', 'bria-2.2',
'claude-3-5-sonnet-20241022', 'claude-3-5-haiku-20241022',
'claude-3-opus-20240229', 'claude-3-sonnet-20240229', 'claude-3-haiku-20240307'
]
AIModelConfig.objects.filter(model_name__in=seeded_models).delete()
class Migration(migrations.Migration):
dependencies = [
('billing', '0008_global_payment_methods'),
]
operations = [
migrations.RunPython(seed_ai_models, reverse_migration),
]
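
Seeding via `update_or_create` keyed on `model_name` is what makes this migration idempotent: re-running it updates pricing in place instead of raising uniqueness errors. A minimal dict-based sketch of that upsert behaviour (a stand-in for the ORM call, not the Django API itself):

```python
def upsert(table, key, defaults):
    """Insert or update a row keyed by `key`; mirrors update_or_create semantics."""
    row = table.get(key)
    created = row is None
    table[key] = {**(row or {}), **defaults}
    return table[key], created
```

Running it twice with the same key updates the stored values and reports `created=False` the second time.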

View File

@@ -114,48 +114,65 @@ class CreditUsageLog(AccountBaseModel):
class CreditCostConfig(models.Model):
"""
Fixed credit costs per operation type.
Per final-model-schemas.md:
| Field | Type | Required | Notes |
|-------|------|----------|-------|
| operation_type | CharField(50) PK | Yes | Unique operation ID |
| display_name | CharField(100) | Yes | Human-readable |
| base_credits | IntegerField | Yes | Fixed credits per operation |
| is_active | BooleanField | Yes | Enable/disable |
| description | TextField | No | Admin notes |
Token-based credit pricing configuration.
ALL operations use token-to-credit conversion.
"""
# Operation identification (Primary Key)
# Operation identification
operation_type = models.CharField(
max_length=50,
unique=True,
primary_key=True,
help_text="Unique operation ID (e.g., 'article_generation', 'image_generation')"
choices=CreditUsageLog.OPERATION_TYPE_CHOICES,
help_text="AI operation type"
)
# Human-readable name
display_name = models.CharField(
max_length=100,
help_text="Human-readable name"
# Token-to-credit ratio (tokens per 1 credit)
tokens_per_credit = models.IntegerField(
default=100,
validators=[MinValueValidator(1)],
help_text="Number of tokens that equal 1 credit (e.g., 100 tokens = 1 credit)"
)
# Fixed credits per operation
base_credits = models.IntegerField(
# Minimum credits (for very small token usage)
min_credits = models.IntegerField(
default=1,
validators=[MinValueValidator(0)],
help_text="Fixed credits per operation"
help_text="Minimum credits to charge regardless of token usage"
)
# Price per credit (for revenue reporting)
price_per_credit_usd = models.DecimalField(
max_digits=10,
decimal_places=4,
default=Decimal('0.01'),
validators=[MinValueValidator(Decimal('0.0001'))],
help_text="USD price per credit (for revenue reporting)"
)
# Metadata
display_name = models.CharField(max_length=100, help_text="Human-readable name")
description = models.TextField(blank=True, help_text="What this operation does")
# Status
is_active = models.BooleanField(
default=True,
help_text="Enable/disable this operation"
is_active = models.BooleanField(default=True, help_text="Enable/disable this operation")
# Audit fields
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
updated_by = models.ForeignKey(
settings.AUTH_USER_MODEL,
null=True,
blank=True,
on_delete=models.SET_NULL,
related_name='credit_cost_updates',
help_text="Admin who last updated"
)
# Admin notes
description = models.TextField(
# Change tracking
previous_tokens_per_credit = models.IntegerField(
null=True,
blank=True,
help_text="Admin notes about this operation"
help_text="Tokens per credit before last update (for audit trail)"
)
# History tracking
@@ -169,7 +186,18 @@ class CreditCostConfig(models.Model):
ordering = ['operation_type']
def __str__(self):
return f"{self.display_name} - {self.base_credits} credits"
return f"{self.display_name} - {self.tokens_per_credit} tokens/credit"
def save(self, *args, **kwargs):
# Track token ratio changes
if self.pk:
try:
old = CreditCostConfig.objects.get(pk=self.pk)
if old.tokens_per_credit != self.tokens_per_credit:
self.previous_tokens_per_credit = old.tokens_per_credit
except CreditCostConfig.DoesNotExist:
pass
super().save(*args, **kwargs)
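
Given the `tokens_per_credit` and `min_credits` fields above, the conversion from token usage to credits presumably looks like the following. This is a hedged sketch: the fields don't pin down a rounding rule, so round-up division is an assumption, and the actual charging code lives elsewhere in the billing module:

```python
import math

def tokens_to_credits(tokens, tokens_per_credit=100, min_credits=1):
    """Convert token usage to credits, assuming round-up division,
    then floor the result at min_credits."""
    credits = math.ceil(tokens / tokens_per_credit)
    return max(credits, min_credits)
```

So at the default ratio, 250 tokens cost 3 credits, and any usage below 100 tokens still costs the 1-credit minimum.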
class BillingConfiguration(models.Model):
@@ -398,20 +426,6 @@ class Invoice(AccountBaseModel):
def tax_amount(self):
return self.tax
@property
def tax_rate(self):
"""Get tax rate from metadata if stored"""
if self.metadata and 'tax_rate' in self.metadata:
return self.metadata['tax_rate']
return 0
@property
def discount_amount(self):
"""Get discount amount from metadata if stored"""
if self.metadata and 'discount_amount' in self.metadata:
return self.metadata['discount_amount']
return 0
@property
def total_amount(self):
return self.total
@@ -501,7 +515,6 @@ class Payment(AccountBaseModel):
manual_reference = models.CharField(
max_length=255,
blank=True,
null=True,
help_text="Bank transfer reference, wallet transaction ID, etc."
)
manual_notes = models.TextField(blank=True, help_text="Admin notes for manual payments")
@@ -541,24 +554,9 @@ class Payment(AccountBaseModel):
models.Index(fields=['account', 'payment_method']),
models.Index(fields=['invoice', 'status']),
]
constraints = [
# Ensure manual_reference is unique when not null/empty
# This prevents duplicate bank transfer references
models.UniqueConstraint(
fields=['manual_reference'],
name='unique_manual_reference_when_not_null',
condition=models.Q(manual_reference__isnull=False) & ~models.Q(manual_reference='')
),
]
def __str__(self):
return f"Payment {self.id} - {self.get_payment_method_display()} - {self.amount} {self.currency}"
def save(self, *args, **kwargs):
"""Normalize empty manual_reference to NULL for proper uniqueness handling"""
if self.manual_reference == '':
self.manual_reference = None
super().save(*args, **kwargs)
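
The `save()` override above keeps blank references as NULL so they store consistently and fall outside the conditional unique constraint (which already excludes both NULL and `''`); under a unique index, multiple NULLs are permitted where duplicate non-null values are not. The normalization rule in isolation:

```python
def normalize_reference(value):
    """Map empty manual references to None so the conditional unique
    constraint ignores them (NULLs don't collide under a unique index)."""
    return None if value == '' else value
```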
class CreditPackage(models.Model):
@@ -608,10 +606,8 @@ class CreditPackage(models.Model):
class PaymentMethodConfig(models.Model):
"""
Configure payment methods availability per country.
For online payments (stripe, paypal): Credentials stored in IntegrationProvider.
For manual payments (bank_transfer, local_wallet): Bank/wallet details stored here.
Configure payment methods availability per country
Allows enabling/disabling manual payments by region
"""
# Use centralized choices
PAYMENT_METHOD_CHOICES = PAYMENT_METHOD_CHOICES
@@ -619,7 +615,7 @@ class PaymentMethodConfig(models.Model):
country_code = models.CharField(
max_length=2,
db_index=True,
help_text="ISO 2-letter country code (e.g., US, GB, PK) or '*' for global"
help_text="ISO 2-letter country code (e.g., US, GB, IN)"
)
payment_method = models.CharField(max_length=50, choices=PAYMENT_METHOD_CHOICES)
is_enabled = models.BooleanField(default=True)
@@ -628,17 +624,21 @@ class PaymentMethodConfig(models.Model):
display_name = models.CharField(max_length=100, blank=True)
instructions = models.TextField(blank=True, help_text="Payment instructions for users")
# Manual payment details (for bank_transfer only)
# Manual payment details (for bank_transfer/local_wallet)
bank_name = models.CharField(max_length=255, blank=True)
account_number = models.CharField(max_length=255, blank=True)
account_title = models.CharField(max_length=255, blank=True, help_text="Account holder name")
routing_number = models.CharField(max_length=255, blank=True, help_text="Routing/Sort code")
swift_code = models.CharField(max_length=255, blank=True, help_text="SWIFT/BIC code for international")
iban = models.CharField(max_length=255, blank=True, help_text="IBAN for international transfers")
routing_number = models.CharField(max_length=255, blank=True)
swift_code = models.CharField(max_length=255, blank=True)
# Additional fields for local wallets
wallet_type = models.CharField(max_length=100, blank=True, help_text="E.g., JazzCash, EasyPaisa, etc.")
wallet_id = models.CharField(max_length=255, blank=True, help_text="Mobile number or wallet ID")
wallet_type = models.CharField(max_length=100, blank=True, help_text="E.g., PayTM, PhonePe, etc.")
wallet_id = models.CharField(max_length=255, blank=True)
# Webhook configuration (Stripe/PayPal)
webhook_url = models.URLField(blank=True, help_text="Webhook URL for payment gateway callbacks")
webhook_secret = models.CharField(max_length=255, blank=True, help_text="Webhook secret for signature verification")
api_key = models.CharField(max_length=255, blank=True, help_text="API key for payment gateway integration")
api_secret = models.CharField(max_length=255, blank=True, help_text="API secret for payment gateway integration")
# Order/priority
sort_order = models.IntegerField(default=0)
@@ -696,34 +696,18 @@ class AccountPaymentMethod(AccountBaseModel):
class AIModelConfig(models.Model):
"""
All AI models (text + image) with pricing and credit configuration.
Single Source of Truth for Models.
AI Model Configuration - Database-driven model pricing and capabilities.
Replaces hardcoded MODEL_RATES and IMAGE_MODEL_RATES from constants.py
Per final-model-schemas.md:
| Field | Type | Required | Notes |
|-------|------|----------|-------|
| id | AutoField PK | Auto | |
| model_name | CharField(100) | Yes | gpt-5.1, dall-e-3, runware:97@1 |
| model_type | CharField(20) | Yes | text / image |
| provider | CharField(50) | Yes | Links to IntegrationProvider |
| display_name | CharField(200) | Yes | Human-readable |
| is_default | BooleanField | Yes | One default per type |
| is_active | BooleanField | Yes | Enable/disable |
| cost_per_1k_input | DecimalField | No | Provider cost (USD) - text models |
| cost_per_1k_output | DecimalField | No | Provider cost (USD) - text models |
| tokens_per_credit | IntegerField | No | Text: tokens per 1 credit (e.g., 1000) |
| credits_per_image | IntegerField | No | Image: credits per image (e.g., 1, 5, 15) |
| quality_tier | CharField(20) | No | basic / quality / premium |
| max_tokens | IntegerField | No | Model token limit |
| context_window | IntegerField | No | Model context size |
| capabilities | JSONField | No | vision, function_calling, etc. |
| created_at | DateTime | Auto | |
| updated_at | DateTime | Auto | |
Two pricing models:
- Text models: Cost per 1M tokens (input/output), credits calculated AFTER AI call
- Image models: Cost per image, credits calculated BEFORE AI call
"""
MODEL_TYPE_CHOICES = [
('text', 'Text Generation'),
('image', 'Image Generation'),
('embedding', 'Embedding'),
]
PROVIDER_CHOICES = [
@@ -733,112 +717,145 @@ class AIModelConfig(models.Model):
('google', 'Google'),
]
QUALITY_TIER_CHOICES = [
('basic', 'Basic'),
('quality', 'Quality'),
('premium', 'Premium'),
]
# Basic Information
model_name = models.CharField(
max_length=100,
unique=True,
db_index=True,
help_text="Model identifier (e.g., 'gpt-5.1', 'dall-e-3', 'runware:97@1')"
help_text="Model identifier used in API calls (e.g., 'gpt-4o-mini', 'dall-e-3')"
)
display_name = models.CharField(
max_length=200,
help_text="Human-readable name shown in UI (e.g., 'GPT-4o mini - Fast & Affordable')"
)
model_type = models.CharField(
max_length=20,
choices=MODEL_TYPE_CHOICES,
db_index=True,
help_text="text / image"
help_text="Type of model - determines which pricing fields are used"
)
provider = models.CharField(
max_length=50,
choices=PROVIDER_CHOICES,
db_index=True,
help_text="Links to IntegrationProvider"
help_text="AI provider (OpenAI, Anthropic, etc.)"
)
display_name = models.CharField(
max_length=200,
help_text="Human-readable name"
)
is_default = models.BooleanField(
default=False,
db_index=True,
help_text="One default per type"
)
is_active = models.BooleanField(
default=True,
db_index=True,
help_text="Enable/disable"
)
# Text Model Pricing (cost per 1K tokens)
cost_per_1k_input = models.DecimalField(
# Text Model Pricing (Only for model_type='text')
input_cost_per_1m = models.DecimalField(
max_digits=10,
decimal_places=6,
decimal_places=4,
null=True,
blank=True,
help_text="Provider cost per 1K input tokens (USD) - text models"
validators=[MinValueValidator(Decimal('0.0001'))],
help_text="Cost per 1 million input tokens (USD). For text models only."
)
cost_per_1k_output = models.DecimalField(
output_cost_per_1m = models.DecimalField(
max_digits=10,
decimal_places=6,
decimal_places=4,
null=True,
blank=True,
help_text="Provider cost per 1K output tokens (USD) - text models"
)
# Credit Configuration
tokens_per_credit = models.IntegerField(
null=True,
blank=True,
help_text="Text: tokens per 1 credit (e.g., 1000, 10000)"
)
credits_per_image = models.IntegerField(
null=True,
blank=True,
help_text="Image: credits per image (e.g., 1, 5, 15)"
)
quality_tier = models.CharField(
max_length=20,
choices=QUALITY_TIER_CHOICES,
null=True,
blank=True,
help_text="basic / quality / premium - for image models"
)
# Model Limits
max_tokens = models.IntegerField(
null=True,
blank=True,
help_text="Model token limit"
validators=[MinValueValidator(Decimal('0.0001'))],
help_text="Cost per 1 million output tokens (USD). For text models only."
)
context_window = models.IntegerField(
null=True,
blank=True,
help_text="Model context size"
validators=[MinValueValidator(1)],
help_text="Maximum input tokens (context length). For text models only."
)
max_output_tokens = models.IntegerField(
null=True,
blank=True,
validators=[MinValueValidator(1)],
help_text="Maximum output tokens per request. For text models only."
)
# Image Model Pricing (Only for model_type='image')
cost_per_image = models.DecimalField(
max_digits=10,
decimal_places=4,
null=True,
blank=True,
validators=[MinValueValidator(Decimal('0.0001'))],
help_text="Fixed cost per image generation (USD). For image models only."
)
valid_sizes = models.JSONField(
null=True,
blank=True,
help_text='Array of valid image sizes (e.g., ["1024x1024", "1024x1792"]). For image models only.'
)
# Capabilities
capabilities = models.JSONField(
default=dict,
blank=True,
help_text="Capabilities: vision, function_calling, json_mode, etc."
supports_json_mode = models.BooleanField(
default=False,
help_text="True for models with JSON response format support"
)
# Timestamps
supports_vision = models.BooleanField(
default=False,
help_text="True for models that can analyze images"
)
supports_function_calling = models.BooleanField(
default=False,
help_text="True for models with function calling capability"
)
# Status & Configuration
is_active = models.BooleanField(
default=True,
db_index=True,
help_text="Enable/disable model without deleting"
)
is_default = models.BooleanField(
default=False,
db_index=True,
help_text="Mark as default model for its type (only one per type)"
)
sort_order = models.IntegerField(
default=0,
help_text="Control order in dropdown lists (lower numbers first)"
)
# Metadata
description = models.TextField(
blank=True,
help_text="Admin notes about model usage, strengths, limitations"
)
release_date = models.DateField(
null=True,
blank=True,
help_text="When model was released/added"
)
deprecation_date = models.DateField(
null=True,
blank=True,
help_text="When model will be removed"
)
# Audit Fields
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
updated_by = models.ForeignKey(
settings.AUTH_USER_MODEL,
null=True,
blank=True,
on_delete=models.SET_NULL,
related_name='ai_model_updates',
help_text="Admin who last updated"
)
# History tracking
history = HistoricalRecords()
@@ -848,7 +865,7 @@ class AIModelConfig(models.Model):
db_table = 'igny8_ai_model_config'
verbose_name = 'AI Model Configuration'
verbose_name_plural = 'AI Model Configurations'
ordering = ['model_type', 'model_name']
ordering = ['model_type', 'sort_order', 'model_name']
indexes = [
models.Index(fields=['model_type', 'is_active']),
models.Index(fields=['provider', 'is_active']),
@@ -861,138 +878,52 @@ class AIModelConfig(models.Model):
def save(self, *args, **kwargs):
"""Ensure only one is_default per model_type"""
if self.is_default:
# Unset other defaults for same model_type
AIModelConfig.objects.filter(
model_type=self.model_type,
is_default=True
).exclude(pk=self.pk).update(is_default=False)
super().save(*args, **kwargs)
@classmethod
def get_default_text_model(cls):
"""Get the default text generation model"""
return cls.objects.filter(model_type='text', is_default=True, is_active=True).first()
def get_cost_for_tokens(self, input_tokens, output_tokens):
"""Calculate cost for text models based on token usage"""
if self.model_type != 'text':
raise ValueError("get_cost_for_tokens only applies to text models")
if not self.input_cost_per_1m or not self.output_cost_per_1m:
raise ValueError(f"Model {self.model_name} missing cost_per_1m values")
cost = (
(Decimal(input_tokens) * self.input_cost_per_1m) +
(Decimal(output_tokens) * self.output_cost_per_1m)
) / Decimal('1000000')
return cost
@classmethod
def get_default_image_model(cls):
"""Get the default image generation model"""
return cls.objects.filter(model_type='image', is_default=True, is_active=True).first()
def get_cost_for_images(self, num_images):
"""Calculate cost for image models"""
if self.model_type != 'image':
raise ValueError("get_cost_for_images only applies to image models")
if not self.cost_per_image:
raise ValueError(f"Model {self.model_name} missing cost_per_image")
return self.cost_per_image * Decimal(num_images)
@classmethod
def get_image_models_by_tier(cls):
"""Get all active image models grouped by quality tier"""
return cls.objects.filter(
model_type='image',
is_active=True
).order_by('quality_tier', 'model_name')
class WebhookEvent(models.Model):
"""
Store all incoming webhook events for audit and replay capability.
def validate_size(self, size):
"""Check if size is valid for this image model"""
if self.model_type != 'image':
raise ValueError("validate_size only applies to image models")
if not self.valid_sizes:
return True # No size restrictions
return size in self.valid_sizes
This model provides:
- Audit trail of all webhook events
- Idempotency verification (via event_id)
- Ability to replay failed events
- Debugging and monitoring
"""
PROVIDER_CHOICES = [
('stripe', 'Stripe'),
('paypal', 'PayPal'),
]
# Unique identifier from the payment provider
event_id = models.CharField(
max_length=255,
unique=True,
db_index=True,
help_text="Unique event ID from the payment provider"
)
# Payment provider
provider = models.CharField(
max_length=20,
choices=PROVIDER_CHOICES,
db_index=True,
help_text="Payment provider (stripe or paypal)"
)
# Event type (e.g., 'checkout.session.completed', 'PAYMENT.CAPTURE.COMPLETED')
event_type = models.CharField(
max_length=100,
db_index=True,
help_text="Event type from the provider"
)
# Full payload for debugging and replay
payload = models.JSONField(
help_text="Full webhook payload"
)
# Processing status
processed = models.BooleanField(
default=False,
db_index=True,
help_text="Whether this event has been successfully processed"
)
processed_at = models.DateTimeField(
null=True,
blank=True,
help_text="When the event was processed"
)
# Error tracking
error_message = models.TextField(
blank=True,
help_text="Error message if processing failed"
)
retry_count = models.IntegerField(
default=0,
help_text="Number of processing attempts"
)
# Timestamps
created_at = models.DateTimeField(auto_now_add=True)
class Meta:
app_label = 'billing'
db_table = 'igny8_webhook_events'
verbose_name = 'Webhook Event'
verbose_name_plural = 'Webhook Events'
ordering = ['-created_at']
indexes = [
models.Index(fields=['provider', 'event_type']),
models.Index(fields=['processed', 'created_at']),
models.Index(fields=['provider', 'processed']),
]
def __str__(self):
return f"{self.provider}:{self.event_type} - {self.event_id[:20]}..."
@classmethod
def record_event(cls, event_id: str, provider: str, event_type: str, payload: dict):
"""
Record a webhook event. Returns (event, created) tuple.
If the event already exists, returns the existing event.
"""
return cls.objects.get_or_create(
event_id=event_id,
defaults={
'provider': provider,
'event_type': event_type,
'payload': payload,
}
)
def mark_processed(self):
"""Mark the event as successfully processed"""
from django.utils import timezone
self.processed = True
self.processed_at = timezone.now()
self.save(update_fields=['processed', 'processed_at'])
def mark_failed(self, error_message: str):
"""Mark the event as failed with error message"""
self.error_message = error_message
self.retry_count += 1
self.save(update_fields=['error_message', 'retry_count'])
def get_display_with_pricing(self):
"""For dropdowns: show model with pricing"""
if self.model_type == 'text':
return f"{self.display_name} - ${self.input_cost_per_1m}/${self.output_cost_per_1m} per 1M"
elif self.model_type == 'image':
return f"{self.display_name} - ${self.cost_per_image} per image"
return self.display_name
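The per-token cost formula in `get_cost_for_tokens()` above is plain `Decimal` arithmetic. A minimal standalone sketch, with illustrative per-1M-token prices (the actual prices live in `AIModelConfig`):

```python
from decimal import Decimal

# Hypothetical prices mirroring input_cost_per_1m / output_cost_per_1m (USD per 1M tokens)
input_cost_per_1m = Decimal("0.1500")
output_cost_per_1m = Decimal("0.6000")

input_tokens, output_tokens = 12_000, 3_000

# Same formula as AIModelConfig.get_cost_for_tokens()
cost = (Decimal(input_tokens) * input_cost_per_1m
        + Decimal(output_tokens) * output_cost_per_1m) / Decimal("1000000")
print(cost)  # 0.0036
```

Using `Decimal` end to end (rather than `float`) keeps sub-cent costs exact, which matters once costs are aggregated into invoices.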


@@ -1,8 +1,6 @@
"""
Credit Service for managing credit transactions and deductions
"""
import math
import logging
from django.db import transaction
from django.utils import timezone
from igny8_core.business.billing.models import CreditTransaction, CreditUsageLog
@@ -10,151 +8,10 @@ from igny8_core.business.billing.constants import CREDIT_COSTS
from igny8_core.business.billing.exceptions import InsufficientCreditsError, CreditCalculationError
from igny8_core.auth.models import Account
logger = logging.getLogger(__name__)
def _check_low_credits_warning(account, previous_balance):
"""
Check if credits have fallen below threshold and send warning email.
Only sends if this is the first time falling below threshold.
"""
try:
from igny8_core.modules.system.email_models import EmailSettings
from .email_service import BillingEmailService
settings = EmailSettings.get_settings()
if not settings.send_low_credit_warnings:
return
threshold = settings.low_credit_threshold
# Only send if we CROSSED below the threshold (wasn't already below)
if account.credits < threshold <= previous_balance:
logger.info(f"Credits fell below threshold for account {account.id}: {account.credits} < {threshold}")
BillingEmailService.send_low_credits_warning(
account=account,
current_credits=account.credits,
threshold=threshold
)
except Exception as e:
logger.error(f"Failed to check/send low credits warning: {e}")
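The chained comparison in `_check_low_credits_warning()` is what makes the warning edge-triggered: it fires only on the deduction that crosses the threshold, not on every deduction while the balance stays low. A small sketch of just that predicate:

```python
def crossed_below(current: int, previous: int, threshold: int) -> bool:
    # Same chained comparison as _check_low_credits_warning():
    # fire only when the balance moves from >= threshold to < threshold.
    return current < threshold <= previous

print(crossed_below(40, 60, 50))  # True  (this deduction crossed the line)
print(crossed_below(40, 45, 50))  # False (was already below; no repeat email)
print(crossed_below(55, 60, 50))  # False (still above threshold)
```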
class CreditService:
"""Service for managing credits - Token-based only"""
@staticmethod
def calculate_credits_for_image(model_name: str, num_images: int = 1) -> int:
"""
Calculate credits for image generation based on AIModelConfig.credits_per_image.
Args:
model_name: The AI model name (e.g., 'dall-e-3', 'flux-1-1-pro')
num_images: Number of images to generate
Returns:
int: Credits required
Raises:
CreditCalculationError: If model not found or has no credits_per_image
"""
from igny8_core.business.billing.models import AIModelConfig
try:
model = AIModelConfig.objects.filter(
model_name=model_name,
is_active=True
).first()
if not model:
raise CreditCalculationError(f"Model {model_name} not found or inactive")
if model.credits_per_image is None:
raise CreditCalculationError(
f"Model {model_name} has no credits_per_image configured"
)
credits = model.credits_per_image * num_images
logger.info(
f"Calculated credits for {model_name}: "
f"{num_images} images × {model.credits_per_image} = {credits} credits"
)
return credits
except AIModelConfig.DoesNotExist:
raise CreditCalculationError(f"Model {model_name} not found")
@staticmethod
def calculate_credits_from_tokens_by_model(model_name: str, total_tokens: int) -> int:
"""
Calculate credits from token usage based on AIModelConfig.tokens_per_credit.
This is the model-specific version that uses the model's configured rate.
For operation-based calculation, use calculate_credits_from_tokens().
Args:
model_name: The AI model name (e.g., 'gpt-4o', 'claude-3-5-sonnet')
total_tokens: Total tokens used (input + output)
Returns:
int: Credits required (minimum 1)
Raises:
CreditCalculationError: If model not found
"""
from igny8_core.business.billing.models import AIModelConfig, BillingConfiguration
try:
model = AIModelConfig.objects.filter(
model_name=model_name,
is_active=True
).first()
if model and model.tokens_per_credit:
tokens_per_credit = model.tokens_per_credit
else:
# Fallback to global default
billing_config = BillingConfiguration.get_config()
tokens_per_credit = billing_config.default_tokens_per_credit
logger.info(
f"Model {model_name} has no tokens_per_credit, "
f"using default: {tokens_per_credit}"
)
if tokens_per_credit <= 0:
raise CreditCalculationError(
f"Invalid tokens_per_credit for {model_name}: {tokens_per_credit}"
)
# Get rounding mode
billing_config = BillingConfiguration.get_config()
rounding_mode = billing_config.credit_rounding_mode
credits_float = total_tokens / tokens_per_credit
if rounding_mode == 'up':
credits = math.ceil(credits_float)
elif rounding_mode == 'down':
credits = math.floor(credits_float)
else: # nearest
credits = round(credits_float)
# Minimum 1 credit
credits = max(credits, 1)
logger.info(
f"Calculated credits for {model_name}: "
f"{total_tokens} tokens ÷ {tokens_per_credit} = {credits} credits"
)
return credits
except Exception as e:
logger.error(f"Error calculating credits for {model_name}: {e}")
raise CreditCalculationError(f"Error calculating credits: {e}")
@staticmethod
def calculate_credits_from_tokens(operation_type, tokens_input, tokens_output):
"""
@@ -329,9 +186,6 @@ class CreditService:
# Check sufficient credits (legacy: amount is already calculated)
CreditService.check_credits_legacy(account, amount)
# Store previous balance for low credits check
previous_balance = account.credits
# Deduct from account.credits
account.credits -= amount
account.save(update_fields=['credits'])
@@ -360,9 +214,6 @@ class CreditService:
metadata=metadata or {}
)
# Check and send low credits warning if applicable
_check_low_credits_warning(account, previous_balance)
return account.credits
@staticmethod
@@ -472,56 +323,4 @@ class CreditService:
)
return account.credits
@staticmethod
@transaction.atomic
def deduct_credits_for_image(
account,
model_name: str,
num_images: int = 1,
description: str = None,
metadata: dict = None,
cost_usd: float = None,
related_object_type: str = None,
related_object_id: int = None
):
"""
Deduct credits for image generation based on model's credits_per_image.
Args:
account: Account instance
model_name: AI model used (e.g., 'dall-e-3', 'flux-1-1-pro')
num_images: Number of images generated
description: Optional description
metadata: Optional metadata dict
cost_usd: Optional cost in USD
related_object_type: Optional related object type
related_object_id: Optional related object ID
Returns:
int: New credit balance
"""
credits_required = CreditService.calculate_credits_for_image(model_name, num_images)
if account.credits < credits_required:
raise InsufficientCreditsError(
f"Insufficient credits. Required: {credits_required}, Available: {account.credits}"
)
if not description:
description = f"Image generation: {num_images} images with {model_name} = {credits_required} credits"
return CreditService.deduct_credits(
account=account,
amount=credits_required,
operation_type='image_generation',
description=description,
metadata=metadata,
cost_usd=cost_usd,
model_used=model_name,
tokens_input=None,
tokens_output=None,
related_object_type=related_object_type,
related_object_id=related_object_id
)
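The token-to-credit conversion in `calculate_credits_from_tokens_by_model()` reduces to one division plus the configured rounding mode and a one-credit floor. A standalone sketch of that core (values are examples, not real plan rates):

```python
import math

def credits_for_tokens(total_tokens: int, tokens_per_credit: int,
                       rounding_mode: str = "up") -> int:
    # Mirrors the rounding logic in calculate_credits_from_tokens_by_model():
    # divide, round per BillingConfiguration.credit_rounding_mode, floor at 1.
    ratio = total_tokens / tokens_per_credit
    if rounding_mode == "up":
        credits = math.ceil(ratio)
    elif rounding_mode == "down":
        credits = math.floor(ratio)
    else:  # "nearest"
        credits = round(ratio)
    return max(credits, 1)

print(credits_for_tokens(2500, 1000, "up"))    # 3
print(credits_for_tokens(2500, 1000, "down"))  # 2
print(credits_for_tokens(400, 1000, "down"))   # 1 (minimum enforced)
```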

(File diff suppressed because it is too large.)


@@ -14,65 +14,32 @@ from ....auth.models import Account, Subscription
class InvoiceService:
"""Service for managing invoices"""
@staticmethod
def get_pending_invoice(subscription: Subscription) -> Optional[Invoice]:
"""
Get pending invoice for a subscription.
Used to find existing invoice during payment processing instead of creating duplicates.
"""
return Invoice.objects.filter(
subscription=subscription,
status='pending'
).order_by('-created_at').first()
@staticmethod
def get_or_create_subscription_invoice(
subscription: Subscription,
billing_period_start: datetime,
billing_period_end: datetime
) -> tuple[Invoice, bool]:
"""
Get existing pending invoice or create new one.
Returns tuple of (invoice, created) where created is True if new invoice was created.
"""
# First try to find existing pending invoice for this subscription
existing = InvoiceService.get_pending_invoice(subscription)
if existing:
return existing, False
# Create new invoice if none exists
invoice = InvoiceService.create_subscription_invoice(
subscription=subscription,
billing_period_start=billing_period_start,
billing_period_end=billing_period_end
)
return invoice, True
@staticmethod
def generate_invoice_number(account: Account) -> str:
"""
Generate unique invoice number with atomic locking to prevent duplicates
Format: INV-{YY}{MM}{COUNTER} (e.g., INV-26010001)
Format: INV-{ACCOUNT_ID}-{YEAR}{MONTH}-{COUNTER}
"""
from django.db import transaction
now = timezone.now()
prefix = f"INV-{now.year % 100:02d}{now.month:02d}"
prefix = f"INV-{account.id}-{now.year}{now.month:02d}"
# Use atomic transaction with SELECT FOR UPDATE to prevent race conditions
with transaction.atomic():
# Lock the invoice table for this month to get accurate count
# Lock the invoice table for this account/month to get accurate count
count = Invoice.objects.select_for_update().filter(
account=account,
created_at__year=now.year,
created_at__month=now.month
).count()
invoice_number = f"{prefix}{count + 1:04d}"
invoice_number = f"{prefix}-{count + 1:04d}"
# Double-check uniqueness (should not happen with lock, but safety check)
while Invoice.objects.filter(invoice_number=invoice_number).exists():
count += 1
invoice_number = f"{prefix}{count + 1:04d}"
invoice_number = f"{prefix}-{count + 1:04d}"
return invoice_number
@@ -85,11 +52,6 @@ class InvoiceService:
) -> Invoice:
"""
Create invoice for subscription billing period
SIMPLIFIED CURRENCY LOGIC:
- ALL invoices are in USD (consistent for accounting)
- PKR equivalent is calculated and stored in metadata for display purposes
- Bank transfer users see PKR equivalent but invoice is technically USD
"""
account = subscription.account
plan = subscription.plan
@@ -112,15 +74,12 @@ class InvoiceService:
invoice_date = timezone.now().date()
due_date = invoice_date + timedelta(days=INVOICE_DUE_DATE_OFFSET)
# ALWAYS use USD for invoices (simplified accounting)
# Get currency based on billing country
from igny8_core.business.billing.utils.currency import get_currency_for_country, convert_usd_to_local
currency = get_currency_for_country(account.billing_country)
currency = 'USD'
usd_price = float(plan.price)
# Calculate local equivalent for display purposes (if applicable)
local_currency = get_currency_for_country(account.billing_country) if account.billing_country else 'USD'
local_equivalent = convert_usd_to_local(usd_price, account.billing_country) if local_currency != 'USD' else usd_price
# Convert plan price to local currency
local_price = convert_usd_to_local(float(plan.price), account.billing_country)
invoice = Invoice.objects.create(
account=account,
@@ -136,19 +95,16 @@ class InvoiceService:
'billing_period_end': billing_period_end.isoformat(),
'subscription_id': subscription.id, # Keep in metadata for backward compatibility
'usd_price': str(plan.price), # Store original USD price
'local_currency': local_currency, # Store local currency code for display
'local_equivalent': str(round(local_equivalent, 2)), # Store local equivalent for display
'exchange_rate': str(local_equivalent / usd_price if usd_price > 0 else 1.0),
'payment_method': account.payment_method
'exchange_rate': str(local_price / float(plan.price) if plan.price > 0 else 1.0)
}
)
# Add line item for subscription in USD
# Add line item for subscription with converted price
invoice.add_line_item(
description=f"{plan.name} Plan - {billing_period_start.strftime('%b %Y')}",
quantity=1,
unit_price=Decimal(str(usd_price)),
amount=Decimal(str(usd_price))
unit_price=Decimal(str(local_price)),
amount=Decimal(str(local_price))
)
invoice.calculate_totals()
@@ -164,23 +120,16 @@ class InvoiceService:
) -> Invoice:
"""
Create invoice for credit package purchase
SIMPLIFIED CURRENCY LOGIC:
- ALL invoices are in USD (consistent for accounting)
- PKR equivalent is calculated and stored in metadata for display purposes
"""
from igny8_core.business.billing.config import INVOICE_DUE_DATE_OFFSET
invoice_date = timezone.now().date()
# ALWAYS use USD for invoices (simplified accounting)
# Get currency based on billing country
from igny8_core.business.billing.utils.currency import get_currency_for_country, convert_usd_to_local
currency = get_currency_for_country(account.billing_country)
currency = 'USD'
usd_price = float(credit_package.price)
# Calculate local equivalent for display purposes (if applicable)
local_currency = get_currency_for_country(account.billing_country) if account.billing_country else 'USD'
local_equivalent = convert_usd_to_local(usd_price, account.billing_country) if local_currency != 'USD' else usd_price
# Convert credit package price to local currency
local_price = convert_usd_to_local(float(credit_package.price), account.billing_country)
invoice = Invoice.objects.create(
account=account,
@@ -194,19 +143,16 @@ class InvoiceService:
'credit_package_id': credit_package.id,
'credit_amount': credit_package.credits,
'usd_price': str(credit_package.price), # Store original USD price
'local_currency': local_currency, # Store local currency code for display
'local_equivalent': str(round(local_equivalent, 2)), # Store local equivalent for display
'exchange_rate': str(local_equivalent / usd_price if usd_price > 0 else 1.0),
'payment_method': account.payment_method
'exchange_rate': str(local_price / float(credit_package.price) if credit_package.price > 0 else 1.0)
},
)
# Add line item for credit package in USD
# Add line item for credit package with converted price
invoice.add_line_item(
description=f"{credit_package.name} - {credit_package.credits:,} Credits",
quantity=1,
unit_price=Decimal(str(usd_price)),
amount=Decimal(str(usd_price))
unit_price=Decimal(str(local_price)),
amount=Decimal(str(local_price))
)
invoice.calculate_totals()
@@ -266,21 +212,10 @@ class InvoiceService:
transaction_id: Optional[str] = None
) -> Invoice:
"""
Mark invoice as paid and record payment details
Args:
invoice: Invoice to mark as paid
payment_method: Payment method used ('stripe', 'paypal', 'bank_transfer', etc.)
transaction_id: External transaction ID (Stripe payment intent, PayPal capture ID, etc.)
Mark invoice as paid
"""
invoice.status = 'paid'
invoice.paid_at = timezone.now()
invoice.payment_method = payment_method
# For Stripe payments, store the transaction ID in stripe_invoice_id field
if payment_method == 'stripe' and transaction_id:
invoice.stripe_invoice_id = transaction_id
invoice.save()
return invoice
@@ -304,13 +239,43 @@ class InvoiceService:
@staticmethod
def generate_pdf(invoice: Invoice) -> bytes:
"""
Generate professional PDF invoice using ReportLab
"""
from igny8_core.business.billing.services.pdf_service import InvoicePDFGenerator
Generate PDF for invoice
# Use the professional PDF generator
pdf_buffer = InvoicePDFGenerator.generate_invoice_pdf(invoice)
return pdf_buffer.getvalue()
TODO: Implement PDF generation using reportlab or weasyprint
For now, return placeholder
"""
from io import BytesIO
# Placeholder - implement PDF generation
buffer = BytesIO()
# Simple text representation for now
content = f"""
INVOICE #{invoice.invoice_number}
Bill To: {invoice.account.name}
Email: {invoice.billing_email}
Date: {invoice.created_at.strftime('%Y-%m-%d')}
Due Date: {invoice.due_date.strftime('%Y-%m-%d') if invoice.due_date else 'N/A'}
Line Items:
"""
for item in invoice.line_items:
content += f" {item['description']} - ${item['amount']}\n"
content += f"""
Subtotal: ${invoice.subtotal}
Tax: ${invoice.tax_amount}
Total: ${invoice.total_amount}
Status: {invoice.status.upper()}
"""
buffer.write(content.encode('utf-8'))
buffer.seek(0)
return buffer.getvalue()
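For reference, the per-account invoice number format introduced in `generate_invoice_number()` above can be sketched without the database lock (the real service wraps this in `select_for_update()` to keep the counter race-free; the account id and date here are made up):

```python
from datetime import date

def build_invoice_number(account_id: int, today: date, count_so_far: int) -> str:
    # Mirrors the INV-{ACCOUNT_ID}-{YEAR}{MONTH}-{COUNTER} format from
    # InvoiceService.generate_invoice_number(); counter is zero-padded to 4.
    prefix = f"INV-{account_id}-{today.year}{today.month:02d}"
    return f"{prefix}-{count_so_far + 1:04d}"

print(build_invoice_number(42, date(2026, 1, 5), 7))  # INV-42-202601-0008
```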
@staticmethod
def get_account_invoices(


@@ -1,6 +1,6 @@
"""
Limit Service for Plan Limit Enforcement
Manages hard limits (sites, users, keywords) and monthly limits (ahrefs_queries)
Manages hard limits (sites, users, keywords, clusters) and monthly limits (ideas, words, images, prompts)
"""
from django.db import transaction
from django.utils import timezone
@@ -18,12 +18,12 @@ class LimitExceededError(Exception):
class HardLimitExceededError(LimitExceededError):
"""Raised when a hard limit (sites, users, keywords) is exceeded"""
"""Raised when a hard limit (sites, users, keywords, clusters) is exceeded"""
pass
class MonthlyLimitExceededError(LimitExceededError):
"""Raised when a monthly limit (ahrefs_queries) is exceeded"""
"""Raised when a monthly limit (ideas, words, images, prompts) is exceeded"""
pass
@@ -31,7 +31,6 @@ class LimitService:
"""Service for managing and enforcing plan limits"""
# Map limit types to model/field names
# Simplified to only 3 hard limits: sites, users, keywords
HARD_LIMIT_MAPPINGS = {
'sites': {
'model': 'igny8_core_auth.Site',
@@ -40,10 +39,10 @@ class LimitService:
'filter_field': 'account',
},
'users': {
'model': 'igny8_core_auth.User',
'model': 'igny8_core_auth.SiteUserAccess',
'plan_field': 'max_users',
'display_name': 'Team Members',
'filter_field': 'account',
'display_name': 'Team Users',
'filter_field': 'site__account',
},
'keywords': {
'model': 'planner.Keywords',
@@ -51,15 +50,39 @@ class LimitService:
'display_name': 'Keywords',
'filter_field': 'account',
},
'clusters': {
'model': 'planner.Clusters',
'plan_field': 'max_clusters',
'display_name': 'Clusters',
'filter_field': 'account',
},
}
# Simplified to only 1 monthly limit: ahrefs_queries
# All other consumption is controlled by credits only
MONTHLY_LIMIT_MAPPINGS = {
'ahrefs_queries': {
'plan_field': 'max_ahrefs_queries',
'usage_field': 'usage_ahrefs_queries',
'display_name': 'Keyword Research Queries',
'content_ideas': {
'plan_field': 'max_content_ideas',
'usage_field': 'usage_content_ideas',
'display_name': 'Content Ideas',
},
'content_words': {
'plan_field': 'max_content_words',
'usage_field': 'usage_content_words',
'display_name': 'Content Words',
},
'images_basic': {
'plan_field': 'max_images_basic',
'usage_field': 'usage_images_basic',
'display_name': 'Basic Images',
},
'images_premium': {
'plan_field': 'max_images_premium',
'usage_field': 'usage_images_premium',
'display_name': 'Premium Images',
},
'image_prompts': {
'plan_field': 'max_image_prompts',
'usage_field': 'usage_image_prompts',
'display_name': 'Image Prompts',
},
}
@@ -295,8 +318,11 @@ class LimitService:
Returns:
dict: Summary of reset operation
"""
# Reset only ahrefs_queries (the only monthly limit now)
account.usage_ahrefs_queries = 0
account.usage_content_ideas = 0
account.usage_content_words = 0
account.usage_images_basic = 0
account.usage_images_premium = 0
account.usage_image_prompts = 0
old_period_end = account.usage_period_end
@@ -315,7 +341,8 @@ class LimitService:
account.usage_period_end = new_period_end
account.save(update_fields=[
'usage_ahrefs_queries',
'usage_content_ideas', 'usage_content_words',
'usage_images_basic', 'usage_images_premium', 'usage_image_prompts',
'usage_period_start', 'usage_period_end', 'updated_at'
])
@@ -326,5 +353,5 @@ class LimitService:
'old_period_end': old_period_end.isoformat() if old_period_end else None,
'new_period_start': new_period_start.isoformat(),
'new_period_end': new_period_end.isoformat(),
'limits_reset': 1,
'limits_reset': 5,
}
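The mapping-driven design above means one generic check can serve every monthly limit: each entry names a plan cap field and a usage counter field on the account. A simplified sketch, using a stand-in object instead of the Django `Account` model:

```python
# Simplified version of MONTHLY_LIMIT_MAPPINGS: each limit names the plan cap
# and the usage counter that back it.
MONTHLY_LIMITS = {
    "content_ideas": {"plan_field": "max_content_ideas",
                      "usage_field": "usage_content_ideas"},
}

class FakeAccount:  # stand-in for the Django Account model
    max_content_ideas = 100
    usage_content_ideas = 98

def within_monthly_limit(account, limit_type: str, requested: int = 1) -> bool:
    mapping = MONTHLY_LIMITS[limit_type]
    cap = getattr(account, mapping["plan_field"])
    used = getattr(account, mapping["usage_field"])
    return used + requested <= cap

acct = FakeAccount()
print(within_monthly_limit(acct, "content_ideas", 2))  # True  (98 + 2 <= 100)
print(within_monthly_limit(acct, "content_ideas", 3))  # False (98 + 3 > 100)
```

The real `LimitService` raises `MonthlyLimitExceededError` instead of returning `False`, but the cap/usage lookup is the same shape.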


@@ -105,15 +105,11 @@ class PaymentService:
) -> Payment:
"""
Mark payment as completed and update invoice
For automatic payments (Stripe/PayPal), sets approved_at but leaves approved_by as None
"""
from .invoice_service import InvoiceService
payment.status = 'succeeded'
payment.processed_at = timezone.now()
# For automatic payments, set approved_at to indicate when payment was verified
# approved_by stays None to indicate it was automated, not manual approval
payment.approved_at = timezone.now()
if transaction_id:
payment.transaction_reference = transaction_id


@@ -1,679 +0,0 @@
"""
PayPal Service - REST API v2 integration
Handles:
- Order creation and capture for one-time payments
- Subscription management
- Webhook verification
Configuration stored in IntegrationProvider model (provider_id='paypal')
Endpoints:
- Sandbox: https://api-m.sandbox.paypal.com
- Production: https://api-m.paypal.com
"""
import requests
import base64
import logging
from typing import Optional, Dict, Any
from django.conf import settings
from igny8_core.modules.system.models import IntegrationProvider
logger = logging.getLogger(__name__)
class PayPalConfigurationError(Exception):
"""Raised when PayPal is not properly configured"""
pass
class PayPalAPIError(Exception):
"""Raised when PayPal API returns an error"""
def __init__(self, message: str, status_code: int = None, response: dict = None):
super().__init__(message)
self.status_code = status_code
self.response = response
class PayPalService:
"""Service for PayPal payment operations using REST API v2"""
SANDBOX_URL = 'https://api-m.sandbox.paypal.com'
PRODUCTION_URL = 'https://api-m.paypal.com'
def __init__(self):
"""
Initialize PayPal service with credentials from IntegrationProvider.
Raises:
PayPalConfigurationError: If PayPal provider not configured or missing credentials
"""
provider = IntegrationProvider.get_provider('paypal')
if not provider:
raise PayPalConfigurationError(
"PayPal provider not configured. Add 'paypal' provider in admin."
)
if not provider.api_key or not provider.api_secret:
raise PayPalConfigurationError(
"PayPal client credentials not configured. "
"Set api_key (Client ID) and api_secret (Client Secret) in provider."
)
self.client_id = provider.api_key
self.client_secret = provider.api_secret
self.is_sandbox = provider.is_sandbox
self.provider = provider
self.config = provider.config or {}
# Set base URL
if provider.api_endpoint:
self.base_url = provider.api_endpoint.rstrip('/')
else:
self.base_url = self.SANDBOX_URL if self.is_sandbox else self.PRODUCTION_URL
# Cache access token
self._access_token = None
self._token_expires_at = None
# Configuration
self.currency = self.config.get('currency', 'USD')
self.webhook_id = self.config.get('webhook_id', '')
logger.info(
f"PayPal service initialized (sandbox={self.is_sandbox}, "
f"base_url={self.base_url})"
)
@property
def frontend_url(self) -> str:
"""Get frontend URL from Django settings"""
return getattr(settings, 'FRONTEND_URL', 'http://localhost:3000')
@property
def return_url(self) -> str:
"""Get return URL for PayPal redirects"""
return self.config.get(
'return_url',
f'{self.frontend_url}/account/plans?paypal=success'
)
@property
def cancel_url(self) -> str:
"""Get cancel URL for PayPal redirects"""
return self.config.get(
'cancel_url',
f'{self.frontend_url}/account/plans?paypal=cancel'
)
# ========== Authentication ==========
def _get_access_token(self) -> str:
"""
Get OAuth 2.0 access token from PayPal.
Returns:
str: Access token
Raises:
PayPalAPIError: If token request fails
"""
import time
# Return cached token if still valid
if self._access_token and self._token_expires_at:
if time.time() < self._token_expires_at - 60: # 60 second buffer
return self._access_token
# Create Basic auth header
auth_string = f'{self.client_id}:{self.client_secret}'
auth_bytes = base64.b64encode(auth_string.encode()).decode()
response = requests.post(
f'{self.base_url}/v1/oauth2/token',
headers={
'Authorization': f'Basic {auth_bytes}',
'Content-Type': 'application/x-www-form-urlencoded',
},
data='grant_type=client_credentials',
timeout=30,
)
if response.status_code != 200:
logger.error(f"PayPal token request failed: {response.text}")
raise PayPalAPIError(
"Failed to obtain PayPal access token",
status_code=response.status_code,
response=response.json() if response.text else None
)
data = response.json()
self._access_token = data['access_token']
self._token_expires_at = time.time() + data.get('expires_in', 32400)
logger.debug("PayPal access token obtained successfully")
return self._access_token
def _make_request(
self,
method: str,
endpoint: str,
json_data: dict = None,
params: dict = None,
timeout: int = 30,
) -> dict:
"""
Make authenticated API request to PayPal.
Args:
method: HTTP method (GET, POST, etc.)
endpoint: API endpoint (e.g., '/v2/checkout/orders')
json_data: JSON body data
params: Query parameters
timeout: Request timeout in seconds
Returns:
dict: Response JSON
Raises:
PayPalAPIError: If request fails
"""
token = self._get_access_token()
headers = {
'Authorization': f'Bearer {token}',
'Content-Type': 'application/json',
}
url = f'{self.base_url}{endpoint}'
response = requests.request(
method=method,
url=url,
headers=headers,
json=json_data,
params=params,
timeout=timeout,
)
# Handle no content response
if response.status_code == 204:
return {}
# Parse JSON response
try:
response_data = response.json() if response.text else {}
except Exception:
response_data = {'raw': response.text}
# Check for errors
if response.status_code >= 400:
error_msg = response_data.get('message', str(response_data))
logger.error(f"PayPal API error: {error_msg}")
raise PayPalAPIError(
f"PayPal API error: {error_msg}",
status_code=response.status_code,
response=response_data
)
return response_data
# ========== Order Operations ==========
def create_order(
self,
account,
amount: float,
currency: str = None,
description: str = '',
return_url: str = None,
cancel_url: str = None,
metadata: dict = None,
) -> Dict[str, Any]:
"""
Create PayPal order for one-time payment.
Args:
account: Account model instance
amount: Payment amount
currency: Currency code (default from config)
description: Payment description
return_url: URL to redirect after approval
cancel_url: URL to redirect on cancellation
metadata: Additional metadata to store
Returns:
dict: Order data including order_id and approval_url
"""
currency = currency or self.currency
return_url = return_url or self.return_url
cancel_url = cancel_url or self.cancel_url
# Build order payload
order_data = {
'intent': 'CAPTURE',
'purchase_units': [{
'amount': {
'currency_code': currency,
'value': f'{amount:.2f}',
},
'description': description or 'IGNY8 Payment',
'custom_id': str(account.id),
'reference_id': str(account.id),
}],
'application_context': {
'return_url': return_url,
'cancel_url': cancel_url,
'brand_name': 'IGNY8',
'landing_page': 'BILLING',
'user_action': 'PAY_NOW',
'shipping_preference': 'NO_SHIPPING',
}
}
# Create order
response = self._make_request('POST', '/v2/checkout/orders', json_data=order_data)
# Extract approval URL
approval_url = None
for link in response.get('links', []):
if link.get('rel') == 'approve':
approval_url = link.get('href')
break
logger.info(
f"Created PayPal order {response.get('id')} for account {account.id}, "
f"amount {currency} {amount}"
)
return {
'order_id': response.get('id'),
'status': response.get('status'),
'approval_url': approval_url,
'links': response.get('links', []),
}
def create_credit_order(
self,
account,
credit_package,
return_url: str = None,
cancel_url: str = None,
) -> Dict[str, Any]:
"""
Create PayPal order for credit package purchase.
Args:
account: Account model instance
credit_package: CreditPackage model instance
return_url: URL to redirect after approval
cancel_url: URL to redirect on cancellation
Returns:
dict: Order data including order_id and approval_url
"""
return_url = return_url or f'{self.frontend_url}/account/usage?paypal=success'
cancel_url = cancel_url or f'{self.frontend_url}/account/usage?paypal=cancel'
# Add credit package info to custom_id for webhook processing
order = self.create_order(
account=account,
amount=float(credit_package.price),
description=f'{credit_package.name} - {credit_package.credits} credits',
return_url=f'{return_url}&package_id={credit_package.id}',
cancel_url=cancel_url,
)
# Store package info in order
order['credit_package_id'] = str(credit_package.id)
order['credit_amount'] = credit_package.credits
return order
def capture_order(self, order_id: str) -> Dict[str, Any]:
"""
Capture payment for approved order.
Call this after the customer approves the order at PayPal.
Args:
order_id: PayPal order ID
Returns:
dict: Capture result with payment details
"""
response = self._make_request(
'POST',
f'/v2/checkout/orders/{order_id}/capture'
)
# Extract capture details
capture_id = None
amount = None
currency = None
if response.get('purchase_units'):
captures = response['purchase_units'][0].get('payments', {}).get('captures', [])
if captures:
capture = captures[0]
capture_id = capture.get('id')
amount = capture.get('amount', {}).get('value')
currency = capture.get('amount', {}).get('currency_code')
logger.info(
f"Captured PayPal order {order_id}, capture_id={capture_id}, "
f"amount={currency} {amount}"
)
return {
'order_id': response.get('id'),
'status': response.get('status'),
'capture_id': capture_id,
'amount': amount,
'currency': currency,
'payer': response.get('payer', {}),
'custom_id': (response.get('purchase_units') or [{}])[0].get('custom_id'),
}
def get_order(self, order_id: str) -> Dict[str, Any]:
"""
Get order details.
Args:
order_id: PayPal order ID
Returns:
dict: Order details
"""
response = self._make_request('GET', f'/v2/checkout/orders/{order_id}')
return {
'order_id': response.get('id'),
'status': response.get('status'),
'intent': response.get('intent'),
'payer': response.get('payer', {}),
'purchase_units': response.get('purchase_units', []),
'create_time': response.get('create_time'),
'update_time': response.get('update_time'),
}
# ========== Subscription Operations ==========
def create_subscription(
self,
account,
plan_id: str,
return_url: str = None,
cancel_url: str = None,
) -> Dict[str, Any]:
"""
Create PayPal subscription.
Requires plan to be created in PayPal dashboard first.
Args:
account: Account model instance
plan_id: PayPal Plan ID (created in PayPal dashboard)
return_url: URL to redirect after approval
cancel_url: URL to redirect on cancellation
Returns:
dict: Subscription data including approval_url
"""
return_url = return_url or self.return_url
cancel_url = cancel_url or self.cancel_url
subscription_data = {
'plan_id': plan_id,
'custom_id': str(account.id),
'application_context': {
'return_url': return_url,
'cancel_url': cancel_url,
'brand_name': 'IGNY8',
'locale': 'en-US',
'shipping_preference': 'NO_SHIPPING',
'user_action': 'SUBSCRIBE_NOW',
'payment_method': {
'payer_selected': 'PAYPAL',
'payee_preferred': 'IMMEDIATE_PAYMENT_REQUIRED',
}
}
}
response = self._make_request(
'POST',
'/v1/billing/subscriptions',
json_data=subscription_data
)
# Extract approval URL
approval_url = None
for link in response.get('links', []):
if link.get('rel') == 'approve':
approval_url = link.get('href')
break
logger.info(
f"Created PayPal subscription {response.get('id')} for account {account.id}"
)
return {
'subscription_id': response.get('id'),
'status': response.get('status'),
'approval_url': approval_url,
'links': response.get('links', []),
}
def get_subscription(self, subscription_id: str) -> Dict[str, Any]:
"""
Get subscription details.
Args:
subscription_id: PayPal subscription ID
Returns:
dict: Subscription details
"""
response = self._make_request(
'GET',
f'/v1/billing/subscriptions/{subscription_id}'
)
return {
'subscription_id': response.get('id'),
'status': response.get('status'),
'plan_id': response.get('plan_id'),
'start_time': response.get('start_time'),
'billing_info': response.get('billing_info', {}),
'custom_id': response.get('custom_id'),
}
def cancel_subscription(
self,
subscription_id: str,
reason: str = 'Customer requested cancellation'
) -> Dict[str, Any]:
"""
Cancel PayPal subscription.
Args:
subscription_id: PayPal subscription ID
reason: Reason for cancellation
Returns:
dict: Cancellation result
"""
self._make_request(
'POST',
f'/v1/billing/subscriptions/{subscription_id}/cancel',
json_data={'reason': reason}
)
logger.info(f"Cancelled PayPal subscription {subscription_id}")
return {
'subscription_id': subscription_id,
'status': 'CANCELLED',
}
def suspend_subscription(self, subscription_id: str, reason: str = '') -> Dict[str, Any]:
"""
Suspend PayPal subscription.
Args:
subscription_id: PayPal subscription ID
reason: Reason for suspension
Returns:
dict: Suspension result
"""
self._make_request(
'POST',
f'/v1/billing/subscriptions/{subscription_id}/suspend',
json_data={'reason': reason}
)
logger.info(f"Suspended PayPal subscription {subscription_id}")
return {
'subscription_id': subscription_id,
'status': 'SUSPENDED',
}
def activate_subscription(self, subscription_id: str, reason: str = '') -> Dict[str, Any]:
"""
Activate/reactivate PayPal subscription.
Args:
subscription_id: PayPal subscription ID
reason: Reason for activation
Returns:
dict: Activation result
"""
self._make_request(
'POST',
f'/v1/billing/subscriptions/{subscription_id}/activate',
json_data={'reason': reason}
)
logger.info(f"Activated PayPal subscription {subscription_id}")
return {
'subscription_id': subscription_id,
'status': 'ACTIVE',
}
# ========== Webhook Verification ==========
def verify_webhook_signature(
self,
headers: dict,
body: dict,
) -> bool:
"""
Verify webhook signature from PayPal.
Args:
headers: Request headers (dict-like)
body: Request body (parsed JSON dict)
Returns:
bool: True if signature is valid
"""
if not self.webhook_id:
logger.warning("PayPal webhook_id not configured, skipping verification")
return True  # Fails open when webhook_id is unset; return False here to fail closed instead
verification_data = {
'auth_algo': headers.get('PAYPAL-AUTH-ALGO'),
'cert_url': headers.get('PAYPAL-CERT-URL'),
'transmission_id': headers.get('PAYPAL-TRANSMISSION-ID'),
'transmission_sig': headers.get('PAYPAL-TRANSMISSION-SIG'),
'transmission_time': headers.get('PAYPAL-TRANSMISSION-TIME'),
'webhook_id': self.webhook_id,
'webhook_event': body,
}
try:
response = self._make_request(
'POST',
'/v1/notifications/verify-webhook-signature',
json_data=verification_data
)
is_valid = response.get('verification_status') == 'SUCCESS'
if not is_valid:
logger.warning(
f"PayPal webhook verification failed: {response.get('verification_status')}"
)
return is_valid
except PayPalAPIError as e:
logger.error(f"PayPal webhook verification error: {e}")
return False
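The verification call above assembles five PayPal transmission headers plus the configured webhook id into the `verify-webhook-signature` request body. A minimal standalone sketch of that mapping (header names and dict keys are the ones used above; the sample header values are illustrative, not real PayPal data):

```python
# Illustrative sketch of the webhook-verification payload assembly
# from verify_webhook_signature() above. 'webhook_event' carries the
# parsed JSON body of the incoming webhook request.

def build_verification_payload(headers: dict, body: dict, webhook_id: str) -> dict:
    """Map PayPal transmission headers into the verification request body."""
    return {
        'auth_algo': headers.get('PAYPAL-AUTH-ALGO'),
        'cert_url': headers.get('PAYPAL-CERT-URL'),
        'transmission_id': headers.get('PAYPAL-TRANSMISSION-ID'),
        'transmission_sig': headers.get('PAYPAL-TRANSMISSION-SIG'),
        'transmission_time': headers.get('PAYPAL-TRANSMISSION-TIME'),
        'webhook_id': webhook_id,
        'webhook_event': body,
    }

payload = build_verification_payload(
    {'PAYPAL-TRANSMISSION-ID': 'abc-123', 'PAYPAL-AUTH-ALGO': 'SHA256withRSA'},
    {'event_type': 'PAYMENT.CAPTURE.COMPLETED'},
    'WH-TEST',
)
print(payload['transmission_id'], payload['webhook_id'])
```

Missing headers simply map to `None`, which PayPal's endpoint then rejects as an unverifiable signature.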
# ========== Refunds ==========
def refund_capture(
self,
capture_id: str,
amount: float = None,
currency: str = None,
note: str = None,
) -> Dict[str, Any]:
"""
Refund a captured payment.
Args:
capture_id: PayPal capture ID
amount: Amount to refund (None for full refund)
currency: Currency code
note: Note to payer
Returns:
dict: Refund details
"""
refund_data = {}
if amount is not None:
refund_data['amount'] = {
'value': f'{amount:.2f}',
'currency_code': currency or self.currency,
}
if note:
refund_data['note_to_payer'] = note
response = self._make_request(
'POST',
f'/v2/payments/captures/{capture_id}/refund',
json_data=refund_data if refund_data else None
)
logger.info(
f"Refunded PayPal capture {capture_id}, refund_id={response.get('id')}"
)
return {
'refund_id': response.get('id'),
'status': response.get('status'),
'amount': response.get('amount', {}).get('value'),
'currency': response.get('amount', {}).get('currency_code'),
}
# Convenience function
def get_paypal_service() -> PayPalService:
"""
Get PayPalService instance.
Returns:
PayPalService: Initialized service
Raises:
PayPalConfigurationError: If PayPal not configured
"""
return PayPalService()
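Both `create_order` and `create_subscription` above locate the buyer-approval URL by scanning the `links` array of the API response. That extraction can be exercised in isolation; this is a minimal sketch using a mocked order payload (the sample values are illustrative, not a live API response, though the `links` shape matches what the v2 Orders API returns):

```python
from typing import Optional

# Minimal sketch: pull the buyer-approval URL out of a PayPal
# /v2/checkout/orders response, mirroring the loop in create_order().

def extract_approval_url(order_response: dict) -> Optional[str]:
    """Return the 'approve' link from a PayPal order payload, if present."""
    for link in order_response.get('links', []):
        if link.get('rel') == 'approve':
            return link.get('href')
    return None

sample_order = {
    'id': '5O190127TN364715T',
    'status': 'CREATED',
    'links': [
        {'rel': 'self', 'href': 'https://api-m.sandbox.paypal.com/v2/checkout/orders/5O190127TN364715T'},
        {'rel': 'approve', 'href': 'https://www.sandbox.paypal.com/checkoutnow?token=5O190127TN364715T'},
    ],
}

print(extract_approval_url(sample_order))
```

A response with no `approve` link (for example, an already-captured order) yields `None`, which callers should treat as "nothing to redirect to".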

View File

@@ -9,32 +9,17 @@ from reportlab.lib import colors
from reportlab.lib.pagesizes import letter
from reportlab.lib.styles import getSampleStyleSheet, ParagraphStyle
from reportlab.lib.units import inch
from reportlab.platypus import SimpleDocTemplate, Table, TableStyle, Paragraph, Spacer, Image, HRFlowable
from reportlab.platypus import SimpleDocTemplate, Table, TableStyle, Paragraph, Spacer, Image
from reportlab.lib.enums import TA_LEFT, TA_RIGHT, TA_CENTER
from django.conf import settings
import os
import logging
logger = logging.getLogger(__name__)
# Logo path - check multiple possible locations
LOGO_PATHS = [
'/data/app/igny8/frontend/public/images/logo/IGNY8_LIGHT_LOGO.png',
'/app/static/images/logo/IGNY8_LIGHT_LOGO.png',
]
class InvoicePDFGenerator:
"""Generate PDF invoices"""
@staticmethod
def get_logo_path():
"""Find the logo file from possible locations"""
for path in LOGO_PATHS:
if os.path.exists(path):
return path
return None
@staticmethod
def generate_invoice_pdf(invoice):
"""
@@ -54,8 +39,8 @@ class InvoicePDFGenerator:
pagesize=letter,
rightMargin=0.75*inch,
leftMargin=0.75*inch,
topMargin=0.5*inch,
bottomMargin=0.5*inch
topMargin=0.75*inch,
bottomMargin=0.75*inch
)
# Container for PDF elements
@@ -66,19 +51,17 @@ class InvoicePDFGenerator:
title_style = ParagraphStyle(
'CustomTitle',
parent=styles['Heading1'],
fontSize=28,
fontSize=24,
textColor=colors.HexColor('#1f2937'),
spaceAfter=0,
fontName='Helvetica-Bold',
spaceAfter=30,
)
heading_style = ParagraphStyle(
'CustomHeading',
parent=styles['Heading2'],
fontSize=12,
textColor=colors.HexColor('#1f2937'),
spaceAfter=8,
fontName='Helvetica-Bold',
fontSize=14,
textColor=colors.HexColor('#374151'),
spaceAfter=12,
)
normal_style = ParagraphStyle(
@@ -86,292 +69,145 @@ class InvoicePDFGenerator:
parent=styles['Normal'],
fontSize=10,
textColor=colors.HexColor('#4b5563'),
fontName='Helvetica',
)
label_style = ParagraphStyle(
'LabelStyle',
parent=styles['Normal'],
fontSize=9,
textColor=colors.HexColor('#6b7280'),
fontName='Helvetica',
)
# Header
elements.append(Paragraph('INVOICE', title_style))
elements.append(Spacer(1, 0.2*inch))
value_style = ParagraphStyle(
'ValueStyle',
parent=styles['Normal'],
fontSize=10,
textColor=colors.HexColor('#1f2937'),
fontName='Helvetica-Bold',
)
# Company info and invoice details side by side
company_data = [
['<b>From:</b>', f'<b>Invoice #:</b> {invoice.invoice_number}'],
[getattr(settings, 'COMPANY_NAME', 'Igny8'), f'<b>Date:</b> {invoice.created_at.strftime("%B %d, %Y")}'],
[getattr(settings, 'COMPANY_ADDRESS', ''), f'<b>Due Date:</b> {invoice.due_date.strftime("%B %d, %Y")}'],
[getattr(settings, 'COMPANY_EMAIL', settings.DEFAULT_FROM_EMAIL), f'<b>Status:</b> {invoice.status.upper()}'],
]
right_align_style = ParagraphStyle(
'RightAlign',
parent=styles['Normal'],
fontSize=10,
textColor=colors.HexColor('#4b5563'),
alignment=TA_RIGHT,
fontName='Helvetica',
)
right_bold_style = ParagraphStyle(
'RightBold',
parent=styles['Normal'],
fontSize=10,
textColor=colors.HexColor('#1f2937'),
alignment=TA_RIGHT,
fontName='Helvetica-Bold',
)
# Header with Logo and Invoice title
logo_path = InvoicePDFGenerator.get_logo_path()
header_data = []
if logo_path:
try:
logo = Image(logo_path, width=1.5*inch, height=0.5*inch)
logo.hAlign = 'LEFT'
header_data = [[logo, Paragraph('INVOICE', title_style)]]
except Exception as e:
logger.warning(f"Could not load logo: {e}")
header_data = [[Paragraph('IGNY8', title_style), Paragraph('INVOICE', title_style)]]
else:
header_data = [[Paragraph('IGNY8', title_style), Paragraph('INVOICE', title_style)]]
header_table = Table(header_data, colWidths=[3.5*inch, 3*inch])
header_table.setStyle(TableStyle([
('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
('ALIGN', (0, 0), (0, 0), 'LEFT'),
('ALIGN', (1, 0), (1, 0), 'RIGHT'),
company_table = Table(company_data, colWidths=[3.5*inch, 3*inch])
company_table.setStyle(TableStyle([
('FONTNAME', (0, 0), (-1, -1), 'Helvetica'),
('FONTSIZE', (0, 0), (-1, -1), 10),
('TEXTCOLOR', (0, 0), (-1, -1), colors.HexColor('#4b5563')),
('VALIGN', (0, 0), (-1, -1), 'TOP'),
('ALIGN', (1, 0), (1, -1), 'RIGHT'),
]))
elements.append(header_table)
elements.append(company_table)
elements.append(Spacer(1, 0.3*inch))
# Divider line
elements.append(HRFlowable(width="100%", thickness=1, color=colors.HexColor('#e5e7eb'), spaceAfter=20))
# Invoice details section (right side info)
invoice_info = [
[Paragraph('Invoice Number:', label_style), Paragraph(invoice.invoice_number, value_style)],
[Paragraph('Date:', label_style), Paragraph(invoice.created_at.strftime("%B %d, %Y"), value_style)],
[Paragraph('Due Date:', label_style), Paragraph(invoice.due_date.strftime("%B %d, %Y"), value_style)],
[Paragraph('Status:', label_style), Paragraph(invoice.status.upper(), value_style)],
# Bill to section
elements.append(Paragraph('<b>Bill To:</b>', heading_style))
bill_to_data = [
[invoice.account.name],
[invoice.account.owner.email],
]
invoice_info_table = Table(invoice_info, colWidths=[1.2*inch, 2*inch])
invoice_info_table.setStyle(TableStyle([
('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
('BOTTOMPADDING', (0, 0), (-1, -1), 4),
('TOPPADDING', (0, 0), (-1, -1), 4),
]))
if hasattr(invoice.account, 'billing_email') and invoice.account.billing_email:
bill_to_data.append([f'Billing: {invoice.account.billing_email}'])
# From and To section
company_name = getattr(settings, 'COMPANY_NAME', 'Igny8')
company_email = getattr(settings, 'COMPANY_EMAIL', settings.DEFAULT_FROM_EMAIL)
for line in bill_to_data:
elements.append(Paragraph(line[0], normal_style))
from_section = [
Paragraph('FROM', heading_style),
Paragraph(company_name, value_style),
Paragraph(company_email, normal_style),
]
customer_name = invoice.account.name if invoice.account else 'N/A'
if invoice.account and invoice.account.owner:
customer_email = invoice.account.owner.email
elif invoice.account:
customer_email = invoice.account.billing_email
else:
customer_email = 'N/A'
billing_email = invoice.account.billing_email if invoice.account and hasattr(invoice.account, 'billing_email') and invoice.account.billing_email else None
to_section = [
Paragraph('BILL TO', heading_style),
Paragraph(customer_name, value_style),
Paragraph(customer_email, normal_style),
]
if billing_email and billing_email != customer_email:
to_section.append(Paragraph(f'Billing: {billing_email}', normal_style))
# Create from/to layout
from_content = []
for item in from_section:
from_content.append([item])
from_table = Table(from_content, colWidths=[3*inch])
to_content = []
for item in to_section:
to_content.append([item])
to_table = Table(to_content, colWidths=[3*inch])
# Main info layout with From, To, and Invoice details
main_info = [[from_table, to_table, invoice_info_table]]
main_info_table = Table(main_info, colWidths=[2.3*inch, 2.3*inch, 2.4*inch])
main_info_table.setStyle(TableStyle([
('VALIGN', (0, 0), (-1, -1), 'TOP'),
]))
elements.append(main_info_table)
elements.append(Spacer(1, 0.4*inch))
elements.append(Spacer(1, 0.3*inch))
# Line items table
elements.append(Paragraph('ITEMS', heading_style))
elements.append(Spacer(1, 0.1*inch))
elements.append(Paragraph('<b>Items:</b>', heading_style))
# Table header - use Paragraph for proper rendering
# Table header
line_items_data = [
[
Paragraph('Description', ParagraphStyle('Header', fontName='Helvetica-Bold', fontSize=10, textColor=colors.HexColor('#374151'))),
Paragraph('Qty', ParagraphStyle('Header', fontName='Helvetica-Bold', fontSize=10, textColor=colors.HexColor('#374151'), alignment=TA_CENTER)),
Paragraph('Unit Price', ParagraphStyle('Header', fontName='Helvetica-Bold', fontSize=10, textColor=colors.HexColor('#374151'), alignment=TA_RIGHT)),
Paragraph('Amount', ParagraphStyle('Header', fontName='Helvetica-Bold', fontSize=10, textColor=colors.HexColor('#374151'), alignment=TA_RIGHT)),
]
['Description', 'Quantity', 'Unit Price', 'Amount']
]
# Get line items - line_items is a JSON field (list of dicts)
items = invoice.line_items or []
for item in items:
unit_price = float(item.get('unit_price', 0))
amount = float(item.get('amount', 0))
# Get line items
for item in invoice.line_items.all():
line_items_data.append([
Paragraph(item.get('description', ''), normal_style),
Paragraph(str(item.get('quantity', 1)), ParagraphStyle('Center', parent=normal_style, alignment=TA_CENTER)),
Paragraph(f'{invoice.currency} {unit_price:.2f}', right_align_style),
Paragraph(f'{invoice.currency} {amount:.2f}', right_align_style),
item.description,
str(item.quantity),
f'{invoice.currency} {item.unit_price:.2f}',
f'{invoice.currency} {item.total_price:.2f}'
])
# Add empty row for spacing before totals
line_items_data.append(['', '', '', ''])
# Add subtotal, tax, total rows
line_items_data.append(['', '', '<b>Subtotal:</b>', f'<b>{invoice.currency} {invoice.subtotal:.2f}</b>'])
if invoice.tax_amount and invoice.tax_amount > 0:
line_items_data.append(['', '', f'Tax ({invoice.tax_rate}%):', f'{invoice.currency} {invoice.tax_amount:.2f}'])
if invoice.discount_amount and invoice.discount_amount > 0:
line_items_data.append(['', '', 'Discount:', f'-{invoice.currency} {invoice.discount_amount:.2f}'])
line_items_data.append(['', '', '<b>Total:</b>', f'<b>{invoice.currency} {invoice.total_amount:.2f}</b>'])
# Create table
line_items_table = Table(
line_items_data,
colWidths=[3.2*inch, 0.8*inch, 1.25*inch, 1.25*inch]
colWidths=[3*inch, 1*inch, 1.25*inch, 1.25*inch]
)
num_items = len(items)
line_items_table.setStyle(TableStyle([
# Header row
('BACKGROUND', (0, 0), (-1, 0), colors.HexColor('#f3f4f6')),
('TEXTCOLOR', (0, 0), (-1, 0), colors.HexColor('#1f2937')),
('FONTNAME', (0, 0), (-1, 0), 'Helvetica-Bold'),
('FONTSIZE', (0, 0), (-1, 0), 10),
('BOTTOMPADDING', (0, 0), (-1, 0), 12),
('TOPPADDING', (0, 0), (-1, 0), 12),
# Body rows
('ROWBACKGROUNDS', (0, 1), (-1, num_items), [colors.white, colors.HexColor('#f9fafb')]),
('FONTNAME', (0, 1), (-1, -4), 'Helvetica'),
('FONTSIZE', (0, 1), (-1, -4), 9),
('TEXTCOLOR', (0, 1), (-1, -4), colors.HexColor('#4b5563')),
('ROWBACKGROUNDS', (0, 1), (-1, -4), [colors.white, colors.HexColor('#f9fafb')]),
# Alignment
('ALIGN', (1, 0), (1, -1), 'CENTER'),
('ALIGN', (2, 0), (-1, -1), 'RIGHT'),
('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
# Summary rows (last 3-4 rows)
('FONTNAME', (0, -4), (-1, -1), 'Helvetica'),
('FONTSIZE', (0, -4), (-1, -1), 9),
('ALIGN', (2, 0), (2, -1), 'RIGHT'),
('ALIGN', (3, 0), (3, -1), 'RIGHT'),
# Grid for items only
('LINEBELOW', (0, 0), (-1, 0), 1, colors.HexColor('#d1d5db')),
('LINEBELOW', (0, num_items), (-1, num_items), 1, colors.HexColor('#e5e7eb')),
# Grid
('GRID', (0, 0), (-1, -4), 0.5, colors.HexColor('#e5e7eb')),
('LINEABOVE', (2, -4), (-1, -4), 1, colors.HexColor('#d1d5db')),
('LINEABOVE', (2, -1), (-1, -1), 2, colors.HexColor('#1f2937')),
# Padding
('TOPPADDING', (0, 1), (-1, -1), 10),
('BOTTOMPADDING', (0, 1), (-1, -1), 10),
('LEFTPADDING', (0, 0), (-1, -1), 8),
('RIGHTPADDING', (0, 0), (-1, -1), 8),
('TOPPADDING', (0, 0), (-1, -1), 8),
('BOTTOMPADDING', (0, 0), (-1, -1), 8),
('LEFTPADDING', (0, 0), (-1, -1), 10),
('RIGHTPADDING', (0, 0), (-1, -1), 10),
]))
elements.append(line_items_table)
elements.append(Spacer(1, 0.2*inch))
# Totals section - right aligned
totals_data = [
[Paragraph('Subtotal:', right_align_style), Paragraph(f'{invoice.currency} {float(invoice.subtotal):.2f}', right_bold_style)],
]
tax_amount = float(invoice.tax or 0)
if tax_amount > 0:
tax_rate = invoice.metadata.get('tax_rate', 0) if invoice.metadata else 0
totals_data.append([
Paragraph(f'Tax ({tax_rate}%):', right_align_style),
Paragraph(f'{invoice.currency} {tax_amount:.2f}', right_align_style)
])
discount_amount = float(invoice.metadata.get('discount_amount', 0)) if invoice.metadata else 0
if discount_amount > 0:
totals_data.append([
Paragraph('Discount:', right_align_style),
Paragraph(f'-{invoice.currency} {discount_amount:.2f}', right_align_style)
])
totals_data.append([
Paragraph('Total:', ParagraphStyle('TotalLabel', fontName='Helvetica-Bold', fontSize=12, textColor=colors.HexColor('#1f2937'), alignment=TA_RIGHT)),
Paragraph(f'{invoice.currency} {float(invoice.total):.2f}', ParagraphStyle('TotalValue', fontName='Helvetica-Bold', fontSize=12, textColor=colors.HexColor('#1f2937'), alignment=TA_RIGHT))
])
totals_table = Table(totals_data, colWidths=[1.5*inch, 1.5*inch])
totals_table.setStyle(TableStyle([
('ALIGN', (0, 0), (-1, -1), 'RIGHT'),
('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
('TOPPADDING', (0, 0), (-1, -1), 6),
('BOTTOMPADDING', (0, 0), (-1, -1), 6),
('LINEABOVE', (0, -1), (-1, -1), 2, colors.HexColor('#1f2937')),
]))
# Right-align the totals table
totals_wrapper = Table([[totals_table]], colWidths=[6.5*inch])
totals_wrapper.setStyle(TableStyle([
('ALIGN', (0, 0), (0, 0), 'RIGHT'),
]))
elements.append(totals_wrapper)
elements.append(Spacer(1, 0.4*inch))
# Payment information
if invoice.status == 'paid':
elements.append(HRFlowable(width="100%", thickness=1, color=colors.HexColor('#e5e7eb'), spaceBefore=10, spaceAfter=15))
elements.append(Paragraph('PAYMENT INFORMATION', heading_style))
elements.append(Paragraph('<b>Payment Information:</b>', heading_style))
payment = invoice.payments.filter(status='succeeded').first()
if payment:
payment_method = payment.get_payment_method_display() if hasattr(payment, 'get_payment_method_display') else str(payment.payment_method)
payment_date = payment.processed_at.strftime("%B %d, %Y") if payment.processed_at else 'N/A'
payment_info = [
[Paragraph('Payment Method:', label_style), Paragraph(payment_method, value_style)],
[Paragraph('Paid On:', label_style), Paragraph(payment_date, value_style)],
f'Payment Method: {payment.get_payment_method_display()}',
f'Paid On: {payment.processed_at.strftime("%B %d, %Y")}',
]
if payment.manual_reference:
payment_info.append([Paragraph('Reference:', label_style), Paragraph(payment.manual_reference, value_style)])
payment_info.append(f'Reference: {payment.manual_reference}')
for line in payment_info:
elements.append(Paragraph(line, normal_style))
payment_table = Table(payment_info, colWidths=[1.5*inch, 3*inch])
payment_table.setStyle(TableStyle([
('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
('BOTTOMPADDING', (0, 0), (-1, -1), 4),
('TOPPADDING', (0, 0), (-1, -1), 4),
]))
elements.append(payment_table)
elements.append(Spacer(1, 0.2*inch))
# Footer / Notes
if invoice.notes:
elements.append(Spacer(1, 0.2*inch))
elements.append(Paragraph('NOTES', heading_style))
elements.append(Paragraph('<b>Notes:</b>', heading_style))
elements.append(Paragraph(invoice.notes, normal_style))
# Terms
elements.append(Spacer(1, 0.3*inch))
elements.append(HRFlowable(width="100%", thickness=1, color=colors.HexColor('#e5e7eb'), spaceAfter=15))
terms_style = ParagraphStyle(
'Terms',
parent=styles['Normal'],
fontSize=8,
textColor=colors.HexColor('#9ca3af'),
fontName='Helvetica',
)
terms = getattr(settings, 'INVOICE_TERMS', 'Payment is due within 7 days of invoice date. Thank you for your business!')
elements.append(Paragraph(f'Terms & Conditions: {terms}', terms_style))
# Footer with company info
elements.append(Spacer(1, 0.2*inch))
footer_style = ParagraphStyle(
'Footer',
parent=styles['Normal'],
fontSize=8,
textColor=colors.HexColor('#9ca3af'),
fontName='Helvetica',
alignment=TA_CENTER,
)
elements.append(Paragraph(f'Generated by IGNY8 • {company_email}', footer_style))
elements.append(Paragraph('<b>Terms & Conditions:</b>', heading_style))
terms = getattr(settings, 'INVOICE_TERMS', 'Payment is due within 7 days of invoice date.')
elements.append(Paragraph(terms, normal_style))
# Build PDF
doc.build(elements)
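The totals section rendered above assumes the invoice satisfies `subtotal + tax - discount == total`, with every amount printed to two decimal places. A standalone sketch of that arithmetic and formatting (the function name and the plain-float signature are illustrative; the real values come from the `Invoice` model fields used above):

```python
# Illustrative sketch of the invoice totals math printed by the
# generator above. Amounts are formatted with f'{value:.2f}', the
# same format string the PDF cells use.

def compute_total(subtotal: float, tax: float = 0.0, discount: float = 0.0) -> str:
    """Return the grand total formatted the way the PDF prints amounts."""
    total = subtotal + tax - discount
    return f'{total:.2f}'

print(compute_total(100.0, tax=5.0, discount=10.0))  # 95.00
```

For real money handling, `decimal.Decimal` would avoid float rounding drift, but the two-decimal formatting here matches the generator's display behavior.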

View File

@@ -1,627 +0,0 @@
"""
Stripe Service - Wrapper for Stripe API operations
Handles:
- Checkout sessions for subscriptions and credit packages
- Billing portal sessions for subscription management
- Webhook event construction and verification
- Customer management
Configuration stored in IntegrationProvider model (provider_id='stripe')
"""
import stripe
import logging
from typing import Optional, Dict, Any
from django.conf import settings
from django.utils import timezone
from igny8_core.modules.system.models import IntegrationProvider
logger = logging.getLogger(__name__)
class StripeConfigurationError(Exception):
"""Raised when Stripe is not properly configured"""
pass
class StripeService:
"""Service for Stripe payment operations"""
def __init__(self):
"""
Initialize Stripe service with credentials from IntegrationProvider.
Raises:
StripeConfigurationError: If Stripe provider not configured or missing credentials
"""
provider = IntegrationProvider.get_provider('stripe')
if not provider:
raise StripeConfigurationError(
"Stripe provider not configured. Add 'stripe' provider in admin."
)
if not provider.api_secret:
raise StripeConfigurationError(
"Stripe secret key not configured. Set api_secret in provider."
)
self.is_sandbox = provider.is_sandbox
self.provider = provider
# Set Stripe API key
stripe.api_key = provider.api_secret
# Store keys for reference
self.publishable_key = provider.api_key
self.webhook_secret = provider.webhook_secret
self.config = provider.config or {}
# Default currency from config
self.currency = self.config.get('currency', 'usd')
logger.info(
f"Stripe service initialized (sandbox={self.is_sandbox}, "
f"currency={self.currency})"
)
@property
def frontend_url(self) -> str:
"""Get frontend URL from Django settings"""
return getattr(settings, 'FRONTEND_URL', 'http://localhost:3000')
def get_publishable_key(self) -> str:
"""Return publishable key for frontend use"""
return self.publishable_key
# ========== Customer Management ==========
def _get_or_create_customer(self, account) -> str:
"""
Get existing Stripe customer or create new one.
Args:
account: Account model instance
Returns:
str: Stripe customer ID
"""
# Return existing customer if available
if account.stripe_customer_id:
try:
# Verify customer still exists in Stripe
stripe.Customer.retrieve(account.stripe_customer_id)
return account.stripe_customer_id
except stripe.error.InvalidRequestError:
# Customer was deleted, create new one
logger.warning(
f"Stripe customer {account.stripe_customer_id} not found, creating new"
)
# Create new customer
customer = stripe.Customer.create(
email=account.billing_email or account.owner.email,
name=account.name,
metadata={
'account_id': str(account.id),
'environment': 'sandbox' if self.is_sandbox else 'production'
},
)
# Save customer ID to account
account.stripe_customer_id = customer.id
account.save(update_fields=['stripe_customer_id', 'updated_at'])
logger.info(f"Created Stripe customer {customer.id} for account {account.id}")
return customer.id
def get_customer(self, account) -> Optional[Dict]:
"""
Get Stripe customer details.
Args:
account: Account model instance
Returns:
dict: Customer data or None if not found
"""
if not account.stripe_customer_id:
return None
try:
customer = stripe.Customer.retrieve(account.stripe_customer_id)
return {
'id': customer.id,
'email': customer.email,
'name': customer.name,
'created': customer.created,
'default_source': customer.default_source,
}
except stripe.error.InvalidRequestError:
return None
# ========== Checkout Sessions ==========
def create_checkout_session(
self,
account,
plan,
success_url: Optional[str] = None,
cancel_url: Optional[str] = None,
allow_promotion_codes: bool = True,
trial_period_days: Optional[int] = None,
) -> Dict[str, Any]:
"""
Create Stripe Checkout session for new subscription.
Args:
account: Account model instance
plan: Plan model instance with stripe_price_id
success_url: URL to redirect after successful payment
cancel_url: URL to redirect if payment is canceled
allow_promotion_codes: Allow discount codes in checkout
trial_period_days: Optional trial period (overrides plan default)
Returns:
dict: Session data with checkout_url and session_id
Raises:
ValueError: If plan has no stripe_price_id
"""
if not plan.stripe_price_id:
raise ValueError(
f"Plan '{plan.name}' (id={plan.id}) has no stripe_price_id configured"
)
# Get or create customer
customer_id = self._get_or_create_customer(account)
# Build URLs
if not success_url:
success_url = f'{self.frontend_url}/account/plans?success=true&session_id={{CHECKOUT_SESSION_ID}}'
if not cancel_url:
cancel_url = f'{self.frontend_url}/account/plans?canceled=true'
# Build subscription data
subscription_data = {
'metadata': {
'account_id': str(account.id),
'plan_id': str(plan.id),
}
}
if trial_period_days:
subscription_data['trial_period_days'] = trial_period_days
# Create checkout session
session = stripe.checkout.Session.create(
customer=customer_id,
payment_method_types=self.config.get('payment_methods', ['card']),
mode='subscription',
line_items=[{
'price': plan.stripe_price_id,
'quantity': 1,
}],
success_url=success_url,
cancel_url=cancel_url,
allow_promotion_codes=allow_promotion_codes,
metadata={
'account_id': str(account.id),
'plan_id': str(plan.id),
'type': 'subscription',
},
subscription_data=subscription_data,
)
logger.info(
f"Created Stripe checkout session {session.id} for account {account.id}, "
f"plan {plan.name}"
)
return {
'checkout_url': session.url,
'session_id': session.id,
}
def create_credit_checkout_session(
self,
account,
credit_package,
success_url: Optional[str] = None,
cancel_url: Optional[str] = None,
) -> Dict[str, Any]:
"""
Create Stripe Checkout session for one-time credit purchase.
Args:
account: Account model instance
credit_package: CreditPackage model instance
success_url: URL to redirect after successful payment
cancel_url: URL to redirect if payment is canceled
Returns:
dict: Session data with checkout_url and session_id
"""
# Get or create customer
customer_id = self._get_or_create_customer(account)
# Build URLs
if not success_url:
success_url = f'{self.frontend_url}/account/usage?purchase=success&session_id={{CHECKOUT_SESSION_ID}}'
if not cancel_url:
cancel_url = f'{self.frontend_url}/account/usage?purchase=canceled'
# Use existing Stripe price if available, otherwise create price_data
if credit_package.stripe_price_id:
line_items = [{
'price': credit_package.stripe_price_id,
'quantity': 1,
}]
else:
# Create price_data for dynamic pricing
line_items = [{
'price_data': {
'currency': self.currency,
'product_data': {
'name': credit_package.name,
'description': f'{credit_package.credits} credits',
},
'unit_amount': int(credit_package.price * 100), # Convert to cents
},
'quantity': 1,
}]
# Create checkout session
session = stripe.checkout.Session.create(
customer=customer_id,
payment_method_types=self.config.get('payment_methods', ['card']),
mode='payment',
line_items=line_items,
success_url=success_url,
cancel_url=cancel_url,
metadata={
'account_id': str(account.id),
'credit_package_id': str(credit_package.id),
'credit_amount': str(credit_package.credits),
'type': 'credit_purchase',
},
)
logger.info(
f"Created Stripe credit checkout session {session.id} for account {account.id}, "
f"package {credit_package.name} ({credit_package.credits} credits)"
)
return {
'checkout_url': session.url,
'session_id': session.id,
}
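# One detail worth noting in the price_data branch above: int(credit_package.price * 100) is exact when price is a Django DecimalField (a Decimal), but would silently truncate if a float ever reached it, e.g. int(19.99 * 100) evaluates to 1998. A minimal rounding-safe sketch (to_cents is a hypothetical helper name, not part of this service):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_cents(price) -> int:
    """Convert a monetary amount to integer cents without float truncation.

    Routes the value through str() -> Decimal so a float like 19.99
    becomes the exact Decimal('19.99') before scaling.
    """
    return int((Decimal(str(price)) * 100).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

# Naive conversion loses a cent on float input; the Decimal path does not.
assert int(19.99 * 100) == 1998
assert to_cents(19.99) == 1999
assert to_cents(Decimal("19.99")) == 1999
```

# With DecimalField values the existing code is already correct; the sketch only matters if prices ever arrive as floats or strings from an API payload.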
def get_checkout_session(self, session_id: str) -> Optional[Dict]:
"""
Retrieve checkout session details.
Args:
session_id: Stripe checkout session ID
Returns:
dict: Session data or None if not found
"""
try:
session = stripe.checkout.Session.retrieve(session_id)
return {
'id': session.id,
'status': session.status,
'payment_status': session.payment_status,
'customer': session.customer,
'subscription': session.subscription,
'metadata': session.metadata,
'amount_total': session.amount_total,
'currency': session.currency,
}
except stripe.error.InvalidRequestError as e:
logger.error(f"Failed to retrieve checkout session {session_id}: {e}")
return None
# ========== Billing Portal ==========
def create_billing_portal_session(
self,
account,
return_url: Optional[str] = None,
) -> Dict[str, Any]:
"""
Create Stripe Billing Portal session for subscription management.
Allows customers to:
- Update payment method
- View billing history
- Cancel subscription
- Update billing info
Args:
account: Account model instance
return_url: URL to return to after portal session
Returns:
dict: Portal session data with portal_url
Raises:
ValueError: If account has no Stripe customer
"""
if not self.config.get('billing_portal_enabled', True):
raise ValueError("Billing portal is disabled in configuration")
# Get or create customer
customer_id = self._get_or_create_customer(account)
if not return_url:
return_url = f'{self.frontend_url}/account/plans'
# Create billing portal session
session = stripe.billing_portal.Session.create(
customer=customer_id,
return_url=return_url,
)
logger.info(
f"Created Stripe billing portal session for account {account.id}"
)
return {
'portal_url': session.url,
}
# ========== Subscription Management ==========
def get_subscription(self, subscription_id: str) -> Optional[Dict]:
"""
Get subscription details from Stripe.
Args:
subscription_id: Stripe subscription ID
Returns:
dict: Subscription data or None if not found
"""
try:
sub = stripe.Subscription.retrieve(subscription_id)
return {
'id': sub.id,
'status': sub.status,
'current_period_start': sub.current_period_start,
'current_period_end': sub.current_period_end,
'cancel_at_period_end': sub.cancel_at_period_end,
'canceled_at': sub.canceled_at,
'ended_at': sub.ended_at,
'customer': sub.customer,
'items': [{
'id': item.id,
'price_id': item.price.id,
'quantity': item.quantity,
} for item in sub['items'].data],
'metadata': sub.metadata,
}
except stripe.error.InvalidRequestError as e:
logger.error(f"Failed to retrieve subscription {subscription_id}: {e}")
return None
def cancel_subscription(
self,
subscription_id: str,
at_period_end: bool = True
) -> Dict[str, Any]:
"""
Cancel a Stripe subscription.
Args:
subscription_id: Stripe subscription ID
at_period_end: If True, cancel at end of billing period
Returns:
dict: Updated subscription data
"""
if at_period_end:
sub = stripe.Subscription.modify(
subscription_id,
cancel_at_period_end=True
)
logger.info(f"Subscription {subscription_id} marked for cancellation at period end")
else:
sub = stripe.Subscription.delete(subscription_id)
logger.info(f"Subscription {subscription_id} canceled immediately")
return {
'id': sub.id,
'status': sub.status,
'cancel_at_period_end': sub.cancel_at_period_end,
}
def update_subscription(
self,
subscription_id: str,
new_price_id: str,
proration_behavior: str = 'create_prorations'
) -> Dict[str, Any]:
"""
Update subscription to a new plan/price.
Args:
subscription_id: Stripe subscription ID
new_price_id: New Stripe price ID
proration_behavior: How to handle proration
- 'create_prorations': Prorate the change
- 'none': No proration
- 'always_invoice': Invoice immediately
Returns:
dict: Updated subscription data
"""
# Get current subscription
sub = stripe.Subscription.retrieve(subscription_id)
# Update the subscription item
updated = stripe.Subscription.modify(
subscription_id,
items=[{
'id': sub['items'].data[0].id,
'price': new_price_id,
}],
proration_behavior=proration_behavior,
)
logger.info(
f"Updated subscription {subscription_id} to price {new_price_id}"
)
return {
'id': updated.id,
'status': updated.status,
'current_period_end': updated.current_period_end,
}
# ========== Webhook Handling ==========
def construct_webhook_event(
self,
payload: bytes,
sig_header: str
) -> stripe.Event:
"""
Verify and construct webhook event from Stripe.
Args:
payload: Raw request body
sig_header: Stripe-Signature header value
Returns:
stripe.Event: Verified event object
Raises:
stripe.error.SignatureVerificationError: If signature is invalid
"""
if not self.webhook_secret:
raise StripeConfigurationError(
"Webhook secret not configured. Set webhook_secret in provider."
)
return stripe.Webhook.construct_event(
payload, sig_header, self.webhook_secret
)
# ========== Invoice Operations ==========
def get_invoice(self, invoice_id: str) -> Optional[Dict]:
"""
Get invoice details from Stripe.
Args:
invoice_id: Stripe invoice ID
Returns:
dict: Invoice data or None if not found
"""
try:
invoice = stripe.Invoice.retrieve(invoice_id)
return {
'id': invoice.id,
'status': invoice.status,
'amount_due': invoice.amount_due,
'amount_paid': invoice.amount_paid,
'currency': invoice.currency,
'customer': invoice.customer,
'subscription': invoice.subscription,
'invoice_pdf': invoice.invoice_pdf,
'hosted_invoice_url': invoice.hosted_invoice_url,
}
except stripe.error.InvalidRequestError as e:
logger.error(f"Failed to retrieve invoice {invoice_id}: {e}")
return None
def get_upcoming_invoice(self, customer_id: str) -> Optional[Dict]:
"""
Get upcoming invoice for a customer.
Args:
customer_id: Stripe customer ID
Returns:
dict: Upcoming invoice preview or None
"""
try:
invoice = stripe.Invoice.upcoming(customer=customer_id)
return {
'amount_due': invoice.amount_due,
'currency': invoice.currency,
'next_payment_attempt': invoice.next_payment_attempt,
'lines': [{
'description': line.description,
'amount': line.amount,
} for line in invoice.lines.data],
}
except stripe.error.InvalidRequestError:
return None
# ========== Refunds ==========
def create_refund(
self,
payment_intent_id: Optional[str] = None,
charge_id: Optional[str] = None,
amount: Optional[int] = None,
reason: Optional[str] = None,
) -> Dict[str, Any]:
"""
Create a refund for a payment.
Args:
payment_intent_id: Stripe PaymentIntent ID
charge_id: Stripe Charge ID (alternative to payment_intent_id)
amount: Amount to refund in cents (None for full refund)
reason: Reason for refund ('duplicate', 'fraudulent', 'requested_by_customer')
Returns:
dict: Refund data
"""
params = {}
if payment_intent_id:
params['payment_intent'] = payment_intent_id
elif charge_id:
params['charge'] = charge_id
else:
raise ValueError("Either payment_intent_id or charge_id required")
if amount:
params['amount'] = amount
if reason:
params['reason'] = reason
refund = stripe.Refund.create(**params)
logger.info(
f"Created refund {refund.id} for "
f"{'payment_intent ' + payment_intent_id if payment_intent_id else 'charge ' + charge_id}"
)
return {
'id': refund.id,
'amount': refund.amount,
'status': refund.status,
'reason': refund.reason,
}
# Convenience function
def get_stripe_service() -> StripeService:
"""
Get StripeService instance.
Returns:
StripeService: Initialized service
Raises:
StripeConfigurationError: If Stripe not configured
"""
return StripeService()
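For reference, `construct_webhook_event` above delegates verification to `stripe.Webhook.construct_event`, which checks the `Stripe-Signature` header (`t=<timestamp>,v1=<hex hmac>`) against an HMAC-SHA256 of `"<timestamp>.<payload>"` keyed by the webhook secret. A self-contained sketch of that check, for illustration only — production code should keep using the SDK helper, which also handles multiple `v1` signatures and scheme changes:

```python
import hashlib
import hmac
import time

def verify_stripe_signature(payload: bytes, sig_header: str, secret: str,
                            tolerance: int = 300) -> bool:
    """Re-implementation sketch of Stripe's v1 webhook signature check."""
    # Header shape: "t=<unix ts>,v1=<hex digest>"
    parts = dict(p.split("=", 1) for p in sig_header.split(","))
    timestamp, signature = parts["t"], parts["v1"]
    # Signed payload is "<timestamp>.<raw body>"
    signed_payload = f"{timestamp}.{payload.decode()}".encode()
    expected = hmac.new(secret.encode(), signed_payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    # Reject stale timestamps to limit replay attacks
    return abs(time.time() - int(timestamp)) <= tolerance

# Self-signed round trip
secret = "whsec_test"
body = b'{"id": "evt_1"}'
ts = str(int(time.time()))
sig = hmac.new(secret.encode(), f"{ts}.{body.decode()}".encode(), hashlib.sha256).hexdigest()
assert verify_stripe_signature(body, f"t={ts},v1={sig}", secret)
```

This is why the raw request body (not a re-serialized copy) must be passed to `construct_webhook_event`: any byte difference changes the HMAC.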


@@ -172,7 +172,7 @@ def _attempt_stripe_renewal(subscription: Subscription, invoice: Invoice) -> boo
payment_method='stripe',
status='processing',
stripe_payment_intent_id=intent.id,
metadata={'renewal': True, 'auto_approved': True}
metadata={'renewal': True}
)
return True
@@ -210,7 +210,7 @@ def _attempt_paypal_renewal(subscription: Subscription, invoice: Invoice) -> boo
payment_method='paypal',
status='processing',
paypal_order_id=subscription.metadata['paypal_subscription_id'],
metadata={'renewal': True, 'auto_approved': True}
metadata={'renewal': True}
)
return True
else:


@@ -1,7 +1,7 @@
"""Billing routes including bank transfer confirmation and credit endpoints."""
from django.urls import path, include
from rest_framework.routers import DefaultRouter
from .billing_views import (
from .views import (
BillingViewSet,
InvoiceViewSet,
PaymentViewSet,
@@ -15,24 +15,6 @@ from igny8_core.modules.billing.views import (
CreditTransactionViewSet,
AIModelConfigViewSet,
)
# Payment gateway views
from .views.stripe_views import (
StripeConfigView,
StripeCheckoutView,
StripeCreditCheckoutView,
StripeBillingPortalView,
StripeReturnVerificationView,
stripe_webhook,
)
from .views.paypal_views import (
PayPalConfigView,
PayPalCreateOrderView,
PayPalCreateSubscriptionOrderView,
PayPalCaptureOrderView,
PayPalCreateSubscriptionView,
PayPalReturnVerificationView,
paypal_webhook,
)
router = DefaultRouter()
router.register(r'admin', BillingViewSet, basename='billing-admin')
@@ -53,21 +35,4 @@ urlpatterns = [
path('', include(router.urls)),
# User-facing usage summary endpoint for plan limits
path('usage-summary/', get_usage_summary, name='usage-summary'),
# Stripe endpoints
path('stripe/config/', StripeConfigView.as_view(), name='stripe-config'),
path('stripe/checkout/', StripeCheckoutView.as_view(), name='stripe-checkout'),
path('stripe/credit-checkout/', StripeCreditCheckoutView.as_view(), name='stripe-credit-checkout'),
path('stripe/billing-portal/', StripeBillingPortalView.as_view(), name='stripe-billing-portal'),
path('stripe/verify-return/', StripeReturnVerificationView.as_view(), name='stripe-verify-return'),
path('webhooks/stripe/', stripe_webhook, name='stripe-webhook'),
# PayPal endpoints
path('paypal/config/', PayPalConfigView.as_view(), name='paypal-config'),
path('paypal/create-order/', PayPalCreateOrderView.as_view(), name='paypal-create-order'),
path('paypal/create-subscription-order/', PayPalCreateSubscriptionOrderView.as_view(), name='paypal-create-subscription-order'),
path('paypal/capture-order/', PayPalCaptureOrderView.as_view(), name='paypal-capture-order'),
path('paypal/create-subscription/', PayPalCreateSubscriptionView.as_view(), name='paypal-create-subscription'),
path('paypal/verify-return/', PayPalReturnVerificationView.as_view(), name='paypal-verify-return'),
path('webhooks/paypal/', paypal_webhook, name='paypal-webhook'),
]


@@ -192,32 +192,22 @@ class BillingViewSet(viewsets.GenericViewSet):
@action(detail=False, methods=['get'], url_path='payment-methods', permission_classes=[AllowAny])
def list_payment_methods(self, request):
"""
Get available payment methods filtered by country code.
Get available payment methods for a specific country.
Public endpoint - only returns enabled payment methods.
Does not expose sensitive configuration details.
Query Parameters:
- country_code: ISO 2-letter country code (e.g., 'US', 'PK')
Query params:
country: ISO 2-letter country code (default: 'US')
Returns methods for:
1. Specified country (country_code=XX)
2. Global methods (country_code='*')
Returns payment methods filtered by country.
"""
country_code = request.query_params.get('country_code', '').upper()
country = request.GET.get('country', 'US').upper()
if country_code:
# Filter by specific country OR global methods
methods = PaymentMethodConfig.objects.filter(
is_enabled=True
).filter(
Q(country_code=country_code) | Q(country_code='*')
).order_by('sort_order')
else:
# No country specified - return only global methods
methods = PaymentMethodConfig.objects.filter(
is_enabled=True,
country_code='*'
).order_by('sort_order')
# Get country-specific methods
methods = PaymentMethodConfig.objects.filter(
country_code=country,
is_enabled=True
).order_by('sort_order')
# Serialize using the proper serializer
serializer = PaymentMethodConfigSerializer(methods, many=True)
@@ -619,7 +609,7 @@ class BillingViewSet(viewsets.GenericViewSet):
class InvoiceViewSet(AccountModelViewSet):
"""ViewSet for user-facing invoices"""
queryset = Invoice.objects.all().select_related('account', 'subscription', 'subscription__plan')
queryset = Invoice.objects.all().select_related('account')
permission_classes = [IsAuthenticatedAndActive, HasTenantAccess]
pagination_class = CustomPageNumberPagination
@@ -630,43 +620,6 @@ class InvoiceViewSet(AccountModelViewSet):
queryset = queryset.filter(account=self.request.account)
return queryset.order_by('-invoice_date', '-created_at')
def _serialize_invoice(self, invoice):
"""Serialize an invoice with all needed fields"""
# Build subscription data if exists
subscription_data = None
if invoice.subscription:
plan_data = None
if invoice.subscription.plan:
plan_data = {
'id': invoice.subscription.plan.id,
'name': invoice.subscription.plan.name,
'slug': invoice.subscription.plan.slug,
}
subscription_data = {
'id': invoice.subscription.id,
'plan': plan_data,
}
return {
'id': invoice.id,
'invoice_number': invoice.invoice_number,
'status': invoice.status,
'total': str(invoice.total), # Alias for compatibility
'total_amount': str(invoice.total),
'subtotal': str(invoice.subtotal),
'tax_amount': str(invoice.tax),
'currency': invoice.currency,
'invoice_date': invoice.invoice_date.isoformat(),
'due_date': invoice.due_date.isoformat(),
'paid_at': invoice.paid_at.isoformat() if invoice.paid_at else None,
'line_items': invoice.line_items,
'billing_email': invoice.billing_email,
'notes': invoice.notes,
'payment_method': invoice.payment_method,
'subscription': subscription_data,
'created_at': invoice.created_at.isoformat(),
}
def list(self, request):
"""List invoices for current account"""
queryset = self.get_queryset()
@@ -680,7 +633,25 @@ class InvoiceViewSet(AccountModelViewSet):
page = paginator.paginate_queryset(queryset, request)
# Serialize invoice data
results = [self._serialize_invoice(invoice) for invoice in (page if page is not None else [])]
results = []
for invoice in (page if page is not None else []):
results.append({
'id': invoice.id,
'invoice_number': invoice.invoice_number,
'status': invoice.status,
'total': str(invoice.total), # Alias for compatibility
'total_amount': str(invoice.total),
'subtotal': str(invoice.subtotal),
'tax_amount': str(invoice.tax),
'currency': invoice.currency,
'invoice_date': invoice.invoice_date.isoformat(),
'due_date': invoice.due_date.isoformat(),
'paid_at': invoice.paid_at.isoformat() if invoice.paid_at else None,
'line_items': invoice.line_items,
'billing_email': invoice.billing_email,
'notes': invoice.notes,
'created_at': invoice.created_at.isoformat(),
})
return paginated_response(
{'count': paginator.page.paginator.count, 'next': paginator.get_next_link(), 'previous': paginator.get_previous_link(), 'results': results},
@@ -691,7 +662,24 @@ class InvoiceViewSet(AccountModelViewSet):
"""Get invoice detail"""
try:
invoice = self.get_queryset().get(pk=pk)
return success_response(data=self._serialize_invoice(invoice), request=request)
data = {
'id': invoice.id,
'invoice_number': invoice.invoice_number,
'status': invoice.status,
'total': str(invoice.total), # Alias for compatibility
'total_amount': str(invoice.total),
'subtotal': str(invoice.subtotal),
'tax_amount': str(invoice.tax),
'currency': invoice.currency,
'invoice_date': invoice.invoice_date.isoformat(),
'due_date': invoice.due_date.isoformat(),
'paid_at': invoice.paid_at.isoformat() if invoice.paid_at else None,
'line_items': invoice.line_items,
'billing_email': invoice.billing_email,
'notes': invoice.notes,
'created_at': invoice.created_at.isoformat(),
}
return success_response(data=data, request=request)
except Invoice.DoesNotExist:
return error_response(error='Invoice not found', status_code=404, request=request)
@@ -699,38 +687,14 @@ class InvoiceViewSet(AccountModelViewSet):
def download_pdf(self, request, pk=None):
"""Download invoice PDF"""
try:
invoice = self.get_queryset().select_related(
'account', 'account__owner', 'subscription', 'subscription__plan'
).get(pk=pk)
invoice = self.get_queryset().get(pk=pk)
pdf_bytes = InvoiceService.generate_pdf(invoice)
# Build descriptive filename
plan_name = ''
if invoice.subscription and invoice.subscription.plan:
plan_name = invoice.subscription.plan.name.replace(' ', '-')
elif invoice.metadata and 'plan_name' in invoice.metadata:
plan_name = invoice.metadata.get('plan_name', '').replace(' ', '-')
date_str = invoice.invoice_date.strftime('%Y-%m-%d') if invoice.invoice_date else ''
filename_parts = ['IGNY8', 'Invoice', invoice.invoice_number]
if plan_name:
filename_parts.append(plan_name)
if date_str:
filename_parts.append(date_str)
filename = '-'.join(filename_parts) + '.pdf'
response = HttpResponse(pdf_bytes, content_type='application/pdf')
response['Content-Disposition'] = f'attachment; filename="{filename}"'
response['Content-Disposition'] = f'attachment; filename="invoice-{invoice.invoice_number}.pdf"'
return response
except Invoice.DoesNotExist:
return error_response(error='Invoice not found', status_code=404, request=request)
except Exception as e:
import logging
logger = logging.getLogger(__name__)
logger.error(f'PDF generation failed for invoice {pk}: {str(e)}', exc_info=True)
return error_response(error=f'Failed to generate PDF: {str(e)}', status_code=500, request=request)
class PaymentViewSet(AccountModelViewSet):
@@ -805,7 +769,6 @@ class PaymentViewSet(AccountModelViewSet):
payment_method = request.data.get('payment_method', 'bank_transfer')
reference = request.data.get('reference', '')
notes = request.data.get('notes', '')
currency = request.data.get('currency', 'USD')
if not amount:
return error_response(error='Amount is required', status_code=400, request=request)
@@ -815,30 +778,18 @@ class PaymentViewSet(AccountModelViewSet):
invoice = None
if invoice_id:
invoice = Invoice.objects.get(id=invoice_id, account=account)
# Use invoice currency if not explicitly provided
if not request.data.get('currency') and invoice:
currency = invoice.currency
payment = Payment.objects.create(
account=account,
invoice=invoice,
amount=amount,
currency=currency,
currency='USD',
payment_method=payment_method,
status='pending_approval',
manual_reference=reference,
manual_notes=notes,
)
# Send payment confirmation email
try:
from igny8_core.business.billing.services.email_service import BillingEmailService
BillingEmailService.send_payment_confirmation_email(payment, account)
except Exception as e:
import logging
logger = logging.getLogger(__name__)
logger.error(f'Failed to send payment confirmation email: {str(e)}')
return success_response(
data={'id': payment.id, 'status': payment.status},
message='Manual payment submitted for approval',
@@ -882,16 +833,11 @@ class CreditPackageViewSet(viewsets.ReadOnlyModelViewSet):
class AccountPaymentMethodViewSet(AccountModelViewSet):
"""ViewSet for account payment methods - Full CRUD support"""
"""ViewSet for account payment methods"""
queryset = AccountPaymentMethod.objects.all()
permission_classes = [IsAuthenticatedAndActive, HasTenantAccess]
pagination_class = CustomPageNumberPagination
def get_serializer_class(self):
"""Return serializer class"""
from igny8_core.modules.billing.serializers import AccountPaymentMethodSerializer
return AccountPaymentMethodSerializer
def get_queryset(self):
"""Filter payment methods by account"""
queryset = super().get_queryset()
@@ -899,15 +845,6 @@ class AccountPaymentMethodViewSet(AccountModelViewSet):
queryset = queryset.filter(account=self.request.account)
return queryset.order_by('-is_default', 'type')
def get_serializer_context(self):
"""Add account to serializer context"""
context = super().get_serializer_context()
account = getattr(self.request, 'account', None)
if not account and hasattr(self.request, 'user') and self.request.user:
account = getattr(self.request.user, 'account', None)
context['account'] = account
return context
def list(self, request):
"""List payment methods for current account"""
queryset = self.get_queryset()
@@ -917,108 +854,18 @@ class AccountPaymentMethodViewSet(AccountModelViewSet):
results = []
for method in (page if page is not None else []):
results.append({
'id': method.id,
'id': str(method.id),
'type': method.type,
'display_name': method.display_name,
'is_default': method.is_default,
'is_enabled': method.is_enabled,
'is_verified': method.is_verified,
'is_enabled': method.is_enabled if hasattr(method, 'is_enabled') else True,
'instructions': method.instructions,
'metadata': method.metadata,
'created_at': method.created_at.isoformat() if method.created_at else None,
'updated_at': method.updated_at.isoformat() if method.updated_at else None,
})
return paginated_response(
{'count': paginator.page.paginator.count, 'next': paginator.get_next_link(), 'previous': paginator.get_previous_link(), 'results': results},
request=request
)
def create(self, request, *args, **kwargs):
"""Create a new payment method"""
serializer = self.get_serializer(data=request.data)
try:
serializer.is_valid(raise_exception=True)
instance = serializer.save()
return success_response(
data={
'id': instance.id,
'type': instance.type,
'display_name': instance.display_name,
'is_default': instance.is_default,
'is_enabled': instance.is_enabled,
'is_verified': instance.is_verified,
'instructions': instance.instructions,
},
message='Payment method created successfully',
request=request,
status_code=status.HTTP_201_CREATED
)
except Exception as e:
return error_response(
error=str(e),
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
def update(self, request, *args, **kwargs):
"""Update a payment method"""
partial = kwargs.pop('partial', False)
instance = self.get_object()
serializer = self.get_serializer(instance, data=request.data, partial=partial)
try:
serializer.is_valid(raise_exception=True)
instance = serializer.save()
return success_response(
data={
'id': instance.id,
'type': instance.type,
'display_name': instance.display_name,
'is_default': instance.is_default,
'is_enabled': instance.is_enabled,
'is_verified': instance.is_verified,
'instructions': instance.instructions,
},
message='Payment method updated successfully',
request=request
)
except Exception as e:
return error_response(
error=str(e),
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
def destroy(self, request, *args, **kwargs):
"""Delete a payment method"""
try:
instance = self.get_object()
# Don't allow deleting the only default payment method
if instance.is_default:
other_methods = AccountPaymentMethod.objects.filter(
account=instance.account
).exclude(pk=instance.pk).count()
if other_methods == 0:
return error_response(
error='Cannot delete the only payment method',
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
instance.delete()
return success_response(
data=None,
message='Payment method deleted successfully',
request=request,
status_code=status.HTTP_204_NO_CONTENT
)
except Exception as e:
return error_response(
error=str(e),
status_code=status.HTTP_400_BAD_REQUEST,
request=request
)
# ============================================================================


@@ -5,8 +5,6 @@ API endpoints for generating and downloading invoice PDFs
from django.http import HttpResponse
from rest_framework.decorators import api_view, permission_classes
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework import status
from igny8_core.business.billing.models import Invoice
from igny8_core.business.billing.services.pdf_service import InvoicePDFGenerator
from igny8_core.business.billing.utils.errors import not_found_response
@@ -24,46 +22,20 @@ def download_invoice_pdf(request, invoice_id):
GET /api/v1/billing/invoices/<id>/pdf/
"""
try:
# Note: line_items is a JSONField, not a related model - no prefetch needed
invoice = Invoice.objects.select_related('account', 'account__owner', 'subscription', 'subscription__plan').get(
invoice = Invoice.objects.prefetch_related('line_items').get(
id=invoice_id,
account=request.user.account
)
except Invoice.DoesNotExist:
return not_found_response('Invoice', invoice_id)
try:
# Generate PDF
pdf_buffer = InvoicePDFGenerator.generate_invoice_pdf(invoice)
# Build descriptive filename: IGNY8-Invoice-INV123456-Growth-2026-01-08.pdf
plan_name = ''
if invoice.subscription and invoice.subscription.plan:
plan_name = invoice.subscription.plan.name.replace(' ', '-')
elif invoice.metadata and 'plan_name' in invoice.metadata:
plan_name = invoice.metadata['plan_name'].replace(' ', '-')
date_str = invoice.invoice_date.strftime('%Y-%m-%d') if invoice.invoice_date else ''
filename_parts = ['IGNY8', 'Invoice', invoice.invoice_number]
if plan_name:
filename_parts.append(plan_name)
if date_str:
filename_parts.append(date_str)
filename = '-'.join(filename_parts) + '.pdf'
# Return PDF response
response = HttpResponse(pdf_buffer.read(), content_type='application/pdf')
response['Content-Disposition'] = f'attachment; filename="{filename}"'
logger.info(f'Invoice PDF downloaded: {invoice.invoice_number} by user {request.user.id}')
return response
except Exception as e:
logger.error(f'Failed to generate PDF for invoice {invoice_id}: {str(e)}', exc_info=True)
return Response(
{'error': 'Failed to generate PDF', 'detail': str(e)},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
# Generate PDF
pdf_buffer = InvoicePDFGenerator.generate_invoice_pdf(invoice)
# Return PDF response
response = HttpResponse(pdf_buffer.read(), content_type='application/pdf')
response['Content-Disposition'] = f'attachment; filename="invoice_{invoice.invoice_number}.pdf"'
logger.info(f'Invoice PDF downloaded: {invoice.invoice_number} by user {request.user.id}')
return response

File diff suppressed because it is too large


@@ -160,18 +160,20 @@ def initiate_refund(request, payment_id):
def _process_stripe_refund(payment: Payment, amount: Decimal, reason: str) -> bool:
"""Process Stripe refund"""
try:
from igny8_core.business.billing.services.stripe_service import StripeService
import stripe
from igny8_core.business.billing.utils.payment_gateways import get_stripe_client
stripe_service = StripeService()
stripe_client = get_stripe_client()
refund = stripe_service.create_refund(
payment_intent_id=payment.stripe_payment_intent_id,
refund = stripe_client.Refund.create(
payment_intent=payment.stripe_payment_intent_id,
amount=int(amount * 100), # Convert to cents
reason='requested_by_customer',
metadata={'reason': reason}
)
payment.metadata['stripe_refund_id'] = refund.get('id')
return refund.get('status') == 'succeeded'
payment.metadata['stripe_refund_id'] = refund.id
return refund.status == 'succeeded'
except Exception as e:
logger.exception(f"Stripe refund failed for payment {payment.id}: {str(e)}")
@@ -181,19 +183,25 @@ def _process_stripe_refund(payment: Payment, amount: Decimal, reason: str) -> bo
def _process_paypal_refund(payment: Payment, amount: Decimal, reason: str) -> bool:
"""Process PayPal refund"""
try:
from igny8_core.business.billing.services.paypal_service import PayPalService
from igny8_core.business.billing.utils.payment_gateways import get_paypal_client
paypal_service = PayPalService()
paypal_client = get_paypal_client()
refund = paypal_service.refund_capture(
capture_id=payment.paypal_capture_id,
amount=float(amount),
currency=payment.currency,
note=reason,
refund_request = {
'amount': {
'value': str(amount),
'currency_code': payment.currency
},
'note_to_payer': reason
}
refund = paypal_client.payments.captures.refund(
payment.paypal_capture_id,
refund_request
)
payment.metadata['paypal_refund_id'] = refund.get('id')
return refund.get('status') == 'COMPLETED'
payment.metadata['paypal_refund_id'] = refund.id
return refund.status == 'COMPLETED'
except Exception as e:
logger.exception(f"PayPal refund failed for payment {payment.id}: {str(e)}")

File diff suppressed because it is too large


@@ -119,40 +119,10 @@ class Tasks(SoftDeletableModel, SiteSectorBaseModel):
objects = SoftDeleteManager()
all_objects = models.Manager()
def __str__(self):
return self.title
def soft_delete(self, user=None, reason=None, retention_days=None):
"""
Override soft_delete to cascade to related models.
This ensures Images and ContentClusterMap are also deleted when a Task is deleted.
"""
import logging
logger = logging.getLogger(__name__)
# Soft-delete related Images (which are also SoftDeletable)
related_images = self.images.filter(is_deleted=False)
images_count = related_images.count()
for image in related_images:
image.soft_delete(user=user, reason=f"Parent task deleted: {reason or 'No reason'}")
# Hard-delete ContentClusterMap (not soft-deletable)
cluster_maps_count = self.cluster_mappings.count()
self.cluster_mappings.all().delete()
# Hard-delete ContentAttribute (not soft-deletable)
attributes_count = self.attribute_mappings.count()
self.attribute_mappings.all().delete()
logger.info(
f"[Tasks.soft_delete] Task {self.id} '{self.title}' cascade delete: "
f"{images_count} images, {cluster_maps_count} cluster maps, {attributes_count} attributes"
)
# Call parent soft_delete
super().soft_delete(user=user, reason=reason, retention_days=retention_days)
class ContentTaxonomyRelation(models.Model):
"""Through model for Content-Taxonomy many-to-many relationship"""
@@ -271,8 +241,7 @@ class Content(SoftDeletableModel, SiteSectorBaseModel):
STATUS_CHOICES = [
('draft', 'Draft'),
('review', 'Review'),
('approved', 'Approved'), # Ready for publishing to external site
('published', 'Published'), # Actually published on external site
('published', 'Published'),
]
status = models.CharField(
max_length=50,
@@ -282,33 +251,6 @@ class Content(SoftDeletableModel, SiteSectorBaseModel):
help_text="Content status"
)
# Publishing scheduler fields
SITE_STATUS_CHOICES = [
('not_published', 'Not Published'),
('scheduled', 'Scheduled'),
('publishing', 'Publishing'),
('published', 'Published'),
('failed', 'Failed'),
]
site_status = models.CharField(
max_length=50,
choices=SITE_STATUS_CHOICES,
default='not_published',
db_index=True,
help_text="External site publishing status"
)
scheduled_publish_at = models.DateTimeField(
null=True,
blank=True,
db_index=True,
help_text="Scheduled time for publishing to external site"
)
site_status_updated_at = models.DateTimeField(
null=True,
blank=True,
help_text="Last time site_status was changed"
)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
@@ -384,61 +326,6 @@ class Content(SoftDeletableModel, SiteSectorBaseModel):
logger = logging.getLogger(__name__)
logger.error(f"Error incrementing word usage for content {self.id}: {str(e)}")
def soft_delete(self, user=None, reason=None, retention_days=None):
"""
Override soft_delete to cascade to related models.
This ensures Images, ContentClusterMap, ContentAttribute are also deleted.
"""
import logging
logger = logging.getLogger(__name__)
# Soft-delete related Images (which are also SoftDeletable)
related_images = self.images.filter(is_deleted=False)
images_count = related_images.count()
for image in related_images:
image.soft_delete(user=user, reason=f"Parent content deleted: {reason or 'No reason'}")
# Hard-delete ContentClusterMap (not soft-deletable)
cluster_maps_count = self.cluster_mappings.count()
self.cluster_mappings.all().delete()
# Hard-delete ContentAttribute (not soft-deletable)
attributes_count = self.attributes.count()
self.attributes.all().delete()
# Hard-delete ContentTaxonomyRelation (through model for many-to-many)
taxonomy_relations_count = ContentTaxonomyRelation.objects.filter(content=self).count()
ContentTaxonomyRelation.objects.filter(content=self).delete()
logger.info(
f"[Content.soft_delete] Content {self.id} '{self.title}' cascade delete: "
f"{images_count} images, {cluster_maps_count} cluster maps, "
f"{attributes_count} attributes, {taxonomy_relations_count} taxonomy relations"
)
# Call parent soft_delete
super().soft_delete(user=user, reason=reason, retention_days=retention_days)
def hard_delete(self, using=None, keep_parents=False):
"""
Override hard_delete to cascade to related models.
Django CASCADE should handle this, but we explicitly clean up for safety.
"""
import logging
logger = logging.getLogger(__name__)
# Hard-delete related Images (including soft-deleted ones)
images_count = Images.all_objects.filter(content=self).count()
Images.all_objects.filter(content=self).delete()
logger.info(
f"[Content.hard_delete] Content {self.id} '{self.title}' hard delete: "
f"{images_count} images removed"
)
# Call parent hard_delete (Django CASCADE will handle the rest)
return super().hard_delete(using=using, keep_parents=keep_parents)
class ContentTaxonomy(SiteSectorBaseModel):
"""
@@ -568,33 +455,10 @@ class Images(SoftDeletableModel, SiteSectorBaseModel):
models.Index(fields=['content', 'position']),
models.Index(fields=['task', 'position']),
]
# Ensure unique position per content+image_type combination
constraints = [
models.UniqueConstraint(
fields=['content', 'image_type', 'position'],
name='unique_content_image_type_position',
condition=models.Q(is_deleted=False)
),
]
objects = SoftDeleteManager()
all_objects = models.Manager()
@property
def aspect_ratio(self):
"""
Determine aspect ratio based on position for layout rendering.
Position 0, 2: square (1:1)
Position 1, 3: landscape (16:9 or similar)
Featured: always landscape
"""
if self.image_type == 'featured':
return 'landscape'
elif self.image_type == 'in_article':
# Even positions are square, odd positions are landscape
return 'square' if (self.position or 0) % 2 == 0 else 'landscape'
return 'square' # Default
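The alternating layout rule above can be written as a pure function — a hypothetical standalone mirror of the `aspect_ratio` property, convenient for unit-testing the mapping in isolation:

```python
def aspect_ratio(image_type, position):
    """Featured images are always landscape; in-article images alternate:
    even positions are square, odd positions are landscape."""
    if image_type == 'featured':
        return 'landscape'
    if image_type == 'in_article':
        return 'square' if (position or 0) % 2 == 0 else 'landscape'
    return 'square'  # default for any other image type
```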
def save(self, *args, **kwargs):
"""Track image usage when creating new images"""
is_new = self.pk is None
@@ -811,14 +675,3 @@ class ContentAttribute(SiteSectorBaseModel):
# Backward compatibility alias
ContentAttributeMap = ContentAttribute
class ImagePrompts(Images):
"""
Proxy model for Images to provide a separate admin interface focused on prompts.
This allows a dedicated "Image Prompts" view in the admin sidebar.
"""
class Meta:
proxy = True
verbose_name = 'Image Prompt'
verbose_name_plural = 'Image Prompts'
app_label = 'writer'

View File

@@ -26,7 +26,17 @@ class ContentValidationService:
"""
errors = []
# Validate entity_type is set
# Stage 3: Enforce "no cluster, no task" rule when feature flag enabled
from django.conf import settings
if getattr(settings, 'USE_SITE_BUILDER_REFACTOR', False):
if not task.cluster:
errors.append({
'field': 'cluster',
'code': 'missing_cluster',
'message': 'Task must be associated with a cluster before content generation',
})
# Stage 3: Validate entity_type is set
if not task.content_type:
errors.append({
'field': 'content_type',

View File

@@ -1,38 +0,0 @@
# Generated by Django 5.2.9 on 2026-01-01 06:37
import django.core.validators
import django.db.models.deletion
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('igny8_core_auth', '0018_add_country_remove_intent_seedkeyword'),
('integration', '0002_add_sync_event_model'),
]
operations = [
migrations.CreateModel(
name='PublishingSettings',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('auto_approval_enabled', models.BooleanField(default=True, help_text="Automatically approve content after review (moves to 'approved' status)")),
('auto_publish_enabled', models.BooleanField(default=True, help_text='Automatically publish approved content to the external site')),
('daily_publish_limit', models.PositiveIntegerField(default=3, help_text='Maximum number of articles to publish per day', validators=[django.core.validators.MinValueValidator(1)])),
('weekly_publish_limit', models.PositiveIntegerField(default=15, help_text='Maximum number of articles to publish per week', validators=[django.core.validators.MinValueValidator(1)])),
('monthly_publish_limit', models.PositiveIntegerField(default=50, help_text='Maximum number of articles to publish per month', validators=[django.core.validators.MinValueValidator(1)])),
('publish_days', models.JSONField(default=list, help_text='Days of the week to publish (mon, tue, wed, thu, fri, sat, sun)')),
('publish_time_slots', models.JSONField(default=list, help_text="Times of day to publish (HH:MM format, e.g., ['09:00', '14:00', '18:00'])")),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('account', models.ForeignKey(db_column='tenant_id', on_delete=django.db.models.deletion.CASCADE, related_name='%(class)s_set', to='igny8_core_auth.account')),
('site', models.OneToOneField(help_text='Site these publishing settings belong to', on_delete=django.db.models.deletion.CASCADE, related_name='publishing_settings', to='igny8_core_auth.site')),
],
options={
'verbose_name': 'Publishing Settings',
'verbose_name_plural': 'Publishing Settings',
'db_table': 'igny8_publishing_settings',
},
),
]

View File

@@ -244,100 +244,3 @@ class SyncEvent(AccountBaseModel):
def __str__(self):
return f"{self.get_event_type_display()} - {self.description[:50]}"
class PublishingSettings(AccountBaseModel):
"""
Site-level publishing configuration settings.
Controls automatic approval, publishing limits, and scheduling.
"""
DEFAULT_PUBLISH_DAYS = ['mon', 'tue', 'wed', 'thu', 'fri']
DEFAULT_TIME_SLOTS = ['09:00', '14:00', '18:00']
site = models.OneToOneField(
'igny8_core_auth.Site',
on_delete=models.CASCADE,
related_name='publishing_settings',
help_text="Site these publishing settings belong to"
)
# Auto-approval settings
auto_approval_enabled = models.BooleanField(
default=True,
help_text="Automatically approve content after review (moves to 'approved' status)"
)
# Auto-publish settings
auto_publish_enabled = models.BooleanField(
default=True,
help_text="Automatically publish approved content to the external site"
)
# Publishing limits
daily_publish_limit = models.PositiveIntegerField(
default=3,
validators=[MinValueValidator(1)],
help_text="Maximum number of articles to publish per day"
)
weekly_publish_limit = models.PositiveIntegerField(
default=15,
validators=[MinValueValidator(1)],
help_text="Maximum number of articles to publish per week"
)
monthly_publish_limit = models.PositiveIntegerField(
default=50,
validators=[MinValueValidator(1)],
help_text="Maximum number of articles to publish per month"
)
# Publishing schedule
publish_days = models.JSONField(
default=list,
help_text="Days of the week to publish (mon, tue, wed, thu, fri, sat, sun)"
)
publish_time_slots = models.JSONField(
default=list,
help_text="Times of day to publish (HH:MM format, e.g., ['09:00', '14:00', '18:00'])"
)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
app_label = 'integration'
db_table = 'igny8_publishing_settings'
verbose_name = 'Publishing Settings'
verbose_name_plural = 'Publishing Settings'
def __str__(self):
return f"Publishing Settings for {self.site.name}"
def save(self, *args, **kwargs):
"""Set defaults for JSON fields if empty"""
if not self.publish_days:
self.publish_days = self.DEFAULT_PUBLISH_DAYS
if not self.publish_time_slots:
self.publish_time_slots = self.DEFAULT_TIME_SLOTS
super().save(*args, **kwargs)
@classmethod
def get_or_create_for_site(cls, site):
"""Get or create publishing settings for a site with defaults"""
settings, created = cls.objects.get_or_create(
site=site,
defaults={
'account': site.account,
'auto_approval_enabled': True,
'auto_publish_enabled': True,
'daily_publish_limit': 3,
'weekly_publish_limit': 15,
'monthly_publish_limit': 50,
'publish_days': cls.DEFAULT_PUBLISH_DAYS,
'publish_time_slots': cls.DEFAULT_TIME_SLOTS,
}
)
return settings, created
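The fallback in `save()` above boils down to "empty JSON fields get the class-level defaults". A minimal pure-Python sketch of that rule (`apply_schedule_defaults` is a hypothetical helper, not part of the model):

```python
DEFAULT_PUBLISH_DAYS = ['mon', 'tue', 'wed', 'thu', 'fri']
DEFAULT_TIME_SLOTS = ['09:00', '14:00', '18:00']

def apply_schedule_defaults(publish_days, publish_time_slots):
    """Return the schedule fields with defaults filled in for empty values,
    mirroring the save() override above."""
    return (publish_days or DEFAULT_PUBLISH_DAYS,
            publish_time_slots or DEFAULT_TIME_SLOTS)
```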

View File

@@ -1,259 +0,0 @@
"""
Defaults Service
Creates sites with default settings for simplified onboarding.
"""
import logging
from typing import Dict, Any, Tuple, Optional
from django.db import transaction
from django.utils import timezone
from igny8_core.auth.models import Account, Site
from igny8_core.business.integration.models import PublishingSettings
from igny8_core.business.automation.models import AutomationConfig
logger = logging.getLogger(__name__)
# Default settings for new sites
DEFAULT_PUBLISHING_SETTINGS = {
'auto_approval_enabled': True,
'auto_publish_enabled': True,
'daily_publish_limit': 3,
'weekly_publish_limit': 15,
'monthly_publish_limit': 50,
'publish_days': ['mon', 'tue', 'wed', 'thu', 'fri'],
'publish_time_slots': ['09:00', '14:00', '18:00'],
}
DEFAULT_AUTOMATION_SETTINGS = {
'is_enabled': True,
'frequency': 'daily',
'scheduled_time': '02:00',
'stage_1_batch_size': 50,
'stage_2_batch_size': 1,
'stage_3_batch_size': 20,
'stage_4_batch_size': 1,
'stage_5_batch_size': 1,
'stage_6_batch_size': 1,
'within_stage_delay': 3,
'between_stage_delay': 5,
}
class DefaultsService:
"""
Service for creating sites with sensible defaults.
Used during onboarding for a simplified first-run experience.
"""
def __init__(self, account: Account):
self.account = account
@transaction.atomic
def create_site_with_defaults(
self,
site_data: Dict[str, Any],
publishing_overrides: Optional[Dict[str, Any]] = None,
automation_overrides: Optional[Dict[str, Any]] = None,
) -> Tuple[Site, PublishingSettings, AutomationConfig]:
"""
Create a new site with default publishing and automation settings.
Args:
site_data: Dict with site fields (name, domain, etc.)
publishing_overrides: Optional overrides for publishing settings
automation_overrides: Optional overrides for automation settings
Returns:
Tuple of (Site, PublishingSettings, AutomationConfig)
"""
# Check hard limit for sites BEFORE creating
from igny8_core.business.billing.services.limit_service import LimitService, HardLimitExceededError
LimitService.check_hard_limit(self.account, 'sites', additional_count=1)
# Create the site
site = Site.objects.create(
account=self.account,
name=site_data.get('name', 'My Site'),
domain=site_data.get('domain', ''),
base_url=site_data.get('base_url', ''),
hosting_type=site_data.get('hosting_type', 'wordpress'),
is_active=site_data.get('is_active', True),
)
logger.info(f"Created site: {site.name} (id={site.id}) for account {self.account.id}")
# Create publishing settings with defaults
publishing_settings = self._create_publishing_settings(
site,
overrides=publishing_overrides
)
# Create automation config with defaults
automation_config = self._create_automation_config(
site,
overrides=automation_overrides
)
return site, publishing_settings, automation_config
def _create_publishing_settings(
self,
site: Site,
overrides: Optional[Dict[str, Any]] = None
) -> PublishingSettings:
"""Create publishing settings with defaults, applying any overrides."""
settings_data = {**DEFAULT_PUBLISHING_SETTINGS}
if overrides:
settings_data.update(overrides)
publishing_settings = PublishingSettings.objects.create(
account=self.account,
site=site,
**settings_data
)
logger.info(
f"Created publishing settings for site {site.id}: "
f"auto_approval={publishing_settings.auto_approval_enabled}, "
f"auto_publish={publishing_settings.auto_publish_enabled}"
)
return publishing_settings
def _create_automation_config(
self,
site: Site,
overrides: Optional[Dict[str, Any]] = None
) -> AutomationConfig:
"""Create automation config with defaults, applying any overrides."""
config_data = {**DEFAULT_AUTOMATION_SETTINGS}
if overrides:
config_data.update(overrides)
# Pull scheduled_time out of the config dict (passed as an explicit field below)
scheduled_time = config_data.pop('scheduled_time', '02:00')
automation_config = AutomationConfig.objects.create(
account=self.account,
site=site,
scheduled_time=scheduled_time,
**config_data
)
# Set next run to the next occurrence of the scheduled time if enabled
if automation_config.is_enabled:
next_run = self._calculate_initial_next_run(scheduled_time)
automation_config.next_run_at = next_run
automation_config.save(update_fields=['next_run_at'])
logger.info(
f"Created automation config for site {site.id}: "
f"enabled={automation_config.is_enabled}, "
f"frequency={automation_config.frequency}, "
f"next_run={automation_config.next_run_at}"
)
return automation_config
def _calculate_initial_next_run(self, scheduled_time: str) -> timezone.datetime:
"""Calculate the initial next run datetime (tomorrow at scheduled time)."""
now = timezone.now()
# Parse time
try:
hour, minute = map(int, scheduled_time.split(':'))
except (ValueError, AttributeError):
hour, minute = 2, 0 # Default to 2:00 AM
# Set to today's date at the scheduled time (bumped to tomorrow below if already past)
next_run = now.replace(
hour=hour,
minute=minute,
second=0,
microsecond=0
)
# If the time has passed today, schedule for tomorrow
if next_run <= now:
next_run += timezone.timedelta(days=1)
return next_run
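The scheduling logic above is easy to exercise standalone. This sketch uses naive `datetime` objects for illustration (the service uses timezone-aware ones via `django.utils.timezone`), and `initial_next_run` is a hypothetical mirror of the method:

```python
from datetime import datetime, timedelta

def initial_next_run(now, scheduled_time):
    """Next run at scheduled_time today, or tomorrow if that slot has passed.
    Malformed times fall back to 02:00, matching the service above."""
    try:
        hour, minute = map(int, scheduled_time.split(':'))
    except (ValueError, AttributeError):
        hour, minute = 2, 0  # default to 2:00 AM
    next_run = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if next_run <= now:  # slot already passed today -> schedule for tomorrow
        next_run += timedelta(days=1)
    return next_run
```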
@transaction.atomic
def apply_defaults_to_existing_site(
self,
site: Site,
force_overwrite: bool = False
) -> Tuple[PublishingSettings, AutomationConfig]:
"""
Apply default settings to an existing site.
Args:
site: Existing Site instance
force_overwrite: If True, overwrite existing settings. If False, only create if missing.
Returns:
Tuple of (PublishingSettings, AutomationConfig)
"""
# Handle publishing settings
if force_overwrite:
PublishingSettings.objects.filter(site=site).delete()
publishing_settings = self._create_publishing_settings(site)
else:
publishing_settings, created = PublishingSettings.objects.get_or_create(
site=site,
defaults={
'account': self.account,
**DEFAULT_PUBLISHING_SETTINGS
}
)
if not created:
logger.info(f"Publishing settings already exist for site {site.id}")
# Handle automation config
if force_overwrite:
AutomationConfig.objects.filter(site=site).delete()
automation_config = self._create_automation_config(site)
else:
try:
automation_config = AutomationConfig.objects.get(site=site)
logger.info(f"Automation config already exists for site {site.id}")
except AutomationConfig.DoesNotExist:
automation_config = self._create_automation_config(site)
return publishing_settings, automation_config
def create_site_with_defaults(
account: Account,
site_data: Dict[str, Any],
publishing_overrides: Optional[Dict[str, Any]] = None,
automation_overrides: Optional[Dict[str, Any]] = None,
) -> Tuple[Site, PublishingSettings, AutomationConfig]:
"""
Convenience function to create a site with default settings.
This is the main entry point for the onboarding flow.
Usage:
from igny8_core.business.integration.services.defaults_service import create_site_with_defaults
site, pub_settings, auto_config = create_site_with_defaults(
account=request.user.account,
site_data={
'name': 'My Blog',
'domain': 'myblog.com',
'hosting_type': 'wordpress',
}
)
"""
service = DefaultsService(account)
return service.create_site_with_defaults(
site_data,
publishing_overrides=publishing_overrides,
automation_overrides=automation_overrides,
)

View File

@@ -1 +0,0 @@
# Notifications module

View File

@@ -1,40 +0,0 @@
"""
Notification Admin Configuration
"""
from django.contrib import admin
from unfold.admin import ModelAdmin
from .models import Notification
@admin.register(Notification)
class NotificationAdmin(ModelAdmin):
list_display = ['title', 'notification_type', 'severity', 'account', 'user', 'is_read', 'created_at']
list_filter = ['notification_type', 'severity', 'is_read', 'created_at']
search_fields = ['title', 'message', 'account__name', 'user__email']
readonly_fields = ['created_at', 'updated_at', 'read_at']
ordering = ['-created_at']
fieldsets = (
('Notification', {
'fields': ('account', 'user', 'notification_type', 'severity')
}),
('Content', {
'fields': ('title', 'message', 'site')
}),
('Action', {
'fields': ('action_url', 'action_label')
}),
('Status', {
'fields': ('is_read', 'read_at')
}),
('Metadata', {
'fields': ('metadata',),
'classes': ('collapse',)
}),
('Timestamps', {
'fields': ('created_at', 'updated_at'),
'classes': ('collapse',)
}),
)

View File

@@ -1,13 +0,0 @@
"""
Notifications App Configuration
"""
from django.apps import AppConfig
class NotificationsConfig(AppConfig):
"""Configuration for the notifications app."""
default_auto_field = 'django.db.models.BigAutoField'
name = 'igny8_core.business.notifications'
label = 'notifications'
verbose_name = 'Notifications'

View File

@@ -1,45 +0,0 @@
# Generated by Django 5.2.9 on 2025-12-27 22:02
import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
('contenttypes', '0002_remove_content_type_name'),
('igny8_core_auth', '0018_add_country_remove_intent_seedkeyword'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Notification',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True, db_index=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('notification_type', models.CharField(choices=[('ai_cluster_complete', 'Clustering Complete'), ('ai_cluster_failed', 'Clustering Failed'), ('ai_ideas_complete', 'Ideas Generated'), ('ai_ideas_failed', 'Idea Generation Failed'), ('ai_content_complete', 'Content Generated'), ('ai_content_failed', 'Content Generation Failed'), ('ai_images_complete', 'Images Generated'), ('ai_images_failed', 'Image Generation Failed'), ('ai_prompts_complete', 'Image Prompts Created'), ('ai_prompts_failed', 'Image Prompts Failed'), ('content_ready_review', 'Content Ready for Review'), ('content_published', 'Content Published'), ('content_publish_failed', 'Publishing Failed'), ('wordpress_sync_success', 'WordPress Sync Complete'), ('wordpress_sync_failed', 'WordPress Sync Failed'), ('credits_low', 'Credits Running Low'), ('credits_depleted', 'Credits Depleted'), ('site_setup_complete', 'Site Setup Complete'), ('keywords_imported', 'Keywords Imported'), ('system_info', 'System Information')], default='system_info', max_length=50)),
('title', models.CharField(max_length=200)),
('message', models.TextField()),
('severity', models.CharField(choices=[('info', 'Info'), ('success', 'Success'), ('warning', 'Warning'), ('error', 'Error')], default='info', max_length=20)),
('object_id', models.PositiveIntegerField(blank=True, null=True)),
('action_url', models.CharField(blank=True, max_length=500, null=True)),
('action_label', models.CharField(blank=True, max_length=50, null=True)),
('is_read', models.BooleanField(default=False)),
('read_at', models.DateTimeField(blank=True, null=True)),
('metadata', models.JSONField(blank=True, default=dict)),
('account', models.ForeignKey(db_column='tenant_id', on_delete=django.db.models.deletion.CASCADE, related_name='%(class)s_set', to='igny8_core_auth.account')),
('content_type', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='contenttypes.contenttype')),
('site', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='notifications', to='igny8_core_auth.site')),
('user', models.ForeignKey(blank=True, help_text='If null, notification is visible to all account users', null=True, on_delete=django.db.models.deletion.CASCADE, related_name='notifications', to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ['-created_at'],
'indexes': [models.Index(fields=['account', '-created_at'], name='notificatio_tenant__3b20a7_idx'), models.Index(fields=['account', 'is_read', '-created_at'], name='notificatio_tenant__9a5521_idx'), models.Index(fields=['user', '-created_at'], name='notificatio_user_id_05b4bc_idx')],
},
),
]

View File

@@ -1,191 +0,0 @@
"""
Notification Models for IGNY8
This module provides a notification system for tracking AI operations,
workflow events, and system alerts.
"""
from django.db import models
from django.conf import settings
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from igny8_core.auth.models import AccountBaseModel
class NotificationType(models.TextChoices):
"""Notification type choices"""
# AI Operations
AI_CLUSTER_COMPLETE = 'ai_cluster_complete', 'Clustering Complete'
AI_CLUSTER_FAILED = 'ai_cluster_failed', 'Clustering Failed'
AI_IDEAS_COMPLETE = 'ai_ideas_complete', 'Ideas Generated'
AI_IDEAS_FAILED = 'ai_ideas_failed', 'Idea Generation Failed'
AI_CONTENT_COMPLETE = 'ai_content_complete', 'Content Generated'
AI_CONTENT_FAILED = 'ai_content_failed', 'Content Generation Failed'
AI_IMAGES_COMPLETE = 'ai_images_complete', 'Images Generated'
AI_IMAGES_FAILED = 'ai_images_failed', 'Image Generation Failed'
AI_PROMPTS_COMPLETE = 'ai_prompts_complete', 'Image Prompts Created'
AI_PROMPTS_FAILED = 'ai_prompts_failed', 'Image Prompts Failed'
# Workflow
CONTENT_READY_REVIEW = 'content_ready_review', 'Content Ready for Review'
CONTENT_PUBLISHED = 'content_published', 'Content Published'
CONTENT_PUBLISH_FAILED = 'content_publish_failed', 'Publishing Failed'
# WordPress Sync
WORDPRESS_SYNC_SUCCESS = 'wordpress_sync_success', 'WordPress Sync Complete'
WORDPRESS_SYNC_FAILED = 'wordpress_sync_failed', 'WordPress Sync Failed'
# Credits/Billing
CREDITS_LOW = 'credits_low', 'Credits Running Low'
CREDITS_DEPLETED = 'credits_depleted', 'Credits Depleted'
# Setup
SITE_SETUP_COMPLETE = 'site_setup_complete', 'Site Setup Complete'
KEYWORDS_IMPORTED = 'keywords_imported', 'Keywords Imported'
# System
SYSTEM_INFO = 'system_info', 'System Information'
class NotificationSeverity(models.TextChoices):
"""Notification severity choices"""
INFO = 'info', 'Info'
SUCCESS = 'success', 'Success'
WARNING = 'warning', 'Warning'
ERROR = 'error', 'Error'
class Notification(AccountBaseModel):
"""
Notification model for tracking events and alerts
Notifications are account-scoped (via AccountBaseModel) and can optionally target specific users.
They support generic relations to link to any related object.
"""
user = models.ForeignKey(
settings.AUTH_USER_MODEL,
on_delete=models.CASCADE,
null=True,
blank=True,
related_name='notifications',
help_text='If null, notification is visible to all account users'
)
# Notification content
notification_type = models.CharField(
max_length=50,
choices=NotificationType.choices,
default=NotificationType.SYSTEM_INFO
)
title = models.CharField(max_length=200)
message = models.TextField()
severity = models.CharField(
max_length=20,
choices=NotificationSeverity.choices,
default=NotificationSeverity.INFO
)
# Related site (optional)
site = models.ForeignKey(
'igny8_core_auth.Site',
on_delete=models.CASCADE,
null=True,
blank=True,
related_name='notifications'
)
# Generic relation to any object
content_type = models.ForeignKey(
ContentType,
on_delete=models.CASCADE,
null=True,
blank=True
)
object_id = models.PositiveIntegerField(null=True, blank=True)
content_object = GenericForeignKey('content_type', 'object_id')
# Action
action_url = models.CharField(max_length=500, null=True, blank=True)
action_label = models.CharField(max_length=50, null=True, blank=True)
# Status
is_read = models.BooleanField(default=False)
read_at = models.DateTimeField(null=True, blank=True)
# Metadata for counts/details
metadata = models.JSONField(default=dict, blank=True)
class Meta:
ordering = ['-created_at']
indexes = [
models.Index(fields=['account', '-created_at']),
models.Index(fields=['account', 'is_read', '-created_at']),
models.Index(fields=['user', '-created_at']),
]
def __str__(self):
return f"{self.title} ({self.notification_type})"
def mark_as_read(self):
"""Mark notification as read"""
if not self.is_read:
from django.utils import timezone
self.is_read = True
self.read_at = timezone.now()
self.save(update_fields=['is_read', 'read_at', 'updated_at'])
@classmethod
def create_notification(
cls,
account,
notification_type: str,
title: str,
message: str,
severity: str = NotificationSeverity.INFO,
user=None,
site=None,
content_object=None,
action_url: str = None,
action_label: str = None,
metadata: dict = None
):
"""
Factory method to create notifications
Args:
account: The account this notification belongs to
notification_type: Type from NotificationType choices
title: Notification title
message: Notification message body
severity: Severity level from NotificationSeverity choices
user: Optional specific user (if None, visible to all account users)
site: Optional related site
content_object: Optional related object (using GenericForeignKey)
action_url: Optional URL for action button
action_label: Optional label for action button
metadata: Optional dict with additional data (counts, etc.)
Returns:
Created Notification instance
"""
notification = cls(
account=account,
user=user,
notification_type=notification_type,
title=title,
message=message,
severity=severity,
site=site,
action_url=action_url,
action_label=action_label,
metadata=metadata or {}
)
if content_object:
notification.content_type = ContentType.objects.get_for_model(content_object)
notification.object_id = content_object.pk
notification.save()
return notification

View File

@@ -1,90 +0,0 @@
"""
Notification Serializers
"""
from rest_framework import serializers
from .models import Notification
class NotificationSerializer(serializers.ModelSerializer):
"""Serializer for Notification model"""
site_name = serializers.CharField(source='site.name', read_only=True, default=None)
class Meta:
model = Notification
fields = [
'id',
'notification_type',
'title',
'message',
'severity',
'site',
'site_name',
'action_url',
'action_label',
'is_read',
'read_at',
'metadata',
'created_at',
]
read_only_fields = ['id', 'created_at', 'read_at']
class NotificationListSerializer(serializers.ModelSerializer):
"""Lightweight serializer for notification lists"""
site_name = serializers.CharField(source='site.name', read_only=True, default=None)
time_ago = serializers.SerializerMethodField()
class Meta:
model = Notification
fields = [
'id',
'notification_type',
'title',
'message',
'severity',
'site_name',
'action_url',
'action_label',
'is_read',
'created_at',
'time_ago',
'metadata',
]
def get_time_ago(self, obj):
"""Return human-readable time since notification"""
from django.utils import timezone
from datetime import timedelta
now = timezone.now()
diff = now - obj.created_at
if diff < timedelta(minutes=1):
return 'Just now'
elif diff < timedelta(hours=1):
minutes = int(diff.total_seconds() / 60)
return f'{minutes} minute{"s" if minutes != 1 else ""} ago'
elif diff < timedelta(days=1):
hours = int(diff.total_seconds() / 3600)
return f'{hours} hour{"s" if hours != 1 else ""} ago'
elif diff < timedelta(days=7):
days = diff.days
if days == 1:
return 'Yesterday'
return f'{days} days ago'
else:
return obj.created_at.strftime('%b %d, %Y')
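The bucketing in `get_time_ago` can be checked in isolation with a pure function — a hypothetical standalone version taking an explicit `now` instead of `timezone.now()`:

```python
from datetime import datetime, timedelta

def time_ago(created_at, now):
    """Human-readable age buckets, mirroring the serializer method above."""
    diff = now - created_at
    if diff < timedelta(minutes=1):
        return 'Just now'
    if diff < timedelta(hours=1):
        minutes = int(diff.total_seconds() / 60)
        return f'{minutes} minute{"s" if minutes != 1 else ""} ago'
    if diff < timedelta(days=1):
        hours = int(diff.total_seconds() / 3600)
        return f'{hours} hour{"s" if hours != 1 else ""} ago'
    if diff < timedelta(days=7):
        return 'Yesterday' if diff.days == 1 else f'{diff.days} days ago'
    return created_at.strftime('%b %d, %Y')
```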
class MarkReadSerializer(serializers.Serializer):
"""Serializer for marking notifications as read"""
notification_ids = serializers.ListField(
child=serializers.IntegerField(),
required=False,
help_text='List of notification IDs to mark as read. If empty, marks all as read.'
)

View File

@@ -1,306 +0,0 @@
"""
Notification Service
Provides methods to create notifications for various events in the system.
"""
from .models import Notification, NotificationType, NotificationSeverity
class NotificationService:
"""Service for creating notifications"""
@staticmethod
def notify_clustering_complete(account, site=None, cluster_count=0, keyword_count=0, user=None):
"""Create notification when keyword clustering completes"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_CLUSTER_COMPLETE,
title='Clustering Complete',
message=f'Created {cluster_count} clusters from {keyword_count} keywords',
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
action_url='/planner/clusters',
action_label='View Clusters',
metadata={'cluster_count': cluster_count, 'keyword_count': keyword_count}
)
@staticmethod
def notify_clustering_failed(account, site=None, error=None, user=None):
"""Create notification when keyword clustering fails"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_CLUSTER_FAILED,
title='Clustering Failed',
message=f'Failed to cluster keywords: {error}' if error else 'Failed to cluster keywords',
severity=NotificationSeverity.ERROR,
user=user,
site=site,
action_url='/planner/keywords',
action_label='View Keywords',
metadata={'error': str(error) if error else None}
)
@staticmethod
def notify_ideas_complete(account, site=None, idea_count=0, cluster_count=0, user=None):
"""Create notification when idea generation completes"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_IDEAS_COMPLETE,
title='Ideas Generated',
message=f'Generated {idea_count} content ideas from {cluster_count} clusters',
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
action_url='/planner/ideas',
action_label='View Ideas',
metadata={'idea_count': idea_count, 'cluster_count': cluster_count}
)
@staticmethod
def notify_ideas_failed(account, site=None, error=None, user=None):
"""Create notification when idea generation fails"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_IDEAS_FAILED,
title='Idea Generation Failed',
message=f'Failed to generate ideas: {error}' if error else 'Failed to generate ideas',
severity=NotificationSeverity.ERROR,
user=user,
site=site,
action_url='/planner/clusters',
action_label='View Clusters',
metadata={'error': str(error) if error else None}
)
@staticmethod
def notify_content_complete(account, site=None, article_count=0, word_count=0, user=None):
"""Create notification when content generation completes"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_CONTENT_COMPLETE,
title='Content Generated',
message=f'Generated {article_count} article{"s" if article_count != 1 else ""} ({word_count:,} words)',
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
action_url='/writer/content',
action_label='View Content',
metadata={'article_count': article_count, 'word_count': word_count}
)
@staticmethod
def notify_content_failed(account, site=None, error=None, user=None):
"""Create notification when content generation fails"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_CONTENT_FAILED,
title='Content Generation Failed',
message=f'Failed to generate content: {error}' if error else 'Failed to generate content',
severity=NotificationSeverity.ERROR,
user=user,
site=site,
action_url='/writer/tasks',
action_label='View Tasks',
metadata={'error': str(error) if error else None}
)
@staticmethod
def notify_images_complete(account, site=None, image_count=0, user=None):
"""Create notification when image generation completes"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_IMAGES_COMPLETE,
title='Images Generated',
message=f'Generated {image_count} image{"s" if image_count != 1 else ""}',
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
action_url='/writer/images',
action_label='View Images',
metadata={'image_count': image_count}
)
@staticmethod
def notify_images_failed(account, site=None, error=None, image_count=0, user=None):
"""Create notification when image generation fails"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_IMAGES_FAILED,
title='Image Generation Failed',
message=f'Failed to generate {image_count} image{"s" if image_count != 1 else ""}: {error}' if error else 'Failed to generate images',
severity=NotificationSeverity.ERROR,
user=user,
site=site,
action_url='/writer/images',
action_label='View Images',
metadata={'error': str(error) if error else None, 'image_count': image_count}
)
@staticmethod
def notify_prompts_complete(account, site=None, prompt_count=0, user=None):
"""Create notification when image prompt generation completes"""
in_article_count = prompt_count - 1 if prompt_count > 1 else 0
message = f'{prompt_count} image prompts ready (1 featured + {in_article_count} in-article)' if in_article_count > 0 else f'{prompt_count} image prompt{"s" if prompt_count != 1 else ""} ready'
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_PROMPTS_COMPLETE,
title='Image Prompts Created',
message=message,
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
action_url='/writer/images',
action_label='Generate Images',
metadata={'prompt_count': prompt_count, 'in_article_count': in_article_count}
)
@staticmethod
def notify_prompts_failed(account, site=None, error=None, user=None):
"""Create notification when image prompt generation fails"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.AI_PROMPTS_FAILED,
title='Image Prompts Failed',
message=f'Failed to create image prompts: {error}' if error else 'Failed to create image prompts',
severity=NotificationSeverity.ERROR,
user=user,
site=site,
action_url='/writer/content',
action_label='View Content',
metadata={'error': str(error) if error else None}
)
@staticmethod
def notify_content_published(account, site=None, title='', content_object=None, user=None):
"""Create notification when content is published"""
site_name = site.name if site else 'site'
return Notification.create_notification(
account=account,
notification_type=NotificationType.CONTENT_PUBLISHED,
title='Content Published',
message=f'"{title}" published to {site_name}',
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
content_object=content_object,
action_url='/writer/published',
action_label='View Published',
metadata={'content_title': title}
)
@staticmethod
def notify_publish_failed(account, site=None, title='', error=None, user=None):
"""Create notification when publishing fails"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.CONTENT_PUBLISH_FAILED,
title='Publishing Failed',
message=f'Failed to publish "{title}": {error}' if error else f'Failed to publish "{title}"',
severity=NotificationSeverity.ERROR,
user=user,
site=site,
action_url='/writer/review',
action_label='View Review',
metadata={'content_title': title, 'error': str(error) if error else None}
)
@staticmethod
def notify_wordpress_sync_success(account, site=None, count=0, user=None):
"""Create notification when WordPress sync succeeds"""
site_name = site.name if site else 'site'
return Notification.create_notification(
account=account,
notification_type=NotificationType.WORDPRESS_SYNC_SUCCESS,
title='WordPress Synced',
message=f'Synced {count} item{"s" if count != 1 else ""} with {site_name}',
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
action_url='/writer/published',
action_label='View Published',
metadata={'sync_count': count}
)
@staticmethod
def notify_wordpress_sync_failed(account, site=None, error=None, user=None):
"""Create notification when WordPress sync fails"""
site_name = site.name if site else 'site'
return Notification.create_notification(
account=account,
notification_type=NotificationType.WORDPRESS_SYNC_FAILED,
title='Sync Failed',
message=f'WordPress sync failed for {site_name}: {error}' if error else f'WordPress sync failed for {site_name}',
severity=NotificationSeverity.ERROR,
user=user,
site=site,
action_url=f'/sites/{site.id}/integrations' if site else '/sites',
action_label='Check Integration',
metadata={'error': str(error) if error else None}
)
@staticmethod
def notify_credits_low(account, percentage_used=80, credits_remaining=0, user=None):
"""Create notification when credits are running low"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.CREDITS_LOW,
title='Credits Running Low',
message=f"You've used {percentage_used}% of your credits. {credits_remaining} credits remaining.",
severity=NotificationSeverity.WARNING,
user=user,
action_url='/account/billing',
action_label='Upgrade Plan',
metadata={'percentage_used': percentage_used, 'credits_remaining': credits_remaining}
)
@staticmethod
def notify_credits_depleted(account, user=None):
"""Create notification when credits are depleted"""
return Notification.create_notification(
account=account,
notification_type=NotificationType.CREDITS_DEPLETED,
title='Credits Depleted',
message='Your credits are exhausted. Upgrade to continue using AI features.',
severity=NotificationSeverity.ERROR,
user=user,
action_url='/account/billing',
action_label='Upgrade Now',
metadata={}
)
@staticmethod
def notify_site_setup_complete(account, site=None, user=None):
"""Create notification when site setup is complete"""
site_name = site.name if site else 'Site'
return Notification.create_notification(
account=account,
notification_type=NotificationType.SITE_SETUP_COMPLETE,
title='Site Ready',
message=f'{site_name} is fully configured and ready!',
severity=NotificationSeverity.SUCCESS,
user=user,
site=site,
action_url=f'/sites/{site.id}' if site else '/sites',
action_label='View Site',
metadata={}
)
@staticmethod
def notify_keywords_imported(account, site=None, count=0, user=None):
"""Create notification when keywords are imported"""
site_name = site.name if site else 'site'
return Notification.create_notification(
account=account,
notification_type=NotificationType.KEYWORDS_IMPORTED,
title='Keywords Imported',
message=f'Added {count} keyword{"s" if count != 1 else ""} to {site_name}',
severity=NotificationSeverity.INFO,
user=user,
site=site,
action_url='/planner/keywords',
action_label='View Keywords',
metadata={'keyword_count': count}
)
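Several of the messages above repeat the same inline pluralization and count-formatting expressions. The pattern can be isolated into small pure helpers — a sketch only; these helper names do not exist in the codebase:

```python
def plural(count: int, noun: str) -> str:
    """Pluralize the way the notification messages above do: '1 image', '3 images'."""
    return f'{count} {noun}{"s" if count != 1 else ""}'

def content_complete_message(article_count: int, word_count: int) -> str:
    """Mirror notify_content_complete's message, including the thousands separator."""
    return f'Generated {plural(article_count, "article")} ({word_count:,} words)'

def prompt_message(prompt_count: int) -> str:
    """Mirror notify_prompts_complete's featured/in-article split."""
    in_article_count = prompt_count - 1 if prompt_count > 1 else 0
    if in_article_count > 0:
        return f'{prompt_count} image prompts ready (1 featured + {in_article_count} in-article)'
    return f'{prompt_count} image prompt{"s" if prompt_count != 1 else ""} ready'

print(content_complete_message(2, 12345))  # Generated 2 articles (12,345 words)
print(prompt_message(5))                   # 5 image prompts ready (1 featured + 4 in-article)
```

Centralizing the pluralization would keep the dozen-plus message strings above consistent if the wording ever changes.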

View File

@@ -1,15 +0,0 @@
"""
Notification URL Configuration
"""
from django.urls import path, include
from rest_framework.routers import DefaultRouter
from .views import NotificationViewSet
router = DefaultRouter()
router.register(r'notifications', NotificationViewSet, basename='notification')
urlpatterns = [
path('', include(router.urls)),
]

View File

@@ -1,132 +0,0 @@
"""
Notification Views
"""
from rest_framework import viewsets, status
from rest_framework.decorators import action
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated
from django.utils import timezone
from igny8_core.api.pagination import CustomPageNumberPagination
from igny8_core.api.base import AccountModelViewSet
from .models import Notification
from .serializers import NotificationSerializer, NotificationListSerializer, MarkReadSerializer
class NotificationViewSet(AccountModelViewSet):
"""
ViewSet for managing notifications
Endpoints:
- GET /api/v1/notifications/ - List notifications
- GET /api/v1/notifications/{id}/ - Get notification detail
- DELETE /api/v1/notifications/{id}/ - Delete notification
- POST /api/v1/notifications/{id}/read/ - Mark single notification as read
- POST /api/v1/notifications/read-all/ - Mark all notifications as read
- GET /api/v1/notifications/unread-count/ - Get unread notification count
"""
serializer_class = NotificationSerializer
pagination_class = CustomPageNumberPagination
permission_classes = [IsAuthenticated]
def get_queryset(self):
"""Filter notifications for current account and user"""
from django.db.models import Q
user = self.request.user
account = getattr(user, 'account', None)
if not account:
return Notification.objects.none()
# Get notifications for this account that are either:
# - For all users (user=None)
# - For this specific user
queryset = Notification.objects.filter(
Q(account=account, user__isnull=True) |
Q(account=account, user=user)
).select_related('site').order_by('-created_at')
# Optional filters
is_read = self.request.query_params.get('is_read')
if is_read is not None:
queryset = queryset.filter(is_read=is_read.lower() == 'true')
notification_type = self.request.query_params.get('type')
if notification_type:
queryset = queryset.filter(notification_type=notification_type)
severity = self.request.query_params.get('severity')
if severity:
queryset = queryset.filter(severity=severity)
return queryset
def get_serializer_class(self):
"""Use list serializer for list action"""
if self.action == 'list':
return NotificationListSerializer
return NotificationSerializer
def list(self, request, *args, **kwargs):
"""List notifications with unread count"""
queryset = self.filter_queryset(self.get_queryset())
# Get unread count
unread_count = queryset.filter(is_read=False).count()
page = self.paginate_queryset(queryset)
if page is not None:
serializer = self.get_serializer(page, many=True)
response = self.get_paginated_response(serializer.data)
response.data['unread_count'] = unread_count
return response
serializer = self.get_serializer(queryset, many=True)
return Response({
'results': serializer.data,
'unread_count': unread_count
})
@action(detail=True, methods=['post'])
def read(self, request, pk=None):
"""Mark a single notification as read"""
notification = self.get_object()
notification.mark_as_read()
serializer = self.get_serializer(notification)
return Response(serializer.data)
@action(detail=False, methods=['post'], url_path='read-all')
def read_all(self, request):
"""Mark all notifications as read"""
serializer = MarkReadSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
notification_ids = serializer.validated_data.get('notification_ids', [])
queryset = self.get_queryset().filter(is_read=False)
if notification_ids:
queryset = queryset.filter(id__in=notification_ids)
count = queryset.update(is_read=True, read_at=timezone.now())
return Response({
'status': 'success',
'marked_read': count
})
@action(detail=False, methods=['get'], url_path='unread-count')
def unread_count(self, request):
"""Get count of unread notifications"""
count = self.get_queryset().filter(is_read=False).count()
return Response({'unread_count': count})
def destroy(self, request, *args, **kwargs):
"""Delete a notification"""
instance = self.get_object()
self.perform_destroy(instance)
return Response(status=status.HTTP_204_NO_CONTENT)
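The `Q`-object filter in `get_queryset` encodes a simple visibility rule: a notification is visible when it is account-wide (`user=None`) or addressed to the requesting user. In plain Python, outside the ORM (illustrative only):

```python
def notification_visible(notification_user_id, request_user_id):
    """Sketch of the queryset rule: account-wide (user=None) OR owned by the requester."""
    return notification_user_id is None or notification_user_id == request_user_id

inbox = [
    {'id': 1, 'user_id': None},  # broadcast to every user on the account
    {'id': 2, 'user_id': 7},     # personal to user 7
    {'id': 3, 'user_id': 9},     # another user's notification
]
visible_ids = [n['id'] for n in inbox if notification_visible(n['user_id'], 7)]
print(visible_ids)  # [1, 2]
```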

View File

@@ -1,5 +1,4 @@
"""
Planning business logic - Keywords, Clusters, ContentIdeas models and services
"""
# Import signals to register cascade handlers
from . import signals # noqa: F401

View File

@@ -1,9 +1,6 @@
from django.db import models
from igny8_core.auth.models import SiteSectorBaseModel, SeedKeyword
from igny8_core.common.soft_delete import SoftDeletableModel, SoftDeleteManager
import logging
logger = logging.getLogger(__name__)
class Clusters(SoftDeletableModel, SiteSectorBaseModel):
@@ -42,27 +39,6 @@ class Clusters(SoftDeletableModel, SiteSectorBaseModel):
def __str__(self):
return self.name
def soft_delete(self, user=None, reason=None, retention_days=None):
"""
Override soft_delete to cascade status reset to related Keywords.
When a cluster is deleted, its keywords should:
- Have their cluster FK set to NULL (handled by SET_NULL)
- Have their status reset to 'new' (orphaned keywords)
"""
# Reset related keywords status to 'new' and clear cluster FK
keywords_count = self.keywords.filter(is_deleted=False).update(
cluster=None,
status='new'
)
logger.info(
f"[Clusters.soft_delete] Cluster {self.id} '{self.name}' cascade: "
f"reset {keywords_count} keywords to status='new'"
)
# Call parent soft_delete
super().soft_delete(user=user, reason=reason, retention_days=retention_days)
class Keywords(SoftDeletableModel, SiteSectorBaseModel):

View File

@@ -52,12 +52,26 @@ class ClusteringService:
# Delegate to AI task
from igny8_core.ai.tasks import run_ai_task
from django.conf import settings
payload = {
'ids': keyword_ids,
'sector_id': sector_id
}
# Stage 1: When USE_SITE_BUILDER_REFACTOR is enabled, payload can include
# taxonomy hints and dimension metadata for enhanced clustering.
# TODO (Stage 2/3): Enhance clustering to collect and use:
# - Taxonomy hints from SiteBlueprintTaxonomy
# - Dimension metadata (context_type, dimension_meta) for clusters
# - Attribute values from Keywords.attribute_values
if getattr(settings, 'USE_SITE_BUILDER_REFACTOR', False):
logger.info(
f"Clustering with refactor enabled: {len(keyword_ids)} keywords, "
f"sector_id={sector_id}, account_id={account.id}"
)
# Future: Add taxonomy hints and dimension metadata to payload
try:
if hasattr(run_ai_task, 'delay'):
# Celery available - queue async

View File

@@ -1,130 +0,0 @@
"""
Cascade signals for Planning models
Handles status updates and relationship cleanup when parent records are deleted
"""
import logging
from django.db.models.signals import pre_delete, post_save
from django.dispatch import receiver
logger = logging.getLogger(__name__)
@receiver(pre_delete, sender='planner.Clusters')
def handle_cluster_soft_delete(sender, instance, **kwargs):
"""
When a Cluster is deleted:
- Set Keywords.cluster = NULL
- Reset Keywords.status to 'new'
- Set ContentIdeas.keyword_cluster = NULL
- Reset ContentIdeas.status to 'new'
"""
from igny8_core.business.planning.models import Keywords, ContentIdeas
# Check if this is a soft delete (is_deleted=True) vs hard delete
# Soft deletes trigger delete() which calls soft_delete()
if hasattr(instance, 'is_deleted') and instance.is_deleted:
return # Skip if already soft-deleted
try:
# Update related Keywords - clear cluster FK and reset status
updated_keywords = Keywords.objects.filter(cluster=instance).update(
cluster=None,
status='new'
)
if updated_keywords:
logger.info(
f"[Cascade] Cluster '{instance.name}' (ID: {instance.id}) deleted: "
f"Reset {updated_keywords} keywords to status='new', cluster=NULL"
)
# Update related ContentIdeas - clear cluster FK and reset status
updated_ideas = ContentIdeas.objects.filter(keyword_cluster=instance).update(
keyword_cluster=None,
status='new'
)
if updated_ideas:
logger.info(
f"[Cascade] Cluster '{instance.name}' (ID: {instance.id}) deleted: "
f"Reset {updated_ideas} content ideas to status='new', keyword_cluster=NULL"
)
except Exception as e:
logger.error(f"[Cascade] Error handling cluster deletion cascade: {e}", exc_info=True)
@receiver(pre_delete, sender='planner.ContentIdeas')
def handle_idea_soft_delete(sender, instance, **kwargs):
"""
When a ContentIdea is deleted:
- Set Tasks.idea = NULL (don't delete tasks, they may have content)
- Log orphaned tasks
"""
from igny8_core.business.content.models import Tasks
if hasattr(instance, 'is_deleted') and instance.is_deleted:
return
try:
# Update related Tasks - clear idea FK
updated_tasks = Tasks.objects.filter(idea=instance).update(idea=None)
if updated_tasks:
logger.info(
f"[Cascade] ContentIdea '{instance.idea_title}' (ID: {instance.id}) deleted: "
f"Cleared idea reference from {updated_tasks} tasks"
)
except Exception as e:
logger.error(f"[Cascade] Error handling content idea deletion cascade: {e}", exc_info=True)
@receiver(pre_delete, sender='writer.Tasks')
def handle_task_soft_delete(sender, instance, **kwargs):
"""
When a Task is deleted:
- Set Content.task = NULL
"""
from igny8_core.business.content.models import Content
if hasattr(instance, 'is_deleted') and instance.is_deleted:
return
try:
# Update related Content - clear task FK
updated_content = Content.objects.filter(task=instance).update(task=None)
if updated_content:
logger.info(
f"[Cascade] Task '{instance.title}' (ID: {instance.id}) deleted: "
f"Cleared task reference from {updated_content} content items"
)
except Exception as e:
logger.error(f"[Cascade] Error handling task deletion cascade: {e}", exc_info=True)
@receiver(pre_delete, sender='writer.Content')
def handle_content_soft_delete(sender, instance, **kwargs):
"""
When Content is deleted:
- Soft delete related Images (cascade soft delete)
- Clear PublishingRecord references
"""
from igny8_core.business.content.models import Images
if hasattr(instance, 'is_deleted') and instance.is_deleted:
return
try:
# Soft delete related Images
related_images = Images.objects.filter(content=instance)
for image in related_images:
image.soft_delete(reason='cascade_from_content')
count = related_images.count()
if count:
logger.info(
f"[Cascade] Content '{instance.title}' (ID: {instance.id}) deleted: "
f"Soft deleted {count} related images"
)
except Exception as e:
logger.error(f"[Cascade] Error handling content deletion cascade: {e}", exc_info=True)
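Each handler above follows the same shape: null the foreign key and reset status on dependent rows. An in-memory emulation of the cluster cascade, with dicts standing in for model rows and no ORM involved:

```python
def cascade_cluster_delete(keywords, cluster_id):
    """Emulate the pre_delete handler: orphan the deleted cluster's keywords."""
    reset = 0
    for kw in keywords:
        if kw['cluster'] == cluster_id:
            kw['cluster'] = None   # SET_NULL on the FK
            kw['status'] = 'new'   # orphaned keywords go back to 'new'
            reset += 1
    return reset

kws = [
    {'id': 1, 'cluster': 10, 'status': 'clustered'},
    {'id': 2, 'cluster': 10, 'status': 'clustered'},
    {'id': 3, 'cluster': 11, 'status': 'clustered'},
]
print(cascade_cluster_delete(kws, 10))  # 2
print(kws[0])  # {'id': 1, 'cluster': None, 'status': 'new'}
```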

View File

@@ -19,9 +19,6 @@ app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django apps.
app.autodiscover_tasks()
# Explicitly import tasks from igny8_core/tasks directory
app.autodiscover_tasks(['igny8_core.tasks'])
# Celery Beat schedule for periodic tasks
app.conf.beat_schedule = {
'replenish-monthly-credits': {
@@ -42,15 +39,6 @@ app.conf.beat_schedule = {
'task': 'automation.check_scheduled_automations',
'schedule': crontab(minute=0), # Every hour at :00
},
# Publishing Scheduler Tasks
'schedule-approved-content': {
'task': 'publishing.schedule_approved_content',
'schedule': crontab(minute=0), # Every hour at :00
},
'process-scheduled-publications': {
'task': 'publishing.process_scheduled_publications',
'schedule': crontab(minute='*/5'), # Every 5 minutes
},
# Maintenance: purge expired soft-deleted records daily at 3:15 AM
'purge-soft-deleted-records': {
'task': 'igny8_core.purge_soft_deleted',

View File

@@ -1,152 +0,0 @@
"""
Management command to clean up all user-generated data (DESTRUCTIVE).
This is used before V1.0 production launch to start with a clean database.
⚠️ WARNING: This permanently deletes ALL user data!
Usage:
# DRY RUN (recommended first):
python manage.py cleanup_user_data --dry-run
# ACTUAL CLEANUP (after reviewing dry-run):
python manage.py cleanup_user_data --confirm
"""
from django.core.management.base import BaseCommand
from django.db import transaction
from django.conf import settings
class Command(BaseCommand):
help = 'Clean up all user-generated data (DESTRUCTIVE - for pre-launch cleanup)'
def add_arguments(self, parser):
parser.add_argument(
'--confirm',
action='store_true',
help='Confirm you want to delete all user data'
)
parser.add_argument(
'--dry-run',
action='store_true',
help='Show what would be deleted without actually deleting'
)
def handle(self, *args, **options):
if not options['confirm'] and not options['dry_run']:
self.stdout.write(
self.style.ERROR('\n⚠️ ERROR: Must use --confirm or --dry-run flag\n')
)
self.stdout.write('Usage:')
self.stdout.write(' python manage.py cleanup_user_data --dry-run # See what will be deleted')
self.stdout.write(' python manage.py cleanup_user_data --confirm # Actually delete data\n')
return
# Safety check: Prevent running in production unless explicitly allowed
if getattr(settings, 'ENVIRONMENT', 'production') == 'production' and options['confirm']:
self.stdout.write(
self.style.ERROR('\n⚠️ BLOCKED: Cannot run cleanup in PRODUCTION environment!\n')
)
self.stdout.write('To allow this, temporarily set ENVIRONMENT to "staging" in settings.\n')
return
# Import models
from igny8_core.auth.models import Site, CustomUser
from igny8_core.business.planning.models import Keywords, Clusters
from igny8_core.business.content.models import ContentIdea, Tasks, Content, Images
from igny8_core.modules.publisher.models import PublishingRecord
from igny8_core.business.integration.models import WordPressSyncEvent
from igny8_core.modules.billing.models import CreditTransaction, CreditUsageLog, Order
from igny8_core.modules.system.models import Notification
from igny8_core.modules.writer.models import AutomationRun
# Define models to clear (ORDER MATTERS - foreign keys)
# Delete child records before parent records
models_to_clear = [
('Notifications', Notification),
('Credit Usage Logs', CreditUsageLog),
('Credit Transactions', CreditTransaction),
('Orders', Order),
('WordPress Sync Events', WordPressSyncEvent),
('Publishing Records', PublishingRecord),
('Automation Runs', AutomationRun),
('Images', Images),
('Content', Content),
('Tasks', Tasks),
('Content Ideas', ContentIdea),
('Clusters', Clusters),
('Keywords', Keywords),
('Sites', Site), # Sites should be near last (many foreign keys)
# Note: We do NOT delete CustomUser - keep admin users
]
if options['dry_run']:
self.stdout.write(self.style.WARNING('\n' + '=' * 70))
self.stdout.write(self.style.WARNING('DRY RUN - No data will be deleted'))
self.stdout.write(self.style.WARNING('=' * 70 + '\n'))
total_records = 0
for name, model in models_to_clear:
count = model.objects.count()
total_records += count
status = '✓' if count > 0 else '·'
self.stdout.write(f' {status} Would delete {count:6d} {name}')
# Count users (not deleted)
user_count = CustomUser.objects.count()
self.stdout.write(f'\n → Keeping {user_count:6d} Users (not deleted)')
self.stdout.write(f'\n Total records to delete: {total_records:,}')
self.stdout.write('\n' + '=' * 70)
self.stdout.write(self.style.SUCCESS('\nTo proceed with actual deletion, run:'))
self.stdout.write(' python manage.py cleanup_user_data --confirm\n')
return
# ACTUAL DELETION
self.stdout.write(self.style.ERROR('\n' + '=' * 70))
self.stdout.write(self.style.ERROR('⚠️ DELETING ALL USER DATA - THIS CANNOT BE UNDONE!'))
self.stdout.write(self.style.ERROR('=' * 70 + '\n'))
# Final confirmation prompt
confirm_text = input('Type "DELETE ALL DATA" to proceed: ')
if confirm_text != 'DELETE ALL DATA':
self.stdout.write(self.style.WARNING('\nAborted. Data was NOT deleted.\n'))
return
self.stdout.write('\nProceeding with deletion...\n')
deleted_counts = {}
failed_deletions = []
with transaction.atomic():
for name, model in models_to_clear:
try:
count = model.objects.count()
if count > 0:
model.objects.all().delete()
deleted_counts[name] = count
self.stdout.write(
self.style.SUCCESS(f'✓ Deleted {count:6d} {name}')
)
else:
self.stdout.write(
self.style.WARNING(f'· Skipped {count:6d} {name} (already empty)')
)
except Exception as e:
failed_deletions.append((name, str(e)))
self.stdout.write(
self.style.ERROR(f'✗ Failed to delete {name}: {str(e)}')
)
# Summary
total_deleted = sum(deleted_counts.values())
self.stdout.write('\n' + '=' * 70)
self.stdout.write(self.style.SUCCESS(f'\nUser Data Cleanup Complete!\n'))
self.stdout.write(f' Total records deleted: {total_deleted:,}')
self.stdout.write(f' Failed deletions: {len(failed_deletions)}')
if failed_deletions:
self.stdout.write(self.style.WARNING('\nFailed deletions:'))
for name, error in failed_deletions:
self.stdout.write(f' - {name}: {error}')
self.stdout.write('\n' + '=' * 70 + '\n')
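The `models_to_clear` list is ordered children-before-parents so deletions never hit a row that other rows still reference. A hypothetical checker for that invariant (the FK edges below are illustrative, taken loosely from the list above):

```python
def child_first(order, parents_of):
    """Verify each model is deleted before the parent models it references."""
    deleted = set()
    for name in order:
        if any(parent in deleted for parent in parents_of.get(name, ())):
            return False  # a referenced parent was deleted before this child
        deleted.add(name)
    return True

parents_of = {
    'Images': ['Content'],
    'Content': ['Tasks', 'Sites'],
    'Tasks': ['Sites'],
}
print(child_first(['Images', 'Content', 'Tasks', 'Sites'], parents_of))  # True
print(child_first(['Content', 'Images', 'Tasks', 'Sites'], parents_of))  # False
```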

View File

@@ -1,122 +0,0 @@
"""
Management command to export system configuration data to JSON files.
This exports Plans, Credit Costs, AI Models, Industries, Sectors, Seed Keywords, etc.
Usage:
python manage.py export_system_config --output-dir=backups/config
"""
from django.core.management.base import BaseCommand
from django.core import serializers
import json
import os
from datetime import datetime
class Command(BaseCommand):
help = 'Export system configuration data to JSON files for V1.0 backup'
def add_arguments(self, parser):
parser.add_argument(
'--output-dir',
default='backups/config',
help='Output directory for config files (relative to project root)'
)
def handle(self, *args, **options):
output_dir = options['output_dir']
# Make output_dir absolute if it's relative
if not os.path.isabs(output_dir):
# Get project root (parent of manage.py)
project_root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
output_dir = os.path.join(project_root, '..', output_dir)
os.makedirs(output_dir, exist_ok=True)
self.stdout.write(self.style.SUCCESS(f'\nExporting system configuration to: {output_dir}\n'))
# Import models
from igny8_core.modules.billing.models import Plan, CreditCostConfig
from igny8_core.modules.system.models import AIModelConfig, GlobalIntegrationSettings
from igny8_core.auth.models import Industry, Sector, SeedKeyword, AuthorProfile
from igny8_core.ai.models import Prompt, PromptVariable
# Define what to export
exports = {
'plans': (Plan.objects.all(), 'Subscription Plans'),
'credit_costs': (CreditCostConfig.objects.all(), 'Credit Cost Configurations'),
'ai_models': (AIModelConfig.objects.all(), 'AI Model Configurations'),
'global_integrations': (GlobalIntegrationSettings.objects.all(), 'Global Integration Settings'),
'industries': (Industry.objects.all(), 'Industries'),
'sectors': (Sector.objects.all(), 'Sectors'),
'seed_keywords': (SeedKeyword.objects.all(), 'Seed Keywords'),
'author_profiles': (AuthorProfile.objects.all(), 'Author Profiles'),
'prompts': (Prompt.objects.all(), 'AI Prompts'),
'prompt_variables': (PromptVariable.objects.all(), 'Prompt Variables'),
}
successful_exports = []
failed_exports = []
for name, (queryset, description) in exports.items():
try:
count = queryset.count()
data = serializers.serialize('json', queryset, indent=2)
filepath = os.path.join(output_dir, f'{name}.json')
with open(filepath, 'w') as f:
f.write(data)
self.stdout.write(
self.style.SUCCESS(f'✓ Exported {count:4d} {description:30s}{name}.json')
)
successful_exports.append(name)
except Exception as e:
self.stdout.write(
self.style.ERROR(f'✗ Failed to export {description}: {str(e)}')
)
failed_exports.append((name, str(e)))
# Export metadata
metadata = {
'exported_at': datetime.now().isoformat(),
'django_version': self.get_django_version(),
'database': self.get_database_info(),
'successful_exports': successful_exports,
'failed_exports': failed_exports,
'export_count': len(successful_exports),
}
metadata_path = os.path.join(output_dir, 'export_metadata.json')
with open(metadata_path, 'w') as f:
json.dump(metadata, f, indent=2)
self.stdout.write(self.style.SUCCESS(f'\n✓ Metadata saved to export_metadata.json'))
# Summary
self.stdout.write('\n' + '=' * 70)
self.stdout.write(self.style.SUCCESS(f'\nSystem Configuration Export Complete!\n'))
self.stdout.write(f' Successful: {len(successful_exports)} exports')
self.stdout.write(f' Failed: {len(failed_exports)} exports')
self.stdout.write(f' Location: {output_dir}\n')
if failed_exports:
self.stdout.write(self.style.WARNING('\nFailed exports:'))
for name, error in failed_exports:
self.stdout.write(f' - {name}: {error}')
self.stdout.write('=' * 70 + '\n')
def get_django_version(self):
import django
return django.get_version()
def get_database_info(self):
from django.conf import settings
db_config = settings.DATABASES.get('default', {})
return {
'engine': db_config.get('ENGINE', '').split('.')[-1],
'name': db_config.get('NAME', ''),
}
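The command's `export_metadata.json` can be sanity-checked with a plain stdlib round-trip. One caveat worth noting: the command stores `failed_exports` as tuples, which come back as lists after a JSON reload — the sketch below uses lists throughout (file paths and values are illustrative):

```python
import json
import os
import tempfile
from datetime import datetime

# Round-trip the metadata structure written by export_system_config.
metadata = {
    'exported_at': datetime(2025, 12, 27, 16, 0).isoformat(),
    'successful_exports': ['plans', 'sectors'],
    'failed_exports': [],
    'export_count': 2,
}
output_dir = tempfile.mkdtemp()
metadata_path = os.path.join(output_dir, 'export_metadata.json')
with open(metadata_path, 'w') as f:
    json.dump(metadata, f, indent=2)
with open(metadata_path) as f:
    restored = json.load(f)
print(restored['exported_at'])  # 2025-12-27T16:00:00
print(restored == metadata)     # True
```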

View File

@@ -519,30 +519,6 @@ class PaymentMethodConfigAdmin(Igny8ModelAdmin):
search_fields = ['country_code', 'display_name', 'payment_method']
list_editable = ['is_enabled', 'sort_order']
readonly_fields = ['created_at', 'updated_at']
fieldsets = (
('Payment Method', {
'fields': ('country_code', 'payment_method', 'display_name', 'is_enabled', 'sort_order')
}),
('Instructions', {
'fields': ('instructions',),
'description': 'Instructions shown to users for this payment method'
}),
('Bank Transfer Details', {
'fields': ('bank_name', 'account_title', 'account_number', 'routing_number', 'swift_code', 'iban'),
'classes': ('collapse',),
'description': 'Only for bank_transfer payment method'
}),
('Local Wallet Details', {
'fields': ('wallet_type', 'wallet_id'),
'classes': ('collapse',),
'description': 'Only for local_wallet payment method (JazzCash, EasyPaisa, etc.)'
}),
('Timestamps', {
'fields': ('created_at', 'updated_at'),
'classes': ('collapse',)
}),
)
@admin.register(AccountPaymentMethod)
@@ -576,18 +552,19 @@ class AccountPaymentMethodAdmin(AccountAdminMixin, Igny8ModelAdmin):
@admin.register(CreditCostConfig)
class CreditCostConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
"""
Admin for Credit Cost Configuration.
Per final-model-schemas.md - Fixed credits per operation type.
"""
list_display = [
'operation_type',
'display_name',
'base_credits_display',
'is_active_icon',
'tokens_per_credit_display',
'price_per_credit_usd',
'min_credits',
'is_active',
'cost_change_indicator',
'updated_at',
'updated_by'
]
list_filter = ['is_active']
list_filter = ['is_active', 'updated_at']
search_fields = ['operation_type', 'display_name', 'description']
actions = ['bulk_activate', 'bulk_deactivate']
@@ -595,30 +572,60 @@ class CreditCostConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
('Operation', {
'fields': ('operation_type', 'display_name', 'description')
}),
('Credits', {
'fields': ('base_credits', 'is_active'),
'description': 'Fixed credits charged per operation'
('Token-to-Credit Configuration', {
'fields': ('tokens_per_credit', 'min_credits', 'price_per_credit_usd', 'is_active'),
'description': 'Configure how tokens are converted to credits for this operation'
}),
('Audit Trail', {
'fields': ('previous_tokens_per_credit', 'updated_by', 'created_at', 'updated_at'),
'classes': ('collapse',)
}),
)
def base_credits_display(self, obj):
"""Show base credits with formatting"""
return format_html(
'<span style="font-weight: bold;">{} credits</span>',
obj.base_credits
)
base_credits_display.short_description = 'Credits'
readonly_fields = ['created_at', 'updated_at', 'previous_tokens_per_credit']
def is_active_icon(self, obj):
"""Active status icon"""
if obj.is_active:
return format_html(
'<span style="color: green; font-size: 18px;" title="Active">●</span>'
)
return format_html(
'<span style="color: red; font-size: 18px;" title="Inactive">●</span>'
)
is_active_icon.short_description = 'Active'
def tokens_per_credit_display(self, obj):
"""Show token ratio with color coding"""
if obj.tokens_per_credit <= 50:
color = 'red' # Expensive (low tokens per credit)
elif obj.tokens_per_credit <= 100:
color = 'orange'
else:
color = 'green' # Cheap (high tokens per credit)
return format_html(
'<span style="color: {}; font-weight: bold;">{} tokens/credit</span>',
color,
obj.tokens_per_credit
)
tokens_per_credit_display.short_description = 'Token Ratio'
def cost_change_indicator(self, obj):
"""Show if token ratio changed recently"""
if obj.previous_tokens_per_credit is not None:
if obj.tokens_per_credit < obj.previous_tokens_per_credit:
icon = '📈' # More expensive (fewer tokens per credit)
color = 'red'
elif obj.tokens_per_credit > obj.previous_tokens_per_credit:
icon = '📉' # Cheaper (more tokens per credit)
color = 'green'
else:
icon = '➡️' # Same
color = 'gray'
return format_html(
'{} <span style="color: {};">({} → {})</span>',
icon,
color,
obj.previous_tokens_per_credit,
obj.tokens_per_credit
)
return ''
cost_change_indicator.short_description = 'Recent Change'
def save_model(self, request, obj, form, change):
"""Track who made the change"""
obj.updated_by = request.user
super().save_model(request, obj, form, change)
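The thresholds in `tokens_per_credit_display` are boundary-sensitive (50 and 100 are inclusive on the cheaper side of red/orange). Extracted as a pure function for clarity — illustrative only, not part of the admin:

```python
def ratio_color(tokens_per_credit):
    """Pure version of the color thresholds used by tokens_per_credit_display."""
    if tokens_per_credit <= 50:
        return 'red'      # expensive: few tokens per credit
    if tokens_per_credit <= 100:
        return 'orange'
    return 'green'        # cheap: many tokens per credit

print(ratio_color(50), ratio_color(100), ratio_color(101))  # red orange green
```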
@admin.action(description='Activate selected configurations')
def bulk_activate(self, request, queryset):
@@ -756,60 +763,67 @@ class BillingConfigurationAdmin(Igny8ModelAdmin):
@admin.register(AIModelConfig)
class AIModelConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
"""
Admin for AI Model Configuration - Single Source of Truth for Models.
Per final-model-schemas.md
Admin for AI Model Configuration - Database-driven model pricing
Replaces hardcoded MODEL_RATES and IMAGE_MODEL_RATES
"""
list_display = [
'model_name',
'display_name_short',
'model_type_badge',
'provider_badge',
'credit_display',
'quality_tier',
'pricing_display',
'is_active_icon',
'is_default_icon',
'sort_order',
'updated_at',
]
list_filter = [
'model_type',
'provider',
'quality_tier',
'is_active',
'is_default',
'supports_json_mode',
'supports_vision',
'supports_function_calling',
]
search_fields = ['model_name', 'display_name']
search_fields = ['model_name', 'display_name', 'description']
ordering = ['model_type', 'model_name']
ordering = ['model_type', 'sort_order', 'model_name']
readonly_fields = ['created_at', 'updated_at']
readonly_fields = ['created_at', 'updated_at', 'updated_by']
fieldsets = (
('Basic Information', {
'fields': ('model_name', 'model_type', 'provider', 'display_name'),
'description': 'Core model identification'
'fields': ('model_name', 'display_name', 'model_type', 'provider', 'description'),
'description': 'Core model identification and classification'
}),
('Text Model Pricing', {
'fields': ('cost_per_1k_input', 'cost_per_1k_output', 'tokens_per_credit', 'max_tokens', 'context_window'),
'description': 'For TEXT models only',
'fields': ('input_cost_per_1m', 'output_cost_per_1m', 'context_window', 'max_output_tokens'),
'description': 'Pricing and limits for TEXT models only (leave blank for image models)',
'classes': ('collapse',)
}),
('Image Model Pricing', {
'fields': ('credits_per_image', 'quality_tier'),
'description': 'For IMAGE models only',
'fields': ('cost_per_image', 'valid_sizes'),
'description': 'Pricing and configuration for IMAGE models only (leave blank for text models)',
'classes': ('collapse',)
}),
('Capabilities', {
'fields': ('capabilities',),
'description': 'JSON: vision, function_calling, json_mode, etc.',
'fields': ('supports_json_mode', 'supports_vision', 'supports_function_calling'),
'description': 'Model features and capabilities'
}),
('Status & Display', {
'fields': ('is_active', 'is_default', 'sort_order'),
'description': 'Control model availability and ordering in dropdowns'
}),
('Lifecycle', {
'fields': ('release_date', 'deprecation_date'),
'description': 'Model release and deprecation dates',
'classes': ('collapse',)
}),
('Status', {
'fields': ('is_active', 'is_default'),
}),
('Timestamps', {
'fields': ('created_at', 'updated_at'),
('Audit Trail', {
'fields': ('created_at', 'updated_at', 'updated_by'),
'classes': ('collapse',)
}),
)
@@ -817,8 +831,8 @@ class AIModelConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
# Custom display methods
def display_name_short(self, obj):
"""Truncated display name for list view"""
if len(obj.display_name) > 40:
return obj.display_name[:37] + '...'
if len(obj.display_name) > 50:
return obj.display_name[:47] + '...'
return obj.display_name
display_name_short.short_description = 'Display Name'
@@ -827,6 +841,7 @@ class AIModelConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
colors = {
'text': '#3498db', # Blue
'image': '#e74c3c', # Red
'embedding': '#2ecc71', # Green
}
color = colors.get(obj.model_type, '#95a5a6')
return format_html(
@@ -840,10 +855,10 @@ class AIModelConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
def provider_badge(self, obj):
"""Colored badge for provider"""
colors = {
'openai': '#10a37f',
'anthropic': '#d97757',
'runware': '#6366f1',
'google': '#4285f4',
'openai': '#10a37f', # OpenAI green
'anthropic': '#d97757', # Anthropic orange
'runware': '#6366f1', # Purple
'google': '#4285f4', # Google blue
}
color = colors.get(obj.provider, '#95a5a6')
return format_html(
@@ -854,20 +869,23 @@ class AIModelConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
)
provider_badge.short_description = 'Provider'
def credit_display(self, obj):
"""Format credit info based on model type"""
if obj.model_type == 'text' and obj.tokens_per_credit:
def pricing_display(self, obj):
"""Format pricing based on model type"""
if obj.model_type == 'text':
return format_html(
'<span style="font-family: monospace;">{} tokens/credit</span>',
obj.tokens_per_credit
'<span style="color: #2c3e50; font-family: monospace;">'
'${} / ${} per 1M</span>',
obj.input_cost_per_1m,
obj.output_cost_per_1m
)
elif obj.model_type == 'image' and obj.credits_per_image:
elif obj.model_type == 'image':
return format_html(
'<span style="font-family: monospace;">{} credits/image</span>',
obj.credits_per_image
'<span style="color: #2c3e50; font-family: monospace;">'
'${} per image</span>',
obj.cost_per_image
)
return '-'
credit_display.short_description = 'Credits'
pricing_display.short_description = 'Pricing'
def is_active_icon(self, obj):
"""Active status icon"""
@@ -897,27 +915,41 @@ class AIModelConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
def bulk_activate(self, request, queryset):
"""Enable selected models"""
count = queryset.update(is_active=True)
self.message_user(request, f'{count} model(s) activated.', messages.SUCCESS)
self.message_user(
request,
f'{count} model(s) activated successfully.',
messages.SUCCESS
)
bulk_activate.short_description = 'Activate selected models'
def bulk_deactivate(self, request, queryset):
"""Disable selected models"""
count = queryset.update(is_active=False)
self.message_user(request, f'{count} model(s) deactivated.', messages.WARNING)
self.message_user(
request,
f'{count} model(s) deactivated successfully.',
messages.WARNING
)
bulk_deactivate.short_description = 'Deactivate selected models'
def set_as_default(self, request, queryset):
"""Set one model as default for its type"""
if queryset.count() != 1:
self.message_user(request, 'Select exactly one model.', messages.ERROR)
self.message_user(
request,
'Please select exactly one model to set as default.',
messages.ERROR
)
return
model = queryset.first()
# Unset other defaults for same type
AIModelConfig.objects.filter(
model_type=model.model_type,
is_default=True
).exclude(pk=model.pk).update(is_default=False)
# Set this as default
model.is_default = True
model.save()
@@ -926,4 +958,9 @@ class AIModelConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
f'{model.model_name} is now the default {model.get_model_type_display()} model.',
messages.SUCCESS
)
set_as_default.short_description = 'Set as default model'
set_as_default.short_description = 'Set as default model (for its type)'
def save_model(self, request, obj, form, change):
"""Track who made the change"""
obj.updated_by = request.user
super().save_model(request, obj, form, change)
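The credit fields managed in this admin (`tokens_per_credit` for text models, `credits_per_image` for image models) imply a simple conversion at billing time. A minimal sketch of that calculation, assuming a round-up policy for partial credits (the function names and rounding rule are illustrative, not taken from the codebase):

```python
from math import ceil

def credits_for_text_usage(total_tokens: int, tokens_per_credit: int) -> int:
    """Convert raw token usage into credits, rounding partial credits up."""
    if tokens_per_credit <= 0:
        raise ValueError("tokens_per_credit must be positive")
    return ceil(total_tokens / tokens_per_credit)

def credits_for_images(image_count: int, credits_per_image: int) -> int:
    """Image models charge a fixed number of credits per generated image."""
    return image_count * credits_per_image

# e.g. a model configured at 10,000 tokens/credit, a 25,000-token request:
print(credits_for_text_usage(25_000, 10_000))  # → 3
# e.g. 2 premium images at 15 credits each:
print(credits_for_images(2, 15))  # → 30
```

Under this scheme the color coding above makes sense: a lower `tokens_per_credit` means each credit buys fewer tokens, so the model is effectively more expensive.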

View File

@@ -29,10 +29,23 @@ class Command(BaseCommand):
],
'Planner': [
('max_keywords', 'Max Keywords'),
('max_ahrefs_queries', 'Max Ahrefs Queries'),
('max_clusters', 'Max Clusters'),
('max_content_ideas', 'Max Content Ideas'),
('daily_cluster_limit', 'Daily Cluster Limit'),
],
'Credits': [
('included_credits', 'Included Credits'),
'Writer': [
('monthly_word_count_limit', 'Monthly Word Count Limit'),
('daily_content_tasks', 'Daily Content Tasks'),
],
'Images': [
('monthly_image_count', 'Monthly Image Count'),
('daily_image_generation_limit', 'Daily Image Generation Limit'),
],
'AI Credits': [
('monthly_ai_credit_limit', 'Monthly AI Credit Limit'),
('monthly_cluster_ai_credits', 'Monthly Cluster AI Credits'),
('monthly_content_ai_credits', 'Monthly Content AI Credits'),
('monthly_image_ai_credits', 'Monthly Image AI Credits'),
],
}

View File

@@ -1,87 +0,0 @@
"""
Migration: Update Runware model configurations in AIModelConfig
This migration:
1. Updates runware:97@1 to have display_name "Hi Dream Full - Standard"
2. Adds Bria 3.2 model as civitai:618692@691639
"""
from decimal import Decimal
from django.db import migrations
def update_runware_models(apps, schema_editor):
"""Update Runware models in AIModelConfig"""
AIModelConfig = apps.get_model('billing', 'AIModelConfig')
# Update existing runware:97@1 model
AIModelConfig.objects.update_or_create(
model_name='runware:97@1',
defaults={
'display_name': 'Hi Dream Full - Standard',
'model_type': 'image',
'provider': 'runware',
'cost_per_image': Decimal('0.008'),
'valid_sizes': ['512x512', '768x768', '1024x1024', '1024x1792', '1792x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': True, # Make this the default Runware model
'sort_order': 10,
'description': 'Hi Dream Full - Standard quality image generation via Runware',
}
)
# Add Bria 3.2 Premium model
AIModelConfig.objects.update_or_create(
model_name='civitai:618692@691639',
defaults={
'display_name': 'Bria 3.2 - Premium',
'model_type': 'image',
'provider': 'runware',
'cost_per_image': Decimal('0.012'),
'valid_sizes': ['512x512', '768x768', '1024x1024', '1024x1792', '1792x1024'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 11,
'description': 'Bria 3.2 - Premium quality image generation via Runware/Civitai',
}
)
# Optionally remove the old runware:100@1 and runware:101@1 models if they exist
AIModelConfig.objects.filter(
model_name__in=['runware:100@1', 'runware:101@1']
).update(is_active=False)
def reverse_migration(apps, schema_editor):
"""Reverse the migration"""
AIModelConfig = apps.get_model('billing', 'AIModelConfig')
# Restore old display name
AIModelConfig.objects.filter(model_name='runware:97@1').update(
display_name='Runware Standard',
is_default=False,
)
# Remove Bria 3.2 model
AIModelConfig.objects.filter(model_name='civitai:618692@691639').delete()
# Re-activate old models
AIModelConfig.objects.filter(
model_name__in=['runware:100@1', 'runware:101@1']
).update(is_active=True)
class Migration(migrations.Migration):
dependencies = [
('billing', '0022_fix_historical_calculation_mode_null'),
]
operations = [
migrations.RunPython(update_runware_models, reverse_migration),
]

View File

@@ -1,113 +0,0 @@
"""
Migration: Update Runware/Image model configurations for new model structure
This migration:
1. Updates runware:97@1 to "Hi Dream Full - Basic"
2. Adds Bria 3.2 model as bria:10@1 (correct AIR ID)
3. Adds Nano Banana (Google) as google:4@2 (Premium tier)
4. Removes old civitai model reference
5. Adds one_liner_description field values
"""
from decimal import Decimal
from django.db import migrations
def update_image_models(apps, schema_editor):
"""Update image models in AIModelConfig"""
AIModelConfig = apps.get_model('billing', 'AIModelConfig')
# Update existing runware:97@1 model
AIModelConfig.objects.update_or_create(
model_name='runware:97@1',
defaults={
'display_name': 'Hi Dream Full - Basic',
'model_type': 'image',
'provider': 'runware',
'cost_per_image': Decimal('0.006'), # Basic tier, cheaper
'valid_sizes': ['1024x1024', '1280x768', '768x1280'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': True,
'sort_order': 10,
'description': 'Fast & affordable image generation. Steps: 20, CFG: 7. Good for quick iterations.',
}
)
# Add Bria 3.2 model with correct AIR ID
AIModelConfig.objects.update_or_create(
model_name='bria:10@1',
defaults={
'display_name': 'Bria 3.2 - Quality',
'model_type': 'image',
'provider': 'runware', # Via Runware API
'cost_per_image': Decimal('0.010'), # Quality tier
'valid_sizes': ['1024x1024', '1344x768', '768x1344', '1216x832', '832x1216'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 11,
'description': 'Commercial-safe AI. Steps: 8, prompt enhancement enabled. Licensed training data.',
}
)
# Add Nano Banana (Google) Premium model
AIModelConfig.objects.update_or_create(
model_name='google:4@2',
defaults={
'display_name': 'Nano Banana - Premium',
'model_type': 'image',
'provider': 'runware', # Via Runware API
'cost_per_image': Decimal('0.015'), # Premium tier
'valid_sizes': ['1024x1024', '1376x768', '768x1376', '1264x848', '848x1264'],
'supports_json_mode': False,
'supports_vision': False,
'supports_function_calling': False,
'is_active': True,
'is_default': False,
'sort_order': 12,
'description': 'Google Gemini 3 Pro. Best quality, text rendering, advanced reasoning. Premium pricing.',
}
)
# Deactivate old civitai model (replaced by correct bria:10@1)
AIModelConfig.objects.filter(
model_name='civitai:618692@691639'
).update(is_active=False)
# Deactivate other old models
AIModelConfig.objects.filter(
model_name__in=['runware:100@1', 'runware:101@1']
).update(is_active=False)
def reverse_migration(apps, schema_editor):
"""Reverse the migration"""
AIModelConfig = apps.get_model('billing', 'AIModelConfig')
# Restore old display names
AIModelConfig.objects.filter(model_name='runware:97@1').update(
display_name='Hi Dream Full - Standard',
)
# Remove new models
AIModelConfig.objects.filter(model_name__in=['bria:10@1', 'google:4@2']).delete()
# Re-activate old models
AIModelConfig.objects.filter(
model_name__in=['runware:100@1', 'runware:101@1', 'civitai:618692@691639']
).update(is_active=True)
class Migration(migrations.Migration):
dependencies = [
('billing', '0023_update_runware_models'),
]
operations = [
migrations.RunPython(update_image_models, reverse_migration),
]

View File

@@ -1,43 +0,0 @@
# Generated by Django 5.2.9 on 2026-01-04 06:11
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('billing', '0024_update_image_models_v2'),
]
operations = [
migrations.AddField(
model_name='aimodelconfig',
name='credits_per_image',
field=models.IntegerField(blank=True, help_text='Fixed credits per image generated. For image models only. (e.g., 1, 5, 15)', null=True),
),
migrations.AddField(
model_name='aimodelconfig',
name='quality_tier',
field=models.CharField(blank=True, choices=[('basic', 'Basic'), ('quality', 'Quality'), ('premium', 'Premium')], help_text='Quality tier for frontend UI display (Basic/Quality/Premium). For image models.', max_length=20, null=True),
),
migrations.AddField(
model_name='aimodelconfig',
name='tokens_per_credit',
field=models.IntegerField(blank=True, help_text='Number of tokens that equal 1 credit. For text models only. (e.g., 1000, 10000)', null=True),
),
migrations.AddField(
model_name='historicalaimodelconfig',
name='credits_per_image',
field=models.IntegerField(blank=True, help_text='Fixed credits per image generated. For image models only. (e.g., 1, 5, 15)', null=True),
),
migrations.AddField(
model_name='historicalaimodelconfig',
name='quality_tier',
field=models.CharField(blank=True, choices=[('basic', 'Basic'), ('quality', 'Quality'), ('premium', 'Premium')], help_text='Quality tier for frontend UI display (Basic/Quality/Premium). For image models.', max_length=20, null=True),
),
migrations.AddField(
model_name='historicalaimodelconfig',
name='tokens_per_credit',
field=models.IntegerField(blank=True, help_text='Number of tokens that equal 1 credit. For text models only. (e.g., 1000, 10000)', null=True),
),
]

View File

@@ -1,63 +0,0 @@
# Generated manually for data migration
from django.db import migrations
def populate_aimodel_credit_fields(apps, schema_editor):
"""
Populate credit calculation fields in AIModelConfig.
- Text models: tokens_per_credit (how many tokens = 1 credit)
- Image models: credits_per_image (fixed credits per image) + quality_tier
"""
AIModelConfig = apps.get_model('billing', 'AIModelConfig')
# Text models: tokens_per_credit
text_model_credits = {
'gpt-4o-mini': 10000, # Cheap model: 10k tokens = 1 credit
'gpt-4o': 1000, # Premium model: 1k tokens = 1 credit
'gpt-5.1': 1000, # Default model: 1k tokens = 1 credit
'gpt-5.2': 1000, # Future model
'gpt-4.1': 1000, # Legacy
'gpt-4-turbo-preview': 500, # Expensive
}
for model_name, tokens_per_credit in text_model_credits.items():
AIModelConfig.objects.filter(
model_name=model_name,
model_type='text'
).update(tokens_per_credit=tokens_per_credit)
# Image models: credits_per_image + quality_tier
image_model_credits = {
'runware:97@1': {'credits_per_image': 1, 'quality_tier': 'basic'}, # Basic - cheap
'dall-e-3': {'credits_per_image': 5, 'quality_tier': 'quality'}, # Quality - mid
'google:4@2': {'credits_per_image': 15, 'quality_tier': 'premium'}, # Premium - expensive
'dall-e-2': {'credits_per_image': 2, 'quality_tier': 'basic'}, # Legacy
}
for model_name, credits_data in image_model_credits.items():
AIModelConfig.objects.filter(
model_name=model_name,
model_type='image'
).update(**credits_data)
def reverse_migration(apps, schema_editor):
"""Clear credit fields"""
AIModelConfig = apps.get_model('billing', 'AIModelConfig')
AIModelConfig.objects.all().update(
tokens_per_credit=None,
credits_per_image=None,
quality_tier=None
)
class Migration(migrations.Migration):
dependencies = [
('billing', '0025_add_aimodel_credit_fields'),
]
operations = [
migrations.RunPython(populate_aimodel_credit_fields, reverse_migration),
]

View File

@@ -1,356 +0,0 @@
# Generated by Django 5.2.9 on 2026-01-04 10:40
import django.core.validators
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('billing', '0026_populate_aimodel_credits'),
]
operations = [
migrations.AlterModelOptions(
name='aimodelconfig',
options={'ordering': ['model_type', 'model_name'], 'verbose_name': 'AI Model Configuration', 'verbose_name_plural': 'AI Model Configurations'},
),
migrations.RemoveField(
model_name='aimodelconfig',
name='cost_per_image',
),
migrations.RemoveField(
model_name='aimodelconfig',
name='deprecation_date',
),
migrations.RemoveField(
model_name='aimodelconfig',
name='description',
),
migrations.RemoveField(
model_name='aimodelconfig',
name='input_cost_per_1m',
),
migrations.RemoveField(
model_name='aimodelconfig',
name='max_output_tokens',
),
migrations.RemoveField(
model_name='aimodelconfig',
name='output_cost_per_1m',
),
migrations.RemoveField(
model_name='aimodelconfig',
name='release_date',
),
migrations.RemoveField(
model_name='aimodelconfig',
name='sort_order',
),
migrations.RemoveField(
model_name='aimodelconfig',
name='supports_function_calling',
),
migrations.RemoveField(
model_name='aimodelconfig',
name='supports_json_mode',
),
migrations.RemoveField(
model_name='aimodelconfig',
name='supports_vision',
),
migrations.RemoveField(
model_name='aimodelconfig',
name='updated_by',
),
migrations.RemoveField(
model_name='aimodelconfig',
name='valid_sizes',
),
migrations.RemoveField(
model_name='creditcostconfig',
name='created_at',
),
migrations.RemoveField(
model_name='creditcostconfig',
name='id',
),
migrations.RemoveField(
model_name='creditcostconfig',
name='min_credits',
),
migrations.RemoveField(
model_name='creditcostconfig',
name='previous_tokens_per_credit',
),
migrations.RemoveField(
model_name='creditcostconfig',
name='price_per_credit_usd',
),
migrations.RemoveField(
model_name='creditcostconfig',
name='tokens_per_credit',
),
migrations.RemoveField(
model_name='creditcostconfig',
name='updated_at',
),
migrations.RemoveField(
model_name='creditcostconfig',
name='updated_by',
),
migrations.RemoveField(
model_name='historicalaimodelconfig',
name='cost_per_image',
),
migrations.RemoveField(
model_name='historicalaimodelconfig',
name='deprecation_date',
),
migrations.RemoveField(
model_name='historicalaimodelconfig',
name='description',
),
migrations.RemoveField(
model_name='historicalaimodelconfig',
name='input_cost_per_1m',
),
migrations.RemoveField(
model_name='historicalaimodelconfig',
name='max_output_tokens',
),
migrations.RemoveField(
model_name='historicalaimodelconfig',
name='output_cost_per_1m',
),
migrations.RemoveField(
model_name='historicalaimodelconfig',
name='release_date',
),
migrations.RemoveField(
model_name='historicalaimodelconfig',
name='sort_order',
),
migrations.RemoveField(
model_name='historicalaimodelconfig',
name='supports_function_calling',
),
migrations.RemoveField(
model_name='historicalaimodelconfig',
name='supports_json_mode',
),
migrations.RemoveField(
model_name='historicalaimodelconfig',
name='supports_vision',
),
migrations.RemoveField(
model_name='historicalaimodelconfig',
name='updated_by',
),
migrations.RemoveField(
model_name='historicalaimodelconfig',
name='valid_sizes',
),
migrations.RemoveField(
model_name='historicalcreditcostconfig',
name='created_at',
),
migrations.RemoveField(
model_name='historicalcreditcostconfig',
name='id',
),
migrations.RemoveField(
model_name='historicalcreditcostconfig',
name='min_credits',
),
migrations.RemoveField(
model_name='historicalcreditcostconfig',
name='previous_tokens_per_credit',
),
migrations.RemoveField(
model_name='historicalcreditcostconfig',
name='price_per_credit_usd',
),
migrations.RemoveField(
model_name='historicalcreditcostconfig',
name='tokens_per_credit',
),
migrations.RemoveField(
model_name='historicalcreditcostconfig',
name='updated_at',
),
migrations.RemoveField(
model_name='historicalcreditcostconfig',
name='updated_by',
),
migrations.AddField(
model_name='aimodelconfig',
name='capabilities',
field=models.JSONField(blank=True, default=dict, help_text='Capabilities: vision, function_calling, json_mode, etc.'),
),
migrations.AddField(
model_name='aimodelconfig',
name='cost_per_1k_input',
field=models.DecimalField(blank=True, decimal_places=6, help_text='Provider cost per 1K input tokens (USD) - text models', max_digits=10, null=True),
),
migrations.AddField(
model_name='aimodelconfig',
name='cost_per_1k_output',
field=models.DecimalField(blank=True, decimal_places=6, help_text='Provider cost per 1K output tokens (USD) - text models', max_digits=10, null=True),
),
migrations.AddField(
model_name='aimodelconfig',
name='max_tokens',
field=models.IntegerField(blank=True, help_text='Model token limit', null=True),
),
migrations.AddField(
model_name='creditcostconfig',
name='base_credits',
field=models.IntegerField(default=1, help_text='Fixed credits per operation', validators=[django.core.validators.MinValueValidator(0)]),
),
migrations.AddField(
model_name='historicalaimodelconfig',
name='capabilities',
field=models.JSONField(blank=True, default=dict, help_text='Capabilities: vision, function_calling, json_mode, etc.'),
),
migrations.AddField(
model_name='historicalaimodelconfig',
name='cost_per_1k_input',
field=models.DecimalField(blank=True, decimal_places=6, help_text='Provider cost per 1K input tokens (USD) - text models', max_digits=10, null=True),
),
migrations.AddField(
model_name='historicalaimodelconfig',
name='cost_per_1k_output',
field=models.DecimalField(blank=True, decimal_places=6, help_text='Provider cost per 1K output tokens (USD) - text models', max_digits=10, null=True),
),
migrations.AddField(
model_name='historicalaimodelconfig',
name='max_tokens',
field=models.IntegerField(blank=True, help_text='Model token limit', null=True),
),
migrations.AddField(
model_name='historicalcreditcostconfig',
name='base_credits',
field=models.IntegerField(default=1, help_text='Fixed credits per operation', validators=[django.core.validators.MinValueValidator(0)]),
),
migrations.AlterField(
model_name='aimodelconfig',
name='context_window',
field=models.IntegerField(blank=True, help_text='Model context size', null=True),
),
migrations.AlterField(
model_name='aimodelconfig',
name='credits_per_image',
field=models.IntegerField(blank=True, help_text='Image: credits per image (e.g., 1, 5, 15)', null=True),
),
migrations.AlterField(
model_name='aimodelconfig',
name='display_name',
field=models.CharField(help_text='Human-readable name', max_length=200),
),
migrations.AlterField(
model_name='aimodelconfig',
name='is_active',
field=models.BooleanField(db_index=True, default=True, help_text='Enable/disable'),
),
migrations.AlterField(
model_name='aimodelconfig',
name='is_default',
field=models.BooleanField(db_index=True, default=False, help_text='One default per type'),
),
migrations.AlterField(
model_name='aimodelconfig',
name='model_name',
field=models.CharField(db_index=True, help_text="Model identifier (e.g., 'gpt-5.1', 'dall-e-3', 'runware:97@1')", max_length=100, unique=True),
),
migrations.AlterField(
model_name='aimodelconfig',
name='model_type',
field=models.CharField(choices=[('text', 'Text Generation'), ('image', 'Image Generation')], db_index=True, help_text='text / image', max_length=20),
),
migrations.AlterField(
model_name='aimodelconfig',
name='provider',
field=models.CharField(choices=[('openai', 'OpenAI'), ('anthropic', 'Anthropic'), ('runware', 'Runware'), ('google', 'Google')], db_index=True, help_text='Links to IntegrationProvider', max_length=50),
),
migrations.AlterField(
model_name='aimodelconfig',
name='quality_tier',
field=models.CharField(blank=True, choices=[('basic', 'Basic'), ('quality', 'Quality'), ('premium', 'Premium')], help_text='basic / quality / premium - for image models', max_length=20, null=True),
),
migrations.AlterField(
model_name='aimodelconfig',
name='tokens_per_credit',
field=models.IntegerField(blank=True, help_text='Text: tokens per 1 credit (e.g., 1000, 10000)', null=True),
),
migrations.AlterField(
model_name='creditcostconfig',
name='description',
field=models.TextField(blank=True, help_text='Admin notes about this operation'),
),
migrations.AlterField(
model_name='creditcostconfig',
name='operation_type',
field=models.CharField(help_text="Unique operation ID (e.g., 'article_generation', 'image_generation')", max_length=50, primary_key=True, serialize=False, unique=True),
),
migrations.AlterField(
model_name='historicalaimodelconfig',
name='context_window',
field=models.IntegerField(blank=True, help_text='Model context size', null=True),
),
migrations.AlterField(
model_name='historicalaimodelconfig',
name='credits_per_image',
field=models.IntegerField(blank=True, help_text='Image: credits per image (e.g., 1, 5, 15)', null=True),
),
migrations.AlterField(
model_name='historicalaimodelconfig',
name='display_name',
field=models.CharField(help_text='Human-readable name', max_length=200),
),
migrations.AlterField(
model_name='historicalaimodelconfig',
name='is_active',
field=models.BooleanField(db_index=True, default=True, help_text='Enable/disable'),
),
migrations.AlterField(
model_name='historicalaimodelconfig',
name='is_default',
field=models.BooleanField(db_index=True, default=False, help_text='One default per type'),
),
migrations.AlterField(
model_name='historicalaimodelconfig',
name='model_name',
field=models.CharField(db_index=True, help_text="Model identifier (e.g., 'gpt-5.1', 'dall-e-3', 'runware:97@1')", max_length=100),
),
migrations.AlterField(
model_name='historicalaimodelconfig',
name='model_type',
field=models.CharField(choices=[('text', 'Text Generation'), ('image', 'Image Generation')], db_index=True, help_text='text / image', max_length=20),
),
migrations.AlterField(
model_name='historicalaimodelconfig',
name='provider',
field=models.CharField(choices=[('openai', 'OpenAI'), ('anthropic', 'Anthropic'), ('runware', 'Runware'), ('google', 'Google')], db_index=True, help_text='Links to IntegrationProvider', max_length=50),
),
migrations.AlterField(
model_name='historicalaimodelconfig',
name='quality_tier',
field=models.CharField(blank=True, choices=[('basic', 'Basic'), ('quality', 'Quality'), ('premium', 'Premium')], help_text='basic / quality / premium - for image models', max_length=20, null=True),
),
migrations.AlterField(
model_name='historicalaimodelconfig',
name='tokens_per_credit',
field=models.IntegerField(blank=True, help_text='Text: tokens per 1 credit (e.g., 1000, 10000)', null=True),
),
migrations.AlterField(
model_name='historicalcreditcostconfig',
name='description',
field=models.TextField(blank=True, help_text='Admin notes about this operation'),
),
migrations.AlterField(
model_name='historicalcreditcostconfig',
name='operation_type',
field=models.CharField(db_index=True, help_text="Unique operation ID (e.g., 'article_generation', 'image_generation')", max_length=50),
),
]

View File

@@ -1,64 +0,0 @@
# Generated by Django 5.2.9 on 2026-01-07 03:19
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('billing', '0027_model_schema_update'),
]
operations = [
migrations.RemoveField(
model_name='paymentmethodconfig',
name='api_key',
),
migrations.RemoveField(
model_name='paymentmethodconfig',
name='api_secret',
),
migrations.RemoveField(
model_name='paymentmethodconfig',
name='webhook_secret',
),
migrations.RemoveField(
model_name='paymentmethodconfig',
name='webhook_url',
),
migrations.AddField(
model_name='paymentmethodconfig',
name='account_title',
field=models.CharField(blank=True, help_text='Account holder name', max_length=255),
),
migrations.AddField(
model_name='paymentmethodconfig',
name='iban',
field=models.CharField(blank=True, help_text='IBAN for international transfers', max_length=255),
),
migrations.AlterField(
model_name='paymentmethodconfig',
name='country_code',
field=models.CharField(db_index=True, help_text="ISO 2-letter country code (e.g., US, GB, PK) or '*' for global", max_length=2),
),
migrations.AlterField(
model_name='paymentmethodconfig',
name='routing_number',
field=models.CharField(blank=True, help_text='Routing/Sort code', max_length=255),
),
migrations.AlterField(
model_name='paymentmethodconfig',
name='swift_code',
field=models.CharField(blank=True, help_text='SWIFT/BIC code for international', max_length=255),
),
migrations.AlterField(
model_name='paymentmethodconfig',
name='wallet_id',
field=models.CharField(blank=True, help_text='Mobile number or wallet ID', max_length=255),
),
migrations.AlterField(
model_name='paymentmethodconfig',
name='wallet_type',
field=models.CharField(blank=True, help_text='E.g., JazzCash, EasyPaisa, etc.', max_length=100),
),
]

View File

@@ -1,63 +0,0 @@
# Generated by Django 5.2.9 on 2026-01-07 12:26
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('billing', '0028_cleanup_payment_method_config'),
('igny8_core_auth', '0020_fix_historical_account'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='WebhookEvent',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('event_id', models.CharField(db_index=True, help_text='Unique event ID from the payment provider', max_length=255, unique=True)),
('provider', models.CharField(choices=[('stripe', 'Stripe'), ('paypal', 'PayPal')], db_index=True, help_text='Payment provider (stripe or paypal)', max_length=20)),
('event_type', models.CharField(db_index=True, help_text='Event type from the provider', max_length=100)),
('payload', models.JSONField(help_text='Full webhook payload')),
('processed', models.BooleanField(db_index=True, default=False, help_text='Whether this event has been successfully processed')),
('processed_at', models.DateTimeField(blank=True, help_text='When the event was processed', null=True)),
('error_message', models.TextField(blank=True, help_text='Error message if processing failed')),
('retry_count', models.IntegerField(default=0, help_text='Number of processing attempts')),
('created_at', models.DateTimeField(auto_now_add=True)),
],
options={
'verbose_name': 'Webhook Event',
'verbose_name_plural': 'Webhook Events',
'db_table': 'igny8_webhook_events',
'ordering': ['-created_at'],
},
),
migrations.AlterField(
model_name='historicalpayment',
name='manual_reference',
field=models.CharField(blank=True, help_text='Bank transfer reference, wallet transaction ID, etc.', max_length=255, null=True),
),
migrations.AlterField(
model_name='payment',
name='manual_reference',
field=models.CharField(blank=True, help_text='Bank transfer reference, wallet transaction ID, etc.', max_length=255, null=True),
),
migrations.AddConstraint(
model_name='payment',
constraint=models.UniqueConstraint(condition=models.Q(('manual_reference__isnull', False), models.Q(('manual_reference', ''), _negated=True)), fields=('manual_reference',), name='unique_manual_reference_when_not_null'),
),
migrations.AddIndex(
model_name='webhookevent',
index=models.Index(fields=['provider', 'event_type'], name='igny8_webho_provide_ee8a78_idx'),
),
migrations.AddIndex(
model_name='webhookevent',
index=models.Index(fields=['processed', 'created_at'], name='igny8_webho_process_88c670_idx'),
),
migrations.AddIndex(
model_name='webhookevent',
index=models.Index(fields=['provider', 'processed'], name='igny8_webho_provide_df293b_idx'),
),
]


@@ -143,83 +143,6 @@ class UsageLimitsSerializer(serializers.Serializer):
limits: LimitCardSerializer = LimitCardSerializer(many=True)
class AccountPaymentMethodSerializer(serializers.Serializer):
"""
Serializer for Account Payment Methods
Handles CRUD operations for account-specific payment methods
"""
id = serializers.IntegerField(read_only=True)
type = serializers.ChoiceField(
choices=[
('stripe', 'Stripe (Credit/Debit Card)'),
('paypal', 'PayPal'),
('bank_transfer', 'Bank Transfer (Manual)'),
('local_wallet', 'Local Wallet (Manual)'),
('manual', 'Manual Payment'),
]
)
display_name = serializers.CharField(max_length=100)
is_default = serializers.BooleanField(default=False)
is_enabled = serializers.BooleanField(default=True)
is_verified = serializers.BooleanField(read_only=True, default=False)
instructions = serializers.CharField(required=False, allow_blank=True, default='')
metadata = serializers.JSONField(required=False, default=dict)
created_at = serializers.DateTimeField(read_only=True)
updated_at = serializers.DateTimeField(read_only=True)
def validate_display_name(self, value):
"""Validate display_name uniqueness per account"""
account = self.context.get('account')
instance = getattr(self, 'instance', None)
if account:
from igny8_core.business.billing.models import AccountPaymentMethod
existing = AccountPaymentMethod.objects.filter(
account=account,
display_name=value
)
if instance:
existing = existing.exclude(pk=instance.pk)
if existing.exists():
raise serializers.ValidationError(
f"A payment method with name '{value}' already exists for this account."
)
return value
def create(self, validated_data):
from igny8_core.business.billing.models import AccountPaymentMethod
account = self.context.get('account')
if not account:
raise serializers.ValidationError("Account context is required")
# If this is marked as default, unset other defaults
if validated_data.get('is_default', False):
AccountPaymentMethod.objects.filter(
account=account,
is_default=True
).update(is_default=False)
return AccountPaymentMethod.objects.create(
account=account,
**validated_data
)
def update(self, instance, validated_data):
from igny8_core.business.billing.models import AccountPaymentMethod
# If this is marked as default, unset other defaults
if validated_data.get('is_default', False) and not instance.is_default:
AccountPaymentMethod.objects.filter(
account=instance.account,
is_default=True
).exclude(pk=instance.pk).update(is_default=False)
for attr, value in validated_data.items():
setattr(instance, attr, value)
instance.save()
return instance
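The `create()`/`update()` hooks above maintain a single-default invariant per account: promoting one payment method clears `is_default` on its siblings in the same account. The rule in isolation, with plain dicts standing in for `AccountPaymentMethod` rows (illustrative only):

```python
# Single-default invariant from create()/update() above, with plain
# dicts standing in for AccountPaymentMethod rows (illustrative only).
def set_default(methods, method_id):
    """Mark one method as default; clear the flag on same-account siblings."""
    target = next(m for m in methods if m["id"] == method_id)
    for m in methods:
        if m["account"] == target["account"]:
            m["is_default"] = (m["id"] == method_id)
    return methods
```

Methods belonging to other accounts are untouched, matching the `filter(account=...)` scoping in the serializer.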
class AIModelConfigSerializer(serializers.Serializer):
"""
Serializer for AI Model Configuration (Read-Only API)
@@ -255,23 +178,6 @@ class AIModelConfigSerializer(serializers.Serializer):
)
valid_sizes = serializers.ListField(read_only=True, allow_null=True)
# Credit calculation fields (NEW)
credits_per_image = serializers.IntegerField(
read_only=True,
allow_null=True,
help_text="Credits charged per image generation"
)
tokens_per_credit = serializers.IntegerField(
read_only=True,
allow_null=True,
help_text="Tokens per credit for text models"
)
quality_tier = serializers.CharField(
read_only=True,
allow_null=True,
help_text="Quality tier: basic, quality, or premium"
)
# Capabilities
supports_json_mode = serializers.BooleanField(read_only=True)
supports_vision = serializers.BooleanField(read_only=True)
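The credit fields above (`credits_per_image`, `tokens_per_credit`, `quality_tier`) suggest a simple cost rule: flat credits per generated image, and tokens bucketed into whole credits for text models. A hedged sketch of how a consumer of this API might combine them (assumed semantics, including the round-up, not the project's actual billing code):

```python
import math

def credit_cost(model_cfg, *, images=0, tokens=0):
    """Estimate credits for a request against a model-config dict
    exposing the serializer fields above (illustrative assumption)."""
    # Flat per-image charge; None/missing treated as zero.
    cost = images * (model_cfg.get("credits_per_image") or 0)
    tokens_per_credit = model_cfg.get("tokens_per_credit")
    if tokens and tokens_per_credit:
        # Assumption: partial buckets round up to a whole credit.
        cost += math.ceil(tokens / tokens_per_credit)
    return cost
```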


@@ -789,7 +789,7 @@ class AIModelConfigViewSet(viewsets.ReadOnlyModelViewSet):
is_default_bool = is_default.lower() in ['true', '1', 'yes']
queryset = queryset.filter(is_default=is_default_bool)
return queryset.order_by('model_type', 'model_name')
return queryset.order_by('model_type', 'sort_order', 'model_name')
def get_serializer_class(self):
"""Return serializer class"""


@@ -5,7 +5,7 @@ Phase 6: Site Integration & Multi-Destination Publishing
from django.urls import path, include
from rest_framework.routers import DefaultRouter
from igny8_core.modules.integration.views import IntegrationViewSet, PublishingSettingsViewSet
from igny8_core.modules.integration.views import IntegrationViewSet
from igny8_core.modules.integration.webhooks import (
wordpress_status_webhook,
wordpress_metadata_webhook,
@@ -14,19 +14,9 @@ from igny8_core.modules.integration.webhooks import (
router = DefaultRouter()
router.register(r'integrations', IntegrationViewSet, basename='integration')
# Create PublishingSettings ViewSet instance
publishing_settings_viewset = PublishingSettingsViewSet.as_view({
'get': 'retrieve',
'put': 'update',
'patch': 'partial_update',
})
urlpatterns = [
path('', include(router.urls)),
# Site-level publishing settings
path('sites/<int:site_id>/publishing-settings/', publishing_settings_viewset, name='publishing-settings'),
# Webhook endpoints
path('webhooks/wordpress/status/', wordpress_status_webhook, name='wordpress-status-webhook'),
path('webhooks/wordpress/metadata/', wordpress_metadata_webhook, name='wordpress-metadata-webhook'),


@@ -838,148 +838,5 @@ class IntegrationViewSet(SiteSectorModelViewSet):
}, request=request)
# PublishingSettings ViewSet
from rest_framework import serializers, viewsets
from igny8_core.business.integration.models import PublishingSettings
class PublishingSettingsSerializer(serializers.ModelSerializer):
"""Serializer for PublishingSettings model"""
class Meta:
model = PublishingSettings
fields = [
'id',
'site',
'auto_approval_enabled',
'auto_publish_enabled',
'daily_publish_limit',
'weekly_publish_limit',
'monthly_publish_limit',
'publish_days',
'publish_time_slots',
'created_at',
'updated_at',
]
read_only_fields = ['id', 'site', 'created_at', 'updated_at']
@extend_schema_view(
retrieve=extend_schema(tags=['Integration']),
update=extend_schema(tags=['Integration']),
partial_update=extend_schema(tags=['Integration']),
)
class PublishingSettingsViewSet(viewsets.ViewSet):
"""
ViewSet for managing site-level publishing settings.
GET /api/v1/integration/sites/{site_id}/publishing-settings/
PUT /api/v1/integration/sites/{site_id}/publishing-settings/
PATCH /api/v1/integration/sites/{site_id}/publishing-settings/
"""
permission_classes = [IsAuthenticatedAndActive, IsEditorOrAbove]
throttle_scope = 'integration'
throttle_classes = [DebugScopedRateThrottle]
def _get_site(self, site_id, request):
"""Get site and verify user has access"""
from igny8_core.auth.models import Site
try:
site = Site.objects.get(id=int(site_id))
# Check if user has access to this site (same account)
if hasattr(request, 'account') and site.account != request.account:
return None
return site
except (Site.DoesNotExist, ValueError, TypeError):
return None
@extend_schema(tags=['Integration'])
def retrieve(self, request, site_id=None):
"""
Get publishing settings for a site.
Creates default settings if they don't exist.
"""
site = self._get_site(site_id, request)
if not site:
return error_response(
'Site not found or access denied',
None,
status.HTTP_404_NOT_FOUND,
request
)
# Get or create settings with defaults
settings, created = PublishingSettings.get_or_create_for_site(site)
serializer = PublishingSettingsSerializer(settings)
return success_response(
data=serializer.data,
message='Publishing settings retrieved' + (' (created with defaults)' if created else ''),
request=request
)
@extend_schema(tags=['Integration'])
def update(self, request, site_id=None):
"""
Update publishing settings for a site (full update).
"""
site = self._get_site(site_id, request)
if not site:
return error_response(
'Site not found or access denied',
None,
status.HTTP_404_NOT_FOUND,
request
)
# Get or create settings
settings, _ = PublishingSettings.get_or_create_for_site(site)
serializer = PublishingSettingsSerializer(settings, data=request.data)
if serializer.is_valid():
serializer.save()
return success_response(
data=serializer.data,
message='Publishing settings updated',
request=request
)
return error_response(
'Validation failed',
serializer.errors,
status.HTTP_400_BAD_REQUEST,
request
)
@extend_schema(tags=['Integration'])
def partial_update(self, request, site_id=None):
"""
Partially update publishing settings for a site.
"""
site = self._get_site(site_id, request)
if not site:
return error_response(
'Site not found or access denied',
None,
status.HTTP_404_NOT_FOUND,
request
)
# Get or create settings
settings, _ = PublishingSettings.get_or_create_for_site(site)
serializer = PublishingSettingsSerializer(settings, data=request.data, partial=True)
if serializer.is_valid():
serializer.save()
return success_response(
data=serializer.data,
message='Publishing settings updated',
request=request
)
return error_response(
'Validation failed',
serializer.errors,
status.HTTP_400_BAD_REQUEST,
request
)
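`retrieve()` above leans on `PublishingSettings.get_or_create_for_site()` to lazily create a defaults row on first read, so the endpoint never 404s for a site that simply has no settings yet. The pattern in miniature (in-memory stand-in for the model helper; the default values shown are illustrative, not the real ones):

```python
# In-memory sketch of get_or_create_for_site(): first read creates
# a row with defaults, later reads return the same object.
DEFAULTS = {
    "auto_approval_enabled": False,
    "auto_publish_enabled": False,
    "daily_publish_limit": 5,  # illustrative default
}

_settings_by_site = {}  # site_id -> settings dict

def get_or_create_for_site(site_id):
    created = site_id not in _settings_by_site
    if created:
        _settings_by_site[site_id] = dict(DEFAULTS)
    return _settings_by_site[site_id], created
```

The `created` flag is what lets `retrieve()` append "(created with defaults)" to its response message.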


@@ -39,7 +39,6 @@ class ClustersResource(resources.ModelResource):
class ClustersAdmin(ImportExportMixin, SiteSectorAdminMixin, Igny8ModelAdmin):
resource_class = ClustersResource
list_display = ['name', 'site', 'sector', 'keywords_count', 'volume', 'status', 'created_at']
list_select_related = ['site', 'sector', 'account']
list_filter = [
('status', ChoicesDropdownFilter),
('site', RelatedDropdownFilter),
@@ -96,18 +95,19 @@ class ClustersAdmin(ImportExportMixin, SiteSectorAdminMixin, Igny8ModelAdmin):
@admin.register(Keywords)
class KeywordsAdmin(ImportExportMixin, SiteSectorAdminMixin, Igny8ModelAdmin):
resource_class = KeywordsResource
# Use actual DB fields and custom methods with @admin.display for computed values
list_display = ['get_keyword', 'seed_keyword', 'site', 'sector', 'cluster', 'get_volume', 'get_difficulty', 'get_country', 'status', 'created_at']
list_display = ['keyword', 'seed_keyword', 'site', 'sector', 'cluster', 'volume', 'difficulty', 'country', 'status', 'created_at']
list_editable = ['status'] # Enable inline editing for status
list_select_related = ['site', 'sector', 'cluster', 'seed_keyword', 'seed_keyword__industry', 'seed_keyword__sector', 'account']
list_filter = [
('status', ChoicesDropdownFilter),
('country', ChoicesDropdownFilter),
('site', RelatedDropdownFilter),
('sector', RelatedDropdownFilter),
('cluster', RelatedDropdownFilter),
('volume', RangeNumericFilter),
('difficulty', RangeNumericFilter),
('created_at', RangeDateFilter),
]
search_fields = ['seed_keyword__keyword']
search_fields = ['keyword', 'seed_keyword__keyword']
ordering = ['-created_at']
autocomplete_fields = ['cluster', 'site', 'sector', 'seed_keyword']
actions = [
@@ -117,30 +117,6 @@ class KeywordsAdmin(ImportExportMixin, SiteSectorAdminMixin, Igny8ModelAdmin):
'bulk_soft_delete',
]
@admin.display(description='Keyword')
def get_keyword(self, obj):
"""Get keyword from seed_keyword"""
return obj.seed_keyword.keyword if obj.seed_keyword else '-'
@admin.display(description='Volume')
def get_volume(self, obj):
"""Get volume from override or seed_keyword"""
if obj.volume_override is not None:
return obj.volume_override
return obj.seed_keyword.volume if obj.seed_keyword else 0
@admin.display(description='Difficulty')
def get_difficulty(self, obj):
"""Get difficulty from override or seed_keyword"""
if obj.difficulty_override is not None:
return obj.difficulty_override
return obj.seed_keyword.difficulty if obj.seed_keyword else 0
@admin.display(description='Country')
def get_country(self, obj):
"""Get country from seed_keyword"""
return obj.seed_keyword.country if obj.seed_keyword else 'US'
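The removed `get_volume`/`get_difficulty` displays both encode one rule: an explicit per-keyword override wins, even when it is `0`, and only a missing override falls back to the seed keyword's value. Extracted as a standalone helper:

```python
def effective(override, seed_value, default=0):
    """Override-or-fallback rule from get_volume()/get_difficulty():
    an explicit override wins even when falsy; else the seed value."""
    if override is not None:
        return override
    return seed_value if seed_value is not None else default
```

Note that testing `is not None` rather than truthiness is what lets a deliberate override of `0` suppress the seed value.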
def get_site_display(self, obj):
"""Safely get site name"""
try:
@@ -242,7 +218,6 @@ class ContentIdeasResource(resources.ModelResource):
class ContentIdeasAdmin(ImportExportMixin, SiteSectorAdminMixin, Igny8ModelAdmin):
resource_class = ContentIdeasResource
list_display = ['idea_title', 'site', 'sector', 'description_preview', 'content_type', 'content_structure', 'status', 'keyword_cluster', 'estimated_word_count', 'created_at']
list_select_related = ['site', 'sector', 'keyword_cluster', 'account']
list_filter = [
('status', ChoicesDropdownFilter),
('content_type', ChoicesDropdownFilter),


@@ -1,4 +1,5 @@
from rest_framework import serializers
from django.conf import settings
from .models import Keywords, Clusters, ContentIdeas
from igny8_core.auth.models import SeedKeyword
@@ -68,6 +69,12 @@ class KeywordSerializer(serializers.ModelSerializer):
]
read_only_fields = ['id', 'created_at', 'updated_at', 'account_id', 'keyword', 'volume', 'difficulty', 'country']
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# Only include Stage 1 fields when feature flag is enabled
if getattr(settings, 'USE_SITE_BUILDER_REFACTOR', False):
self.fields['attribute_values'] = serializers.JSONField(read_only=True)
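The `__init__` above gates a serializer field on a settings flag, so the API shape only changes once `USE_SITE_BUILDER_REFACTOR` is enabled. The same idea without Django (field names and types are illustrative):

```python
def build_fields(flags):
    """Feature-flagged field set, mirroring KeywordSerializer.__init__:
    the extra field exists only when the refactor flag is on."""
    fields = {"keyword": "CharField", "volume": "IntegerField"}
    if flags.get("USE_SITE_BUILDER_REFACTOR", False):
        fields["attribute_values"] = "JSONField"
    return fields
```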
def validate(self, attrs):
"""Validate that either seed_keyword_id OR custom keyword fields are provided"""
# For create operations, need either seed_keyword_id OR custom keyword


@@ -426,21 +426,6 @@ class KeywordViewSet(SiteSectorModelViewSet):
errors.append(f"Error adding '{seed_keyword.keyword}': {str(e)}")
skipped_count += 1
# Create notification if keywords were added
if created_count > 0:
try:
from igny8_core.business.notifications.services import NotificationService
NotificationService.notify_keywords_imported(
account=account,
site=site,
count=created_count
)
except Exception as e:
# Don't fail the request if notification fails
import logging
logger = logging.getLogger(__name__)
logger.warning(f"Failed to create notification for keywords import: {e}")
return success_response(
data={
'created': created_count,
@@ -934,45 +919,6 @@ class ClusterViewSet(SiteSectorModelViewSet):
# Save with all required fields explicitly
serializer.save(account=account, site=site, sector=sector)
@action(detail=False, methods=['get'], url_path='summary', url_name='summary')
def summary(self, request):
"""
Get aggregate summary metrics for clusters.
Returns the total keyword count and total search volume across all clusters (unfiltered).
Used for header metrics display.
"""
from django.db.models import Sum, Count, Case, When, F, IntegerField
queryset = self.get_queryset()
# Get cluster IDs
cluster_ids = list(queryset.values_list('id', flat=True))
# Aggregate keyword stats across all clusters
keyword_stats = (
Keywords.objects
.filter(cluster_id__in=cluster_ids)
.aggregate(
total_keywords=Count('id'),
total_volume=Sum(
Case(
When(volume_override__isnull=False, then=F('volume_override')),
default=F('seed_keyword__volume'),
output_field=IntegerField()
)
)
)
)
return success_response(
data={
'total_clusters': len(cluster_ids),
'total_keywords': keyword_stats['total_keywords'] or 0,
'total_volume': keyword_stats['total_volume'] or 0,
},
request=request
)
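The `Case`/`When` aggregate above is effectively a conditional coalesce summed per keyword: `volume_override` when present, else the seed keyword's volume. The equivalent plain-Python reduction:

```python
def cluster_summary(keywords):
    """Mirror of the summary() aggregation: each keyword contributes
    volume_override when set, else its seed keyword's volume."""
    total_volume = sum(
        k["volume_override"] if k["volume_override"] is not None
        else k["seed_volume"]
        for k in keywords
    )
    return {"total_keywords": len(keywords), "total_volume": total_volume}
```

Doing this with `aggregate()` pushes the branch and the sum into a single SQL query instead of iterating rows in Python.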
@action(detail=False, methods=['POST'], url_path='bulk_delete', url_name='bulk_delete')
def bulk_delete(self, request):
"""Bulk delete clusters"""


@@ -12,14 +12,8 @@ __all__ = [
'Strategy',
# Global settings models
'GlobalIntegrationSettings',
'AccountIntegrationOverride',
'GlobalAIPrompt',
'GlobalAuthorProfile',
'GlobalStrategy',
# New centralized models
'IntegrationProvider',
'AISettings',
# Email models
'EmailSettings',
'EmailTemplate',
'EmailLog',
]


@@ -2,7 +2,6 @@
System Module Admin
"""
from django.contrib import admin
from django import forms
from unfold.admin import ModelAdmin
from igny8_core.admin.base import AccountAdminMixin, Igny8ModelAdmin
from .models import AIPrompt, IntegrationSettings, AuthorProfile, Strategy
@@ -32,7 +31,7 @@ class AIPromptResource(resources.ModelResource):
# Import settings admin
from .settings_admin import (
SystemSettingsAdmin, AccountSettingsAdmin, UserSettingsAdmin,
ModuleSettingsAdmin
ModuleSettingsAdmin, AISettingsAdmin
)
try:
@@ -334,61 +333,16 @@ class StrategyAdmin(ImportExportMixin, AccountAdminMixin, Igny8ModelAdmin):
# GLOBAL SETTINGS ADMIN - Platform-wide defaults
# =============================================================================
class GlobalIntegrationSettingsForm(forms.ModelForm):
"""Custom form for GlobalIntegrationSettings with dynamic choices from AIModelConfig"""
class Meta:
model = GlobalIntegrationSettings
fields = '__all__'
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# Load choices dynamically from AIModelConfig
from igny8_core.modules.system.global_settings_models import (
get_text_model_choices,
get_image_model_choices,
get_provider_choices,
)
# OpenAI text model choices
openai_choices = get_text_model_choices()
openai_text_choices = [(m, d) for m, d in openai_choices if 'gpt' in m.lower() or 'openai' in m.lower()]
if openai_text_choices:
self.fields['openai_model'].choices = openai_text_choices
# DALL-E image model choices
dalle_choices = get_image_model_choices(provider='openai')
if dalle_choices:
self.fields['dalle_model'].choices = dalle_choices
# Runware image model choices
runware_choices = get_image_model_choices(provider='runware')
if runware_choices:
self.fields['runware_model'].choices = runware_choices
# Image service provider choices (only OpenAI and Runware for now)
image_providers = get_provider_choices(model_type='image')
# Filter to only OpenAI and Runware
allowed_image_providers = [
(p, d) for p, d in image_providers
if p in ('openai', 'runware')
]
if allowed_image_providers:
self.fields['default_image_service'].choices = allowed_image_providers
@admin.register(GlobalIntegrationSettings)
class GlobalIntegrationSettingsAdmin(Igny8ModelAdmin):
"""Admin for global integration settings (singleton)"""
form = GlobalIntegrationSettingsForm
list_display = ["id", "is_active", "last_updated", "updated_by"]
readonly_fields = ["last_updated", "openai_max_tokens", "anthropic_max_tokens"]
readonly_fields = ["last_updated"]
fieldsets = (
("OpenAI Settings", {
"fields": ("openai_api_key", "openai_model", "openai_temperature", "openai_max_tokens"),
"description": "Global OpenAI configuration used by all accounts (unless overridden). Max tokens is loaded from AI Model Configuration."
"description": "Global OpenAI configuration used by all accounts (unless overridden)"
}),
("Image Generation - Default Service", {
"fields": ("default_image_service",),
@@ -403,7 +357,7 @@ class GlobalIntegrationSettingsAdmin(Igny8ModelAdmin):
"description": "Global Runware image generation configuration"
}),
("Universal Image Settings", {
"fields": ("image_quality", "image_style", "max_in_article_images", "desktop_image_size"),
"fields": ("image_quality", "image_style", "max_in_article_images", "desktop_image_size", "mobile_image_size"),
"description": "Image quality, style, and sizing settings that apply to ALL providers (DALL-E, Runware, etc.)"
}),
("Status", {
@@ -411,49 +365,6 @@ class GlobalIntegrationSettingsAdmin(Igny8ModelAdmin):
}),
)
def get_readonly_fields(self, request, obj=None):
"""Make max_tokens fields readonly - they are populated from AI Model Configuration"""
readonly = list(super().get_readonly_fields(request, obj))
if 'openai_max_tokens' not in readonly:
readonly.append('openai_max_tokens')
if 'anthropic_max_tokens' not in readonly:
readonly.append('anthropic_max_tokens')
return readonly
def openai_max_tokens(self, obj):
"""Display max tokens from the selected OpenAI model's configuration"""
from igny8_core.modules.system.global_settings_models import get_model_max_tokens
max_tokens = get_model_max_tokens(obj.openai_model) if obj else None
if max_tokens:
return f"{max_tokens:,} (from AI Model Configuration)"
return obj.openai_max_tokens if obj else "8192 (default)"
openai_max_tokens.short_description = "Max Output Tokens"
def anthropic_max_tokens(self, obj):
"""Display max tokens from the selected Anthropic model's configuration"""
from igny8_core.modules.system.global_settings_models import get_model_max_tokens
max_tokens = get_model_max_tokens(obj.anthropic_model) if obj else None
if max_tokens:
return f"{max_tokens:,} (from AI Model Configuration)"
return obj.anthropic_max_tokens if obj else "8192 (default)"
anthropic_max_tokens.short_description = "Max Output Tokens"
def save_model(self, request, obj, form, change):
"""Update max_tokens from model config on save"""
from igny8_core.modules.system.global_settings_models import get_model_max_tokens
# Update OpenAI max tokens from model config
openai_max = get_model_max_tokens(obj.openai_model)
if openai_max:
obj.openai_max_tokens = openai_max
# Update Anthropic max tokens from model config
anthropic_max = get_model_max_tokens(obj.anthropic_model)
if anthropic_max:
obj.anthropic_max_tokens = anthropic_max
super().save_model(request, obj, form, change)
def has_add_permission(self, request):
"""Only allow one instance (singleton pattern)"""
return not GlobalIntegrationSettings.objects.exists()
@@ -587,115 +498,3 @@ class GlobalModuleSettingsAdmin(Igny8ModelAdmin):
'updated_at',
]
# IntegrationProvider Admin (centralized API keys)
from .models import IntegrationProvider
@admin.register(IntegrationProvider)
class IntegrationProviderAdmin(Igny8ModelAdmin):
"""
Admin for IntegrationProvider - Centralized API key management.
Per final-model-schemas.md
"""
list_display = [
'provider_id',
'display_name',
'provider_type',
'is_active',
'is_sandbox',
'has_api_key',
'updated_at',
]
list_filter = ['provider_type', 'is_active', 'is_sandbox']
search_fields = ['provider_id', 'display_name']
readonly_fields = ['created_at', 'updated_at']
fieldsets = (
('Provider Info', {
'fields': ('provider_id', 'display_name', 'provider_type')
}),
('API Configuration', {
'fields': ('api_key', 'api_secret', 'webhook_secret', 'api_endpoint'),
'description': 'Enter API keys and endpoints. These are platform-wide.'
}),
('Extra Config', {
'fields': ('config',),
'classes': ('collapse',),
'description': 'JSON config for provider-specific settings'
}),
('Status', {
'fields': ('is_active', 'is_sandbox')
}),
('Metadata', {
'fields': ('updated_by', 'created_at', 'updated_at'),
'classes': ('collapse',)
}),
)
def has_api_key(self, obj):
"""Show if API key is configured"""
return bool(obj.api_key)
has_api_key.boolean = True
has_api_key.short_description = 'API Key Set'
def save_model(self, request, obj, form, change):
"""Set updated_by to current user"""
obj.updated_by = request.user
super().save_model(request, obj, form, change)
# SystemAISettings Admin (new simplified AI settings)
from .ai_settings import SystemAISettings
@admin.register(SystemAISettings)
class SystemAISettingsAdmin(Igny8ModelAdmin):
"""
Admin for SystemAISettings - System-wide AI defaults (Singleton).
Per final-model-schemas.md
"""
list_display = [
'id',
'temperature',
'max_tokens',
'image_style',
'image_quality',
'max_images_per_article',
'updated_at',
]
readonly_fields = ['updated_at']
fieldsets = (
('AI Parameters', {
'fields': ('temperature', 'max_tokens'),
'description': 'System-wide defaults for AI text generation. Accounts can override via AccountSettings.'
}),
('Image Generation', {
'fields': ('image_style', 'image_quality', 'max_images_per_article', 'image_size'),
'description': 'System-wide defaults for image generation. Accounts can override via AccountSettings.'
}),
('Metadata', {
'fields': ('updated_by', 'updated_at'),
'classes': ('collapse',)
}),
)
def has_add_permission(self, request):
"""Only allow one instance (singleton)"""
return not SystemAISettings.objects.exists()
def has_delete_permission(self, request, obj=None):
"""Prevent deletion of singleton"""
return False
def save_model(self, request, obj, form, change):
"""Set updated_by to current user"""
obj.updated_by = request.user
super().save_model(request, obj, form, change)
# Import Email Admin (EmailSettings, EmailTemplate, EmailLog)
from .email_admin import EmailSettingsAdmin, EmailTemplateAdmin, EmailLogAdmin


@@ -1,195 +0,0 @@
"""
AI Settings - System-wide AI defaults (Singleton)
This is the clean, simplified model for AI configuration.
Replaces the deprecated GlobalIntegrationSettings.
API keys are stored in IntegrationProvider.
Model definitions are in AIModelConfig.
This model only stores system-wide defaults for AI parameters.
"""
from django.db import models
from django.conf import settings
import logging
logger = logging.getLogger(__name__)
class SystemAISettings(models.Model):
"""
System-wide AI defaults. Singleton (pk=1).
Removed fields (now elsewhere):
- All *_api_key fields → IntegrationProvider
- All *_model fields → AIModelConfig.is_default
- default_text_provider → AIModelConfig.is_default where model_type='text'
- default_image_service → AIModelConfig.is_default where model_type='image'
Accounts can override these via AccountSettings with keys like:
- ai.temperature
- ai.max_tokens
- ai.image_style
- ai.image_quality
- ai.max_images
"""
IMAGE_STYLE_CHOICES = [
('photorealistic', 'Photorealistic'),
('illustration', 'Illustration'),
('3d_render', '3D Render'),
('minimal_flat', 'Minimal / Flat Design'),
('artistic', 'Artistic / Painterly'),
('cartoon', 'Cartoon / Stylized'),
]
IMAGE_QUALITY_CHOICES = [
('standard', 'Standard'),
('hd', 'HD'),
]
IMAGE_SIZE_CHOICES = [
('1024x1024', '1024x1024 (Square)'),
('1792x1024', '1792x1024 (Landscape)'),
('1024x1792', '1024x1792 (Portrait)'),
]
# AI Parameters
temperature = models.FloatField(
default=0.7,
help_text="AI temperature (0.0-2.0). Higher = more creative."
)
max_tokens = models.IntegerField(
default=8192,
help_text="Max response tokens"
)
# Image Generation Settings
image_style = models.CharField(
max_length=30,
default='photorealistic',
choices=IMAGE_STYLE_CHOICES,
help_text="Default image style"
)
image_quality = models.CharField(
max_length=20,
default='standard',
choices=IMAGE_QUALITY_CHOICES,
help_text="Default image quality (standard/hd)"
)
max_images_per_article = models.IntegerField(
default=4,
help_text="Max in-article images (1-8)"
)
image_size = models.CharField(
max_length=20,
default='1024x1024',
choices=IMAGE_SIZE_CHOICES,
help_text="Default image dimensions"
)
# Metadata
updated_by = models.ForeignKey(
settings.AUTH_USER_MODEL,
on_delete=models.SET_NULL,
null=True,
blank=True,
related_name='system_ai_settings_updates'
)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
db_table = 'igny8_system_ai_settings'
verbose_name = 'System AI Settings'
verbose_name_plural = 'System AI Settings'
def save(self, *args, **kwargs):
"""Enforce singleton - always use pk=1"""
self.pk = 1
super().save(*args, **kwargs)
def delete(self, *args, **kwargs):
"""Prevent deletion of singleton"""
pass
@classmethod
def get_instance(cls):
"""Get or create the singleton instance"""
obj, created = cls.objects.get_or_create(pk=1)
return obj
def __str__(self):
return "System AI Settings"
# Helper methods for getting effective settings with account overrides
@classmethod
def get_effective_temperature(cls, account=None) -> float:
"""Get temperature, checking account override first"""
if account:
override = cls._get_account_override(account, 'ai.temperature')
if override is not None:
return float(override)
return cls.get_instance().temperature
@classmethod
def get_effective_max_tokens(cls, account=None) -> int:
"""Get max_tokens, checking account override first"""
if account:
override = cls._get_account_override(account, 'ai.max_tokens')
if override is not None:
return int(override)
return cls.get_instance().max_tokens
@classmethod
def get_effective_image_style(cls, account=None) -> str:
"""Get image_style, checking account override first"""
if account:
override = cls._get_account_override(account, 'ai.image_style')
if override is not None:
return str(override)
return cls.get_instance().image_style
@classmethod
def get_effective_image_quality(cls, account=None) -> str:
"""Get image_quality, checking account override first"""
if account:
override = cls._get_account_override(account, 'ai.image_quality')
if override is not None:
return str(override)
return cls.get_instance().image_quality
@classmethod
def get_effective_max_images(cls, account=None) -> int:
"""Get max_images_per_article, checking account override first"""
if account:
override = cls._get_account_override(account, 'ai.max_images')
if override is not None:
return int(override)
return cls.get_instance().max_images_per_article
@classmethod
def get_effective_image_size(cls, account=None) -> str:
"""Get image_size, checking account override first"""
if account:
override = cls._get_account_override(account, 'ai.image_size')
if override is not None:
return str(override)
return cls.get_instance().image_size
@staticmethod
def _get_account_override(account, key: str):
"""Get account-specific override from AccountSettings"""
try:
from igny8_core.modules.system.settings_models import AccountSettings
setting = AccountSettings.objects.filter(
account=account,
key=key
).first()
if setting and setting.config:
return setting.config.get('value')
except Exception as e:
logger.debug(f"Could not get account override for {key}: {e}")
return None
# Alias for backward compatibility and clearer naming
AISettings = SystemAISettings
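The `get_effective_*` classmethods above all follow one resolution order: an account-level `AccountSettings` override wins when present, otherwise the singleton's system-wide default applies. Condensed, with dicts standing in for both models (default values illustrative):

```python
# System-default-with-account-override resolution, as implemented by
# the get_effective_* classmethods above (in-memory stand-in).
SYSTEM_DEFAULTS = {"ai.temperature": 0.7, "ai.max_tokens": 8192}

def effective_setting(key, account_overrides=None):
    """Account override first; fall back to the system singleton."""
    if account_overrides is not None and key in account_overrides:
        return account_overrides[key]
    return SYSTEM_DEFAULTS[key]
```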


@@ -1,446 +0,0 @@
"""
Email Admin Configuration for IGNY8
Provides admin interface for managing:
- Email Settings (global configuration)
- Email Templates (template metadata and testing)
- Email Logs (sent email history)
"""
from django.contrib import admin
from django.utils.html import format_html
from django.urls import path, reverse
from django.shortcuts import render, redirect
from django.contrib import messages
from django.http import JsonResponse
from unfold.admin import ModelAdmin as UnfoldModelAdmin
from igny8_core.admin.base import Igny8ModelAdmin
from .email_models import EmailSettings, EmailTemplate, EmailLog
@admin.register(EmailSettings)
class EmailSettingsAdmin(Igny8ModelAdmin):
"""
Admin for EmailSettings - Global email configuration (Singleton)
"""
list_display = [
'from_email',
'from_name',
'email_provider',
'reply_to_email',
'send_welcome_emails',
'send_billing_emails',
'updated_at',
]
readonly_fields = ['updated_at']
fieldsets = (
('Email Provider', {
'fields': ('email_provider',),
'description': 'Select the active email service provider. Configure SMTP settings below if using SMTP.',
}),
('SMTP Configuration', {
'fields': (
'smtp_host',
'smtp_port',
'smtp_username',
'smtp_password',
'smtp_use_tls',
'smtp_use_ssl',
'smtp_timeout',
),
'description': 'SMTP server settings. Required when email_provider is set to SMTP.',
'classes': ('collapse',),
}),
('Sender Configuration', {
'fields': ('from_email', 'from_name', 'reply_to_email'),
'description': 'Default sender settings. The from address must be verified in Resend (if using Resend) or configured on the SMTP server.',
}),
('Company Branding', {
'fields': ('company_name', 'company_address', 'logo_url'),
'description': 'Company information shown in email templates.',
}),
('Support Links', {
'fields': ('support_email', 'support_url', 'unsubscribe_url'),
'classes': ('collapse',),
}),
('Email Types', {
'fields': (
'send_welcome_emails',
'send_billing_emails',
'send_subscription_emails',
'send_low_credit_warnings',
),
'description': 'Enable/disable specific email types globally.',
}),
('Thresholds', {
'fields': ('low_credit_threshold', 'renewal_reminder_days'),
}),
('Metadata', {
'fields': ('updated_by', 'updated_at'),
'classes': ('collapse',),
}),
)
change_form_template = 'admin/system/emailsettings/change_form.html'
def get_urls(self):
"""Add custom URL for test email"""
urls = super().get_urls()
custom_urls = [
path(
'test-email/',
self.admin_site.admin_view(self.test_email_view),
name='system_emailsettings_test_email'
),
path(
'send-test-email/',
self.admin_site.admin_view(self.send_test_email),
name='system_emailsettings_send_test'
),
]
return custom_urls + urls
def test_email_view(self, request):
"""Show test email form"""
settings = EmailSettings.get_settings()
context = {
**self.admin_site.each_context(request),
'title': 'Send Test Email',
'settings': settings,
'opts': self.model._meta,
'default_from_email': settings.from_email,
'default_to_email': request.user.email,
}
return render(request, 'admin/system/emailsettings/test_email.html', context)
def send_test_email(self, request):
"""Send test email to verify configuration"""
if request.method != 'POST':
return JsonResponse({'error': 'POST required'}, status=405)
from django.utils import timezone
from igny8_core.business.billing.services.email_service import EmailService
to_email = request.POST.get('to_email', request.user.email)
subject = request.POST.get('subject', 'IGNY8 Test Email')
# Create fresh EmailService instance to pick up latest settings
service = EmailService()
settings = EmailSettings.get_settings()
test_html = f"""
<html>
<body style="font-family: Arial, sans-serif; padding: 20px;">
<h1 style="color: #6366f1;">IGNY8 Email Test</h1>
<p>This is a test email to verify your email configuration.</p>
<h3>Configuration Details:</h3>
<ul>
<li><strong>Provider:</strong> {settings.email_provider.upper()}</li>
<li><strong>From:</strong> {settings.from_name} &lt;{settings.from_email}&gt;</li>
<li><strong>Reply-To:</strong> {settings.reply_to_email}</li>
<li><strong>Sent At:</strong> {timezone.now().strftime('%Y-%m-%d %H:%M:%S UTC')}</li>
</ul>
<p style="color: #22c55e; font-weight: bold;">
✓ If you received this email, your email configuration is working correctly!
</p>
<hr style="margin: 20px 0; border: none; border-top: 1px solid #e5e7eb;">
<p style="font-size: 12px; color: #6b7280;">
This is an automated test email from IGNY8 Admin.
</p>
</body>
</html>
"""
try:
result = service.send_transactional(
to=to_email,
subject=subject,
html=test_html,
tags=['test', 'admin-test'],
)
if result.get('success'):
# Log the test email
EmailLog.objects.create(
message_id=result.get('id', ''),
to_email=to_email,
from_email=settings.from_email,
subject=subject,
template_name='admin_test',
status='sent',
provider=result.get('provider', settings.email_provider),
tags=['test', 'admin-test'],
)
messages.success(
request,
f'Test email sent successfully to {to_email} via {result.get("provider", "unknown").upper()}!'
)
else:
messages.error(request, f'Failed to send: {result.get("error", "Unknown error")}')
except Exception as e:
messages.error(request, f'Error sending test email: {str(e)}')
return redirect(reverse('admin:system_emailsettings_changelist'))
def has_add_permission(self, request):
"""Only allow one instance (singleton)"""
return not EmailSettings.objects.exists()
def has_delete_permission(self, request, obj=None):
"""Prevent deletion of singleton"""
return False
def save_model(self, request, obj, form, change):
"""Set updated_by to current user"""
obj.updated_by = request.user
super().save_model(request, obj, form, change)
@admin.register(EmailTemplate)
class EmailTemplateAdmin(Igny8ModelAdmin):
"""
Admin for EmailTemplate - Manage email templates and testing
"""
list_display = [
'display_name',
'template_type',
'template_name',
'is_active',
'send_count',
'last_sent_at',
'test_email_button',
]
list_filter = ['template_type', 'is_active']
search_fields = ['display_name', 'template_name', 'description']
readonly_fields = ['send_count', 'last_sent_at', 'created_at', 'updated_at']
fieldsets = (
('Template Info', {
'fields': ('template_name', 'template_path', 'display_name', 'description'),
}),
('Email Settings', {
'fields': ('template_type', 'default_subject'),
}),
('Context Configuration', {
'fields': ('required_context', 'sample_context'),
'description': 'Define required variables and sample data for testing.',
'classes': ('collapse',),
}),
('Status', {
'fields': ('is_active',),
}),
('Statistics', {
'fields': ('send_count', 'last_sent_at'),
'classes': ('collapse',),
}),
('Timestamps', {
'fields': ('created_at', 'updated_at'),
'classes': ('collapse',),
}),
)
def test_email_button(self, obj):
"""Add test email button in list view"""
url = reverse('admin:system_emailtemplate_test', args=[obj.pk])
return format_html(
'<a class="button" href="{}" style="padding: 4px 12px; background: #6366f1; color: white; '
'border-radius: 4px; text-decoration: none; font-size: 12px;">Test</a>',
url
)
test_email_button.short_description = 'Test'
def get_urls(self):
"""Add custom URL for test email"""
urls = super().get_urls()
custom_urls = [
path(
'<int:template_id>/test/',
self.admin_site.admin_view(self.test_email_view),
name='system_emailtemplate_test'
),
path(
'<int:template_id>/send-test/',
self.admin_site.admin_view(self.send_test_email),
name='system_emailtemplate_send_test'
),
]
return custom_urls + urls
def test_email_view(self, request, template_id):
"""Show test email form"""
template = EmailTemplate.objects.get(pk=template_id)
context = {
**self.admin_site.each_context(request),
'title': f'Test Email: {template.display_name}',
'template': template,
'opts': self.model._meta,
}
return render(request, 'admin/system/emailtemplate/test_email.html', context)
def send_test_email(self, request, template_id):
"""Send test email"""
if request.method != 'POST':
return JsonResponse({'error': 'POST required'}, status=405)
import json
from django.utils import timezone
from igny8_core.business.billing.services.email_service import get_email_service
template = EmailTemplate.objects.get(pk=template_id)
to_email = request.POST.get('to_email', request.user.email)
custom_context = request.POST.get('context', '{}')
try:
context = json.loads(custom_context) if custom_context else {}
except json.JSONDecodeError:
context = template.sample_context or {}
# Merge sample context with any custom values
final_context = {**(template.sample_context or {}), **context}
# Add default context values
final_context.setdefault('user_name', 'Test User')
final_context.setdefault('account_name', 'Test Account')
final_context.setdefault('frontend_url', 'https://app.igny8.com')
service = get_email_service()
try:
result = service.send_transactional(
to=to_email,
subject=f'[TEST] {template.default_subject}',
template=template.template_path,
context=final_context,
tags=['test', template.template_type],
)
if result.get('success'):
# Update template stats
template.send_count += 1
template.last_sent_at = timezone.now()
template.save(update_fields=['send_count', 'last_sent_at'])
# Log the email
EmailLog.objects.create(
message_id=result.get('id', ''),
to_email=to_email,
from_email=service.from_email,
subject=f'[TEST] {template.default_subject}',
template_name=template.template_name,
status='sent',
provider=result.get('provider', 'resend'),
tags=['test', template.template_type],
)
messages.success(
request,
f'Test email sent successfully to {to_email}! (ID: {result.get("id", "N/A")})'
)
else:
messages.error(request, f'Failed to send: {result.get("error", "Unknown error")}')
except Exception as e:
messages.error(request, f'Error sending test email: {str(e)}')
return redirect(reverse('admin:system_emailtemplate_changelist'))
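The context-merging rules in `send_test_email` above (custom JSON overrides `sample_context`, invalid JSON falls back to `sample_context`, and a few defaults fill only missing keys) can be sketched as a standalone function. `build_test_context` is a hypothetical helper for illustration, not part of the codebase:

```python
import json

def build_test_context(sample_context, custom_json):
    """Sketch of the merge in send_test_email: custom values win,
    bad JSON falls back to the template's sample_context, and
    defaults apply only when a key is absent."""
    try:
        custom = json.loads(custom_json) if custom_json else {}
    except json.JSONDecodeError:
        custom = sample_context or {}
    final = {**(sample_context or {}), **custom}
    final.setdefault('user_name', 'Test User')
    final.setdefault('account_name', 'Test Account')
    final.setdefault('frontend_url', 'https://app.igny8.com')
    return final

ctx = build_test_context({'user_name': 'Alice', 'plan': 'Pro'}, '{"plan": "Team"}')
print(ctx['plan'])       # custom value wins -> Team
print(ctx['user_name'])  # sample value kept -> Alice
```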
@admin.register(EmailLog)
class EmailLogAdmin(Igny8ModelAdmin):
"""
Admin for EmailLog - View sent email history
"""
list_display = [
'sent_at',
'to_email',
'subject_truncated',
'template_name',
'status_badge',
'provider',
'message_id_short',
]
list_filter = ['status', 'provider', 'template_name', 'sent_at']
search_fields = ['to_email', 'subject', 'message_id']
readonly_fields = [
'message_id', 'to_email', 'from_email', 'subject',
'template_name', 'status', 'provider', 'error_message',
'tags', 'sent_at'
]
date_hierarchy = 'sent_at'
fieldsets = (
('Email Details', {
'fields': ('to_email', 'from_email', 'subject'),
}),
('Delivery Info', {
'fields': ('status', 'provider', 'message_id'),
}),
('Template', {
'fields': ('template_name', 'tags'),
}),
('Error Info', {
'fields': ('error_message',),
'classes': ('collapse',),
}),
('Timestamp', {
'fields': ('sent_at',),
}),
)
def has_add_permission(self, request):
"""Logs are created automatically"""
return False
def has_change_permission(self, request, obj=None):
"""Logs are read-only"""
return False
def has_delete_permission(self, request, obj=None):
"""Allow deletion for cleanup"""
return request.user.is_superuser
def subject_truncated(self, obj):
"""Truncate long subjects"""
if len(obj.subject) > 50:
return f'{obj.subject[:50]}...'
return obj.subject
subject_truncated.short_description = 'Subject'
def message_id_short(self, obj):
"""Show truncated message ID"""
if obj.message_id:
return f'{obj.message_id[:20]}...' if len(obj.message_id) > 20 else obj.message_id
return '-'
message_id_short.short_description = 'Message ID'
def status_badge(self, obj):
"""Show status with color badge"""
colors = {
'sent': '#3b82f6',
'delivered': '#22c55e',
'failed': '#ef4444',
'bounced': '#f59e0b',
}
color = colors.get(obj.status, '#6b7280')
return format_html(
'<span style="background: {}; color: white; padding: 2px 8px; '
'border-radius: 4px; font-size: 11px;">{}</span>',
color, obj.status.upper()
)
status_badge.short_description = 'Status'
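The `subject_truncated` and `message_id_short` list-view helpers above share one truncation idiom; a minimal standalone sketch (`truncate` is a hypothetical helper, not part of the codebase):

```python
def truncate(text, limit):
    """Shared idiom behind subject_truncated and message_id_short:
    clip to `limit` characters and append an ellipsis only when needed."""
    return f'{text[:limit]}...' if len(text) > limit else text

print(truncate('Welcome to IGNY8', 50))  # under the limit: returned unchanged
print(truncate('x' * 60, 20))            # first 20 chars plus '...'
```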

View File

@@ -1,338 +0,0 @@
"""
Email Configuration Models for IGNY8
Provides database-driven email settings, template management, and send test functionality.
Works with the existing EmailService and IntegrationProvider models.
"""
from django.db import models
from django.conf import settings
class EmailSettings(models.Model):
"""
Global email settings - singleton model for email configuration.
Stores default email settings that can be managed through Django admin.
These settings work alongside IntegrationProvider (resend) configuration.
"""
EMAIL_PROVIDER_CHOICES = [
('resend', 'Resend'),
('smtp', 'SMTP'),
]
# Email provider selection
email_provider = models.CharField(
max_length=20,
choices=EMAIL_PROVIDER_CHOICES,
default='resend',
help_text='Active email service provider'
)
# SMTP Configuration
smtp_host = models.CharField(
max_length=255,
blank=True,
help_text='SMTP server hostname (e.g., smtp.gmail.com)'
)
smtp_port = models.IntegerField(
default=587,
help_text='SMTP server port (587 for TLS, 465 for SSL, 25 for plain)'
)
smtp_username = models.CharField(
max_length=255,
blank=True,
help_text='SMTP authentication username'
)
smtp_password = models.CharField(
max_length=255,
blank=True,
help_text='SMTP authentication password'
)
smtp_use_tls = models.BooleanField(
default=True,
help_text='Use TLS encryption (recommended for port 587)'
)
smtp_use_ssl = models.BooleanField(
default=False,
help_text='Use SSL encryption (for port 465)'
)
smtp_timeout = models.IntegerField(
default=30,
help_text='SMTP connection timeout in seconds'
)
# Default sender settings
from_email = models.EmailField(
default='noreply@igny8.com',
help_text='Default sender email address (must be verified in Resend)'
)
from_name = models.CharField(
max_length=100,
default='IGNY8',
help_text='Default sender display name'
)
reply_to_email = models.EmailField(
default='support@igny8.com',
help_text='Default reply-to email address'
)
# Company branding for emails
company_name = models.CharField(
max_length=100,
default='IGNY8',
help_text='Company name shown in emails'
)
company_address = models.TextField(
blank=True,
help_text='Company address for email footer (CAN-SPAM compliance)'
)
logo_url = models.URLField(
blank=True,
help_text='URL to company logo for emails'
)
# Support links
support_email = models.EmailField(
default='support@igny8.com',
help_text='Support email shown in emails'
)
support_url = models.URLField(
blank=True,
help_text='Link to support/help center'
)
unsubscribe_url = models.URLField(
blank=True,
help_text='URL for email unsubscribe (for marketing emails)'
)
# Feature flags
send_welcome_emails = models.BooleanField(
default=True,
help_text='Send welcome email on user registration'
)
send_billing_emails = models.BooleanField(
default=True,
help_text='Send payment confirmation, invoice emails'
)
send_subscription_emails = models.BooleanField(
default=True,
help_text='Send subscription renewal reminders'
)
send_low_credit_warnings = models.BooleanField(
default=True,
help_text='Send low credit warning emails'
)
# Credit warning threshold
low_credit_threshold = models.IntegerField(
default=100,
help_text='Send warning when credits fall below this value'
)
renewal_reminder_days = models.IntegerField(
default=7,
help_text='Days before subscription renewal to send reminder'
)
# Audit
updated_by = models.ForeignKey(
settings.AUTH_USER_MODEL,
on_delete=models.SET_NULL,
null=True,
blank=True,
related_name='email_settings_updates'
)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
db_table = 'igny8_email_settings'
verbose_name = 'Email Settings'
verbose_name_plural = 'Email Settings'
def __str__(self):
return f'Email Settings (from: {self.from_email})'
def save(self, *args, **kwargs):
"""Ensure only one instance exists (singleton)"""
self.pk = 1
super().save(*args, **kwargs)
@classmethod
def get_settings(cls):
"""Get singleton settings instance, creating if needed"""
obj, _ = cls.objects.get_or_create(pk=1)
return obj
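`EmailSettings` enforces a single row by pinning `pk = 1` in `save()` and exposing a `get_settings()` accessor. A framework-free sketch of the same singleton pattern (hypothetical class, no Django involved — a dict stands in for the database table):

```python
class SingletonSettings:
    """Framework-free sketch of the pk=1 singleton pattern used by EmailSettings."""
    _store = {}  # stands in for the database table

    def __init__(self, from_email='noreply@example.com'):
        self.pk = None
        self.from_email = from_email

    def save(self):
        # Force every save onto primary key 1, so only one row can ever exist.
        self.pk = 1
        self._store[1] = self

    @classmethod
    def get_settings(cls):
        # get_or_create equivalent: return the existing row, creating it if needed.
        if 1 not in cls._store:
            obj = cls()
            obj.save()
        return cls._store[1]

a = SingletonSettings.get_settings()
b = SingletonSettings.get_settings()
print(a is b)  # every caller sees the same instance -> True
```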
class EmailTemplate(models.Model):
"""
Email template metadata - tracks available email templates
and their usage/configuration.
Templates are stored as Django templates in templates/emails/.
This model provides admin visibility and test sending capability.
"""
TEMPLATE_TYPE_CHOICES = [
('auth', 'Authentication'),
('billing', 'Billing'),
('notification', 'Notification'),
('marketing', 'Marketing'),
]
# Template identification
template_name = models.CharField(
max_length=100,
unique=True,
help_text='Template file name without extension (e.g., "welcome")'
)
template_path = models.CharField(
max_length=200,
help_text='Full template path (e.g., "emails/welcome.html")'
)
# Display info
display_name = models.CharField(
max_length=100,
help_text='Human-readable template name'
)
description = models.TextField(
blank=True,
help_text='Description of when this template is used'
)
template_type = models.CharField(
max_length=20,
choices=TEMPLATE_TYPE_CHOICES,
default='notification'
)
# Default subject
default_subject = models.CharField(
max_length=200,
help_text='Default email subject line'
)
# Required context variables
required_context = models.JSONField(
default=list,
blank=True,
help_text='List of required context variables for this template'
)
# Sample context for testing
sample_context = models.JSONField(
default=dict,
blank=True,
help_text='Sample context for test sending (JSON)'
)
# Status
is_active = models.BooleanField(
default=True,
help_text='Whether this template is currently in use'
)
# Stats
send_count = models.IntegerField(
default=0,
help_text='Number of emails sent using this template'
)
last_sent_at = models.DateTimeField(
null=True,
blank=True,
help_text='Last time an email was sent with this template'
)
# Timestamps
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
db_table = 'igny8_email_templates'
verbose_name = 'Email Template'
verbose_name_plural = 'Email Templates'
ordering = ['template_type', 'display_name']
def __str__(self):
return f'{self.display_name} ({self.template_type})'
class EmailLog(models.Model):
"""
Log of sent emails for audit and debugging.
"""
STATUS_CHOICES = [
('sent', 'Sent'),
('delivered', 'Delivered'),
('failed', 'Failed'),
('bounced', 'Bounced'),
]
# Email identification
message_id = models.CharField(
max_length=200,
blank=True,
help_text='Provider message ID (from Resend)'
)
# Recipients
to_email = models.EmailField(
help_text='Recipient email'
)
from_email = models.EmailField(
help_text='Sender email'
)
# Content
subject = models.CharField(
max_length=500,
help_text='Email subject'
)
template_name = models.CharField(
max_length=100,
blank=True,
help_text='Template used (if any)'
)
# Status
status = models.CharField(
max_length=20,
choices=STATUS_CHOICES,
default='sent'
)
provider = models.CharField(
max_length=50,
default='resend',
help_text='Email provider used'
)
# Error tracking
error_message = models.TextField(
blank=True,
help_text='Error message if failed'
)
# Metadata
tags = models.JSONField(
default=list,
blank=True,
help_text='Email tags for categorization'
)
# Timestamps
sent_at = models.DateTimeField(auto_now_add=True)
class Meta:
db_table = 'igny8_email_log'
verbose_name = 'Email Log'
verbose_name_plural = 'Email Logs'
ordering = ['-sent_at']
indexes = [
models.Index(fields=['to_email', 'sent_at']),
models.Index(fields=['status', 'sent_at']),
models.Index(fields=['template_name', 'sent_at']),
]
def __str__(self):
return f'{self.subject} → {self.to_email} ({self.status})'

Some files were not shown because too many files have changed in this diff.