Compare commits

127 Commits: 65bf65bb6b ... cleanup/ph
.cursorrules (380 lines, deleted)

@@ -1,380 +0,0 @@
# IGNY8 Development Rules & Standards

**Project:** IGNY8 - AI-Powered Content Platform
**Version:** v1.0.0
**Last Updated:** December 12, 2025

---

## 📋 General Development Principles

### 1. **Always Read Documentation First**
Before making changes, consult these critical docs:
- `ARCHITECTURE-KNOWLEDGE-BASE.md` - System architecture and design patterns
- `CHANGELOG.md` - Recent changes and version history
- `IGNY8-COMPLETE-FEATURES-GUIDE.md` - Complete feature set and capabilities
- `docs/00-SYSTEM/` - Core system architecture
- `docs/10-BACKEND/` - Backend models, services, APIs
- `docs/20-API/` - API endpoint documentation
- `docs/30-FRONTEND/` - Frontend components and architecture
- `docs/40-WORKFLOWS/` - Business workflows and processes
### 2. **Maintain Consistency**
- **API Design:** Follow existing RESTful patterns in `backend/igny8_core/*/views.py`
- **Models:** Use existing base classes (`SoftDeletableModel`, `AccountBaseModel`, `SiteSectorBaseModel`)
- **Services:** Follow the service pattern in `backend/igny8_core/business/*/services/`
- **AI Functions:** Use the AI framework in `backend/igny8_core/ai/` (not the legacy `utils/ai_processor.py`)
- **Frontend Components:** Use the existing component library in `frontend/src/components/`
- **Styling:** Use TailwindCSS classes and follow the existing design system in `frontend/DESIGN_SYSTEM.md`
- **State Management:** Use Zustand stores in `frontend/src/store/`

### 3. **Multi-Tenancy Rules**
- **ALWAYS scope by account:** Every query must filter by account
- **Site/Sector scoping:** Use `SiteSectorBaseModel` for site-specific data
- **Permissions:** Check permissions via `IsAuthenticatedAndActive`, `HasTenantAccess`, and role-based permissions
- **No cross-tenant access:** Validate account ownership before operations
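The account-scoping rule can be sketched in plain Python. This is an illustrative stand-in, not the project's actual Django code: `Site`, `Keyword`, and `scoped_keywords` are hypothetical names used only to show the filtering shape.

```python
# Plain-Python sketch of account scoping (the real project uses Django
# querysets; these dataclasses are illustrative stand-ins).
from dataclasses import dataclass

@dataclass
class Site:
    account_id: int

@dataclass
class Keyword:
    site: Site
    text: str

def scoped_keywords(keywords, account_id):
    """Return only keywords whose site belongs to the given account."""
    return [k for k in keywords if k.site.account_id == account_id]

site_a, site_b = Site(account_id=1), Site(account_id=2)
rows = [Keyword(site_a, "seo"), Keyword(site_b, "ads"), Keyword(site_a, "blog")]
print([k.text for k in scoped_keywords(rows, account_id=1)])  # ['seo', 'blog']
```

The same shape appears in the API Views section: every `get_queryset` filters through the site's account before anything else.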
### 4. **API Endpoint Rules**
- **Use the existing API structure:** All user-facing endpoints live under `/api/v1/<module>/`; admin endpoints live under `/api/v1/<module>/admin/`
- **No parallel API systems:** Register all endpoints in the module's `urls.py` and test them via Swagger at `/api/docs/` before documenting
- **Document in Swagger:** Ensure drf-spectacular auto-generates docs; verify the endpoint appears at `/api/docs/` and `/api/schema/`
---

## 📝 Change Management & Versioning

Always update the changelog incrementally, recording what was fixed, added, or modified for each version. Do not remove or modify existing version entries.

### Versioning Scheme: `v<MAJOR>.<MINOR>.<PATCH>`

**Example:** v1.2.5
- `MAJOR` (1.x.x): Breaking changes, major features, architecture changes (bump only when asked)
- `MINOR` (x.2.x): New features, modules, significant enhancements
- `PATCH` (x.x.5): Bug fixes, small improvements, refactors
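The bump rules can be expressed as a small helper. This is a hypothetical illustration (not part of the codebase); `bump_version` is an assumed name:

```python
# Hypothetical helper illustrating the MAJOR/MINOR/PATCH bump rules above.
def bump_version(version: str, change: str) -> str:
    """Apply a bump to a 'vX.Y.Z' version string."""
    major, minor, patch = (int(p) for p in version.lstrip("v").split("."))
    if change == "major":      # breaking changes, architecture overhauls
        return f"v{major + 1}.0.0"
    if change == "minor":      # new features, endpoints, components
        return f"v{major}.{minor + 1}.0"
    return f"v{major}.{minor}.{patch + 1}"  # fixes, small improvements

print(bump_version("v1.2.5", "patch"))  # v1.2.6
print(bump_version("v1.2.5", "minor"))  # v1.3.0
```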
### Changelog Update Rules

#### **For EVERY Change:**
1. **Update the version number** in `CHANGELOG.md`
2. **Increment PATCH** (v1.0.0 → v1.0.1) for:
   - Bug fixes
   - Small improvements
   - Code refactors
   - Documentation updates
   - UI/UX tweaks
3. **Increment MINOR** (v1.0.x → v1.1.0) for:
   - New features
   - New API endpoints
   - New components
   - New services
   - Significant enhancements
4. **Increment MAJOR** (v1.x.x → v2.0.0) for:
   - Breaking API changes
   - Breaking database schema changes
   - Architecture overhauls
   - Major refactors affecting multiple modules

#### **Changelog Entry Format:**
```markdown
## v1.2.5 - December 12, 2025

### Fixed
- User logout issue when switching accounts
- Payment confirmation modal amount display

### Changed
- Updated session storage from database to Redis
- Enhanced credit balance widget UI

### Added
- Plan limits enforcement system
- Monthly reset task for usage tracking
```

### **For Major Refactors:**
1. **Create a detailed TODO list** before starting
2. **Document the current state** in the CHANGELOG
3. **Create an implementation checklist** (markdown file in the root or `docs/`)
4. **Track progress** with checklist updates
5. **Test thoroughly** before committing
6. **Update the CHANGELOG** with all changes made
7. **Bump the version** to the next MINOR or MAJOR
---

## 🏗️ Code Organization Standards

### Backend Structure
```
backend/igny8_core/
├── auth/              # Authentication, users, accounts, plans
├── business/          # Business logic services
│   ├── automation/    # Automation pipeline
│   ├── billing/       # Billing, credits, invoices
│   ├── content/       # Content generation
│   ├── integration/   # External integrations
│   ├── linking/       # Internal linking
│   ├── optimization/  # Content optimization
│   ├── planning/      # Keywords, clusters, ideas
│   └── publishing/    # WordPress publishing
├── ai/                # AI framework (NEW - use this)
├── utils/             # Utility functions
├── tasks/             # Celery tasks
└── modules/           # Legacy modules (being phased out)
```

### Frontend Structure
```
frontend/src/
├── components/   # Reusable components
├── pages/        # Page components
├── store/        # Zustand state stores
├── services/     # API service layer
├── hooks/        # Custom React hooks
├── utils/        # Utility functions
├── types/        # TypeScript types
└── marketing/    # Marketing site
```

---
## 🔧 Development Workflow

### 1. **Planning Phase**
- [ ] Read relevant documentation
- [ ] Understand existing patterns
- [ ] Create a TODO list for complex changes
- [ ] Identify affected components/modules
- [ ] Plan database changes (if any)

### 2. **Implementation Phase**
- [ ] Follow existing code patterns
- [ ] Use proper base classes and mixins
- [ ] Add proper error handling
- [ ] Validate input data
- [ ] Check permissions and scope
- [ ] Write clean, documented code
- [ ] Use type hints (Python) and TypeScript types

### 3. **Testing Phase**
- [ ] Test locally with development data
- [ ] Test multi-tenancy isolation
- [ ] Test permissions and access control
- [ ] Test error cases
- [ ] Verify no breaking changes
- [ ] Check frontend-backend integration

### 4. **Documentation Phase**
- [ ] Update CHANGELOG.md
- [ ] Update the version number
- [ ] Update relevant docs (if architecture/API changes)
- [ ] Add code comments for complex logic
- [ ] Update API documentation (if endpoints changed)

---
## 🎯 Specific Development Rules

### Backend Development

#### **Models:**
```python
# ALWAYS inherit from the proper base classes
# (adjust the import path if SoftDeletableModel lives in another module)
from igny8_core.auth.models import SoftDeletableModel, SiteSectorBaseModel

class MyModel(SoftDeletableModel, SiteSectorBaseModel):
    # Your fields here
    pass
```

#### **Services:**
```python
# Follow the service pattern
class MyService:
    def __init__(self):
        self.credit_service = CreditService()
        self.limit_service = LimitService()

    def my_operation(self, account, site, **kwargs):
        # 1. Validate permissions
        # 2. Check limits/credits
        # 3. Perform the operation
        # 4. Track usage
        # 5. Return the result
        pass
```

#### **API Views:**
```python
# Use the proper permission classes
class MyViewSet(viewsets.ModelViewSet):
    permission_classes = [IsAuthenticatedAndActive, HasTenantAccess]

    def get_queryset(self):
        # ALWAYS scope by account
        return MyModel.objects.filter(
            site__account=self.request.user.account
        )
```
#### **Migrations:**
- Run `python manage.py makemigrations` after model changes
- Preview migrations with `python manage.py migrate --plan`
- Never edit existing migrations
- Use data migrations for complex data changes

### Frontend Development

#### **Components:**
```typescript
// Use the existing component library
import { Card } from '@/components/ui/card';
import Button from '@/components/ui/button/Button';

// Follow naming conventions
export default function MyComponent() {
  // Component logic
}
```

#### **State Management:**
```typescript
// Use Zustand stores
import { useAuthStore } from '@/store/authStore';

const { user, account } = useAuthStore();
```

#### **API Calls:**
```typescript
// Use fetchAPI from services/api.ts
import { fetchAPI } from '@/services/api';

const data = await fetchAPI('/v1/my-endpoint/');
```

#### **Styling:**
```typescript
// Use TailwindCSS classes
<div className="p-6 bg-white dark:bg-gray-800 rounded-lg shadow">
  <h1 className="text-2xl font-bold text-gray-900 dark:text-white">
    My Heading
  </h1>
</div>
```

---
## 🚫 Common Pitfalls to Avoid

### **DON'T:**
- ❌ Skip account scoping in queries
- ❌ Use the legacy AI processor (`utils/ai_processor.py`) - use the `ai/` framework
- ❌ Hardcode values - use settings or constants
- ❌ Forget error handling
- ❌ Skip permission checks
- ❌ Create duplicate components - reuse existing ones
- ❌ Use inline styles - use TailwindCSS
- ❌ Forget to update the CHANGELOG
- ❌ Use workarounds - fix the root cause
- ❌ Skip migrations after model changes

### **DO:**
- ✅ Read documentation before coding
- ✅ Follow existing patterns
- ✅ Use proper base classes
- ✅ Check permissions and limits
- ✅ Handle errors gracefully
- ✅ Return valid errors, not fallbacks
- ✅ Update the CHANGELOG for every change
- ✅ Test multi-tenancy isolation
- ✅ Use TypeScript types
- ✅ Write clean, documented code

---

## 🔍 Code Review Checklist

Before committing code, verify:
- [ ] Follows existing code patterns
- [ ] Properly scoped by account/site
- [ ] Permissions checked
- [ ] Error handling implemented
- [ ] No breaking changes
- [ ] CHANGELOG.md updated
- [ ] Version number incremented
- [ ] Documentation updated (if needed)
- [ ] Tested locally
- [ ] No console errors or warnings
- [ ] TypeScript types added/updated
- [ ] Migrations created (if model changes)

---
## 📚 Key Architecture Concepts

### **Credit System:**
- All AI operations cost credits
- Check credits before the operation: `CreditService.check_credits()`
- Deduct after the operation: `CreditService.deduct_credits()`
- Track usage in the `CreditUsageLog` table
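The check-then-deduct flow can be sketched with a minimal in-memory stand-in. `CreditServiceSketch` and its method bodies are illustrative assumptions, not the real `CreditService` implementation; only the method names come from the rules above:

```python
# Minimal in-memory sketch of the check -> operate -> deduct flow.
class InsufficientCredits(Exception):
    pass

class CreditServiceSketch:
    def __init__(self, balance: int):
        self.balance = balance
        self.usage_log = []  # stand-in for the CreditUsageLog table

    def check_credits(self, cost: int) -> None:
        # Raise before the operation runs if the balance is too low.
        if self.balance < cost:
            raise InsufficientCredits(f"need {cost}, have {self.balance}")

    def deduct_credits(self, cost: int, operation: str) -> None:
        # Deduct after the operation succeeds, and log the usage.
        self.balance -= cost
        self.usage_log.append({"operation": operation, "cost": cost})

svc = CreditServiceSketch(balance=10)
svc.check_credits(4)                     # 1. check before the operation
# ... run the AI operation here ...
svc.deduct_credits(4, "generate_idea")   # 2. deduct and log afterwards
print(svc.balance)  # 6
```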
### **Limit System:**
- Hard limits are persistent (sites, users, keywords, clusters)
- Monthly limits reset on the billing cycle (ideas, words, images)
- Track usage in the `PlanLimitUsage` table
- Check before the operation: `LimitService.check_limit()`
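The hard-versus-monthly distinction can be sketched the same way. `LimitServiceSketch` is a hypothetical stand-in for the real `LimitService`; only `check_limit` and the reset-on-billing-cycle behavior come from the rules above:

```python
# Sketch of hard vs. monthly limits (illustrative stand-in only).
class LimitServiceSketch:
    def __init__(self, limits, usage):
        self.limits = limits  # plan limits, e.g. {"sites": 3, "ideas": 100}
        self.usage = usage    # stand-in for PlanLimitUsage rows

    def check_limit(self, key: str) -> bool:
        # True while the account still has allowance for this key.
        return self.usage.get(key, 0) < self.limits[key]

    def reset_monthly(self, monthly_keys) -> None:
        # Monthly limits reset on the billing cycle; hard limits persist.
        for key in monthly_keys:
            self.usage[key] = 0

svc = LimitServiceSketch({"sites": 3, "ideas": 100}, {"sites": 3, "ideas": 40})
print(svc.check_limit("sites"))   # False: hard limit already reached
print(svc.check_limit("ideas"))   # True: monthly allowance remains
svc.reset_monthly(["ideas"])      # e.g. run by the monthly reset task
print(svc.usage["ideas"])         # 0
```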
### **AI Framework:**
- Use `ai/engine.py` for AI operations
- Use `ai/functions/` for specific AI tasks
- Use `ai/models.py` for tracking
- Don't use the legacy `utils/ai_processor.py`

### **Multi-Tenancy:**
- Every request has `request.user.account`
- All models are scoped by account, either directly or via site
- Use `AccountBaseModel` or `SiteSectorBaseModel`
- Validate ownership before mutations

---
## 🎨 Design System

### **Colors:**
- Primary: Blue (#0693e3)
- Success: Green (#0bbf87)
- Error: Red (#ef4444)
- Warning: Yellow (#f59e0b)
- Info: Blue (#3b82f6)

### **Typography:**
- Headings: font-bold
- Body: font-normal
- Small text: text-sm
- Large text: text-lg, text-xl, text-2xl

### **Spacing:**
- Padding: p-4, p-6 (standard)
- Margin: mt-4, mb-6 (standard)
- Gap: gap-4, gap-6 (standard)

### **Components:**
- Card: `<Card>` with padding and shadow
- Button: `<Button>` with variants (primary, secondary, danger)
- Input: `<Input>` with proper validation
- Badge: `<Badge>` with color variants

---

## 📞 Support & Questions

- Architecture questions → Check `ARCHITECTURE-KNOWLEDGE-BASE.md`
- Feature questions → Check `IGNY8-COMPLETE-FEATURES-GUIDE.md`
- API questions → Check `docs/20-API/`
- Recent changes → Check `CHANGELOG.md`

---

**Remember:** Quality over speed. Take time to understand existing patterns before implementing new features.
.rules (new file, 362 lines)

@@ -0,0 +1,362 @@
# IGNY8 AI Agent Rules

**Version:** 1.2.0 | **Updated:** January 2, 2026

---

## 🚀 Quick Start for AI Agents

**BEFORE any change, read these docs in order:**
1. [docs/INDEX.md](docs/INDEX.md) - Quick navigation to any module/feature
2. [docs/30-FRONTEND/COMPONENT-SYSTEM.md](docs/30-FRONTEND/COMPONENT-SYSTEM.md) - **REQUIRED** for any frontend work
3. [docs/30-FRONTEND/DESIGN-TOKENS.md](docs/30-FRONTEND/DESIGN-TOKENS.md) - Color tokens and styling rules
4. The module doc for the feature you're modifying (see INDEX.md for paths)
5. [CHANGELOG.md](CHANGELOG.md) - Recent changes and version history

---
## 📁 Project Structure

| Layer | Path | Purpose |
|-------|------|---------|
| Backend | `backend/igny8_core/` | Django REST API |
| Frontend | `frontend/src/` | React + TypeScript SPA |
| Docs | `docs/` | Technical documentation |
| AI Engine | `backend/igny8_core/ai/` | AI functions (use this, NOT `utils/ai_processor.py`) |
| Design Tokens | `frontend/src/styles/design-system.css` | **Single source** for colors, shadows, typography |
| UI Components | `frontend/src/components/ui/` | Button, Badge, Card, Modal, etc. |
| Form Components | `frontend/src/components/form/` | InputField, Select, Checkbox, Switch |
| Icons | `frontend/src/icons/` | All SVG icons (import from `../../icons`) |

**Module → File Quick Reference:** See [docs/INDEX.md](docs/INDEX.md#module--file-quick-reference)

---
## ⚠️ Module Status

| Module | Status | Notes |
|--------|--------|-------|
| Planner | ✅ Active | Keywords, Clusters, Ideas |
| Writer | ✅ Active | Tasks, Content, Images |
| Automation | ✅ Active | 7-stage pipeline |
| Billing | ✅ Active | Credits, Plans |
| Publisher | ✅ Active | WordPress publishing |
| **Linker** | ⏸️ Inactive | Exists but disabled - Phase 2 |
| **Optimizer** | ⏸️ Inactive | Exists but disabled - Phase 2 |
| **SiteBuilder** | ❌ Removed | Code exists but is NOT part of the app - mark for removal in TODOS.md |

**Important:**
- Do NOT work on Linker/Optimizer unless specifically requested
- SiteBuilder code is deprecated - if found, add it to `TODOS.md` for cleanup

---
## 🎨 DESIGN SYSTEM RULES (CRITICAL!)

> **🔒 STYLE LOCKED** - All UI must use the design system. ESLint enforces these rules.

### Color System (Only 6 Base Colors!)

All colors in the system derive from 6 primary hex values in `design-system.css`:
- `--color-primary` (#0077B6) - Brand Blue
- `--color-success` (#2CA18E) - Success Green
- `--color-warning` (#D9A12C) - Warning Amber
- `--color-danger` (#A12C40) - Danger Red
- `--color-purple` (#2C40A1) - Purple accent
- `--color-gray-base` (#667085) - Neutral gray

### Tailwind Color Classes

**✅ USE ONLY THESE** (Tailwind defaults are DISABLED):
```
brand-*   (50-950) - Primary blue scale
gray-*    (25-950) - Neutral scale
success-* (25-950) - Green scale
error-*   (25-950) - Red scale
warning-* (25-950) - Amber scale
purple-*  (25-950) - Purple scale
```

**❌ BANNED** (These will NOT work):
```
blue-*, red-*, green-*, emerald-*, amber-*, indigo-*,
pink-*, rose-*, sky-*, teal-*, cyan-*, etc.
```

### Styling Rules

| ✅ DO | ❌ DON'T |
|-------|---------|
| `className="bg-brand-500"` | `className="bg-blue-500"` |
| `className="text-gray-700"` | `className="text-[#333]"` |
| `<Button variant="primary">` | `<button className="...">` |
| Import from `../../icons` | Import from `@heroicons/*` |
| Use CSS variables `var(--color-primary)` | Hardcode hex values |

---
## 🧩 COMPONENT RULES (ESLint Enforced!)

> **Never use raw HTML elements** - Use design system components.

### Required Component Mappings

| HTML Element | Required Component | Import Path |
|--------------|-------------------|-------------|
| `<button>` | `Button` or `IconButton` | `components/ui/button/Button` |
| `<input type="text/email/password">` | `InputField` | `components/form/input/InputField` |
| `<input type="checkbox">` | `Checkbox` | `components/form/input/Checkbox` |
| `<input type="radio">` | `Radio` | `components/form/input/Radio` |
| `<select>` | `Select` or `SelectDropdown` | `components/form/Select` |
| `<textarea>` | `TextArea` | `components/form/input/TextArea` |

### Component Quick Reference

```tsx
// Buttons
<Button variant="primary" tone="brand">Save</Button>
<Button variant="outline" tone="danger">Delete</Button>
<IconButton icon={<CloseIcon />} variant="ghost" title="Close" />

// Form Inputs
<InputField type="text" label="Name" value={val} onChange={setVal} />
<Select options={opts} onChange={setVal} />
<Checkbox label="Accept" checked={val} onChange={setVal} />
<Switch label="Enable" checked={val} onChange={setVal} />

// Display
<Badge tone="success" variant="soft">Active</Badge>
<Alert variant="error" title="Error" message="Failed" />
<Spinner size="md" />
```

### Icon Rules

**Always import from the central location:**
```tsx
// ✅ CORRECT
import { PlusIcon, CloseIcon, CheckCircleIcon } from '../../icons';

// ❌ BANNED - External icon libraries
import { XIcon } from '@heroicons/react/24/outline';
import { Trash } from 'lucide-react';
```

**Icon sizing:**
- `className="w-4 h-4"` - In buttons, badges
- `className="w-5 h-5"` - Standalone
- `className="w-6 h-6"` - Headers, features

---
## 🐳 Docker Commands (IMPORTANT!)

**Container Names:**

| Container | Name | Purpose |
|-----------|------|---------|
| Backend | `igny8_backend` | Django API server |
| Frontend | `igny8_frontend` | React dev server |
| Celery Worker | `igny8_celery_worker` | Background tasks |
| Celery Beat | `igny8_celery_beat` | Scheduled tasks |

**Run commands INSIDE containers:**
```bash
# ✅ CORRECT - Run Django management commands
docker exec -it igny8_backend python manage.py migrate
docker exec -it igny8_backend python manage.py makemigrations
docker exec -it igny8_backend python manage.py shell

# ✅ CORRECT - Run npm commands
docker exec -it igny8_frontend npm install
docker exec -it igny8_frontend npm run build
docker exec -it igny8_frontend npm run lint  # Check design system violations

# ✅ CORRECT - View logs
docker logs igny8_backend -f
docker logs igny8_celery_worker -f

# ❌ WRONG - Don't use docker-compose for commands
# docker-compose exec backend python manage.py migrate
```

---
## 📊 Data Scoping (CRITICAL!)

**Understand which data is scoped where:**

| Scope | Models | Notes |
|-------|--------|-------|
| **Global (Platform-wide)** | `GlobalIntegrationSettings`, `GlobalAIPrompt`, `GlobalAuthorProfile`, `GlobalStrategy`, `GlobalModuleSettings`, `Industry`, `SeedKeyword` | Admin-only, shared by ALL accounts |
| **Account-scoped** | `Account`, `User`, `Plan`, `IntegrationSettings`, `ModuleEnableSettings`, `AISettings`, `AIPrompt`, `AuthorProfile` | Filter by `account` |
| **Site+Sector-scoped** | `Keywords`, `Clusters`, `ContentIdeas`, `Tasks`, `Content`, `Images` | Filter by `site` AND optionally `sector` |

**Key Rules:**
- Global settings: NO account filtering (platform-wide, admin-managed)
- Account models: Use `AccountBaseModel`, filter by `request.user.account`
- Site/Sector models: Use `SiteSectorBaseModel`, filter by `site` and `sector`
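The three scoping rules above can be sketched as one dispatch function. This is a plain-Python illustration under assumed names (`scope_queryset` and the dict rows are stand-ins; the real code uses Django managers and the base classes named above):

```python
# Illustrative dispatch of the Global / Account / Site+Sector scoping rules.
def scope_queryset(rows, scope, account_id=None, site_id=None, sector=None):
    if scope == "global":
        return list(rows)  # platform-wide: no account filtering
    if scope == "account":
        return [r for r in rows if r["account_id"] == account_id]
    if scope == "site_sector":
        out = [r for r in rows if r["site_id"] == site_id]
        if sector is not None:  # sector filtering is optional
            out = [r for r in out if r["sector"] == sector]
        return out
    raise ValueError(f"unknown scope: {scope}")

rows = [
    {"account_id": 1, "site_id": 10, "sector": "health"},
    {"account_id": 1, "site_id": 11, "sector": "tech"},
    {"account_id": 2, "site_id": 20, "sector": "tech"},
]
print(len(scope_queryset(rows, "global")))                 # 3
print(len(scope_queryset(rows, "account", account_id=1)))  # 2
print(len(scope_queryset(rows, "site_sector", site_id=11, sector="tech")))  # 1
```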
---

## ✅ Rules (One Line Each)

### Before Coding
1. **Read docs first** - Always read the relevant module doc from `docs/10-MODULES/` before changing code
2. **Read COMPONENT-SYSTEM.md** - **REQUIRED** before any frontend changes
3. **Check existing patterns** - Search the codebase for similar implementations before creating new ones
4. **Use existing components** - Never duplicate; reuse components from `frontend/src/components/`
5. **Check data scope** - Know whether your model is Global, Account, or Site/Sector scoped (see table above)

### During Coding - Backend
6. **Use the correct base class** - Global: `models.Model`, Account: `AccountBaseModel`, Site: `SiteSectorBaseModel`
7. **Use the AI framework** - Use `backend/igny8_core/ai/` for AI operations, NOT the legacy `utils/ai_processor.py`
8. **Follow the service pattern** - Business logic lives in `backend/igny8_core/business/*/services/`
9. **Check permissions** - Use `IsAuthenticatedAndActive` and `HasTenantAccess` in views

### During Coding - Frontend (DESIGN SYSTEM)
10. **Use design system components** - Button, InputField, Select, Badge, Card - never raw HTML
11. **Use only design system colors** - `brand-*`, `gray-*`, `success-*`, `error-*`, `warning-*`, `purple-*`
12. **Import icons from the central location** - `import { Icon } from '../../icons'` - never external libraries
13. **No inline styles** - Use Tailwind utilities or CSS variables only
14. **No hardcoded colors** - No hex values, no `blue-500`, `red-500` (Tailwind defaults are disabled)
15. **Use TypeScript types** - All frontend code must be typed

### After Coding
16. **Run ESLint** - `docker exec -it igny8_frontend npm run lint` to check for design system violations
17. **Update CHANGELOG.md** - Every commit needs a changelog entry with a git reference
18. **Increment the version** - PATCH for fixes, MINOR for features, MAJOR for breaking changes
19. **Update docs** - If you changed APIs or architecture, update the relevant docs in `docs/`
20. **Run migrations** - After model changes: `docker exec -it igny8_backend python manage.py makemigrations`

---
## 📝 Changelog Format

```markdown
## v1.1.1 - December 27, 2025

### Fixed
- Description here (git: abc1234)

### Added
- Description here (git: def5678)

### Changed
- Description here (git: ghi9012)
```

---

## 🔗 Key Documentation

| I want to... | Go to |
|--------------|-------|
| Find any module | [docs/INDEX.md](docs/INDEX.md) |
| **Use UI components** | [docs/30-FRONTEND/COMPONENT-SYSTEM.md](docs/30-FRONTEND/COMPONENT-SYSTEM.md) |
| **Check design tokens** | [docs/30-FRONTEND/DESIGN-TOKENS.md](docs/30-FRONTEND/DESIGN-TOKENS.md) |
| **Design guide** | [docs/30-FRONTEND/DESIGN-GUIDE.md](docs/30-FRONTEND/DESIGN-GUIDE.md) |
| Understand architecture | [docs/00-SYSTEM/ARCHITECTURE.md](docs/00-SYSTEM/ARCHITECTURE.md) |
| Find an API endpoint | [docs/20-API/ENDPOINTS.md](docs/20-API/ENDPOINTS.md) |
| See all models | [docs/90-REFERENCE/MODELS.md](docs/90-REFERENCE/MODELS.md) |
| Understand AI functions | [docs/90-REFERENCE/AI-FUNCTIONS.md](docs/90-REFERENCE/AI-FUNCTIONS.md) |
| See frontend pages | [docs/30-FRONTEND/PAGES.md](docs/30-FRONTEND/PAGES.md) |
| See recent changes | [CHANGELOG.md](CHANGELOG.md) |
| View component demos | App route: `/ui-elements` |

---
## 🚫 Don't Do

### General
- ❌ Skip reading docs before coding
- ❌ Create duplicate components
- ❌ Use `docker-compose` for exec commands (use `docker exec`)
- ❌ Use the legacy `utils/ai_processor.py`
- ❌ Add account filtering to Global models (they're platform-wide!)
- ❌ Forget site/sector filtering on content models
- ❌ Forget to update the CHANGELOG
- ❌ Hardcode values (use settings/constants)
- ❌ Work on Linker/Optimizer (inactive modules - Phase 2)
- ❌ Use any SiteBuilder code (deprecated - mark for removal)

### Frontend - DESIGN SYSTEM VIOLATIONS
- ❌ Use raw `<button>` - use `Button` or `IconButton`
- ❌ Use raw `<input>` - use `InputField`, `Checkbox`, `Radio`
- ❌ Use raw `<select>` - use `Select` or `SelectDropdown`
- ❌ Use raw `<textarea>` - use `TextArea`
- ❌ Use inline `style={}` attributes
- ❌ Hardcode hex colors (`#0693e3`, `#ff0000`)
- ❌ Use Tailwind default colors (`blue-500`, `red-500`, `green-500`)
- ❌ Import from `@heroicons/*`, `lucide-react`, `@mui/icons-material`
- ❌ Create new CSS files (use `design-system.css` only)

---

## 📊 API Base URLs

| Module | Base URL |
|--------|----------|
| Auth | `/api/v1/auth/` |
| Planner | `/api/v1/planner/` |
| Writer | `/api/v1/writer/` |
| Billing | `/api/v1/billing/` |
| Integration | `/api/v1/integration/` |
| System | `/api/v1/system/` |

**API Docs:** https://api.igny8.com/api/docs/
**Admin:** https://api.igny8.com/admin/
**App:** https://app.igny8.com/

---
## 📄 Documentation Rules

**Root folder MD files allowed (ONLY these):**
- `.rules` - AI agent rules (this file)
- `CHANGELOG.md` - Version history
- `README.md` - Project quickstart

**All other docs go in the `/docs/` folder:**
```
docs/
├── INDEX.md        # Master navigation
├── 00-SYSTEM/      # Architecture, auth, tenancy, IGNY8-APP.md
├── 10-MODULES/     # One file per module
├── 20-API/         # API endpoints
├── 30-FRONTEND/    # Pages, stores, DESIGN-GUIDE, DESIGN-TOKENS, COMPONENT-SYSTEM
├── 40-WORKFLOWS/   # Cross-module flows
├── 90-REFERENCE/   # Models, AI functions, FIXES-KB
└── plans/          # FINAL-PRELAUNCH, implementation plans
```

**When updating docs:**

| Change Type | Update These Files |
|-------------|-------------------|
| New endpoint | Module doc + `docs/20-API/ENDPOINTS.md` |
| New model | Module doc + `docs/90-REFERENCE/MODELS.md` |
| New page | Module doc + `docs/30-FRONTEND/PAGES.md` |
| New module | Create the module doc + update `docs/INDEX.md` |

**DO NOT** create random MD files - update existing docs instead.

---

## 🎯 Quick Checklist Before Commit

### Backend Changes
- [ ] Read the relevant module docs
- [ ] Used the correct data scope (Global/Account/Site)
- [ ] Ran migrations if a model changed

### Frontend Changes
- [ ] Read COMPONENT-SYSTEM.md
- [ ] Used design system components (not raw HTML)
- [ ] Used design system colors (brand-*, gray-*, success-*, error-*, warning-*, purple-*)
- [ ] Icons imported from `../../icons`
- [ ] No inline styles or hardcoded hex colors
- [ ] Ran `npm run lint` - no design system violations

### All Changes
- [ ] Updated CHANGELOG.md with a git reference
- [ ] Incremented the version number
- [ ] Tested locally
# GLOBAL SETTINGS - DJANGO ADMIN ACCESS GUIDE

**Last Updated**: December 20, 2025
**Status**: ✅ READY TO USE

---

## WHERE TO FIND GLOBAL SETTINGS IN DJANGO ADMIN

### 1. Global AI Integration Settings (API Keys)

**URL**: http://your-domain.com/admin/system/globalintegrationsettings/

**What It Controls**:
- OpenAI API key (for text generation)
- OpenAI model selection (gpt-4, gpt-3.5-turbo, etc.)
- OpenAI temperature and max_tokens
- DALL-E API key (for image generation)
- DALL-E model, size, quality, style
- Anthropic API key (for Claude)
- Anthropic model selection
- Runware API key (for advanced image generation)

**Important**:
- This is a SINGLETON - only ONE record exists (ID=1)
- Changes here affect ALL accounts by default
- Enterprise accounts can override with their own keys

**How to Configure**:
1. Log in to Django Admin as a superuser
2. Navigate to: System → Global integration settings
3. Click the single "Global Integration Settings" entry
4. Fill in your platform-wide API keys
5. Set default models and parameters
6. Save

---
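The singleton access above can be sketched in plain Python. This is a minimal illustration of the `get_instance()` pattern, not the real model: the Django version loads the single database row at pk=1, and any field names beyond those shown in the list above are assumptions.

```python
# Minimal sketch of the singleton pattern behind GlobalIntegrationSettings
# (plain Python, no ORM; the Django model does get_or_create(pk=1) instead
# of a class-level cache).

class GlobalIntegrationSettings:
    _instance = None  # cached single record (pk=1 in the real model)

    def __init__(self):
        self.openai_api_key = ""    # platform-wide key, set via admin
        self.openai_model = "gpt-4" # illustrative default

    @classmethod
    def get_instance(cls):
        # The Django version does: obj, _ = cls.objects.get_or_create(pk=1)
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

settings = GlobalIntegrationSettings.get_instance()
assert settings is GlobalIntegrationSettings.get_instance()  # always the same record
```

Because every caller receives the same record, an admin edit is visible platform-wide on the next read, which is what makes these settings "global".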

### 2. Account Integration Overrides (Per-Account API Keys)

**URL**: http://your-domain.com/admin/system/accountintegrationoverride/

**What It Controls**:
- Per-account API key overrides for enterprise customers
- Each account can optionally use its own keys
- Falls back to global if not configured

**Fields**:
- Account (select which account)
- use_own_keys (checkbox - if unchecked, uses global)
- Same API key fields as global (all optional)

**How to Configure**:
1. Navigate to: System → Account integration overrides
2. Click "Add account integration override"
3. Select the account
4. Check "Use own keys"
5. Fill in their API keys
6. Save

**How It Works**:
- If the account has an override with use_own_keys=True → uses their keys
- If the account has NO override OR use_own_keys=False → uses global keys
- The override can be deleted or disabled to revert the account to global keys

---
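The fallback rules in "How It Works" can be sketched as a small resolution function. This is a plain-Python stand-in under the assumptions above, not the project's actual implementation (which queries AccountIntegrationOverride and GlobalIntegrationSettings).

```python
# Sketch of the key-resolution fallback described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Override:
    use_own_keys: bool
    openai_api_key: str = ""

def effective_openai_key(override: Optional[Override], global_key: str) -> str:
    # The override wins only when it exists, use_own_keys is checked,
    # and a key is actually filled in; everything else falls back to global.
    if override is not None and override.use_own_keys and override.openai_api_key:
        return override.openai_api_key
    return global_key

assert effective_openai_key(None, "sk-global") == "sk-global"
assert effective_openai_key(Override(False, "sk-acct"), "sk-global") == "sk-global"
assert effective_openai_key(Override(True, "sk-acct"), "sk-global") == "sk-acct"
```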

### 3. Global AI Prompts (Prompt Templates Library)

**URL**: http://your-domain.com/admin/system/globalaiprompt/

**What It Controls**:
- Platform-wide default AI prompt templates
- Used for clustering, content generation, ideas, etc.
- All accounts can use these prompts
- Accounts can customize their own versions

**Fields**:
- Prompt type (clustering, ideas, content_generation, etc.)
- Prompt value (the actual prompt template)
- Description (what this prompt does)
- Variables (list of available variables like {keyword}, {industry})
- Version (for tracking changes)
- Is active (enable/disable)

**How to Configure**:
1. Navigate to: System → Global ai prompts
2. Click "Add global ai prompt"
3. Select a prompt type (or create a new one)
4. Write your prompt template
5. List the variables it uses
6. Mark it as active
7. Save

**Account Usage**:
- Accounts automatically use global prompts
- Accounts can create customized versions in their own AIPrompt records
- Accounts can reset to global at any time

---
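To illustrate the "Variables" field, a template with `{keyword}`-style placeholders might be rendered like this. The rendering function, the template text, and the variable values are all hypothetical; only the `{keyword}` / `{industry}` placeholder names come from the field description above.

```python
# Hypothetical rendering of a stored prompt template with {variable}
# placeholders, as listed in the "Variables" field.

template = "Write a clustering summary for '{keyword}' in the {industry} industry."

def render_prompt(template: str, **variables: str) -> str:
    # format_map raises a clear KeyError if a listed variable is missing,
    # which surfaces template/variable mismatches early.
    return template.format_map(variables)

print(render_prompt(template, keyword="crm software", industry="SaaS"))
```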

### 4. Global Author Profiles (Persona Templates Library)

**URL**: http://your-domain.com/admin/system/globalauthorprofile/

**What It Controls**:
- Platform-wide author persona templates
- Tone of voice configurations
- Writing style templates
- Accounts can clone and customize

**Fields**:
- Name (e.g., "SaaS B2B Professional")
- Description (what this persona is for)
- Tone (professional, casual, technical, etc.)
- Language (en, es, fr, etc.)
- Structure template (JSON config for content structure)
- Category (saas, ecommerce, blog, technical, creative)
- Is active (enable/disable)

**How to Configure**:
1. Navigate to: System → Global author profiles
2. Click "Add global author profile"
3. Create a persona template
4. Set tone and language
5. Add a structure template if needed
6. Assign a category
7. Save

**Account Usage**:
- Accounts browse the global library
- Accounts clone a template to create their own version
- The cloned version is stored in the AuthorProfile model with a cloned_from reference
- Accounts can customize their clone without affecting global

---

### 5. Global Strategies (Content Strategy Templates)

**URL**: http://your-domain.com/admin/system/globalstrategy/

**What It Controls**:
- Platform-wide content strategy templates
- Section structures for different content types
- Prompt sequences for content generation
- Accounts can clone and customize

**Fields**:
- Name (e.g., "SEO Blog Post Strategy")
- Description (what this strategy achieves)
- Category (blog, product, howto, comparison, etc.)
- Prompt types (which prompts to use)
- Section logic (JSON config for content sections)
- Is active (enable/disable)

**How to Configure**:
1. Navigate to: System → Global strategies
2. Click "Add global strategy"
3. Create a strategy template
4. Define the section structure
5. Specify which prompts to use
6. Add section logic JSON
7. Save

**Account Usage**:
- Similar to author profiles
- Accounts clone global templates
- Customize for their needs
- Track origin via the cloned_from field

---

## ACCOUNT-SPECIFIC MODELS (Not Global)

These remain account-specific as originally designed:

### AIPrompt (Account-Level)
**URL**: /admin/system/aiprompt/
- Per-account AI prompt customizations
- References global prompts by default
- Can be customized (is_customized=True)
- Can reset to global at any time

### AuthorProfile (Account-Level)
**URL**: /admin/system/authorprofile/
- Per-account author personas
- Can be cloned from global (cloned_from field)
- Can be created from scratch (is_custom=True)

### Strategy (Account-Level)
**URL**: /admin/system/strategy/
- Per-account content strategies
- Can be cloned from global
- Can be created from scratch

### IntegrationSettings (Account-Level) - DEPRECATED
**URL**: /admin/system/integrationsettings/
**Status**: This model is being phased out in favor of the Global + Override pattern
**Do Not Use**: Use GlobalIntegrationSettings and AccountIntegrationOverride instead

---

## NAVIGATION IN DJANGO ADMIN

When you log in to Django Admin, you'll see:

```
SYSTEM
├── Global Integration Settings (1 entry - singleton)
├── Account Integration Overrides (0+ entries - one per enterprise account)
├── Global AI Prompts (library of prompt templates)
├── Global Author Profiles (library of persona templates)
├── Global Strategies (library of strategy templates)
├── AI Prompts (per-account customizations)
├── Author Profiles (per-account personas)
├── Strategies (per-account strategies)
└── Integration Settings (DEPRECATED - old model)
```

---

## QUICK START CHECKLIST

After deployment, configure in this order:

1. **Set Global API Keys** (/admin/system/globalintegrationsettings/)
   - [ ] OpenAI API key
   - [ ] DALL-E API key
   - [ ] Anthropic API key (optional)
   - [ ] Runware API key (optional)
   - [ ] Set default models and parameters

2. **Create Global Prompt Library** (/admin/system/globalaiprompt/)
   - [ ] Clustering prompt
   - [ ] Content ideas prompt
   - [ ] Content generation prompt
   - [ ] Meta description prompt
   - [ ] Title generation prompt

3. **Create Global Author Profiles** (/admin/system/globalauthorprofile/)
   - [ ] Professional B2B profile
   - [ ] E-commerce profile
   - [ ] Blog/casual profile
   - [ ] Technical profile
   - [ ] Creative profile

4. **Create Global Strategies** (/admin/system/globalstrategy/)
   - [ ] SEO blog post strategy
   - [ ] Product launch strategy
   - [ ] How-to guide strategy
   - [ ] Comparison article strategy

5. **Test with a Regular Account**
   - [ ] Create content using global prompts
   - [ ] Verify global API keys work
   - [ ] Test cloning profiles/strategies

6. **Configure Enterprise Account** (if needed)
   - [ ] Create AccountIntegrationOverride
   - [ ] Add their API keys
   - [ ] Enable use_own_keys
   - [ ] Test their custom keys work

---

## TROUBLESHOOTING

**Problem**: Can't see Global Integration Settings in admin

**Solution**:
1. Check that you're logged in as a superuser
2. Refresh the page
3. Check the URL: /admin/system/globalintegrationsettings/
4. Verify the migration applied: `docker exec igny8_backend python manage.py showmigrations system`

---

**Problem**: Global settings not taking effect

**Solution**:
1. Check that GlobalIntegrationSettings has values saved
2. Verify is_active=True
3. Check there is no AccountIntegrationOverride for the account
4. Restart the backend: `docker restart igny8_backend`

---

**Problem**: Account override not working

**Solution**:
1. Check that the use_own_keys checkbox is enabled
2. Verify the API keys are filled in
3. Check that the correct account is selected
4. Test the API keys manually

---

## API ACCESS TO GLOBAL SETTINGS

Application code can access global settings:

**Get Global Integration Settings**:
```python
from igny8_core.modules.system.global_settings_models import GlobalIntegrationSettings
settings = GlobalIntegrationSettings.get_instance()
```

**Get Effective Settings for Account** (checks override, falls back to global):
```python
from igny8_core.ai.settings import get_openai_settings
settings = get_openai_settings(account)
```

**Get Global Prompt**:
```python
from igny8_core.modules.system.global_settings_models import GlobalAIPrompt
prompt = GlobalAIPrompt.objects.get(prompt_type='clustering', is_active=True)
```

---

*For complete implementation details, see COMPLETE-IMPLEMENTATION-GUIDE.md*
# GLOBAL SETTINGS - CORRECT IMPLEMENTATION

**Date**: December 20, 2025
**Status**: ✅ FIXED AND WORKING

---

## WHAT WAS WRONG

The initial implementation had:
- An AccountIntegrationOverride model allowing users to use their own API keys
- References to an Enterprise plan that doesn't exist
- Confusing override logic where accounts could bring their own API keys

## WHAT IS NOW CORRECT

### Architecture

**1. Plans (Only 4 Valid)**:
- Free Plan - Cannot override anything, uses global defaults
- Starter Plan - Can override model/settings
- Growth Plan - Can override model/settings
- Scale Plan - Can override model/settings

**2. API Keys** (Platform-Wide):
- Stored in GlobalIntegrationSettings (singleton, pk=1)
- ALL accounts use platform API keys
- NO user can bring their own API keys
- NO exceptions for any plan level

**3. Model & Parameter Overrides** (Per-Account):
- Stored in the IntegrationSettings model (per-account)
- Free plan: CANNOT create overrides
- Starter/Growth/Scale: CAN override model, temperature, max_tokens, image settings
- NULL values in config = use global default
- API keys NEVER stored here

**4. Prompts** (Global + Override):
- GlobalAIPrompt: Platform-wide default prompts
- AIPrompt: Per-account, with a default_prompt field
- When a user customizes: prompt_value changes, default_prompt stays the same
- Reset to default: Copies default_prompt back to prompt_value
- The is_customized flag tracks whether the custom or default text is in use

---
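The "NULL values in config = use global default" rule above amounts to a merge of the account's config over the global defaults. A plain-Python sketch (the key names and default values here are illustrative, not the project's actual defaults):

```python
# Sketch of the per-account override merge: account values win,
# None (NULL in the database) falls through to the global default.

GLOBAL_DEFAULTS = {"model": "gpt-4o-mini", "temperature": 0.7, "max_tokens": 2000}

def effective_config(account_config: dict) -> dict:
    merged = dict(GLOBAL_DEFAULTS)
    for key, value in account_config.items():
        if value is not None:
            merged[key] = value
    return merged

# Account overrides the model only; temperature is NULL, so global wins.
assert effective_config({"model": "gpt-4", "temperature": None}) == {
    "model": "gpt-4", "temperature": 0.7, "max_tokens": 2000}
```

Note that API keys never enter this merge at all; they always come from GlobalIntegrationSettings.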

## WHERE TO FIND SETTINGS IN DJANGO ADMIN

### 1. Global Integration Settings
**URL**: /admin/system/globalintegrationsettings/

**What It Stores**:
- Platform OpenAI API key (used by ALL accounts)
- Platform DALL-E API key (used by ALL accounts)
- Platform Anthropic API key (used by ALL accounts)
- Platform Runware API key (used by ALL accounts)
- Default model selections for each service
- Default parameters (temperature, max_tokens, image quality, etc.)

**Important**:
- Singleton model (only 1 record, pk=1)
- Changes affect ALL accounts using global defaults
- Free plan accounts MUST use these (cannot override)
- Other plans can override model/params but NOT API keys

### 2. Integration Settings (Per-Account Overrides)
**URL**: /admin/system/integrationsettings/

**What It Stores**:
- Per-account model selection overrides
- Per-account parameter overrides (temperature, max_tokens, etc.)
- Per-account image setting overrides (size, quality, style)

**What It DOES NOT Store**:
- API keys (those come from global)

**Who Can Create**:
- Starter/Growth/Scale plans only
- Free plan users cannot create these

**How It Works**:
- If the account has an IntegrationSettings record with config values → uses those
- If a config field is NULL or missing → uses the global default
- The API key ALWAYS comes from GlobalIntegrationSettings

**Example Config**:
```json
{
  "model": "gpt-4",
  "temperature": 0.8,
  "max_tokens": 4000
}
```
### 3. Global AI Prompts
**URL**: /admin/system/globalaiprompt/

**What It Stores**:
- Platform-wide default prompt templates
- Used for: clustering, ideas, content_generation, etc.

**How Accounts Use Them**:
- All accounts start with global prompts
- When a user wants to customize, the system creates an AIPrompt record
- AIPrompt.default_prompt = GlobalAIPrompt.prompt_value (for reset)
- AIPrompt.prompt_value = user's custom text
- AIPrompt.is_customized = True

### 4. AI Prompts (Per-Account)
**URL**: /admin/system/aiprompt/

**What It Stores**:
- Account-specific prompt customizations
- default_prompt field = global default (for reset)
- prompt_value = current prompt (custom or default)
- is_customized = True if the user modified it

**Actions Available**:
- "Reset selected prompts to global default" - Copies default_prompt → prompt_value, sets is_customized=False

---

## HOW IT WORKS (Complete Flow)

### Text Generation Request

1. Code calls: `get_model_config(function_name='generate_content', account=some_account)`

2. System gets the API key from GlobalIntegrationSettings:
   - `global_settings = GlobalIntegrationSettings.get_instance()`
   - `api_key = global_settings.openai_api_key`  # ALWAYS from global

3. System checks for account overrides:
   - Try to find IntegrationSettings for this account + integration_type='openai'
   - If found: Use config['model'], config['temperature'], config['max_tokens']
   - If not found OR the config field is NULL: Use global defaults

4. Result returned:
   ```python
   {
       'api_key': 'sk-xxx',    # Always from global
       'model': 'gpt-4',       # From account override OR global
       'temperature': 0.8,     # From account override OR global
       'max_tokens': 4000      # From account override OR global
   }
   ```

### Prompt Retrieval

1. Code calls: `AIPrompt.get_effective_prompt(account=some_account, prompt_type='clustering')`

2. System checks for an account-specific prompt:
   - Try to find AIPrompt for this account + prompt_type
   - If found and is_customized=True: Return prompt_value
   - If found and is_customized=False: Return default_prompt

3. If no account prompt is found:
   - Get the GlobalAIPrompt for prompt_type
   - Return the global prompt_value

### User Customizes a Prompt

1. User edits the prompt in the frontend
2. Frontend saves to the AIPrompt model:
   - If the AIPrompt doesn't exist: Create a new record
   - Set default_prompt = GlobalAIPrompt.prompt_value (for future reset)
   - Set prompt_value = user's custom text
   - Set is_customized = True

### User Resets a Prompt

1. User clicks "Reset to Default"
2. System calls: `AIPrompt.reset_to_default()`
3. The method does:
   - prompt_value = default_prompt
   - is_customized = False
   - save()

---
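The prompt-retrieval and reset flows above can be sketched together. This is a plain-Python stand-in mirroring the stated rules; the real methods live on the Django AIPrompt model and hit the database.

```python
# Sketch of AIPrompt resolution and reset, following the flow above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIPrompt:
    prompt_value: str     # current text (custom or default)
    default_prompt: str   # copy of the global prompt, kept for reset
    is_customized: bool

    def reset_to_default(self):
        # Mirrors the documented reset: copy default back, clear the flag.
        self.prompt_value = self.default_prompt
        self.is_customized = False

def get_effective_prompt(account_prompt: Optional[AIPrompt], global_prompt: str) -> str:
    if account_prompt is None:
        return global_prompt                  # no account record: use global
    if account_prompt.is_customized:
        return account_prompt.prompt_value    # user's custom text
    return account_prompt.default_prompt      # record exists but not customized

p = AIPrompt("My custom prompt", "Global default", is_customized=True)
assert get_effective_prompt(p, "Global default") == "My custom prompt"
p.reset_to_default()
assert get_effective_prompt(p, "Global default") == "Global default"
```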

## MIGRATION APPLIED

**File**: 0004_fix_global_settings_remove_override.py

**Changes**:
- Added the default_prompt field to the AIPrompt model
- Updated help text on the IntegrationSettings.config field
- Updated integration_type choices (removed GSC, image_generation)
- Updated GlobalIntegrationSettings help text
- Removed the AccountIntegrationOverride model

---

## ADMIN INTERFACE CHANGES

**GlobalIntegrationSettings Admin**:
- Shows all platform API keys and default settings
- One record only (singleton)
- Help text clarifies these are used by ALL accounts

**IntegrationSettings Admin**:
- Help text emphasizes: "NEVER store API keys here"
- Config field description explains it's for overrides only
- Removed the bulk_test_connection action
- A Free plan check should be added to prevent creation

**AIPrompt Admin**:
- Added default_prompt to readonly_fields
- Added "Reset selected prompts to global default" bulk action
- Fieldsets show both prompt_value and default_prompt

**Removed**:
- AccountIntegrationOverride model
- AccountIntegrationOverrideAdmin class
- All references to per-account API keys

---

## SIDEBAR NAVIGATION (TODO)

Need to add links in the app sidebar to access global settings:

**For Superusers/Admin**:
- Global Settings
  - Platform API Keys (/admin/system/globalintegrationsettings/)
  - Global Prompts (/admin/system/globalaiprompt/)
  - Global Author Profiles (/admin/system/globalauthorprofile/)
  - Global Strategies (/admin/system/globalstrategy/)

**For All Users** (Starter+ plans):
- Account Settings
  - AI Model Selection (/settings/ai) - Configure IntegrationSettings
  - Custom Prompts (/settings/prompts) - Manage AIPrompts
  - Author Profiles (/settings/profiles) - Manage AuthorProfiles
  - Content Strategies (/settings/strategies) - Manage Strategies

---

## VERIFICATION

Run these commands to verify:

```bash
# Check migration applied
docker exec igny8_backend python manage.py showmigrations system

# Verify global settings exist
docker exec igny8_backend python manage.py shell -c "
from igny8_core.modules.system.global_settings_models import GlobalIntegrationSettings
obj = GlobalIntegrationSettings.get_instance()
print(f'OpenAI Model: {obj.openai_model}')
print(f'Max Tokens: {obj.openai_max_tokens}')
"

# Check AIPrompt has default_prompt field
docker exec igny8_backend python manage.py shell -c "
from igny8_core.modules.system.models import AIPrompt
fields = [f.name for f in AIPrompt._meta.get_fields()]
print('default_prompt' in fields)
"

# Verify AccountIntegrationOverride removed
docker exec igny8_backend python manage.py shell -c "
try:
    from igny8_core.modules.system.global_settings_models import AccountIntegrationOverride
    print('ERROR: Model still exists!')
except ImportError:
    print('✓ Model correctly removed')
"
```

---

## QUICK START

1. **Configure Platform API Keys**:
   - Log in to Django Admin
   - Go to: System → Global integration settings
   - Fill in the OpenAI and DALL-E API keys
   - Set default models
   - Save

2. **Create Global Prompts**:
   - Go to: System → Global ai prompts
   - Add prompts for: clustering, ideas, content_generation
   - These become defaults for all accounts

3. **Test with an Account**:
   - Create a test account on the Starter plan
   - The account automatically uses global API keys
   - The account can create IntegrationSettings to override model selection
   - The account CANNOT override API keys

4. **Verify Free Plan Restriction**:
   - Create a test account on the Free plan
   - Verify it CANNOT create IntegrationSettings records
   - Verify it uses global defaults only

---

## SUMMARY

✅ **Correct**: Platform API keys used by all accounts
✅ **Correct**: No user can bring their own API keys
✅ **Correct**: Only 4 plans (Free, Starter, Growth, Scale)
✅ **Correct**: Free plan cannot override, must use global
✅ **Correct**: Other plans can override model/params only
✅ **Correct**: Prompts have default_prompt for reset
✅ **Correct**: Global settings NOT associated with any account

❌ **Removed**: AccountIntegrationOverride model
❌ **Removed**: Enterprise plan references
❌ **Removed**: "Bring your own API key" functionality

🔧 **TODO**: Add sidebar navigation links to global settings
🔧 **TODO**: Add plan check to IntegrationSettings creation

---

*For complete implementation details, see COMPLETE-IMPLEMENTATION-GUIDE.md*
# AI Models Database Configuration - Implementation Summary

**Date Completed:** December 24, 2025
**Status:** ✅ **PRODUCTION READY**

---

## Overview

Successfully migrated AI model pricing from hardcoded constants to a dynamic database-driven system. The system now supports real-time model configuration via Django Admin without requiring code deployments.

---

## Implementation Phases (All Complete ✅)

### Phase 1: AIModelConfig Model ✅
**File:** `backend/igny8_core/business/billing/models.py`

Created a comprehensive model with:
- 15 fields supporting both text and image models
- Text model fields: `input_cost_per_1m`, `output_cost_per_1m`, `context_window`, `max_output_tokens`
- Image model fields: `cost_per_image`, `valid_sizes` (JSON array)
- Capabilities: `supports_json_mode`, `supports_vision`, `supports_function_calling`
- Status fields: `is_active`, `is_default`, `sort_order`
- Audit trail: `created_at`, `updated_at`, `updated_by`
- History tracking via `django-simple-history`

**Methods:**
- `get_cost_for_tokens(input_tokens, output_tokens)` - Calculate text model cost
- `get_cost_for_images(num_images)` - Calculate image model cost
- `validate_size(size)` - Validate image size for model
- `get_display_with_pricing()` - Formatted string for dropdowns

---
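The cost methods listed above can be sketched in plain Python, following the formulas given later in the Architecture section. This is a simplified stand-in (no ORM, Decimal fields reduced to floats), not the model's actual code.

```python
# Sketch of AIModelConfig's cost/validation helpers.

class AIModelConfig:
    def __init__(self, input_cost_per_1m=0.0, output_cost_per_1m=0.0,
                 cost_per_image=0.0, valid_sizes=None):
        self.input_cost_per_1m = input_cost_per_1m    # $ per 1M input tokens
        self.output_cost_per_1m = output_cost_per_1m  # $ per 1M output tokens
        self.cost_per_image = cost_per_image          # fixed $ per image
        self.valid_sizes = valid_sizes or []          # e.g. ["1024x1024"]

    def get_cost_for_tokens(self, input_tokens, output_tokens):
        # cost = (input_tokens * input_rate + output_tokens * output_rate) / 1M
        return (input_tokens * self.input_cost_per_1m
                + output_tokens * self.output_cost_per_1m) / 1_000_000

    def get_cost_for_images(self, num_images):
        # cost = cost_per_image * num_images
        return self.cost_per_image * num_images

    def validate_size(self, size):
        return size in self.valid_sizes

# Using the seeded gpt-4o-mini rates ($0.15 / $0.60 per 1M):
mini = AIModelConfig(input_cost_per_1m=0.15, output_cost_per_1m=0.60)
assert round(mini.get_cost_for_tokens(1_000_000, 0), 4) == 0.15
```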

### Phase 2: Migration & Data Seeding ✅
**File:** `backend/igny8_core/modules/billing/migrations/0020_create_ai_model_config.py`

**Seeded Models:**
- **Text Models (5):**
  - `gpt-4o-mini` (default) - $0.15/$0.60 per 1M | 128K context
  - `gpt-4o` - $2.50/$10.00 per 1M | 128K context | Vision
  - `gpt-4.1` - $2.00/$8.00 per 1M | 8K context
  - `gpt-5.1` - $1.25/$10.00 per 1M | 16K context
  - `gpt-5.2` - $1.75/$14.00 per 1M | 16K context

- **Image Models (4):**
  - `dall-e-3` (default) - $0.040/image | 3 sizes
  - `dall-e-2` - $0.020/image | 3 sizes
  - `gpt-image-1` (inactive) - $0.042/image
  - `gpt-image-1-mini` (inactive) - $0.011/image

**Total:** 9 models (7 active)

---

### Phase 3: Django Admin Interface ✅
**File:** `backend/igny8_core/modules/billing/admin.py`

**Features:**
- List display with colored badges (model type, provider)
- Formatted pricing display based on type
- Active/inactive and default status icons
- Filters: model_type, provider, is_active, capabilities
- Search: model_name, display_name, description
- Collapsible fieldsets organized by category

**Actions:**
- Bulk activate/deactivate models
- Set model as default (enforces a single default per type)
- Export pricing table

**Access:** Django Admin → Billing → AI Model Configurations

---

### Phase 4 & 5: AI Core Integration ✅
**File:** `backend/igny8_core/ai/ai_core.py`

**Updated Functions:**
1. `run_ai_request()` (line ~294) - Text model cost calculation
2. `generate_image()` (line ~581) - Image model cost calculation
3. `calculate_cost()` (line ~822) - Helper method

**Implementation:**
- Lazy imports to avoid circular dependencies
- Database-first with fallback to constants
- Try/except wrapper for safety
- Logging shows the source (database vs constants)

**Example:**
```python
# Before (hardcoded)
rates = MODEL_RATES.get(model, {'input': 2.00, 'output': 8.00})
cost = (input_tokens * rates['input'] + output_tokens * rates['output']) / 1_000_000

# After (database)
model_config = AIModelConfig.objects.get(model_name=model, model_type='text', is_active=True)
cost = model_config.get_cost_for_tokens(input_tokens, output_tokens)
```

---

### Phase 6: Validators Update ✅
**File:** `backend/igny8_core/ai/validators.py`

**Updated Functions:**
1. `validate_model(model, model_type)` - Checks the database for active models
2. `validate_image_size(size, model)` - Uses the model's `valid_sizes` from the database

**Benefits:**
- Dynamic model availability
- Better error messages with available model lists
- Automatic sync with database state

---

### Phase 7: REST API Endpoint ✅
**Endpoint:** `GET /api/v1/billing/ai/models/`

**Files Created/Updated:**
- Serializer: `backend/igny8_core/modules/billing/serializers.py`
- ViewSet: `backend/igny8_core/modules/billing/views.py`
- URLs: `backend/igny8_core/business/billing/urls.py`

**API Features:**

**List Models:**
```bash
GET /api/v1/billing/ai/models/
GET /api/v1/billing/ai/models/?type=text
GET /api/v1/billing/ai/models/?type=image
GET /api/v1/billing/ai/models/?provider=openai
GET /api/v1/billing/ai/models/?default=true
```

**Get Single Model:**
```bash
GET /api/v1/billing/ai/models/gpt-4o-mini/
```

**Response Format:**
```json
{
  "success": true,
  "message": "AI models retrieved successfully",
  "data": [
    {
      "model_name": "gpt-4o-mini",
      "display_name": "GPT-4o mini - Fast & Affordable",
      "model_type": "text",
      "provider": "openai",
      "input_cost_per_1m": "0.1500",
      "output_cost_per_1m": "0.6000",
      "context_window": 128000,
      "max_output_tokens": 16000,
      "supports_json_mode": true,
      "supports_vision": false,
      "is_default": true,
      "sort_order": 1,
      "pricing_display": "$0.1500/$0.6000 per 1M"
    }
  ]
}
```

**Authentication:** Required (JWT)

---

## Verification Results

### ✅ All Tests Passed

| Test | Status | Details |
|------|--------|---------|
| Database Models | ✅ | 9 models (7 active, 2 inactive) |
| Cost Calculations | ✅ | Text: $0.000523, Image: $0.0400 |
| Model Validators | ✅ | Database queries work correctly |
| Django Admin | ✅ | Registered with 9 display fields |
| API Endpoint | ✅ | `/api/v1/billing/ai/models/` |
| Model Methods | ✅ | All helper methods functional |
| Default Models | ✅ | gpt-4o-mini (text), dall-e-3 (image) |

---

## Key Benefits Achieved

### 1. **No Code Deploys for Pricing Updates**
- Update model pricing in Django Admin
- Changes take effect immediately
- No backend restart required

### 2. **Multi-Provider Ready**
- Provider field supports: OpenAI, Anthropic, Runware, Google
- Easy to add new providers without code changes

### 3. **Real-Time Model Management**
- Enable/disable models via admin
- Set default models per type
- Configure capabilities dynamically

### 4. **Frontend Integration Ready**
- RESTful API with filtering
- Structured data for dropdowns
- Pricing display included

### 5. **Backward Compatible**
- Constants still available as a fallback
- Existing code continues to work
- Gradual migration complete

### 6. **Full Audit Trail**
- django-simple-history tracks all changes
- The updated_by field shows who made changes
- Created/updated timestamps

---
|
||||
|
||||
## Architecture

### Two Pricing Models Supported

**1. Text Models (Token-Based)**
- Credits calculated AFTER the AI call
- Based on actual token usage
- Formula: `cost = (input_tokens × input_rate + output_tokens × output_rate) / 1M`

**2. Image Models (Per-Image)**
- Credits calculated BEFORE the AI call
- Fixed cost per image
- Formula: `cost = cost_per_image × num_images`
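Both formulas can be checked with a few lines of Python (a minimal stand-alone sketch using the gpt-4o-mini and dall-e-3 rates from this report; the real implementations live on `AIModelConfig`):

```python
from decimal import Decimal

def text_cost(input_tokens, output_tokens, input_rate, output_rate):
    """Token-based cost; rates are USD per 1M tokens."""
    return (input_tokens * input_rate + output_tokens * output_rate) / Decimal(1_000_000)

def image_cost(num_images, cost_per_image):
    """Per-image cost for image models."""
    return num_images * cost_per_image

# gpt-4o-mini: $0.15 input / $0.60 output per 1M tokens
print(text_cost(1000, 500, Decimal("0.15"), Decimal("0.60")))  # 0.00045
# dall-e-3: $0.04 per image
print(image_cost(1, Decimal("0.04")))  # 0.04
```

The 1000-input/500-output example reproduces the $0.000450 figure used throughout the validation tests below.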
### Data Flow

```
User Request
    ↓
AICore checks AIModelConfig database
    ↓
If found: use database pricing
If not found: fall back to constants
    ↓
Calculate cost
    ↓
Deduct credits
    ↓
Log to CreditUsageLog
```
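The lookup-with-fallback step in the flow above can be sketched as follows; the field names mirror the report, but this stand-alone helper is an illustration, not the project's actual code:

```python
# Fallback rates used when a model has no active database row
# (the report's default: $2.00 input / $8.00 output per 1M tokens).
FALLBACK_RATES = {"input": 2.00, "output": 8.00}

def resolve_rates(db_row):
    """Return database pricing when an active row exists, else constants."""
    if db_row and db_row.get("is_active"):
        return {"input": db_row["input_cost_per_1m"],
                "output": db_row["output_cost_per_1m"]}
    return FALLBACK_RATES

row = {"is_active": True, "input_cost_per_1m": 0.15, "output_cost_per_1m": 0.60}
print(resolve_rates(row))   # database pricing wins
print(resolve_rates(None))  # falls back to constants
```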
---

## Files Modified

### New Files (2)
1. Migration: `0020_create_ai_model_config.py` (200+ lines)
2. Summary: This document

### Modified Files (7)
1. `billing/models.py` - Added AIModelConfig model (240 lines)
2. `billing/admin.py` - Added AIModelConfigAdmin (180 lines)
3. `ai/ai_core.py` - Updated cost calculations (3 functions)
4. `ai/validators.py` - Updated validators (2 functions)
5. `modules/billing/serializers.py` - Added AIModelConfigSerializer (55 lines)
6. `modules/billing/views.py` - Added AIModelConfigViewSet (75 lines)
7. `business/billing/urls.py` - Registered API endpoint (1 line)

**Total:** ~750 lines of code added/modified
---

## Usage Examples

### Django Admin
1. Navigate to: **Admin → Billing → AI Model Configurations**
2. Click on any model to edit pricing
3. Use filters to view specific model types
4. Use bulk actions to activate/deactivate

### API Usage (Frontend)
```javascript
// Fetch all text models
const response = await fetch('/api/v1/billing/ai/models/?model_type=text');
const { data: models } = await response.json();

// Display in dropdown
models.forEach(model => {
  console.log(model.display_name, model.pricing_display);
});
```
### Programmatic Usage (Backend)
```python
from igny8_core.business.billing.models import AIModelConfig

# Get model
model = AIModelConfig.objects.get(model_name='gpt-4o-mini')

# Calculate cost
cost = model.get_cost_for_tokens(1000, 500)  # $0.000450

# Validate size (images)
dalle = AIModelConfig.objects.get(model_name='dall-e-3')
is_valid = dalle.validate_size('1024x1024')  # True
```
---

## Next Steps (Optional Enhancements)

### Short Term
- [ ] Add model usage analytics to admin
- [ ] Create frontend UI for model selection
- [ ] Add model comparison view

### Long Term
- [ ] Add Anthropic models (Claude)
- [ ] Add Google models (Gemini)
- [ ] Implement A/B testing for models
- [ ] Add cost forecasting based on usage patterns

---
## Rollback Plan

If issues occur:

1. **Code Level:** All functions have fallback to constants
2. **Database Level:** Migration can be reversed: `python manage.py migrate billing 0019`
3. **Data Level:** No existing data affected (CreditUsageLog unchanged)
4. **Time Required:** < 5 minutes

**Risk:** Minimal - System has built-in fallback mechanisms

---

## Support

- **Django Admin:** http://your-domain/admin/billing/aimodelconfig/
- **API Docs:** http://your-domain/api/v1/billing/ai/models/
- **Configuration:** [AI-MODELS-DATABASE-CONFIGURATION-PLAN.md](AI-MODELS-DATABASE-CONFIGURATION-PLAN.md)

---

**Status:** ✅ Production Ready
**Deployed:** December 24, 2025
**Version:** 1.0
# AI Model Database Configuration - Validation Report

**Date:** 2024
**Status:** ✅ 100% OPERATIONAL AND VERIFIED

---

## Executive Summary

All 34 validation tests passed successfully. The AI Model Database Configuration system is fully operational, with database-driven pricing, cost calculations, validation, and REST API integration.
---

## Test Results Summary

| Test Suite | Tests | Passed | Status |
|-----------|-------|--------|--------|
| **Test 1:** Model Instance Methods | 5 | 5 | ✅ PASS |
| **Test 2:** AI Core Cost Calculations | 5 | 5 | ✅ PASS |
| **Test 3:** Validators | 9 | 9 | ✅ PASS |
| **Test 4:** Credit Calculation Integration | 4 | 4 | ✅ PASS |
| **Test 5:** REST API Serializer | 7 | 7 | ✅ PASS |
| **Test 6:** End-to-End Integration | 4 | 4 | ✅ PASS |
| **TOTAL** | **34** | **34** | **✅ 100%** |

---
## Database Status

### Active Text Models (5)
- ✓ `gpt-4o-mini` - $0.1500/$0.6000 per 1M tokens
- ✓ `gpt-4o` - $2.5000/$10.0000 per 1M tokens
- ✓ `gpt-4.1` - $2.0000/$8.0000 per 1M tokens
- ✓ `gpt-5.1` - $1.2500/$10.0000 per 1M tokens
- ✓ `gpt-5.2` - $1.7500/$14.0000 per 1M tokens

### Active Image Models (2)
- ✓ `dall-e-3` - $0.0400 per image
- ✓ `dall-e-2` - $0.0200 per image

### Inactive Models (2)
- ⊗ `gpt-image-1` - image
- ⊗ `gpt-image-1-mini` - image

---
## Test Details

### Test 1: Model Instance Methods
**Purpose:** Verify AIModelConfig model methods work correctly

**Tests:**
1. ✅ `get_cost_for_tokens(2518, 242)` → $0.000523
2. ✅ `get_cost_for_images(3)` → $0.0800
3. ✅ `validate_size('1024x1024')` → True
4. ✅ `validate_size('512x512')` → False (dall-e-3 doesn't support it)
5. ✅ Display format correct

**Result:** All model methods calculate costs accurately

---
### Test 2: AI Core Cost Calculations
**Purpose:** Verify ai_core.py uses the database correctly

**Tests:**
1. ✅ Text model cost calculation (1000 input + 500 output = $0.000450)
2. ✅ Image model cost calculation (dall-e-3 = $0.0400)
3. ✅ Fallback mechanism works (non-existent model uses constants)
4. ✅ All 5 text models consistent with the database
5. ✅ Both image models consistent with the database

**Result:** AICore.calculate_cost() works perfectly with database queries and fallback

---
### Test 3: Validators
**Purpose:** Verify model and size validation works

**Tests:**
1. ✅ Valid text model accepted (gpt-4o-mini)
2. ✅ Invalid text model rejected (fake-gpt-999)
3. ✅ Valid image model accepted (dall-e-3)
4. ✅ Invalid image model rejected (fake-dalle)
5. ✅ Inactive model rejected (gpt-image-1)
6. ✅ Valid size accepted (1024x1024 for dall-e-3)
7. ✅ Invalid size rejected (512x512 for dall-e-3)
8. ✅ All 5 active text models validate
9. ✅ Both active image models validate

**Result:** All validation logic working perfectly
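These rules can be sketched with an in-memory stand-in for the AIModelConfig table (entries taken from the Database Status section; the dict layout, and the two extra dall-e-3 sizes, are assumptions):

```python
# In-memory stand-in for the AIModelConfig table; illustrative only.
REGISTRY = {
    "gpt-4o-mini": {"type": "text",  "active": True},
    "dall-e-3":    {"type": "image", "active": True,
                    "valid_sizes": ["1024x1024", "1792x1024", "1024x1792"]},
    "gpt-image-1": {"type": "image", "active": False},
}

def is_valid_model(name, model_type):
    """A model is valid only if it exists, is active, and matches the type."""
    cfg = REGISTRY.get(name)
    return bool(cfg and cfg["active"] and cfg["type"] == model_type)

def is_valid_size(name, size):
    """Image sizes are validated per model against its valid_sizes list."""
    cfg = REGISTRY.get(name)
    return bool(cfg and size in cfg.get("valid_sizes", []))

print(is_valid_model("gpt-4o-mini", "text"))   # True
print(is_valid_model("fake-gpt-999", "text"))  # False
print(is_valid_model("gpt-image-1", "image"))  # False (inactive)
print(is_valid_size("dall-e-3", "512x512"))    # False
```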
---
### Test 4: Credit Calculation Integration
**Purpose:** Verify credit system integrates with AI costs

**Tests:**
1. ✅ Clustering credits: 2760 tokens → 19 credits
2. ✅ Profit margin: 99.7% (OpenAI cost $0.000523, Revenue $0.1900)
3. ✅ Minimum credits enforcement: 15 tokens → 10 credits (minimum)
4. ✅ High token count: 60,000 tokens → 600 credits

**Result:** Credit calculations work correctly with proper profit margins
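The minimum-credit and per-token behavior above can be reproduced with a small sketch. The 0.01 credits-per-token rate, the helper name, and the ceiling rounding are assumptions; actual rates live in CreditCostConfig and vary by operation, which is why clustering maps 2,760 tokens to 19 credits rather than 28:

```python
import math

def credits_for(tokens, credits_per_token, minimum=10):
    """Assumed shape: per-token charge rounded up, with a minimum charge."""
    return max(math.ceil(tokens * credits_per_token), minimum)

print(credits_for(15, 0.01))      # 10 (minimum enforced)
print(credits_for(60_000, 0.01))  # 600
```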
---
### Test 5: REST API Serializer
**Purpose:** Verify API serialization works

**Tests:**
1. ✅ Single model serialization
2. ✅ Serialize all text models (5 models)
3. ✅ Serialize all image models (2 models)
4. ✅ Text model pricing fields (input_cost_per_1m, output_cost_per_1m)
5. ✅ Image model pricing fields (cost_per_image)
6. ✅ Image model sizes field (valid_sizes array)
7. ✅ Pricing display field

**Result:** All serialization working correctly with proper field names
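Putting the verified fields together, a single serialized text model plausibly looks like this (field names from the tests above, prices from the Database Status section; the display_name and pricing_display strings are assumed example values):

```python
import json

# Illustrative serialized shape only; not the serializer's actual output.
entry = {
    "model_name": "gpt-4o-mini",
    "display_name": "GPT-4o Mini",          # assumed value
    "model_type": "text",
    "provider": "openai",
    "input_cost_per_1m": "0.1500",
    "output_cost_per_1m": "0.6000",
    "pricing_display": "$0.15 / $0.60 per 1M tokens",  # assumed format
    "is_active": True,
}
print(json.dumps(entry, indent=2))
```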
---
### Test 6: End-to-End Integration
**Purpose:** Verify complete workflows work end-to-end

**Tests:**
1. ✅ Complete text generation workflow:
   - Model validation
   - OpenAI cost calculation ($0.000525)
   - Credit calculation (20 credits)
   - Revenue calculation ($0.2000)
   - Profit margin (99.7%)
2. ✅ Complete image generation workflow:
   - Model validation
   - Size validation
   - Cost calculation ($0.0400 per image)
3. ✅ All 7 active models verified (5 text + 2 image)
4. ✅ Database query performance for all models

**Result:** Complete workflows work perfectly from validation to cost calculation

---
## Features Verified

✅ Database-driven model pricing
✅ Cost calculation for text models (token-based)
✅ Cost calculation for image models (per-image)
✅ Model validation with active/inactive filtering
✅ Image size validation per model
✅ Credit calculation integration
✅ Profit margin calculation (99.7% for text, varies by model)
✅ REST API serialization
✅ Fallback to constants (safety mechanism)
✅ Django Admin interface with filters and bulk actions
✅ Lazy imports (circular dependency prevention)

---
## Implementation Details

### Database Schema
- **Model:** `AIModelConfig`
- **Fields:** 15 (model_name, display_name, model_type, provider, costs, features, etc.)
- **Migration:** `0020_create_ai_model_config.py`
- **Seeded Models:** 9 (7 active, 2 inactive)

### Methods Implemented
```python
# Text model cost calculation
AIModelConfig.get_cost_for_tokens(input_tokens, output_tokens) -> Decimal

# Image model cost calculation
AIModelConfig.get_cost_for_images(num_images) -> Decimal

# Size validation
AIModelConfig.validate_size(size) -> bool

# Unified cost calculation (in ai_core.py)
AICore.calculate_cost(model, input_tokens, output_tokens, model_type) -> float
```
### Files Modified (7)
1. `billing/models.py` - AIModelConfig class (240 lines)
2. `billing/admin.py` - Admin interface with filters
3. `ai/ai_core.py` - 3 functions updated with database queries
4. `ai/validators.py` - 2 functions updated with database queries
5. `modules/billing/serializers.py` - AIModelConfigSerializer
6. `modules/billing/views.py` - AIModelConfigViewSet
7. `business/billing/urls.py` - API routing

### REST API Endpoints
- `GET /api/v1/billing/ai/models/` - List all active models
- `GET /api/v1/billing/ai/models/?model_type=text` - Filter by type
- `GET /api/v1/billing/ai/models/?provider=openai` - Filter by provider
- `GET /api/v1/billing/ai/models/<id>/` - Get specific model

---
## Cost Examples

### Text Generation (gpt-4o-mini)
- **OpenAI Cost:** 1000 input + 500 output tokens = $0.000450
- **Credits Charged:** 10 credits ($0.10)
- **Profit Margin:** 99.6%

### Image Generation (dall-e-3)
- **OpenAI Cost:** 1 image (1024x1024) = $0.0400
- **Credits:** Charged by customer configuration

---
## Fallback Safety Mechanism

All functions include try/except blocks that:
1. **Try:** Query database for model config
2. **Except:** Fall back to constants in `ai/constants.py`
3. **Result:** System never fails, always returns a valid cost

**Example:**
```python
try:
    model_config = AIModelConfig.objects.get(model_name=model, is_active=True)
    return model_config.get_cost_for_tokens(input_tokens, output_tokens)
except Exception:
    # Fall back to constants (e.g. the model row is missing or inactive)
    rates = MODEL_RATES.get(model, {'input': 2.00, 'output': 8.00})
    return calculate_with_rates(rates)
```
---

## Profit Margins

| Model | OpenAI Cost (1500 in + 500 out) | Credits | Revenue | Profit |
|-------|----------------------------------|---------|---------|--------|
| gpt-4o-mini | $0.000525 | 20 | $0.2000 | 99.7% |
| gpt-4o | $0.008750 | 20 | $0.2000 | 95.6% |
| gpt-4.1 | $0.007000 | 20 | $0.2000 | 96.5% |
| gpt-5.1 | $0.006875 | 20 | $0.2000 | 96.6% |
| gpt-5.2 | $0.009625 | 20 | $0.2000 | 95.2% |
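The table rows follow directly from the per-1M-token rates listed under Database Status; this sketch recomputes them, assuming 1 credit = $0.01 as implied by the Cost Examples section:

```python
# Recompute the Profit Margins table: cost of 1500 input + 500 output
# tokens vs. a flat 20-credit ($0.2000) charge. Rates are USD per 1M
# tokens (input, output), taken from the Database Status section.
RATES = {
    "gpt-4o-mini": (0.15, 0.60),
    "gpt-4o":      (2.50, 10.00),
    "gpt-4.1":     (2.00, 8.00),
    "gpt-5.1":     (1.25, 10.00),
    "gpt-5.2":     (1.75, 14.00),
}
REVENUE = 0.20  # 20 credits at an assumed $0.01 each

for name, (r_in, r_out) in RATES.items():
    cost = (1500 * r_in + 500 * r_out) / 1_000_000
    margin = (REVENUE - cost) / REVENUE * 100
    print(f"{name}: cost ${cost:.6f}, margin {margin:.1f}%")
```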
---
## Conclusion

✅ **SYSTEM IS 100% OPERATIONAL AND VERIFIED**

All 34 tests passed successfully. The AI Model Database Configuration system is:
- ✅ Fully functional
- ✅ Accurately calculating costs
- ✅ Properly validating models
- ✅ Successfully integrating with the credit system
- ✅ Serving data via the REST API
- ✅ Safe, with fallback mechanisms

The system is ready for production use.
# Data Segregation: System vs User Data

## Purpose
This document categorizes all models in the Django admin sidebar to identify:
- **SYSTEM DATA**: Configuration, templates, and settings that must be preserved (pre-configured, production-ready data)
- **USER DATA**: Account-specific, tenant-specific, or test data that can be cleaned up during the testing phase

---
## 1. Accounts & Tenancy

| Model | Type | Description | Clean/Keep |
|-------|------|-------------|------------|
| Account | USER DATA | Customer accounts (test accounts during development) | ✅ CLEAN - Remove test accounts |
| User | USER DATA | User profiles linked to accounts | ✅ CLEAN - Remove test users |
| Site | USER DATA | Sites/domains owned by accounts | ✅ CLEAN - Remove test sites |
| Sector | USER DATA | Sectors within sites (account-specific) | ✅ CLEAN - Remove test sectors |
| SiteUserAccess | USER DATA | User permissions per site | ✅ CLEAN - Remove test access records |

**Summary**: All models are USER DATA - safe to clean for a fresh production start

---
## 2. Global Resources

| Model | Type | Description | Clean/Keep |
|-------|------|-------------|------------|
| Industry | SYSTEM DATA | Global industry taxonomy (e.g., Healthcare, Finance, Technology) | ⚠️ KEEP - Pre-configured industries |
| IndustrySector | SYSTEM DATA | Sub-categories within industries (e.g., Cardiology, Investment Banking) | ⚠️ KEEP - Pre-configured sectors |
| SeedKeyword | MIXED DATA | Seed keywords for industries - can be seeded or user-generated | ⚠️ REVIEW - Keep system seeds, remove test seeds |

**Summary**:
- **KEEP**: Industry and IndustrySector (global taxonomy)
- **REVIEW**: SeedKeyword - separate system defaults from test data

---
## 3. Plans and Billing

| Model | Type | Description | Clean/Keep |
|-------|------|-------------|------------|
| Plan | SYSTEM DATA | Subscription plans (Free, Pro, Enterprise, etc.) | ⚠️ KEEP - Production pricing tiers |
| Subscription | USER DATA | Active subscriptions per account | ✅ CLEAN - Remove test subscriptions |
| Invoice | USER DATA | Generated invoices for accounts | ✅ CLEAN - Remove test invoices |
| Payment | USER DATA | Payment records | ✅ CLEAN - Remove test payments |
| CreditPackage | SYSTEM DATA | Available credit packages for purchase | ⚠️ KEEP - Production credit offerings |
| PaymentMethodConfig | SYSTEM DATA | Supported payment methods (Stripe, PayPal) | ⚠️ KEEP - Production payment configs |
| AccountPaymentMethod | USER DATA | Saved payment methods per account | ✅ CLEAN - Remove test payment methods |

**Summary**:
- **KEEP**: Plan, CreditPackage, PaymentMethodConfig (system pricing/config)
- **CLEAN**: Subscription, Invoice, Payment, AccountPaymentMethod (user transactions)

---
## 4. Credits

| Model | Type | Description | Clean/Keep |
|-------|------|-------------|------------|
| CreditTransaction | USER DATA | Credit add/subtract transactions | ✅ CLEAN - Remove test transactions |
| CreditUsageLog | USER DATA | Log of credit usage per operation | ✅ CLEAN - Remove test usage logs |
| CreditCostConfig | SYSTEM DATA | Cost configuration per operation type | ⚠️ KEEP - Production cost structure |
| PlanLimitUsage | USER DATA | Usage tracking per account/plan limits | ✅ CLEAN - Remove test usage data |

**Summary**:
- **KEEP**: CreditCostConfig (system cost rules)
- **CLEAN**: All transaction and usage logs (user activity)

---
## 5. Content Planning

| Model | Type | Description | Clean/Keep |
|-------|------|-------------|------------|
| Keywords | USER DATA | Keywords researched per site/sector | ✅ CLEAN - Remove test keywords |
| Clusters | USER DATA | Content clusters created per site | ✅ CLEAN - Remove test clusters |
| ContentIdeas | USER DATA | Content ideas generated for accounts | ✅ CLEAN - Remove test ideas |

**Summary**: All models are USER DATA - safe to clean completely

---
## 6. Content Generation

| Model | Type | Description | Clean/Keep |
|-------|------|-------------|------------|
| Tasks | USER DATA | Content writing tasks assigned to users | ✅ CLEAN - Remove test tasks |
| Content | USER DATA | Generated content/articles | ✅ CLEAN - Remove test content |
| Images | USER DATA | Generated or uploaded images | ✅ CLEAN - Remove test images |

**Summary**: All models are USER DATA - safe to clean completely

---
## 7. Taxonomy & Organization

| Model | Type | Description | Clean/Keep |
|-------|------|-------------|------------|
| ContentTaxonomy | USER DATA | Custom taxonomies (categories/tags) per site | ✅ CLEAN - Remove test taxonomies |
| ContentTaxonomyRelation | USER DATA | Relationships between content and taxonomies | ✅ CLEAN - Remove test relations |
| ContentClusterMap | USER DATA | Mapping of content to clusters | ✅ CLEAN - Remove test mappings |
| ContentAttribute | USER DATA | Custom attributes for content | ✅ CLEAN - Remove test attributes |

**Summary**: All models are USER DATA - safe to clean completely

---
## 8. Publishing & Integration

| Model | Type | Description | Clean/Keep |
|-------|------|-------------|------------|
| SiteIntegration | USER DATA | WordPress/platform integrations per site | ✅ CLEAN - Remove test integrations |
| SyncEvent | USER DATA | Sync events between IGNY8 and external platforms | ✅ CLEAN - Remove test sync logs |
| PublishingRecord | USER DATA | Records of published content | ✅ CLEAN - Remove test publish records |
| PublishingChannel | SYSTEM DATA | Available publishing channels (WordPress, Ghost, etc.) | ⚠️ KEEP - Production channel configs |
| DeploymentRecord | USER DATA | Deployment history per account | ✅ CLEAN - Remove test deployments |

**Summary**:
- **KEEP**: PublishingChannel (system-wide channel definitions)
- **CLEAN**: All user-specific integration and sync data

---
## 9. AI & Automation

| Model | Type | Description | Clean/Keep |
|-------|------|-------------|------------|
| IntegrationSettings | MIXED DATA | API keys/settings for OpenAI, etc. | ⚠️ REVIEW - Keep system defaults, remove test configs |
| AIPrompt | SYSTEM DATA | AI prompt templates for content generation | ⚠️ KEEP - Production prompt library |
| Strategy | SYSTEM DATA | Content strategy templates | ⚠️ KEEP - Production strategy templates |
| AuthorProfile | SYSTEM DATA | Author persona templates | ⚠️ KEEP - Production author profiles |
| APIKey | USER DATA | User-generated API keys for platform access | ✅ CLEAN - Remove test API keys |
| WebhookConfig | USER DATA | Webhook configurations per account | ✅ CLEAN - Remove test webhooks |
| AutomationConfig | USER DATA | Automation rules per account/site | ✅ CLEAN - Remove test automations |
| AutomationRun | USER DATA | Execution history of automations | ✅ CLEAN - Remove test run logs |

**Summary**:
- **KEEP**: AIPrompt, Strategy, AuthorProfile (system templates)
- **REVIEW**: IntegrationSettings (separate system vs user API keys)
- **CLEAN**: APIKey, WebhookConfig, AutomationConfig, AutomationRun (user configs)

---
## 10. System Settings

| Model | Type | Description | Clean/Keep |
|-------|------|-------------|------------|
| ContentType | SYSTEM DATA | Django ContentTypes (auto-managed) | ⚠️ KEEP - Django core system table |
| ContentTemplate | SYSTEM DATA | Content templates for generation | ⚠️ KEEP - Production templates |
| TaxonomyConfig | SYSTEM DATA | Taxonomy configuration rules | ⚠️ KEEP - Production taxonomy rules |
| SystemSetting | SYSTEM DATA | Global system settings | ⚠️ KEEP - Production system config |
| ContentTypeConfig | SYSTEM DATA | Content type definitions (blog post, landing page, etc.) | ⚠️ KEEP - Production content types |
| NotificationConfig | SYSTEM DATA | Notification templates and rules | ⚠️ KEEP - Production notification configs |

**Summary**: All models are SYSTEM DATA - must be kept and properly seeded for production

---
## 11. Django Admin

| Model | Type | Description | Clean/Keep |
|-------|------|-------------|------------|
| Group | SYSTEM DATA | Permission groups (Admin, Editor, Viewer, etc.) | ⚠️ KEEP - Production role definitions |
| Permission | SYSTEM DATA | Django permissions (auto-managed) | ⚠️ KEEP - Django core system table |
| PasswordResetToken | USER DATA | Password reset tokens (temporary) | ✅ CLEAN - Remove expired tokens |
| Session | USER DATA | User session data | ✅ CLEAN - Remove old sessions |

**Summary**:
- **KEEP**: Group, Permission (system access control)
- **CLEAN**: PasswordResetToken, Session (temporary user data)

---
## 12. Tasks & Logging

| Model | Type | Description | Clean/Keep |
|-------|------|-------------|------------|
| AITaskLog | USER DATA | Logs of AI operations per account | ✅ CLEAN - Remove test logs |
| AuditLog | USER DATA | Audit trail of user actions | ✅ CLEAN - Remove test audit logs |
| LogEntry | USER DATA | Django admin action logs | ✅ CLEAN - Remove test admin logs |
| TaskResult | USER DATA | Celery task execution results | ✅ CLEAN - Remove test task results |
| GroupResult | USER DATA | Celery group task results | ✅ CLEAN - Remove test group results |

**Summary**: All models are USER DATA - safe to clean completely (logs/audit trails)

---
## Summary Table: Data Segregation by Category

| Category | System Data Models | User Data Models | Mixed/Review |
|----------|-------------------|------------------|--------------|
| **Accounts & Tenancy** | 0 | 5 | 0 |
| **Global Resources** | 2 | 0 | 1 |
| **Plans and Billing** | 3 | 4 | 0 |
| **Credits** | 1 | 3 | 0 |
| **Content Planning** | 0 | 3 | 0 |
| **Content Generation** | 0 | 3 | 0 |
| **Taxonomy & Organization** | 0 | 4 | 0 |
| **Publishing & Integration** | 1 | 4 | 0 |
| **AI & Automation** | 3 | 4 | 1 |
| **System Settings** | 6 | 0 | 0 |
| **Django Admin** | 2 | 2 | 0 |
| **Tasks & Logging** | 0 | 5 | 0 |
| **TOTAL** | **18** | **37** | **2** |

---
## Action Plan: Production Data Preparation

### Phase 1: Preserve System Data ⚠️
**Models to Keep & Seed Properly:**

1. **Global Taxonomy**
   - Industry (pre-populate 10-15 major industries)
   - IndustrySector (pre-populate 100+ sub-sectors)
   - SeedKeyword (system-level seed keywords per industry)

2. **Pricing & Plans**
   - Plan (Free, Starter, Pro, Enterprise tiers)
   - CreditPackage (credit bundles for purchase)
   - PaymentMethodConfig (Stripe, PayPal configs)
   - CreditCostConfig (cost per operation type)

3. **Publishing Channels**
   - PublishingChannel (WordPress, Ghost, Medium, etc.)

4. **AI & Content Templates**
   - AIPrompt (100+ production-ready prompts)
   - Strategy (content strategy templates)
   - AuthorProfile (author persona library)
   - ContentTemplate (article templates)
   - ContentTypeConfig (blog post, landing page, etc.)

5. **System Configuration**
   - SystemSetting (global platform settings)
   - TaxonomyConfig (taxonomy rules)
   - NotificationConfig (email/webhook templates)

6. **Access Control**
   - Group (Admin, Editor, Viewer, Owner roles)
   - Permission (Django-managed)
   - ContentType (Django-managed)
### Phase 2: Clean User/Test Data ✅
**Models to Truncate/Delete:**

1. **Account Data**: Account, User, Site, Sector, SiteUserAccess
2. **Billing Transactions**: Subscription, Invoice, Payment, AccountPaymentMethod, CreditTransaction
3. **Content Data**: Keywords, Clusters, ContentIdeas, Tasks, Content, Images
4. **Taxonomy Relations**: ContentTaxonomy, ContentTaxonomyRelation, ContentClusterMap, ContentAttribute
5. **Integration Data**: SiteIntegration, SyncEvent, PublishingRecord, DeploymentRecord
6. **User Configs**: APIKey, WebhookConfig, AutomationConfig, AutomationRun
7. **Logs**: AITaskLog, AuditLog, LogEntry, TaskResult, GroupResult, CreditUsageLog, PlanLimitUsage, PasswordResetToken, Session

### Phase 3: Review Mixed Data ⚠️
**Models Requiring Manual Review:**

1. **SeedKeyword**: Separate system seeds from test data
2. **IntegrationSettings**: Keep system-level API configs, remove test account keys

---
## Database Cleanup Commands (Use with Caution)

### Safe Cleanup (Logs & Sessions)
```python
from datetime import timedelta
from django.utils import timezone

# Remove old logs (>90 days)
AITaskLog.objects.filter(created_at__lt=timezone.now() - timedelta(days=90)).delete()
CreditUsageLog.objects.filter(created_at__lt=timezone.now() - timedelta(days=90)).delete()
LogEntry.objects.filter(action_time__lt=timezone.now() - timedelta(days=90)).delete()

# Remove expired sessions and tokens
Session.objects.filter(expire_date__lt=timezone.now()).delete()
PasswordResetToken.objects.filter(expires_at__lt=timezone.now()).delete()

# Remove old task results (>30 days)
TaskResult.objects.filter(date_done__lt=timezone.now() - timedelta(days=30)).delete()
```
### Full Test Data Cleanup (Development/Staging Only)
```python
# WARNING: Only run in development/staging environments.
# This will delete ALL user-generated data.

# User data
Account.objects.all().delete()  # Cascades to most user data
User.objects.filter(is_superuser=False).delete()

# Remaining user data
SiteIntegration.objects.all().delete()
AutomationConfig.objects.all().delete()
APIKey.objects.all().delete()
WebhookConfig.objects.all().delete()

# Logs and history
AITaskLog.objects.all().delete()
AuditLog.objects.all().delete()
LogEntry.objects.all().delete()
TaskResult.objects.all().delete()
GroupResult.objects.all().delete()
```
### Verify System Data Exists
```python
# Check that system data is properly seeded
print(f"Industries: {Industry.objects.count()}")
print(f"Plans: {Plan.objects.count()}")
print(f"AI Prompts: {AIPrompt.objects.count()}")
print(f"Strategies: {Strategy.objects.count()}")
print(f"Content Templates: {ContentTemplate.objects.count()}")
print(f"Publishing Channels: {PublishingChannel.objects.count()}")
print(f"Groups: {Group.objects.count()}")
```
---

## Recommendations

### Before Production Launch:

1. **Export System Data**: Export all SYSTEM DATA models to fixtures for reproducibility
   ```bash
   python manage.py dumpdata igny8_core_auth.Industry > fixtures/industries.json
   python manage.py dumpdata igny8_core_auth.Plan > fixtures/plans.json
   python manage.py dumpdata system.AIPrompt > fixtures/prompts.json
   # ... repeat for all system models
   ```

2. **Create Seed Script**: Create a management command to populate a fresh database with system data
   ```bash
   python manage.py seed_system_data
   ```

3. **Database Snapshot**: Take a snapshot after system data is seeded, before any user data exists

4. **Separate Databases**: Consider a separate staging database with full test data vs. production with a clean start

5. **Data Migration Plan**:
   - If migrating from an old system: only migrate Account, User, Content, and critical user data
   - Leave test data behind in the old system

---
## Next Steps

1. ✅ Review this document and confirm the data segregation logic
2. ⚠️ Create fixtures/seeds for all 18 SYSTEM DATA models
3. ⚠️ Review the 2 MIXED DATA models (SeedKeyword, IntegrationSettings)
4. ✅ Create a cleanup script for the 37 USER DATA models
5. ✅ Test the cleanup script in a staging environment
6. ✅ Execute cleanup before production launch

---

*Generated: December 20, 2025*
*Purpose: Production data preparation and test data cleanup*
# Integration Settings Workflow & Data Flow

## Part 1: How Global Settings Load on Frontend

### Admin Configures Global Settings
**URL**: `https://api.igny8.com/admin/system/globalintegrationsettings/1/change/`

**What's Stored**:
- Platform-wide API keys (OpenAI, DALL-E, Runware)
- Default model selections (gpt-4o-mini, dall-e-3, runware:97@1)
- Default parameters (temperature: 0.7, max_tokens: 8192)
- Default image settings (size, quality, style)

**Who Can Access**: Only platform administrators

### Normal User Opens Integration Page

**URL**: `https://app.igny8.com/settings/integration`

**What Happens**:

1. **Frontend Request**:
   - User's browser requests: `GET /api/v1/system/settings/integrations/openai/`
   - User's browser requests: `GET /api/v1/system/settings/integrations/image_generation/`

2. **Backend Processing** (`integration_views.py` - `get_settings()` method):
   - Checks if the user's account has custom overrides in the `IntegrationSettings` table
   - Gets global defaults from the `GlobalIntegrationSettings` singleton
   - Merges data with this priority:
     - If account has overrides → use account settings
     - If no overrides → use global defaults
   - **NEVER returns API keys** (security)

3. **Response to Frontend**:

   ```
   {
     "id": "openai",
     "enabled": true,
     "model": "gpt-4o-mini",   // From global OR account override
     "temperature": 0.7,       // From global OR account override
     "max_tokens": 8192,       // From global OR account override
     "using_global": true      // Flag: true if using defaults
   }
   ```

4. **Frontend Display**:
   - Shows current model selection
   - Shows "Using platform defaults" badge if `using_global: true`
   - Shows "Custom settings" badge if `using_global: false`
   - User can change model, temperature, etc.
   - **API key status is NOT shown** (user cannot see/change platform keys)
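
The merge priority described in step 2 can be sketched as follows. This is a minimal illustration of the documented rule (account overrides win over global defaults, API keys never returned), not the actual `get_settings()` implementation; the default values are taken from the example above.

```python
# Minimal sketch of the documented merge rule -- illustrative only,
# not the actual integration_views.py code.
GLOBAL_DEFAULTS = {"model": "gpt-4o-mini", "temperature": 0.7, "max_tokens": 8192}

def merge_settings(account_overrides):
    """Account overrides win over global defaults; API keys are never included."""
    merged = dict(GLOBAL_DEFAULTS)
    if account_overrides:
        merged.update(account_overrides)
    merged["using_global"] = not account_overrides
    return merged
```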

---

## Part 2: How User Changes Are Saved

### User Changes Settings on Frontend

1. **User Actions**:
   - Opens settings modal
   - Changes model from `gpt-4o-mini` to `gpt-4o`
   - Changes temperature from `0.7` to `0.8`
   - Clicks "Save"

2. **Frontend Request**:
   - Sends: `PUT /api/v1/system/settings/integrations/openai/`
   - Body: `{"model": "gpt-4o", "temperature": 0.8, "max_tokens": 8192}`

3. **Backend Processing** (`integration_views.py` - `save_settings()` method):
   - **CRITICAL SECURITY**: Strips ANY API keys from request (apiKey, api_key, openai_api_key, etc.)
   - Validates account exists
   - Builds clean config with ONLY allowed overrides:
     - For OpenAI: model, temperature, max_tokens
     - For Image: service, model, image_quality, image_style, sizes
   - Saves to `IntegrationSettings` table:

     ```
     account_id: 123
     integration_type: "openai"
     config: {"model": "gpt-4o", "temperature": 0.8, "max_tokens": 8192}
     is_active: true
     ```
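
The key-stripping and whitelisting in step 3 can be sketched like this. The whitelist contents come from the list above, but the function name and structure are assumptions, not the real `save_settings()` code.

```python
# Hypothetical sketch of the key-stripping step -- names are illustrative.
ALLOWED_OVERRIDES = {
    "openai": {"model", "temperature", "max_tokens"},
    "image_generation": {"service", "model", "image_quality", "image_style", "sizes"},
}

def build_clean_config(integration_type, payload):
    """Keep only whitelisted override fields; any key-like field is dropped."""
    allowed = ALLOWED_OVERRIDES.get(integration_type, set())
    return {
        name: value for name, value in payload.items()
        if name in allowed and "key" not in name.lower()  # strips apiKey, api_key, ...
    }
```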

4. **Database Structure**:
   - **GlobalIntegrationSettings** (1 row, pk=1):
     - Contains: API keys + default settings
     - Used by: ALL accounts for API keys

   - **IntegrationSettings** (multiple rows):
     - Row per account per integration type
     - Contains: ONLY overrides (no API keys)
     - Example:

       ```
       id  | account_id | integration_type | config
       100 | 123        | openai           | {"model": "gpt-4o", "temperature": 0.8}
       101 | 456        | openai           | {"model": "gpt-4.1", "max_tokens": 4000}
       102 | 123        | image_generation | {"service": "runware", "model": "runware:100@1"}
       ```
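
An in-memory sketch of the per-account override table above; the real models are Django ORM classes, so the dataclass and helper names here are illustrative assumptions.

```python
# Illustrative in-memory stand-in for the IntegrationSettings table.
from dataclasses import dataclass, field

@dataclass
class IntegrationSettingsRow:
    account_id: int
    integration_type: str
    config: dict = field(default_factory=dict)  # overrides only, never API keys

ROWS = [
    IntegrationSettingsRow(123, "openai", {"model": "gpt-4o", "temperature": 0.8}),
    IntegrationSettingsRow(456, "openai", {"model": "gpt-4.1", "max_tokens": 4000}),
    IntegrationSettingsRow(123, "image_generation", {"service": "runware", "model": "runware:100@1"}),
]

def find_override(account_id, integration_type):
    """No matching row means the account falls back to global defaults."""
    return next((r for r in ROWS
                 if r.account_id == account_id and r.integration_type == integration_type),
                None)
```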
5. **Next Request from User**:
   - Frontend requests: `GET /api/v1/system/settings/integrations/openai/`
   - Backend finds IntegrationSettings row for account 123
   - Returns: `{"model": "gpt-4o", "temperature": 0.8, "using_global": false}`
   - User sees their custom settings

---

## Data Flow Architecture

```
┌────────────────────────────────────────────────────────────┐
│                         ADMIN SIDE                         │
│                https://api.igny8.com/admin/                │
│                                                            │
│  GlobalIntegrationSettings (pk=1)                          │
│  ├── openai_api_key: "sk-xxx"          ← Platform-wide     │
│  ├── openai_model: "gpt-4o-mini"       ← Default           │
│  ├── openai_temperature: 0.7           ← Default           │
│  ├── dalle_api_key: "sk-xxx"           ← Platform-wide     │
│  ├── runware_api_key: "xxx"            ← Platform-wide     │
│  └── image_quality: "standard"         ← Default           │
└────────────────────────────────────────────────────────────┘
                              │
                              │ Backend reads from
                              ↓
┌────────────────────────────────────────────────────────────┐
│                     BACKEND API LAYER                      │
│  integration_views.py                                      │
│                                                            │
│  get_settings():                                           │
│    1. Load GlobalIntegrationSettings (for defaults)        │
│    2. Check IntegrationSettings (for account overrides)    │
│    3. Merge: account overrides > global defaults           │
│    4. Return to frontend (NO API keys)                     │
│                                                            │
│  save_settings():                                          │
│    1. Receive request from frontend                        │
│    2. Strip ALL API keys (security)                        │
│    3. Save ONLY overrides to IntegrationSettings           │
└────────────────────────────────────────────────────────────┘
                              │
                              │ API sends data
                              ↓
┌────────────────────────────────────────────────────────────┐
│                    FRONTEND - USER SIDE                    │
│         https://app.igny8.com/settings/integration         │
│                                                            │
│  User sees:                                                │
│  ├── Model: gpt-4o-mini (dropdown)                         │
│  ├── Temperature: 0.7 (slider)                             │
│  ├── Status: ✓ Connected (test connection works)           │
│  └── Badge: "Using platform defaults"                      │
│                                                            │
│  User CANNOT see:                                          │
│  ✗ API keys (security)                                     │
│  ✗ Platform configuration                                  │
└────────────────────────────────────────────────────────────┘
                              │
                              │ User changes settings
                              ↓
┌────────────────────────────────────────────────────────────┐
│                 IntegrationSettings Table                  │
│           (Per-account overrides - NO API KEYS)            │
│                                                            │
│  Account 123:                                              │
│  ├── openai: {"model": "gpt-4o", "temperature": 0.8}       │
│  └── image_generation: {"service": "runware"}              │
│                                                            │
│  Account 456:                                              │
│  ├── openai: {"model": "gpt-4.1"}                          │
│  └── image_generation: (no row = uses global defaults)     │
└────────────────────────────────────────────────────────────┘
```

---

## Important Security Rules

1. **API Keys Flow**:
   - Admin sets → GlobalIntegrationSettings
   - Backend uses → For ALL accounts
   - Frontend NEVER sees → Security
   - Users NEVER save → Stripped by backend

2. **Settings Flow**:
   - Admin sets defaults → GlobalIntegrationSettings
   - Users customize → IntegrationSettings (overrides only)
   - Backend merges → Global defaults + account overrides
   - Frontend displays → Merged result (no keys)

3. **Free Plan Restriction**:
   - Cannot create IntegrationSettings rows
   - Must use global defaults only
   - Enforced at frontend (UI disabled)
   - TODO: Add backend validation

---

## Example Scenarios

### Scenario 1: New User First Visit
- User has NO IntegrationSettings row
- Backend returns global defaults
- `using_global: true`
- User sees platform defaults
- API operations use platform API key

### Scenario 2: User Customizes Model
- User changes model to "gpt-4o"
- Frontend sends: `{"model": "gpt-4o"}`
- Backend creates IntegrationSettings row
- Next visit: `using_global: false`
- API operations use platform API key + user's model choice

### Scenario 3: User Resets to Default
- Frontend sends: `{"model": "gpt-4o-mini"}` (same as global)
- Backend still saves override row
- Alternative: Delete row to truly use global
- TODO: Add "Reset to defaults" button

### Scenario 4: Admin Changes Global Default
- Admin changes global model to "gpt-4.1"
- Users WITH overrides: See their custom model
- Users WITHOUT overrides: See new "gpt-4.1" default
- All users: Use platform API key
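
The four scenarios can be condensed into a few assertions against the documented merge rule. This is a behavioral sketch under the assumption that the rule is exactly "account overrides > global defaults", not a test from the codebase.

```python
GLOBAL = {"model": "gpt-4o-mini"}  # illustrative global default

def effective(overrides):
    """Merge an account's override dict (or None) over the global default."""
    merged = {**GLOBAL, **(overrides or {})}
    merged["using_global"] = overrides is None
    return merged

# Scenario 1: no override row -> platform defaults
assert effective(None) == {"model": "gpt-4o-mini", "using_global": True}
# Scenario 2: user customizes -> override wins
assert effective({"model": "gpt-4o"})["model"] == "gpt-4o"
# Scenario 3: saving the default value still creates an override row
assert effective({"model": "gpt-4o-mini"})["using_global"] is False
# Scenario 4: admin changes the global default -> only non-overriding accounts see it
GLOBAL["model"] = "gpt-4.1"
assert effective(None)["model"] == "gpt-4.1"
assert effective({"model": "gpt-4o"})["model"] == "gpt-4o"
```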
@@ -1,11 +0,0 @@

## 🔴 AI Functions progress modals: texts and counts to be fixed

## 🔴 AUTOMATION queue: when run manually, the completed count needs fixing; the progress bar needs to be improved and based on the actual stage; other queue data also has bugs

## 🔴 Align prompts with the strategy

## 🔴 User randomly logs out often

## 🔴 Marketing site content

## 🔴 Documentation and help update
README.md
@@ -1,12 +1,21 @@

# IGNY8 - AI-Powered SEO Content Platform

**Version:** 1.0.0
**Version:** 1.0.5
**License:** Proprietary
**Website:** https://igny8.com

---

## Quick Links

| Document | Description |
|----------|-------------|
| [docs/00-SYSTEM/IGNY8-APP.md](docs/00-SYSTEM/IGNY8-APP.md) | Executive summary (non-technical) |
| [docs/INDEX.md](docs/INDEX.md) | Full documentation index |
| [CHANGELOG.md](CHANGELOG.md) | Version history |
| [.rules](.rules) | AI agent rules |

---
## What is IGNY8?

@@ -17,8 +26,8 @@ IGNY8 is a full-stack SaaS platform that combines AI-powered content generation

- 🔍 **Smart Keyword Management** - Import, cluster, and organize keywords with AI
- ✍️ **AI Content Generation** - Generate SEO-optimized blog posts using GPT-4
- 🖼️ **AI Image Creation** - Auto-generate featured and in-article images
- 🔗 **Internal Linking** - AI-powered link suggestions for SEO
- 📊 **Content Optimization** - Analyze and score content quality
- 🔗 **Internal Linking** - AI-powered link suggestions (coming soon)
- 📊 **Content Optimization** - Analyze and score content quality (coming soon)
- 🔄 **WordPress Integration** - Bidirectional sync with WordPress sites
- 📈 **Usage-Based Billing** - Credit system for AI operations
- 👥 **Multi-Tenancy** - Manage multiple sites and teams

@@ -27,14 +36,24 @@ IGNY8 is a full-stack SaaS platform that combines AI-powered content generation

## Repository Structure

This monorepo contains two main applications and documentation:

```
igny8/
├── README.md                  # This file
├── CHANGELOG.md               # Version history
├── .rules                     # AI agent rules
├── backend/                   # Django REST API + Celery
├── frontend/                  # React + Vite SPA
├── docs/                      # Documentation index and topic folders
└── docker-compose.app.yml     # Docker deployment config
├── docs/                      # Full documentation
│   ├── INDEX.md               # Documentation navigation
│   ├── 00-SYSTEM/             # Architecture, auth, IGNY8-APP
│   ├── 10-MODULES/            # Module documentation
│   ├── 20-API/                # API endpoints
│   ├── 30-FRONTEND/           # Frontend pages, stores, design system
│   ├── 40-WORKFLOWS/          # Cross-module workflows
│   ├── 50-DEPLOYMENT/         # Deployment guides
│   ├── 90-REFERENCE/          # Models, AI functions, fixes
│   └── plans/                 # Implementation plans
└── docker-compose.app.yml
```

**Separate Repository:**

@@ -41,6 +41,11 @@ class Igny8AdminConfig(AdminConfig):
        admin_site._actions = old_site._actions.copy()
        admin_site._global_actions = old_site._global_actions.copy()

        # CRITICAL: Update each ModelAdmin's admin_site attribute to point to our custom site
        # Otherwise, each_context() will use the wrong admin site and miss our customizations
        for model, model_admin in admin_site._registry.items():
            model_admin.admin_site = admin_site

        # Now replace the default site
        admin_module.site = admin_site
        admin_module.sites.site = admin_site

@@ -145,7 +145,16 @@ class Igny8ModelAdmin(UnfoldModelAdmin):
        for group in sidebar_navigation:
            group_is_active = False
            for item in group.get('items', []):
                item_link = item.get('link', '')
                # Unfold stores the resolved link in 'link_callback', the original lambda in 'link'
                item_link = item.get('link_callback') or item.get('link', '')
                # Convert to string (handles lazy proxy objects and ensures it's a string)
                try:
                    item_link = str(item_link) if item_link else ''
                except Exception:
                    item_link = ''
                # Skip if it's a function representation (e.g., "<function ...>")
                if item_link.startswith('<'):
                    continue
                # Check if the current path matches this item's link
                if item_link and current_path.startswith(item_link):
                    item['active'] = True

@@ -1,28 +1,30 @@
"""
Custom AdminSite for IGNY8 to organize models into proper groups using Unfold
NO EMOJIS - Unfold handles all icons via Material Design
Custom AdminSite for IGNY8 using Unfold theme.

SIMPLIFIED VERSION - Navigation is now handled via UNFOLD settings in settings.py
This file only handles:
1. Custom URLs for dashboard, reports, and monitoring pages
2. Index redirect to dashboard

All sidebar navigation is configured in settings.py under UNFOLD["SIDEBAR"]["navigation"]
"""
from django.contrib import admin
from django.contrib.admin.apps import AdminConfig
from django.apps import apps
from django.urls import path, reverse_lazy
from django.urls import path
from django.shortcuts import redirect
from django.contrib.admin import sites
from unfold.admin import ModelAdmin as UnfoldModelAdmin
from unfold.sites import UnfoldAdminSite

class Igny8AdminSite(UnfoldAdminSite):
    """
    Custom AdminSite based on Unfold that organizes models into the planned groups
    Custom AdminSite based on Unfold.
    Navigation is handled via UNFOLD settings - this just adds custom URLs.
    """
    site_header = 'IGNY8 Administration'
    site_title = 'IGNY8 Admin'
    index_title = 'IGNY8 Administration'

    def get_urls(self):
        """Get admin URLs with dashboard, reports, and monitoring pages available"""
        from django.urls import path
        """Add custom URLs for dashboard, reports, and monitoring pages"""
        from .dashboard import admin_dashboard
        from .reports import (
            revenue_report, usage_report, content_report, data_quality_report,
@@ -31,12 +33,12 @@ class Igny8AdminSite(UnfoldAdminSite):
        from .monitoring import (
            system_health_dashboard, api_monitor_dashboard, debug_console
        )

        urls = super().get_urls()
        custom_urls = [
            # Dashboard
            path('dashboard/', self.admin_view(admin_dashboard), name='dashboard'),

            # Reports
            path('reports/revenue/', self.admin_view(revenue_report), name='report_revenue'),
            path('reports/usage/', self.admin_view(usage_report), name='report_usage'),
@@ -44,308 +46,17 @@ class Igny8AdminSite(UnfoldAdminSite):
            path('reports/data-quality/', self.admin_view(data_quality_report), name='report_data_quality'),
            path('reports/token-usage/', self.admin_view(token_usage_report), name='report_token_usage'),
            path('reports/ai-cost-analysis/', self.admin_view(ai_cost_analysis), name='report_ai_cost_analysis'),

            # Monitoring (NEW)
            # Monitoring
            path('monitoring/system-health/', self.admin_view(system_health_dashboard), name='monitoring_system_health'),
            path('monitoring/api-monitor/', self.admin_view(api_monitor_dashboard), name='monitoring_api_monitor'),
            path('monitoring/debug-console/', self.admin_view(debug_console), name='monitoring_debug_console'),
        ]
        return custom_urls + urls

    def index(self, request, extra_context=None):
        """Redirect to custom dashboard"""
        from django.shortcuts import redirect
        return redirect('admin:dashboard')

    def get_sidebar_list(self, request):
        """
        Override Unfold's get_sidebar_list to return our custom app groups
        Convert Django app_list format to Unfold sidebar navigation format
        """
        # Get our custom Django app list
        django_apps = self.get_app_list(request, app_label=None)

        # Convert to Unfold navigation format: {title, items: [{title, link, icon}]}
        sidebar_groups = []

        for app in django_apps:
            group = {
                'title': app['name'],
                'collapsible': True,
                'items': []
            }

            # Convert each model to a navigation item
            for model in app.get('models', []):
                if model.get('perms', {}).get('view', False) or model.get('perms', {}).get('change', False):
                    item = {
                        'title': model['name'],
                        'link': model['admin_url'],
                        'icon': None,  # Unfold will use default
                        'has_permission': True,  # CRITICAL: Template checks this
                    }
                    group['items'].append(item)

            # Only add groups that have items
            if group['items']:
                sidebar_groups.append(group)

        return sidebar_groups

    def each_context(self, request):
        """
        Override context to ensure our custom app_list is always used
        This is called by all admin templates for sidebar rendering

        CRITICAL FIX: Force custom sidebar on ALL pages including model detail/list views
        """
        # CRITICAL: Must call parent to get sidebar_navigation set
        context = super().each_context(request)

        # DEBUGGING: Print to console what the parent returned
        print(f"\n=== DEBUG each_context for {request.path} ===")
        print(f"sidebar_navigation length from parent: {len(context.get('sidebar_navigation', []))}")
        if context.get('sidebar_navigation'):
            print(f"First sidebar group: {context['sidebar_navigation'][0].get('title', 'NO TITLE')}")

        # Force our custom app list to be used everywhere - IGNORE app_label parameter
        custom_apps = self.get_app_list(request, app_label=None)
        context['available_apps'] = custom_apps
        context['app_list'] = custom_apps  # Also set app_list for compatibility

        # CRITICAL FIX: Ensure sidebar_navigation is using our custom sidebar
        # The parent's each_context already called get_sidebar_list(), which returns our custom sidebar
        # So sidebar_navigation should already be correct, but verify
        if not context.get('sidebar_navigation') or len(context.get('sidebar_navigation', [])) == 0:
            # If sidebar_navigation is empty, force it
            print("WARNING: sidebar_navigation was empty, forcing it!")
            context['sidebar_navigation'] = self.get_sidebar_list(request)

        print(f"Final sidebar_navigation length: {len(context['sidebar_navigation'])}")
        print("=== END DEBUG ===\n")

        return context

    def get_app_list(self, request, app_label=None):
        """
        Customize the app list to organize models into logical groups
        NO EMOJIS - Unfold handles all icons via Material Design

        Args:
            request: The HTTP request
            app_label: IGNORED - Always return full custom sidebar for consistency
        """
        # CRITICAL: Always build full app_dict (ignore app_label) for consistent sidebar
        app_dict = self._build_app_dict(request, None)

        # Define our custom groups with their models (using object_name)
        # Organized by business function - Material icons configured in Unfold
        custom_groups = {
            'Accounts & Tenancy': {
                'models': [
                    ('igny8_core_auth', 'Account'),
                    ('igny8_core_auth', 'User'),
                    ('igny8_core_auth', 'Site'),
                    ('igny8_core_auth', 'Sector'),
                    ('igny8_core_auth', 'SiteUserAccess'),
                ],
            },
            'Global Resources': {
                'models': [
                    ('igny8_core_auth', 'Industry'),
                    ('igny8_core_auth', 'IndustrySector'),
                    ('igny8_core_auth', 'SeedKeyword'),
                ],
            },
            'Global Settings': {
                'models': [
                    ('system', 'GlobalIntegrationSettings'),
                    ('system', 'GlobalModuleSettings'),
                    ('system', 'GlobalAIPrompt'),
                    ('system', 'GlobalAuthorProfile'),
                    ('system', 'GlobalStrategy'),
                ],
            },
            'Plans and Billing': {
                'models': [
                    ('igny8_core_auth', 'Plan'),
                    ('igny8_core_auth', 'Subscription'),
                    ('billing', 'Invoice'),
                    ('billing', 'Payment'),
                    ('billing', 'CreditPackage'),
                    ('billing', 'PaymentMethodConfig'),
                    ('billing', 'AccountPaymentMethod'),
                ],
            },
            'Credits': {
                'models': [
                    ('billing', 'CreditTransaction'),
                    ('billing', 'CreditUsageLog'),
                    ('billing', 'CreditCostConfig'),
                    ('billing', 'PlanLimitUsage'),
                ],
            },
            'Content Planning': {
                'models': [
                    ('planner', 'Keywords'),
                    ('planner', 'Clusters'),
                    ('planner', 'ContentIdeas'),
                ],
            },
            'Content Generation': {
                'models': [
                    ('writer', 'Tasks'),
                    ('writer', 'Content'),
                    ('writer', 'Images'),
                ],
            },
            'Taxonomy & Organization': {
                'models': [
                    ('writer', 'ContentTaxonomy'),
                    ('writer', 'ContentTaxonomyRelation'),
                    ('writer', 'ContentClusterMap'),
                    ('writer', 'ContentAttribute'),
                ],
            },
            'Publishing & Integration': {
                'models': [
                    ('integration', 'SiteIntegration'),
                    ('integration', 'SyncEvent'),
                    ('publishing', 'PublishingRecord'),
                    ('system', 'PublishingChannel'),
                    ('publishing', 'DeploymentRecord'),
                ],
            },
            'AI & Automation': {
                'models': [
                    ('system', 'IntegrationSettings'),
                    ('system', 'AIPrompt'),
                    ('system', 'Strategy'),
                    ('system', 'AuthorProfile'),
                    ('system', 'APIKey'),
                    ('system', 'WebhookConfig'),
                    ('automation', 'AutomationConfig'),
                    ('automation', 'AutomationRun'),
                ],
            },
            'System Settings': {
                'models': [
                    ('contenttypes', 'ContentType'),
                    ('system', 'ContentTemplate'),
                    ('system', 'TaxonomyConfig'),
                    ('system', 'SystemSetting'),
                    ('system', 'ContentTypeConfig'),
                    ('system', 'NotificationConfig'),
                ],
            },
            'Django Admin': {
                'models': [
                    ('auth', 'Group'),
                    ('auth', 'Permission'),
                    ('igny8_core_auth', 'PasswordResetToken'),
                    ('sessions', 'Session'),
                ],
            },
            'Tasks & Logging': {
                'models': [
                    ('ai', 'AITaskLog'),
                    ('system', 'AuditLog'),
                    ('admin', 'LogEntry'),
                    ('django_celery_results', 'TaskResult'),
                    ('django_celery_results', 'GroupResult'),
                ],
            },
        }

        # ALWAYS build and return our custom organized app list
        # regardless of app_label parameter (for consistent sidebar on all pages)
        organized_apps = []

        # Add Dashboard link as first item
        organized_apps.append({
            'name': '📊 Dashboard',
            'app_label': '_dashboard',
            'app_url': '/admin/dashboard/',
            'has_module_perms': True,
            'models': [],
        })

        # Add Reports section with links to all reports
        organized_apps.append({
            'name': 'Reports & Analytics',
            'app_label': '_reports',
            'app_url': '#',
            'has_module_perms': True,
            'models': [
                {
                    'name': 'Revenue Report',
                    'object_name': 'RevenueReport',
                    'admin_url': '/admin/reports/revenue/',
                    'view_only': True,
                    'perms': {'view': True},
                },
                {
                    'name': 'Usage Report',
                    'object_name': 'UsageReport',
                    'admin_url': '/admin/reports/usage/',
                    'view_only': True,
                    'perms': {'view': True},
                },
                {
                    'name': 'Content Report',
                    'object_name': 'ContentReport',
                    'admin_url': '/admin/reports/content/',
                    'view_only': True,
                    'perms': {'view': True},
                },
                {
                    'name': 'Data Quality Report',
                    'object_name': 'DataQualityReport',
                    'admin_url': '/admin/reports/data-quality/',
                    'view_only': True,
                    'perms': {'view': True},
                },
                {
                    'name': 'Token Usage Report',
                    'object_name': 'TokenUsageReport',
                    'admin_url': '/admin/reports/token-usage/',
                    'view_only': True,
                    'perms': {'view': True},
                },
                {
                    'name': 'AI Cost Analysis',
                    'object_name': 'AICostAnalysis',
                    'admin_url': '/admin/reports/ai-cost-analysis/',
                    'view_only': True,
                    'perms': {'view': True},
                },
            ],
        })

        for group_name, group_config in custom_groups.items():
            group_models = []

            for app_label, model_name in group_config['models']:
                # Find the model in app_dict
                for app in app_dict.values():
                    if app['app_label'] == app_label:
                        for model in app.get('models', []):
                            if model['object_name'] == model_name:
                                group_models.append(model)
                                break

            if group_models:
                # Get the first model's app_label to use as the real app_label
                first_model_app_label = group_config['models'][0][0]
                organized_apps.append({
                    'name': group_name,
                    'app_label': first_model_app_label,  # Use real app_label, not fake one
                    'app_url': f'/admin/{first_model_app_label}/',  # Real URL, not '#'
                    'has_module_perms': True,
                    'models': group_models,
                })

        return organized_apps

    def index(self, request, extra_context=None):
        """Redirect admin index to custom dashboard"""
        return redirect('admin:dashboard')


# Instantiate custom admin site

@@ -13,8 +13,6 @@ from django.conf import settings
from .constants import (
    DEFAULT_AI_MODEL,
    JSON_MODE_MODELS,
    MODEL_RATES,
    IMAGE_MODEL_RATES,
    VALID_OPENAI_IMAGE_MODELS,
    VALID_SIZES_BY_MODEL,
    DEBUG_MODE,
@@ -40,24 +38,27 @@ class AICore:
        self.account = account
        self._openai_api_key = None
        self._runware_api_key = None
        self._bria_api_key = None
        self._anthropic_api_key = None
        self._load_account_settings()

    def _load_account_settings(self):
        """Load API keys from GlobalIntegrationSettings (platform-wide, used by ALL accounts)"""
        """Load API keys from IntegrationProvider (centralized provider config)"""
        try:
            from igny8_core.modules.system.global_settings_models import GlobalIntegrationSettings
            from igny8_core.ai.model_registry import ModelRegistry

            # Get global settings - single instance used by ALL accounts
            global_settings = GlobalIntegrationSettings.get_instance()

            # Load API keys from global settings (platform-wide)
            self._openai_api_key = global_settings.openai_api_key
            self._runware_api_key = global_settings.runware_api_key
            # Load API keys from IntegrationProvider (centralized, platform-wide)
            self._openai_api_key = ModelRegistry.get_api_key('openai')
            self._runware_api_key = ModelRegistry.get_api_key('runware')
            self._bria_api_key = ModelRegistry.get_api_key('bria')
            self._anthropic_api_key = ModelRegistry.get_api_key('anthropic')

        except Exception as e:
            logger.error(f"Could not load GlobalIntegrationSettings: {e}", exc_info=True)
            logger.error(f"Could not load API keys from IntegrationProvider: {e}", exc_info=True)
            self._openai_api_key = None
            self._runware_api_key = None
            self._bria_api_key = None
            self._anthropic_api_key = None

    def get_api_key(self, integration_type: str = 'openai') -> Optional[str]:
        """Get API key for integration type"""
@@ -65,6 +66,10 @@ class AICore:
            return self._openai_api_key
        elif integration_type == 'runware':
            return self._runware_api_key
        elif integration_type == 'bria':
            return self._bria_api_key
        elif integration_type == 'anthropic':
            return self._anthropic_api_key
        return None

    def get_model(self, integration_type: str = 'openai') -> str:
@@ -87,13 +92,13 @@ class AICore:
        response_format: Optional[Dict] = None,
        api_key: Optional[str] = None,
        function_name: str = 'ai_request',
        function_id: Optional[str] = None,
        prompt_prefix: Optional[str] = None,
        tracker: Optional[ConsoleStepTracker] = None
    ) -> Dict[str, Any]:
        """
        Centralized AI request handler with console logging.
        All AI text generation requests go through this method.

        Args:
            prompt: Prompt text
            model: Model name (required - must be provided from IntegrationSettings)
@@ -102,12 +107,13 @@ class AICore:
            response_format: Optional response format dict (for JSON mode)
            api_key: Optional API key override
            function_name: Function name for logging (e.g., 'cluster_keywords')
            prompt_prefix: Optional prefix to add before prompt (e.g., '##GP01-Clustering')
            tracker: Optional ConsoleStepTracker instance for logging

        Returns:
            Dict with 'content', 'input_tokens', 'output_tokens', 'total_tokens',
            'model', 'cost', 'error', 'api_id'

        Raises:
            ValueError: If model is not provided
        """
@@ -158,8 +164,12 @@ class AICore:
        logger.info(f"  - Model used in request: {active_model}")
        tracker.ai_call(f"Using model: {active_model}")

        if active_model not in MODEL_RATES:
            error_msg = f"Model '{active_model}' is not supported. Supported models: {list(MODEL_RATES.keys())}"
        # Use ModelRegistry for validation (database-driven)
        from igny8_core.ai.model_registry import ModelRegistry
        if not ModelRegistry.validate_model(active_model):
            # Get the list of supported models from the database
            supported_models = [m.model_name for m in ModelRegistry.list_models(model_type='text')]
            error_msg = f"Model '{active_model}' is not supported. Supported models: {supported_models}"
            logger.error(f"[AICore] {error_msg}")
            tracker.error('ConfigurationError', error_msg)
            return {
@@ -184,16 +194,16 @@ class AICore:
        else:
            tracker.ai_call("Using text response format")

        # Step 4: Validate prompt length and add function_id
        # Step 4: Validate prompt length and add prompt_prefix
        prompt_length = len(prompt)
        tracker.ai_call(f"Prompt length: {prompt_length} characters")

        # Add function_id to prompt if provided (for tracking)
        # Add prompt_prefix to prompt if provided (for tracking)
        # Format: ##GP01-Clustering or ##CP01-Clustering
        final_prompt = prompt
        if function_id:
            function_id_prefix = f'function_id: "{function_id}"\n\n'
            final_prompt = function_id_prefix + prompt
            tracker.ai_call(f"Added function_id to prompt: {function_id}")
        if prompt_prefix:
            final_prompt = f'{prompt_prefix}\n\n{prompt}'
            tracker.ai_call(f"Added prompt prefix: {prompt_prefix}")

        # Step 5: Build request payload
        url = 'https://api.openai.com/v1/chat/completions'
@@ -290,9 +300,13 @@ class AICore:
        tracker.parse(f"Received {total_tokens} tokens (input: {input_tokens}, output: {output_tokens})")
        tracker.parse(f"Content length: {len(content)} characters")

        # Step 10: Calculate cost
        rates = MODEL_RATES.get(active_model, {'input': 2.00, 'output': 8.00})
        cost = (input_tokens * rates['input'] + output_tokens * rates['output']) / 1_000_000
        # Step 10: Calculate cost using ModelRegistry (database-driven)
        from igny8_core.ai.model_registry import ModelRegistry
        cost = float(ModelRegistry.calculate_cost(
            active_model,
            input_tokens=input_tokens,
            output_tokens=output_tokens
        ))
        tracker.parse(f"Cost calculated: ${cost:.6f}")

        tracker.done("Request completed successfully")
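
The removed inline formula prices tokens at a USD rate per 1M tokens; the same arithmetic presumably sits behind `ModelRegistry.calculate_cost`. Here is the formula worked through with the $2.00/$8.00 fallback rates shown in the removed line (illustrative figures, not actual model pricing):

```python
def per_token_cost(input_tokens, output_tokens, rate_in, rate_out):
    """Rates are USD per 1M tokens, as in the old MODEL_RATES dict."""
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

# 1,000 input + 500 output tokens at $2.00 / $8.00 per 1M tokens:
cost = per_token_cost(1000, 500, 2.00, 8.00)  # 0.002 + 0.004 = 0.006
```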
@@ -367,6 +381,289 @@ class AICore:
            'api_id': None,
        }

    def run_anthropic_request(
        self,
        prompt: str,
        model: str,
        max_tokens: int = 8192,
        temperature: float = 0.7,
        api_key: Optional[str] = None,
        function_name: str = 'anthropic_request',
        prompt_prefix: Optional[str] = None,
        tracker: Optional[ConsoleStepTracker] = None,
        system_prompt: Optional[str] = None,
    ) -> Dict[str, Any]:
        """
        Anthropic (Claude) AI request handler with console logging.
        Alternative to OpenAI for text generation.

        Args:
            prompt: Prompt text
            model: Claude model name (required - must be provided from IntegrationSettings)
            max_tokens: Maximum tokens
            temperature: Temperature (0-1)
            api_key: Optional API key override
            function_name: Function name for logging (e.g., 'cluster_keywords')
            prompt_prefix: Optional prefix to add before prompt
            tracker: Optional ConsoleStepTracker instance for logging
            system_prompt: Optional system prompt for Claude

        Returns:
            Dict with 'content', 'input_tokens', 'output_tokens', 'total_tokens',
            'model', 'cost', 'error', 'api_id'

        Raises:
            ValueError: If model is not provided
        """
        # Use the provided tracker or create a new one
        if tracker is None:
            tracker = ConsoleStepTracker(function_name)

        tracker.ai_call("Preparing Anthropic request...")

        # Step 1: Validate model is provided
        if not model:
            error_msg = "Model is required. Ensure IntegrationSettings is configured for the account."
            tracker.error('ConfigurationError', error_msg)
            logger.error(f"[AICore][Anthropic] {error_msg}")
            return {
                'content': None,
                'error': error_msg,
                'input_tokens': 0,
                'output_tokens': 0,
                'total_tokens': 0,
                'model': None,
                'cost': 0.0,
                'api_id': None,
            }

        # Step 2: Validate API key
        api_key = api_key or self._anthropic_api_key
        if not api_key:
            error_msg = 'Anthropic API key not configured'
            tracker.error('ConfigurationError', error_msg)
            return {
'content': None,
|
||||
'error': error_msg,
|
||||
'input_tokens': 0,
|
||||
'output_tokens': 0,
|
||||
'total_tokens': 0,
|
||||
'model': model,
|
||||
'cost': 0.0,
|
||||
'api_id': None,
|
||||
}
|
||||
|
||||
active_model = model
|
||||
|
||||
# Debug logging: Show model used
|
||||
logger.info(f"[AICore][Anthropic] Model Configuration:")
|
||||
logger.info(f" - Model parameter passed: {model}")
|
||||
logger.info(f" - Model used in request: {active_model}")
|
||||
tracker.ai_call(f"Using Anthropic model: {active_model}")
|
||||
|
||||
# Add prompt_prefix to prompt if provided (for tracking)
|
||||
final_prompt = prompt
|
||||
if prompt_prefix:
|
||||
final_prompt = f'{prompt_prefix}\n\n{prompt}'
|
||||
tracker.ai_call(f"Added prompt prefix: {prompt_prefix}")
|
||||
|
||||
# Step 5: Build request payload using Anthropic Messages API
|
||||
url = 'https://api.anthropic.com/v1/messages'
|
||||
headers = {
|
||||
'x-api-key': api_key,
|
||||
'anthropic-version': '2023-06-01',
|
||||
'Content-Type': 'application/json',
|
||||
}
|
||||
|
||||
body_data = {
|
||||
'model': active_model,
|
||||
'max_tokens': max_tokens,
|
||||
'messages': [{'role': 'user', 'content': final_prompt}],
|
||||
}
|
||||
|
||||
# Only add temperature if it's less than 1.0 (Claude's default)
|
||||
if temperature < 1.0:
|
||||
body_data['temperature'] = temperature
|
||||
|
||||
# Add system prompt if provided
|
||||
if system_prompt:
|
||||
body_data['system'] = system_prompt
|
||||
|
||||
tracker.ai_call(f"Request payload prepared (model={active_model}, max_tokens={max_tokens}, temp={temperature})")
|
||||
|
||||
# Step 6: Send request
|
||||
tracker.ai_call("Sending request to Anthropic API...")
|
||||
request_start = time.time()
|
||||
|
||||
try:
|
||||
response = requests.post(url, headers=headers, json=body_data, timeout=180)
|
||||
request_duration = time.time() - request_start
|
||||
tracker.ai_call(f"Received response in {request_duration:.2f}s (status={response.status_code})")
|
||||
|
||||
# Step 7: Validate HTTP response
|
||||
if response.status_code != 200:
|
||||
error_data = response.json() if response.headers.get('content-type', '').startswith('application/json') else {}
|
||||
error_message = f"HTTP {response.status_code} error"
|
||||
|
||||
if isinstance(error_data, dict) and 'error' in error_data:
|
||||
if isinstance(error_data['error'], dict) and 'message' in error_data['error']:
|
||||
error_message += f": {error_data['error']['message']}"
|
||||
|
||||
# Check for rate limit
|
||||
if response.status_code == 429:
|
||||
retry_after = response.headers.get('retry-after', '60')
|
||||
tracker.rate_limit(retry_after)
|
||||
error_message += f" (Rate limit - retry after {retry_after}s)"
|
||||
else:
|
||||
tracker.error('HTTPError', error_message)
|
||||
|
||||
logger.error(f"Anthropic API HTTP error {response.status_code}: {error_message}")
|
||||
|
||||
return {
|
||||
'content': None,
|
||||
'error': error_message,
|
||||
'input_tokens': 0,
|
||||
'output_tokens': 0,
|
||||
'total_tokens': 0,
|
||||
'model': active_model,
|
||||
'cost': 0.0,
|
||||
'api_id': None,
|
||||
}
|
||||
|
||||
# Step 8: Parse response JSON
|
||||
try:
|
||||
data = response.json()
|
||||
except json.JSONDecodeError as e:
|
||||
error_msg = f'Failed to parse JSON response: {str(e)}'
|
||||
tracker.malformed_json(str(e))
|
||||
logger.error(error_msg)
|
||||
return {
|
||||
'content': None,
|
||||
'error': error_msg,
|
||||
'input_tokens': 0,
|
||||
'output_tokens': 0,
|
||||
'total_tokens': 0,
|
||||
'model': active_model,
|
||||
'cost': 0.0,
|
||||
'api_id': None,
|
||||
}
|
||||
|
||||
api_id = data.get('id')
|
||||
|
||||
# Step 9: Extract content (Anthropic format)
|
||||
# Claude returns content as array: [{"type": "text", "text": "..."}]
|
||||
if 'content' in data and len(data['content']) > 0:
|
||||
# Extract text from first content block
|
||||
content_blocks = data['content']
|
||||
content = ''
|
||||
for block in content_blocks:
|
||||
if block.get('type') == 'text':
|
||||
content += block.get('text', '')
|
||||
|
||||
usage = data.get('usage', {})
|
||||
input_tokens = usage.get('input_tokens', 0)
|
||||
output_tokens = usage.get('output_tokens', 0)
|
||||
total_tokens = input_tokens + output_tokens
|
||||
|
||||
tracker.parse(f"Received {total_tokens} tokens (input: {input_tokens}, output: {output_tokens})")
|
||||
tracker.parse(f"Content length: {len(content)} characters")
|
||||
|
||||
# Step 10: Calculate cost using ModelRegistry (with fallback)
|
||||
# Claude pricing as of 2024:
|
||||
# claude-3-5-sonnet: $3/1M input, $15/1M output
|
||||
# claude-3-opus: $15/1M input, $75/1M output
|
||||
# claude-3-haiku: $0.25/1M input, $1.25/1M output
|
||||
from igny8_core.ai.model_registry import ModelRegistry
|
||||
cost = float(ModelRegistry.calculate_cost(
|
||||
active_model,
|
||||
input_tokens=input_tokens,
|
||||
output_tokens=output_tokens
|
||||
))
|
||||
# Fallback to hardcoded rates if ModelRegistry returns 0
|
||||
if cost == 0:
|
||||
anthropic_rates = {
|
||||
'claude-3-5-sonnet-20241022': {'input': 3.00, 'output': 15.00},
|
||||
'claude-3-5-haiku-20241022': {'input': 1.00, 'output': 5.00},
|
||||
'claude-3-opus-20240229': {'input': 15.00, 'output': 75.00},
|
||||
'claude-3-sonnet-20240229': {'input': 3.00, 'output': 15.00},
|
||||
'claude-3-haiku-20240307': {'input': 0.25, 'output': 1.25},
|
||||
}
|
||||
rates = anthropic_rates.get(active_model, {'input': 3.00, 'output': 15.00})
|
||||
cost = (input_tokens * rates['input'] + output_tokens * rates['output']) / 1_000_000
|
||||
tracker.parse(f"Cost calculated: ${cost:.6f}")
|
||||
|
||||
tracker.done("Anthropic request completed successfully")
|
||||
|
||||
return {
|
||||
'content': content,
|
||||
'input_tokens': input_tokens,
|
||||
'output_tokens': output_tokens,
|
||||
'total_tokens': total_tokens,
|
||||
'model': active_model,
|
||||
'cost': cost,
|
||||
'error': None,
|
||||
'api_id': api_id,
|
||||
'duration': request_duration,
|
||||
}
|
||||
else:
|
||||
error_msg = 'No content in Anthropic response'
|
||||
tracker.error('EmptyResponse', error_msg)
|
||||
logger.error(error_msg)
|
||||
return {
|
||||
'content': None,
|
||||
'error': error_msg,
|
||||
'input_tokens': 0,
|
||||
'output_tokens': 0,
|
||||
'total_tokens': 0,
|
||||
'model': active_model,
|
||||
'cost': 0.0,
|
||||
'api_id': api_id,
|
||||
}
|
||||
|
||||
except requests.exceptions.Timeout:
|
||||
error_msg = 'Request timeout (180s exceeded)'
|
||||
tracker.timeout(180)
|
||||
logger.error(error_msg)
|
||||
return {
|
||||
'content': None,
|
||||
'error': error_msg,
|
||||
'input_tokens': 0,
|
||||
'output_tokens': 0,
|
||||
'total_tokens': 0,
|
||||
'model': active_model,
|
||||
'cost': 0.0,
|
||||
'api_id': None,
|
||||
}
|
||||
except requests.exceptions.RequestException as e:
|
||||
error_msg = f'Request exception: {str(e)}'
|
||||
tracker.error('RequestException', error_msg, e)
|
||||
logger.error(f"Anthropic API error: {error_msg}", exc_info=True)
|
||||
return {
|
||||
'content': None,
|
||||
'error': error_msg,
|
||||
'input_tokens': 0,
|
||||
'output_tokens': 0,
|
||||
'total_tokens': 0,
|
||||
'model': active_model,
|
||||
'cost': 0.0,
|
||||
'api_id': None,
|
||||
}
|
||||
except Exception as e:
|
||||
error_msg = f'Unexpected error: {str(e)}'
|
||||
logger.error(f"[AI][{function_name}][Anthropic][Error] {error_msg}", exc_info=True)
|
||||
if tracker:
|
||||
tracker.error('UnexpectedError', error_msg, e)
|
||||
return {
|
||||
'content': None,
|
||||
'error': error_msg,
|
||||
'input_tokens': 0,
|
||||
'output_tokens': 0,
|
||||
'total_tokens': 0,
|
||||
'model': active_model,
|
||||
'cost': 0.0,
|
||||
'api_id': None,
|
||||
}
|
||||
|
||||
def extract_json(self, response_text: str) -> Optional[Dict]:
|
||||
"""
|
||||
Extract JSON from response text.
|
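The payload rules for the Anthropic Messages API request (temperature only included when below Claude's default of 1.0, `system` only when provided) can be isolated into a small builder. The function name is illustrative:

```python
from typing import Any, Dict, Optional

def build_anthropic_body(model: str, prompt: str,
                         max_tokens: int = 8192,
                         temperature: float = 0.7,
                         system_prompt: Optional[str] = None) -> Dict[str, Any]:
    """Build a Messages API body; omit defaults the API already assumes."""
    body: Dict[str, Any] = {
        'model': model,
        'max_tokens': max_tokens,
        'messages': [{'role': 'user', 'content': prompt}],
    }
    if temperature < 1.0:  # 1.0 is Claude's default, so leave it out then
        body['temperature'] = temperature
    if system_prompt:      # 'system' is a top-level field, not a message
        body['system'] = system_prompt
    return body
```

Keeping the builder pure makes the conditional-field logic trivially testable without touching the network.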
@@ -416,7 +713,8 @@ class AICore:
         n: int = 1,
         api_key: Optional[str] = None,
         negative_prompt: Optional[str] = None,
-        function_name: str = 'generate_image'
+        function_name: str = 'generate_image',
+        style: Optional[str] = None
     ) -> Dict[str, Any]:
         """
         Generate image using AI with console logging.
@@ -437,9 +735,11 @@ class AICore:
         print(f"[AI][{function_name}] Step 1: Preparing image generation request...")

         if provider == 'openai':
-            return self._generate_image_openai(prompt, model, size, n, api_key, negative_prompt, function_name)
+            return self._generate_image_openai(prompt, model, size, n, api_key, negative_prompt, function_name, style)
         elif provider == 'runware':
            return self._generate_image_runware(prompt, model, size, n, api_key, negative_prompt, function_name)
+        elif provider == 'bria':
+            return self._generate_image_bria(prompt, model, size, n, api_key, negative_prompt, function_name)
         else:
             error_msg = f'Unknown provider: {provider}'
             print(f"[AI][{function_name}][Error] {error_msg}")
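As the list of providers grows, the if/elif chain could be replaced by a table-driven dispatch. A minimal sketch under that assumption (handler names are illustrative, not from the codebase):

```python
from typing import Callable, Dict, Optional

def pick_provider_handler(provider: str,
                          handlers: Dict[str, Callable]) -> Optional[Callable]:
    """Look up the image-generation handler for a provider; None if unknown."""
    return handlers.get(provider)

# Hypothetical registry mirroring the three branches above.
handlers = {
    'openai': lambda: 'openai-handler',
    'runware': lambda: 'runware-handler',
    'bria': lambda: 'bria-handler',
}
```

A `None` result maps naturally onto the existing "Unknown provider" error branch.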
@@ -459,9 +759,15 @@ class AICore:
         n: int,
         api_key: Optional[str],
         negative_prompt: Optional[str],
-        function_name: str
+        function_name: str,
+        style: Optional[str] = None
     ) -> Dict[str, Any]:
-        """Generate image using OpenAI DALL-E"""
+        """Generate image using OpenAI DALL-E
+
+        Args:
+            style: For DALL-E 3 only. 'vivid' (hyper-real/dramatic) or 'natural' (more realistic).
+                Default is 'natural' for realistic photos.
+        """
         print(f"[AI][{function_name}] Provider: OpenAI")

         # Determine character limit based on model
@@ -546,6 +852,15 @@ class AICore:
             'size': size
         }

+        # For DALL-E 3, add style parameter
+        # 'natural' = more realistic photos, 'vivid' = hyper-real/dramatic
+        if model == 'dall-e-3':
+            # Default to 'natural' for realistic images, but respect user preference
+            dalle_style = style if style in ['vivid', 'natural'] else 'natural'
+            data['style'] = dalle_style
+            data['quality'] = 'hd'  # Always use HD quality for best results
+            print(f"[AI][{function_name}] DALL-E 3 style: {dalle_style}, quality: hd")
+
         if negative_prompt:
             # Note: OpenAI DALL-E doesn't support negative_prompt in API, but we log it
             print(f"[AI][{function_name}] Note: Negative prompt provided but OpenAI DALL-E doesn't support it")
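The style validation in that branch (accept only the two values DALL-E 3 recognizes, fall back to `'natural'`) reduces to a one-liner worth testing in isolation. The helper name is illustrative:

```python
from typing import Optional

def dalle3_style(style: Optional[str]) -> str:
    """Return a valid DALL-E 3 style, defaulting to 'natural' for anything else."""
    return style if style in ('vivid', 'natural') else 'natural'
```

Clamping rather than rejecting means a caller passing an unsupported style (or `None`) still gets a valid request instead of an API error.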
@@ -578,7 +893,9 @@ class AICore:
         image_url = image_data.get('url')
         revised_prompt = image_data.get('revised_prompt')

-        cost = IMAGE_MODEL_RATES.get(model, 0.040) * n
+        # Use ModelRegistry for image cost (database-driven)
+        from igny8_core.ai.model_registry import ModelRegistry
+        cost = float(ModelRegistry.calculate_cost(model, num_images=n))
         print(f"[AI][{function_name}] Step 5: Image generated successfully")
         print(f"[AI][{function_name}] Step 6: Cost: ${cost:.4f}")
         print(f"[AI][{function_name}][Success] Image generation completed")
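The removed hardcoded image-cost path was a flat per-image rate times the image count. As a standalone sketch using the deprecated rate table from this changeset:

```python
# Dollars per image, from the deprecated IMAGE_MODEL_RATES table.
IMAGE_RATES = {'dall-e-3': 0.040, 'dall-e-2': 0.020}

def image_cost(model: str, n: int) -> float:
    """Per-image rate times image count; unknown models fall back to $0.040."""
    return IMAGE_RATES.get(model, 0.040) * n
```

`ModelRegistry.calculate_cost(model, num_images=n)` presumably does the same multiplication with a database-backed rate.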
@@ -670,24 +987,57 @@ class AICore:
         # Runware uses array payload with authentication task first, then imageInference
         # Reference: image-generation.php lines 79-97
         import uuid

+        # Build base inference task
+        inference_task = {
+            'taskType': 'imageInference',
+            'taskUUID': str(uuid.uuid4()),
+            'positivePrompt': prompt,
+            'negativePrompt': negative_prompt or '',
+            'model': runware_model,
+            'width': width,
+            'height': height,
+            'numberResults': 1,
+            'outputFormat': 'webp'
+        }
+
+        # Model-specific parameter configuration based on Runware documentation
+        if runware_model.startswith('bria:'):
+            # Bria 3.2 (bria:10@1) - Commercial-ready, steps 20-50 (API requires minimum 20)
+            inference_task['steps'] = 20
+            # Enhanced negative prompt for Bria to prevent disfigured images
+            enhanced_negative = (negative_prompt or '') + ', disfigured, deformed, bad anatomy, wrong anatomy, extra limbs, missing limbs, floating limbs, mutated hands, extra fingers, missing fingers, fused fingers, poorly drawn hands, poorly drawn face, mutation, ugly, blurry, low quality, worst quality, jpeg artifacts, watermark, text, signature'
+            inference_task['negativePrompt'] = enhanced_negative
+            # Bria provider settings for enhanced quality
+            inference_task['providerSettings'] = {
+                'bria': {
+                    'promptEnhancement': True,
+                    'enhanceImage': True,
+                    'medium': 'photography',
+                    'contentModeration': True
+                }
+            }
+            print(f"[AI][{function_name}] Using Bria 3.2 config: steps=20, enhanced negative prompt, providerSettings enabled")
+        elif runware_model.startswith('google:'):
+            # Nano Banana (google:4@2) - Premium quality
+            # Google models use 'resolution' parameter INSTEAD of width/height
+            # Remove width/height and use resolution only
+            del inference_task['width']
+            del inference_task['height']
+            inference_task['resolution'] = '1k'  # Use 1K tier for optimal speed/quality
+            print(f"[AI][{function_name}] Using Nano Banana config: resolution=1k (no width/height)")
+        else:
+            # Hi Dream Full (runware:97@1) - General diffusion, steps 20, CFGScale 7
+            inference_task['steps'] = 20
+            inference_task['CFGScale'] = 7
+            print(f"[AI][{function_name}] Using Hi Dream Full config: steps=20, CFGScale=7")
+
         payload = [
             {
                 'taskType': 'authentication',
                 'apiKey': api_key
             },
-            {
-                'taskType': 'imageInference',
-                'taskUUID': str(uuid.uuid4()),
-                'positivePrompt': prompt,
-                'negativePrompt': negative_prompt or '',
-                'model': runware_model,
-                'width': width,
-                'height': height,
-                'steps': 30,
-                'CFGScale': 7.5,
-                'numberResults': 1,
-                'outputFormat': 'webp'
-            }
+            inference_task
         ]

         request_start = time.time()
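The per-model branching above (Bria needs a steps floor of 20, Google models take a `resolution` tier instead of `width`/`height`, everything else gets steps plus CFGScale) can be sketched as a pure builder. This mirrors the diff's logic under the stated assumptions; it is not the codebase's own function:

```python
import uuid
from typing import Any, Dict

def build_runware_task(model: str, prompt: str,
                       width: int, height: int, negative: str = '') -> Dict[str, Any]:
    """Build a Runware imageInference task with model-specific parameters."""
    task: Dict[str, Any] = {
        'taskType': 'imageInference',
        'taskUUID': str(uuid.uuid4()),
        'positivePrompt': prompt,
        'negativePrompt': negative,
        'model': model,
        'width': width,
        'height': height,
        'numberResults': 1,
        'outputFormat': 'webp',
    }
    if model.startswith('bria:'):
        task['steps'] = 20                     # Bria's API requires a minimum of 20 steps
    elif model.startswith('google:'):
        del task['width'], task['height']      # Google models take 'resolution' instead
        task['resolution'] = '1k'
    else:
        task['steps'] = 20                     # general diffusion defaults
        task['CFGScale'] = 7
    return task
```

The full payload is then `[{'taskType': 'authentication', 'apiKey': ...}, task]`, with authentication first as the code's comment notes.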
@@ -697,7 +1047,29 @@ class AICore:
             print(f"[AI][{function_name}] Step 4: Received response in {request_duration:.2f}s (status={response.status_code})")

             if response.status_code != 200:
-                error_msg = f"HTTP {response.status_code} error"
+                # Log the full error response for debugging
+                try:
+                    error_body = response.json()
+                    print(f"[AI][{function_name}][Error] Runware error response: {error_body}")
+                    logger.error(f"[AI][{function_name}] Runware HTTP {response.status_code} error body: {error_body}")
+
+                    # Extract specific error message from Runware response
+                    error_detail = None
+                    if isinstance(error_body, list):
+                        for item in error_body:
+                            if isinstance(item, dict) and 'errors' in item:
+                                errors = item['errors']
+                                if isinstance(errors, list) and len(errors) > 0:
+                                    err = errors[0]
+                                    error_detail = err.get('message') or err.get('error') or str(err)
+                                    break
+                    elif isinstance(error_body, dict):
+                        error_detail = error_body.get('message') or error_body.get('error') or str(error_body)
+
+                    error_msg = f"HTTP {response.status_code}: {error_detail}" if error_detail else f"HTTP {response.status_code} error"
+                except Exception as e:
+                    error_msg = f"HTTP {response.status_code} error (could not parse response: {e})"
+
                 print(f"[AI][{function_name}][Error] {error_msg}")
                 return {
                     'url': None,
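The error-detail extraction added in that hunk (Runware may return either a list of tasks carrying an `errors` array, or a plain dict) is easy to factor out and test. The function name is illustrative:

```python
from typing import Any, Optional

def extract_runware_error(body: Any) -> Optional[str]:
    """Pull the first error message from a Runware error body (list or dict)."""
    if isinstance(body, list):
        for item in body:
            errors = item.get('errors') if isinstance(item, dict) else None
            if isinstance(errors, list) and errors:
                err = errors[0]
                return err.get('message') or err.get('error') or str(err)
    elif isinstance(body, dict):
        return body.get('message') or body.get('error')
    return None
```

Returning `None` for unrecognized shapes lets the caller fall back to the generic `HTTP <status> error` message, exactly as the diff does.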
@@ -813,16 +1185,178 @@ class AICore:
                 'error': error_msg,
             }

+    def _generate_image_bria(
+        self,
+        prompt: str,
+        model: Optional[str],
+        size: str,
+        n: int,
+        api_key: Optional[str],
+        negative_prompt: Optional[str],
+        function_name: str
+    ) -> Dict[str, Any]:
+        """
+        Generate image using Bria AI.
+
+        Bria API Reference: https://docs.bria.ai/reference/text-to-image
+        """
+        print(f"[AI][{function_name}] Provider: Bria AI")
+
+        api_key = api_key or self._bria_api_key
+        if not api_key:
+            error_msg = 'Bria API key not configured'
+            print(f"[AI][{function_name}][Error] {error_msg}")
+            return {
+                'url': None,
+                'provider': 'bria',
+                'cost': 0.0,
+                'error': error_msg,
+            }
+
+        bria_model = model or 'bria-2.3'
+        print(f"[AI][{function_name}] Step 2: Using model: {bria_model}, size: {size}")
+
+        # Parse size
+        try:
+            width, height = map(int, size.split('x'))
+        except ValueError:
+            error_msg = f"Invalid size format: {size}. Expected format: WIDTHxHEIGHT"
+            print(f"[AI][{function_name}][Error] {error_msg}")
+            return {
+                'url': None,
+                'provider': 'bria',
+                'cost': 0.0,
+                'error': error_msg,
+            }
+
+        # Bria API endpoint
+        url = 'https://engine.prod.bria-api.com/v1/text-to-image/base'
+        headers = {
+            'api_token': api_key,
+            'Content-Type': 'application/json'
+        }
+
+        payload = {
+            'prompt': prompt,
+            'num_results': n,
+            'sync': True,  # Wait for result
+            'model_version': bria_model.replace('bria-', ''),  # e.g., '2.3'
+        }
+
+        # Add negative prompt if provided
+        if negative_prompt:
+            payload['negative_prompt'] = negative_prompt
+
+        # Add size constraints if not default
+        if width and height:
+            # Bria uses aspect ratio or fixed sizes
+            payload['width'] = width
+            payload['height'] = height
+
+        print(f"[AI][{function_name}] Step 3: Sending request to Bria API...")
+
+        request_start = time.time()
+        try:
+            response = requests.post(url, json=payload, headers=headers, timeout=150)
+            request_duration = time.time() - request_start
+            print(f"[AI][{function_name}] Step 4: Received response in {request_duration:.2f}s (status={response.status_code})")
+
+            if response.status_code != 200:
+                error_msg = f"HTTP {response.status_code} error: {response.text[:200]}"
+                print(f"[AI][{function_name}][Error] {error_msg}")
+                return {
+                    'url': None,
+                    'provider': 'bria',
+                    'cost': 0.0,
+                    'error': error_msg,
+                }
+
+            body = response.json()
+            print(f"[AI][{function_name}] Bria response keys: {list(body.keys()) if isinstance(body, dict) else type(body)}")
+
+            # Bria returns { "result": [ { "urls": ["..."] } ] }
+            image_url = None
+            error_msg = None
+
+            if isinstance(body, dict):
+                if 'result' in body and isinstance(body['result'], list) and len(body['result']) > 0:
+                    first_result = body['result'][0]
+                    if 'urls' in first_result and isinstance(first_result['urls'], list) and len(first_result['urls']) > 0:
+                        image_url = first_result['urls'][0]
+                    elif 'url' in first_result:
+                        image_url = first_result['url']
+                elif 'error' in body:
+                    error_msg = body['error']
+                elif 'message' in body:
+                    error_msg = body['message']
+
+            if error_msg:
+                print(f"[AI][{function_name}][Error] Bria API error: {error_msg}")
+                return {
+                    'url': None,
+                    'provider': 'bria',
+                    'cost': 0.0,
+                    'error': error_msg,
+                }
+
+            if image_url:
+                # Cost based on model
+                cost_per_image = {
+                    'bria-2.3': 0.015,
+                    'bria-2.3-fast': 0.010,
+                    'bria-2.2': 0.012,
+                }.get(bria_model, 0.015)
+                cost = cost_per_image * n
+
+                print(f"[AI][{function_name}] Step 5: Image generated successfully")
+                print(f"[AI][{function_name}] Step 6: Cost: ${cost:.4f}")
+                print(f"[AI][{function_name}][Success] Image generation completed")
+
+                return {
+                    'url': image_url,
+                    'provider': 'bria',
+                    'cost': cost,
+                    'error': None,
+                }
+            else:
+                error_msg = 'No image data in Bria response'
+                print(f"[AI][{function_name}][Error] {error_msg}")
+                logger.error(f"[AI][{function_name}] Full Bria response: {json.dumps(body, indent=2) if isinstance(body, dict) else str(body)}")
+                return {
+                    'url': None,
+                    'provider': 'bria',
+                    'cost': 0.0,
+                    'error': error_msg,
+                }
+
+        except requests.exceptions.Timeout:
+            error_msg = 'Request timeout (150s exceeded)'
+            print(f"[AI][{function_name}][Error] {error_msg}")
+            return {
+                'url': None,
+                'provider': 'bria',
+                'cost': 0.0,
+                'error': error_msg,
+            }
+        except Exception as e:
+            error_msg = f'Unexpected error: {str(e)}'
+            print(f"[AI][{function_name}][Error] {error_msg}")
+            logger.error(error_msg, exc_info=True)
+            return {
+                'url': None,
+                'provider': 'bria',
+                'cost': 0.0,
+                'error': error_msg,
+            }

     def calculate_cost(self, model: str, input_tokens: int, output_tokens: int, model_type: str = 'text') -> float:
-        """Calculate cost for API call"""
+        """Calculate cost for API call using ModelRegistry (database-driven)"""
+        from igny8_core.ai.model_registry import ModelRegistry
+
         if model_type == 'text':
-            rates = MODEL_RATES.get(model, {'input': 2.00, 'output': 8.00})
-            input_cost = (input_tokens / 1_000_000) * rates['input']
-            output_cost = (output_tokens / 1_000_000) * rates['output']
-            return input_cost + output_cost
+            return float(ModelRegistry.calculate_cost(model, input_tokens=input_tokens, output_tokens=output_tokens))
         elif model_type == 'image':
-            rate = IMAGE_MODEL_RATES.get(model, 0.040)
-            return rate * 1
+            return float(ModelRegistry.calculate_cost(model, num_images=1))
         return 0.0

     # Legacy method names for backward compatibility
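The Bria response shape documented in the new method — `{"result": [{"urls": ["..."]}]}` with a single-`url` fallback — lends itself to a small parser that can be unit-tested without calling the API. The helper name is illustrative:

```python
from typing import Any, Optional

def extract_bria_url(body: Any) -> Optional[str]:
    """Return the first image URL from a Bria text-to-image response, else None."""
    if isinstance(body, dict):
        result = body.get('result')
        if isinstance(result, list) and result:
            first = result[0]
            urls = first.get('urls')
            if isinstance(urls, list) and urls:
                return urls[0]
            return first.get('url')  # some responses carry a single 'url' key
    return None
```

The defensive `isinstance` checks mirror the method's own parsing, so malformed bodies degrade to `None` rather than raising.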
@@ -1,7 +1,17 @@
 """
-AI Constants - Model pricing, valid models, and configuration constants
+AI Constants - Configuration constants for AI operations
+
+NOTE: Model pricing (MODEL_RATES, IMAGE_MODEL_RATES) has been moved to the database
+via AIModelConfig. Use ModelRegistry to get model pricing:
+
+    from igny8_core.ai.model_registry import ModelRegistry
+    cost = ModelRegistry.calculate_cost(model_id, input_tokens=N, output_tokens=N)
+
+The constants below are DEPRECATED and kept only for reference/backward compatibility.
+Do NOT use MODEL_RATES or IMAGE_MODEL_RATES in new code.
 """
-# Model pricing (per 1M tokens) - EXACT from reference plugin model-rates-config.php
+# DEPRECATED - Use AIModelConfig database table instead
+# Model pricing (per 1M tokens) - kept for reference only
 MODEL_RATES = {
     'gpt-4.1': {'input': 2.00, 'output': 8.00},
     'gpt-4o-mini': {'input': 0.15, 'output': 0.60},
@@ -10,7 +20,8 @@ MODEL_RATES = {
     'gpt-5.2': {'input': 1.75, 'output': 14.00},
 }

-# Image model pricing (per image) - EXACT from reference plugin
+# DEPRECATED - Use AIModelConfig database table instead
+# Image model pricing (per image) - kept for reference only
 IMAGE_MODEL_RATES = {
     'dall-e-3': 0.040,
     'dall-e-2': 0.020,
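To make the deprecated rate table concrete, here is the legacy per-token cost lookup it supported, using only rates visible in this changeset:

```python
# Subset of the deprecated MODEL_RATES table (dollars per 1M tokens).
MODEL_RATES = {
    'gpt-4.1': {'input': 2.00, 'output': 8.00},
    'gpt-4o-mini': {'input': 0.15, 'output': 0.60},
}

def legacy_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Legacy lookup: unknown models fall back to gpt-4.1-level rates."""
    rates = MODEL_RATES.get(model, {'input': 2.00, 'output': 8.00})
    return (input_tokens / 1_000_000) * rates['input'] \
         + (output_tokens / 1_000_000) * rates['output']

print(f"${legacy_cost('gpt-4.1', 1000, 500):.6f}")  # → $0.006000
```

The database-backed replacement keeps the same formula but sources the rates from `AIModelConfig` rows instead of this frozen dict.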
@@ -31,11 +31,15 @@ class AIEngine:
         elif function_name == 'generate_ideas':
             return f"{count} cluster{'s' if count != 1 else ''}"
         elif function_name == 'generate_content':
-            return f"{count} task{'s' if count != 1 else ''}"
+            return f"{count} article{'s' if count != 1 else ''}"
         elif function_name == 'generate_images':
-            return f"{count} task{'s' if count != 1 else ''}"
+            return f"{count} image{'s' if count != 1 else ''}"
+        elif function_name == 'generate_image_prompts':
+            return f"{count} image prompt{'s' if count != 1 else ''}"
         elif function_name == 'optimize_content':
             return f"{count} article{'s' if count != 1 else ''}"
         elif function_name == 'generate_site_structure':
-            return "1 site blueprint"
+            return "site blueprint"
         return f"{count} item{'s' if count != 1 else ''}"

     def _build_validation_message(self, function_name: str, payload: dict, count: int, input_description: str) -> str:
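Every branch above repeats the same f-string pluralization idiom; it can be captured once in a helper (illustrative name, not from the codebase):

```python
def pluralize(count: int, noun: str) -> str:
    """The pattern used throughout: f"{count} {noun}{'s' if count != 1 else ''}"."""
    return f"{count} {noun}{'s' if count != 1 else ''}"

print(pluralize(1, 'article'))  # → 1 article
print(pluralize(3, 'image'))    # → 3 images
```

Note the naive `+ 's'` rule works for the nouns used here (article, image, cluster, item) but not for irregular plurals, which is presumably why each branch keeps its own wording.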
@@ -51,12 +55,22 @@ class AIEngine:
                 remaining = count - len(keyword_list)
                 if remaining > 0:
                     keywords_text = ', '.join(keyword_list)
-                    return f"Validating {keywords_text} and {remaining} more keyword{'s' if remaining != 1 else ''}"
+                    return f"Validating {count} keywords for clustering"
                 else:
                     keywords_text = ', '.join(keyword_list)
                     return f"Validating {keywords_text}"
             except Exception as e:
                 logger.warning(f"Failed to load keyword names for validation message: {e}")
+        elif function_name == 'generate_ideas':
+            return f"Analyzing {count} clusters for content opportunities"
+        elif function_name == 'generate_content':
+            return f"Preparing {count} article{'s' if count != 1 else ''} for generation"
+        elif function_name == 'generate_image_prompts':
+            return f"Analyzing content for image opportunities"
+        elif function_name == 'generate_images':
+            return f"Queuing {count} image{'s' if count != 1 else ''} for generation"
+        elif function_name == 'optimize_content':
+            return f"Analyzing {count} article{'s' if count != 1 else ''} for optimization"

         # Fallback to simple count message
         return f"Validating {input_description}"
@@ -64,24 +78,33 @@ class AIEngine:
     def _get_prep_message(self, function_name: str, count: int, data: Any) -> str:
         """Get user-friendly prep message"""
         if function_name == 'auto_cluster':
-            return f"Loading {count} keyword{'s' if count != 1 else ''}"
+            return f"Analyzing keyword relationships for {count} keyword{'s' if count != 1 else ''}"
         elif function_name == 'generate_ideas':
-            return f"Loading {count} cluster{'s' if count != 1 else ''}"
+            # Count keywords in clusters if available
+            keyword_count = 0
+            if isinstance(data, dict) and 'cluster_data' in data:
+                for cluster in data['cluster_data']:
+                    keyword_count += len(cluster.get('keywords', []))
+            if keyword_count > 0:
+                return f"Mapping {keyword_count} keywords to topic briefs"
+            return f"Mapping keywords to topic briefs for {count} cluster{'s' if count != 1 else ''}"
         elif function_name == 'generate_content':
-            return f"Preparing {count} content idea{'s' if count != 1 else ''}"
+            return f"Building content brief{'s' if count != 1 else ''} with target keywords"
         elif function_name == 'generate_images':
-            return f"Extracting image prompts from {count} task{'s' if count != 1 else ''}"
+            return f"Preparing AI image generation ({count} image{'s' if count != 1 else ''})"
         elif function_name == 'generate_image_prompts':
             # Extract max_images from data if available
             if isinstance(data, list) and len(data) > 0:
                 max_images = data[0].get('max_images')
-                total_images = 1 + max_images  # 1 featured + max_images in-article
-                return f"Mapping Content for {total_images} Image Prompts"
+                return f"Identifying 1 featured + {max_images} in-article image slots"
             elif isinstance(data, dict) and 'max_images' in data:
                 max_images = data.get('max_images')
-                total_images = 1 + max_images
-                return f"Mapping Content for {total_images} Image Prompts"
+                return f"Identifying 1 featured + {max_images} in-article image slots"
-            return f"Mapping Content for Image Prompts"
+            return f"Identifying featured and in-article image slots"
         elif function_name == 'optimize_content':
             return f"Analyzing SEO factors for {count} article{'s' if count != 1 else ''}"
         elif function_name == 'generate_site_structure':
             blueprint_name = ''
             if isinstance(data, dict):
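The keyword tally added to the `generate_ideas` branch (summing `keywords` lists across `cluster_data` entries) is self-contained enough to lift out. The helper name is illustrative:

```python
from typing import Any

def count_cluster_keywords(data: Any) -> int:
    """Sum keyword counts across cluster_data entries; 0 if the shape is absent."""
    keyword_count = 0
    if isinstance(data, dict) and 'cluster_data' in data:
        for cluster in data['cluster_data']:
            keyword_count += len(cluster.get('keywords', []))
    return keyword_count
```

A zero result triggers the fallback message that counts clusters instead of keywords, matching the branch logic above.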
@@ -94,13 +117,17 @@ class AIEngine:
     def _get_ai_call_message(self, function_name: str, count: int) -> str:
         """Get user-friendly AI call message"""
         if function_name == 'auto_cluster':
-            return f"Grouping {count} keyword{'s' if count != 1 else ''} into clusters"
+            return f"Grouping {count} keywords by search intent"
         elif function_name == 'generate_ideas':
             return f"Generating content ideas for {count} cluster{'s' if count != 1 else ''}"
         elif function_name == 'generate_content':
-            return f"Writing article{'s' if count != 1 else ''} with AI"
+            return f"Writing {count} article{'s' if count != 1 else ''} with AI"
         elif function_name == 'generate_images':
-            return f"Creating image{'s' if count != 1 else ''} with AI"
+            return f"Generating image{'s' if count != 1 else ''} with AI"
+        elif function_name == 'generate_image_prompts':
+            return f"Creating optimized prompts for {count} image{'s' if count != 1 else ''}"
         elif function_name == 'optimize_content':
             return f"Optimizing {count} article{'s' if count != 1 else ''} for SEO"
+        elif function_name == 'generate_site_structure':
+            return "Designing complete site architecture"
         return f"Processing with AI"
@@ -108,13 +135,17 @@ class AIEngine:
     def _get_parse_message(self, function_name: str) -> str:
         """Get user-friendly parse message"""
         if function_name == 'auto_cluster':
-            return "Organizing clusters"
+            return "Organizing semantic clusters"
         elif function_name == 'generate_ideas':
-            return "Structuring outlines"
+            return "Structuring article outlines"
         elif function_name == 'generate_content':
-            return "Formatting content"
+            return "Formatting HTML content and metadata"
         elif function_name == 'generate_images':
-            return "Processing images"
+            return "Processing generated images"
+        elif function_name == 'generate_image_prompts':
+            return "Refining contextual image descriptions"
         elif function_name == 'optimize_content':
             return "Compiling optimization scores"
+        elif function_name == 'generate_site_structure':
+            return "Compiling site map"
         return "Processing results"
||||
@@ -122,19 +153,21 @@ class AIEngine:
|
||||
     def _get_parse_message_with_count(self, function_name: str, count: int) -> str:
         """Get user-friendly parse message with count"""
         if function_name == 'auto_cluster':
-            return f"{count} cluster{'s' if count != 1 else ''} created"
+            return f"Organizing {count} semantic cluster{'s' if count != 1 else ''}"
         elif function_name == 'generate_ideas':
-            return f"{count} idea{'s' if count != 1 else ''} created"
+            return f"Structuring {count} article outline{'s' if count != 1 else ''}"
         elif function_name == 'generate_content':
-            return f"{count} article{'s' if count != 1 else ''} created"
+            return f"Formatting {count} article{'s' if count != 1 else ''}"
         elif function_name == 'generate_images':
-            return f"{count} image{'s' if count != 1 else ''} created"
+            return f"Processing {count} generated image{'s' if count != 1 else ''}"
         elif function_name == 'generate_image_prompts':
             # Count is total prompts, in-article is count - 1 (subtract featured)
             in_article_count = max(0, count - 1)
             if in_article_count > 0:
-                return f"Writing {in_article_count} In-article Image Prompts"
-            return "Writing In-article Image Prompts"
+                return f"Refining {in_article_count} in-article image description{'s' if in_article_count != 1 else ''}"
+            return "Refining image descriptions"
         elif function_name == 'optimize_content':
             return f"Compiling scores for {count} article{'s' if count != 1 else ''}"
         elif function_name == 'generate_site_structure':
             return f"{count} page blueprint{'s' if count != 1 else ''} mapped"
         return f"{count} item{'s' if count != 1 else ''} processed"
@@ -142,20 +175,50 @@ class AIEngine:
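The `{'s' if count != 1 else ''}` expression repeated throughout these messages can be factored into a tiny helper; a minimal sketch (the `pluralize` function is hypothetical, not part of this codebase):

```python
def pluralize(count: int, noun: str) -> str:
    # Hypothetical helper mirroring the repeated f-string pattern in the diff
    return f"{count} {noun}{'s' if count != 1 else ''}"

print(pluralize(1, "cluster"))   # 1 cluster
print(pluralize(3, "article"))   # 3 articles
```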
     def _get_save_message(self, function_name: str, count: int) -> str:
         """Get user-friendly save message"""
         if function_name == 'auto_cluster':
-            return f"Saving {count} cluster{'s' if count != 1 else ''}"
+            return f"Saving {count} cluster{'s' if count != 1 else ''} with keywords"
         elif function_name == 'generate_ideas':
-            return f"Saving {count} idea{'s' if count != 1 else ''}"
+            return f"Saving {count} idea{'s' if count != 1 else ''} with outlines"
         elif function_name == 'generate_content':
             return f"Saving {count} article{'s' if count != 1 else ''}"
         elif function_name == 'generate_images':
-            return f"Saving {count} image{'s' if count != 1 else ''}"
+            return f"Uploading {count} image{'s' if count != 1 else ''} to media library"
         elif function_name == 'generate_image_prompts':
             # Count is total prompts created
-            return f"Assigning {count} Prompts to Dedicated Slots"
+            in_article = max(0, count - 1)
+            return f"Assigning {count} prompts (1 featured + {in_article} in-article)"
         elif function_name == 'optimize_content':
             return f"Saving optimization scores for {count} article{'s' if count != 1 else ''}"
         elif function_name == 'generate_site_structure':
             return f"Publishing {count} page blueprint{'s' if count != 1 else ''}"
         return f"Saving {count} item{'s' if count != 1 else ''}"

     def _get_done_message(self, function_name: str, result: dict) -> str:
         """Get user-friendly completion message with counts"""
         count = result.get('count', 0)

         if function_name == 'auto_cluster':
             keyword_count = result.get('keywords_clustered', 0)
             return f"✓ Organized {keyword_count} keywords into {count} semantic cluster{'s' if count != 1 else ''}"
         elif function_name == 'generate_ideas':
             return f"✓ Created {count} content idea{'s' if count != 1 else ''} with detailed outlines"
         elif function_name == 'generate_content':
             total_words = result.get('total_words', 0)
             if total_words > 0:
                 return f"✓ Generated {count} article{'s' if count != 1 else ''} ({total_words:,} words)"
             return f"✓ Generated {count} article{'s' if count != 1 else ''}"
         elif function_name == 'generate_images':
             return f"✓ Generated and saved {count} AI image{'s' if count != 1 else ''}"
         elif function_name == 'generate_image_prompts':
             in_article = max(0, count - 1)
             return f"✓ Created {count} image prompt{'s' if count != 1 else ''} (1 featured + {in_article} in-article)"
         elif function_name == 'optimize_content':
             avg_score = result.get('average_score', 0)
             if avg_score > 0:
                 return f"✓ Optimized {count} article{'s' if count != 1 else ''} (avg score: {avg_score}%)"
             return f"✓ Optimized {count} article{'s' if count != 1 else ''}"
         elif function_name == 'generate_site_structure':
             return f"✓ Created {count} page blueprint{'s' if count != 1 else ''}"
         return f"✓ {count} item{'s' if count != 1 else ''} completed"
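The thousands separator in `_get_done_message` comes from Python's `:,` format spec; a quick standalone check of the exact pattern used for the content-generation message:

```python
# Same f-string shape as the generate_content branch of _get_done_message
count, total_words = 3, 4521
msg = f"✓ Generated {count} article{'s' if count != 1 else ''} ({total_words:,} words)"
print(msg)  # ✓ Generated 3 articles (4,521 words)
```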
     def execute(self, fn: BaseAIFunction, payload: dict) -> dict:
         """
         Unified execution pipeline for all AI functions.
@@ -243,12 +306,13 @@ class AIEngine:
         ai_core = AICore(account=self.account)
         function_name = fn.get_name()

-        # Generate function_id for tracking (ai-{function_name}-01)
-        # Normalize underscores to hyphens to match frontend tracking IDs
-        function_id_base = function_name.replace('_', '-')
-        function_id = f"ai-{function_id_base}-01-desktop"
+        # Generate prompt prefix for tracking (e.g., ##GP01-Clustering or ##CP01-Clustering)
+        # This replaces function_id and indicates whether prompt is global or custom
+        from igny8_core.ai.prompts import get_prompt_prefix_for_function
+        prompt_prefix = get_prompt_prefix_for_function(function_name, account=self.account)
+        logger.info(f"[AIEngine] Using prompt prefix: {prompt_prefix}")

         # Get model config from settings (requires account)
         # This will raise ValueError if IntegrationSettings not configured
         try:
@@ -286,7 +350,7 @@ class AIEngine:
                 temperature=model_config.get('temperature'),
                 response_format=model_config.get('response_format'),
                 function_name=function_name,
-                function_id=function_id  # Pass function_id for tracking
+                prompt_prefix=prompt_prefix  # Pass prompt prefix for tracking (replaces function_id)
             )
         except Exception as e:
             error_msg = f"AI call failed: {str(e)}"
@@ -411,13 +475,16 @@ class AIEngine:
             # Don't fail the operation if credit deduction fails (for backward compatibility)

             # Phase 6: DONE - Finalization (98-100%)
-            success_msg = f"Task completed: {final_save_msg}" if 'final_save_msg' in locals() else "Task completed successfully"
-            self.step_tracker.add_request_step("DONE", "success", "Task completed successfully")
-            self.tracker.update("DONE", 100, "Task complete!", meta=self.step_tracker.get_meta())
+            done_msg = self._get_done_message(function_name, save_result)
+            self.step_tracker.add_request_step("DONE", "success", done_msg)
+            self.tracker.update("DONE", 100, done_msg, meta=self.step_tracker.get_meta())

             # Log to database
             self._log_to_database(fn, payload, parsed, save_result)

+            # Create notification for successful completion
+            self._create_success_notification(function_name, save_result, payload)

             return {
                 'success': True,
                 **save_result,
@@ -461,6 +528,9 @@ class AIEngine:

             self._log_to_database(fn, None, None, None, error=error)

+            # Create notification for failure
+            self._create_failure_notification(function_name, error)

             return {
                 'success': False,
                 'error': error,
@@ -588,4 +658,104 @@ class AIEngine:
             'generate_site_structure': 'site_blueprint',
         }
         return mapping.get(function_name, 'unknown')

     def _create_success_notification(self, function_name: str, save_result: dict, payload: dict):
         """Create notification for successful AI task completion"""
         if not self.account:
             return

         # Lazy import to avoid circular dependency and Django app loading issues
         from igny8_core.business.notifications.services import NotificationService

         # Get site from payload if available
         site = None
         site_id = payload.get('site_id')
         if site_id:
             try:
                 from igny8_core.auth.models import Site
                 site = Site.objects.get(id=site_id, account=self.account)
             except Exception:
                 pass

         try:
             # Map function to appropriate notification method
             if function_name == 'auto_cluster':
                 NotificationService.notify_clustering_complete(
                     account=self.account,
                     site=site,
                     cluster_count=save_result.get('clusters_created', 0),
                     keyword_count=save_result.get('keywords_updated', 0)
                 )
             elif function_name == 'generate_ideas':
                 NotificationService.notify_ideas_complete(
                     account=self.account,
                     site=site,
                     idea_count=save_result.get('count', 0),
                     cluster_count=len(payload.get('ids', []))
                 )
             elif function_name == 'generate_content':
                 NotificationService.notify_content_complete(
                     account=self.account,
                     site=site,
                     article_count=save_result.get('count', 0),
                     word_count=save_result.get('word_count', 0)
                 )
             elif function_name == 'generate_image_prompts':
                 NotificationService.notify_prompts_complete(
                     account=self.account,
                     site=site,
                     prompt_count=save_result.get('count', 0)
                 )
             elif function_name == 'generate_images':
                 NotificationService.notify_images_complete(
                     account=self.account,
                     site=site,
                     image_count=save_result.get('count', 0)
                 )

             logger.info(f"[AIEngine] Created success notification for {function_name}")
         except Exception as e:
             # Don't fail the task if notification creation fails
             logger.warning(f"[AIEngine] Failed to create success notification: {e}", exc_info=True)

     def _create_failure_notification(self, function_name: str, error: str):
         """Create notification for failed AI task"""
         if not self.account:
             return

         # Lazy import to avoid circular dependency and Django app loading issues
         from igny8_core.business.notifications.services import NotificationService

         try:
             # Map function to appropriate failure notification method
             if function_name == 'auto_cluster':
                 NotificationService.notify_clustering_failed(
                     account=self.account,
                     error=error
                 )
             elif function_name == 'generate_ideas':
                 NotificationService.notify_ideas_failed(
                     account=self.account,
                     error=error
                 )
             elif function_name == 'generate_content':
                 NotificationService.notify_content_failed(
                     account=self.account,
                     error=error
                 )
             elif function_name == 'generate_image_prompts':
                 NotificationService.notify_prompts_failed(
                     account=self.account,
                     error=error
                 )
             elif function_name == 'generate_images':
                 NotificationService.notify_images_failed(
                     account=self.account,
                     error=error
                 )

             logger.info(f"[AIEngine] Created failure notification for {function_name}")
         except Exception as e:
             # Don't fail the task if notification creation fails
             logger.warning(f"[AIEngine] Failed to create failure notification: {e}", exc_info=True)

@@ -219,32 +219,12 @@ class GenerateImagePromptsFunction(BaseAIFunction):
     # Helper methods
     def _get_max_in_article_images(self, account) -> int:
         """
-        Get max_in_article_images from settings.
-        Uses account's IntegrationSettings override, or GlobalIntegrationSettings.
+        Get max_in_article_images from AISettings (with account override).
         """
-        from igny8_core.modules.system.models import IntegrationSettings
-        from igny8_core.modules.system.global_settings_models import GlobalIntegrationSettings
+        from igny8_core.modules.system.ai_settings import AISettings

-        # Try account-specific override first
-        try:
-            settings = IntegrationSettings.objects.get(
-                account=account,
-                integration_type='image_generation',
-                is_active=True
-            )
-            max_images = settings.config.get('max_in_article_images')
-
-            if max_images is not None:
-                max_images = int(max_images)
-                logger.info(f"Using max_in_article_images={max_images} from account {account.id} IntegrationSettings override")
-                return max_images
-        except IntegrationSettings.DoesNotExist:
-            logger.debug(f"No IntegrationSettings override for account {account.id}, using GlobalIntegrationSettings")
-
-        # Use GlobalIntegrationSettings default
-        global_settings = GlobalIntegrationSettings.get_instance()
-        max_images = global_settings.max_in_article_images
-        logger.info(f"Using max_in_article_images={max_images} from GlobalIntegrationSettings (account {account.id})")
+        max_images = AISettings.get_effective_max_images(account)
+        logger.info(f"Using max_in_article_images={max_images} for account {account.id}")
         return max_images

     def _extract_content_elements(self, content: Content, max_images: int) -> Dict:

@@ -67,42 +67,33 @@ class GenerateImagesFunction(BaseAIFunction):
         if not tasks:
             raise ValueError("No tasks found")

-        # Get image generation settings
-        # Try account-specific override, otherwise use GlobalIntegrationSettings
-        from igny8_core.modules.system.models import IntegrationSettings
-        from igny8_core.modules.system.global_settings_models import GlobalIntegrationSettings
+        # Get image generation settings from AISettings (with account overrides)
+        from igny8_core.modules.system.ai_settings import AISettings
+        from igny8_core.ai.model_registry import ModelRegistry

-        image_settings = {}
-        try:
-            integration = IntegrationSettings.objects.get(
-                account=account,
-                integration_type='image_generation',
-                is_active=True
-            )
-            image_settings = integration.config or {}
-            logger.info(f"Using image settings from account {account.id} IntegrationSettings override")
-        except IntegrationSettings.DoesNotExist:
-            logger.info(f"No IntegrationSettings override for account {account.id}, using GlobalIntegrationSettings")
+        # Get effective settings (AISettings + AccountSettings overrides)
+        image_style = AISettings.get_effective_image_style(account)
+        max_images = AISettings.get_effective_max_images(account)

-        # Use GlobalIntegrationSettings for missing values
-        global_settings = GlobalIntegrationSettings.get_instance()
-
-        # Extract settings with defaults from global settings
-        provider = image_settings.get('provider') or image_settings.get('service') or global_settings.default_image_service
-        if provider == 'runware':
-            model = image_settings.get('model') or image_settings.get('runwareModel') or global_settings.runware_model
-        else:
-            model = image_settings.get('model') or global_settings.dalle_model
+        # Get default image model and provider from database
+        default_model = ModelRegistry.get_default_model('image')
+        if default_model:
+            model_config = ModelRegistry.get_model(default_model)
+            provider = model_config.provider if model_config else 'openai'
+            model = default_model
+        else:
+            provider = 'openai'
+            model = 'dall-e-3'

+        logger.info(f"Using image settings: provider={provider}, model={model}, style={image_style}, max={max_images}")

         return {
             'tasks': tasks,
             'account': account,
             'provider': provider,
             'model': model,
-            'image_type': image_settings.get('image_type') or global_settings.image_style,
-            'max_in_article_images': int(image_settings.get('max_in_article_images') or global_settings.max_in_article_images),
-            'desktop_enabled': image_settings.get('desktop_enabled', True),
-            'mobile_enabled': image_settings.get('mobile_enabled', True),
+            'image_type': image_style,
+            'max_in_article_images': max_images,
         }

     def build_prompt(self, data: Dict, account=None) -> Dict:

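The provider/model fallback above can be isolated as a pure function; a sketch under the assumption that the registry lookup returns an object with a `provider` attribute or `None` (the standalone wrapper is illustrative, names follow the diff):

```python
def resolve_image_model(default_model, get_model):
    # Mirrors the fallback in GenerateImagesFunction: use the registry's default
    # image model when present, otherwise fall back to OpenAI's dall-e-3
    if default_model:
        config = get_model(default_model)  # may return None if config is missing
        provider = config.provider if config else 'openai'
        return provider, default_model
    return 'openai', 'dall-e-3'

print(resolve_image_model(None, lambda m: None))  # ('openai', 'dall-e-3')
```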
377	backend/igny8_core/ai/model_registry.py (new file)
@@ -0,0 +1,377 @@
"""
|
||||
Model Registry Service
|
||||
Central registry for AI model configurations with caching.
|
||||
|
||||
This service provides:
|
||||
- Database-driven model configuration (from AIModelConfig)
|
||||
- Integration provider API key retrieval (from IntegrationProvider)
|
||||
- Caching for performance
|
||||
- Cost calculation methods
|
||||
|
||||
Usage:
|
||||
from igny8_core.ai.model_registry import ModelRegistry
|
||||
|
||||
# Get model config
|
||||
model = ModelRegistry.get_model('gpt-4o-mini')
|
||||
|
||||
# Get rate
|
||||
input_rate = ModelRegistry.get_rate('gpt-4o-mini', 'input')
|
||||
|
||||
# Calculate cost
|
||||
cost = ModelRegistry.calculate_cost('gpt-4o-mini', input_tokens=1000, output_tokens=500)
|
||||
|
||||
# Get API key for a provider
|
||||
api_key = ModelRegistry.get_api_key('openai')
|
||||
"""
|
||||
import logging
|
||||
from decimal import Decimal
|
||||
from typing import Optional, Dict, Any
|
||||
from django.core.cache import cache
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Cache TTL in seconds (5 minutes)
|
||||
MODEL_CACHE_TTL = 300
|
||||
|
||||
# Cache key prefix
|
||||
CACHE_KEY_PREFIX = 'ai_model_'
|
||||
PROVIDER_CACHE_PREFIX = 'provider_'
|
||||
|
||||
|
||||
class ModelRegistry:
    """
    Central registry for AI model configurations with caching.
    Uses AIModelConfig from database for model configs.
    Uses IntegrationProvider for API keys.
    """

    @classmethod
    def _get_cache_key(cls, model_id: str) -> str:
        """Generate cache key for model"""
        return f"{CACHE_KEY_PREFIX}{model_id}"

    @classmethod
    def _get_provider_cache_key(cls, provider_id: str) -> str:
        """Generate cache key for provider"""
        return f"{PROVIDER_CACHE_PREFIX}{provider_id}"

    @classmethod
    def _get_from_db(cls, model_id: str) -> Optional[Any]:
        """Get model config from database"""
        try:
            from igny8_core.business.billing.models import AIModelConfig
            return AIModelConfig.objects.filter(
                model_name=model_id,
                is_active=True
            ).first()
        except Exception as e:
            logger.debug(f"Could not fetch model {model_id} from DB: {e}")
            return None

    @classmethod
    def get_model(cls, model_id: str) -> Optional[Any]:
        """
        Get model configuration by model_id.

        Order of lookup:
        1. Cache
        2. Database (AIModelConfig)

        Args:
            model_id: The model identifier (e.g., 'gpt-4o-mini', 'dall-e-3')

        Returns:
            AIModelConfig instance, None if not found
        """
        cache_key = cls._get_cache_key(model_id)

        # Try cache first
        cached = cache.get(cache_key)
        if cached is not None:
            return cached

        # Try database
        model_config = cls._get_from_db(model_id)

        if model_config:
            cache.set(cache_key, model_config, MODEL_CACHE_TTL)
            return model_config

        logger.warning(f"Model {model_id} not found in database")
        return None

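The cache-then-database lookup in `get_model` is the classic cache-aside pattern; a minimal dict-backed sketch of the same flow (the in-memory dict stands in for Django's cache framework, `fetch` for the DB query):

```python
_cache = {}

def get_with_cache(key, fetch, store=_cache):
    # Cache-aside: return the cached value if present, else fetch and populate
    if key in store:
        return store[key]
    value = fetch(key)
    if value is not None:
        store[key] = value
    return value

calls = []
fetch = lambda k: (calls.append(k), f"config-for-{k}")[1]  # records each "DB hit"
print(get_with_cache('gpt-4o-mini', fetch))  # config-for-gpt-4o-mini
print(get_with_cache('gpt-4o-mini', fetch))  # config-for-gpt-4o-mini (no DB hit)
print(len(calls))  # 1
```

Unlike this sketch, the real implementation also attaches a TTL (`MODEL_CACHE_TTL`) so stale configs expire after five minutes.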
    @classmethod
    def get_rate(cls, model_id: str, rate_type: str) -> Decimal:
        """
        Get specific rate for a model.

        Args:
            model_id: The model identifier
            rate_type: 'input', 'output' (for text models) or 'image' (for image models)

        Returns:
            Decimal rate value, 0 if not found
        """
        model = cls.get_model(model_id)
        if not model:
            return Decimal('0')

        # Handle AIModelConfig instance
        if rate_type == 'input':
            return model.input_cost_per_1m or Decimal('0')
        elif rate_type == 'output':
            return model.output_cost_per_1m or Decimal('0')
        elif rate_type == 'image':
            return model.cost_per_image or Decimal('0')

        return Decimal('0')

    @classmethod
    def calculate_cost(cls, model_id: str, input_tokens: int = 0, output_tokens: int = 0, num_images: int = 0) -> Decimal:
        """
        Calculate cost for model usage.

        For text models: Uses input/output token counts
        For image models: Uses num_images

        Args:
            model_id: The model identifier
            input_tokens: Number of input tokens (for text models)
            output_tokens: Number of output tokens (for text models)
            num_images: Number of images (for image models)

        Returns:
            Decimal cost in USD
        """
        model = cls.get_model(model_id)
        if not model:
            return Decimal('0')

        # Get model type from AIModelConfig
        model_type = model.model_type

        if model_type == 'text':
            input_rate = cls.get_rate(model_id, 'input')
            output_rate = cls.get_rate(model_id, 'output')

            cost = (
                (Decimal(input_tokens) * input_rate) +
                (Decimal(output_tokens) * output_rate)
            ) / Decimal('1000000')

            return cost

        elif model_type == 'image':
            image_rate = cls.get_rate(model_id, 'image')
            return image_rate * Decimal(num_images)

        return Decimal('0')

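The text-model formula divides by one million because `input_cost_per_1m` / `output_cost_per_1m` store rates per 1M tokens; a worked example with illustrative (not actual) rates:

```python
from decimal import Decimal

def token_cost(input_tokens, output_tokens, input_rate, output_rate):
    # Same arithmetic as ModelRegistry.calculate_cost for text models
    return ((Decimal(input_tokens) * input_rate)
            + (Decimal(output_tokens) * output_rate)) / Decimal('1000000')

# Illustrative per-1M-token rates in USD, not real pricing
cost = token_cost(1000, 500, Decimal('0.150'), Decimal('0.600'))
print(cost)  # 0.00045
```

Using `Decimal` throughout avoids binary floating-point rounding in billing arithmetic.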
    @classmethod
    def get_default_model(cls, model_type: str = 'text') -> Optional[str]:
        """
        Get the default model for a given type from database.

        Args:
            model_type: 'text' or 'image'

        Returns:
            model_id string or None
        """
        try:
            from igny8_core.business.billing.models import AIModelConfig
            default = AIModelConfig.objects.filter(
                model_type=model_type,
                is_active=True,
                is_default=True
            ).first()

            if default:
                return default.model_name

            # If no default is set, return first active model of this type
            first_active = AIModelConfig.objects.filter(
                model_type=model_type,
                is_active=True
            ).order_by('model_name').first()

            if first_active:
                return first_active.model_name

        except Exception as e:
            logger.error(f"Could not get default {model_type} model from DB: {e}")

        return None

    @classmethod
    def list_models(cls, model_type: Optional[str] = None, provider: Optional[str] = None) -> list:
        """
        List all available models from database, optionally filtered by type or provider.

        Args:
            model_type: Filter by 'text', 'image', or 'embedding'
            provider: Filter by 'openai', 'anthropic', 'runware', etc.

        Returns:
            List of AIModelConfig instances
        """
        try:
            from igny8_core.business.billing.models import AIModelConfig
            queryset = AIModelConfig.objects.filter(is_active=True)

            if model_type:
                queryset = queryset.filter(model_type=model_type)
            if provider:
                queryset = queryset.filter(provider=provider)

            return list(queryset.order_by('model_name'))
        except Exception as e:
            logger.error(f"Could not list models from DB: {e}")
            return []

    @classmethod
    def clear_cache(cls, model_id: Optional[str] = None):
        """
        Clear model cache.

        Args:
            model_id: Clear specific model cache, or all if None
        """
        if model_id:
            cache.delete(cls._get_cache_key(model_id))
        else:
            # Clear all model caches - use pattern if available
            try:
                from django.core.cache import caches
                default_cache = caches['default']
                if hasattr(default_cache, 'delete_pattern'):
                    default_cache.delete_pattern(f"{CACHE_KEY_PREFIX}*")
                else:
                    # Fallback: clear all known models from DB
                    from igny8_core.business.billing.models import AIModelConfig
                    for model in AIModelConfig.objects.values_list('model_name', flat=True):
                        cache.delete(cls._get_cache_key(model))
            except Exception as e:
                logger.warning(f"Could not clear all model caches: {e}")

    @classmethod
    def validate_model(cls, model_id: str) -> bool:
        """
        Check if a model ID is valid and active.

        Args:
            model_id: The model identifier to validate

        Returns:
            True if model exists and is active, False otherwise
        """
        model = cls.get_model(model_id)
        if not model:
            return False
        return model.is_active

    # ========== IntegrationProvider methods ==========

    @classmethod
    def get_provider(cls, provider_id: str) -> Optional[Any]:
        """
        Get IntegrationProvider by provider_id.

        Args:
            provider_id: The provider identifier (e.g., 'openai', 'stripe', 'resend')

        Returns:
            IntegrationProvider instance, None if not found
        """
        cache_key = cls._get_provider_cache_key(provider_id)

        # Try cache first
        cached = cache.get(cache_key)
        if cached is not None:
            return cached

        try:
            from igny8_core.modules.system.models import IntegrationProvider
            provider = IntegrationProvider.objects.filter(
                provider_id=provider_id,
                is_active=True
            ).first()

            if provider:
                cache.set(cache_key, provider, MODEL_CACHE_TTL)
                return provider
        except Exception as e:
            logger.error(f"Could not fetch provider {provider_id} from DB: {e}")

        return None

    @classmethod
    def get_api_key(cls, provider_id: str) -> Optional[str]:
        """
        Get API key for a provider.

        Args:
            provider_id: The provider identifier (e.g., 'openai', 'anthropic', 'runware')

        Returns:
            API key string, None if not found or provider is inactive
        """
        provider = cls.get_provider(provider_id)
        if provider and provider.api_key:
            return provider.api_key
        return None

    @classmethod
    def get_api_secret(cls, provider_id: str) -> Optional[str]:
        """
        Get API secret for a provider (for OAuth, Stripe secret key, etc.).

        Args:
            provider_id: The provider identifier

        Returns:
            API secret string, None if not found
        """
        provider = cls.get_provider(provider_id)
        if provider and provider.api_secret:
            return provider.api_secret
        return None

    @classmethod
    def get_webhook_secret(cls, provider_id: str) -> Optional[str]:
        """
        Get webhook secret for a provider (for Stripe, PayPal webhooks).

        Args:
            provider_id: The provider identifier

        Returns:
            Webhook secret string, None if not found
        """
        provider = cls.get_provider(provider_id)
        if provider and provider.webhook_secret:
            return provider.webhook_secret
        return None

    @classmethod
    def clear_provider_cache(cls, provider_id: Optional[str] = None):
        """
        Clear provider cache.

        Args:
            provider_id: Clear specific provider cache, or all if None
        """
        if provider_id:
            cache.delete(cls._get_provider_cache_key(provider_id))
        else:
            try:
                from django.core.cache import caches
                default_cache = caches['default']
                if hasattr(default_cache, 'delete_pattern'):
                    default_cache.delete_pattern(f"{PROVIDER_CACHE_PREFIX}*")
                else:
                    from igny8_core.modules.system.models import IntegrationProvider
                    for pid in IntegrationProvider.objects.values_list('provider_id', flat=True):
                        cache.delete(cls._get_provider_cache_key(pid))
            except Exception as e:
                logger.warning(f"Could not clear provider caches: {e}")
@@ -3,7 +3,7 @@ Prompt Registry - Centralized prompt management with override hierarchy
 Supports: task-level overrides → DB prompts → GlobalAIPrompt (REQUIRED)
 """
 import logging
-from typing import Dict, Any, Optional
+from typing import Dict, Any, Optional, Tuple
 from django.db import models

 logger = logging.getLogger(__name__)
@@ -16,10 +16,10 @@ class PromptRegistry:
     2. DB prompt for (account, function)
     3. GlobalAIPrompt (REQUIRED - no hardcoded fallbacks)
     """

     # Removed ALL hardcoded prompts - GlobalAIPrompt is now the ONLY source of default prompts
     # To add/modify prompts, use Django admin: /admin/system/globalaiprompt/

     # Mapping from function names to prompt types
     FUNCTION_TO_PROMPT_TYPE = {
         'auto_cluster': 'clustering',
@@ -35,7 +35,114 @@ class PromptRegistry:
         'generate_service_page': 'service_generation',
         'generate_taxonomy': 'taxonomy_generation',
     }

+    # Mapping of prompt types to their prefix numbers and display names
+    # Format: {prompt_type: (number, display_name)}
+    # GP = Global Prompt, CP = Custom Prompt
+    PROMPT_PREFIX_MAP = {
+        'clustering': ('01', 'Clustering'),
+        'ideas': ('02', 'Ideas'),
+        'content_generation': ('03', 'ContentGen'),
+        'image_prompt_extraction': ('04', 'ImagePrompts'),
+        'site_structure_generation': ('05', 'SiteStructure'),
+        'optimize_content': ('06', 'OptimizeContent'),
+        'product_generation': ('07', 'ProductGen'),
+        'service_generation': ('08', 'ServiceGen'),
+        'taxonomy_generation': ('09', 'TaxonomyGen'),
+        'image_prompt_template': ('10', 'ImageTemplate'),
+        'negative_prompt': ('11', 'NegativePrompt'),
+    }

+    @classmethod
+    def get_prompt_prefix(cls, prompt_type: str, is_custom: bool) -> str:
+        """
+        Generate prompt prefix for tracking.
+
+        Args:
+            prompt_type: The prompt type (e.g., 'clustering', 'ideas')
+            is_custom: True if using custom/account-specific prompt, False if global
+
+        Returns:
+            Prefix string like "##GP01-Clustering" or "##CP01-Clustering"
+        """
+        prefix_info = cls.PROMPT_PREFIX_MAP.get(prompt_type, ('00', prompt_type.title()))
+        number, display_name = prefix_info
+        prefix_type = 'CP' if is_custom else 'GP'
+        return f"##{prefix_type}{number}-{display_name}"
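The prefix format can be checked standalone; a sketch replicating the prefix logic with a trimmed map (GP = global prompt, CP = custom prompt, per the diff):

```python
PROMPT_PREFIX_MAP = {'clustering': ('01', 'Clustering'), 'ideas': ('02', 'Ideas')}

def prompt_prefix(prompt_type, is_custom):
    # Unknown types fall back to ('00', Title-cased type), as in the registry
    number, name = PROMPT_PREFIX_MAP.get(prompt_type, ('00', prompt_type.title()))
    return f"##{'CP' if is_custom else 'GP'}{number}-{name}"

print(prompt_prefix('clustering', False))  # ##GP01-Clustering
print(prompt_prefix('ideas', True))        # ##CP02-Ideas
```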
+    @classmethod
+    def get_prompt_with_metadata(
+        cls,
+        function_name: str,
+        account: Optional[Any] = None,
+        task: Optional[Any] = None,
+        context: Optional[Dict[str, Any]] = None
+    ) -> Tuple[str, bool, str]:
+        """
+        Get prompt for a function with metadata about source.
+
+        Priority:
+        1. task.prompt_override (if task provided and has override)
+        2. DB prompt for (account, function) - marked as custom if is_customized=True
+        3. GlobalAIPrompt (REQUIRED - no hardcoded fallbacks)
+
+        Args:
+            function_name: AI function name (e.g., 'auto_cluster', 'generate_ideas')
+            account: Account object (optional)
+            task: Task object with optional prompt_override (optional)
+            context: Additional context for prompt rendering (optional)
+
+        Returns:
+            Tuple of (prompt_string, is_custom, prompt_type)
+            - prompt_string: The rendered prompt
+            - is_custom: True if using custom/account prompt, False if global
+            - prompt_type: The prompt type identifier
+        """
+        # Step 1: Get prompt type
+        prompt_type = cls.FUNCTION_TO_PROMPT_TYPE.get(function_name, function_name)
+
+        # Step 2: Check task-level override (always considered custom)
+        if task and hasattr(task, 'prompt_override') and task.prompt_override:
+            logger.info(f"Using task-level prompt override for {function_name}")
+            prompt = task.prompt_override
+            return cls._render_prompt(prompt, context or {}), True, prompt_type
+
+        # Step 3: Try DB prompt (account-specific)
+        if account:
+            try:
+                from igny8_core.modules.system.models import AIPrompt
+                db_prompt = AIPrompt.objects.get(
+                    account=account,
+                    prompt_type=prompt_type,
+                    is_active=True
+                )
+                # Check if prompt is customized
+                is_custom = db_prompt.is_customized
+                logger.info(f"Using {'customized' if is_custom else 'default'} account prompt for {function_name} (account {account.id})")
+                prompt = db_prompt.prompt_value
+                return cls._render_prompt(prompt, context or {}), is_custom, prompt_type
+            except Exception as e:
+                logger.debug(f"No account-specific prompt found for {function_name}: {e}")
+
+        # Step 4: Try GlobalAIPrompt (platform-wide default) - REQUIRED
+        try:
+            from igny8_core.modules.system.global_settings_models import GlobalAIPrompt
+            global_prompt = GlobalAIPrompt.objects.get(
+                prompt_type=prompt_type,
+                is_active=True
+            )
+            logger.info(f"Using global default prompt for {function_name} from GlobalAIPrompt")
+            prompt = global_prompt.prompt_value
+            return cls._render_prompt(prompt, context or {}), False, prompt_type
+        except Exception as e:
+            error_msg = (
+                f"ERROR: Global prompt '{prompt_type}' not found for function '{function_name}'. "
f"Please configure it in Django admin at: /admin/system/globalaiprompt/. "
|
||||
f"Error: {e}"
|
||||
)
|
||||
logger.error(error_msg)
|
||||
raise ValueError(error_msg)
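The three-level resolution above can be sketched without the Django ORM; plain dicts stand in for the AIPrompt and GlobalAIPrompt tables, and all names in this sketch are illustrative:

```python
# Hedged sketch of the prompt-resolution priority: task override (always
# custom) -> account prompt (custom iff is_customized) -> global (required,
# no hardcoded fallback), mirroring get_prompt_with_metadata above.
def resolve_prompt(prompt_type, task_override=None,
                   account_prompts=None, global_prompts=None):
    if task_override:
        return task_override, True  # task override is always treated as custom
    account_prompts = account_prompts or {}
    global_prompts = global_prompts or {}
    if prompt_type in account_prompts:
        prompt, is_customized = account_prompts[prompt_type]
        return prompt, is_customized
    if prompt_type in global_prompts:
        return global_prompts[prompt_type], False
    # Mirrors the hard failure above: a missing global prompt is an error.
    raise ValueError(f"Global prompt '{prompt_type}' not configured")
```

Note that an account prompt only counts as custom when its `is_customized` flag is set, which is what drives the `GP` vs `CP` prefix choice.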

    @classmethod
    def get_prompt(
        cls,
@@ -46,63 +153,23 @@ class PromptRegistry:
    ) -> str:
        """
        Get prompt for a function with hierarchical resolution.

        Priority:
        1. task.prompt_override (if task provided and has override)
        2. DB prompt for (account, function)
        3. GlobalAIPrompt (REQUIRED - no hardcoded fallbacks)

        Args:
            function_name: AI function name (e.g., 'auto_cluster', 'generate_ideas')
            account: Account object (optional)
            task: Task object with optional prompt_override (optional)
            context: Additional context for prompt rendering (optional)

        Returns:
            Prompt string ready for formatting
        """
        # Step 1: Check task-level override
        if task and hasattr(task, 'prompt_override') and task.prompt_override:
            logger.info(f"Using task-level prompt override for {function_name}")
            prompt = task.prompt_override
            return cls._render_prompt(prompt, context or {})

        # Step 2: Get prompt type
        prompt_type = cls.FUNCTION_TO_PROMPT_TYPE.get(function_name, function_name)

        # Step 3: Try DB prompt (account-specific)
        if account:
            try:
                from igny8_core.modules.system.models import AIPrompt
                db_prompt = AIPrompt.objects.get(
                    account=account,
                    prompt_type=prompt_type,
                    is_active=True
                )
                logger.info(f"Using account-specific prompt for {function_name} (account {account.id})")
                prompt = db_prompt.prompt_value
                return cls._render_prompt(prompt, context or {})
            except Exception as e:
                logger.debug(f"No account-specific prompt found for {function_name}: {e}")

        # Step 4: Try GlobalAIPrompt (platform-wide default) - REQUIRED
        try:
            from igny8_core.modules.system.global_settings_models import GlobalAIPrompt
            global_prompt = GlobalAIPrompt.objects.get(
                prompt_type=prompt_type,
                is_active=True
            )
            logger.info(f"Using global default prompt for {function_name} from GlobalAIPrompt")
            prompt = global_prompt.prompt_value
            return cls._render_prompt(prompt, context or {})
        except Exception as e:
            error_msg = (
                f"ERROR: Global prompt '{prompt_type}' not found for function '{function_name}'. "
                f"Please configure it in Django admin at: /admin/system/globalaiprompt/. "
                f"Error: {e}"
            )
            logger.error(error_msg)
            raise ValueError(error_msg)
        prompt, _, _ = cls.get_prompt_with_metadata(function_name, account, task, context)
        return prompt

    @classmethod
    def _render_prompt(cls, prompt_template: str, context: Dict[str, Any]) -> str:
@@ -219,3 +286,61 @@ def get_prompt(function_name: str, account=None, task=None, context=None) -> str
    """Get prompt using registry"""
    return PromptRegistry.get_prompt(function_name, account=account, task=task, context=context)


def get_prompt_with_prefix(function_name: str, account=None, task=None, context=None) -> Tuple[str, str]:
    """
    Get prompt with its tracking prefix.

    Args:
        function_name: AI function name
        account: Account object (optional)
        task: Task object with optional prompt_override (optional)
        context: Additional context for prompt rendering (optional)

    Returns:
        Tuple of (prompt_string, prefix_string)
        - prompt_string: The rendered prompt
        - prefix_string: The tracking prefix (e.g., '##GP01-Clustering' or '##CP01-Clustering')
    """
    prompt, is_custom, prompt_type = PromptRegistry.get_prompt_with_metadata(
        function_name, account=account, task=task, context=context
    )
    prefix = PromptRegistry.get_prompt_prefix(prompt_type, is_custom)
    return prompt, prefix


def get_prompt_prefix_for_function(function_name: str, account=None, task=None) -> str:
    """
    Get just the prefix for a function without fetching the full prompt.
    Useful when the prompt was already fetched elsewhere.

    Args:
        function_name: AI function name
        account: Account object (optional)
        task: Task object with optional prompt_override (optional)

    Returns:
        The tracking prefix (e.g., '##GP01-Clustering' or '##CP01-Clustering')
    """
    prompt_type = PromptRegistry.FUNCTION_TO_PROMPT_TYPE.get(function_name, function_name)

    # Check for task-level override (always custom)
    if task and hasattr(task, 'prompt_override') and task.prompt_override:
        return PromptRegistry.get_prompt_prefix(prompt_type, is_custom=True)

    # Check for account-specific prompt
    if account:
        try:
            from igny8_core.modules.system.models import AIPrompt
            db_prompt = AIPrompt.objects.get(
                account=account,
                prompt_type=prompt_type,
                is_active=True
            )
            return PromptRegistry.get_prompt_prefix(prompt_type, is_custom=db_prompt.is_customized)
        except Exception:
            pass

    # Fallback to global (not custom)
    return PromptRegistry.get_prompt_prefix(prompt_type, is_custom=False)
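A hypothetical caller-side use of these helpers: the prefix is prepended to the outgoing prompt so that request logs can tell global (`GP`) from custom (`CP`) prompts at a glance. The tagging convention below is an assumption for illustration, not taken from the diff:

```python
# Illustrative only: how a caller might tag a prompt with its tracking
# prefix (e.g., the prefix returned by get_prompt_with_prefix) before
# sending it to the model.
def tag_prompt(prompt: str, prefix: str) -> str:
    return f"{prefix}\n{prompt}"

tagged = tag_prompt("Cluster these keywords: {keywords}", "##GP01-Clustering")
print(tagged.splitlines()[0])  # ##GP01-Clustering
```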

@@ -1,6 +1,7 @@
"""
AI Settings - Centralized model configurations and limits
Uses global settings with optional per-account overrides.
Uses AISettings (system defaults) with optional per-account overrides via AccountSettings.
API keys are stored in IntegrationProvider.
"""
from typing import Dict, Any
import logging
@@ -22,10 +23,9 @@ def get_model_config(function_name: str, account) -> Dict[str, Any]:
    Get model configuration for AI function.

    Architecture:
    - API keys: ALWAYS from GlobalIntegrationSettings (platform-wide)
    - Model/params: From IntegrationSettings if account has override, else from global
    - Free plan: Cannot override, uses global defaults
    - Starter/Growth/Scale: Can override model, temperature, max_tokens, etc.
    - API keys: From IntegrationProvider (centralized)
    - Model: From AIModelConfig (is_default=True)
    - Params: From AISettings with AccountSettings overrides

    Args:
        function_name: Name of the AI function
@@ -44,67 +44,57 @@ def get_model_config(function_name: str, account) -> Dict[str, Any]:
    actual_name = FUNCTION_ALIASES.get(function_name, function_name)

    try:
        from igny8_core.modules.system.global_settings_models import GlobalIntegrationSettings
        from igny8_core.modules.system.models import IntegrationSettings
        from igny8_core.modules.system.ai_settings import AISettings
        from igny8_core.ai.model_registry import ModelRegistry

        # Get global settings (for API keys and defaults)
        global_settings = GlobalIntegrationSettings.get_instance()
        # Get API key from IntegrationProvider
        api_key = ModelRegistry.get_api_key('openai')

        if not global_settings.openai_api_key:
        if not api_key:
            raise ValueError(
                "Platform OpenAI API key not configured. "
                "Please configure GlobalIntegrationSettings in Django admin."
                "Please configure IntegrationProvider in Django admin."
            )

        # Start with global defaults
        model = global_settings.openai_model
        temperature = global_settings.openai_temperature
        max_tokens = global_settings.openai_max_tokens
        api_key = global_settings.openai_api_key  # ALWAYS from global
        # Get default text model from AIModelConfig
        default_model = ModelRegistry.get_default_model('text')
        if not default_model:
            default_model = 'gpt-4o-mini'  # Ultimate fallback

        # Check if account has overrides (only for Starter/Growth/Scale plans)
        # Free plan users cannot create IntegrationSettings records
        model = default_model

        # Get settings with account overrides
        temperature = AISettings.get_effective_temperature(account)
        max_tokens = AISettings.get_effective_max_tokens(account)

        # Get max_tokens from AIModelConfig if available
        try:
            account_settings = IntegrationSettings.objects.get(
                account=account,
                integration_type='openai',
            from igny8_core.business.billing.models import AIModelConfig
            model_config = AIModelConfig.objects.filter(
                model_name=model,
                is_active=True
            )

            config = account_settings.config or {}

            # Override model if specified (NULL = use global)
            if config.get('model'):
                model = config['model']

            # Override temperature if specified
            if config.get('temperature') is not None:
                temperature = config['temperature']

            # Override max_tokens if specified
            if config.get('max_tokens'):
                max_tokens = config['max_tokens']

        except IntegrationSettings.DoesNotExist:
            # No account override, use global defaults (already set above)
            pass
            ).first()
            if model_config and model_config.max_output_tokens:
                max_tokens = model_config.max_output_tokens
        except Exception as e:
            logger.warning(f"Could not load max_tokens from AIModelConfig for {model}: {e}")

    except Exception as e:
        logger.error(f"Could not load OpenAI settings for account {account.id}: {e}")
        raise ValueError(
            f"Could not load OpenAI configuration for account {account.id}. "
            f"Please configure GlobalIntegrationSettings."
            f"Please configure IntegrationProvider and AISettings."
        )

    # Validate model is in our supported list (optional validation)
    # Validate model is in our supported list using ModelRegistry (database-driven)
    try:
        from igny8_core.utils.ai_processor import MODEL_RATES
        if model not in MODEL_RATES:
        if not ModelRegistry.validate_model(model):
            supported_models = [m.model_name for m in ModelRegistry.list_models(model_type='text')]
            logger.warning(
                f"Model '{model}' for account {account.id} is not in supported list. "
                f"Supported models: {list(MODEL_RATES.keys())}"
                f"Supported models: {supported_models}"
            )
    except ImportError:
    except Exception:
        pass

    # Build response format based on model (JSON mode for supported models)

@@ -157,6 +157,7 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
    from igny8_core.modules.system.models import IntegrationSettings
    from igny8_core.ai.ai_core import AICore
    from igny8_core.ai.prompts import PromptRegistry
    from igny8_core.business.billing.services.credit_service import CreditService

    logger.info("=" * 80)
    logger.info(f"process_image_generation_queue STARTED")
@@ -181,73 +182,86 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
    failed = 0
    results = []

    # Get image generation settings
    # Try account-specific override, otherwise use GlobalIntegrationSettings
    # Get image generation settings from AISettings (with account overrides)
    logger.info("[process_image_generation_queue] Step 1: Loading image generation settings")
    from igny8_core.modules.system.global_settings_models import GlobalIntegrationSettings
    from igny8_core.modules.system.ai_settings import AISettings
    from igny8_core.ai.model_registry import ModelRegistry

    config = {}
    try:
        image_settings = IntegrationSettings.objects.get(
            account=account,
            integration_type='image_generation',
            is_active=True
        )
        logger.info(f"[process_image_generation_queue] Using account {account.id} IntegrationSettings override")
        config = image_settings.config or {}
    except IntegrationSettings.DoesNotExist:
        logger.info(f"[process_image_generation_queue] No IntegrationSettings override for account {account.id}, using GlobalIntegrationSettings")
    except Exception as e:
        logger.error(f"[process_image_generation_queue] ERROR loading image generation settings: {e}", exc_info=True)
        return {'success': False, 'error': f'Error loading image generation settings: {str(e)}'}
    # Get effective settings
    image_type = AISettings.get_effective_image_style(account)
    image_format = 'webp'  # Default format

    # Use GlobalIntegrationSettings for missing values
    global_settings = GlobalIntegrationSettings.get_instance()

    logger.info(f"[process_image_generation_queue] Image generation settings loaded. Config keys: {list(config.keys())}")
    logger.info(f"[process_image_generation_queue] Full config: {config}")

    # Get provider and model from config with global fallbacks
    provider = config.get('provider') or global_settings.default_image_service
    if provider == 'runware':
        model = config.get('model') or config.get('imageModel') or global_settings.runware_model
    # Get default image model from database
    default_model = ModelRegistry.get_default_model('image')
    if default_model:
        model_config = ModelRegistry.get_model(default_model)
        provider = model_config.provider if model_config else 'openai'
        model = default_model
    else:
        model = config.get('model') or config.get('imageModel') or global_settings.dalle_model
        provider = 'openai'
        model = 'dall-e-3'

    logger.info(f"[process_image_generation_queue] Using PROVIDER: {provider}, MODEL: {model} from settings")
    image_type = config.get('image_type') or global_settings.image_style
    image_format = config.get('image_format', 'webp')
    desktop_enabled = config.get('desktop_enabled', True)
    mobile_enabled = config.get('mobile_enabled', True)
    # Get image sizes from config, with fallback defaults
    featured_image_size = config.get('featured_image_size') or ('1280x832' if provider == 'runware' else '1024x1024')
    desktop_image_size = config.get('desktop_image_size') or global_settings.desktop_image_size
    in_article_image_size = config.get('in_article_image_size') or '512x512'  # Default to 512x512

    # Style to prompt enhancement mapping
    # These style descriptors are added to the image prompt for better results
    STYLE_PROMPT_MAP = {
        # Runware styles
        'photorealistic': 'ultra realistic photography, natural lighting, real world look, photorealistic',
        'illustration': 'digital illustration, clean lines, artistic style, modern illustration',
        '3d_render': 'computer generated 3D render, modern polished 3D style, depth and dramatic lighting',
        'minimal_flat': 'minimal flat design, simple shapes, flat colors, modern graphic design aesthetic',
        'artistic': 'artistic painterly style, expressive brushstrokes, hand painted aesthetic',
        'cartoon': 'cartoon stylized illustration, playful exaggerated forms, animated character style',
        # DALL-E styles (mapped from OpenAI API style parameter)
        'natural': 'natural realistic style',
        'vivid': 'vivid dramatic hyper-realistic style',
        # Legacy fallbacks
        'realistic': 'ultra realistic photography, natural lighting, photorealistic',
    }

    # Get the style description for prompt enhancement
    style_description = STYLE_PROMPT_MAP.get(image_type, STYLE_PROMPT_MAP.get('photorealistic'))
    logger.info(f"[process_image_generation_queue] Style: {image_type} -> prompt enhancement: {style_description[:50]}...")

    # Model-specific landscape sizes (square is always 1024x1024)
    # For Runware models - based on Runware documentation for optimal results per model
    # For OpenAI DALL-E 3 - uses 1792x1024 for landscape
    MODEL_LANDSCAPE_SIZES = {
        'runware:97@1': '1280x768',  # Hi Dream Full landscape
        'bria:10@1': '1344x768',     # Bria 3.2 landscape (16:9)
        'google:4@2': '1376x768',    # Nano Banana landscape (16:9)
        'dall-e-3': '1792x1024',     # DALL-E 3 landscape
        'dall-e-2': '1024x1024',     # DALL-E 2 only supports square
    }
    DEFAULT_SQUARE_SIZE = '1024x1024'

    # Get model-specific landscape size for featured images
    model_landscape_size = MODEL_LANDSCAPE_SIZES.get(model, '1792x1024' if provider == 'openai' else '1280x768')

    # Featured image always uses model-specific landscape size
    featured_image_size = model_landscape_size
    # In-article images: alternating square/landscape based on position (handled in image loop)
    in_article_square_size = DEFAULT_SQUARE_SIZE
    in_article_landscape_size = model_landscape_size
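The sizing rules introduced here reduce to a small pure function. This sketch reuses the landscape map and fallbacks from the diff (featured images always take the model-specific landscape size; in-article images alternate square/landscape by position):

```python
# Sketch of the size-selection rules: the dict and fallback values come
# from the diff; the function itself is an illustrative simplification.
MODEL_LANDSCAPE_SIZES = {
    'runware:97@1': '1280x768',
    'bria:10@1': '1344x768',
    'google:4@2': '1376x768',
    'dall-e-3': '1792x1024',
    'dall-e-2': '1024x1024',
}
DEFAULT_SQUARE_SIZE = '1024x1024'

def pick_size(image_type: str, position: int, model: str, provider: str = 'openai') -> str:
    landscape = MODEL_LANDSCAPE_SIZES.get(
        model, '1792x1024' if provider == 'openai' else '1280x768')
    if image_type == 'featured':
        return landscape                     # always model-specific landscape
    if image_type == 'in_article':
        # Even positions (0, 2, ...) square; odd positions landscape.
        return DEFAULT_SQUARE_SIZE if position % 2 == 0 else landscape
    return DEFAULT_SQUARE_SIZE               # legacy types default to square

print(pick_size('featured', 0, 'dall-e-3'))      # 1792x1024
print(pick_size('in_article', 1, 'runware:97@1', 'runware'))  # 1280x768
```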

    logger.info(f"[process_image_generation_queue] Settings loaded:")
    logger.info(f"  - Provider: {provider}")
    logger.info(f"  - Model: {model}")
    logger.info(f"  - Image type: {image_type}")
    logger.info(f"  - Image format: {image_format}")
    logger.info(f"  - Desktop enabled: {desktop_enabled}")
    logger.info(f"  - Mobile enabled: {mobile_enabled}")
    logger.info(f"  - Featured image size: {featured_image_size}")
    logger.info(f"  - In-article square: {in_article_square_size}, landscape: {in_article_landscape_size}")

    # Get provider API key
    # API keys are ALWAYS from GlobalIntegrationSettings (accounts cannot override API keys)
    # Account IntegrationSettings only store provider preference, NOT API keys
    logger.info(f"[process_image_generation_queue] Step 2: Loading {provider.upper()} API key from GlobalIntegrationSettings")
    # Get provider API key from IntegrationProvider (centralized)
    logger.info(f"[process_image_generation_queue] Step 2: Loading {provider.upper()} API key from IntegrationProvider")

    # Get API key from GlobalIntegrationSettings
    if provider == 'runware':
        api_key = global_settings.runware_api_key
    elif provider == 'openai':
        api_key = global_settings.dalle_api_key or global_settings.openai_api_key
    else:
        api_key = None
    # Get API key from IntegrationProvider (centralized)
    api_key = ModelRegistry.get_api_key(provider)

    if not api_key:
        logger.error(f"[process_image_generation_queue] {provider.upper()} API key not configured in GlobalIntegrationSettings")
        return {'success': False, 'error': f'{provider.upper()} API key not configured in GlobalIntegrationSettings'}
        logger.error(f"[process_image_generation_queue] {provider.upper()} API key not configured in IntegrationProvider")
        return {'success': False, 'error': f'{provider.upper()} API key not configured'}

    # Log API key presence (but not the actual key for security)
    api_key_preview = f"{api_key[:10]}...{api_key[-4:]}" if len(api_key) > 14 else "***"
@@ -386,7 +400,7 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
    # Calculate actual template length with placeholders filled
    # Format template with dummy values to measure actual length
    template_with_dummies = image_prompt_template.format(
        image_type=image_type,
        image_type=style_description,  # Use actual style description length
        post_title='X' * len(post_title),  # Use same length as actual post_title
        image_prompt=''  # Empty to measure template overhead
    )
@@ -413,7 +427,7 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
        image_prompt = image_prompt[:max_image_prompt_length - 3] + "..."

    formatted_prompt = image_prompt_template.format(
        image_type=image_type,
        image_type=style_description,  # Use full style description instead of raw value
        post_title=post_title,
        image_prompt=image_prompt
    )
@@ -478,15 +492,40 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
        }
    )

    # Use appropriate size based on image type
    # Use appropriate size based on image type and position
    # Featured: Always landscape (model-specific)
    # In-article: Alternating square/landscape based on position
    #   Position 0: Square (1024x1024)
    #   Position 1: Landscape (model-specific)
    #   Position 2: Square (1024x1024)
    #   Position 3: Landscape (model-specific)
    if image.image_type == 'featured':
        image_size = featured_image_size  # Read from config
    elif image.image_type == 'desktop':
        image_size = desktop_image_size
    elif image.image_type == 'mobile':
        image_size = '512x512'  # Fixed mobile size
    else:  # in_article or other
        image_size = in_article_image_size  # Read from config, default 512x512
        image_size = featured_image_size  # Model-specific landscape
    elif image.image_type == 'in_article':
        # Alternate based on position: even=square, odd=landscape
        position = image.position or 0
        if position % 2 == 0:  # Position 0, 2: Square
            image_size = in_article_square_size
        else:  # Position 1, 3: Landscape
            image_size = in_article_landscape_size
        logger.info(f"[process_image_generation_queue] In-article image position {position}: using {'square' if position % 2 == 0 else 'landscape'} size {image_size}")
    else:  # desktop or other (legacy)
        image_size = in_article_square_size  # Default to square

    # For DALL-E, convert image_type to style parameter
    # image_type is from user settings (e.g., 'vivid', 'natural', 'realistic')
    # DALL-E accepts 'vivid' or 'natural' - map accordingly
    dalle_style = None
    if provider == 'openai':
        # Map image_type to DALL-E style
        # 'natural' = more realistic photos (default)
        # 'vivid' = hyper-real, dramatic images
        if image_type in ['vivid']:
            dalle_style = 'vivid'
        else:
            # Default to 'natural' for realistic photos
            dalle_style = 'natural'
        logger.info(f"[process_image_generation_queue] DALL-E style: {dalle_style} (from image_type: {image_type})")

    result = ai_core.generate_image(
        prompt=formatted_prompt,
@@ -495,7 +534,8 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
        size=image_size,
        api_key=api_key,
        negative_prompt=negative_prompt,
        function_name='generate_images_from_prompts'
        function_name='generate_images_from_prompts',
        style=dalle_style
    )

    # Update progress: Image generation complete (50%)
@@ -670,6 +710,33 @@ def process_image_generation_queue(self, image_ids: list, account_id: int = None
        })
        failed += 1
    else:
        # Deduct credits for successful image generation
        credits_deducted = 0
        cost_usd = result.get('cost_usd', 0)
        if account:
            try:
                credits_deducted = CreditService.deduct_credits_for_image(
                    account=account,
                    model_name=model,
                    num_images=1,
                    description=f"Image generation: {content.title[:50] if content else 'Image'}" if content else f"Image {image_id}",
                    metadata={
                        'image_id': image_id,
                        'content_id': content_id,
                        'provider': provider,
                        'model': model,
                        'image_type': image.image_type if image else 'unknown',
                        'size': image_size,
                    },
                    cost_usd=cost_usd,
                    related_object_type='image',
                    related_object_id=image_id
                )
                logger.info(f"[process_image_generation_queue] Credits deducted for image {image_id}: {credits_deducted}")
            except Exception as credit_error:
                logger.error(f"[process_image_generation_queue] Failed to deduct credits for image {image_id}: {credit_error}")
                # Don't fail the image generation if credit deduction fails

    # Update progress: Complete (100%)
    self.update_state(
        state='PROGRESS',

@@ -145,7 +145,7 @@ def validate_model(model: str, model_type: str = 'text') -> Dict[str, Any]:
        Dict with 'valid' (bool) and optional 'error' (str)
    """
    try:
        # Try database first
        # Use database-driven validation via AIModelConfig
        from igny8_core.business.billing.models import AIModelConfig

        exists = AIModelConfig.objects.filter(
@@ -169,29 +169,20 @@ def validate_model(model: str, model_type: str = 'text') -> Dict[str, Any]:
        else:
            return {
                'valid': False,
                'error': f'Model "{model}" is not found in database'
                'error': f'No {model_type} models configured in database'
            }

        return {'valid': True}

    except Exception:
        # Fallback to constants if database fails
        from .constants import MODEL_RATES, VALID_OPENAI_IMAGE_MODELS

        if model_type == 'text':
            if model not in MODEL_RATES:
                return {
                    'valid': False,
                    'error': f'Model "{model}" is not in supported models list'
                }
        elif model_type == 'image':
            if model not in VALID_OPENAI_IMAGE_MODELS:
                return {
                    'valid': False,
                    'error': f'Model "{model}" is not valid for OpenAI image generation. Only {", ".join(VALID_OPENAI_IMAGE_MODELS)} are supported.'
                }

        return {'valid': True}
    except Exception as e:
        # Log error but don't fallback to constants - DB is authoritative
        import logging
        logger = logging.getLogger(__name__)
        logger.error(f"Error validating model {model}: {e}")
        return {
            'valid': False,
            'error': f'Error validating model: {e}'
        }


def validate_image_size(size: str, model: str) -> Dict[str, Any]:

@@ -5,7 +5,8 @@ from django.urls import path
from igny8_core.api.account_views import (
    AccountSettingsViewSet,
    TeamManagementViewSet,
    UsageAnalyticsViewSet
    UsageAnalyticsViewSet,
    DashboardStatsViewSet
)

urlpatterns = [
@@ -28,4 +29,9 @@ urlpatterns = [
    path('usage/analytics/', UsageAnalyticsViewSet.as_view({
        'get': 'overview'
    }), name='usage-analytics'),

    # Dashboard Stats (real data for home page)
    path('dashboard/stats/', DashboardStatsViewSet.as_view({
        'get': 'stats'
    }), name='dashboard-stats'),
]

@@ -10,6 +10,7 @@ from django.contrib.auth import get_user_model
from django.db.models import Q, Count, Sum
from django.utils import timezone
from datetime import timedelta
from decimal import Decimal
from drf_spectacular.utils import extend_schema, extend_schema_view

from igny8_core.auth.models import Account
@@ -131,6 +132,16 @@ class TeamManagementViewSet(viewsets.ViewSet):
                status=status.HTTP_400_BAD_REQUEST
            )

        # Check hard limit for users BEFORE creating
        from igny8_core.business.billing.services.limit_service import LimitService, HardLimitExceededError
        try:
            LimitService.check_hard_limit(account, 'users', additional_count=1)
        except HardLimitExceededError as e:
            return Response(
                {'error': str(e)},
                status=status.HTTP_400_BAD_REQUEST
            )
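The guard above rejects the request before any user record is created. A minimal stand-in for `LimitService.check_hard_limit` (the real service's counting and plan-lookup logic is not shown in the diff; this assumes a simple count-vs-limit comparison) looks like:

```python
class HardLimitExceededError(Exception):
    """Raised when an operation would push usage past a plan's hard limit."""

def check_hard_limit(current_count: int, hard_limit: int, additional_count: int = 1) -> None:
    # Reject BEFORE creating, so no partial state is left behind on failure.
    if current_count + additional_count > hard_limit:
        raise HardLimitExceededError(
            f"Hard limit exceeded: {current_count} + {additional_count} > {hard_limit}"
        )
```

The caller turns the exception's message straight into a 400 response, which is why the service raises with a human-readable string.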

        # Create user (simplified - in production, send invitation email)
        user = User.objects.create_user(
            email=email,
@@ -242,3 +253,216 @@ class UsageAnalyticsViewSet(viewsets.ViewSet):
            'total_usage': abs(transactions.filter(amount__lt=0).aggregate(Sum('amount'))['amount__sum'] or 0),
            'total_purchases': transactions.filter(amount__gt=0).aggregate(Sum('amount'))['amount__sum'] or 0,
        })


@extend_schema_view(
    stats=extend_schema(tags=['Account']),
)
class DashboardStatsViewSet(viewsets.ViewSet):
    """Dashboard statistics - real data for home page widgets"""
    permission_classes = [IsAuthenticated]

    @action(detail=False, methods=['get'])
    def stats(self, request):
        """
        Get dashboard statistics for the home page.

        Query params:
        - site_id: Filter by site (optional, defaults to all sites)
        - days: Number of days for AI operations (default: 7)

        Returns:
        - ai_operations: Real credit usage by operation type
        - recent_activity: Recent notifications
        - content_velocity: Content created this week/month
        - images_count: Actual total images count
        - published_count: Actual published content count
        """
        account = request.user.account
        site_id = request.query_params.get('site_id')
        days = int(request.query_params.get('days', 7))

        # Import models here to avoid circular imports
        from igny8_core.modules.writer.models import Images, Content
        from igny8_core.modules.planner.models import Keywords, Clusters, ContentIdeas
        from igny8_core.business.notifications.models import Notification
        from igny8_core.business.billing.models import CreditUsageLog
        from igny8_core.auth.models import Site

        # Build base filter for site
        site_filter = {}
        if site_id:
            try:
                site_filter['site_id'] = int(site_id)
            except (ValueError, TypeError):
                pass

        # ========== AI OPERATIONS (from CreditUsageLog) ==========
        start_date = timezone.now() - timedelta(days=days)
        usage_query = CreditUsageLog.objects.filter(
            account=account,
            created_at__gte=start_date
        )

        # Get operations grouped by type
        operations_data = usage_query.values('operation_type').annotate(
            count=Count('id'),
            credits=Sum('credits_used')
        ).order_by('-credits')

        # Calculate totals
        total_ops = usage_query.count()
        total_credits = usage_query.aggregate(total=Sum('credits_used'))['total'] or 0

        # Format operations for frontend
        operations = []
        for op in operations_data:
            op_type = op['operation_type'] or 'other'
            operations.append({
                'type': op_type,
                'count': op['count'] or 0,
                'credits': op['credits'] or 0,
            })

        ai_operations = {
            'period': f'{days}d',
            'operations': operations,
            'totals': {
                'count': total_ops,
                'credits': total_credits,
                'successRate': 98.5,  # TODO: calculate from actual success/failure
                'avgCreditsPerOp': round(total_credits / total_ops, 1) if total_ops > 0 else 0,
            }
        }
|
||||
|
||||
# ========== RECENT ACTIVITY (from Notifications) ==========
|
||||
recent_notifications = Notification.objects.filter(
|
||||
account=account
|
||||
).order_by('-created_at')[:10]
|
||||
|
||||
recent_activity = []
|
||||
for notif in recent_notifications:
|
||||
# Map notification type to activity type
|
||||
activity_type_map = {
|
||||
'ai_clustering_complete': 'clustering',
|
||||
'ai_ideas_complete': 'ideas',
|
||||
'ai_content_complete': 'content',
|
||||
'ai_images_complete': 'images',
|
||||
'ai_prompts_complete': 'images',
|
||||
'content_published': 'published',
|
||||
'wp_sync_success': 'published',
|
||||
}
|
||||
activity_type = activity_type_map.get(notif.notification_type, 'system')
|
||||
|
||||
# Map notification type to href
|
||||
href_map = {
|
||||
'clustering': '/planner/clusters',
|
||||
'ideas': '/planner/ideas',
|
||||
'content': '/writer/content',
|
||||
'images': '/writer/images',
|
||||
'published': '/writer/published',
|
||||
}
|
||||
|
||||
recent_activity.append({
|
||||
'id': str(notif.id),
|
||||
'type': activity_type,
|
||||
'title': notif.title,
|
||||
'description': notif.message[:100] if notif.message else '',
|
||||
'timestamp': notif.created_at.isoformat(),
|
||||
'href': href_map.get(activity_type, '/dashboard'),
|
||||
})
|
||||
|
||||
# ========== CONTENT COUNTS ==========
|
||||
content_base = Content.objects.filter(account=account)
|
||||
if site_filter:
|
||||
content_base = content_base.filter(**site_filter)
|
||||
|
||||
total_content = content_base.count()
|
||||
draft_content = content_base.filter(status='draft').count()
|
||||
review_content = content_base.filter(status='review').count()
|
||||
published_content = content_base.filter(status='published').count()
|
||||
|
||||
# ========== IMAGES COUNT (actual images, not content with images) ==========
|
||||
images_base = Images.objects.filter(account=account)
|
||||
if site_filter:
|
||||
images_base = images_base.filter(**site_filter)
|
||||
|
||||
total_images = images_base.count()
|
||||
generated_images = images_base.filter(status='generated').count()
|
||||
pending_images = images_base.filter(status='pending').count()
|
||||
|
||||
# ========== CONTENT VELOCITY ==========
|
||||
now = timezone.now()
|
||||
week_ago = now - timedelta(days=7)
|
||||
month_ago = now - timedelta(days=30)
|
||||
|
||||
# This week's content
|
||||
week_content = content_base.filter(created_at__gte=week_ago).count()
|
||||
week_images = images_base.filter(created_at__gte=week_ago).count()
|
||||
|
||||
# This month's content
|
||||
month_content = content_base.filter(created_at__gte=month_ago).count()
|
||||
month_images = images_base.filter(created_at__gte=month_ago).count()
|
||||
|
||||
# Estimate words (avg 1500 per article)
|
||||
content_velocity = {
|
||||
'thisWeek': {
|
||||
'articles': week_content,
|
||||
'words': week_content * 1500,
|
||||
'images': week_images,
|
||||
},
|
||||
'thisMonth': {
|
||||
'articles': month_content,
|
||||
'words': month_content * 1500,
|
||||
'images': month_images,
|
||||
},
|
||||
'total': {
|
||||
'articles': total_content,
|
||||
'words': total_content * 1500,
|
||||
'images': total_images,
|
||||
},
|
||||
'trend': 0, # TODO: calculate actual trend
|
||||
}
|
||||
|
||||
# ========== PIPELINE COUNTS ==========
|
||||
keywords_base = Keywords.objects.filter(account=account)
|
||||
clusters_base = Clusters.objects.filter(account=account)
|
||||
ideas_base = ContentIdeas.objects.filter(account=account)
|
||||
|
||||
if site_filter:
|
||||
keywords_base = keywords_base.filter(**site_filter)
|
||||
clusters_base = clusters_base.filter(**site_filter)
|
||||
ideas_base = ideas_base.filter(**site_filter)
|
||||
|
||||
# Get site count
|
||||
sites_count = Site.objects.filter(account=account, is_active=True).count()
|
||||
|
||||
pipeline = {
|
||||
'sites': sites_count,
|
||||
'keywords': keywords_base.count(),
|
||||
'clusters': clusters_base.count(),
|
||||
'ideas': ideas_base.count(),
|
||||
'tasks': ideas_base.filter(status='queued').count() + ideas_base.filter(status='completed').count(),
|
||||
'drafts': draft_content + review_content,
|
||||
'published': published_content,
|
||||
}
|
||||
|
||||
return Response({
|
||||
'ai_operations': ai_operations,
|
||||
'recent_activity': recent_activity,
|
||||
'content_velocity': content_velocity,
|
||||
'pipeline': pipeline,
|
||||
'counts': {
|
||||
'content': {
|
||||
'total': total_content,
|
||||
'draft': draft_content,
|
||||
'review': review_content,
|
||||
'published': published_content,
|
||||
},
|
||||
'images': {
|
||||
'total': total_images,
|
||||
'generated': generated_images,
|
||||
'pending': pending_images,
|
||||
},
|
||||
}
|
||||
})
|
||||
|
||||
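The `totals` block in the stats payload guards the per-operation average against division by zero when no credit usage was logged in the window. A standalone sketch of just that formatting step (`format_ai_totals` is a hypothetical helper; the view inlines this logic):

```python
def format_ai_totals(total_ops, total_credits):
    """Mirror the totals math in DashboardStatsViewSet.stats: zero-guard the average."""
    return {
        'count': total_ops,
        'credits': total_credits,
        'avgCreditsPerOp': round(total_credits / total_ops, 1) if total_ops > 0 else 0,
    }

print(format_ai_totals(4, 10))  # avgCreditsPerOp: 2.5
print(format_ai_totals(0, 0))   # avgCreditsPerOp: 0, no ZeroDivisionError
```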
@@ -124,12 +124,22 @@ class IsEditorOrAbove(permissions.BasePermission):
class IsAdminOrOwner(permissions.BasePermission):
    """
    Permission class that requires admin or owner role only
    OR user belongs to aws-admin account
    For settings, keys, billing operations
    """
    def has_permission(self, request, view):
        if not request.user or not request.user.is_authenticated:
            return False

        # Check if user belongs to aws-admin account (case-insensitive)
        if hasattr(request.user, 'account') and request.user.account:
            account_name = getattr(request.user.account, 'name', None)
            account_slug = getattr(request.user.account, 'slug', None)
            if account_name and account_name.lower() == 'aws admin':
                return True
            if account_slug == 'aws-admin':
                return True

        # Check user role
        if hasattr(request.user, 'role'):
            role = request.user.role

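The permission above combines an account-level bypass with a role check; the hunk is truncated before the role comparison itself. A pure-function sketch of the decision, where the `'admin'`/`'owner'` role set is an assumption inferred from the class name rather than shown in the diff:

```python
def is_admin_or_owner(role, account_name=None, account_slug=None):
    """Sketch of the IsAdminOrOwner checks; the role set is an assumption."""
    # aws-admin account bypass (case-insensitive on the name, exact on the slug)
    if account_name and account_name.lower() == 'aws admin':
        return True
    if account_slug == 'aws-admin':
        return True
    # Otherwise require an elevated role (assumed values)
    return role in ('admin', 'owner')
```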
@@ -6,8 +6,10 @@ from rest_framework.routers import DefaultRouter
from .account_views import (
    AccountSettingsViewSet,
    TeamManagementViewSet,
    UsageAnalyticsViewSet
    UsageAnalyticsViewSet,
    DashboardStatsViewSet
)
from igny8_core.modules.system.settings_views import ContentGenerationSettingsViewSet

router = DefaultRouter()

@@ -15,6 +17,10 @@ urlpatterns = [
    # Account settings (non-router endpoints for simplified access)
    path('settings/', AccountSettingsViewSet.as_view({'get': 'retrieve', 'patch': 'partial_update'}), name='account-settings'),

    # AI Settings - Content Generation Settings per the plan
    # GET/POST /api/v1/account/settings/ai/
    path('settings/ai/', ContentGenerationSettingsViewSet.as_view({'get': 'list', 'post': 'create', 'put': 'create'}), name='ai-settings'),

    # Team management
    path('team/', TeamManagementViewSet.as_view({'get': 'list', 'post': 'create'}), name='team-list'),
    path('team/<int:pk>/', TeamManagementViewSet.as_view({'delete': 'destroy'}), name='team-detail'),
@@ -22,5 +28,8 @@ urlpatterns = [
    # Usage analytics
    path('usage/analytics/', UsageAnalyticsViewSet.as_view({'get': 'overview'}), name='usage-analytics'),

    # Dashboard stats (real data for home page)
    path('dashboard/stats/', DashboardStatsViewSet.as_view({'get': 'stats'}), name='dashboard-stats'),

    path('', include(router.urls)),
]
]

@@ -117,7 +117,7 @@ class PlanResource(resources.ModelResource):
    class Meta:
        model = Plan
        fields = ('id', 'name', 'slug', 'price', 'billing_cycle', 'max_sites', 'max_users',
                  'max_keywords', 'max_content_words', 'included_credits', 'is_active', 'is_featured')
                  'max_keywords', 'max_ahrefs_queries', 'included_credits', 'is_active', 'is_featured')
        export_order = fields
        import_id_fields = ('id',)
        skip_unchanged = True
@@ -127,7 +127,7 @@ class PlanResource(resources.ModelResource):
class PlanAdmin(ImportExportMixin, Igny8ModelAdmin):
    resource_class = PlanResource
    """Plan admin - Global, no account filtering needed"""
    list_display = ['name', 'slug', 'price', 'billing_cycle', 'max_sites', 'max_users', 'max_keywords', 'max_content_words', 'included_credits', 'is_active', 'is_featured']
    list_display = ['name', 'slug', 'price', 'billing_cycle', 'max_sites', 'max_users', 'max_keywords', 'max_ahrefs_queries', 'included_credits', 'is_active', 'is_featured']
    list_filter = ['is_active', 'billing_cycle', 'is_internal', 'is_featured']
    search_fields = ['name', 'slug']
    readonly_fields = ['created_at']
@@ -147,12 +147,12 @@ class PlanAdmin(ImportExportMixin, Igny8ModelAdmin):
            'description': 'Persistent limits for account-level resources'
        }),
        ('Hard Limits (Persistent)', {
            'fields': ('max_keywords', 'max_clusters'),
            'fields': ('max_keywords',),
            'description': 'Total allowed - never reset'
        }),
        ('Monthly Limits (Reset on Billing Cycle)', {
            'fields': ('max_content_ideas', 'max_content_words', 'max_images_basic', 'max_images_premium', 'max_image_prompts'),
            'description': 'Monthly allowances - reset at billing cycle'
            'fields': ('max_ahrefs_queries',),
            'description': 'Monthly Ahrefs keyword research queries (0 = disabled)'
        }),
        ('Billing & Credits', {
            'fields': ('included_credits', 'extra_credit_price', 'allow_credit_topup', 'auto_credit_topup_threshold', 'auto_credit_topup_amount', 'credits_per_month')
@@ -214,6 +214,7 @@ class AccountAdmin(ExportMixin, AccountAdminMixin, SimpleHistoryAdmin, Igny8Mode
        'bulk_add_credits',
        'bulk_subtract_credits',
        'bulk_soft_delete',
        'bulk_hard_delete',
    ]

    def get_queryset(self, request):

@@ -454,14 +455,39 @@ class AccountAdmin(ExportMixin, AccountAdminMixin, SimpleHistoryAdmin, Igny8Mode
    bulk_subtract_credits.short_description = 'Subtract credits from accounts'

    def bulk_soft_delete(self, request, queryset):
        """Soft delete selected accounts"""
        """Soft delete selected accounts and all related data"""
        count = 0
        for account in queryset:
            if account.slug != 'aws-admin':  # Protect admin account
                account.delete()  # Soft delete via SoftDeletableModel
                account.delete()  # Soft delete via SoftDeletableModel (now cascades)
                count += 1
        self.message_user(request, f'{count} account(s) soft deleted.', messages.SUCCESS)
    bulk_soft_delete.short_description = 'Soft delete selected accounts'
        self.message_user(request, f'{count} account(s) and all related data soft deleted.', messages.SUCCESS)
    bulk_soft_delete.short_description = 'Soft delete accounts (with cascade)'

    def bulk_hard_delete(self, request, queryset):
        """PERMANENTLY delete selected accounts and ALL related data - cannot be undone!"""
        import traceback
        count = 0
        errors = []
        for account in queryset:
            if account.slug == 'aws-admin':  # Protect admin account
                errors.append(f'{account.name}: Protected system account')
                continue
            try:
                account.hard_delete_with_cascade()  # Permanently delete everything
                count += 1
            except Exception as e:
                # Log full traceback for debugging
                import logging
                logger = logging.getLogger(__name__)
                logger.error(f'Hard delete failed for account {account.pk} ({account.name}): {traceback.format_exc()}')
                errors.append(f'{account.name}: {str(e)}')

        if count > 0:
            self.message_user(request, f'{count} account(s) and ALL related data permanently deleted.', messages.SUCCESS)
        if errors:
            self.message_user(request, f'Errors: {"; ".join(errors)}', messages.ERROR)
    bulk_hard_delete.short_description = '⚠️ PERMANENTLY delete accounts (irreversible!)'


class SubscriptionResource(resources.ModelResource):

@@ -981,7 +1007,7 @@ class UserAdmin(ExportMixin, BaseUserAdmin, Igny8ModelAdmin):
    list_display = ['email', 'username', 'account', 'role', 'is_active', 'is_staff', 'created_at']
    list_filter = ['role', 'account', 'is_active', 'is_staff']
    search_fields = ['email', 'username']
    readonly_fields = ['created_at', 'updated_at']
    readonly_fields = ['created_at', 'updated_at', 'password_display']

    fieldsets = BaseUserAdmin.fieldsets + (
        ('IGNY8 Info', {'fields': ('account', 'role')}),
@@ -999,8 +1025,45 @@ class UserAdmin(ExportMixin, BaseUserAdmin, Igny8ModelAdmin):
        'bulk_activate',
        'bulk_deactivate',
        'bulk_send_password_reset',
        'bulk_set_temporary_password',
    ]

    def password_display(self, obj):
        """Show password hash with copy button (for debugging only)"""
        if obj.password:
            return f'Hash: {obj.password[:50]}...'
        return 'No password set'
    password_display.short_description = 'Password Hash'

    def bulk_set_temporary_password(self, request, queryset):
        """Set a temporary password for selected users and display it"""
        import secrets
        import string

        # Generate a secure random password
        alphabet = string.ascii_letters + string.digits
        temp_password = ''.join(secrets.choice(alphabet) for _ in range(12))

        users_updated = []
        for user in queryset:
            user.set_password(temp_password)
            user.save(update_fields=['password'])
            users_updated.append(user.email)

        if users_updated:
            # Display the password in the message (only visible to admin)
            self.message_user(
                request,
                f'Temporary password set for {len(users_updated)} user(s): "{temp_password}" (same password for all selected users)',
                messages.SUCCESS
            )
            self.message_user(
                request,
                f'Users updated: {", ".join(users_updated)}',
                messages.INFO
            )
    bulk_set_temporary_password.short_description = '🔑 Set temporary password (will display)'

    def get_queryset(self, request):
        """Filter users by account for non-superusers"""
        qs = super().get_queryset(request)

@@ -25,18 +25,7 @@ class Command(BaseCommand):
                'max_users': 999999,
                'max_sites': 999999,
                'max_keywords': 999999,
                'max_clusters': 999999,
                'max_content_ideas': 999999,
                'monthly_word_count_limit': 999999999,
                'daily_content_tasks': 999999,
                'daily_ai_requests': 999999,
                'daily_ai_request_limit': 999999,
                'monthly_ai_credit_limit': 999999,
                'monthly_image_count': 999999,
                'daily_image_generation_limit': 999999,
                'monthly_cluster_ai_credits': 999999,
                'monthly_content_ai_credits': 999999,
                'monthly_image_ai_credits': 999999,
                'max_ahrefs_queries': 999999,
                'included_credits': 999999,
                'is_active': True,
                'features': ['ai_writer', 'image_gen', 'auto_publish', 'custom_prompts', 'unlimited'],

@@ -0,0 +1,100 @@
# Generated by IGNY8 Phase 1: Simplify Credits & Limits
# Migration: Remove unused limit fields, add Ahrefs query tracking
# Date: January 5, 2026

from django.db import migrations, models
import django.core.validators


class Migration(migrations.Migration):
    """
    Simplify the credits and limits system:

    PLAN MODEL:
    - REMOVE: max_clusters, max_content_ideas, max_content_words,
              max_images_basic, max_images_premium, max_image_prompts
    - ADD: max_ahrefs_queries (monthly keyword research queries)

    ACCOUNT MODEL:
    - REMOVE: usage_content_ideas, usage_content_words, usage_images_basic,
              usage_images_premium, usage_image_prompts
    - ADD: usage_ahrefs_queries

    RATIONALE:
    All consumption is now controlled by credits only. The only non-credit
    limits are: sites, users, keywords (hard limits) and ahrefs_queries (monthly).
    """

    dependencies = [
        ('igny8_core_auth', '0018_add_country_remove_intent_seedkeyword'),
    ]

    operations = [
        # STEP 1: Add new Ahrefs fields FIRST (before removing old ones)
        migrations.AddField(
            model_name='plan',
            name='max_ahrefs_queries',
            field=models.IntegerField(
                default=0,
                validators=[django.core.validators.MinValueValidator(0)],
                help_text='Monthly Ahrefs keyword research queries (0 = disabled)'
            ),
        ),
        migrations.AddField(
            model_name='account',
            name='usage_ahrefs_queries',
            field=models.IntegerField(
                default=0,
                validators=[django.core.validators.MinValueValidator(0)],
                help_text='Ahrefs queries used this month'
            ),
        ),

        # STEP 2: Remove unused Plan fields
        migrations.RemoveField(
            model_name='plan',
            name='max_clusters',
        ),
        migrations.RemoveField(
            model_name='plan',
            name='max_content_ideas',
        ),
        migrations.RemoveField(
            model_name='plan',
            name='max_content_words',
        ),
        migrations.RemoveField(
            model_name='plan',
            name='max_images_basic',
        ),
        migrations.RemoveField(
            model_name='plan',
            name='max_images_premium',
        ),
        migrations.RemoveField(
            model_name='plan',
            name='max_image_prompts',
        ),

        # STEP 3: Remove unused Account fields
        migrations.RemoveField(
            model_name='account',
            name='usage_content_ideas',
        ),
        migrations.RemoveField(
            model_name='account',
            name='usage_content_words',
        ),
        migrations.RemoveField(
            model_name='account',
            name='usage_images_basic',
        ),
        migrations.RemoveField(
            model_name='account',
            name='usage_images_premium',
        ),
        migrations.RemoveField(
            model_name='account',
            name='usage_image_prompts',
        ),
    ]

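The migration pairs a plan-level cap (`max_ahrefs_queries`) with an account-level counter (`usage_ahrefs_queries`). The enforcement code is not part of this diff; a hypothetical gate showing how the two fields and the "0 = disabled" convention from the `help_text` would combine:

```python
def can_run_ahrefs_query(usage_ahrefs_queries, max_ahrefs_queries):
    """Hypothetical check pairing the new Account counter with the Plan cap.

    Per the field help_text, max_ahrefs_queries == 0 means Ahrefs research
    is disabled for the plan; otherwise the monthly counter must stay below it.
    """
    if max_ahrefs_queries <= 0:
        return False
    return usage_ahrefs_queries < max_ahrefs_queries
```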
@@ -0,0 +1,39 @@
# Generated by Django 5.2.9 on 2026-01-06 00:11

import django.core.validators
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('igny8_core_auth', '0019_simplify_credits_limits'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='historicalaccount',
            name='usage_content_ideas',
        ),
        migrations.RemoveField(
            model_name='historicalaccount',
            name='usage_content_words',
        ),
        migrations.RemoveField(
            model_name='historicalaccount',
            name='usage_image_prompts',
        ),
        migrations.RemoveField(
            model_name='historicalaccount',
            name='usage_images_basic',
        ),
        migrations.RemoveField(
            model_name='historicalaccount',
            name='usage_images_premium',
        ),
        migrations.AddField(
            model_name='historicalaccount',
            name='usage_ahrefs_queries',
            field=models.IntegerField(default=0, help_text='Ahrefs queries used this month', validators=[django.core.validators.MinValueValidator(0)]),
        ),
    ]

@@ -108,11 +108,7 @@ class Account(SoftDeletableModel):
    tax_id = models.CharField(max_length=100, blank=True, help_text="VAT/Tax ID number")

    # Monthly usage tracking (reset on billing cycle)
    usage_content_ideas = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Content ideas generated this month")
    usage_content_words = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Content words generated this month")
    usage_images_basic = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Basic AI images this month")
    usage_images_premium = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Premium AI images this month")
    usage_image_prompts = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Image prompts this month")
    usage_ahrefs_queries = models.IntegerField(default=0, validators=[MinValueValidator(0)], help_text="Ahrefs queries used this month")
    usage_period_start = models.DateTimeField(null=True, blank=True, help_text="Current billing period start")
    usage_period_end = models.DateTimeField(null=True, blank=True, help_text="Current billing period end")

@@ -157,12 +153,152 @@ class Account(SoftDeletableModel):
        # System accounts bypass all filtering restrictions
        return self.slug in ['aws-admin', 'default-account', 'default']

    def soft_delete(self, user=None, reason=None, retention_days=None):
    def soft_delete(self, user=None, reason=None, retention_days=None, cascade=True):
        """
        Soft delete the account and optionally cascade to all related objects.
        Args:
            user: User performing the deletion
            reason: Reason for deletion
            retention_days: Days before permanent deletion
            cascade: If True, also soft-delete related objects that support soft delete,
                     and hard-delete objects that don't support soft delete
        """
        if self.is_system_account():
            from django.core.exceptions import PermissionDenied
            raise PermissionDenied("System account cannot be deleted.")

        if cascade:
            self._cascade_delete_related(user=user, reason=reason, retention_days=retention_days, hard_delete=False)

        return super().soft_delete(user=user, reason=reason, retention_days=retention_days)

    def _cascade_delete_related(self, user=None, reason=None, retention_days=None, hard_delete=False):
        """
        Delete all related objects when account is deleted.
        For soft delete: soft-deletes objects with SoftDeletableModel, hard-deletes others
        For hard delete: hard-deletes everything
        """
        from igny8_core.common.soft_delete import SoftDeletableModel

        # List of related objects to delete (in order to avoid FK constraint issues)
        # Related names from Account reverse relations
        related_names = [
            # Content & Planning related (delete first due to dependencies)
            'contentclustermap_set',
            'contentattribute_set',
            'contenttaxonomy_set',
            'content_set',
            'images_set',
            'contentideas_set',
            'tasks_set',
            'keywords_set',
            'clusters_set',
            'strategy_set',
            # Automation
            'automation_runs',
            'automation_configs',
            # Publishing & Integration
            'syncevent_set',
            'publishingsettings_set',
            'publishingrecord_set',
            'deploymentrecord_set',
            'siteintegration_set',
            # Notifications & Optimization
            'notification_set',
            'optimizationtask_set',
            # AI & Settings
            'aitasklog_set',
            'aiprompt_set',
            'aisettings_set',
            'authorprofile_set',
            # Billing (preserve invoices/payments for audit, delete others)
            'planlimitusage_set',
            'creditusagelog_set',
            'credittransaction_set',
            'accountpaymentmethod_set',
            'payment_set',
            'invoice_set',
            # Settings
            'modulesettings_set',
            'moduleenablesettings_set',
            'integrationsettings_set',
            'user_settings',
            'accountsettings_set',
            # Core (last due to dependencies)
            'sector_set',
            'site_set',
            # Users (delete after sites to avoid FK issues, owner is SET_NULL)
            'users',
            # Subscription (OneToOne)
            'subscription',
        ]

        for related_name in related_names:
            try:
                related = getattr(self, related_name, None)
                if related is None:
                    continue

                # Handle OneToOne fields (subscription)
                if hasattr(related, 'pk'):
                    # It's a single object (OneToOneField)
                    if hard_delete:
                        related.hard_delete() if hasattr(related, 'hard_delete') else related.delete()
                    elif isinstance(related, SoftDeletableModel):
                        related.soft_delete(user=user, reason=reason, retention_days=retention_days)
                    else:
                        # Non-soft-deletable single object - hard delete
                        related.delete()
                else:
                    # It's a RelatedManager (ForeignKey)
                    queryset = related.all()
                    if queryset.exists():
                        if hard_delete:
                            # Hard delete all
                            if hasattr(queryset, 'hard_delete'):
                                queryset.hard_delete()
                            else:
                                for obj in queryset:
                                    if hasattr(obj, 'hard_delete'):
                                        obj.hard_delete()
                                    else:
                                        obj.delete()
                        else:
                            # Soft delete if supported, otherwise hard delete
                            model = queryset.model
                            if issubclass(model, SoftDeletableModel):
                                for obj in queryset:
                                    obj.soft_delete(user=user, reason=reason, retention_days=retention_days)
                            else:
                                queryset.delete()
            except Exception as e:
                # Log but don't fail - some relations may not exist
                import logging
                logger = logging.getLogger(__name__)
                logger.warning(f"Failed to delete related {related_name} for account {self.pk}: {e}")

    def hard_delete_with_cascade(self, using=None, keep_parents=False):
        """
        Permanently delete the account and ALL related objects.
        This bypasses soft-delete and removes everything from the database.
        USE WITH CAUTION - this cannot be undone!
        """
        if self.is_system_account():
            from django.core.exceptions import PermissionDenied
            raise PermissionDenied("System account cannot be deleted.")

        # Clear owner reference first to avoid FK constraint issues
        # (owner is SET_NULL but we're deleting the user who is the owner)
        if self.owner:
            self.owner = None
            self.save(update_fields=['owner'])

        # Cascade hard-delete all related objects first
        self._cascade_delete_related(hard_delete=True)

        # Finally hard-delete the account itself
        return super().hard_delete(using=using, keep_parents=keep_parents)

    def delete(self, using=None, keep_parents=False):
        return self.soft_delete()

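The cascade walks each reverse relation and picks a deletion strategy per object: hard delete when requested (preferring a model-provided `hard_delete`), soft delete when the model supports it, plain delete otherwise. A minimal standalone sketch of that per-object dispatch, with stand-in classes in place of the real `igny8_core` models:

```python
class SoftDeletableModel:
    """Stand-in for igny8_core.common.soft_delete.SoftDeletableModel."""
    def __init__(self):
        self.is_deleted = False

    def soft_delete(self, **kwargs):
        self.is_deleted = True


class PlainModel:
    """Stand-in for a model without soft-delete support."""
    def __init__(self):
        self.exists = True

    def delete(self):
        self.exists = False


def delete_related(obj, hard_delete=False):
    """Per-object dispatch mirroring _cascade_delete_related above (sketch)."""
    if hard_delete:
        # Prefer a real hard_delete when the model provides one
        (obj.hard_delete if hasattr(obj, 'hard_delete') else obj.delete)()
    elif isinstance(obj, SoftDeletableModel):
        obj.soft_delete()
    else:
        obj.delete()
```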
@@ -216,37 +352,12 @@ class Plan(models.Model):
        validators=[MinValueValidator(1)],
        help_text="Maximum total keywords allowed (hard limit)"
    )
    max_clusters = models.IntegerField(
        default=100,
        validators=[MinValueValidator(1)],
        help_text="Maximum AI keyword clusters allowed (hard limit)"
    )

    # Monthly Limits (Reset on billing cycle)
    max_content_ideas = models.IntegerField(
        default=300,
        validators=[MinValueValidator(1)],
        help_text="Maximum AI content ideas per month"
    )
    max_content_words = models.IntegerField(
        default=100000,
        validators=[MinValueValidator(1)],
        help_text="Maximum content words per month (e.g., 100000 = 100K words)"
    )
    max_images_basic = models.IntegerField(
        default=300,
    max_ahrefs_queries = models.IntegerField(
        default=0,
        validators=[MinValueValidator(0)],
        help_text="Maximum basic AI images per month"
    )
    max_images_premium = models.IntegerField(
        default=60,
        validators=[MinValueValidator(0)],
        help_text="Maximum premium AI images per month (DALL-E)"
    )
    max_image_prompts = models.IntegerField(
        default=300,
        validators=[MinValueValidator(0)],
        help_text="Maximum image prompts per month"
        help_text="Monthly Ahrefs keyword research queries (0 = disabled)"
    )

    # Billing & Credits (Phase 0: Credit-only system)

@@ -13,9 +13,7 @@ class PlanSerializer(serializers.ModelSerializer):
|
||||
'id', 'name', 'slug', 'price', 'original_price', 'billing_cycle', 'annual_discount_percent',
|
||||
'is_featured', 'features', 'is_active',
|
||||
'max_users', 'max_sites', 'max_industries', 'max_author_profiles',
|
||||
'max_keywords', 'max_clusters',
|
||||
'max_content_ideas', 'max_content_words',
|
||||
'max_images_basic', 'max_images_premium', 'max_image_prompts',
|
||||
'max_keywords', 'max_ahrefs_queries',
|
||||
'included_credits', 'extra_credit_price', 'allow_credit_topup',
|
||||
'auto_credit_topup_threshold', 'auto_credit_topup_amount',
|
||||
'stripe_product_id', 'stripe_price_id', 'credits_per_month'
|
||||
@@ -55,7 +53,7 @@ class AccountSerializer(serializers.ModelSerializer):
|
||||
fields = [
|
||||
'id', 'name', 'slug', 'owner', 'plan', 'plan_id',
|
||||
'credits', 'status', 'payment_method',
|
||||
'subscription', 'created_at'
|
||||
'subscription', 'billing_country', 'created_at'
|
||||
]
|
||||
read_only_fields = ['owner', 'created_at']
|
||||
|
||||
@@ -66,6 +64,8 @@ class SiteSerializer(serializers.ModelSerializer):
|
||||
active_sectors_count = serializers.SerializerMethodField()
selected_sectors = serializers.SerializerMethodField()
can_add_sectors = serializers.SerializerMethodField()
keywords_count = serializers.SerializerMethodField()
has_integration = serializers.SerializerMethodField()
industry_name = serializers.CharField(source='industry.name', read_only=True)
industry_slug = serializers.CharField(source='industry.slug', read_only=True)
# Override domain field to use CharField instead of URLField to avoid premature validation

@@ -79,7 +79,7 @@ class SiteSerializer(serializers.ModelSerializer):
    'is_active', 'status',
    'site_type', 'hosting_type', 'seo_metadata',
    'sectors_count', 'active_sectors_count', 'selected_sectors',
    'can_add_sectors',
    'can_add_sectors', 'keywords_count', 'has_integration',
    'created_at', 'updated_at'
]
read_only_fields = ['created_at', 'updated_at', 'account']

@@ -161,6 +161,20 @@ class SiteSerializer(serializers.ModelSerializer):
    """Check if site can add more sectors (max 5)."""
    return obj.can_add_sector()

def get_keywords_count(self, obj):
    """Get total keywords count for the site across all sectors."""
    from igny8_core.modules.planner.models import Keywords
    return Keywords.objects.filter(site=obj).count()

def get_has_integration(self, obj):
    """Check if site has an active WordPress integration."""
    from igny8_core.business.integration.models import SiteIntegration
    return SiteIntegration.objects.filter(
        site=obj,
        platform='wordpress',
        is_active=True
    ).exists() or bool(obj.wp_url)


class IndustrySectorSerializer(serializers.ModelSerializer):
    """Serializer for IndustrySector model."""

@@ -392,11 +406,20 @@ class RegisterSerializer(serializers.Serializer):
)

# Generate unique slug for account
base_slug = account_name.lower().replace(' ', '-').replace('_', '-')[:50] or 'account'
slug = base_slug
# Clean the base slug: lowercase, replace spaces and underscores with hyphens
import re
import random
import string
base_slug = re.sub(r'[^a-z0-9-]', '', account_name.lower().replace(' ', '-').replace('_', '-'))[:40] or 'account'

# Add random suffix to prevent collisions (especially during concurrent registrations)
random_suffix = ''.join(random.choices(string.ascii_lowercase + string.digits, k=6))
slug = f"{base_slug}-{random_suffix}"

# Ensure uniqueness with fallback counter
counter = 1
while Account.objects.filter(slug=slug).exists():
    slug = f"{base_slug}-{counter}"
    slug = f"{base_slug}-{random_suffix}-{counter}"
    counter += 1

# Create account with status and credits seeded (0 for paid pending)
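The slug logic in this hunk (clean the name, append a random suffix, fall back to a counter) can be exercised outside Django. This is a sketch only: `make_account_slug` and the `existing` set are hypothetical stand-ins for the `Account.objects.filter(slug=slug).exists()` lookup.

```python
import random
import re
import string

def make_account_slug(account_name: str, existing: set) -> str:
    """Mirror of the registration slug logic: clean, random suffix, counter fallback."""
    # Lowercase, spaces/underscores to hyphens, strip anything non [a-z0-9-]
    base = re.sub(r'[^a-z0-9-]', '',
                  account_name.lower().replace(' ', '-').replace('_', '-'))[:40] or 'account'
    # 6-char random suffix to avoid collisions under concurrent registrations
    suffix = ''.join(random.choices(string.ascii_lowercase + string.digits, k=6))
    slug = f"{base}-{suffix}"
    counter = 1
    while slug in existing:  # stands in for the ORM .exists() check
        slug = f"{base}-{suffix}-{counter}"
        counter += 1
    return slug

slug = make_account_slug("Acme Corp!", existing=set())
```

The random suffix makes a first-try collision unlikely, so the counter loop is a rarely-hit safety net rather than the primary uniqueness mechanism.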
@@ -109,16 +109,38 @@ class RegisterView(APIView):
refresh_expires_at = timezone.now() + get_refresh_token_expiry()

user_serializer = UserSerializer(user)

# Build response data
response_data = {
    'user': user_serializer.data,
    'tokens': {
        'access': access_token,
        'refresh': refresh_token,
        'access_expires_at': access_expires_at.isoformat(),
        'refresh_expires_at': refresh_expires_at.isoformat(),
    }
}

# NOTE: Payment checkout is NO LONGER created at registration
# User will complete payment on /account/plans after signup
# This simplifies the signup flow and consolidates all payment handling

# Send welcome email (if enabled in settings)
try:
    from igny8_core.modules.system.email_models import EmailSettings
    from igny8_core.business.billing.services.email_service import send_welcome_email

    email_settings = EmailSettings.get_settings()
    if email_settings.send_welcome_emails and account:
        send_welcome_email(user, account)
except Exception as e:
    # Don't fail registration if email fails
    import logging
    logger = logging.getLogger(__name__)
    logger.error(f"Failed to send welcome email for user {user.id}: {e}")

return success_response(
    data={
        'user': user_serializer.data,
        'tokens': {
            'access': access_token,
            'refresh': refresh_token,
            'access_expires_at': access_expires_at.isoformat(),
            'refresh_expires_at': refresh_expires_at.isoformat(),
        }
    },
    data=response_data,
    message='Registration successful',
    status_code=status.HTTP_201_CREATED,
    request=request
@@ -263,6 +285,128 @@ class LoginView(APIView):
    )


@extend_schema(
    tags=['Authentication'],
    summary='Request Password Reset',
    description='Request password reset email'
)
class PasswordResetRequestView(APIView):
    """Request password reset endpoint - sends email with reset token."""
    permission_classes = [permissions.AllowAny]

    def post(self, request):
        from .serializers import RequestPasswordResetSerializer
        from .models import PasswordResetToken

        serializer = RequestPasswordResetSerializer(data=request.data)
        if not serializer.is_valid():
            return error_response(
                error='Validation failed',
                errors=serializer.errors,
                status_code=status.HTTP_400_BAD_REQUEST,
                request=request
            )

        email = serializer.validated_data['email']

        try:
            user = User.objects.get(email=email)
        except User.DoesNotExist:
            # Don't reveal if email exists - return success anyway
            return success_response(
                message='If an account with that email exists, a password reset link has been sent.',
                request=request
            )

        # Generate secure token
        import secrets
        token = secrets.token_urlsafe(32)

        # Create reset token (expires in 1 hour)
        from django.utils import timezone
        from datetime import timedelta
        expires_at = timezone.now() + timedelta(hours=1)

        PasswordResetToken.objects.create(
            user=user,
            token=token,
            expires_at=expires_at
        )

        # Send password reset email
        import logging
        logger = logging.getLogger(__name__)
        logger.info(f"[PASSWORD_RESET] Attempting to send reset email to: {email}")

        try:
            from igny8_core.business.billing.services.email_service import send_password_reset_email
            result = send_password_reset_email(user, token)
            logger.info(f"[PASSWORD_RESET] Email send result: {result}")
            print(f"[PASSWORD_RESET] Email send result: {result}")  # Console output
        except Exception as e:
            logger.error(f"[PASSWORD_RESET] Failed to send password reset email: {e}", exc_info=True)
            print(f"[PASSWORD_RESET] ERROR: {e}")  # Console output

        return success_response(
            message='If an account with that email exists, a password reset link has been sent.',
            request=request
        )

@extend_schema(
    tags=['Authentication'],
    summary='Reset Password',
    description='Reset password using token from email'
)
class PasswordResetConfirmView(APIView):
    """Confirm password reset with token."""
    permission_classes = [permissions.AllowAny]

    def post(self, request):
        from .serializers import ResetPasswordSerializer
        from .models import PasswordResetToken
        from django.utils import timezone

        serializer = ResetPasswordSerializer(data=request.data)
        if not serializer.is_valid():
            return error_response(
                error='Validation failed',
                errors=serializer.errors,
                status_code=status.HTTP_400_BAD_REQUEST,
                request=request
            )

        token = serializer.validated_data['token']
        new_password = serializer.validated_data['new_password']

        try:
            reset_token = PasswordResetToken.objects.get(
                token=token,
                used=False,
                expires_at__gt=timezone.now()
            )
        except PasswordResetToken.DoesNotExist:
            return error_response(
                error='Invalid or expired reset token',
                status_code=status.HTTP_400_BAD_REQUEST,
                request=request
            )

        # Reset password
        user = reset_token.user
        user.set_password(new_password)
        user.save()

        # Mark token as used
        reset_token.used = True
        reset_token.save()

        return success_response(
            message='Password reset successfully. You can now log in with your new password.',
            request=request
        )


@extend_schema(
    tags=['Authentication'],
    summary='Change Password',

@@ -378,6 +522,77 @@ class RefreshTokenView(APIView):
    )


@extend_schema(
    tags=['Authentication'],
    summary='Get Country List',
    description='Returns list of countries for registration country selection'
)
class CountryListView(APIView):
    """Returns list of countries for signup dropdown"""
    permission_classes = [permissions.AllowAny]  # Public endpoint

    def get(self, request):
        """Get list of countries with codes and names"""
        # Comprehensive list of countries for billing purposes
        countries = [
            {'code': 'US', 'name': 'United States'},
            {'code': 'GB', 'name': 'United Kingdom'},
            {'code': 'CA', 'name': 'Canada'},
            {'code': 'AU', 'name': 'Australia'},
            {'code': 'DE', 'name': 'Germany'},
            {'code': 'FR', 'name': 'France'},
            {'code': 'ES', 'name': 'Spain'},
            {'code': 'IT', 'name': 'Italy'},
            {'code': 'NL', 'name': 'Netherlands'},
            {'code': 'BE', 'name': 'Belgium'},
            {'code': 'CH', 'name': 'Switzerland'},
            {'code': 'AT', 'name': 'Austria'},
            {'code': 'SE', 'name': 'Sweden'},
            {'code': 'NO', 'name': 'Norway'},
            {'code': 'DK', 'name': 'Denmark'},
            {'code': 'FI', 'name': 'Finland'},
            {'code': 'IE', 'name': 'Ireland'},
            {'code': 'PT', 'name': 'Portugal'},
            {'code': 'PL', 'name': 'Poland'},
            {'code': 'CZ', 'name': 'Czech Republic'},
            {'code': 'NZ', 'name': 'New Zealand'},
            {'code': 'SG', 'name': 'Singapore'},
            {'code': 'HK', 'name': 'Hong Kong'},
            {'code': 'JP', 'name': 'Japan'},
            {'code': 'KR', 'name': 'South Korea'},
            {'code': 'IN', 'name': 'India'},
            {'code': 'PK', 'name': 'Pakistan'},
            {'code': 'BD', 'name': 'Bangladesh'},
            {'code': 'AE', 'name': 'United Arab Emirates'},
            {'code': 'SA', 'name': 'Saudi Arabia'},
            {'code': 'ZA', 'name': 'South Africa'},
            {'code': 'NG', 'name': 'Nigeria'},
            {'code': 'EG', 'name': 'Egypt'},
            {'code': 'KE', 'name': 'Kenya'},
            {'code': 'BR', 'name': 'Brazil'},
            {'code': 'MX', 'name': 'Mexico'},
            {'code': 'AR', 'name': 'Argentina'},
            {'code': 'CL', 'name': 'Chile'},
            {'code': 'CO', 'name': 'Colombia'},
            {'code': 'PE', 'name': 'Peru'},
            {'code': 'MY', 'name': 'Malaysia'},
            {'code': 'TH', 'name': 'Thailand'},
            {'code': 'VN', 'name': 'Vietnam'},
            {'code': 'PH', 'name': 'Philippines'},
            {'code': 'ID', 'name': 'Indonesia'},
            {'code': 'TR', 'name': 'Turkey'},
            {'code': 'RU', 'name': 'Russia'},
            {'code': 'UA', 'name': 'Ukraine'},
            {'code': 'RO', 'name': 'Romania'},
            {'code': 'GR', 'name': 'Greece'},
            {'code': 'IL', 'name': 'Israel'},
            {'code': 'TW', 'name': 'Taiwan'},
        ]
        # Sort alphabetically by name
        countries.sort(key=lambda x: x['name'])
        return Response({'countries': countries})


@extend_schema(exclude=True)  # Exclude from public API documentation - internal authenticated endpoint
class MeView(APIView):
    """Get current user information."""

@@ -395,12 +610,86 @@ class MeView(APIView):
    )


@extend_schema(
    tags=['Authentication'],
    summary='Unsubscribe from Emails',
    description='Unsubscribe a user from marketing, billing, or all email notifications'
)
class UnsubscribeView(APIView):
    """Handle email unsubscribe requests with signed URLs."""
    permission_classes = [permissions.AllowAny]

    def post(self, request):
        """
        Process unsubscribe request.

        Expected payload:
        - email: The email address to unsubscribe
        - type: Type of emails to unsubscribe from (marketing, billing, all)
        - ts: Timestamp from signed URL
        - sig: HMAC signature from signed URL
        """
        from igny8_core.business.billing.services.email_service import verify_unsubscribe_signature
        import logging

        logger = logging.getLogger(__name__)

        email = request.data.get('email')
        email_type = request.data.get('type', 'all')
        timestamp = request.data.get('ts')
        signature = request.data.get('sig')

        # Validate required fields
        if not email or not timestamp or not signature:
            return error_response(
                error='Missing required parameters',
                status_code=status.HTTP_400_BAD_REQUEST,
                request=request
            )

        try:
            timestamp = int(timestamp)
        except (ValueError, TypeError):
            return error_response(
                error='Invalid timestamp',
                status_code=status.HTTP_400_BAD_REQUEST,
                request=request
            )

        # Verify signature
        if not verify_unsubscribe_signature(email, email_type, timestamp, signature):
            return error_response(
                error='Invalid or expired unsubscribe link',
                status_code=status.HTTP_400_BAD_REQUEST,
                request=request
            )

        # Log the unsubscribe request
        # In production, update user preferences or use email provider's suppression list
        logger.info(f'Unsubscribe request processed: email={email}, type={email_type}')

        # TODO: Implement preference storage
        # Options:
        # 1. Add email preference fields to User model
        # 2. Use Resend's suppression list API
        # 3. Create EmailPreferences model

        return success_response(
            message=f'Successfully unsubscribed from {email_type} emails',
            request=request
        )
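`verify_unsubscribe_signature` is imported from the email service and its implementation is not shown in this diff. A typical scheme for signed unsubscribe links of this shape (email, type, timestamp, HMAC) is sketched below as an assumption; the key, message layout, and `max_age` are illustrative only.

```python
import hashlib
import hmac

SECRET = b"server-side-signing-key"  # hypothetical; the real key lives server-side

def sign_unsubscribe(email: str, email_type: str, ts: int) -> str:
    """HMAC-SHA256 over the fields embedded in the unsubscribe URL."""
    msg = f"{email}:{email_type}:{ts}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_unsubscribe(email: str, email_type: str, ts: int, sig: str,
                       max_age: int, now: int) -> bool:
    if now - ts > max_age:
        return False  # link expired
    expected = sign_unsubscribe(email, email_type, ts)
    # Constant-time comparison prevents timing attacks on the signature
    return hmac.compare_digest(expected, sig)
```

Because the signature covers `type` as well, a link issued for marketing emails cannot be replayed to unsubscribe someone from billing notices.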


urlpatterns = [
    path('', include(router.urls)),
    path('register/', csrf_exempt(RegisterView.as_view()), name='auth-register'),
    path('login/', csrf_exempt(LoginView.as_view()), name='auth-login'),
    path('refresh/', csrf_exempt(RefreshTokenView.as_view()), name='auth-refresh'),
    path('change-password/', ChangePasswordView.as_view(), name='auth-change-password'),
    path('password-reset/', csrf_exempt(PasswordResetRequestView.as_view()), name='auth-password-reset-request'),
    path('password-reset/confirm/', csrf_exempt(PasswordResetConfirmView.as_view()), name='auth-password-reset-confirm'),
    path('me/', MeView.as_view(), name='auth-me'),
    path('countries/', CountryListView.as_view(), name='auth-countries'),
    path('unsubscribe/', csrf_exempt(UnsubscribeView.as_view()), name='auth-unsubscribe'),
]


@@ -1267,16 +1267,21 @@ class AuthViewSet(viewsets.GenericViewSet):
    expires_at=expires_at
)

# Send email (async via Celery if available, otherwise sync)
# Send password reset email using the email service
try:
    from igny8_core.modules.system.tasks import send_password_reset_email
    send_password_reset_email.delay(user.id, token)
except:
    # Fallback to sync email sending
    from igny8_core.business.billing.services.email_service import send_password_reset_email
    send_password_reset_email(user, token)
except Exception as e:
    # Fallback to Django's send_mail if email service fails
    import logging
    logger = logging.getLogger(__name__)
    logger.error(f"Failed to send password reset email via email service: {e}")

    from django.core.mail import send_mail
    from django.conf import settings

    reset_url = f"{request.scheme}://{request.get_host()}/reset-password?token={token}"
    frontend_url = getattr(settings, 'FRONTEND_URL', 'https://app.igny8.com')
    reset_url = f"{frontend_url}/reset-password?token={token}"

    send_mail(
        subject='Reset Your IGNY8 Password',
@@ -0,0 +1,22 @@
# Generated migration for adding initial_snapshot field to AutomationRun

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('automation', '0005_add_default_image_service'),
    ]

    operations = [
        migrations.AddField(
            model_name='automationrun',
            name='initial_snapshot',
            field=models.JSONField(
                blank=True,
                default=dict,
                help_text='Snapshot of initial queue sizes: {stage_1_initial, stage_2_initial, ..., total_initial_items}'
            ),
        ),
    ]

@@ -88,6 +88,13 @@ class AutomationRun(models.Model):

    total_credits_used = models.IntegerField(default=0)

    # Initial queue snapshot - captured at run start for accurate progress tracking
    initial_snapshot = models.JSONField(
        default=dict,
        blank=True,
        help_text="Snapshot of initial queue sizes: {stage_1_initial, stage_2_initial, ..., total_initial_items}"
    )

    # JSON results per stage
    stage_1_result = models.JSONField(null=True, blank=True, help_text="{keywords_processed, clusters_created, batches}")
    stage_2_result = models.JSONField(null=True, blank=True, help_text="{clusters_processed, ideas_created}")

[File diff suppressed because it is too large]
@@ -387,16 +387,17 @@ class AutomationViewSet(viewsets.ViewSet):

    return counts, total

# Stage 1: Keywords pending clustering (keep previous "pending" semantics but also return status breakdown)
# Stage 1: Keywords pending clustering
stage_1_counts, stage_1_total = _counts_by_status(
    Keywords,
    extra_filter={'disabled': False}
)
# pending definition used by the UI previously (new & not clustered)
# FIXED: Stage 1 pending = all keywords with status='new' (ready for clustering)
# This should match the "New" count shown in Keywords metric card
# Previously filtered by cluster__isnull=True which caused mismatch
stage_1_pending = Keywords.objects.filter(
    site=site,
    status='new',
    cluster__isnull=True,
    disabled=False
).count()
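The difference between the status breakdown (`_counts_by_status`) and the stricter "pending" filter in this hunk can be shown with plain dictionaries. The sample `keywords` list and its field names are illustrative stand-ins for the `Keywords` queryset, not real data.

```python
from collections import Counter

keywords = [
    {"status": "new", "disabled": False, "cluster": None},
    {"status": "new", "disabled": False, "cluster": 7},
    {"status": "clustered", "disabled": False, "cluster": 3},
    {"status": "new", "disabled": True, "cluster": None},  # disabled: excluded everywhere
]

# Status breakdown over enabled keywords (what _counts_by_status returns)
enabled = [k for k in keywords if not k["disabled"]]
counts = Counter(k["status"] for k in enabled)
total = len(enabled)

# "Pending" per the old filter: status='new' AND cluster is null AND enabled
pending = sum(1 for k in enabled if k["status"] == "new" and k["cluster"] is None)
```

Note the mismatch the FIXED comment describes: two keywords are `new`, but only one is also unclustered, so the pending count (1) undercounts the "New" metric card (2) whenever a `new` keyword already has a cluster assigned.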

@@ -714,3 +715,237 @@ class AutomationViewSet(viewsets.ViewSet):
        status=status.HTTP_500_INTERNAL_SERVER_ERROR
    )

@extend_schema(tags=['Automation'])
@action(detail=False, methods=['get'], url_path='run_progress')
def run_progress(self, request):
    """
    GET /api/v1/automation/run_progress/?site_id=123&run_id=abc

    Unified endpoint for ALL run progress data - global + per-stage.
    Replaces multiple separate API calls with single comprehensive response.

    Response includes:
    - run: Current run status and metadata
    - global_progress: Overall pipeline progress percentage
    - stages: Per-stage progress with input/output/processed counts
    - metrics: Credits used, duration, errors
    """
    site_id = request.query_params.get('site_id')
    run_id = request.query_params.get('run_id')

    if not site_id:
        return Response(
            {'error': 'site_id required'},
            status=status.HTTP_400_BAD_REQUEST
        )

    try:
        site = get_object_or_404(Site, id=site_id, account=request.user.account)

        # If no run_id, get current run
        if run_id:
            run = AutomationRun.objects.get(run_id=run_id, site=site)
        else:
            run = AutomationRun.objects.filter(
                site=site,
                status__in=['running', 'paused']
            ).order_by('-started_at').first()

        if not run:
            return Response({
                'run': None,
                'global_progress': None,
                'stages': [],
                'metrics': None
            })

        # Build unified response
        response = self._build_run_progress_response(site, run)
        return Response(response)

    except AutomationRun.DoesNotExist:
        return Response(
            {'error': 'Run not found'},
            status=status.HTTP_404_NOT_FOUND
        )
    except Exception as e:
        return Response(
            {'error': str(e)},
            status=status.HTTP_500_INTERNAL_SERVER_ERROR
        )
def _build_run_progress_response(self, site, run):
    """Build comprehensive progress response for a run"""
    from igny8_core.business.planning.models import Keywords, Clusters, ContentIdeas
    from igny8_core.business.content.models import Tasks, Content, Images
    from django.db.models import Count
    from django.utils import timezone

    initial_snapshot = run.initial_snapshot or {}

    # Helper to get processed count from result
    def get_processed(result, key):
        if not result:
            return 0
        return result.get(key, 0)

    # Helper to get output count from result
    def get_output(result, key):
        if not result:
            return 0
        return result.get(key, 0)

    # Stage-specific key mapping for processed counts
    processed_keys = {
        1: 'keywords_processed',
        2: 'clusters_processed',
        3: 'ideas_processed',
        4: 'tasks_processed',
        5: 'content_processed',
        6: 'images_processed',
        7: 'ready_for_review'
    }

    # Stage-specific key mapping for output counts
    output_keys = {
        1: 'clusters_created',
        2: 'ideas_created',
        3: 'tasks_created',
        4: 'content_created',
        5: 'prompts_created',
        6: 'images_generated',
        7: 'ready_for_review'
    }

    # Build stages array
    stages = []
    total_processed = 0
    total_initial = initial_snapshot.get('total_initial_items', 0)

    stage_names = {
        1: 'Keywords → Clusters',
        2: 'Clusters → Ideas',
        3: 'Ideas → Tasks',
        4: 'Tasks → Content',
        5: 'Content → Image Prompts',
        6: 'Image Prompts → Images',
        7: 'Manual Review Gate'
    }

    stage_types = {
        1: 'AI', 2: 'AI', 3: 'Local', 4: 'AI', 5: 'AI', 6: 'AI', 7: 'Manual'
    }

    for stage_num in range(1, 8):
        result = getattr(run, f'stage_{stage_num}_result', None)
        initial_count = initial_snapshot.get(f'stage_{stage_num}_initial', 0)
        processed = get_processed(result, processed_keys[stage_num])
        output = get_output(result, output_keys[stage_num])

        total_processed += processed

        # Determine stage status
        if run.current_stage > stage_num:
            stage_status = 'completed'
        elif run.current_stage == stage_num:
            stage_status = 'active'
        else:
            stage_status = 'pending'

        # Calculate progress percentage for this stage
        progress = 0
        if initial_count > 0:
            progress = round((processed / initial_count) * 100)
        elif run.current_stage > stage_num:
            progress = 100

        stage_data = {
            'number': stage_num,
            'name': stage_names[stage_num],
            'type': stage_types[stage_num],
            'status': stage_status,
            'input_count': initial_count,
            'output_count': output,
            'processed_count': processed,
            'progress_percentage': min(progress, 100),
            'credits_used': result.get('credits_used', 0) if result else 0,
            'time_elapsed': result.get('time_elapsed', '') if result else '',
        }

        # Add currently_processing for active stage
        if stage_status == 'active':
            try:
                service = AutomationService.from_run_id(run.run_id)
                processing_state = service.get_current_processing_state()
                if processing_state:
                    stage_data['currently_processing'] = processing_state.get('currently_processing', [])
                    stage_data['up_next'] = processing_state.get('up_next', [])
                    stage_data['remaining_count'] = processing_state.get('remaining_count', 0)
            except Exception:
                pass

        stages.append(stage_data)

    # Calculate global progress
    # Stages 1-6 are automation stages, Stage 7 is manual review (not counted)
    # Progress = weighted average of stages 1-6 completion
    global_percentage = 0
    if run.status == 'completed':
        # If run is completed (after Stage 6), show 100%
        global_percentage = 100
    elif run.status in ('cancelled', 'failed'):
        # Keep current progress for cancelled/failed
        if total_initial > 0:
            global_percentage = round((total_processed / total_initial) * 100)
    else:
        # Calculate based on completed stages (1-6 only)
        # Each of the 6 automation stages contributes ~16.67% to total
        completed_stages = min(max(run.current_stage - 1, 0), 6)
        stage_weight = 100 / 6  # Each stage is ~16.67%

        # Base progress from completed stages
        base_progress = completed_stages * stage_weight

        # Add partial progress from current stage
        current_stage_progress = 0
        if run.current_stage <= 6:
            current_result = getattr(run, f'stage_{run.current_stage}_result', None)
            current_initial = initial_snapshot.get(f'stage_{run.current_stage}_initial', 0)
            if current_initial > 0 and current_result:
                processed_key = processed_keys.get(run.current_stage, '')
                current_processed = current_result.get(processed_key, 0)
                current_stage_progress = (current_processed / current_initial) * stage_weight

        global_percentage = round(base_progress + current_stage_progress)

    # Calculate duration
    duration_seconds = 0
    if run.started_at:
        end_time = run.completed_at or timezone.now()
        duration_seconds = int((end_time - run.started_at).total_seconds())

    return {
        'run': {
            'run_id': run.run_id,
            'status': run.status,
            'current_stage': run.current_stage,
            'trigger_type': run.trigger_type,
            'started_at': run.started_at,
            'completed_at': run.completed_at,
            'paused_at': run.paused_at,
        },
        'global_progress': {
            'total_items': total_initial,
            'completed_items': total_processed,
            'percentage': min(global_percentage, 100),
            'current_stage': run.current_stage,
            'total_stages': 7
        },
        'stages': stages,
        'metrics': {
            'credits_used': run.total_credits_used,
            'duration_seconds': duration_seconds,
            'errors': []
        },
        'initial_snapshot': initial_snapshot
    }
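The stage-weighted global percentage in `_build_run_progress_response` reduces to a small pure function. This is a sketch of the same arithmetic for the "still running" branch only (completed/cancelled handling simplified), not the production code.

```python
def global_progress(current_stage: int, run_status: str,
                    current_processed: int, current_initial: int) -> int:
    """Stage-weighted progress: 6 automation stages at ~16.67% each;
    stage 7 (manual review) is excluded from the weighting."""
    if run_status == 'completed':
        return 100
    stage_weight = 100 / 6
    # Fully completed stages contribute their whole weight
    completed = min(max(current_stage - 1, 0), 6)
    # The active stage contributes a processed/initial fraction of one weight
    partial = 0.0
    if current_stage <= 6 and current_initial > 0:
        partial = (current_processed / current_initial) * stage_weight
    return min(round(completed * stage_weight + partial), 100)
```

For example, a run sitting at stage 4 with half its stage-4 queue processed reports 3 full stages (50%) plus half a weight (~8.3%), i.e. 58%.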

@@ -9,14 +9,18 @@ from django.contrib import messages
from django.utils.html import format_html
from unfold.admin import ModelAdmin
from igny8_core.admin.base import AccountAdminMixin, Igny8ModelAdmin
from .models import (
    CreditCostConfig,
    AccountPaymentMethod,
    Invoice,
    Payment,
    CreditPackage,
    PaymentMethodConfig,
)
# NOTE: Most billing models are now registered in modules/billing/admin.py
# This file is kept for reference but all registrations are commented out
# to avoid AlreadyRegistered errors

# from .models import (
#     CreditCostConfig,
#     AccountPaymentMethod,
#     Invoice,
#     Payment,
#     CreditPackage,
#     PaymentMethodConfig,
# )


# CreditCostConfig - DUPLICATE - Registered in modules/billing/admin.py with better features

@@ -47,97 +51,21 @@ from .models import (
# ...existing implementation...


# PaymentMethodConfig and AccountPaymentMethod are kept here as they're not duplicated
# or have minimal implementations that don't conflict
# AccountPaymentMethod - DUPLICATE - Registered in modules/billing/admin.py with AccountAdminMixin
# Commenting out to avoid AlreadyRegistered error
# The version in modules/billing/admin.py is preferred as it includes AccountAdminMixin

from import_export.admin import ExportMixin
from import_export import resources


class AccountPaymentMethodResource(resources.ModelResource):
    """Resource class for exporting Account Payment Methods"""
    class Meta:
        model = AccountPaymentMethod
        fields = ('id', 'display_name', 'type', 'account__name', 'is_default',
                  'is_enabled', 'is_verified', 'country_code', 'created_at')
        export_order = fields


@admin.register(AccountPaymentMethod)
class AccountPaymentMethodAdmin(ExportMixin, Igny8ModelAdmin):
    resource_class = AccountPaymentMethodResource
    list_display = [
        'display_name',
        'type',
        'account',
        'is_default',
        'is_enabled',
        'country_code',
        'is_verified',
        'updated_at',
    ]
    list_filter = ['type', 'is_default', 'is_enabled', 'is_verified', 'country_code']
    search_fields = ['display_name', 'account__name', 'account__id']
    readonly_fields = ['created_at', 'updated_at']
    actions = [
        'bulk_enable',
        'bulk_disable',
        'bulk_set_default',
        'bulk_delete_methods',
    ]
    fieldsets = (
        ('Payment Method', {
            'fields': ('account', 'type', 'display_name', 'is_default', 'is_enabled', 'is_verified', 'country_code')
        }),
        ('Instructions / Metadata', {
            'fields': ('instructions', 'metadata')
        }),
        ('Timestamps', {
            'fields': ('created_at', 'updated_at'),
            'classes': ('collapse',)
        }),
    )

    def bulk_enable(self, request, queryset):
        updated = queryset.update(is_enabled=True)
        self.message_user(request, f'{updated} payment method(s) enabled.', messages.SUCCESS)
    bulk_enable.short_description = 'Enable selected payment methods'

    def bulk_disable(self, request, queryset):
        updated = queryset.update(is_enabled=False)
        self.message_user(request, f'{updated} payment method(s) disabled.', messages.SUCCESS)
    bulk_disable.short_description = 'Disable selected payment methods'

    def bulk_set_default(self, request, queryset):
        from django import forms

        if 'apply' in request.POST:
            method_id = request.POST.get('payment_method')
            if method_id:
                method = AccountPaymentMethod.objects.get(pk=method_id)
                # Unset all others for this account
                AccountPaymentMethod.objects.filter(account=method.account).update(is_default=False)
                method.is_default = True
                method.save()
                self.message_user(request, f'{method.display_name} set as default for {method.account.name}.', messages.SUCCESS)
            return

        class PaymentMethodForm(forms.Form):
            payment_method = forms.ModelChoiceField(
                queryset=queryset,
                label="Select Payment Method to Set as Default"
            )

        from django.shortcuts import render
        return render(request, 'admin/bulk_action_form.html', {
            'title': 'Set Default Payment Method',
            'queryset': queryset,
            'form': PaymentMethodForm(),
            'action': 'bulk_set_default',
        })
    bulk_set_default.short_description = 'Set as default'

    def bulk_delete_methods(self, request, queryset):
        count = queryset.count()
        queryset.delete()
        self.message_user(request, f'{count} payment method(s) deleted.', messages.SUCCESS)
    bulk_delete_methods.short_description = 'Delete selected payment methods'
# from import_export.admin import ExportMixin
# from import_export import resources
#
# class AccountPaymentMethodResource(resources.ModelResource):
#     """Resource class for exporting Account Payment Methods"""
#     class Meta:
#         model = AccountPaymentMethod
#         fields = ('id', 'display_name', 'type', 'account__name', 'is_default',
#               'is_enabled', 'is_verified', 'country_code', 'created_at')
#         export_order = fields
#
# @admin.register(AccountPaymentMethod)
# class AccountPaymentMethodAdmin(ExportMixin, Igny8ModelAdmin):
#     ... (see modules/billing/admin.py for active registration)
@@ -192,22 +192,32 @@ class BillingViewSet(viewsets.GenericViewSet):
@action(detail=False, methods=['get'], url_path='payment-methods', permission_classes=[AllowAny])
def list_payment_methods(self, request):
    """
    Get available payment methods for a specific country.
    Get available payment methods filtered by country code.
    Public endpoint - only returns enabled payment methods.
    Does not expose sensitive configuration details.

    Query params:
        country: ISO 2-letter country code (default: 'US')
    Query Parameters:
    - country_code: ISO 2-letter country code (e.g., 'US', 'PK')

    Returns payment methods filtered by country.
    Returns methods for:
    1. Specified country (country_code=XX)
    2. Global methods (country_code='*')
    """
    country = request.GET.get('country', 'US').upper()
    country_code = request.query_params.get('country_code', '').upper()

    # Get country-specific methods
    methods = PaymentMethodConfig.objects.filter(
        country_code=country,
        is_enabled=True
    ).order_by('sort_order')
    if country_code:
        # Filter by specific country OR global methods
        methods = PaymentMethodConfig.objects.filter(
            is_enabled=True
        ).filter(
            Q(country_code=country_code) | Q(country_code='*')
        ).order_by('sort_order')
    else:
        # No country specified - return only global methods
        methods = PaymentMethodConfig.objects.filter(
            is_enabled=True,
            country_code='*'
        ).order_by('sort_order')

    # Serialize using the proper serializer
    serializer = PaymentMethodConfigSerializer(methods, many=True)
|
||||
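The country-filter rule in this hunk can be sketched in plain Python. This is a hypothetical stand-in for the ORM query; `match_payment_methods` and the dict fields are illustrative names, not part of the codebase:

```python
def match_payment_methods(configs, country_code=''):
    """Mirror of Q(country_code=code) | Q(country_code='*') over plain dicts."""
    enabled = [c for c in configs if c['is_enabled']]
    if country_code:
        # Specific country OR the global wildcard
        selected = [c for c in enabled if c['country_code'] in (country_code.upper(), '*')]
    else:
        # No country supplied: only global methods are returned
        selected = [c for c in enabled if c['country_code'] == '*']
    return sorted(selected, key=lambda c: c['sort_order'])

configs = [
    {'country_code': 'PK', 'is_enabled': True, 'sort_order': 2},
    {'country_code': '*', 'is_enabled': True, 'sort_order': 1},
    {'country_code': 'US', 'is_enabled': False, 'sort_order': 0},
]
```

Note that a disabled method never matches, regardless of country, and results come back in `sort_order`, matching the queryset above.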
@@ -609,7 +619,7 @@ class BillingViewSet(viewsets.GenericViewSet):

 class InvoiceViewSet(AccountModelViewSet):
     """ViewSet for user-facing invoices"""
-    queryset = Invoice.objects.all().select_related('account')
+    queryset = Invoice.objects.all().select_related('account', 'subscription', 'subscription__plan')
     permission_classes = [IsAuthenticatedAndActive, HasTenantAccess]
     pagination_class = CustomPageNumberPagination

@@ -620,6 +630,43 @@ class InvoiceViewSet(AccountModelViewSet):
             queryset = queryset.filter(account=self.request.account)
         return queryset.order_by('-invoice_date', '-created_at')

+    def _serialize_invoice(self, invoice):
+        """Serialize an invoice with all needed fields"""
+        # Build subscription data if exists
+        subscription_data = None
+        if invoice.subscription:
+            plan_data = None
+            if invoice.subscription.plan:
+                plan_data = {
+                    'id': invoice.subscription.plan.id,
+                    'name': invoice.subscription.plan.name,
+                    'slug': invoice.subscription.plan.slug,
+                }
+            subscription_data = {
+                'id': invoice.subscription.id,
+                'plan': plan_data,
+            }
+
+        return {
+            'id': invoice.id,
+            'invoice_number': invoice.invoice_number,
+            'status': invoice.status,
+            'total': str(invoice.total),  # Alias for compatibility
+            'total_amount': str(invoice.total),
+            'subtotal': str(invoice.subtotal),
+            'tax_amount': str(invoice.tax),
+            'currency': invoice.currency,
+            'invoice_date': invoice.invoice_date.isoformat(),
+            'due_date': invoice.due_date.isoformat(),
+            'paid_at': invoice.paid_at.isoformat() if invoice.paid_at else None,
+            'line_items': invoice.line_items,
+            'billing_email': invoice.billing_email,
+            'notes': invoice.notes,
+            'payment_method': invoice.payment_method,
+            'subscription': subscription_data,
+            'created_at': invoice.created_at.isoformat(),
+        }
+
     def list(self, request):
         """List invoices for current account"""
         queryset = self.get_queryset()
@@ -633,25 +680,7 @@ class InvoiceViewSet(AccountModelViewSet):
         page = paginator.paginate_queryset(queryset, request)

         # Serialize invoice data
-        results = []
-        for invoice in (page if page is not None else []):
-            results.append({
-                'id': invoice.id,
-                'invoice_number': invoice.invoice_number,
-                'status': invoice.status,
-                'total': str(invoice.total),  # Alias for compatibility
-                'total_amount': str(invoice.total),
-                'subtotal': str(invoice.subtotal),
-                'tax_amount': str(invoice.tax),
-                'currency': invoice.currency,
-                'invoice_date': invoice.invoice_date.isoformat(),
-                'due_date': invoice.due_date.isoformat(),
-                'paid_at': invoice.paid_at.isoformat() if invoice.paid_at else None,
-                'line_items': invoice.line_items,
-                'billing_email': invoice.billing_email,
-                'notes': invoice.notes,
-                'created_at': invoice.created_at.isoformat(),
-            })
+        results = [self._serialize_invoice(invoice) for invoice in (page if page is not None else [])]

         return paginated_response(
             {'count': paginator.page.paginator.count, 'next': paginator.get_next_link(), 'previous': paginator.get_previous_link(), 'results': results},
@@ -662,24 +691,7 @@ class InvoiceViewSet(AccountModelViewSet):
         """Get invoice detail"""
         try:
             invoice = self.get_queryset().get(pk=pk)
-            data = {
-                'id': invoice.id,
-                'invoice_number': invoice.invoice_number,
-                'status': invoice.status,
-                'total': str(invoice.total),  # Alias for compatibility
-                'total_amount': str(invoice.total),
-                'subtotal': str(invoice.subtotal),
-                'tax_amount': str(invoice.tax),
-                'currency': invoice.currency,
-                'invoice_date': invoice.invoice_date.isoformat(),
-                'due_date': invoice.due_date.isoformat(),
-                'paid_at': invoice.paid_at.isoformat() if invoice.paid_at else None,
-                'line_items': invoice.line_items,
-                'billing_email': invoice.billing_email,
-                'notes': invoice.notes,
-                'created_at': invoice.created_at.isoformat(),
-            }
-            return success_response(data=data, request=request)
+            return success_response(data=self._serialize_invoice(invoice), request=request)
         except Invoice.DoesNotExist:
             return error_response(error='Invoice not found', status_code=404, request=request)

@@ -687,14 +699,38 @@ class InvoiceViewSet(AccountModelViewSet):
     def download_pdf(self, request, pk=None):
         """Download invoice PDF"""
         try:
-            invoice = self.get_queryset().get(pk=pk)
+            invoice = self.get_queryset().select_related(
+                'account', 'account__owner', 'subscription', 'subscription__plan'
+            ).get(pk=pk)
             pdf_bytes = InvoiceService.generate_pdf(invoice)

+            # Build descriptive filename
+            plan_name = ''
+            if invoice.subscription and invoice.subscription.plan:
+                plan_name = invoice.subscription.plan.name.replace(' ', '-')
+            elif invoice.metadata and 'plan_name' in invoice.metadata:
+                plan_name = invoice.metadata.get('plan_name', '').replace(' ', '-')
+
+            date_str = invoice.invoice_date.strftime('%Y-%m-%d') if invoice.invoice_date else ''
+
+            filename_parts = ['IGNY8', 'Invoice', invoice.invoice_number]
+            if plan_name:
+                filename_parts.append(plan_name)
+            if date_str:
+                filename_parts.append(date_str)
+
+            filename = '-'.join(filename_parts) + '.pdf'
+
             response = HttpResponse(pdf_bytes, content_type='application/pdf')
-            response['Content-Disposition'] = f'attachment; filename="invoice-{invoice.invoice_number}.pdf"'
+            response['Content-Disposition'] = f'attachment; filename="{filename}"'
             return response
         except Invoice.DoesNotExist:
             return error_response(error='Invoice not found', status_code=404, request=request)
+        except Exception as e:
+            import logging
+            logger = logging.getLogger(__name__)
+            logger.error(f'PDF generation failed for invoice {pk}: {str(e)}', exc_info=True)
+            return error_response(error=f'Failed to generate PDF: {str(e)}', status_code=500, request=request)


 class PaymentViewSet(AccountModelViewSet):
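The filename assembly added in the hunk above can be exercised in isolation. `build_invoice_filename` is a hypothetical extraction of that logic for illustration, not a function in the codebase:

```python
def build_invoice_filename(invoice_number, plan_name='', date_str=''):
    """Join 'IGNY8-Invoice-<number>[-<plan>][-<date>].pdf', skipping empty parts."""
    parts = ['IGNY8', 'Invoice', invoice_number]
    if plan_name:
        # Spaces in plan names would break Content-Disposition, so dash-join them
        parts.append(plan_name.replace(' ', '-'))
    if date_str:
        parts.append(date_str)
    return '-'.join(parts) + '.pdf'
```

An invoice with no subscription and no invoice_date simply degrades to `IGNY8-Invoice-<number>.pdf`.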
@@ -769,6 +805,7 @@ class PaymentViewSet(AccountModelViewSet):
         payment_method = request.data.get('payment_method', 'bank_transfer')
         reference = request.data.get('reference', '')
         notes = request.data.get('notes', '')
+        currency = request.data.get('currency', 'USD')

         if not amount:
             return error_response(error='Amount is required', status_code=400, request=request)
@@ -778,18 +815,30 @@ class PaymentViewSet(AccountModelViewSet):
         invoice = None
         if invoice_id:
             invoice = Invoice.objects.get(id=invoice_id, account=account)
+            # Use invoice currency if not explicitly provided
+            if not request.data.get('currency') and invoice:
+                currency = invoice.currency

         payment = Payment.objects.create(
             account=account,
             invoice=invoice,
             amount=amount,
-            currency='USD',
+            currency=currency,
             payment_method=payment_method,
             status='pending_approval',
             manual_reference=reference,
             manual_notes=notes,
         )

+        # Send payment confirmation email
+        try:
+            from igny8_core.business.billing.services.email_service import BillingEmailService
+            BillingEmailService.send_payment_confirmation_email(payment, account)
+        except Exception as e:
+            import logging
+            logger = logging.getLogger(__name__)
+            logger.error(f'Failed to send payment confirmation email: {str(e)}')
+
         return success_response(
             data={'id': payment.id, 'status': payment.status},
             message='Manual payment submitted for approval',
@@ -833,11 +882,16 @@ class CreditPackageViewSet(viewsets.ReadOnlyModelViewSet):


 class AccountPaymentMethodViewSet(AccountModelViewSet):
-    """ViewSet for account payment methods"""
+    """ViewSet for account payment methods - Full CRUD support"""
     queryset = AccountPaymentMethod.objects.all()
     permission_classes = [IsAuthenticatedAndActive, HasTenantAccess]
     pagination_class = CustomPageNumberPagination

+    def get_serializer_class(self):
+        """Return serializer class"""
+        from igny8_core.modules.billing.serializers import AccountPaymentMethodSerializer
+        return AccountPaymentMethodSerializer
+
     def get_queryset(self):
         """Filter payment methods by account"""
         queryset = super().get_queryset()
@@ -845,6 +899,15 @@ class AccountPaymentMethodViewSet(AccountModelViewSet):
             queryset = queryset.filter(account=self.request.account)
         return queryset.order_by('-is_default', 'type')

+    def get_serializer_context(self):
+        """Add account to serializer context"""
+        context = super().get_serializer_context()
+        account = getattr(self.request, 'account', None)
+        if not account and hasattr(self.request, 'user') and self.request.user:
+            account = getattr(self.request.user, 'account', None)
+        context['account'] = account
+        return context
+
     def list(self, request):
         """List payment methods for current account"""
         queryset = self.get_queryset()
@@ -854,18 +917,108 @@ class AccountPaymentMethodViewSet(AccountModelViewSet):
         results = []
         for method in (page if page is not None else []):
             results.append({
-                'id': str(method.id),
+                'id': method.id,
                 'type': method.type,
                 'display_name': method.display_name,
                 'is_default': method.is_default,
-                'is_enabled': method.is_enabled if hasattr(method, 'is_enabled') else True,
+                'is_enabled': method.is_enabled,
                 'is_verified': method.is_verified,
                 'instructions': method.instructions,
                 'metadata': method.metadata,
                 'created_at': method.created_at.isoformat() if method.created_at else None,
                 'updated_at': method.updated_at.isoformat() if method.updated_at else None,
             })

         return paginated_response(
             {'count': paginator.page.paginator.count, 'next': paginator.get_next_link(), 'previous': paginator.get_previous_link(), 'results': results},
             request=request
         )

+    def create(self, request, *args, **kwargs):
+        """Create a new payment method"""
+        serializer = self.get_serializer(data=request.data)
+        try:
+            serializer.is_valid(raise_exception=True)
+            instance = serializer.save()
+            return success_response(
+                data={
+                    'id': instance.id,
+                    'type': instance.type,
+                    'display_name': instance.display_name,
+                    'is_default': instance.is_default,
+                    'is_enabled': instance.is_enabled,
+                    'is_verified': instance.is_verified,
+                    'instructions': instance.instructions,
+                },
+                message='Payment method created successfully',
+                request=request,
+                status_code=status.HTTP_201_CREATED
+            )
+        except Exception as e:
+            return error_response(
+                error=str(e),
+                status_code=status.HTTP_400_BAD_REQUEST,
+                request=request
+            )
+
+    def update(self, request, *args, **kwargs):
+        """Update a payment method"""
+        partial = kwargs.pop('partial', False)
+        instance = self.get_object()
+        serializer = self.get_serializer(instance, data=request.data, partial=partial)
+        try:
+            serializer.is_valid(raise_exception=True)
+            instance = serializer.save()
+            return success_response(
+                data={
+                    'id': instance.id,
+                    'type': instance.type,
+                    'display_name': instance.display_name,
+                    'is_default': instance.is_default,
+                    'is_enabled': instance.is_enabled,
+                    'is_verified': instance.is_verified,
+                    'instructions': instance.instructions,
+                },
+                message='Payment method updated successfully',
+                request=request
+            )
+        except Exception as e:
+            return error_response(
+                error=str(e),
+                status_code=status.HTTP_400_BAD_REQUEST,
+                request=request
+            )
+
+    def destroy(self, request, *args, **kwargs):
+        """Delete a payment method"""
+        try:
+            instance = self.get_object()
+
+            # Don't allow deleting the only default payment method
+            if instance.is_default:
+                other_methods = AccountPaymentMethod.objects.filter(
+                    account=instance.account
+                ).exclude(pk=instance.pk).count()
+                if other_methods == 0:
+                    return error_response(
+                        error='Cannot delete the only payment method',
+                        status_code=status.HTTP_400_BAD_REQUEST,
+                        request=request
+                    )
+
+            instance.delete()
+            return success_response(
+                data=None,
+                message='Payment method deleted successfully',
+                request=request,
+                status_code=status.HTTP_204_NO_CONTENT
+            )
+        except Exception as e:
+            return error_response(
+                error=str(e),
+                status_code=status.HTTP_400_BAD_REQUEST,
+                request=request
+            )

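The delete guard in `destroy()` above reduces to a small predicate. This sketch is illustrative only; the name and signature are invented for the example:

```python
def can_delete_payment_method(is_default, account_method_count):
    """A default method may only be deleted while another method remains on the account."""
    if is_default and account_method_count <= 1:
        return False
    return True
```

Non-default methods are always deletable; only the last remaining default is protected.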
# ============================================================================
@@ -1,6 +1,9 @@
 """
 Management command to backfill usage tracking for existing content.
 Usage: python manage.py backfill_usage [account_id]
+
+NOTE: Since the simplification of limits (Jan 2026), this command only
+tracks Ahrefs queries. All other usage is tracked via CreditUsageLog.
 """
 from django.core.management.base import BaseCommand
 from django.apps import apps
@@ -9,7 +12,7 @@ from igny8_core.auth.models import Account


 class Command(BaseCommand):
-    help = 'Backfill usage tracking for existing content'
+    help = 'Backfill usage tracking for existing content (Ahrefs queries only)'

     def add_arguments(self, parser):
         parser.add_argument(
@@ -30,10 +33,6 @@ class Command(BaseCommand):
         else:
             accounts = Account.objects.filter(plan__isnull=False).select_related('plan')

-        ContentIdeas = apps.get_model('planner', 'ContentIdeas')
-        Content = apps.get_model('writer', 'Content')
-        Images = apps.get_model('writer', 'Images')
-
         total_accounts = accounts.count()
         self.stdout.write(f'Processing {total_accounts} account(s)...\n')

@@ -43,45 +42,14 @@ class Command(BaseCommand):
             self.stdout.write(f'Plan: {account.plan.name if account.plan else "No Plan"}')
             self.stdout.write('=' * 60)

-            # Count content ideas
-            ideas_count = ContentIdeas.objects.filter(account=account).count()
-            self.stdout.write(f'Content Ideas: {ideas_count}')
+            # Ahrefs queries are tracked in CreditUsageLog with operation_type='ahrefs_query'
+            # We don't backfill these as they should be tracked in real-time going forward
+            # This command is primarily for verification

-            # Count content words
-            from django.db.models import Sum
-            total_words = Content.objects.filter(account=account).aggregate(
-                total=Sum('word_count')
-            )['total'] or 0
-            self.stdout.write(f'Content Words: {total_words}')
-
-            # Count images
-            total_images = Images.objects.filter(account=account).count()
-            images_with_prompts = Images.objects.filter(
-                account=account, prompt__isnull=False
-            ).exclude(prompt='').count()
-            self.stdout.write(f'Total Images: {total_images}')
-            self.stdout.write(f'Images with Prompts: {images_with_prompts}')
-
-            # Update account usage fields
-            with transaction.atomic():
-                account.usage_content_ideas = ideas_count
-                account.usage_content_words = total_words
-                account.usage_images_basic = total_images
-                account.usage_images_premium = 0  # Premium not implemented yet
-                account.usage_image_prompts = images_with_prompts
-                account.save(update_fields=[
-                    'usage_content_ideas', 'usage_content_words',
-                    'usage_images_basic', 'usage_images_premium', 'usage_image_prompts',
-                    'updated_at'
-                ])
-
-            self.stdout.write(self.style.SUCCESS('\n✅ Updated usage tracking:'))
-            self.stdout.write(f'  usage_content_ideas: {account.usage_content_ideas}')
-            self.stdout.write(f'  usage_content_words: {account.usage_content_words}')
-            self.stdout.write(f'  usage_images_basic: {account.usage_images_basic}')
-            self.stdout.write(f'  usage_images_premium: {account.usage_images_premium}')
-            self.stdout.write(f'  usage_image_prompts: {account.usage_image_prompts}\n')
+            self.stdout.write(f'Ahrefs queries used this month: {account.usage_ahrefs_queries}')
+            self.stdout.write(self.style.SUCCESS('\n✅ Verified usage tracking'))
+            self.stdout.write(f'  usage_ahrefs_queries: {account.usage_ahrefs_queries}\n')

         self.stdout.write('=' * 60)
-        self.stdout.write(self.style.SUCCESS('✅ Backfill complete!'))
+        self.stdout.write(self.style.SUCCESS('✅ Verification complete!'))
         self.stdout.write('=' * 60)

@@ -0,0 +1,48 @@
+"""
+Migration: Simplify payment methods to global (remove country-specific filtering)
+
+This migration:
+1. Updates existing PaymentMethodConfig records to use country_code='*' (global)
+2. Removes duplicate payment methods per country, keeping only one global config per method
+"""
+from django.db import migrations
+
+
+def migrate_to_global_payment_methods(apps, schema_editor):
+    """
+    Convert country-specific payment methods to global.
+    For each payment_method type, keep only one configuration with country_code='*'
+    """
+    PaymentMethodConfig = apps.get_model('billing', 'PaymentMethodConfig')
+
+    # Get all unique payment methods
+    payment_methods = PaymentMethodConfig.objects.values_list('payment_method', flat=True).distinct()
+
+    for method in payment_methods:
+        # Get all configs for this payment method
+        configs = PaymentMethodConfig.objects.filter(payment_method=method).order_by('sort_order', 'id')
+
+        if configs.exists():
+            # Keep the first one and make it global
+            first_config = configs.first()
+            first_config.country_code = '*'
+            first_config.save(update_fields=['country_code'])
+
+            # Delete duplicates (other country-specific versions)
+            configs.exclude(id=first_config.id).delete()
+
+
+def reverse_migration(apps, schema_editor):
+    """Reverse is a no-op - can't restore original country codes"""
+    pass
+
+
+class Migration(migrations.Migration):
+
+    dependencies = [
+        ('billing', '0007_simplify_payment_statuses'),
+    ]
+
+    operations = [
+        migrations.RunPython(migrate_to_global_payment_methods, reverse_migration),
+    ]
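The data migration's dedupe step can be modelled over plain dicts. This is a hypothetical, in-memory version of `migrate_to_global_payment_methods` (names and fields mirror the migration, but nothing here touches the ORM):

```python
def collapse_to_global(configs):
    """Keep the first config per payment_method (ordered by sort_order, id), marked global."""
    kept, seen = [], set()
    for c in sorted(configs, key=lambda c: (c['sort_order'], c['id'])):
        if c['payment_method'] not in seen:
            seen.add(c['payment_method'])
            # Surviving row becomes the single global ('*') config
            kept.append({**c, 'country_code': '*'})
    return kept

rows = [
    {'id': 2, 'payment_method': 'bank_transfer', 'country_code': 'PK', 'sort_order': 1},
    {'id': 1, 'payment_method': 'bank_transfer', 'country_code': 'US', 'sort_order': 0},
    {'id': 3, 'payment_method': 'card', 'country_code': 'US', 'sort_order': 0},
]
```

As in the migration, the `order_by('sort_order', 'id')` tiebreak decides which country-specific row survives as the global one.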
@@ -0,0 +1,359 @@
+"""
+Migration: Seed AIModelConfig from constants.py
+
+This migration populates the AIModelConfig table with the current models
+from ai/constants.py, enabling database-driven model configuration.
+"""
+from decimal import Decimal
+from django.db import migrations
+
+
+def seed_ai_models(apps, schema_editor):
+    """
+    Seed AIModelConfig with models from constants.py
+    """
+    AIModelConfig = apps.get_model('billing', 'AIModelConfig')
+
+    # Text Models (from MODEL_RATES)
+    text_models = [
+        {
+            'model_name': 'gpt-4.1',
+            'display_name': 'GPT-4.1 - Balanced Performance',
+            'model_type': 'text',
+            'provider': 'openai',
+            'input_cost_per_1m': Decimal('2.00'),
+            'output_cost_per_1m': Decimal('8.00'),
+            'context_window': 128000,
+            'max_output_tokens': 16384,
+            'supports_json_mode': True,
+            'supports_vision': True,
+            'supports_function_calling': True,
+            'is_active': True,
+            'is_default': True,  # Default text model
+            'sort_order': 1,
+            'description': 'Default model - good balance of cost and capability',
+        },
+        {
+            'model_name': 'gpt-4o-mini',
+            'display_name': 'GPT-4o Mini - Fast & Affordable',
+            'model_type': 'text',
+            'provider': 'openai',
+            'input_cost_per_1m': Decimal('0.15'),
+            'output_cost_per_1m': Decimal('0.60'),
+            'context_window': 128000,
+            'max_output_tokens': 16384,
+            'supports_json_mode': True,
+            'supports_vision': True,
+            'supports_function_calling': True,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 2,
+            'description': 'Best for high-volume tasks where cost matters',
+        },
+        {
+            'model_name': 'gpt-4o',
+            'display_name': 'GPT-4o - High Quality',
+            'model_type': 'text',
+            'provider': 'openai',
+            'input_cost_per_1m': Decimal('2.50'),
+            'output_cost_per_1m': Decimal('10.00'),
+            'context_window': 128000,
+            'max_output_tokens': 16384,
+            'supports_json_mode': True,
+            'supports_vision': True,
+            'supports_function_calling': True,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 3,
+            'description': 'Premium model for complex tasks requiring best quality',
+        },
+        {
+            'model_name': 'gpt-5.1',
+            'display_name': 'GPT-5.1 - Latest Generation',
+            'model_type': 'text',
+            'provider': 'openai',
+            'input_cost_per_1m': Decimal('1.25'),
+            'output_cost_per_1m': Decimal('10.00'),
+            'context_window': 200000,
+            'max_output_tokens': 32768,
+            'supports_json_mode': True,
+            'supports_vision': True,
+            'supports_function_calling': True,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 4,
+            'description': 'Next-gen model with improved reasoning',
+        },
+        {
+            'model_name': 'gpt-5.2',
+            'display_name': 'GPT-5.2 - Most Advanced',
+            'model_type': 'text',
+            'provider': 'openai',
+            'input_cost_per_1m': Decimal('1.75'),
+            'output_cost_per_1m': Decimal('14.00'),
+            'context_window': 200000,
+            'max_output_tokens': 65536,
+            'supports_json_mode': True,
+            'supports_vision': True,
+            'supports_function_calling': True,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 5,
+            'description': 'Most capable model for enterprise-grade tasks',
+        },
+    ]
+
+    # Image Models (from IMAGE_MODEL_RATES)
+    image_models = [
+        {
+            'model_name': 'dall-e-3',
+            'display_name': 'DALL-E 3 - Premium Images',
+            'model_type': 'image',
+            'provider': 'openai',
+            'cost_per_image': Decimal('0.040'),
+            'valid_sizes': ['1024x1024', '1024x1792', '1792x1024'],
+            'supports_json_mode': False,
+            'supports_vision': False,
+            'supports_function_calling': False,
+            'is_active': True,
+            'is_default': True,  # Default image model
+            'sort_order': 1,
+            'description': 'Best quality image generation, good for hero images and marketing',
+        },
+        {
+            'model_name': 'dall-e-2',
+            'display_name': 'DALL-E 2 - Standard Images',
+            'model_type': 'image',
+            'provider': 'openai',
+            'cost_per_image': Decimal('0.020'),
+            'valid_sizes': ['256x256', '512x512', '1024x1024'],
+            'supports_json_mode': False,
+            'supports_vision': False,
+            'supports_function_calling': False,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 2,
+            'description': 'Lower cost option for bulk image generation',
+        },
+        {
+            'model_name': 'gpt-image-1',
+            'display_name': 'GPT Image 1 - Advanced',
+            'model_type': 'image',
+            'provider': 'openai',
+            'cost_per_image': Decimal('0.042'),
+            'valid_sizes': ['1024x1024', '1024x1792', '1792x1024'],
+            'supports_json_mode': False,
+            'supports_vision': False,
+            'supports_function_calling': False,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 3,
+            'description': 'Advanced image model with enhanced capabilities',
+        },
+        {
+            'model_name': 'gpt-image-1-mini',
+            'display_name': 'GPT Image 1 Mini - Fast',
+            'model_type': 'image',
+            'provider': 'openai',
+            'cost_per_image': Decimal('0.011'),
+            'valid_sizes': ['1024x1024'],
+            'supports_json_mode': False,
+            'supports_vision': False,
+            'supports_function_calling': False,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 4,
+            'description': 'Fastest and most affordable image model',
+        },
+    ]
+
+    # Runware Image Models (from existing integration)
+    runware_models = [
+        {
+            'model_name': 'runware:100@1',
+            'display_name': 'Runware Standard',
+            'model_type': 'image',
+            'provider': 'runware',
+            'cost_per_image': Decimal('0.008'),
+            'valid_sizes': ['512x512', '768x768', '1024x1024'],
+            'supports_json_mode': False,
+            'supports_vision': False,
+            'supports_function_calling': False,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 10,
+            'description': 'Runware image generation - most affordable',
+        },
+    ]
+
+    # Bria AI Image Models
+    bria_models = [
+        {
+            'model_name': 'bria-2.3',
+            'display_name': 'Bria 2.3 High Quality',
+            'model_type': 'image',
+            'provider': 'bria',
+            'cost_per_image': Decimal('0.015'),
+            'valid_sizes': ['512x512', '768x768', '1024x1024', '1024x1792', '1792x1024'],
+            'supports_json_mode': False,
+            'supports_vision': False,
+            'supports_function_calling': False,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 11,
+            'description': 'Bria 2.3 - High quality image generation',
+        },
+        {
+            'model_name': 'bria-2.3-fast',
+            'display_name': 'Bria 2.3 Fast',
+            'model_type': 'image',
+            'provider': 'bria',
+            'cost_per_image': Decimal('0.010'),
+            'valid_sizes': ['512x512', '768x768', '1024x1024'],
+            'supports_json_mode': False,
+            'supports_vision': False,
+            'supports_function_calling': False,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 12,
+            'description': 'Bria 2.3 Fast - Quick generation, lower cost',
+        },
+        {
+            'model_name': 'bria-2.2',
+            'display_name': 'Bria 2.2 Standard',
+            'model_type': 'image',
+            'provider': 'bria',
+            'cost_per_image': Decimal('0.012'),
+            'valid_sizes': ['512x512', '768x768', '1024x1024'],
+            'supports_json_mode': False,
+            'supports_vision': False,
+            'supports_function_calling': False,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 13,
+            'description': 'Bria 2.2 - Standard image generation',
+        },
+    ]
+
+    # Anthropic Claude Text Models
+    anthropic_models = [
+        {
+            'model_name': 'claude-3-5-sonnet-20241022',
+            'display_name': 'Claude 3.5 Sonnet (Latest)',
+            'model_type': 'text',
+            'provider': 'anthropic',
+            'input_cost_per_1m': Decimal('3.00'),
+            'output_cost_per_1m': Decimal('15.00'),
+            'context_window': 200000,
+            'max_output_tokens': 8192,
+            'supports_json_mode': True,
+            'supports_vision': True,
+            'supports_function_calling': True,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 20,
+            'description': 'Claude 3.5 Sonnet - Best for most tasks, excellent reasoning',
+        },
+        {
+            'model_name': 'claude-3-5-haiku-20241022',
+            'display_name': 'Claude 3.5 Haiku (Fast)',
+            'model_type': 'text',
+            'provider': 'anthropic',
+            'input_cost_per_1m': Decimal('1.00'),
+            'output_cost_per_1m': Decimal('5.00'),
+            'context_window': 200000,
+            'max_output_tokens': 8192,
+            'supports_json_mode': True,
+            'supports_vision': True,
+            'supports_function_calling': True,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 21,
+            'description': 'Claude 3.5 Haiku - Fast and affordable',
+        },
+        {
+            'model_name': 'claude-3-opus-20240229',
+            'display_name': 'Claude 3 Opus',
+            'model_type': 'text',
+            'provider': 'anthropic',
+            'input_cost_per_1m': Decimal('15.00'),
+            'output_cost_per_1m': Decimal('75.00'),
+            'context_window': 200000,
+            'max_output_tokens': 4096,
+            'supports_json_mode': True,
+            'supports_vision': True,
+            'supports_function_calling': True,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 22,
+            'description': 'Claude 3 Opus - Most capable Claude model',
+        },
+        {
+            'model_name': 'claude-3-sonnet-20240229',
+            'display_name': 'Claude 3 Sonnet',
+            'model_type': 'text',
+            'provider': 'anthropic',
+            'input_cost_per_1m': Decimal('3.00'),
+            'output_cost_per_1m': Decimal('15.00'),
+            'context_window': 200000,
+            'max_output_tokens': 4096,
+            'supports_json_mode': True,
+            'supports_vision': True,
+            'supports_function_calling': True,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 23,
+            'description': 'Claude 3 Sonnet - Balanced performance and cost',
+        },
+        {
+            'model_name': 'claude-3-haiku-20240307',
+            'display_name': 'Claude 3 Haiku',
+            'model_type': 'text',
+            'provider': 'anthropic',
+            'input_cost_per_1m': Decimal('0.25'),
+            'output_cost_per_1m': Decimal('1.25'),
+            'context_window': 200000,
+            'max_output_tokens': 4096,
+            'supports_json_mode': True,
+            'supports_vision': True,
+            'supports_function_calling': True,
+            'is_active': True,
+            'is_default': False,
+            'sort_order': 24,
+            'description': 'Claude 3 Haiku - Most affordable Claude model',
+        },
+    ]
+
+    # Create all models
+    all_models = text_models + image_models + runware_models + bria_models + anthropic_models
+
+    for model_data in all_models:
+        AIModelConfig.objects.update_or_create(
+            model_name=model_data['model_name'],
+            defaults=model_data
+        )
+
+
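The `update_or_create` loop above makes the seed idempotent: re-running the migration updates existing rows instead of duplicating them. A dict-based sketch of that semantic (hypothetical helper, not part of the migration):

```python
def upsert_models(table, models):
    """Mirror update_or_create keyed on model_name: update if present, create otherwise."""
    for m in models:
        table[m['model_name']] = {**table.get(m['model_name'], {}), **m}
    return table

seed = [{'model_name': 'gpt-4.1', 'is_default': True, 'sort_order': 1}]
table = upsert_models({}, seed)
table = upsert_models(table, seed)  # applying the seed twice leaves one row
```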
def reverse_migration(apps, schema_editor):
|
||||
"""Remove seeded models"""
|
||||
AIModelConfig = apps.get_model('billing', 'AIModelConfig')
|
||||
seeded_models = [
|
||||
'gpt-4.1', 'gpt-4o-mini', 'gpt-4o', 'gpt-5.1', 'gpt-5.2',
|
||||
'dall-e-3', 'dall-e-2', 'gpt-image-1', 'gpt-image-1-mini',
|
||||
'runware:100@1',
|
||||
'bria-2.3', 'bria-2.3-fast', 'bria-2.2',
|
||||
'claude-3-5-sonnet-20241022', 'claude-3-5-haiku-20241022',
|
||||
'claude-3-opus-20240229', 'claude-3-sonnet-20240229', 'claude-3-haiku-20240307'
|
||||
]
|
||||
AIModelConfig.objects.filter(model_name__in=seeded_models).delete()
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('billing', '0008_global_payment_methods'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.RunPython(seed_ai_models, reverse_migration),
|
||||
]
|
||||
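Seeding through `update_or_create` keyed on `model_name` is what makes this data migration safely re-runnable: a second run updates existing rows instead of raising an integrity error. A minimal Django-free sketch of that upsert semantics (the dict-backed `table` below is purely illustrative, standing in for the database, not the real ORM):

```python
# Illustrative upsert semantics of objects.update_or_create(model_name=..., defaults=...).
# The in-memory `table` dict is a hypothetical stand-in for the database table.
def update_or_create(table, model_name, defaults):
    created = model_name not in table
    row = table.setdefault(model_name, {'model_name': model_name})
    row.update(defaults)  # fills a freshly created row, or updates the existing one
    return row, created

table = {}
row, created = update_or_create(table, 'dall-e-3', {'credits_per_image': 5})
assert created
# Re-running the "migration" with changed defaults updates in place:
row, created = update_or_create(table, 'dall-e-3', {'credits_per_image': 4})
assert not created and table['dall-e-3']['credits_per_image'] == 4
```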
@@ -75,7 +75,12 @@ class CreditUsageLog(AccountBaseModel):
        ('idea_generation', 'Content Ideas Generation'),
        ('content_generation', 'Content Generation'),
        ('image_generation', 'Image Generation'),
        ('image_prompt_extraction', 'Image Prompt Extraction'),
        ('linking', 'Internal Linking'),
        ('optimization', 'Content Optimization'),
        ('reparse', 'Content Reparse'),
        ('site_page_generation', 'Site Page Generation'),
        ('site_structure_generation', 'Site Structure Generation'),
        ('ideas', 'Content Ideas Generation'),  # Legacy
        ('content', 'Content Generation'),  # Legacy
        ('images', 'Image Generation'),  # Legacy
@@ -109,65 +114,48 @@ class CreditUsageLog(AccountBaseModel):


class CreditCostConfig(models.Model):
    """
    Token-based credit pricing configuration.
    ALL operations use token-to-credit conversion.
    Fixed credit costs per operation type.

    Per final-model-schemas.md:
    | Field | Type | Required | Notes |
    |-------|------|----------|-------|
    | operation_type | CharField(50) PK | Yes | Unique operation ID |
    | display_name | CharField(100) | Yes | Human-readable |
    | base_credits | IntegerField | Yes | Fixed credits per operation |
    | is_active | BooleanField | Yes | Enable/disable |
    | description | TextField | No | Admin notes |
    """
    # Operation identification
    # Operation identification (Primary Key)
    operation_type = models.CharField(
        max_length=50,
        unique=True,
        choices=CreditUsageLog.OPERATION_TYPE_CHOICES,
        help_text="AI operation type"
        primary_key=True,
        help_text="Unique operation ID (e.g., 'article_generation', 'image_generation')"
    )

    # Token-to-credit ratio (tokens per 1 credit)
    tokens_per_credit = models.IntegerField(
        default=100,
        validators=[MinValueValidator(1)],
        help_text="Number of tokens that equal 1 credit (e.g., 100 tokens = 1 credit)"
    # Human-readable name
    display_name = models.CharField(
        max_length=100,
        help_text="Human-readable name"
    )

    # Minimum credits (for very small token usage)
    min_credits = models.IntegerField(
    # Fixed credits per operation
    base_credits = models.IntegerField(
        default=1,
        validators=[MinValueValidator(0)],
        help_text="Minimum credits to charge regardless of token usage"
        help_text="Fixed credits per operation"
    )

    # Price per credit (for revenue reporting)
    price_per_credit_usd = models.DecimalField(
        max_digits=10,
        decimal_places=4,
        default=Decimal('0.01'),
        validators=[MinValueValidator(Decimal('0.0001'))],
        help_text="USD price per credit (for revenue reporting)"
    )

    # Metadata
    display_name = models.CharField(max_length=100, help_text="Human-readable name")
    description = models.TextField(blank=True, help_text="What this operation does")

    # Status
    is_active = models.BooleanField(default=True, help_text="Enable/disable this operation")


    # Audit fields
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)
    updated_by = models.ForeignKey(
        settings.AUTH_USER_MODEL,
        null=True,
        blank=True,
        on_delete=models.SET_NULL,
        related_name='credit_cost_updates',
        help_text="Admin who last updated"
    is_active = models.BooleanField(
        default=True,
        help_text="Enable/disable this operation"
    )

    # Change tracking
    previous_tokens_per_credit = models.IntegerField(
        null=True,
    # Admin notes
    description = models.TextField(
        blank=True,
        help_text="Tokens per credit before last update (for audit trail)"
        help_text="Admin notes about this operation"
    )

    # History tracking
@@ -181,18 +169,7 @@ class CreditCostConfig(models.Model):
        ordering = ['operation_type']

    def __str__(self):
        return f"{self.display_name} - {self.tokens_per_credit} tokens/credit"

    def save(self, *args, **kwargs):
        # Track token ratio changes
        if self.pk:
            try:
                old = CreditCostConfig.objects.get(pk=self.pk)
                if old.tokens_per_credit != self.tokens_per_credit:
                    self.previous_tokens_per_credit = old.tokens_per_credit
            except CreditCostConfig.DoesNotExist:
                pass
        super().save(*args, **kwargs)
        return f"{self.display_name} - {self.base_credits} credits"


class BillingConfiguration(models.Model):
@@ -421,6 +398,20 @@ class Invoice(AccountBaseModel):
    def tax_amount(self):
        return self.tax

    @property
    def tax_rate(self):
        """Get tax rate from metadata if stored"""
        if self.metadata and 'tax_rate' in self.metadata:
            return self.metadata['tax_rate']
        return 0

    @property
    def discount_amount(self):
        """Get discount amount from metadata if stored"""
        if self.metadata and 'discount_amount' in self.metadata:
            return self.metadata['discount_amount']
        return 0

    @property
    def total_amount(self):
        return self.total
@@ -510,6 +501,7 @@ class Payment(AccountBaseModel):
    manual_reference = models.CharField(
        max_length=255,
        blank=True,
        null=True,
        help_text="Bank transfer reference, wallet transaction ID, etc."
    )
    manual_notes = models.TextField(blank=True, help_text="Admin notes for manual payments")
@@ -549,9 +541,24 @@ class Payment(AccountBaseModel):
            models.Index(fields=['account', 'payment_method']),
            models.Index(fields=['invoice', 'status']),
        ]
        constraints = [
            # Ensure manual_reference is unique when not null/empty
            # This prevents duplicate bank transfer references
            models.UniqueConstraint(
                fields=['manual_reference'],
                name='unique_manual_reference_when_not_null',
                condition=models.Q(manual_reference__isnull=False) & ~models.Q(manual_reference='')
            ),
        ]

    def __str__(self):
        return f"Payment {self.id} - {self.get_payment_method_display()} - {self.amount} {self.currency}"

    def save(self, *args, **kwargs):
        """Normalize empty manual_reference to NULL for proper uniqueness handling"""
        if self.manual_reference == '':
            self.manual_reference = None
        super().save(*args, **kwargs)


class CreditPackage(models.Model):
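The `save()` override above pairs with the conditional `UniqueConstraint`: in PostgreSQL and most databases, UNIQUE treats NULLs as distinct but empty strings as equal, so normalizing `''` to `None` lets many payments omit a reference while duplicate non-blank references stay forbidden. A small plain-Python illustration of just the normalization rule (no ORM involved):

```python
def normalize_reference(manual_reference):
    # '' and None both mean "no reference"; store None so the partial unique
    # constraint (which excludes NULL/blank rows) never fires on blanks.
    return None if manual_reference == '' else manual_reference

assert normalize_reference('') is None
assert normalize_reference(None) is None        # already normalized
assert normalize_reference('TXN-001') == 'TXN-001'
```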
@@ -601,8 +608,10 @@ class CreditPackage(models.Model):

class PaymentMethodConfig(models.Model):
    """
    Configure payment methods availability per country
    Allows enabling/disabling manual payments by region
    Configure payment methods availability per country.

    For online payments (stripe, paypal): Credentials stored in IntegrationProvider.
    For manual payments (bank_transfer, local_wallet): Bank/wallet details stored here.
    """
    # Use centralized choices
    PAYMENT_METHOD_CHOICES = PAYMENT_METHOD_CHOICES

@@ -610,7 +619,7 @@ class PaymentMethodConfig(models.Model):
    country_code = models.CharField(
        max_length=2,
        db_index=True,
        help_text="ISO 2-letter country code (e.g., US, GB, IN)"
        help_text="ISO 2-letter country code (e.g., US, GB, PK) or '*' for global"
    )
    payment_method = models.CharField(max_length=50, choices=PAYMENT_METHOD_CHOICES)
    is_enabled = models.BooleanField(default=True)

@@ -619,21 +628,17 @@ class PaymentMethodConfig(models.Model):
    display_name = models.CharField(max_length=100, blank=True)
    instructions = models.TextField(blank=True, help_text="Payment instructions for users")

    # Manual payment details (for bank_transfer/local_wallet)
    # Manual payment details (for bank_transfer only)
    bank_name = models.CharField(max_length=255, blank=True)
    account_number = models.CharField(max_length=255, blank=True)
    routing_number = models.CharField(max_length=255, blank=True)
    swift_code = models.CharField(max_length=255, blank=True)
    account_title = models.CharField(max_length=255, blank=True, help_text="Account holder name")
    routing_number = models.CharField(max_length=255, blank=True, help_text="Routing/Sort code")
    swift_code = models.CharField(max_length=255, blank=True, help_text="SWIFT/BIC code for international")
    iban = models.CharField(max_length=255, blank=True, help_text="IBAN for international transfers")

    # Additional fields for local wallets
    wallet_type = models.CharField(max_length=100, blank=True, help_text="E.g., PayTM, PhonePe, etc.")
    wallet_id = models.CharField(max_length=255, blank=True)

    # Webhook configuration (Stripe/PayPal)
    webhook_url = models.URLField(blank=True, help_text="Webhook URL for payment gateway callbacks")
    webhook_secret = models.CharField(max_length=255, blank=True, help_text="Webhook secret for signature verification")
    api_key = models.CharField(max_length=255, blank=True, help_text="API key for payment gateway integration")
    api_secret = models.CharField(max_length=255, blank=True, help_text="API secret for payment gateway integration")
    wallet_type = models.CharField(max_length=100, blank=True, help_text="E.g., JazzCash, EasyPaisa, etc.")
    wallet_id = models.CharField(max_length=255, blank=True, help_text="Mobile number or wallet ID")

    # Order/priority
    sort_order = models.IntegerField(default=0)
@@ -691,18 +696,34 @@ class AccountPaymentMethod(AccountBaseModel):

class AIModelConfig(models.Model):
    """
    AI Model Configuration - Database-driven model pricing and capabilities.
    Replaces hardcoded MODEL_RATES and IMAGE_MODEL_RATES from constants.py
    All AI models (text + image) with pricing and credit configuration.
    Single Source of Truth for Models.

    Two pricing models:
    - Text models: Cost per 1M tokens (input/output), credits calculated AFTER AI call
    - Image models: Cost per image, credits calculated BEFORE AI call
    Per final-model-schemas.md:
    | Field | Type | Required | Notes |
    |-------|------|----------|-------|
    | id | AutoField PK | Auto | |
    | model_name | CharField(100) | Yes | gpt-5.1, dall-e-3, runware:97@1 |
    | model_type | CharField(20) | Yes | text / image |
    | provider | CharField(50) | Yes | Links to IntegrationProvider |
    | display_name | CharField(200) | Yes | Human-readable |
    | is_default | BooleanField | Yes | One default per type |
    | is_active | BooleanField | Yes | Enable/disable |
    | cost_per_1k_input | DecimalField | No | Provider cost (USD) - text models |
    | cost_per_1k_output | DecimalField | No | Provider cost (USD) - text models |
    | tokens_per_credit | IntegerField | No | Text: tokens per 1 credit (e.g., 1000) |
    | credits_per_image | IntegerField | No | Image: credits per image (e.g., 1, 5, 15) |
    | quality_tier | CharField(20) | No | basic / quality / premium |
    | max_tokens | IntegerField | No | Model token limit |
    | context_window | IntegerField | No | Model context size |
    | capabilities | JSONField | No | vision, function_calling, etc. |
    | created_at | DateTime | Auto | |
    | updated_at | DateTime | Auto | |
    """

    MODEL_TYPE_CHOICES = [
        ('text', 'Text Generation'),
        ('image', 'Image Generation'),
        ('embedding', 'Embedding'),
    ]

    PROVIDER_CHOICES = [
@@ -712,145 +733,112 @@ class AIModelConfig(models.Model):
        ('google', 'Google'),
    ]

    QUALITY_TIER_CHOICES = [
        ('basic', 'Basic'),
        ('quality', 'Quality'),
        ('premium', 'Premium'),
    ]

    # Basic Information
    model_name = models.CharField(
        max_length=100,
        unique=True,
        db_index=True,
        help_text="Model identifier used in API calls (e.g., 'gpt-4o-mini', 'dall-e-3')"
    )

    display_name = models.CharField(
        max_length=200,
        help_text="Human-readable name shown in UI (e.g., 'GPT-4o mini - Fast & Affordable')"
        help_text="Model identifier (e.g., 'gpt-5.1', 'dall-e-3', 'runware:97@1')"
    )

    model_type = models.CharField(
        max_length=20,
        choices=MODEL_TYPE_CHOICES,
        db_index=True,
        help_text="Type of model - determines which pricing fields are used"
        help_text="text / image"
    )

    provider = models.CharField(
        max_length=50,
        choices=PROVIDER_CHOICES,
        db_index=True,
        help_text="AI provider (OpenAI, Anthropic, etc.)"
        help_text="Links to IntegrationProvider"
    )

    # Text Model Pricing (Only for model_type='text')
    input_cost_per_1m = models.DecimalField(
        max_digits=10,
        decimal_places=4,
        null=True,
        blank=True,
        validators=[MinValueValidator(Decimal('0.0001'))],
        help_text="Cost per 1 million input tokens (USD). For text models only."
    )

    output_cost_per_1m = models.DecimalField(
        max_digits=10,
        decimal_places=4,
        null=True,
        blank=True,
        validators=[MinValueValidator(Decimal('0.0001'))],
        help_text="Cost per 1 million output tokens (USD). For text models only."
    )

    context_window = models.IntegerField(
        null=True,
        blank=True,
        validators=[MinValueValidator(1)],
        help_text="Maximum input tokens (context length). For text models only."
    )

    max_output_tokens = models.IntegerField(
        null=True,
        blank=True,
        validators=[MinValueValidator(1)],
        help_text="Maximum output tokens per request. For text models only."
    )

    # Image Model Pricing (Only for model_type='image')
    cost_per_image = models.DecimalField(
        max_digits=10,
        decimal_places=4,
        null=True,
        blank=True,
        validators=[MinValueValidator(Decimal('0.0001'))],
        help_text="Fixed cost per image generation (USD). For image models only."
    )

    valid_sizes = models.JSONField(
        null=True,
        blank=True,
        help_text='Array of valid image sizes (e.g., ["1024x1024", "1024x1792"]). For image models only.'
    )

    # Capabilities
    supports_json_mode = models.BooleanField(
        default=False,
        help_text="True for models with JSON response format support"
    )

    supports_vision = models.BooleanField(
        default=False,
        help_text="True for models that can analyze images"
    )

    supports_function_calling = models.BooleanField(
        default=False,
        help_text="True for models with function calling capability"
    )

    # Status & Configuration
    is_active = models.BooleanField(
        default=True,
        db_index=True,
        help_text="Enable/disable model without deleting"
    display_name = models.CharField(
        max_length=200,
        help_text="Human-readable name"
    )

    is_default = models.BooleanField(
        default=False,
        db_index=True,
        help_text="Mark as default model for its type (only one per type)"
        help_text="One default per type"
    )

    sort_order = models.IntegerField(
        default=0,
        help_text="Control order in dropdown lists (lower numbers first)"
    is_active = models.BooleanField(
        default=True,
        db_index=True,
        help_text="Enable/disable"
    )

    # Metadata
    description = models.TextField(
        blank=True,
        help_text="Admin notes about model usage, strengths, limitations"
    )

    release_date = models.DateField(
    # Text Model Pricing (cost per 1K tokens)
    cost_per_1k_input = models.DecimalField(
        max_digits=10,
        decimal_places=6,
        null=True,
        blank=True,
        help_text="When model was released/added"
        help_text="Provider cost per 1K input tokens (USD) - text models"
    )

    deprecation_date = models.DateField(
    cost_per_1k_output = models.DecimalField(
        max_digits=10,
        decimal_places=6,
        null=True,
        blank=True,
        help_text="When model will be removed"
        help_text="Provider cost per 1K output tokens (USD) - text models"
    )

    # Audit Fields
    # Credit Configuration
    tokens_per_credit = models.IntegerField(
        null=True,
        blank=True,
        help_text="Text: tokens per 1 credit (e.g., 1000, 10000)"
    )

    credits_per_image = models.IntegerField(
        null=True,
        blank=True,
        help_text="Image: credits per image (e.g., 1, 5, 15)"
    )

    quality_tier = models.CharField(
        max_length=20,
        choices=QUALITY_TIER_CHOICES,
        null=True,
        blank=True,
        help_text="basic / quality / premium - for image models"
    )

    # Model Limits
    max_tokens = models.IntegerField(
        null=True,
        blank=True,
        help_text="Model token limit"
    )

    context_window = models.IntegerField(
        null=True,
        blank=True,
        help_text="Model context size"
    )

    # Capabilities
    capabilities = models.JSONField(
        default=dict,
        blank=True,
        help_text="Capabilities: vision, function_calling, json_mode, etc."
    )

    # Timestamps
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)
    updated_by = models.ForeignKey(
        settings.AUTH_USER_MODEL,
        null=True,
        blank=True,
        on_delete=models.SET_NULL,
        related_name='ai_model_updates',
        help_text="Admin who last updated"
    )

    # History tracking
    history = HistoricalRecords()
@@ -860,7 +848,7 @@ class AIModelConfig(models.Model):
        db_table = 'igny8_ai_model_config'
        verbose_name = 'AI Model Configuration'
        verbose_name_plural = 'AI Model Configurations'
        ordering = ['model_type', 'sort_order', 'model_name']
        ordering = ['model_type', 'model_name']
        indexes = [
            models.Index(fields=['model_type', 'is_active']),
            models.Index(fields=['provider', 'is_active']),
@@ -873,52 +861,138 @@ class AIModelConfig(models.Model):
    def save(self, *args, **kwargs):
        """Ensure only one is_default per model_type"""
        if self.is_default:
            # Unset other defaults for same model_type
            AIModelConfig.objects.filter(
                model_type=self.model_type,
                is_default=True
            ).exclude(pk=self.pk).update(is_default=False)
        super().save(*args, **kwargs)

    def get_cost_for_tokens(self, input_tokens, output_tokens):
        """Calculate cost for text models based on token usage"""
        if self.model_type != 'text':
            raise ValueError("get_cost_for_tokens only applies to text models")

        if not self.input_cost_per_1m or not self.output_cost_per_1m:
            raise ValueError(f"Model {self.model_name} missing cost_per_1m values")

        cost = (
            (Decimal(input_tokens) * self.input_cost_per_1m) +
            (Decimal(output_tokens) * self.output_cost_per_1m)
        ) / Decimal('1000000')

        return cost
    @classmethod
    def get_default_text_model(cls):
        """Get the default text generation model"""
        return cls.objects.filter(model_type='text', is_default=True, is_active=True).first()

    def get_cost_for_images(self, num_images):
        """Calculate cost for image models"""
        if self.model_type != 'image':
            raise ValueError("get_cost_for_images only applies to image models")

        if not self.cost_per_image:
            raise ValueError(f"Model {self.model_name} missing cost_per_image")

        return self.cost_per_image * Decimal(num_images)
    @classmethod
    def get_default_image_model(cls):
        """Get the default image generation model"""
        return cls.objects.filter(model_type='image', is_default=True, is_active=True).first()

    def validate_size(self, size):
        """Check if size is valid for this image model"""
        if self.model_type != 'image':
            raise ValueError("validate_size only applies to image models")

        if not self.valid_sizes:
            return True  # No size restrictions

        return size in self.valid_sizes
    @classmethod
    def get_image_models_by_tier(cls):
        """Get all active image models grouped by quality tier"""
        return cls.objects.filter(
            model_type='image',
            is_active=True
        ).order_by('quality_tier', 'model_name')


class WebhookEvent(models.Model):
    """
    Store all incoming webhook events for audit and replay capability.

    def get_display_with_pricing(self):
        """For dropdowns: show model with pricing"""
        if self.model_type == 'text':
            return f"{self.display_name} - ${self.input_cost_per_1m}/${self.output_cost_per_1m} per 1M"
        elif self.model_type == 'image':
            return f"{self.display_name} - ${self.cost_per_image} per image"
        return self.display_name
    This model provides:
    - Audit trail of all webhook events
    - Idempotency verification (via event_id)
    - Ability to replay failed events
    - Debugging and monitoring
    """
    PROVIDER_CHOICES = [
        ('stripe', 'Stripe'),
        ('paypal', 'PayPal'),
    ]

    # Unique identifier from the payment provider
    event_id = models.CharField(
        max_length=255,
        unique=True,
        db_index=True,
        help_text="Unique event ID from the payment provider"
    )

    # Payment provider
    provider = models.CharField(
        max_length=20,
        choices=PROVIDER_CHOICES,
        db_index=True,
        help_text="Payment provider (stripe or paypal)"
    )

    # Event type (e.g., 'checkout.session.completed', 'PAYMENT.CAPTURE.COMPLETED')
    event_type = models.CharField(
        max_length=100,
        db_index=True,
        help_text="Event type from the provider"
    )

    # Full payload for debugging and replay
    payload = models.JSONField(
        help_text="Full webhook payload"
    )

    # Processing status
    processed = models.BooleanField(
        default=False,
        db_index=True,
        help_text="Whether this event has been successfully processed"
    )
    processed_at = models.DateTimeField(
        null=True,
        blank=True,
        help_text="When the event was processed"
    )

    # Error tracking
    error_message = models.TextField(
        blank=True,
        help_text="Error message if processing failed"
    )
    retry_count = models.IntegerField(
        default=0,
        help_text="Number of processing attempts"
    )

    # Timestamps
    created_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        app_label = 'billing'
        db_table = 'igny8_webhook_events'
        verbose_name = 'Webhook Event'
        verbose_name_plural = 'Webhook Events'
        ordering = ['-created_at']
        indexes = [
            models.Index(fields=['provider', 'event_type']),
            models.Index(fields=['processed', 'created_at']),
            models.Index(fields=['provider', 'processed']),
        ]

    def __str__(self):
        return f"{self.provider}:{self.event_type} - {self.event_id[:20]}..."

    @classmethod
    def record_event(cls, event_id: str, provider: str, event_type: str, payload: dict):
        """
        Record a webhook event. Returns (event, created) tuple.
        If the event already exists, returns the existing event.
        """
        return cls.objects.get_or_create(
            event_id=event_id,
            defaults={
                'provider': provider,
                'event_type': event_type,
                'payload': payload,
            }
        )

    def mark_processed(self):
        """Mark the event as successfully processed"""
        from django.utils import timezone
        self.processed = True
        self.processed_at = timezone.now()
        self.save(update_fields=['processed', 'processed_at'])

    def mark_failed(self, error_message: str):
        """Mark the event as failed with error message"""
        self.error_message = error_message
        self.retry_count += 1
        self.save(update_fields=['error_message', 'retry_count'])
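`record_event`'s `get_or_create` on the unique `event_id` is what gives the webhook handler its idempotency: a redelivered event returns the existing row with `created=False` and can be skipped instead of being processed twice. A Django-free sketch of that record-then-process pattern (the in-memory `events` dict is hypothetical, standing in for the table):

```python
# Hypothetical in-memory stand-in for the WebhookEvent table, keyed on the
# unique event_id, to illustrate idempotent webhook handling.
events = {}

def record_event(event_id, provider, event_type, payload):
    if event_id in events:
        return events[event_id], False          # duplicate delivery detected
    events[event_id] = {'provider': provider, 'event_type': event_type,
                        'payload': payload, 'processed': False}
    return events[event_id], True

event, created = record_event('evt_123', 'stripe', 'checkout.session.completed', {})
assert created
# The provider retries the same webhook; the duplicate is detected, not reprocessed:
event, created = record_event('evt_123', 'stripe', 'checkout.session.completed', {})
assert not created
```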
@@ -1,6 +1,8 @@
"""
Credit Service for managing credit transactions and deductions
"""
import math
import logging
from django.db import transaction
from django.utils import timezone
from igny8_core.business.billing.models import CreditTransaction, CreditUsageLog
@@ -8,10 +10,151 @@ from igny8_core.business.billing.constants import CREDIT_COSTS
from igny8_core.business.billing.exceptions import InsufficientCreditsError, CreditCalculationError
from igny8_core.auth.models import Account

logger = logging.getLogger(__name__)


def _check_low_credits_warning(account, previous_balance):
    """
    Check if credits have fallen below threshold and send warning email.
    Only sends if this is the first time falling below threshold.
    """
    try:
        from igny8_core.modules.system.email_models import EmailSettings
        from .email_service import BillingEmailService

        settings = EmailSettings.get_settings()
        if not settings.send_low_credit_warnings:
            return

        threshold = settings.low_credit_threshold

        # Only send if we CROSSED below the threshold (wasn't already below)
        if account.credits < threshold <= previous_balance:
            logger.info(f"Credits fell below threshold for account {account.id}: {account.credits} < {threshold}")
            BillingEmailService.send_low_credits_warning(
                account=account,
                current_credits=account.credits,
                threshold=threshold
            )
    except Exception as e:
        logger.error(f"Failed to check/send low credits warning: {e}")


class CreditService:
    """Service for managing credits - Token-based only"""

    @staticmethod
    def calculate_credits_for_image(model_name: str, num_images: int = 1) -> int:
        """
        Calculate credits for image generation based on AIModelConfig.credits_per_image.

        Args:
            model_name: The AI model name (e.g., 'dall-e-3', 'flux-1-1-pro')
            num_images: Number of images to generate

        Returns:
            int: Credits required

        Raises:
            CreditCalculationError: If model not found or has no credits_per_image
        """
        from igny8_core.business.billing.models import AIModelConfig

        try:
            model = AIModelConfig.objects.filter(
                model_name=model_name,
                is_active=True
            ).first()

            if not model:
                raise CreditCalculationError(f"Model {model_name} not found or inactive")

            if model.credits_per_image is None:
                raise CreditCalculationError(
                    f"Model {model_name} has no credits_per_image configured"
                )

            credits = model.credits_per_image * num_images

            logger.info(
                f"Calculated credits for {model_name}: "
                f"{num_images} images × {model.credits_per_image} = {credits} credits"
            )

            return credits

        except AIModelConfig.DoesNotExist:
            raise CreditCalculationError(f"Model {model_name} not found")

    @staticmethod
    def calculate_credits_from_tokens_by_model(model_name: str, total_tokens: int) -> int:
        """
        Calculate credits from token usage based on AIModelConfig.tokens_per_credit.

        This is the model-specific version that uses the model's configured rate.
        For operation-based calculation, use calculate_credits_from_tokens().

        Args:
            model_name: The AI model name (e.g., 'gpt-4o', 'claude-3-5-sonnet')
            total_tokens: Total tokens used (input + output)

        Returns:
            int: Credits required (minimum 1)

        Raises:
            CreditCalculationError: If model not found
        """
        from igny8_core.business.billing.models import AIModelConfig, BillingConfiguration

        try:
            model = AIModelConfig.objects.filter(
                model_name=model_name,
                is_active=True
            ).first()

            if model and model.tokens_per_credit:
                tokens_per_credit = model.tokens_per_credit
            else:
                # Fallback to global default
                billing_config = BillingConfiguration.get_config()
                tokens_per_credit = billing_config.default_tokens_per_credit
                logger.info(
                    f"Model {model_name} has no tokens_per_credit, "
                    f"using default: {tokens_per_credit}"
                )

            if tokens_per_credit <= 0:
                raise CreditCalculationError(
                    f"Invalid tokens_per_credit for {model_name}: {tokens_per_credit}"
                )

            # Get rounding mode
            billing_config = BillingConfiguration.get_config()
            rounding_mode = billing_config.credit_rounding_mode

            credits_float = total_tokens / tokens_per_credit

            if rounding_mode == 'up':
                credits = math.ceil(credits_float)
            elif rounding_mode == 'down':
                credits = math.floor(credits_float)
            else:  # nearest
                credits = round(credits_float)

            # Minimum 1 credit
            credits = max(credits, 1)

            logger.info(
                f"Calculated credits for {model_name}: "
                f"{total_tokens} tokens ÷ {tokens_per_credit} = {credits} credits"
            )

            return credits

        except Exception as e:
            logger.error(f"Error calculating credits for {model_name}: {e}")
            raise CreditCalculationError(f"Error calculating credits: {e}")

    @staticmethod
    def calculate_credits_from_tokens(operation_type, tokens_input, tokens_output):
        """
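The token-to-credit conversion above reduces to one division plus a configurable rounding step with a floor of 1 credit. A standalone sketch of just that arithmetic (rounding-mode names taken from the service code; no ORM involved — note Python's `round()` uses banker's rounding for the `nearest` case):

```python
import math

def credits_for_tokens(total_tokens, tokens_per_credit, rounding_mode='up'):
    # Mirrors the service's rounding logic: ceil / floor / banker's round,
    # then a minimum charge of 1 credit.
    credits_float = total_tokens / tokens_per_credit
    if rounding_mode == 'up':
        credits = math.ceil(credits_float)
    elif rounding_mode == 'down':
        credits = math.floor(credits_float)
    else:  # 'nearest'
        credits = round(credits_float)
    return max(credits, 1)

assert credits_for_tokens(1500, 1000, 'up') == 2     # 1.5 rounds up
assert credits_for_tokens(1500, 1000, 'down') == 1   # 1.5 rounds down
assert credits_for_tokens(10, 1000, 'down') == 1     # floor of 1 credit applies
```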
@@ -186,6 +329,9 @@ class CreditService:
        # Check sufficient credits (legacy: amount is already calculated)
        CreditService.check_credits_legacy(account, amount)

        # Store previous balance for low credits check
        previous_balance = account.credits

        # Deduct from account.credits
        account.credits -= amount
        account.save(update_fields=['credits'])
@@ -214,6 +360,9 @@ class CreditService:
            metadata=metadata or {}
        )

        # Check and send low credits warning if applicable
        _check_low_credits_warning(account, previous_balance)

        return account.credits

    @staticmethod
@@ -323,4 +472,56 @@ class CreditService:
|
||||
)
|
||||
|
||||
return account.credits
|
||||
|
||||
@staticmethod
|
||||
@transaction.atomic
|
||||
def deduct_credits_for_image(
|
||||
account,
|
||||
model_name: str,
|
||||
num_images: int = 1,
|
||||
description: str = None,
|
||||
metadata: dict = None,
|
||||
cost_usd: float = None,
|
||||
related_object_type: str = None,
|
||||
related_object_id: int = None
|
||||
):
|
||||
"""
|
||||
Deduct credits for image generation based on model's credits_per_image.
|
||||
|
||||
Args:
|
||||
account: Account instance
|
||||
model_name: AI model used (e.g., 'dall-e-3', 'flux-1-1-pro')
|
||||
num_images: Number of images generated
|
||||
description: Optional description
|
||||
metadata: Optional metadata dict
|
||||
cost_usd: Optional cost in USD
|
||||
related_object_type: Optional related object type
|
||||
related_object_id: Optional related object ID
|
||||
|
||||
Returns:
|
||||
int: New credit balance
|
||||
"""
|
||||
credits_required = CreditService.calculate_credits_for_image(model_name, num_images)
|
||||
|
||||
if account.credits < credits_required:
|
||||
raise InsufficientCreditsError(
|
||||
f"Insufficient credits. Required: {credits_required}, Available: {account.credits}"
|
||||
)
|
||||
|
||||
if not description:
|
||||
description = f"Image generation: {num_images} images with {model_name} = {credits_required} credits"
|
||||
|
||||
return CreditService.deduct_credits(
|
||||
account=account,
|
||||
amount=credits_required,
|
||||
operation_type='image_generation',
|
||||
description=description,
|
||||
metadata=metadata,
|
||||
cost_usd=cost_usd,
|
||||
model_used=model_name,
|
||||
tokens_input=None,
|
||||
tokens_output=None,
|
||||
related_object_type=related_object_type,
|
||||
related_object_id=related_object_id
|
||||
)
|
||||
|
||||
|
||||
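The insufficiency guard in `deduct_credits_for_image` can be sketched standalone. The flat `CREDITS_PER_IMAGE` table and its rates are illustrative stand-ins for the model's `credits_per_image` configuration, and the local exception class shadows the service's real one:

```python
# Assumed per-image rates for illustration only
CREDITS_PER_IMAGE = {'dall-e-3': 4, 'flux-1-1-pro': 6}

class InsufficientCreditsError(Exception):
    """Local stand-in for the billing service's exception."""

def credits_for_images(model_name: str, num_images: int, available: int) -> int:
    # Compute the requirement, then apply the same guard as the service
    required = CREDITS_PER_IMAGE[model_name] * num_images
    if available < required:
        raise InsufficientCreditsError(
            f"Insufficient credits. Required: {required}, Available: {available}"
        )
    return required

print(credits_for_images('dall-e-3', 2, 10))  # 8
```

With a balance of 10, two `flux-1-1-pro` images (12 credits) would raise before any deduction happens, which is why the real method runs inside `@transaction.atomic`.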
File diff suppressed because it is too large
@@ -14,32 +14,65 @@ from ....auth.models import Account, Subscription

class InvoiceService:
    """Service for managing invoices"""

+   @staticmethod
+   def get_pending_invoice(subscription: Subscription) -> Optional[Invoice]:
+       """
+       Get pending invoice for a subscription.
+       Used to find existing invoice during payment processing instead of creating duplicates.
+       """
+       return Invoice.objects.filter(
+           subscription=subscription,
+           status='pending'
+       ).order_by('-created_at').first()
+
+   @staticmethod
+   def get_or_create_subscription_invoice(
+       subscription: Subscription,
+       billing_period_start: datetime,
+       billing_period_end: datetime
+   ) -> tuple[Invoice, bool]:
+       """
+       Get existing pending invoice or create new one.
+       Returns tuple of (invoice, created) where created is True if new invoice was created.
+       """
+       # First try to find existing pending invoice for this subscription
+       existing = InvoiceService.get_pending_invoice(subscription)
+       if existing:
+           return existing, False
+
+       # Create new invoice if none exists
+       invoice = InvoiceService.create_subscription_invoice(
+           subscription=subscription,
+           billing_period_start=billing_period_start,
+           billing_period_end=billing_period_end
+       )
+       return invoice, True
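The get-or-create idempotency above can be sketched with an in-memory store; the dict is a stand-in for the `Invoice.objects` queries, and all names here are illustrative:

```python
# In-memory stand-in for the pending-invoice lookup
pending: dict[int, dict] = {}  # subscription_id -> invoice record

def get_or_create(subscription_id: int) -> tuple[dict, bool]:
    # Reuse an existing pending invoice instead of creating a duplicate
    existing = pending.get(subscription_id)
    if existing:
        return existing, False
    invoice = {'subscription_id': subscription_id, 'status': 'pending'}
    pending[subscription_id] = invoice
    return invoice, True

inv1, created1 = get_or_create(7)
inv2, created2 = get_or_create(7)
print(created1, created2, inv1 is inv2)  # True False True
```

Repeated payment-processing attempts for the same subscription therefore converge on one invoice rather than minting a new one per retry.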
    @staticmethod
    def generate_invoice_number(account: Account) -> str:
        """
        Generate unique invoice number with atomic locking to prevent duplicates
-       Format: INV-{ACCOUNT_ID}-{YEAR}{MONTH}-{COUNTER}
+       Format: INV-{YY}{MM}{COUNTER} (e.g., INV-26010001)
        """
        from django.db import transaction

        now = timezone.now()
-       prefix = f"INV-{account.id}-{now.year}{now.month:02d}"
+       prefix = f"INV-{now.year % 100:02d}{now.month:02d}"

        # Use atomic transaction with SELECT FOR UPDATE to prevent race conditions
        with transaction.atomic():
-           # Lock the invoice table for this account/month to get accurate count
+           # Lock the invoice table for this month to get accurate count
            count = Invoice.objects.select_for_update().filter(
-               account=account,
                created_at__year=now.year,
                created_at__month=now.month
            ).count()

-           invoice_number = f"{prefix}-{count + 1:04d}"
+           invoice_number = f"{prefix}{count + 1:04d}"

            # Double-check uniqueness (should not happen with lock, but safety check)
            while Invoice.objects.filter(invoice_number=invoice_number).exists():
                count += 1
-               invoice_number = f"{prefix}-{count + 1:04d}"
+               invoice_number = f"{prefix}{count + 1:04d}"

        return invoice_number
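The new number format can be previewed without the database. This is an illustrative helper mirroring the docstring's `INV-{YY}{MM}{COUNTER}` layout:

```python
from datetime import date

def preview_invoice_number(today: date, count_this_month: int) -> str:
    # INV-{YY}{MM}{COUNTER}: two-digit year, two-digit month, 4-digit counter
    prefix = f"INV-{today.year % 100:02d}{today.month:02d}"
    return f"{prefix}{count_this_month + 1:04d}"

print(preview_invoice_number(date(2026, 1, 15), 0))   # INV-26010001
print(preview_invoice_number(date(2026, 1, 15), 41))  # INV-26010042
```

Because the prefix no longer embeds the account id, the counter is effectively global per month, which is why the `select_for_update()` lock and the uniqueness re-check matter.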
@@ -52,6 +85,11 @@ class InvoiceService:
    ) -> Invoice:
        """
        Create invoice for subscription billing period
+
+       SIMPLIFIED CURRENCY LOGIC:
+       - ALL invoices are in USD (consistent for accounting)
+       - PKR equivalent is calculated and stored in metadata for display purposes
+       - Bank transfer users see PKR equivalent but invoice is technically USD
        """
        account = subscription.account
        plan = subscription.plan
@@ -74,12 +112,15 @@ class InvoiceService:
        invoice_date = timezone.now().date()
        due_date = invoice_date + timedelta(days=INVOICE_DUE_DATE_OFFSET)

-       # Get currency based on billing country
+       # ALWAYS use USD for invoices (simplified accounting)
        from igny8_core.business.billing.utils.currency import get_currency_for_country, convert_usd_to_local
-       currency = get_currency_for_country(account.billing_country)
-
-       # Convert plan price to local currency
-       local_price = convert_usd_to_local(float(plan.price), account.billing_country)
+       currency = 'USD'
+       usd_price = float(plan.price)
+
+       # Calculate local equivalent for display purposes (if applicable)
+       local_currency = get_currency_for_country(account.billing_country) if account.billing_country else 'USD'
+       local_equivalent = convert_usd_to_local(usd_price, account.billing_country) if local_currency != 'USD' else usd_price

        invoice = Invoice.objects.create(
            account=account,
@@ -95,16 +136,19 @@ class InvoiceService:
            'billing_period_end': billing_period_end.isoformat(),
            'subscription_id': subscription.id,  # Keep in metadata for backward compatibility
            'usd_price': str(plan.price),  # Store original USD price
-           'exchange_rate': str(local_price / float(plan.price) if plan.price > 0 else 1.0)
+           'local_currency': local_currency,  # Store local currency code for display
+           'local_equivalent': str(round(local_equivalent, 2)),  # Store local equivalent for display
+           'exchange_rate': str(local_equivalent / usd_price if usd_price > 0 else 1.0),
+           'payment_method': account.payment_method
        }
    )

-       # Add line item for subscription with converted price
+       # Add line item for subscription in USD
        invoice.add_line_item(
            description=f"{plan.name} Plan - {billing_period_start.strftime('%b %Y')}",
            quantity=1,
-           unit_price=Decimal(str(local_price)),
-           amount=Decimal(str(local_price))
+           unit_price=Decimal(str(usd_price)),
+           amount=Decimal(str(usd_price))
        )

        invoice.calculate_totals()
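The display-metadata computation above can be sketched without Django. `RATES` and `display_metadata` are hypothetical stand-ins for the project's currency utils, with an assumed example rate:

```python
from typing import Optional

# Hypothetical stand-in for get_currency_for_country / convert_usd_to_local;
# the 278.0 PKR rate is an assumed example, not a real quote.
RATES = {'PK': ('PKR', 278.0)}

def display_metadata(usd_price: float, billing_country: Optional[str]) -> dict:
    local_currency, rate = RATES.get(billing_country or '', ('USD', 1.0))
    local_equivalent = usd_price * rate if local_currency != 'USD' else usd_price
    return {
        'usd_price': str(usd_price),
        'local_currency': local_currency,       # display only
        'local_equivalent': str(round(local_equivalent, 2)),
        'exchange_rate': str(local_equivalent / usd_price if usd_price > 0 else 1.0),
    }

m = display_metadata(29.0, 'PK')
print(m['local_currency'], m['local_equivalent'], m['exchange_rate'])  # PKR 8062.0 278.0
```

The invoice itself stays in USD; only the metadata carries the local figures, so accounting sees one currency while the UI can still show a PKR amount.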
@@ -120,16 +164,23 @@ class InvoiceService:
    ) -> Invoice:
        """
        Create invoice for credit package purchase
+
+       SIMPLIFIED CURRENCY LOGIC:
+       - ALL invoices are in USD (consistent for accounting)
+       - PKR equivalent is calculated and stored in metadata for display purposes
        """
        from igny8_core.business.billing.config import INVOICE_DUE_DATE_OFFSET
        invoice_date = timezone.now().date()

-       # Get currency based on billing country
+       # ALWAYS use USD for invoices (simplified accounting)
        from igny8_core.business.billing.utils.currency import get_currency_for_country, convert_usd_to_local
-       currency = get_currency_for_country(account.billing_country)
-
-       # Convert credit package price to local currency
-       local_price = convert_usd_to_local(float(credit_package.price), account.billing_country)
+       currency = 'USD'
+       usd_price = float(credit_package.price)
+
+       # Calculate local equivalent for display purposes (if applicable)
+       local_currency = get_currency_for_country(account.billing_country) if account.billing_country else 'USD'
+       local_equivalent = convert_usd_to_local(usd_price, account.billing_country) if local_currency != 'USD' else usd_price

        invoice = Invoice.objects.create(
            account=account,
@@ -143,16 +194,19 @@ class InvoiceService:
            'credit_package_id': credit_package.id,
            'credit_amount': credit_package.credits,
            'usd_price': str(credit_package.price),  # Store original USD price
-           'exchange_rate': str(local_price / float(credit_package.price) if credit_package.price > 0 else 1.0)
+           'local_currency': local_currency,  # Store local currency code for display
+           'local_equivalent': str(round(local_equivalent, 2)),  # Store local equivalent for display
+           'exchange_rate': str(local_equivalent / usd_price if usd_price > 0 else 1.0),
+           'payment_method': account.payment_method
        },
    )

-       # Add line item for credit package with converted price
+       # Add line item for credit package in USD
        invoice.add_line_item(
            description=f"{credit_package.name} - {credit_package.credits:,} Credits",
            quantity=1,
-           unit_price=Decimal(str(local_price)),
-           amount=Decimal(str(local_price))
+           unit_price=Decimal(str(usd_price)),
+           amount=Decimal(str(usd_price))
        )

        invoice.calculate_totals()
@@ -212,10 +266,21 @@ class InvoiceService:
        transaction_id: Optional[str] = None
    ) -> Invoice:
        """
-       Mark invoice as paid
+       Mark invoice as paid and record payment details
+
+       Args:
+           invoice: Invoice to mark as paid
+           payment_method: Payment method used ('stripe', 'paypal', 'bank_transfer', etc.)
+           transaction_id: External transaction ID (Stripe payment intent, PayPal capture ID, etc.)
        """
        invoice.status = 'paid'
        invoice.paid_at = timezone.now()
+       invoice.payment_method = payment_method
+
+       # For Stripe payments, store the transaction ID in stripe_invoice_id field
+       if payment_method == 'stripe' and transaction_id:
+           invoice.stripe_invoice_id = transaction_id

        invoice.save()

        return invoice
@@ -239,43 +304,13 @@ class InvoiceService:
    @staticmethod
    def generate_pdf(invoice: Invoice) -> bytes:
        """
-       Generate PDF for invoice
-
-       TODO: Implement PDF generation using reportlab or weasyprint
-       For now, return placeholder
+       Generate professional PDF invoice using ReportLab
        """
-       from io import BytesIO
+       from igny8_core.business.billing.services.pdf_service import InvoicePDFGenerator

-       # Placeholder - implement PDF generation
-       buffer = BytesIO()
-
-       # Simple text representation for now
-       content = f"""
-INVOICE #{invoice.invoice_number}
-
-Bill To: {invoice.account.name}
-Email: {invoice.billing_email}
-
-Date: {invoice.created_at.strftime('%Y-%m-%d')}
-Due Date: {invoice.due_date.strftime('%Y-%m-%d') if invoice.due_date else 'N/A'}
-
-Line Items:
-"""
-       for item in invoice.line_items:
-           content += f"  {item['description']} - ${item['amount']}\n"
-
-       content += f"""
-Subtotal: ${invoice.subtotal}
-Tax: ${invoice.tax_amount}
-Total: ${invoice.total_amount}
-
-Status: {invoice.status.upper()}
-"""
-
-       buffer.write(content.encode('utf-8'))
-       buffer.seek(0)
-
-       return buffer.getvalue()
+       # Use the professional PDF generator
+       pdf_buffer = InvoicePDFGenerator.generate_invoice_pdf(invoice)
+       return pdf_buffer.getvalue()

    @staticmethod
    def get_account_invoices(
@@ -1,6 +1,6 @@
"""
Limit Service for Plan Limit Enforcement
-Manages hard limits (sites, users, keywords, clusters) and monthly limits (ideas, words, images, prompts)
+Manages hard limits (sites, users, keywords) and monthly limits (ahrefs_queries)
"""
from django.db import transaction
from django.utils import timezone
@@ -18,12 +18,12 @@ class LimitExceededError(Exception):


class HardLimitExceededError(LimitExceededError):
-   """Raised when a hard limit (sites, users, keywords, clusters) is exceeded"""
+   """Raised when a hard limit (sites, users, keywords) is exceeded"""
    pass


class MonthlyLimitExceededError(LimitExceededError):
-   """Raised when a monthly limit (ideas, words, images, prompts) is exceeded"""
+   """Raised when a monthly limit (ahrefs_queries) is exceeded"""
    pass


@@ -31,6 +31,7 @@ class LimitService:
    """Service for managing and enforcing plan limits"""

    # Map limit types to model/field names
+   # Simplified to only 3 hard limits: sites, users, keywords
    HARD_LIMIT_MAPPINGS = {
        'sites': {
            'model': 'igny8_core_auth.Site',
@@ -39,10 +40,10 @@ class LimitService:
            'filter_field': 'account',
        },
        'users': {
-           'model': 'igny8_core_auth.SiteUserAccess',
+           'model': 'igny8_core_auth.User',
            'plan_field': 'max_users',
-           'display_name': 'Team Users',
-           'filter_field': 'site__account',
+           'display_name': 'Team Members',
+           'filter_field': 'account',
        },
        'keywords': {
            'model': 'planner.Keywords',
@@ -50,39 +51,15 @@ class LimitService:
            'display_name': 'Keywords',
            'filter_field': 'account',
        },
-       'clusters': {
-           'model': 'planner.Clusters',
-           'plan_field': 'max_clusters',
-           'display_name': 'Clusters',
-           'filter_field': 'account',
-       },
    }

+   # Simplified to only 1 monthly limit: ahrefs_queries
+   # All other consumption is controlled by credits only
    MONTHLY_LIMIT_MAPPINGS = {
-       'content_ideas': {
-           'plan_field': 'max_content_ideas',
-           'usage_field': 'usage_content_ideas',
-           'display_name': 'Content Ideas',
-       },
-       'content_words': {
-           'plan_field': 'max_content_words',
-           'usage_field': 'usage_content_words',
-           'display_name': 'Content Words',
-       },
-       'images_basic': {
-           'plan_field': 'max_images_basic',
-           'usage_field': 'usage_images_basic',
-           'display_name': 'Basic Images',
-       },
-       'images_premium': {
-           'plan_field': 'max_images_premium',
-           'usage_field': 'usage_images_premium',
-           'display_name': 'Premium Images',
-       },
-       'image_prompts': {
-           'plan_field': 'max_image_prompts',
-           'usage_field': 'usage_image_prompts',
-           'display_name': 'Image Prompts',
+       'ahrefs_queries': {
+           'plan_field': 'max_ahrefs_queries',
+           'usage_field': 'usage_ahrefs_queries',
+           'display_name': 'Keyword Research Queries',
        },
    }
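A mapping entry ultimately drives a simple count-vs-limit comparison. This is an illustrative sketch, not `LimitService`'s actual check; treating a falsy plan limit as unlimited is an assumption here:

```python
def is_within_hard_limit(current_count: int, plan_limit: int) -> bool:
    # Assumption for this sketch: a plan limit of 0/None means unlimited
    if not plan_limit:
        return True
    # Strictly less-than: creating one more object must stay within the limit
    return current_count < plan_limit

print(is_within_hard_limit(2, 3))    # True: a 3rd site may still be created
print(is_within_hard_limit(3, 3))    # False: limit reached
print(is_within_hard_limit(100, 0))  # True under the unlimited assumption
```

The mapping's `model`, `plan_field`, and `filter_field` entries supply the count query and the plan attribute that feed the two arguments of such a check.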
@@ -318,11 +295,8 @@ class LimitService:
        Returns:
            dict: Summary of reset operation
        """
-       account.usage_content_ideas = 0
-       account.usage_content_words = 0
-       account.usage_images_basic = 0
-       account.usage_images_premium = 0
-       account.usage_image_prompts = 0
+       # Reset only ahrefs_queries (the only monthly limit now)
+       account.usage_ahrefs_queries = 0

        old_period_end = account.usage_period_end

@@ -341,8 +315,7 @@ class LimitService:
        account.usage_period_end = new_period_end

        account.save(update_fields=[
-           'usage_content_ideas', 'usage_content_words',
-           'usage_images_basic', 'usage_images_premium', 'usage_image_prompts',
+           'usage_ahrefs_queries',
            'usage_period_start', 'usage_period_end', 'updated_at'
        ])

@@ -353,5 +326,5 @@ class LimitService:
            'old_period_end': old_period_end.isoformat() if old_period_end else None,
            'new_period_start': new_period_start.isoformat(),
            'new_period_end': new_period_end.isoformat(),
-           'limits_reset': 5,
+           'limits_reset': 1,
        }
@@ -105,11 +105,15 @@ class PaymentService:
    ) -> Payment:
        """
        Mark payment as completed and update invoice
+       For automatic payments (Stripe/PayPal), sets approved_at but leaves approved_by as None
        """
        from .invoice_service import InvoiceService

        payment.status = 'succeeded'
        payment.processed_at = timezone.now()
+       # For automatic payments, set approved_at to indicate when payment was verified
+       # approved_by stays None to indicate it was automated, not manual approval
+       payment.approved_at = timezone.now()

        if transaction_id:
            payment.transaction_reference = transaction_id
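The `approved_at`/`approved_by` convention introduced above can be sketched as a small classifier; the function name and return labels are illustrative, not part of `PaymentService`:

```python
def approval_kind(approved_at, approved_by) -> str:
    # approved_at unset: payment has not been verified at all
    if approved_at is None:
        return 'unapproved'
    # approved_at set with a user: a human approved it (e.g., bank transfer)
    # approved_at set, approved_by None: an automated gateway verified it
    return 'manual' if approved_by else 'automated'

print(approval_kind('2026-01-01T00:00:00Z', None))     # automated
print(approval_kind('2026-01-01T00:00:00Z', 'admin'))  # manual
print(approval_kind(None, None))                       # unapproved
```

This lets reporting distinguish Stripe/PayPal auto-verification from manual bank-transfer approval using the same two fields.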
backend/igny8_core/business/billing/services/paypal_service.py (new file, 679 lines)
@@ -0,0 +1,679 @@
"""
|
||||
PayPal Service - REST API v2 integration
|
||||
|
||||
Handles:
|
||||
- Order creation and capture for one-time payments
|
||||
- Subscription management
|
||||
- Webhook verification
|
||||
|
||||
Configuration stored in IntegrationProvider model (provider_id='paypal')
|
||||
|
||||
Endpoints:
|
||||
- Sandbox: https://api-m.sandbox.paypal.com
|
||||
- Production: https://api-m.paypal.com
|
||||
"""
|
||||
import requests
|
||||
import base64
|
||||
import logging
|
||||
from typing import Optional, Dict, Any
|
||||
from django.conf import settings
|
||||
from igny8_core.modules.system.models import IntegrationProvider
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class PayPalConfigurationError(Exception):
|
||||
"""Raised when PayPal is not properly configured"""
|
||||
pass
|
||||
|
||||
|
||||
class PayPalAPIError(Exception):
|
||||
"""Raised when PayPal API returns an error"""
|
||||
def __init__(self, message: str, status_code: int = None, response: dict = None):
|
||||
super().__init__(message)
|
||||
self.status_code = status_code
|
||||
self.response = response
|
||||
|
||||
|
||||
class PayPalService:
|
||||
"""Service for PayPal payment operations using REST API v2"""
|
||||
|
||||
SANDBOX_URL = 'https://api-m.sandbox.paypal.com'
|
||||
PRODUCTION_URL = 'https://api-m.paypal.com'
|
||||
|
||||
def __init__(self):
|
||||
"""
|
||||
Initialize PayPal service with credentials from IntegrationProvider.
|
||||
|
||||
Raises:
|
||||
PayPalConfigurationError: If PayPal provider not configured or missing credentials
|
||||
"""
|
||||
provider = IntegrationProvider.get_provider('paypal')
|
||||
if not provider:
|
||||
raise PayPalConfigurationError(
|
||||
"PayPal provider not configured. Add 'paypal' provider in admin."
|
||||
)
|
||||
|
||||
if not provider.api_key or not provider.api_secret:
|
||||
raise PayPalConfigurationError(
|
||||
"PayPal client credentials not configured. "
|
||||
"Set api_key (Client ID) and api_secret (Client Secret) in provider."
|
||||
)
|
||||
|
||||
self.client_id = provider.api_key
|
||||
self.client_secret = provider.api_secret
|
||||
self.is_sandbox = provider.is_sandbox
|
||||
self.provider = provider
|
||||
self.config = provider.config or {}
|
||||
|
||||
# Set base URL
|
||||
if provider.api_endpoint:
|
||||
self.base_url = provider.api_endpoint.rstrip('/')
|
||||
else:
|
||||
self.base_url = self.SANDBOX_URL if self.is_sandbox else self.PRODUCTION_URL
|
||||
|
||||
# Cache access token
|
||||
self._access_token = None
|
||||
self._token_expires_at = None
|
||||
|
||||
# Configuration
|
||||
self.currency = self.config.get('currency', 'USD')
|
||||
self.webhook_id = self.config.get('webhook_id', '')
|
||||
|
||||
logger.info(
|
||||
f"PayPal service initialized (sandbox={self.is_sandbox}, "
|
||||
f"base_url={self.base_url})"
|
||||
)
|
||||
|
||||
@property
|
||||
def frontend_url(self) -> str:
|
||||
"""Get frontend URL from Django settings"""
|
||||
return getattr(settings, 'FRONTEND_URL', 'http://localhost:3000')
|
||||
|
||||
@property
|
||||
def return_url(self) -> str:
|
||||
"""Get return URL for PayPal redirects"""
|
||||
return self.config.get(
|
||||
'return_url',
|
||||
f'{self.frontend_url}/account/plans?paypal=success'
|
||||
)
|
||||
|
||||
@property
|
||||
def cancel_url(self) -> str:
|
||||
"""Get cancel URL for PayPal redirects"""
|
||||
return self.config.get(
|
||||
'cancel_url',
|
||||
f'{self.frontend_url}/account/plans?paypal=cancel'
|
||||
)
|
||||
|
||||
# ========== Authentication ==========
|
||||
|
||||
def _get_access_token(self) -> str:
|
||||
"""
|
||||
Get OAuth 2.0 access token from PayPal.
|
||||
|
||||
Returns:
|
||||
str: Access token
|
||||
|
||||
Raises:
|
||||
PayPalAPIError: If token request fails
|
||||
"""
|
||||
import time
|
||||
|
||||
# Return cached token if still valid
|
||||
if self._access_token and self._token_expires_at:
|
||||
if time.time() < self._token_expires_at - 60: # 60 second buffer
|
||||
return self._access_token
|
||||
|
||||
# Create Basic auth header
|
||||
auth_string = f'{self.client_id}:{self.client_secret}'
|
||||
auth_bytes = base64.b64encode(auth_string.encode()).decode()
|
||||
|
||||
response = requests.post(
|
||||
f'{self.base_url}/v1/oauth2/token',
|
||||
headers={
|
||||
'Authorization': f'Basic {auth_bytes}',
|
||||
'Content-Type': 'application/x-www-form-urlencoded',
|
||||
},
|
||||
data='grant_type=client_credentials',
|
||||
timeout=30,
|
||||
)
|
||||
|
||||
if response.status_code != 200:
|
||||
logger.error(f"PayPal token request failed: {response.text}")
|
||||
raise PayPalAPIError(
|
||||
"Failed to obtain PayPal access token",
|
||||
status_code=response.status_code,
|
||||
response=response.json() if response.text else None
|
||||
)
|
||||
|
||||
data = response.json()
|
||||
self._access_token = data['access_token']
|
||||
self._token_expires_at = time.time() + data.get('expires_in', 32400)
|
||||
|
||||
logger.debug("PayPal access token obtained successfully")
|
||||
return self._access_token
|
||||
|
||||
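The cache check at the top of `_get_access_token` can be isolated into a pure function for clarity; the helper name and sample timestamps are illustrative:

```python
from typing import Optional

def token_still_valid(expires_at: Optional[float], now: float, buffer: float = 60.0) -> bool:
    # Reuse the cached token until 60 seconds before its expiry,
    # mirroring the service's "if time.time() < expires_at - 60" check
    return expires_at is not None and now < expires_at - buffer

issued_at = 1_000_000.0
expires_at = issued_at + 32_400  # token lifetime of 32400 s (9 hours)
print(token_still_valid(expires_at, issued_at + 32_000))  # True
print(token_still_valid(expires_at, issued_at + 32_350))  # False (inside the 60 s buffer)
```

The buffer avoids sending a request whose token expires mid-flight; a token judged stale triggers a fresh OAuth client-credentials call.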
    def _make_request(
        self,
        method: str,
        endpoint: str,
        json_data: dict = None,
        params: dict = None,
        timeout: int = 30,
    ) -> dict:
        """
        Make authenticated API request to PayPal.

        Args:
            method: HTTP method (GET, POST, etc.)
            endpoint: API endpoint (e.g., '/v2/checkout/orders')
            json_data: JSON body data
            params: Query parameters
            timeout: Request timeout in seconds

        Returns:
            dict: Response JSON

        Raises:
            PayPalAPIError: If request fails
        """
        token = self._get_access_token()

        headers = {
            'Authorization': f'Bearer {token}',
            'Content-Type': 'application/json',
        }

        url = f'{self.base_url}{endpoint}'

        response = requests.request(
            method=method,
            url=url,
            headers=headers,
            json=json_data,
            params=params,
            timeout=timeout,
        )

        # Handle no content response
        if response.status_code == 204:
            return {}

        # Parse JSON response
        try:
            response_data = response.json() if response.text else {}
        except Exception:
            response_data = {'raw': response.text}

        # Check for errors
        if response.status_code >= 400:
            error_msg = response_data.get('message', str(response_data))
            logger.error(f"PayPal API error: {error_msg}")
            raise PayPalAPIError(
                f"PayPal API error: {error_msg}",
                status_code=response.status_code,
                response=response_data
            )

        return response_data
    # ========== Order Operations ==========

    def create_order(
        self,
        account,
        amount: float,
        currency: str = None,
        description: str = '',
        return_url: str = None,
        cancel_url: str = None,
        metadata: dict = None,
    ) -> Dict[str, Any]:
        """
        Create PayPal order for one-time payment.

        Args:
            account: Account model instance
            amount: Payment amount
            currency: Currency code (default from config)
            description: Payment description
            return_url: URL to redirect after approval
            cancel_url: URL to redirect on cancellation
            metadata: Additional metadata to store

        Returns:
            dict: Order data including order_id and approval_url
        """
        currency = currency or self.currency
        return_url = return_url or self.return_url
        cancel_url = cancel_url or self.cancel_url

        # Build order payload
        order_data = {
            'intent': 'CAPTURE',
            'purchase_units': [{
                'amount': {
                    'currency_code': currency,
                    'value': f'{amount:.2f}',
                },
                'description': description or 'IGNY8 Payment',
                'custom_id': str(account.id),
                'reference_id': str(account.id),
            }],
            'application_context': {
                'return_url': return_url,
                'cancel_url': cancel_url,
                'brand_name': 'IGNY8',
                'landing_page': 'BILLING',
                'user_action': 'PAY_NOW',
                'shipping_preference': 'NO_SHIPPING',
            }
        }

        # Create order
        response = self._make_request('POST', '/v2/checkout/orders', json_data=order_data)

        # Extract approval URL
        approval_url = None
        for link in response.get('links', []):
            if link.get('rel') == 'approve':
                approval_url = link.get('href')
                break

        logger.info(
            f"Created PayPal order {response.get('id')} for account {account.id}, "
            f"amount {currency} {amount}"
        )

        return {
            'order_id': response.get('id'),
            'status': response.get('status'),
            'approval_url': approval_url,
            'links': response.get('links', []),
        }
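The approval-URL extraction in `create_order` walks PayPal's HATEOAS `links` array. A standalone illustration, with a sample payload that mimics the shape of a v2 orders response (ids and URLs are made up):

```python
# Sample 'links' array shaped like a PayPal v2 create-order response
sample_links = [
    {'rel': 'self', 'href': 'https://api-m.sandbox.paypal.com/v2/checkout/orders/5O1'},
    {'rel': 'approve', 'href': 'https://www.sandbox.paypal.com/checkoutnow?token=5O1'},
    {'rel': 'capture', 'href': 'https://api-m.sandbox.paypal.com/v2/checkout/orders/5O1/capture'},
]

# Pull out the link the customer must visit to approve the payment
approval_url = next(
    (link['href'] for link in sample_links if link.get('rel') == 'approve'),
    None,
)
print(approval_url)  # https://www.sandbox.paypal.com/checkoutnow?token=5O1
```

The caller redirects the customer to `approval_url`; only after the customer approves there does `capture_order` make sense to call.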
    def create_credit_order(
        self,
        account,
        credit_package,
        return_url: str = None,
        cancel_url: str = None,
    ) -> Dict[str, Any]:
        """
        Create PayPal order for credit package purchase.

        Args:
            account: Account model instance
            credit_package: CreditPackage model instance
            return_url: URL to redirect after approval
            cancel_url: URL to redirect on cancellation

        Returns:
            dict: Order data including order_id and approval_url
        """
        return_url = return_url or f'{self.frontend_url}/account/usage?paypal=success'
        cancel_url = cancel_url or f'{self.frontend_url}/account/usage?paypal=cancel'

        # Add credit package info to custom_id for webhook processing
        order = self.create_order(
            account=account,
            amount=float(credit_package.price),
            description=f'{credit_package.name} - {credit_package.credits} credits',
            return_url=f'{return_url}&package_id={credit_package.id}',
            cancel_url=cancel_url,
        )

        # Store package info in order
        order['credit_package_id'] = str(credit_package.id)
        order['credit_amount'] = credit_package.credits

        return order
    def capture_order(self, order_id: str) -> Dict[str, Any]:
        """
        Capture payment for approved order.

        Call this after customer approves the order at PayPal.

        Args:
            order_id: PayPal order ID

        Returns:
            dict: Capture result with payment details
        """
        response = self._make_request(
            'POST',
            f'/v2/checkout/orders/{order_id}/capture'
        )

        # Extract capture details
        capture_id = None
        amount = None
        currency = None

        if response.get('purchase_units'):
            captures = response['purchase_units'][0].get('payments', {}).get('captures', [])
            if captures:
                capture = captures[0]
                capture_id = capture.get('id')
                amount = capture.get('amount', {}).get('value')
                currency = capture.get('amount', {}).get('currency_code')

        logger.info(
            f"Captured PayPal order {order_id}, capture_id={capture_id}, "
            f"amount={currency} {amount}"
        )

        return {
            'order_id': response.get('id'),
            'status': response.get('status'),
            'capture_id': capture_id,
            'amount': amount,
            'currency': currency,
            'payer': response.get('payer', {}),
            'custom_id': response.get('purchase_units', [{}])[0].get('custom_id'),
        }
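The nested traversal in `capture_order` can be checked against a hand-built payload. The sample response below mimics the shape of a v2 capture response (ids and amounts are made up), and the extraction matches the method's `purchase_units[0] -> payments -> captures[0]` path:

```python
# Sample payload shaped like a PayPal v2 order-capture response
response = {
    'id': '5O190127TN364715T',
    'status': 'COMPLETED',
    'purchase_units': [{
        'payments': {'captures': [{
            'id': '3C679366HH908993F',
            'amount': {'value': '29.00', 'currency_code': 'USD'},
        }]},
    }],
}

# Defensive extraction: every level may be missing on partial responses
captures = response.get('purchase_units', [{}])[0].get('payments', {}).get('captures', [])
capture = captures[0] if captures else {}
print(capture.get('id'), capture.get('amount', {}).get('value'))
# 3C679366HH908993F 29.00
```

Using `.get()` with defaults at each level means a declined or pending capture (no `captures` entry yet) yields `None` fields instead of a `KeyError`.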
    def get_order(self, order_id: str) -> Dict[str, Any]:
        """
        Get order details.

        Args:
            order_id: PayPal order ID

        Returns:
            dict: Order details
        """
        response = self._make_request('GET', f'/v2/checkout/orders/{order_id}')

        return {
            'order_id': response.get('id'),
            'status': response.get('status'),
            'intent': response.get('intent'),
            'payer': response.get('payer', {}),
            'purchase_units': response.get('purchase_units', []),
            'create_time': response.get('create_time'),
            'update_time': response.get('update_time'),
        }

    # ========== Subscription Operations ==========

    def create_subscription(
        self,
        account,
        plan_id: str,
        return_url: str = None,
        cancel_url: str = None,
    ) -> Dict[str, Any]:
        """
        Create PayPal subscription.

        Requires plan to be created in PayPal dashboard first.

        Args:
            account: Account model instance
            plan_id: PayPal Plan ID (created in PayPal dashboard)
            return_url: URL to redirect after approval
            cancel_url: URL to redirect on cancellation

        Returns:
            dict: Subscription data including approval_url
        """
        return_url = return_url or self.return_url
        cancel_url = cancel_url or self.cancel_url

        subscription_data = {
            'plan_id': plan_id,
            'custom_id': str(account.id),
            'application_context': {
                'return_url': return_url,
                'cancel_url': cancel_url,
                'brand_name': 'IGNY8',
                'locale': 'en-US',
                'shipping_preference': 'NO_SHIPPING',
                'user_action': 'SUBSCRIBE_NOW',
                'payment_method': {
                    'payer_selected': 'PAYPAL',
                    'payee_preferred': 'IMMEDIATE_PAYMENT_REQUIRED',
                }
            }
        }

        response = self._make_request(
            'POST',
            '/v1/billing/subscriptions',
            json_data=subscription_data
        )

        # Extract approval URL
        approval_url = None
        for link in response.get('links', []):
            if link.get('rel') == 'approve':
                approval_url = link.get('href')
                break

        logger.info(
            f"Created PayPal subscription {response.get('id')} for account {account.id}"
        )

        return {
            'subscription_id': response.get('id'),
            'status': response.get('status'),
            'approval_url': approval_url,
            'links': response.get('links', []),
        }

    def get_subscription(self, subscription_id: str) -> Dict[str, Any]:
        """
        Get subscription details.

        Args:
            subscription_id: PayPal subscription ID

        Returns:
            dict: Subscription details
        """
        response = self._make_request(
            'GET',
            f'/v1/billing/subscriptions/{subscription_id}'
        )

        return {
            'subscription_id': response.get('id'),
            'status': response.get('status'),
            'plan_id': response.get('plan_id'),
            'start_time': response.get('start_time'),
            'billing_info': response.get('billing_info', {}),
            'custom_id': response.get('custom_id'),
        }

    def cancel_subscription(
        self,
        subscription_id: str,
        reason: str = 'Customer requested cancellation'
    ) -> Dict[str, Any]:
        """
        Cancel PayPal subscription.

        Args:
            subscription_id: PayPal subscription ID
            reason: Reason for cancellation

        Returns:
            dict: Cancellation result
        """
        self._make_request(
            'POST',
            f'/v1/billing/subscriptions/{subscription_id}/cancel',
            json_data={'reason': reason}
        )

        logger.info(f"Cancelled PayPal subscription {subscription_id}")

        return {
            'subscription_id': subscription_id,
            'status': 'CANCELLED',
        }

    def suspend_subscription(self, subscription_id: str, reason: str = '') -> Dict[str, Any]:
        """
        Suspend PayPal subscription.

        Args:
            subscription_id: PayPal subscription ID
            reason: Reason for suspension

        Returns:
            dict: Suspension result
        """
        self._make_request(
            'POST',
            f'/v1/billing/subscriptions/{subscription_id}/suspend',
            json_data={'reason': reason}
        )

        logger.info(f"Suspended PayPal subscription {subscription_id}")

        return {
            'subscription_id': subscription_id,
            'status': 'SUSPENDED',
        }

    def activate_subscription(self, subscription_id: str, reason: str = '') -> Dict[str, Any]:
        """
        Activate/reactivate PayPal subscription.

        Args:
            subscription_id: PayPal subscription ID
            reason: Reason for activation

        Returns:
            dict: Activation result
        """
        self._make_request(
            'POST',
            f'/v1/billing/subscriptions/{subscription_id}/activate',
            json_data={'reason': reason}
        )

        logger.info(f"Activated PayPal subscription {subscription_id}")

        return {
            'subscription_id': subscription_id,
            'status': 'ACTIVE',
        }

    # ========== Webhook Verification ==========
|
||||
|
||||
def verify_webhook_signature(
|
||||
self,
|
||||
headers: dict,
|
||||
body: dict,
|
||||
) -> bool:
|
||||
"""
|
||||
Verify webhook signature from PayPal.
|
||||
|
||||
Args:
|
||||
headers: Request headers (dict-like)
|
||||
body: Request body (parsed JSON dict)
|
||||
|
||||
Returns:
|
||||
bool: True if signature is valid
|
||||
"""
|
||||
if not self.webhook_id:
|
||||
logger.warning("PayPal webhook_id not configured, skipping verification")
|
||||
return True # Optionally fail open or closed based on security policy
|
||||
|
||||
verification_data = {
|
||||
'auth_algo': headers.get('PAYPAL-AUTH-ALGO'),
|
||||
'cert_url': headers.get('PAYPAL-CERT-URL'),
|
||||
'transmission_id': headers.get('PAYPAL-TRANSMISSION-ID'),
|
||||
'transmission_sig': headers.get('PAYPAL-TRANSMISSION-SIG'),
|
||||
'transmission_time': headers.get('PAYPAL-TRANSMISSION-TIME'),
|
||||
'webhook_id': self.webhook_id,
|
||||
'webhook_event': body,
|
||||
}
|
||||
|
||||
try:
|
||||
response = self._make_request(
|
||||
'POST',
|
||||
'/v1/notifications/verify-webhook-signature',
|
||||
json_data=verification_data
|
||||
)
|
||||
|
||||
is_valid = response.get('verification_status') == 'SUCCESS'
|
||||
|
||||
if not is_valid:
|
||||
logger.warning(
|
||||
f"PayPal webhook verification failed: {response.get('verification_status')}"
|
||||
)
|
||||
|
||||
return is_valid
|
||||
|
||||
except PayPalAPIError as e:
|
||||
logger.error(f"PayPal webhook verification error: {e}")
|
||||
return False
|
||||
|
||||
# ========== Refunds ==========
|
||||
|
||||
def refund_capture(
|
||||
self,
|
||||
capture_id: str,
|
||||
amount: float = None,
|
||||
currency: str = None,
|
||||
note: str = None,
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Refund a captured payment.
|
||||
|
||||
Args:
|
||||
capture_id: PayPal capture ID
|
||||
amount: Amount to refund (None for full refund)
|
||||
currency: Currency code
|
||||
note: Note to payer
|
||||
|
||||
Returns:
|
||||
dict: Refund details
|
||||
"""
|
||||
refund_data = {}
|
||||
|
||||
if amount:
|
||||
refund_data['amount'] = {
|
||||
'value': f'{amount:.2f}',
|
||||
'currency_code': currency or self.currency,
|
||||
}
|
||||
|
||||
if note:
|
||||
refund_data['note_to_payer'] = note
|
||||
|
||||
response = self._make_request(
|
||||
'POST',
|
||||
f'/v2/payments/captures/{capture_id}/refund',
|
||||
json_data=refund_data if refund_data else None
|
||||
)
|
||||
|
||||
logger.info(
|
||||
f"Refunded PayPal capture {capture_id}, refund_id={response.get('id')}"
|
||||
)
|
||||
|
||||
return {
|
||||
'refund_id': response.get('id'),
|
||||
'status': response.get('status'),
|
||||
'amount': response.get('amount', {}).get('value'),
|
||||
'currency': response.get('amount', {}).get('currency_code'),
|
||||
}
|
||||
|
||||
|
||||
# Convenience function
|
||||
def get_paypal_service() -> PayPalService:
|
||||
"""
|
||||
Get PayPalService instance.
|
||||
|
||||
Returns:
|
||||
PayPalService: Initialized service
|
||||
|
||||
Raises:
|
||||
PayPalConfigurationError: If PayPal not configured
|
||||
"""
|
||||
return PayPalService()
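The approval-URL lookup in `create_subscription` above follows PayPal's HATEOAS `links` convention: the API returns a list of `{rel, href}` links and the client picks the one with `rel == 'approve'`. A standalone sketch of that lookup (the sample response below is illustrative, not a real API payload):

```python
from typing import Optional

def extract_approval_url(response: dict) -> Optional[str]:
    """Return the href of the 'approve' link from a PayPal response, if present."""
    for link in response.get('links', []):
        if link.get('rel') == 'approve':
            return link.get('href')
    return None

# Illustrative response shape; real payloads come from POST /v1/billing/subscriptions
sample = {
    'id': 'I-EXAMPLE',
    'status': 'APPROVAL_PENDING',
    'links': [
        {'rel': 'self', 'href': 'https://api.sandbox.paypal.com/v1/billing/subscriptions/I-EXAMPLE'},
        {'rel': 'approve', 'href': 'https://www.sandbox.paypal.com/webapps/billing/subscriptions?ba_token=BA-TEST'},
    ],
}
print(extract_approval_url(sample))
```

The caller redirects the buyer to this URL; until the buyer approves, the subscription stays in `APPROVAL_PENDING`, which is why `create_subscription` returns `approval_url` alongside the ID.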
@@ -9,17 +9,32 @@ from reportlab.lib import colors
from reportlab.lib.pagesizes import letter
from reportlab.lib.styles import getSampleStyleSheet, ParagraphStyle
from reportlab.lib.units import inch
from reportlab.platypus import SimpleDocTemplate, Table, TableStyle, Paragraph, Spacer, Image
from reportlab.platypus import SimpleDocTemplate, Table, TableStyle, Paragraph, Spacer, Image, HRFlowable
from reportlab.lib.enums import TA_LEFT, TA_RIGHT, TA_CENTER
from django.conf import settings
import os
import logging

logger = logging.getLogger(__name__)

# Logo path - check multiple possible locations
LOGO_PATHS = [
    '/data/app/igny8/frontend/public/images/logo/IGNY8_LIGHT_LOGO.png',
    '/app/static/images/logo/IGNY8_LIGHT_LOGO.png',
]


class InvoicePDFGenerator:
    """Generate PDF invoices"""

    @staticmethod
    def get_logo_path():
        """Find the logo file from possible locations"""
        for path in LOGO_PATHS:
            if os.path.exists(path):
                return path
        return None

    @staticmethod
    def generate_invoice_pdf(invoice):
        """
@@ -39,8 +54,8 @@ class InvoicePDFGenerator:
            pagesize=letter,
            rightMargin=0.75*inch,
            leftMargin=0.75*inch,
            topMargin=0.75*inch,
            bottomMargin=0.75*inch
            topMargin=0.5*inch,
            bottomMargin=0.5*inch
        )

        # Container for PDF elements
@@ -51,17 +66,19 @@ class InvoicePDFGenerator:
        title_style = ParagraphStyle(
            'CustomTitle',
            parent=styles['Heading1'],
            fontSize=24,
            fontSize=28,
            textColor=colors.HexColor('#1f2937'),
            spaceAfter=30,
            spaceAfter=0,
            fontName='Helvetica-Bold',
        )

        heading_style = ParagraphStyle(
            'CustomHeading',
            parent=styles['Heading2'],
            fontSize=14,
            textColor=colors.HexColor('#374151'),
            spaceAfter=12,
            fontSize=12,
            textColor=colors.HexColor('#1f2937'),
            spaceAfter=8,
            fontName='Helvetica-Bold',
        )

        normal_style = ParagraphStyle(
@@ -69,145 +86,292 @@ class InvoicePDFGenerator:
            parent=styles['Normal'],
            fontSize=10,
            textColor=colors.HexColor('#4b5563'),
            fontName='Helvetica',
        )

        # Header
        elements.append(Paragraph('INVOICE', title_style))
        elements.append(Spacer(1, 0.2*inch))
        label_style = ParagraphStyle(
            'LabelStyle',
            parent=styles['Normal'],
            fontSize=9,
            textColor=colors.HexColor('#6b7280'),
            fontName='Helvetica',
        )

        # Company info and invoice details side by side
        company_data = [
            ['<b>From:</b>', f'<b>Invoice #:</b> {invoice.invoice_number}'],
            [getattr(settings, 'COMPANY_NAME', 'Igny8'), f'<b>Date:</b> {invoice.created_at.strftime("%B %d, %Y")}'],
            [getattr(settings, 'COMPANY_ADDRESS', ''), f'<b>Due Date:</b> {invoice.due_date.strftime("%B %d, %Y")}'],
            [getattr(settings, 'COMPANY_EMAIL', settings.DEFAULT_FROM_EMAIL), f'<b>Status:</b> {invoice.status.upper()}'],
        ]
        value_style = ParagraphStyle(
            'ValueStyle',
            parent=styles['Normal'],
            fontSize=10,
            textColor=colors.HexColor('#1f2937'),
            fontName='Helvetica-Bold',
        )

        company_table = Table(company_data, colWidths=[3.5*inch, 3*inch])
        company_table.setStyle(TableStyle([
            ('FONTNAME', (0, 0), (-1, -1), 'Helvetica'),
            ('FONTSIZE', (0, 0), (-1, -1), 10),
            ('TEXTCOLOR', (0, 0), (-1, -1), colors.HexColor('#4b5563')),
            ('VALIGN', (0, 0), (-1, -1), 'TOP'),
            ('ALIGN', (1, 0), (1, -1), 'RIGHT'),
        right_align_style = ParagraphStyle(
            'RightAlign',
            parent=styles['Normal'],
            fontSize=10,
            textColor=colors.HexColor('#4b5563'),
            alignment=TA_RIGHT,
            fontName='Helvetica',
        )

        right_bold_style = ParagraphStyle(
            'RightBold',
            parent=styles['Normal'],
            fontSize=10,
            textColor=colors.HexColor('#1f2937'),
            alignment=TA_RIGHT,
            fontName='Helvetica-Bold',
        )

        # Header with Logo and Invoice title
        logo_path = InvoicePDFGenerator.get_logo_path()
        header_data = []

        if logo_path:
            try:
                logo = Image(logo_path, width=1.5*inch, height=0.5*inch)
                logo.hAlign = 'LEFT'
                header_data = [[logo, Paragraph('INVOICE', title_style)]]
            except Exception as e:
                logger.warning(f"Could not load logo: {e}")
                header_data = [[Paragraph('IGNY8', title_style), Paragraph('INVOICE', title_style)]]
        else:
            header_data = [[Paragraph('IGNY8', title_style), Paragraph('INVOICE', title_style)]]

        header_table = Table(header_data, colWidths=[3.5*inch, 3*inch])
        header_table.setStyle(TableStyle([
            ('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
            ('ALIGN', (0, 0), (0, 0), 'LEFT'),
            ('ALIGN', (1, 0), (1, 0), 'RIGHT'),
        ]))
        elements.append(company_table)
        elements.append(header_table)
        elements.append(Spacer(1, 0.3*inch))

        # Bill to section
        elements.append(Paragraph('<b>Bill To:</b>', heading_style))
        bill_to_data = [
            [invoice.account.name],
            [invoice.account.owner.email],
        # Divider line
        elements.append(HRFlowable(width="100%", thickness=1, color=colors.HexColor('#e5e7eb'), spaceAfter=20))

        # Invoice details section (right side info)
        invoice_info = [
            [Paragraph('Invoice Number:', label_style), Paragraph(invoice.invoice_number, value_style)],
            [Paragraph('Date:', label_style), Paragraph(invoice.created_at.strftime("%B %d, %Y"), value_style)],
            [Paragraph('Due Date:', label_style), Paragraph(invoice.due_date.strftime("%B %d, %Y"), value_style)],
            [Paragraph('Status:', label_style), Paragraph(invoice.status.upper(), value_style)],
        ]

        if hasattr(invoice.account, 'billing_email') and invoice.account.billing_email:
            bill_to_data.append([f'Billing: {invoice.account.billing_email}'])
        invoice_info_table = Table(invoice_info, colWidths=[1.2*inch, 2*inch])
        invoice_info_table.setStyle(TableStyle([
            ('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
            ('BOTTOMPADDING', (0, 0), (-1, -1), 4),
            ('TOPPADDING', (0, 0), (-1, -1), 4),
        ]))

        for line in bill_to_data:
            elements.append(Paragraph(line[0], normal_style))
        # From and To section
        company_name = getattr(settings, 'COMPANY_NAME', 'Igny8')
        company_email = getattr(settings, 'COMPANY_EMAIL', settings.DEFAULT_FROM_EMAIL)

        elements.append(Spacer(1, 0.3*inch))
        from_section = [
            Paragraph('FROM', heading_style),
            Paragraph(company_name, value_style),
            Paragraph(company_email, normal_style),
        ]

        customer_name = invoice.account.name if invoice.account else 'N/A'
        customer_email = invoice.account.owner.email if invoice.account and invoice.account.owner else invoice.account.billing_email if invoice.account else 'N/A'
        billing_email = invoice.account.billing_email if invoice.account and hasattr(invoice.account, 'billing_email') and invoice.account.billing_email else None

        to_section = [
            Paragraph('BILL TO', heading_style),
            Paragraph(customer_name, value_style),
            Paragraph(customer_email, normal_style),
        ]
        if billing_email and billing_email != customer_email:
            to_section.append(Paragraph(f'Billing: {billing_email}', normal_style))

        # Create from/to layout
        from_content = []
        for item in from_section:
            from_content.append([item])
        from_table = Table(from_content, colWidths=[3*inch])

        to_content = []
        for item in to_section:
            to_content.append([item])
        to_table = Table(to_content, colWidths=[3*inch])

        # Main info layout with From, To, and Invoice details
        main_info = [[from_table, to_table, invoice_info_table]]
        main_info_table = Table(main_info, colWidths=[2.3*inch, 2.3*inch, 2.4*inch])
        main_info_table.setStyle(TableStyle([
            ('VALIGN', (0, 0), (-1, -1), 'TOP'),
        ]))

        elements.append(main_info_table)
        elements.append(Spacer(1, 0.4*inch))

        # Line items table
        elements.append(Paragraph('<b>Items:</b>', heading_style))
        elements.append(Paragraph('ITEMS', heading_style))
        elements.append(Spacer(1, 0.1*inch))

        # Table header
        # Table header - use Paragraph for proper rendering
        line_items_data = [
            ['Description', 'Quantity', 'Unit Price', 'Amount']
            [
                Paragraph('Description', ParagraphStyle('Header', fontName='Helvetica-Bold', fontSize=10, textColor=colors.HexColor('#374151'))),
                Paragraph('Qty', ParagraphStyle('Header', fontName='Helvetica-Bold', fontSize=10, textColor=colors.HexColor('#374151'), alignment=TA_CENTER)),
                Paragraph('Unit Price', ParagraphStyle('Header', fontName='Helvetica-Bold', fontSize=10, textColor=colors.HexColor('#374151'), alignment=TA_RIGHT)),
                Paragraph('Amount', ParagraphStyle('Header', fontName='Helvetica-Bold', fontSize=10, textColor=colors.HexColor('#374151'), alignment=TA_RIGHT)),
            ]
        ]

        # Get line items
        for item in invoice.line_items.all():
        # Get line items - line_items is a JSON field (list of dicts)
        items = invoice.line_items or []
        for item in items:
            unit_price = float(item.get('unit_price', 0))
            amount = float(item.get('amount', 0))
            line_items_data.append([
                item.description,
                str(item.quantity),
                f'{invoice.currency} {item.unit_price:.2f}',
                f'{invoice.currency} {item.total_price:.2f}'
                Paragraph(item.get('description', ''), normal_style),
                Paragraph(str(item.get('quantity', 1)), ParagraphStyle('Center', parent=normal_style, alignment=TA_CENTER)),
                Paragraph(f'{invoice.currency} {unit_price:.2f}', right_align_style),
                Paragraph(f'{invoice.currency} {amount:.2f}', right_align_style),
            ])

        # Add subtotal, tax, total rows
        line_items_data.append(['', '', '<b>Subtotal:</b>', f'<b>{invoice.currency} {invoice.subtotal:.2f}</b>'])

        if invoice.tax_amount and invoice.tax_amount > 0:
            line_items_data.append(['', '', f'Tax ({invoice.tax_rate}%):', f'{invoice.currency} {invoice.tax_amount:.2f}'])

        if invoice.discount_amount and invoice.discount_amount > 0:
            line_items_data.append(['', '', 'Discount:', f'-{invoice.currency} {invoice.discount_amount:.2f}'])

        line_items_data.append(['', '', '<b>Total:</b>', f'<b>{invoice.currency} {invoice.total_amount:.2f}</b>'])
        # Add empty row for spacing before totals
        line_items_data.append(['', '', '', ''])

        # Create table
        line_items_table = Table(
            line_items_data,
            colWidths=[3*inch, 1*inch, 1.25*inch, 1.25*inch]
            colWidths=[3.2*inch, 0.8*inch, 1.25*inch, 1.25*inch]
        )

        num_items = len(items)
        line_items_table.setStyle(TableStyle([
            # Header row
            ('BACKGROUND', (0, 0), (-1, 0), colors.HexColor('#f3f4f6')),
            ('TEXTCOLOR', (0, 0), (-1, 0), colors.HexColor('#1f2937')),
            ('FONTNAME', (0, 0), (-1, 0), 'Helvetica-Bold'),
            ('FONTSIZE', (0, 0), (-1, 0), 10),
            ('BOTTOMPADDING', (0, 0), (-1, 0), 12),
            ('TOPPADDING', (0, 0), (-1, 0), 12),

            # Body rows
            ('FONTNAME', (0, 1), (-1, -4), 'Helvetica'),
            ('FONTSIZE', (0, 1), (-1, -4), 9),
            ('TEXTCOLOR', (0, 1), (-1, -4), colors.HexColor('#4b5563')),
            ('ROWBACKGROUNDS', (0, 1), (-1, -4), [colors.white, colors.HexColor('#f9fafb')]),
            ('ROWBACKGROUNDS', (0, 1), (-1, num_items), [colors.white, colors.HexColor('#f9fafb')]),

            # Summary rows (last 3-4 rows)
            ('FONTNAME', (0, -4), (-1, -1), 'Helvetica'),
            ('FONTSIZE', (0, -4), (-1, -1), 9),
            ('ALIGN', (2, 0), (2, -1), 'RIGHT'),
            ('ALIGN', (3, 0), (3, -1), 'RIGHT'),
            # Alignment
            ('ALIGN', (1, 0), (1, -1), 'CENTER'),
            ('ALIGN', (2, 0), (-1, -1), 'RIGHT'),
            ('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),

            # Grid
            ('GRID', (0, 0), (-1, -4), 0.5, colors.HexColor('#e5e7eb')),
            ('LINEABOVE', (2, -4), (-1, -4), 1, colors.HexColor('#d1d5db')),
            ('LINEABOVE', (2, -1), (-1, -1), 2, colors.HexColor('#1f2937')),
            # Grid for items only
            ('LINEBELOW', (0, 0), (-1, 0), 1, colors.HexColor('#d1d5db')),
            ('LINEBELOW', (0, num_items), (-1, num_items), 1, colors.HexColor('#e5e7eb')),

            # Padding
            ('TOPPADDING', (0, 0), (-1, -1), 8),
            ('BOTTOMPADDING', (0, 0), (-1, -1), 8),
            ('LEFTPADDING', (0, 0), (-1, -1), 10),
            ('RIGHTPADDING', (0, 0), (-1, -1), 10),
            ('TOPPADDING', (0, 1), (-1, -1), 10),
            ('BOTTOMPADDING', (0, 1), (-1, -1), 10),
            ('LEFTPADDING', (0, 0), (-1, -1), 8),
            ('RIGHTPADDING', (0, 0), (-1, -1), 8),
        ]))

        elements.append(line_items_table)
        elements.append(Spacer(1, 0.2*inch))

        # Totals section - right aligned
        totals_data = [
            [Paragraph('Subtotal:', right_align_style), Paragraph(f'{invoice.currency} {float(invoice.subtotal):.2f}', right_bold_style)],
        ]

        tax_amount = float(invoice.tax or 0)
        if tax_amount > 0:
            tax_rate = invoice.metadata.get('tax_rate', 0) if invoice.metadata else 0
            totals_data.append([
                Paragraph(f'Tax ({tax_rate}%):', right_align_style),
                Paragraph(f'{invoice.currency} {tax_amount:.2f}', right_align_style)
            ])

        discount_amount = float(invoice.metadata.get('discount_amount', 0)) if invoice.metadata else 0
        if discount_amount > 0:
            totals_data.append([
                Paragraph('Discount:', right_align_style),
                Paragraph(f'-{invoice.currency} {discount_amount:.2f}', right_align_style)
            ])

        totals_data.append([
            Paragraph('Total:', ParagraphStyle('TotalLabel', fontName='Helvetica-Bold', fontSize=12, textColor=colors.HexColor('#1f2937'), alignment=TA_RIGHT)),
            Paragraph(f'{invoice.currency} {float(invoice.total):.2f}', ParagraphStyle('TotalValue', fontName='Helvetica-Bold', fontSize=12, textColor=colors.HexColor('#1f2937'), alignment=TA_RIGHT))
        ])

        totals_table = Table(totals_data, colWidths=[1.5*inch, 1.5*inch])
        totals_table.setStyle(TableStyle([
            ('ALIGN', (0, 0), (-1, -1), 'RIGHT'),
            ('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
            ('TOPPADDING', (0, 0), (-1, -1), 6),
            ('BOTTOMPADDING', (0, 0), (-1, -1), 6),
            ('LINEABOVE', (0, -1), (-1, -1), 2, colors.HexColor('#1f2937')),
        ]))

        # Right-align the totals table
        totals_wrapper = Table([[totals_table]], colWidths=[6.5*inch])
        totals_wrapper.setStyle(TableStyle([
            ('ALIGN', (0, 0), (0, 0), 'RIGHT'),
        ]))
        elements.append(totals_wrapper)
        elements.append(Spacer(1, 0.4*inch))

        # Payment information
        if invoice.status == 'paid':
            elements.append(Paragraph('<b>Payment Information:</b>', heading_style))
            elements.append(HRFlowable(width="100%", thickness=1, color=colors.HexColor('#e5e7eb'), spaceBefore=10, spaceAfter=15))
            elements.append(Paragraph('PAYMENT INFORMATION', heading_style))

            payment = invoice.payments.filter(status='succeeded').first()
            if payment:
                payment_method = payment.get_payment_method_display() if hasattr(payment, 'get_payment_method_display') else str(payment.payment_method)
                payment_date = payment.processed_at.strftime("%B %d, %Y") if payment.processed_at else 'N/A'

                payment_info = [
                    f'Payment Method: {payment.get_payment_method_display()}',
                    f'Paid On: {payment.processed_at.strftime("%B %d, %Y")}',
                    [Paragraph('Payment Method:', label_style), Paragraph(payment_method, value_style)],
                    [Paragraph('Paid On:', label_style), Paragraph(payment_date, value_style)],
                ]

                if payment.manual_reference:
                    payment_info.append(f'Reference: {payment.manual_reference}')

                for line in payment_info:
                    elements.append(Paragraph(line, normal_style))
                    payment_info.append([Paragraph('Reference:', label_style), Paragraph(payment.manual_reference, value_style)])

                payment_table = Table(payment_info, colWidths=[1.5*inch, 3*inch])
                payment_table.setStyle(TableStyle([
                    ('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
                    ('BOTTOMPADDING', (0, 0), (-1, -1), 4),
                    ('TOPPADDING', (0, 0), (-1, -1), 4),
                ]))
                elements.append(payment_table)
                elements.append(Spacer(1, 0.2*inch))

        # Footer / Notes
        if invoice.notes:
            elements.append(Spacer(1, 0.2*inch))
            elements.append(Paragraph('<b>Notes:</b>', heading_style))
            elements.append(Paragraph('NOTES', heading_style))
            elements.append(Paragraph(invoice.notes, normal_style))

        # Terms
        elements.append(Spacer(1, 0.3*inch))
        elements.append(Paragraph('<b>Terms & Conditions:</b>', heading_style))
        terms = getattr(settings, 'INVOICE_TERMS', 'Payment is due within 7 days of invoice date.')
        elements.append(Paragraph(terms, normal_style))
        elements.append(HRFlowable(width="100%", thickness=1, color=colors.HexColor('#e5e7eb'), spaceAfter=15))

        terms_style = ParagraphStyle(
            'Terms',
            parent=styles['Normal'],
            fontSize=8,
            textColor=colors.HexColor('#9ca3af'),
            fontName='Helvetica',
        )
        terms = getattr(settings, 'INVOICE_TERMS', 'Payment is due within 7 days of invoice date. Thank you for your business!')
        elements.append(Paragraph(f'Terms & Conditions: {terms}', terms_style))

        # Footer with company info
        elements.append(Spacer(1, 0.2*inch))
        footer_style = ParagraphStyle(
            'Footer',
            parent=styles['Normal'],
            fontSize=8,
            textColor=colors.HexColor('#9ca3af'),
            fontName='Helvetica',
            alignment=TA_CENTER,
        )
        elements.append(Paragraph(f'Generated by IGNY8 • {company_email}', footer_style))

        # Build PDF
        doc.build(elements)
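The generator reads `subtotal`, `tax`, and `discount_amount` off the invoice and its metadata; the underlying arithmetic can be sketched standalone. Field names (`unit_price`, `amount`) mirror the JSON `line_items` the new code iterates over; the sample items are made up for illustration:

```python
def compute_totals(line_items, tax_amount=0.0, discount_amount=0.0):
    """Sum JSON line items and apply tax/discount the way the PDF displays them."""
    subtotal = sum(float(item.get('amount', 0)) for item in line_items)
    total = subtotal + tax_amount - discount_amount
    return {'subtotal': round(subtotal, 2), 'total': round(total, 2)}

# Illustrative line items in the same dict shape the generator expects
items = [
    {'description': 'Pro plan', 'quantity': 1, 'unit_price': 49.00, 'amount': 49.00},
    {'description': 'Extra credits', 'quantity': 2, 'unit_price': 10.00, 'amount': 20.00},
]
print(compute_totals(items, tax_amount=6.90))
```

Keeping this math in one place (rather than recomputing it in the PDF layer) would also guard against the rendered totals drifting from the stored `invoice.total`.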
627	backend/igny8_core/business/billing/services/stripe_service.py	Normal file
@@ -0,0 +1,627 @@
"""
Stripe Service - Wrapper for Stripe API operations

Handles:
- Checkout sessions for subscriptions and credit packages
- Billing portal sessions for subscription management
- Webhook event construction and verification
- Customer management

Configuration stored in IntegrationProvider model (provider_id='stripe')
"""
import stripe
import logging
from typing import Optional, Dict, Any
from django.conf import settings
from django.utils import timezone
from igny8_core.modules.system.models import IntegrationProvider

logger = logging.getLogger(__name__)


class StripeConfigurationError(Exception):
    """Raised when Stripe is not properly configured"""
    pass


class StripeService:
    """Service for Stripe payment operations"""

    def __init__(self):
        """
        Initialize Stripe service with credentials from IntegrationProvider.

        Raises:
            StripeConfigurationError: If Stripe provider not configured or missing credentials
        """
        provider = IntegrationProvider.get_provider('stripe')
        if not provider:
            raise StripeConfigurationError(
                "Stripe provider not configured. Add 'stripe' provider in admin."
            )

        if not provider.api_secret:
            raise StripeConfigurationError(
                "Stripe secret key not configured. Set api_secret in provider."
            )

        self.is_sandbox = provider.is_sandbox
        self.provider = provider

        # Set Stripe API key
        stripe.api_key = provider.api_secret

        # Store keys for reference
        self.publishable_key = provider.api_key
        self.webhook_secret = provider.webhook_secret
        self.config = provider.config or {}

        # Default currency from config
        self.currency = self.config.get('currency', 'usd')

        logger.info(
            f"Stripe service initialized (sandbox={self.is_sandbox}, "
            f"currency={self.currency})"
        )

    @property
    def frontend_url(self) -> str:
        """Get frontend URL from Django settings"""
        return getattr(settings, 'FRONTEND_URL', 'http://localhost:3000')

    def get_publishable_key(self) -> str:
        """Return publishable key for frontend use"""
        return self.publishable_key

    # ========== Customer Management ==========

    def _get_or_create_customer(self, account) -> str:
        """
        Get existing Stripe customer or create new one.

        Args:
            account: Account model instance

        Returns:
            str: Stripe customer ID
        """
        # Return existing customer if available
        if account.stripe_customer_id:
            try:
                # Verify customer still exists in Stripe
                stripe.Customer.retrieve(account.stripe_customer_id)
                return account.stripe_customer_id
            except stripe.error.InvalidRequestError:
                # Customer was deleted, create new one
                logger.warning(
                    f"Stripe customer {account.stripe_customer_id} not found, creating new"
                )

        # Create new customer
        customer = stripe.Customer.create(
            email=account.billing_email or account.owner.email,
            name=account.name,
            metadata={
                'account_id': str(account.id),
                'environment': 'sandbox' if self.is_sandbox else 'production'
            },
        )

        # Save customer ID to account
        account.stripe_customer_id = customer.id
        account.save(update_fields=['stripe_customer_id', 'updated_at'])

        logger.info(f"Created Stripe customer {customer.id} for account {account.id}")

        return customer.id

    def get_customer(self, account) -> Optional[Dict]:
        """
        Get Stripe customer details.

        Args:
            account: Account model instance

        Returns:
            dict: Customer data or None if not found
        """
        if not account.stripe_customer_id:
            return None

        try:
            customer = stripe.Customer.retrieve(account.stripe_customer_id)
            return {
                'id': customer.id,
                'email': customer.email,
                'name': customer.name,
                'created': customer.created,
                'default_source': customer.default_source,
            }
        except stripe.error.InvalidRequestError:
            return None

    # ========== Checkout Sessions ==========

    def create_checkout_session(
        self,
        account,
        plan,
        success_url: Optional[str] = None,
        cancel_url: Optional[str] = None,
        allow_promotion_codes: bool = True,
        trial_period_days: Optional[int] = None,
    ) -> Dict[str, Any]:
        """
        Create Stripe Checkout session for new subscription.

        Args:
            account: Account model instance
            plan: Plan model instance with stripe_price_id
            success_url: URL to redirect after successful payment
            cancel_url: URL to redirect if payment is canceled
            allow_promotion_codes: Allow discount codes in checkout
            trial_period_days: Optional trial period (overrides plan default)

        Returns:
            dict: Session data with checkout_url and session_id

        Raises:
            ValueError: If plan has no stripe_price_id
        """
        if not plan.stripe_price_id:
            raise ValueError(
                f"Plan '{plan.name}' (id={plan.id}) has no stripe_price_id configured"
            )

        # Get or create customer
        customer_id = self._get_or_create_customer(account)

        # Build URLs
        if not success_url:
            success_url = f'{self.frontend_url}/account/plans?success=true&session_id={{CHECKOUT_SESSION_ID}}'
        if not cancel_url:
            cancel_url = f'{self.frontend_url}/account/plans?canceled=true'

        # Build subscription data
        subscription_data = {
            'metadata': {
                'account_id': str(account.id),
                'plan_id': str(plan.id),
            }
        }

        if trial_period_days:
            subscription_data['trial_period_days'] = trial_period_days

        # Create checkout session
        session = stripe.checkout.Session.create(
            customer=customer_id,
            payment_method_types=self.config.get('payment_methods', ['card']),
            mode='subscription',
            line_items=[{
                'price': plan.stripe_price_id,
                'quantity': 1,
            }],
            success_url=success_url,
            cancel_url=cancel_url,
            allow_promotion_codes=allow_promotion_codes,
            metadata={
                'account_id': str(account.id),
                'plan_id': str(plan.id),
                'type': 'subscription',
            },
            subscription_data=subscription_data,
        )

        logger.info(
            f"Created Stripe checkout session {session.id} for account {account.id}, "
            f"plan {plan.name}"
        )

        return {
            'checkout_url': session.url,
            'session_id': session.id,
        }

    def create_credit_checkout_session(
        self,
        account,
        credit_package,
        success_url: Optional[str] = None,
        cancel_url: Optional[str] = None,
    ) -> Dict[str, Any]:
        """
        Create Stripe Checkout session for one-time credit purchase.

        Args:
            account: Account model instance
            credit_package: CreditPackage model instance
            success_url: URL to redirect after successful payment
            cancel_url: URL to redirect if payment is canceled

        Returns:
            dict: Session data with checkout_url and session_id
        """
        # Get or create customer
        customer_id = self._get_or_create_customer(account)

        # Build URLs
        if not success_url:
            success_url = f'{self.frontend_url}/account/usage?purchase=success&session_id={{CHECKOUT_SESSION_ID}}'
        if not cancel_url:
            cancel_url = f'{self.frontend_url}/account/usage?purchase=canceled'

        # Use existing Stripe price if available, otherwise create price_data
        if credit_package.stripe_price_id:
            line_items = [{
                'price': credit_package.stripe_price_id,
                'quantity': 1,
            }]
        else:
            # Create price_data for dynamic pricing
            line_items = [{
                'price_data': {
                    'currency': self.currency,
                    'product_data': {
                        'name': credit_package.name,
                        'description': f'{credit_package.credits} credits',
                    },
                    'unit_amount': int(credit_package.price * 100),  # Convert to cents
                },
                'quantity': 1,
            }]

        # Create checkout session
        session = stripe.checkout.Session.create(
            customer=customer_id,
            payment_method_types=self.config.get('payment_methods', ['card']),
            mode='payment',
            line_items=line_items,
            success_url=success_url,
            cancel_url=cancel_url,
            metadata={
                'account_id': str(account.id),
                'credit_package_id': str(credit_package.id),
                'credit_amount': str(credit_package.credits),
                'type': 'credit_purchase',
            },
        )

        logger.info(
            f"Created Stripe credit checkout session {session.id} for account {account.id}, "
            f"package {credit_package.name} ({credit_package.credits} credits)"
        )

        return {
            'checkout_url': session.url,
            'session_id': session.id,
        }
|
||||
|
||||
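Side note on `unit_amount`: Stripe expects an integer in the smallest currency unit, and `int(price * 100)` silently truncates if `price` is ever a float. A minimal sketch of a safer conversion, assuming the package price is a `Decimal` (the helper name is ours, not part of the codebase):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_minor_units(price: Decimal, exponent: int = 2) -> int:
    """Convert a decimal price to integer minor units (e.g. cents)."""
    quantum = Decimal(1).scaleb(-exponent)  # 0.01 for two-decimal currencies
    return int(price.quantize(quantum, rounding=ROUND_HALF_UP).scaleb(exponent))

# Decimal('19.99') -> 1999, whereas float math int(19.99 * 100) yields 1998
```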
    def get_checkout_session(self, session_id: str) -> Optional[Dict]:
        """
        Retrieve checkout session details.

        Args:
            session_id: Stripe checkout session ID

        Returns:
            dict: Session data or None if not found
        """
        try:
            session = stripe.checkout.Session.retrieve(session_id)
            return {
                'id': session.id,
                'status': session.status,
                'payment_status': session.payment_status,
                'customer': session.customer,
                'subscription': session.subscription,
                'metadata': session.metadata,
                'amount_total': session.amount_total,
                'currency': session.currency,
            }
        except stripe.error.InvalidRequestError as e:
            logger.error(f"Failed to retrieve checkout session {session_id}: {e}")
            return None
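When the frontend lands back on `success_url`, the session dict retrieved above is what decides fulfillment. The status values come from Stripe's Checkout session lifecycle (`status`: open/complete/expired; `payment_status`: paid/unpaid/no_payment_required); the helper name below is our own sketch, not part of the service:

```python
def is_session_paid(session: dict) -> bool:
    """A completed Checkout session whose payment cleared (or needed no payment)."""
    return (
        session.get('status') == 'complete'
        and session.get('payment_status') in ('paid', 'no_payment_required')
    )

print(is_session_paid({'status': 'complete', 'payment_status': 'paid'}))  # True
print(is_session_paid({'status': 'open', 'payment_status': 'unpaid'}))    # False
```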
    # ========== Billing Portal ==========

    def create_billing_portal_session(
        self,
        account,
        return_url: Optional[str] = None,
    ) -> Dict[str, Any]:
        """
        Create Stripe Billing Portal session for subscription management.

        Allows customers to:
        - Update payment method
        - View billing history
        - Cancel subscription
        - Update billing info

        Args:
            account: Account model instance
            return_url: URL to return to after the portal session

        Returns:
            dict: Portal session data with portal_url

        Raises:
            ValueError: If the billing portal is disabled in configuration
        """
        if not self.config.get('billing_portal_enabled', True):
            raise ValueError("Billing portal is disabled in configuration")

        # Get or create customer
        customer_id = self._get_or_create_customer(account)

        if not return_url:
            return_url = f'{self.frontend_url}/account/plans'

        # Create billing portal session
        session = stripe.billing_portal.Session.create(
            customer=customer_id,
            return_url=return_url,
        )

        logger.info(
            f"Created Stripe billing portal session for account {account.id}"
        )

        return {
            'portal_url': session.url,
        }
    # ========== Subscription Management ==========

    def get_subscription(self, subscription_id: str) -> Optional[Dict]:
        """
        Get subscription details from Stripe.

        Args:
            subscription_id: Stripe subscription ID

        Returns:
            dict: Subscription data or None if not found
        """
        try:
            sub = stripe.Subscription.retrieve(subscription_id)
            return {
                'id': sub.id,
                'status': sub.status,
                'current_period_start': sub.current_period_start,
                'current_period_end': sub.current_period_end,
                'cancel_at_period_end': sub.cancel_at_period_end,
                'canceled_at': sub.canceled_at,
                'ended_at': sub.ended_at,
                'customer': sub.customer,
                'items': [{
                    'id': item.id,
                    'price_id': item.price.id,
                    'quantity': item.quantity,
                } for item in sub['items'].data],
                'metadata': sub.metadata,
            }
        except stripe.error.InvalidRequestError as e:
            logger.error(f"Failed to retrieve subscription {subscription_id}: {e}")
            return None

    def cancel_subscription(
        self,
        subscription_id: str,
        at_period_end: bool = True
    ) -> Dict[str, Any]:
        """
        Cancel a Stripe subscription.

        Args:
            subscription_id: Stripe subscription ID
            at_period_end: If True, cancel at the end of the billing period;
                if False, cancel immediately

        Returns:
            dict: Updated subscription data
        """
        if at_period_end:
            sub = stripe.Subscription.modify(
                subscription_id,
                cancel_at_period_end=True
            )
            logger.info(f"Subscription {subscription_id} marked for cancellation at period end")
        else:
            sub = stripe.Subscription.delete(subscription_id)
            logger.info(f"Subscription {subscription_id} canceled immediately")

        return {
            'id': sub.id,
            'status': sub.status,
            'cancel_at_period_end': sub.cancel_at_period_end,
        }
    def update_subscription(
        self,
        subscription_id: str,
        new_price_id: str,
        proration_behavior: str = 'create_prorations'
    ) -> Dict[str, Any]:
        """
        Update subscription to a new plan/price.

        Args:
            subscription_id: Stripe subscription ID
            new_price_id: New Stripe price ID
            proration_behavior: How to handle proration
                - 'create_prorations': Prorate the change
                - 'none': No proration
                - 'always_invoice': Prorate and invoice immediately

        Returns:
            dict: Updated subscription data
        """
        # Get current subscription
        sub = stripe.Subscription.retrieve(subscription_id)

        # Update the subscription item
        updated = stripe.Subscription.modify(
            subscription_id,
            items=[{
                'id': sub['items'].data[0].id,
                'price': new_price_id,
            }],
            proration_behavior=proration_behavior,
        )

        logger.info(
            f"Updated subscription {subscription_id} to price {new_price_id}"
        )

        return {
            'id': updated.id,
            'status': updated.status,
            'current_period_end': updated.current_period_end,
        }
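For intuition on what `'create_prorations'` does: Stripe credits the unused fraction of the old price and charges the same fraction of the new one. Stripe computes this to the second; the back-of-envelope sketch below is an illustration only, not the service's code:

```python
def proration_delta(old_cents: int, new_cents: int,
                    seconds_left: int, period_seconds: int) -> int:
    """Approximate net proration in cents for a mid-period plan switch."""
    fraction = seconds_left / period_seconds
    credit = round(old_cents * fraction)  # unused time on the old price
    charge = round(new_cents * fraction)  # remaining time on the new price
    return charge - credit

# Upgrading 1000 -> 3000 cents halfway through a period: net 1000 cents due
```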
    # ========== Webhook Handling ==========

    def construct_webhook_event(
        self,
        payload: bytes,
        sig_header: str
    ) -> stripe.Event:
        """
        Verify and construct a webhook event from Stripe.

        Args:
            payload: Raw request body
            sig_header: Stripe-Signature header value

        Returns:
            stripe.Event: Verified event object

        Raises:
            stripe.error.SignatureVerificationError: If the signature is invalid
            StripeConfigurationError: If no webhook secret is configured
        """
        if not self.webhook_secret:
            raise StripeConfigurationError(
                "Webhook secret not configured. Set webhook_secret in provider."
            )

        return stripe.Webhook.construct_event(
            payload, sig_header, self.webhook_secret
        )
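`construct_event` delegates verification to the stripe library. For intuition, Stripe signs `"{timestamp}.{payload}"` with the endpoint secret via HMAC-SHA256 and sends the result in the `v1` field of the `Stripe-Signature` header. A simplified standalone illustration (handles a single `v1` signature only; real code should keep using `stripe.Webhook.construct_event`):

```python
import hmac
import hashlib
import time

def verify_stripe_signature(payload: bytes, sig_header: str, secret: str,
                            tolerance: int = 300) -> bool:
    """Check a Stripe-Signature header against the raw request body."""
    parts = dict(item.split('=', 1) for item in sig_header.split(','))
    signed_payload = parts['t'].encode() + b'.' + payload
    expected = hmac.new(secret.encode(), signed_payload, hashlib.sha256).hexdigest()
    # Constant-time compare, then reject stale timestamps to limit replay
    return (hmac.compare_digest(expected, parts.get('v1', ''))
            and abs(time.time() - int(parts['t'])) <= tolerance)
```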
    # ========== Invoice Operations ==========

    def get_invoice(self, invoice_id: str) -> Optional[Dict]:
        """
        Get invoice details from Stripe.

        Args:
            invoice_id: Stripe invoice ID

        Returns:
            dict: Invoice data or None if not found
        """
        try:
            invoice = stripe.Invoice.retrieve(invoice_id)
            return {
                'id': invoice.id,
                'status': invoice.status,
                'amount_due': invoice.amount_due,
                'amount_paid': invoice.amount_paid,
                'currency': invoice.currency,
                'customer': invoice.customer,
                'subscription': invoice.subscription,
                'invoice_pdf': invoice.invoice_pdf,
                'hosted_invoice_url': invoice.hosted_invoice_url,
            }
        except stripe.error.InvalidRequestError as e:
            logger.error(f"Failed to retrieve invoice {invoice_id}: {e}")
            return None

    def get_upcoming_invoice(self, customer_id: str) -> Optional[Dict]:
        """
        Get the upcoming invoice preview for a customer.

        Args:
            customer_id: Stripe customer ID

        Returns:
            dict: Upcoming invoice preview or None
        """
        try:
            invoice = stripe.Invoice.upcoming(customer=customer_id)
            return {
                'amount_due': invoice.amount_due,
                'currency': invoice.currency,
                'next_payment_attempt': invoice.next_payment_attempt,
                'lines': [{
                    'description': line.description,
                    'amount': line.amount,
                } for line in invoice.lines.data],
            }
        except stripe.error.InvalidRequestError:
            return None
    # ========== Refunds ==========

    def create_refund(
        self,
        payment_intent_id: Optional[str] = None,
        charge_id: Optional[str] = None,
        amount: Optional[int] = None,
        reason: Optional[str] = None,
    ) -> Dict[str, Any]:
        """
        Create a refund for a payment.

        Args:
            payment_intent_id: Stripe PaymentIntent ID
            charge_id: Stripe Charge ID (alternative to payment_intent_id)
            amount: Amount to refund in cents (None for a full refund)
            reason: Reason for refund ('duplicate', 'fraudulent', 'requested_by_customer')

        Returns:
            dict: Refund data

        Raises:
            ValueError: If neither payment_intent_id nor charge_id is given
        """
        params = {}

        if payment_intent_id:
            params['payment_intent'] = payment_intent_id
        elif charge_id:
            params['charge'] = charge_id
        else:
            raise ValueError("Either payment_intent_id or charge_id required")

        if amount:
            params['amount'] = amount

        if reason:
            params['reason'] = reason

        refund = stripe.Refund.create(**params)

        logger.info(
            f"Created refund {refund.id} for "
            f"{'payment_intent ' + payment_intent_id if payment_intent_id else 'charge ' + charge_id}"
        )

        return {
            'id': refund.id,
            'amount': refund.amount,
            'status': refund.status,
            'reason': refund.reason,
        }

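The parameter assembly in `create_refund` reduces to a pure function, which makes its rules (exactly one of payment_intent/charge, optional amount and reason) easy to pin down in a unit test. A sketch mirroring the logic above; the function name is ours:

```python
from typing import Optional

def build_refund_params(payment_intent_id: Optional[str] = None,
                        charge_id: Optional[str] = None,
                        amount: Optional[int] = None,
                        reason: Optional[str] = None) -> dict:
    """Assemble keyword arguments for stripe.Refund.create()."""
    if not payment_intent_id and not charge_id:
        raise ValueError("Either payment_intent_id or charge_id required")
    params = ({'payment_intent': payment_intent_id} if payment_intent_id
              else {'charge': charge_id})
    if amount is not None:  # unlike `if amount:`, this keeps an explicit 0
        params['amount'] = amount
    if reason:
        params['reason'] = reason
    return params
```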

# Convenience function
def get_stripe_service() -> StripeService:
    """
    Get a StripeService instance.

    Returns:
        StripeService: Initialized service

    Raises:
        StripeConfigurationError: If Stripe is not configured
    """
    return StripeService()
@@ -172,7 +172,7 @@ def _attempt_stripe_renewal(subscription: Subscription, invoice: Invoice) -> bool:
         payment_method='stripe',
         status='processing',
         stripe_payment_intent_id=intent.id,
-        metadata={'renewal': True}
+        metadata={'renewal': True, 'auto_approved': True}
     )

     return True
@@ -210,7 +210,7 @@ def _attempt_paypal_renewal(subscription: Subscription, invoice: Invoice) -> bool:
         payment_method='paypal',
         status='processing',
         paypal_order_id=subscription.metadata['paypal_subscription_id'],
-        metadata={'renewal': True}
+        metadata={'renewal': True, 'auto_approved': True}
     )
     return True
 else:
@@ -1,7 +1,7 @@
 """Billing routes including bank transfer confirmation and credit endpoints."""
 from django.urls import path, include
 from rest_framework.routers import DefaultRouter
-from .views import (
+from .billing_views import (
     BillingViewSet,
     InvoiceViewSet,
     PaymentViewSet,
@@ -15,6 +15,24 @@ from igny8_core.modules.billing.views import (
     CreditTransactionViewSet,
     AIModelConfigViewSet,
 )
+# Payment gateway views
+from .views.stripe_views import (
+    StripeConfigView,
+    StripeCheckoutView,
+    StripeCreditCheckoutView,
+    StripeBillingPortalView,
+    StripeReturnVerificationView,
+    stripe_webhook,
+)
+from .views.paypal_views import (
+    PayPalConfigView,
+    PayPalCreateOrderView,
+    PayPalCreateSubscriptionOrderView,
+    PayPalCaptureOrderView,
+    PayPalCreateSubscriptionView,
+    PayPalReturnVerificationView,
+    paypal_webhook,
+)

 router = DefaultRouter()
 router.register(r'admin', BillingViewSet, basename='billing-admin')
@@ -35,4 +53,21 @@ urlpatterns = [
     path('', include(router.urls)),
     # User-facing usage summary endpoint for plan limits
     path('usage-summary/', get_usage_summary, name='usage-summary'),
+
+    # Stripe endpoints
+    path('stripe/config/', StripeConfigView.as_view(), name='stripe-config'),
+    path('stripe/checkout/', StripeCheckoutView.as_view(), name='stripe-checkout'),
+    path('stripe/credit-checkout/', StripeCreditCheckoutView.as_view(), name='stripe-credit-checkout'),
+    path('stripe/billing-portal/', StripeBillingPortalView.as_view(), name='stripe-billing-portal'),
+    path('stripe/verify-return/', StripeReturnVerificationView.as_view(), name='stripe-verify-return'),
+    path('webhooks/stripe/', stripe_webhook, name='stripe-webhook'),
+
+    # PayPal endpoints
+    path('paypal/config/', PayPalConfigView.as_view(), name='paypal-config'),
+    path('paypal/create-order/', PayPalCreateOrderView.as_view(), name='paypal-create-order'),
+    path('paypal/create-subscription-order/', PayPalCreateSubscriptionOrderView.as_view(), name='paypal-create-subscription-order'),
+    path('paypal/capture-order/', PayPalCaptureOrderView.as_view(), name='paypal-capture-order'),
+    path('paypal/create-subscription/', PayPalCreateSubscriptionView.as_view(), name='paypal-create-subscription'),
+    path('paypal/verify-return/', PayPalReturnVerificationView.as_view(), name='paypal-verify-return'),
+    path('webhooks/paypal/', paypal_webhook, name='paypal-webhook'),
 ]
@@ -5,6 +5,8 @@ API endpoints for generating and downloading invoice PDFs
 from django.http import HttpResponse
 from rest_framework.decorators import api_view, permission_classes
 from rest_framework.permissions import IsAuthenticated
+from rest_framework.response import Response
+from rest_framework import status
 from igny8_core.business.billing.models import Invoice
 from igny8_core.business.billing.services.pdf_service import InvoicePDFGenerator
 from igny8_core.business.billing.utils.errors import not_found_response
@@ -22,20 +24,46 @@ def download_invoice_pdf(request, invoice_id):
     GET /api/v1/billing/invoices/<id>/pdf/
     """
     try:
-        invoice = Invoice.objects.prefetch_related('line_items').get(
+        # Note: line_items is a JSONField, not a related model - no prefetch needed
+        invoice = Invoice.objects.select_related('account', 'account__owner', 'subscription', 'subscription__plan').get(
             id=invoice_id,
             account=request.user.account
         )
     except Invoice.DoesNotExist:
         return not_found_response('Invoice', invoice_id)

-    # Generate PDF
-    pdf_buffer = InvoicePDFGenerator.generate_invoice_pdf(invoice)
-
-    # Return PDF response
-    response = HttpResponse(pdf_buffer.read(), content_type='application/pdf')
-    response['Content-Disposition'] = f'attachment; filename="invoice_{invoice.invoice_number}.pdf"'
-
-    logger.info(f'Invoice PDF downloaded: {invoice.invoice_number} by user {request.user.id}')
-
-    return response
+    try:
+        # Generate PDF
+        pdf_buffer = InvoicePDFGenerator.generate_invoice_pdf(invoice)
+
+        # Build descriptive filename: IGNY8-Invoice-INV123456-Growth-2026-01-08.pdf
+        plan_name = ''
+        if invoice.subscription and invoice.subscription.plan:
+            plan_name = invoice.subscription.plan.name.replace(' ', '-')
+        elif invoice.metadata and 'plan_name' in invoice.metadata:
+            plan_name = invoice.metadata['plan_name'].replace(' ', '-')
+
+        date_str = invoice.invoice_date.strftime('%Y-%m-%d') if invoice.invoice_date else ''
+
+        filename_parts = ['IGNY8', 'Invoice', invoice.invoice_number]
+        if plan_name:
+            filename_parts.append(plan_name)
+        if date_str:
+            filename_parts.append(date_str)
+
+        filename = '-'.join(filename_parts) + '.pdf'
+
+        # Return PDF response
+        response = HttpResponse(pdf_buffer.read(), content_type='application/pdf')
+        response['Content-Disposition'] = f'attachment; filename="{filename}"'
+
+        logger.info(f'Invoice PDF downloaded: {invoice.invoice_number} by user {request.user.id}')
+
+        return response
+
+    except Exception as e:
+        logger.error(f'Failed to generate PDF for invoice {invoice_id}: {str(e)}', exc_info=True)
+        return Response(
+            {'error': 'Failed to generate PDF', 'detail': str(e)},
+            status=status.HTTP_500_INTERNAL_SERVER_ERROR
+        )
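The filename assembly in the hunk above is a pure string transformation, so its format can be pinned down in isolation. A sketch with hypothetical inputs (the helper name is ours):

```python
def invoice_filename(number: str, plan: str = '', date: str = '') -> str:
    """Build the descriptive download name used for invoice PDFs."""
    parts = ['IGNY8', 'Invoice', number]
    if plan:
        parts.append(plan.replace(' ', '-'))
    if date:
        parts.append(date)
    return '-'.join(parts) + '.pdf'

print(invoice_filename('INV123456', 'Growth', '2026-01-08'))
# IGNY8-Invoice-INV123456-Growth-2026-01-08.pdf
```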
backend/igny8_core/business/billing/views/paypal_views.py — new file, 1106 lines (diff suppressed because it is too large)
@@ -160,20 +160,18 @@ def initiate_refund(request, payment_id):
 def _process_stripe_refund(payment: Payment, amount: Decimal, reason: str) -> bool:
     """Process Stripe refund"""
     try:
-        import stripe
-        from igny8_core.business.billing.utils.payment_gateways import get_stripe_client
+        from igny8_core.business.billing.services.stripe_service import StripeService

-        stripe_client = get_stripe_client()
+        stripe_service = StripeService()

-        refund = stripe_client.Refund.create(
-            payment_intent=payment.stripe_payment_intent_id,
+        refund = stripe_service.create_refund(
+            payment_intent_id=payment.stripe_payment_intent_id,
             amount=int(amount * 100),  # Convert to cents
             reason='requested_by_customer',
-            metadata={'reason': reason}
         )

-        payment.metadata['stripe_refund_id'] = refund.id
-        return refund.status == 'succeeded'
+        payment.metadata['stripe_refund_id'] = refund.get('id')
+        return refund.get('status') == 'succeeded'

     except Exception as e:
         logger.exception(f"Stripe refund failed for payment {payment.id}: {str(e)}")
@@ -183,25 +181,19 @@ def _process_stripe_refund(payment: Payment, amount: Decimal, reason: str) -> bool:
 def _process_paypal_refund(payment: Payment, amount: Decimal, reason: str) -> bool:
     """Process PayPal refund"""
     try:
-        from igny8_core.business.billing.utils.payment_gateways import get_paypal_client
+        from igny8_core.business.billing.services.paypal_service import PayPalService

-        paypal_client = get_paypal_client()
+        paypal_service = PayPalService()

-        refund_request = {
-            'amount': {
-                'value': str(amount),
-                'currency_code': payment.currency
-            },
-            'note_to_payer': reason
-        }
-
-        refund = paypal_client.payments.captures.refund(
-            payment.paypal_capture_id,
-            refund_request
+        refund = paypal_service.refund_capture(
+            capture_id=payment.paypal_capture_id,
+            amount=float(amount),
+            currency=payment.currency,
+            note=reason,
         )

-        payment.metadata['paypal_refund_id'] = refund.id
-        return refund.status == 'COMPLETED'
+        payment.metadata['paypal_refund_id'] = refund.get('id')
+        return refund.get('status') == 'COMPLETED'

     except Exception as e:
         logger.exception(f"PayPal refund failed for payment {payment.id}: {str(e)}")
backend/igny8_core/business/billing/views/stripe_views.py — new file, 1016 lines (diff suppressed because it is too large)
@@ -119,10 +119,40 @@ class Tasks(SoftDeletableModel, SiteSectorBaseModel):

     objects = SoftDeleteManager()
     all_objects = models.Manager()

     def __str__(self):
         return self.title

+    def soft_delete(self, user=None, reason=None, retention_days=None):
+        """
+        Override soft_delete to cascade to related models.
+        This ensures Images and ContentClusterMap are also deleted when a Task is deleted.
+        """
+        import logging
+        logger = logging.getLogger(__name__)
+
+        # Soft-delete related Images (which are also SoftDeletable)
+        related_images = self.images.filter(is_deleted=False)
+        images_count = related_images.count()
+        for image in related_images:
+            image.soft_delete(user=user, reason=f"Parent task deleted: {reason or 'No reason'}")
+
+        # Hard-delete ContentClusterMap (not soft-deletable)
+        cluster_maps_count = self.cluster_mappings.count()
+        self.cluster_mappings.all().delete()
+
+        # Hard-delete ContentAttribute (not soft-deletable)
+        attributes_count = self.attribute_mappings.count()
+        self.attribute_mappings.all().delete()
+
+        logger.info(
+            f"[Tasks.soft_delete] Task {self.id} '{self.title}' cascade delete: "
+            f"{images_count} images, {cluster_maps_count} cluster maps, {attributes_count} attributes"
+        )
+
+        # Call parent soft_delete
+        super().soft_delete(user=user, reason=reason, retention_days=retention_days)


 class ContentTaxonomyRelation(models.Model):
     """Through model for Content-Taxonomy many-to-many relationship"""
@@ -241,7 +271,8 @@ class Content(SoftDeletableModel, SiteSectorBaseModel):
     STATUS_CHOICES = [
         ('draft', 'Draft'),
         ('review', 'Review'),
-        ('published', 'Published'),
+        ('approved', 'Approved'),    # Ready for publishing to external site
+        ('published', 'Published'),  # Actually published on external site
     ]
     status = models.CharField(
         max_length=50,
@@ -251,6 +282,33 @@ class Content(SoftDeletableModel, SiteSectorBaseModel):
         help_text="Content status"
     )

+    # Publishing scheduler fields
+    SITE_STATUS_CHOICES = [
+        ('not_published', 'Not Published'),
+        ('scheduled', 'Scheduled'),
+        ('publishing', 'Publishing'),
+        ('published', 'Published'),
+        ('failed', 'Failed'),
+    ]
+    site_status = models.CharField(
+        max_length=50,
+        choices=SITE_STATUS_CHOICES,
+        default='not_published',
+        db_index=True,
+        help_text="External site publishing status"
+    )
+    scheduled_publish_at = models.DateTimeField(
+        null=True,
+        blank=True,
+        db_index=True,
+        help_text="Scheduled time for publishing to external site"
+    )
+    site_status_updated_at = models.DateTimeField(
+        null=True,
+        blank=True,
+        help_text="Last time site_status was changed"
+    )
+
     created_at = models.DateTimeField(auto_now_add=True)
     updated_at = models.DateTimeField(auto_now=True)
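Given `scheduled_publish_at` and the day/time-slot settings introduced elsewhere in this changeset, a scheduler presumably picks the earliest allowed future slot. A hedged sketch of that selection (the helper name and exact policy are our assumption, not code from the diff):

```python
from datetime import datetime, timedelta

DAY_KEYS = ['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun']

def next_publish_slot(now: datetime, publish_days: list, time_slots: list) -> datetime:
    """Earliest future datetime on an allowed weekday at an allowed HH:MM slot."""
    for offset in range(8):  # today plus a full week always contains a slot
        day = now + timedelta(days=offset)
        if DAY_KEYS[day.weekday()] not in publish_days:
            continue
        for hhmm in sorted(time_slots):
            hour, minute = map(int, hhmm.split(':'))
            candidate = day.replace(hour=hour, minute=minute, second=0, microsecond=0)
            if candidate > now:
                return candidate
    raise ValueError("no publish days configured")
```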
@@ -326,6 +384,61 @@ class Content(SoftDeletableModel, SiteSectorBaseModel):
             logger = logging.getLogger(__name__)
             logger.error(f"Error incrementing word usage for content {self.id}: {str(e)}")

+    def soft_delete(self, user=None, reason=None, retention_days=None):
+        """
+        Override soft_delete to cascade to related models.
+        This ensures Images, ContentClusterMap, ContentAttribute are also deleted.
+        """
+        import logging
+        logger = logging.getLogger(__name__)
+
+        # Soft-delete related Images (which are also SoftDeletable)
+        related_images = self.images.filter(is_deleted=False)
+        images_count = related_images.count()
+        for image in related_images:
+            image.soft_delete(user=user, reason=f"Parent content deleted: {reason or 'No reason'}")
+
+        # Hard-delete ContentClusterMap (not soft-deletable)
+        cluster_maps_count = self.cluster_mappings.count()
+        self.cluster_mappings.all().delete()
+
+        # Hard-delete ContentAttribute (not soft-deletable)
+        attributes_count = self.attributes.count()
+        self.attributes.all().delete()
+
+        # Hard-delete ContentTaxonomyRelation (through model for many-to-many)
+        taxonomy_relations_count = ContentTaxonomyRelation.objects.filter(content=self).count()
+        ContentTaxonomyRelation.objects.filter(content=self).delete()
+
+        logger.info(
+            f"[Content.soft_delete] Content {self.id} '{self.title}' cascade delete: "
+            f"{images_count} images, {cluster_maps_count} cluster maps, "
+            f"{attributes_count} attributes, {taxonomy_relations_count} taxonomy relations"
+        )
+
+        # Call parent soft_delete
+        super().soft_delete(user=user, reason=reason, retention_days=retention_days)
+
+    def hard_delete(self, using=None, keep_parents=False):
+        """
+        Override hard_delete to cascade to related models.
+        Django CASCADE should handle this, but we explicitly clean up for safety.
+        """
+        import logging
+        logger = logging.getLogger(__name__)
+
+        # Hard-delete related Images (including soft-deleted ones)
+        images_count = Images.all_objects.filter(content=self).count()
+        Images.all_objects.filter(content=self).delete()
+
+        logger.info(
+            f"[Content.hard_delete] Content {self.id} '{self.title}' hard delete: "
+            f"{images_count} images removed"
+        )
+
+        # Call parent hard_delete (Django CASCADE will handle the rest)
+        return super().hard_delete(using=using, keep_parents=keep_parents)


 class ContentTaxonomy(SiteSectorBaseModel):
     """
@@ -455,10 +568,33 @@ class Images(SoftDeletableModel, SiteSectorBaseModel):
             models.Index(fields=['content', 'position']),
             models.Index(fields=['task', 'position']),
         ]
+        # Ensure unique position per content+image_type combination
+        constraints = [
+            models.UniqueConstraint(
+                fields=['content', 'image_type', 'position'],
+                name='unique_content_image_type_position',
+                condition=models.Q(is_deleted=False)
+            ),
+        ]

     objects = SoftDeleteManager()
     all_objects = models.Manager()

+    @property
+    def aspect_ratio(self):
+        """
+        Determine aspect ratio based on position for layout rendering.
+        Position 0, 2: square (1:1)
+        Position 1, 3: landscape (16:9 or similar)
+        Featured: always landscape
+        """
+        if self.image_type == 'featured':
+            return 'landscape'
+        elif self.image_type == 'in_article':
+            # Even positions are square, odd positions are landscape
+            return 'square' if (self.position or 0) % 2 == 0 else 'landscape'
+        return 'square'  # Default
+
     def save(self, *args, **kwargs):
         """Track image usage when creating new images"""
         is_new = self.pk is None
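The `aspect_ratio` property above is a parity check on `position` plus a featured-image override. Restated as a standalone function so the rule is easy to verify in isolation (the free-function form is our sketch):

```python
def aspect_ratio(image_type: str, position) -> str:
    """Mirror of Images.aspect_ratio: featured is landscape, in-article alternates."""
    if image_type == 'featured':
        return 'landscape'
    if image_type == 'in_article':
        # Even positions (and a missing position) are square, odd are landscape
        return 'square' if (position or 0) % 2 == 0 else 'landscape'
    return 'square'

print(aspect_ratio('in_article', 3))    # landscape
print(aspect_ratio('in_article', None)) # square
print(aspect_ratio('featured', 3))      # landscape
```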
@@ -675,3 +811,14 @@ class ContentAttribute(SiteSectorBaseModel):

 # Backward compatibility alias
 ContentAttributeMap = ContentAttribute
+
+class ImagePrompts(Images):
+    """
+    Proxy model for Images to provide a separate admin interface focused on prompts.
+    This allows a dedicated "Image Prompts" view in the admin sidebar.
+    """
+    class Meta:
+        proxy = True
+        verbose_name = 'Image Prompt'
+        verbose_name_plural = 'Image Prompts'
+        app_label = 'writer'
@@ -26,17 +26,7 @@ class ContentValidationService:
     """
     errors = []

-    # Stage 3: Enforce "no cluster, no task" rule when feature flag enabled
-    from django.conf import settings
-    if getattr(settings, 'USE_SITE_BUILDER_REFACTOR', False):
-        if not task.cluster:
-            errors.append({
-                'field': 'cluster',
-                'code': 'missing_cluster',
-                'message': 'Task must be associated with a cluster before content generation',
-            })
-
-    # Stage 3: Validate entity_type is set
+    # Validate entity_type is set
     if not task.content_type:
         errors.append({
             'field': 'content_type',
@@ -0,0 +1,38 @@
+# Generated by Django 5.2.9 on 2026-01-01 06:37
+
+import django.core.validators
+import django.db.models.deletion
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+    dependencies = [
+        ('igny8_core_auth', '0018_add_country_remove_intent_seedkeyword'),
+        ('integration', '0002_add_sync_event_model'),
+    ]
+
+    operations = [
+        migrations.CreateModel(
+            name='PublishingSettings',
+            fields=[
+                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
+                ('auto_approval_enabled', models.BooleanField(default=True, help_text="Automatically approve content after review (moves to 'approved' status)")),
+                ('auto_publish_enabled', models.BooleanField(default=True, help_text='Automatically publish approved content to the external site')),
+                ('daily_publish_limit', models.PositiveIntegerField(default=3, help_text='Maximum number of articles to publish per day', validators=[django.core.validators.MinValueValidator(1)])),
+                ('weekly_publish_limit', models.PositiveIntegerField(default=15, help_text='Maximum number of articles to publish per week', validators=[django.core.validators.MinValueValidator(1)])),
+                ('monthly_publish_limit', models.PositiveIntegerField(default=50, help_text='Maximum number of articles to publish per month', validators=[django.core.validators.MinValueValidator(1)])),
+                ('publish_days', models.JSONField(default=list, help_text='Days of the week to publish (mon, tue, wed, thu, fri, sat, sun)')),
+                ('publish_time_slots', models.JSONField(default=list, help_text="Times of day to publish (HH:MM format, e.g., ['09:00', '14:00', '18:00'])")),
+                ('created_at', models.DateTimeField(auto_now_add=True)),
+                ('updated_at', models.DateTimeField(auto_now=True)),
+                ('account', models.ForeignKey(db_column='tenant_id', on_delete=django.db.models.deletion.CASCADE, related_name='%(class)s_set', to='igny8_core_auth.account')),
+                ('site', models.OneToOneField(help_text='Site these publishing settings belong to', on_delete=django.db.models.deletion.CASCADE, related_name='publishing_settings', to='igny8_core_auth.site')),
+            ],
+            options={
+                'verbose_name': 'Publishing Settings',
+                'verbose_name_plural': 'Publishing Settings',
+                'db_table': 'igny8_publishing_settings',
+            },
+        ),
+    ]
@@ -244,3 +244,100 @@ class SyncEvent(AccountBaseModel):

    def __str__(self):
        return f"{self.get_event_type_display()} - {self.description[:50]}"


class PublishingSettings(AccountBaseModel):
    """
    Site-level publishing configuration settings.
    Controls automatic approval, publishing limits, and scheduling.
    """

    DEFAULT_PUBLISH_DAYS = ['mon', 'tue', 'wed', 'thu', 'fri']
    DEFAULT_TIME_SLOTS = ['09:00', '14:00', '18:00']

    site = models.OneToOneField(
        'igny8_core_auth.Site',
        on_delete=models.CASCADE,
        related_name='publishing_settings',
        help_text="Site these publishing settings belong to"
    )

    # Auto-approval settings
    auto_approval_enabled = models.BooleanField(
        default=True,
        help_text="Automatically approve content after review (moves to 'approved' status)"
    )

    # Auto-publish settings
    auto_publish_enabled = models.BooleanField(
        default=True,
        help_text="Automatically publish approved content to the external site"
    )

    # Publishing limits
    daily_publish_limit = models.PositiveIntegerField(
        default=3,
        validators=[MinValueValidator(1)],
        help_text="Maximum number of articles to publish per day"
    )

    weekly_publish_limit = models.PositiveIntegerField(
        default=15,
        validators=[MinValueValidator(1)],
        help_text="Maximum number of articles to publish per week"
    )

    monthly_publish_limit = models.PositiveIntegerField(
        default=50,
        validators=[MinValueValidator(1)],
        help_text="Maximum number of articles to publish per month"
    )

    # Publishing schedule
    publish_days = models.JSONField(
        default=list,
        help_text="Days of the week to publish (mon, tue, wed, thu, fri, sat, sun)"
    )

    publish_time_slots = models.JSONField(
        default=list,
        help_text="Times of day to publish (HH:MM format, e.g., ['09:00', '14:00', '18:00'])"
    )

    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    class Meta:
        app_label = 'integration'
        db_table = 'igny8_publishing_settings'
        verbose_name = 'Publishing Settings'
        verbose_name_plural = 'Publishing Settings'

    def __str__(self):
        return f"Publishing Settings for {self.site.name}"

    def save(self, *args, **kwargs):
        """Set defaults for JSON fields if empty"""
        if not self.publish_days:
            self.publish_days = self.DEFAULT_PUBLISH_DAYS
        if not self.publish_time_slots:
            self.publish_time_slots = self.DEFAULT_TIME_SLOTS
        super().save(*args, **kwargs)

    @classmethod
    def get_or_create_for_site(cls, site):
        """Get or create publishing settings for a site with defaults"""
        settings, created = cls.objects.get_or_create(
            site=site,
            defaults={
                'account': site.account,
                'auto_approval_enabled': True,
                'auto_publish_enabled': True,
                'daily_publish_limit': 3,
                'weekly_publish_limit': 15,
                'monthly_publish_limit': 50,
                'publish_days': cls.DEFAULT_PUBLISH_DAYS,
                'publish_time_slots': cls.DEFAULT_TIME_SLOTS,
            }
        )
        return settings, created
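The `save()` override above back-fills empty JSON schedule fields before writing. The fallback logic can be exercised in isolation (a sketch in plain Python, no Django, with the model stripped to the two fields involved):

```python
DEFAULT_PUBLISH_DAYS = ['mon', 'tue', 'wed', 'thu', 'fri']
DEFAULT_TIME_SLOTS = ['09:00', '14:00', '18:00']

def fill_schedule_defaults(publish_days, publish_time_slots):
    """Mirror of PublishingSettings.save(): empty lists fall back to the class defaults."""
    return (
        publish_days or DEFAULT_PUBLISH_DAYS,
        publish_time_slots or DEFAULT_TIME_SLOTS,
    )

# An empty days list is replaced; an explicit slot list is kept as-is
days, slots = fill_schedule_defaults([], ['08:00'])
```

Note that `default=list` on the JSONField plus this `save()` hook means a row can never persist with an empty schedule.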
@@ -0,0 +1,259 @@
"""
Defaults Service
Creates sites with default settings for simplified onboarding.
"""
import logging
from typing import Dict, Any, Tuple, Optional
from django.db import transaction
from django.utils import timezone

from igny8_core.auth.models import Account, Site
from igny8_core.business.integration.models import PublishingSettings
from igny8_core.business.automation.models import AutomationConfig


logger = logging.getLogger(__name__)


# Default settings for new sites
DEFAULT_PUBLISHING_SETTINGS = {
    'auto_approval_enabled': True,
    'auto_publish_enabled': True,
    'daily_publish_limit': 3,
    'weekly_publish_limit': 15,
    'monthly_publish_limit': 50,
    'publish_days': ['mon', 'tue', 'wed', 'thu', 'fri'],
    'publish_time_slots': ['09:00', '14:00', '18:00'],
}

DEFAULT_AUTOMATION_SETTINGS = {
    'is_enabled': True,
    'frequency': 'daily',
    'scheduled_time': '02:00',
    'stage_1_batch_size': 50,
    'stage_2_batch_size': 1,
    'stage_3_batch_size': 20,
    'stage_4_batch_size': 1,
    'stage_5_batch_size': 1,
    'stage_6_batch_size': 1,
    'within_stage_delay': 3,
    'between_stage_delay': 5,
}


class DefaultsService:
    """
    Service for creating sites with sensible defaults.
    Used during onboarding for a simplified first-run experience.
    """

    def __init__(self, account: Account):
        self.account = account

    @transaction.atomic
    def create_site_with_defaults(
        self,
        site_data: Dict[str, Any],
        publishing_overrides: Optional[Dict[str, Any]] = None,
        automation_overrides: Optional[Dict[str, Any]] = None,
    ) -> Tuple[Site, PublishingSettings, AutomationConfig]:
        """
        Create a new site with default publishing and automation settings.

        Args:
            site_data: Dict with site fields (name, domain, etc.)
            publishing_overrides: Optional overrides for publishing settings
            automation_overrides: Optional overrides for automation settings

        Returns:
            Tuple of (Site, PublishingSettings, AutomationConfig)
        """
        # Check hard limit for sites BEFORE creating
        from igny8_core.business.billing.services.limit_service import LimitService, HardLimitExceededError
        LimitService.check_hard_limit(self.account, 'sites', additional_count=1)

        # Create the site
        site = Site.objects.create(
            account=self.account,
            name=site_data.get('name', 'My Site'),
            domain=site_data.get('domain', ''),
            base_url=site_data.get('base_url', ''),
            hosting_type=site_data.get('hosting_type', 'wordpress'),
            is_active=site_data.get('is_active', True),
        )

        logger.info(f"Created site: {site.name} (id={site.id}) for account {self.account.id}")

        # Create publishing settings with defaults
        publishing_settings = self._create_publishing_settings(
            site,
            overrides=publishing_overrides
        )

        # Create automation config with defaults
        automation_config = self._create_automation_config(
            site,
            overrides=automation_overrides
        )

        return site, publishing_settings, automation_config

    def _create_publishing_settings(
        self,
        site: Site,
        overrides: Optional[Dict[str, Any]] = None
    ) -> PublishingSettings:
        """Create publishing settings with defaults, applying any overrides."""
        settings_data = {**DEFAULT_PUBLISHING_SETTINGS}

        if overrides:
            settings_data.update(overrides)

        publishing_settings = PublishingSettings.objects.create(
            account=self.account,
            site=site,
            **settings_data
        )

        logger.info(
            f"Created publishing settings for site {site.id}: "
            f"auto_approval={publishing_settings.auto_approval_enabled}, "
            f"auto_publish={publishing_settings.auto_publish_enabled}"
        )

        return publishing_settings

    def _create_automation_config(
        self,
        site: Site,
        overrides: Optional[Dict[str, Any]] = None
    ) -> AutomationConfig:
        """Create automation config with defaults, applying any overrides."""
        config_data = {**DEFAULT_AUTOMATION_SETTINGS}

        if overrides:
            config_data.update(overrides)

        # Pull the scheduled time out so it can seed the next-run calculation
        scheduled_time = config_data.pop('scheduled_time', '02:00')

        automation_config = AutomationConfig.objects.create(
            account=self.account,
            site=site,
            scheduled_time=scheduled_time,
            **config_data
        )

        # Set next run to the next occurrence of the scheduled time if enabled
        if automation_config.is_enabled:
            next_run = self._calculate_initial_next_run(scheduled_time)
            automation_config.next_run_at = next_run
            automation_config.save(update_fields=['next_run_at'])

        logger.info(
            f"Created automation config for site {site.id}: "
            f"enabled={automation_config.is_enabled}, "
            f"frequency={automation_config.frequency}, "
            f"next_run={automation_config.next_run_at}"
        )

        return automation_config

    def _calculate_initial_next_run(self, scheduled_time: str) -> timezone.datetime:
        """Calculate the initial next run datetime (the next occurrence of the scheduled time)."""
        now = timezone.now()

        # Parse time
        try:
            hour, minute = map(int, scheduled_time.split(':'))
        except (ValueError, AttributeError):
            hour, minute = 2, 0  # Default to 2:00 AM

        # Start from today at the scheduled time
        next_run = now.replace(
            hour=hour,
            minute=minute,
            second=0,
            microsecond=0
        )

        # If the time has passed today, schedule for tomorrow
        if next_run <= now:
            next_run += timezone.timedelta(days=1)

        return next_run

    @transaction.atomic
    def apply_defaults_to_existing_site(
        self,
        site: Site,
        force_overwrite: bool = False
    ) -> Tuple[PublishingSettings, AutomationConfig]:
        """
        Apply default settings to an existing site.

        Args:
            site: Existing Site instance
            force_overwrite: If True, overwrite existing settings. If False, only create if missing.

        Returns:
            Tuple of (PublishingSettings, AutomationConfig)
        """
        # Handle publishing settings
        if force_overwrite:
            PublishingSettings.objects.filter(site=site).delete()
            publishing_settings = self._create_publishing_settings(site)
        else:
            publishing_settings, created = PublishingSettings.objects.get_or_create(
                site=site,
                defaults={
                    'account': self.account,
                    **DEFAULT_PUBLISHING_SETTINGS
                }
            )
            if not created:
                logger.info(f"Publishing settings already exist for site {site.id}")

        # Handle automation config
        if force_overwrite:
            AutomationConfig.objects.filter(site=site).delete()
            automation_config = self._create_automation_config(site)
        else:
            try:
                automation_config = AutomationConfig.objects.get(site=site)
                logger.info(f"Automation config already exists for site {site.id}")
            except AutomationConfig.DoesNotExist:
                automation_config = self._create_automation_config(site)

        return publishing_settings, automation_config


def create_site_with_defaults(
    account: Account,
    site_data: Dict[str, Any],
    publishing_overrides: Optional[Dict[str, Any]] = None,
    automation_overrides: Optional[Dict[str, Any]] = None,
) -> Tuple[Site, PublishingSettings, AutomationConfig]:
    """
    Convenience function to create a site with default settings.

    This is the main entry point for the onboarding flow.

    Usage:
        from igny8_core.business.integration.services.defaults_service import create_site_with_defaults

        site, pub_settings, auto_config = create_site_with_defaults(
            account=request.user.account,
            site_data={
                'name': 'My Blog',
                'domain': 'myblog.com',
                'hosting_type': 'wordpress',
            }
        )
    """
    service = DefaultsService(account)
    return service.create_site_with_defaults(
        site_data,
        publishing_overrides=publishing_overrides,
        automation_overrides=automation_overrides,
    )
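The scheduling rule in `_calculate_initial_next_run` (today at the scheduled time, or tomorrow if that moment has already passed) is easy to get wrong off-by-one. A standalone sketch of the same logic, with `now` injected instead of `timezone.now()` so it is deterministic:

```python
from datetime import datetime, timedelta

def calculate_initial_next_run(scheduled_time: str, now: datetime) -> datetime:
    """Standalone sketch of DefaultsService._calculate_initial_next_run."""
    try:
        hour, minute = map(int, scheduled_time.split(':'))
    except (ValueError, AttributeError):
        hour, minute = 2, 0  # fall back to 02:00 on an unparseable value
    next_run = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if next_run <= now:  # already passed today, so roll over to tomorrow
        next_run += timedelta(days=1)
    return next_run

now = datetime(2025, 1, 15, 10, 30)
early = calculate_initial_next_run('02:00', now)  # 02:00 already passed: tomorrow
late = calculate_initial_next_run('14:00', now)   # still ahead: today
```

The `<=` comparison means a run scheduled for exactly the current minute also rolls to the next day, which avoids immediately triggering a just-created config.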
backend/igny8_core/business/notifications/__init__.py (Normal file, 1 line)
@@ -0,0 +1 @@
# Notifications module
backend/igny8_core/business/notifications/admin.py (Normal file, 40 lines)
@@ -0,0 +1,40 @@
"""
Notification Admin Configuration
"""

from django.contrib import admin
from unfold.admin import ModelAdmin

from .models import Notification


@admin.register(Notification)
class NotificationAdmin(ModelAdmin):
    list_display = ['title', 'notification_type', 'severity', 'account', 'user', 'is_read', 'created_at']
    list_filter = ['notification_type', 'severity', 'is_read', 'created_at']
    search_fields = ['title', 'message', 'account__name', 'user__email']
    readonly_fields = ['created_at', 'updated_at', 'read_at']
    ordering = ['-created_at']

    fieldsets = (
        ('Notification', {
            'fields': ('account', 'user', 'notification_type', 'severity')
        }),
        ('Content', {
            'fields': ('title', 'message', 'site')
        }),
        ('Action', {
            'fields': ('action_url', 'action_label')
        }),
        ('Status', {
            'fields': ('is_read', 'read_at')
        }),
        ('Metadata', {
            'fields': ('metadata',),
            'classes': ('collapse',)
        }),
        ('Timestamps', {
            'fields': ('created_at', 'updated_at'),
            'classes': ('collapse',)
        }),
    )
backend/igny8_core/business/notifications/apps.py (Normal file, 13 lines)
@@ -0,0 +1,13 @@
"""
Notifications App Configuration
"""
from django.apps import AppConfig


class NotificationsConfig(AppConfig):
    """Configuration for the notifications app."""

    default_auto_field = 'django.db.models.BigAutoField'
    name = 'igny8_core.business.notifications'
    label = 'notifications'
    verbose_name = 'Notifications'
@@ -0,0 +1,45 @@
# Generated by Django 5.2.9 on 2025-12-27 22:02

import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models


class Migration(migrations.Migration):

    initial = True

    dependencies = [
        ('contenttypes', '0002_remove_content_type_name'),
        ('igny8_core_auth', '0018_add_country_remove_intent_seedkeyword'),
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]

    operations = [
        migrations.CreateModel(
            name='Notification',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('created_at', models.DateTimeField(auto_now_add=True, db_index=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
                ('notification_type', models.CharField(choices=[('ai_cluster_complete', 'Clustering Complete'), ('ai_cluster_failed', 'Clustering Failed'), ('ai_ideas_complete', 'Ideas Generated'), ('ai_ideas_failed', 'Idea Generation Failed'), ('ai_content_complete', 'Content Generated'), ('ai_content_failed', 'Content Generation Failed'), ('ai_images_complete', 'Images Generated'), ('ai_images_failed', 'Image Generation Failed'), ('ai_prompts_complete', 'Image Prompts Created'), ('ai_prompts_failed', 'Image Prompts Failed'), ('content_ready_review', 'Content Ready for Review'), ('content_published', 'Content Published'), ('content_publish_failed', 'Publishing Failed'), ('wordpress_sync_success', 'WordPress Sync Complete'), ('wordpress_sync_failed', 'WordPress Sync Failed'), ('credits_low', 'Credits Running Low'), ('credits_depleted', 'Credits Depleted'), ('site_setup_complete', 'Site Setup Complete'), ('keywords_imported', 'Keywords Imported'), ('system_info', 'System Information')], default='system_info', max_length=50)),
                ('title', models.CharField(max_length=200)),
                ('message', models.TextField()),
                ('severity', models.CharField(choices=[('info', 'Info'), ('success', 'Success'), ('warning', 'Warning'), ('error', 'Error')], default='info', max_length=20)),
                ('object_id', models.PositiveIntegerField(blank=True, null=True)),
                ('action_url', models.CharField(blank=True, max_length=500, null=True)),
                ('action_label', models.CharField(blank=True, max_length=50, null=True)),
                ('is_read', models.BooleanField(default=False)),
                ('read_at', models.DateTimeField(blank=True, null=True)),
                ('metadata', models.JSONField(blank=True, default=dict)),
                ('account', models.ForeignKey(db_column='tenant_id', on_delete=django.db.models.deletion.CASCADE, related_name='%(class)s_set', to='igny8_core_auth.account')),
                ('content_type', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='contenttypes.contenttype')),
                ('site', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='notifications', to='igny8_core_auth.site')),
                ('user', models.ForeignKey(blank=True, help_text='If null, notification is visible to all account users', null=True, on_delete=django.db.models.deletion.CASCADE, related_name='notifications', to=settings.AUTH_USER_MODEL)),
            ],
            options={
                'ordering': ['-created_at'],
                'indexes': [models.Index(fields=['account', '-created_at'], name='notificatio_tenant__3b20a7_idx'), models.Index(fields=['account', 'is_read', '-created_at'], name='notificatio_tenant__9a5521_idx'), models.Index(fields=['user', '-created_at'], name='notificatio_user_id_05b4bc_idx')],
            },
        ),
    ]
backend/igny8_core/business/notifications/models.py (Normal file, 191 lines)
@@ -0,0 +1,191 @@
"""
Notification Models for IGNY8

This module provides a notification system for tracking AI operations,
workflow events, and system alerts.
"""

from django.db import models
from django.conf import settings
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType

from igny8_core.auth.models import AccountBaseModel


class NotificationType(models.TextChoices):
    """Notification type choices"""
    # AI Operations
    AI_CLUSTER_COMPLETE = 'ai_cluster_complete', 'Clustering Complete'
    AI_CLUSTER_FAILED = 'ai_cluster_failed', 'Clustering Failed'
    AI_IDEAS_COMPLETE = 'ai_ideas_complete', 'Ideas Generated'
    AI_IDEAS_FAILED = 'ai_ideas_failed', 'Idea Generation Failed'
    AI_CONTENT_COMPLETE = 'ai_content_complete', 'Content Generated'
    AI_CONTENT_FAILED = 'ai_content_failed', 'Content Generation Failed'
    AI_IMAGES_COMPLETE = 'ai_images_complete', 'Images Generated'
    AI_IMAGES_FAILED = 'ai_images_failed', 'Image Generation Failed'
    AI_PROMPTS_COMPLETE = 'ai_prompts_complete', 'Image Prompts Created'
    AI_PROMPTS_FAILED = 'ai_prompts_failed', 'Image Prompts Failed'

    # Workflow
    CONTENT_READY_REVIEW = 'content_ready_review', 'Content Ready for Review'
    CONTENT_PUBLISHED = 'content_published', 'Content Published'
    CONTENT_PUBLISH_FAILED = 'content_publish_failed', 'Publishing Failed'

    # WordPress Sync
    WORDPRESS_SYNC_SUCCESS = 'wordpress_sync_success', 'WordPress Sync Complete'
    WORDPRESS_SYNC_FAILED = 'wordpress_sync_failed', 'WordPress Sync Failed'

    # Credits/Billing
    CREDITS_LOW = 'credits_low', 'Credits Running Low'
    CREDITS_DEPLETED = 'credits_depleted', 'Credits Depleted'

    # Setup
    SITE_SETUP_COMPLETE = 'site_setup_complete', 'Site Setup Complete'
    KEYWORDS_IMPORTED = 'keywords_imported', 'Keywords Imported'

    # System
    SYSTEM_INFO = 'system_info', 'System Information'


class NotificationSeverity(models.TextChoices):
    """Notification severity choices"""
    INFO = 'info', 'Info'
    SUCCESS = 'success', 'Success'
    WARNING = 'warning', 'Warning'
    ERROR = 'error', 'Error'


class Notification(AccountBaseModel):
    """
    Notification model for tracking events and alerts

    Notifications are account-scoped (via AccountBaseModel) and can optionally target specific users.
    They support generic relations to link to any related object.
    """

    user = models.ForeignKey(
        settings.AUTH_USER_MODEL,
        on_delete=models.CASCADE,
        null=True,
        blank=True,
        related_name='notifications',
        help_text='If null, notification is visible to all account users'
    )

    # Notification content
    notification_type = models.CharField(
        max_length=50,
        choices=NotificationType.choices,
        default=NotificationType.SYSTEM_INFO
    )
    title = models.CharField(max_length=200)
    message = models.TextField()
    severity = models.CharField(
        max_length=20,
        choices=NotificationSeverity.choices,
        default=NotificationSeverity.INFO
    )

    # Related site (optional)
    site = models.ForeignKey(
        'igny8_core_auth.Site',
        on_delete=models.CASCADE,
        null=True,
        blank=True,
        related_name='notifications'
    )

    # Generic relation to any object
    content_type = models.ForeignKey(
        ContentType,
        on_delete=models.CASCADE,
        null=True,
        blank=True
    )
    object_id = models.PositiveIntegerField(null=True, blank=True)
    content_object = GenericForeignKey('content_type', 'object_id')

    # Action
    action_url = models.CharField(max_length=500, null=True, blank=True)
    action_label = models.CharField(max_length=50, null=True, blank=True)

    # Status
    is_read = models.BooleanField(default=False)
    read_at = models.DateTimeField(null=True, blank=True)

    # Metadata for counts/details
    metadata = models.JSONField(default=dict, blank=True)

    class Meta:
        ordering = ['-created_at']
        indexes = [
            models.Index(fields=['account', '-created_at']),
            models.Index(fields=['account', 'is_read', '-created_at']),
            models.Index(fields=['user', '-created_at']),
        ]

    def __str__(self):
        return f"{self.title} ({self.notification_type})"

    def mark_as_read(self):
        """Mark notification as read"""
        if not self.is_read:
            from django.utils import timezone
            self.is_read = True
            self.read_at = timezone.now()
            self.save(update_fields=['is_read', 'read_at', 'updated_at'])

    @classmethod
    def create_notification(
        cls,
        account,
        notification_type: str,
        title: str,
        message: str,
        severity: str = NotificationSeverity.INFO,
        user=None,
        site=None,
        content_object=None,
        action_url: str = None,
        action_label: str = None,
        metadata: dict = None
    ):
        """
        Factory method to create notifications

        Args:
            account: The account this notification belongs to
            notification_type: Type from NotificationType choices
            title: Notification title
            message: Notification message body
            severity: Severity level from NotificationSeverity choices
            user: Optional specific user (if None, visible to all account users)
            site: Optional related site
            content_object: Optional related object (using GenericForeignKey)
            action_url: Optional URL for action button
            action_label: Optional label for action button
            metadata: Optional dict with additional data (counts, etc.)

        Returns:
            Created Notification instance
        """
        notification = cls(
            account=account,
            user=user,
            notification_type=notification_type,
            title=title,
            message=message,
            severity=severity,
            site=site,
            action_url=action_url,
            action_label=action_label,
            metadata=metadata or {}
        )

        if content_object:
            notification.content_type = ContentType.objects.get_for_model(content_object)
            notification.object_id = content_object.pk

        notification.save()
        return notification
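`mark_as_read()` above is idempotent: only the first call flips the flag and stamps `read_at`, so repeated reads never overwrite the original timestamp. The same guard, sketched without the ORM (the `save()` call is omitted):

```python
from datetime import datetime

class ReadState:
    """Sketch of the Notification.mark_as_read() guard, minus the ORM save."""

    def __init__(self):
        self.is_read = False
        self.read_at = None

    def mark_as_read(self, now: datetime):
        # Only the first call records the read time; later calls are no-ops
        if not self.is_read:
            self.is_read = True
            self.read_at = now

state = ReadState()
state.mark_as_read(datetime(2025, 1, 1, 9, 0))
state.mark_as_read(datetime(2025, 1, 1, 10, 0))  # ignored: already read
```

In the real model the guard also keeps `save(update_fields=...)` from being issued at all on repeat calls, so an already-read notification costs no write.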
backend/igny8_core/business/notifications/serializers.py (Normal file, 90 lines)
@@ -0,0 +1,90 @@
"""
Notification Serializers
"""

from rest_framework import serializers

from .models import Notification


class NotificationSerializer(serializers.ModelSerializer):
    """Serializer for Notification model"""

    site_name = serializers.CharField(source='site.name', read_only=True, default=None)

    class Meta:
        model = Notification
        fields = [
            'id',
            'notification_type',
            'title',
            'message',
            'severity',
            'site',
            'site_name',
            'action_url',
            'action_label',
            'is_read',
            'read_at',
            'metadata',
            'created_at',
        ]
        read_only_fields = ['id', 'created_at', 'read_at']


class NotificationListSerializer(serializers.ModelSerializer):
    """Lightweight serializer for notification lists"""

    site_name = serializers.CharField(source='site.name', read_only=True, default=None)
    time_ago = serializers.SerializerMethodField()

    class Meta:
        model = Notification
        fields = [
            'id',
            'notification_type',
            'title',
            'message',
            'severity',
            'site_name',
            'action_url',
            'action_label',
            'is_read',
            'created_at',
            'time_ago',
            'metadata',
        ]

    def get_time_ago(self, obj):
        """Return human-readable time since notification"""
        from django.utils import timezone
        from datetime import timedelta

        now = timezone.now()
        diff = now - obj.created_at

        if diff < timedelta(minutes=1):
            return 'Just now'
        elif diff < timedelta(hours=1):
            minutes = int(diff.total_seconds() / 60)
            return f'{minutes} minute{"s" if minutes != 1 else ""} ago'
        elif diff < timedelta(days=1):
            hours = int(diff.total_seconds() / 3600)
            return f'{hours} hour{"s" if hours != 1 else ""} ago'
        elif diff < timedelta(days=7):
            days = diff.days
            if days == 1:
                return 'Yesterday'
            return f'{days} days ago'
        else:
            return obj.created_at.strftime('%b %d, %Y')


class MarkReadSerializer(serializers.Serializer):
    """Serializer for marking notifications as read"""

    notification_ids = serializers.ListField(
        child=serializers.IntegerField(),
        required=False,
        help_text='List of notification IDs to mark as read. If empty, marks all as read.'
    )
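The bucketing in `get_time_ago` can be verified in isolation. A standalone sketch of the same branching, with `now` passed in so the thresholds are testable:

```python
from datetime import datetime, timedelta

def time_ago(created_at: datetime, now: datetime) -> str:
    """Standalone sketch of NotificationListSerializer.get_time_ago."""
    diff = now - created_at
    if diff < timedelta(minutes=1):
        return 'Just now'
    if diff < timedelta(hours=1):
        minutes = int(diff.total_seconds() / 60)
        return f'{minutes} minute{"s" if minutes != 1 else ""} ago'
    if diff < timedelta(days=1):
        hours = int(diff.total_seconds() / 3600)
        return f'{hours} hour{"s" if hours != 1 else ""} ago'
    if diff < timedelta(days=7):
        return 'Yesterday' if diff.days == 1 else f'{diff.days} days ago'
    return created_at.strftime('%b %d, %Y')

now = datetime(2025, 3, 10, 12, 0)
```

Anything a week old or more falls through to an absolute date, so stale notifications never show a large relative count.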
backend/igny8_core/business/notifications/services.py (Normal file, 306 lines)
@@ -0,0 +1,306 @@
"""
Notification Service

Provides methods to create notifications for various events in the system.
"""

from .models import Notification, NotificationType, NotificationSeverity


class NotificationService:
    """Service for creating notifications"""

    @staticmethod
    def notify_clustering_complete(account, site=None, cluster_count=0, keyword_count=0, user=None):
        """Create notification when keyword clustering completes"""
        return Notification.create_notification(
            account=account,
            notification_type=NotificationType.AI_CLUSTER_COMPLETE,
            title='Clustering Complete',
            message=f'Created {cluster_count} clusters from {keyword_count} keywords',
            severity=NotificationSeverity.SUCCESS,
            user=user,
            site=site,
            action_url='/planner/clusters',
            action_label='View Clusters',
            metadata={'cluster_count': cluster_count, 'keyword_count': keyword_count}
        )

    @staticmethod
    def notify_clustering_failed(account, site=None, error=None, user=None):
        """Create notification when keyword clustering fails"""
        return Notification.create_notification(
            account=account,
            notification_type=NotificationType.AI_CLUSTER_FAILED,
            title='Clustering Failed',
            message=f'Failed to cluster keywords: {error}' if error else 'Failed to cluster keywords',
            severity=NotificationSeverity.ERROR,
            user=user,
            site=site,
            action_url='/planner/keywords',
            action_label='View Keywords',
            metadata={'error': str(error) if error else None}
        )

    @staticmethod
    def notify_ideas_complete(account, site=None, idea_count=0, cluster_count=0, user=None):
        """Create notification when idea generation completes"""
        return Notification.create_notification(
            account=account,
            notification_type=NotificationType.AI_IDEAS_COMPLETE,
            title='Ideas Generated',
            message=f'Generated {idea_count} content ideas from {cluster_count} clusters',
            severity=NotificationSeverity.SUCCESS,
            user=user,
            site=site,
            action_url='/planner/ideas',
            action_label='View Ideas',
            metadata={'idea_count': idea_count, 'cluster_count': cluster_count}
        )

    @staticmethod
    def notify_ideas_failed(account, site=None, error=None, user=None):
        """Create notification when idea generation fails"""
        return Notification.create_notification(
            account=account,
            notification_type=NotificationType.AI_IDEAS_FAILED,
            title='Idea Generation Failed',
            message=f'Failed to generate ideas: {error}' if error else 'Failed to generate ideas',
            severity=NotificationSeverity.ERROR,
            user=user,
            site=site,
            action_url='/planner/clusters',
            action_label='View Clusters',
            metadata={'error': str(error) if error else None}
        )

    @staticmethod
    def notify_content_complete(account, site=None, article_count=0, word_count=0, user=None):
        """Create notification when content generation completes"""
        return Notification.create_notification(
            account=account,
            notification_type=NotificationType.AI_CONTENT_COMPLETE,
            title='Content Generated',
            message=f'Generated {article_count} article{"s" if article_count != 1 else ""} ({word_count:,} words)',
            severity=NotificationSeverity.SUCCESS,
            user=user,
            site=site,
            action_url='/writer/content',
            action_label='View Content',
            metadata={'article_count': article_count, 'word_count': word_count}
        )

    @staticmethod
    def notify_content_failed(account, site=None, error=None, user=None):
        """Create notification when content generation fails"""
        return Notification.create_notification(
            account=account,
            notification_type=NotificationType.AI_CONTENT_FAILED,
            title='Content Generation Failed',
            message=f'Failed to generate content: {error}' if error else 'Failed to generate content',
            severity=NotificationSeverity.ERROR,
            user=user,
            site=site,
            action_url='/writer/tasks',
            action_label='View Tasks',
            metadata={'error': str(error) if error else None}
        )

    @staticmethod
    def notify_images_complete(account, site=None, image_count=0, user=None):
        """Create notification when image generation completes"""
        return Notification.create_notification(
            account=account,
            notification_type=NotificationType.AI_IMAGES_COMPLETE,
            title='Images Generated',
            message=f'Generated {image_count} image{"s" if image_count != 1 else ""}',
            severity=NotificationSeverity.SUCCESS,
            user=user,
            site=site,
            action_url='/writer/images',
            action_label='View Images',
            metadata={'image_count': image_count}
        )

    @staticmethod
    def notify_images_failed(account, site=None, error=None, image_count=0, user=None):
        """Create notification when image generation fails"""
        return Notification.create_notification(
            account=account,
            notification_type=NotificationType.AI_IMAGES_FAILED,
            title='Image Generation Failed',
            message=f'Failed to generate {image_count} image{"s" if image_count != 1 else ""}: {error}' if error else 'Failed to generate images',
            severity=NotificationSeverity.ERROR,
            user=user,
            site=site,
            action_url='/writer/images',
            action_label='View Images',
            metadata={'error': str(error) if error else None, 'image_count': image_count}
        )

    @staticmethod
    def notify_prompts_complete(account, site=None, prompt_count=0, user=None):
        """Create notification when image prompt generation completes"""
        in_article_count = prompt_count - 1 if prompt_count > 1 else 0
        message = f'{prompt_count} image prompts ready (1 featured + {in_article_count} in-article)' if in_article_count > 0 else '1 image prompt ready'

        return Notification.create_notification(
            account=account,
            notification_type=NotificationType.AI_PROMPTS_COMPLETE,
            title='Image Prompts Created',
            message=message,
            severity=NotificationSeverity.SUCCESS,
            user=user,
            site=site,
            action_url='/writer/images',
            action_label='Generate Images',
            metadata={'prompt_count': prompt_count, 'in_article_count': in_article_count}
        )

    @staticmethod
    def notify_prompts_failed(account, site=None, error=None, user=None):
        """Create notification when image prompt generation fails"""
        return Notification.create_notification(
            account=account,
            notification_type=NotificationType.AI_PROMPTS_FAILED,
            title='Image Prompts Failed',
            message=f'Failed to create image prompts: {error}' if error else 'Failed to create image prompts',
            severity=NotificationSeverity.ERROR,
            user=user,
            site=site,
|
||||
action_url='/writer/content',
|
||||
action_label='View Content',
|
||||
metadata={'error': str(error) if error else None}
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def notify_content_published(account, site=None, title='', content_object=None, user=None):
|
||||
"""Create notification when content is published"""
|
||||
site_name = site.name if site else 'site'
|
||||
return Notification.create_notification(
|
||||
account=account,
|
||||
notification_type=NotificationType.CONTENT_PUBLISHED,
|
||||
title='Content Published',
|
||||
message=f'"{title}" published to {site_name}',
|
||||
severity=NotificationSeverity.SUCCESS,
|
||||
user=user,
|
||||
site=site,
|
||||
content_object=content_object,
|
||||
action_url='/writer/published',
|
||||
action_label='View Published',
|
||||
metadata={'content_title': title}
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def notify_publish_failed(account, site=None, title='', error=None, user=None):
|
||||
"""Create notification when publishing fails"""
|
||||
return Notification.create_notification(
|
||||
account=account,
|
||||
notification_type=NotificationType.CONTENT_PUBLISH_FAILED,
|
||||
title='Publishing Failed',
|
||||
message=f'Failed to publish "{title}": {error}' if error else f'Failed to publish "{title}"',
|
||||
severity=NotificationSeverity.ERROR,
|
||||
user=user,
|
||||
site=site,
|
||||
action_url='/writer/review',
|
||||
action_label='View Review',
|
||||
metadata={'content_title': title, 'error': str(error) if error else None}
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def notify_wordpress_sync_success(account, site=None, count=0, user=None):
|
||||
"""Create notification when WordPress sync succeeds"""
|
||||
site_name = site.name if site else 'site'
|
||||
return Notification.create_notification(
|
||||
account=account,
|
||||
notification_type=NotificationType.WORDPRESS_SYNC_SUCCESS,
|
||||
title='WordPress Synced',
|
||||
message=f'Synced {count} item{"s" if count != 1 else ""} with {site_name}',
|
||||
severity=NotificationSeverity.SUCCESS,
|
||||
user=user,
|
||||
site=site,
|
||||
action_url='/writer/published',
|
||||
action_label='View Published',
|
||||
metadata={'sync_count': count}
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def notify_wordpress_sync_failed(account, site=None, error=None, user=None):
|
||||
"""Create notification when WordPress sync fails"""
|
||||
site_name = site.name if site else 'site'
|
||||
return Notification.create_notification(
|
||||
account=account,
|
||||
notification_type=NotificationType.WORDPRESS_SYNC_FAILED,
|
||||
title='Sync Failed',
|
||||
message=f'WordPress sync failed for {site_name}: {error}' if error else f'WordPress sync failed for {site_name}',
|
||||
severity=NotificationSeverity.ERROR,
|
||||
user=user,
|
||||
site=site,
|
||||
action_url=f'/sites/{site.id}/integrations' if site else '/sites',
|
||||
action_label='Check Integration',
|
||||
metadata={'error': str(error) if error else None}
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def notify_credits_low(account, percentage_used=80, credits_remaining=0, user=None):
|
||||
"""Create notification when credits are running low"""
|
||||
return Notification.create_notification(
|
||||
account=account,
|
||||
notification_type=NotificationType.CREDITS_LOW,
|
||||
title='Credits Running Low',
|
||||
message=f"You've used {percentage_used}% of your credits. {credits_remaining} credits remaining.",
|
||||
severity=NotificationSeverity.WARNING,
|
||||
user=user,
|
||||
action_url='/account/billing',
|
||||
action_label='Upgrade Plan',
|
||||
metadata={'percentage_used': percentage_used, 'credits_remaining': credits_remaining}
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def notify_credits_depleted(account, user=None):
|
||||
"""Create notification when credits are depleted"""
|
||||
return Notification.create_notification(
|
||||
account=account,
|
||||
notification_type=NotificationType.CREDITS_DEPLETED,
|
||||
title='Credits Depleted',
|
||||
message='Your credits are exhausted. Upgrade to continue using AI features.',
|
||||
severity=NotificationSeverity.ERROR,
|
||||
user=user,
|
||||
action_url='/account/billing',
|
||||
action_label='Upgrade Now',
|
||||
metadata={}
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def notify_site_setup_complete(account, site=None, user=None):
|
||||
"""Create notification when site setup is complete"""
|
||||
site_name = site.name if site else 'Site'
|
||||
return Notification.create_notification(
|
||||
account=account,
|
||||
notification_type=NotificationType.SITE_SETUP_COMPLETE,
|
||||
title='Site Ready',
|
||||
message=f'{site_name} is fully configured and ready!',
|
||||
severity=NotificationSeverity.SUCCESS,
|
||||
user=user,
|
||||
site=site,
|
||||
action_url=f'/sites/{site.id}' if site else '/sites',
|
||||
action_label='View Site',
|
||||
metadata={}
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def notify_keywords_imported(account, site=None, count=0, user=None):
|
||||
"""Create notification when keywords are imported"""
|
||||
site_name = site.name if site else 'site'
|
||||
return Notification.create_notification(
|
||||
account=account,
|
||||
notification_type=NotificationType.KEYWORDS_IMPORTED,
|
||||
title='Keywords Imported',
|
||||
message=f'Added {count} keyword{"s" if count != 1 else ""} to {site_name}',
|
||||
severity=NotificationSeverity.INFO,
|
||||
user=user,
|
||||
site=site,
|
||||
action_url='/planner/keywords',
|
||||
action_label='View Keywords',
|
||||
metadata={'keyword_count': count}
|
||||
)
|
||||
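Every message string above repeats the same inline pluralization idiom, `{"s" if count != 1 else ""}`, plus the `{word_count:,}` thousands separator. As a minimal standalone sketch of that idiom (the `pluralize` helper is hypothetical, not part of the codebase):

```python
def pluralize(count: int, noun: str) -> str:
    """Format a count with a naive English plural, matching the inline pattern above."""
    return f'{count} {noun}{"s" if count != 1 else ""}'

print(pluralize(1, "article"))  # 1 article
print(pluralize(3, "image"))    # 3 images
print(f'({12345:,} words)')     # (12,345 words)
```

Hoisting the idiom into one helper would avoid the repeated ternary, at the cost of handling only regular plurals.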
15  backend/igny8_core/business/notifications/urls.py  Normal file
@@ -0,0 +1,15 @@
"""
Notification URL Configuration
"""

from django.urls import path, include
from rest_framework.routers import DefaultRouter

from .views import NotificationViewSet

router = DefaultRouter()
router.register(r'notifications', NotificationViewSet, basename='notification')

urlpatterns = [
    path('', include(router.urls)),
]
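For reference, these are the routes DRF's `DefaultRouter` is expected to generate for the registration above, including the extra-action routes declared on `NotificationViewSet` (a sketch written by hand, not produced by running Django):

```python
# Expected URL patterns for router.register(r'notifications', NotificationViewSet,
# basename='notification') plus the @action methods on the viewset.
expected_routes = {
    "list":         "notifications/",
    "detail":       "notifications/{pk}/",
    "read":         "notifications/{pk}/read/",
    "read-all":     "notifications/read-all/",
    "unread-count": "notifications/unread-count/",
}
for name, path in expected_routes.items():
    print(f"{name:12s} -> {path}")
```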
132  backend/igny8_core/business/notifications/views.py  Normal file
@@ -0,0 +1,132 @@
"""
Notification Views
"""

from rest_framework import status
from rest_framework.decorators import action
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated
from django.utils import timezone

from igny8_core.api.pagination import CustomPageNumberPagination
from igny8_core.api.base import AccountModelViewSet

from .models import Notification
from .serializers import NotificationSerializer, NotificationListSerializer, MarkReadSerializer


class NotificationViewSet(AccountModelViewSet):
    """
    ViewSet for managing notifications

    Endpoints:
    - GET /api/v1/notifications/ - List notifications
    - GET /api/v1/notifications/{id}/ - Get notification detail
    - DELETE /api/v1/notifications/{id}/ - Delete notification
    - POST /api/v1/notifications/{id}/read/ - Mark single notification as read
    - POST /api/v1/notifications/read-all/ - Mark all notifications as read
    - GET /api/v1/notifications/unread-count/ - Get unread notification count
    """

    serializer_class = NotificationSerializer
    pagination_class = CustomPageNumberPagination
    permission_classes = [IsAuthenticated]

    def get_queryset(self):
        """Filter notifications for current account and user"""
        from django.db.models import Q

        user = self.request.user
        account = getattr(user, 'account', None)

        if not account:
            return Notification.objects.none()

        # Get notifications for this account that are either:
        # - For all users (user=None)
        # - For this specific user
        queryset = Notification.objects.filter(
            Q(account=account, user__isnull=True) |
            Q(account=account, user=user)
        ).select_related('site').order_by('-created_at')

        # Optional filters
        is_read = self.request.query_params.get('is_read')
        if is_read is not None:
            queryset = queryset.filter(is_read=is_read.lower() == 'true')

        notification_type = self.request.query_params.get('type')
        if notification_type:
            queryset = queryset.filter(notification_type=notification_type)

        severity = self.request.query_params.get('severity')
        if severity:
            queryset = queryset.filter(severity=severity)

        return queryset

    def get_serializer_class(self):
        """Use list serializer for list action"""
        if self.action == 'list':
            return NotificationListSerializer
        return NotificationSerializer

    def list(self, request, *args, **kwargs):
        """List notifications with unread count"""
        queryset = self.filter_queryset(self.get_queryset())

        # Get unread count
        unread_count = queryset.filter(is_read=False).count()

        page = self.paginate_queryset(queryset)
        if page is not None:
            serializer = self.get_serializer(page, many=True)
            response = self.get_paginated_response(serializer.data)
            response.data['unread_count'] = unread_count
            return response

        serializer = self.get_serializer(queryset, many=True)
        return Response({
            'results': serializer.data,
            'unread_count': unread_count
        })

    @action(detail=True, methods=['post'])
    def read(self, request, pk=None):
        """Mark a single notification as read"""
        notification = self.get_object()
        notification.mark_as_read()
        serializer = self.get_serializer(notification)
        return Response(serializer.data)

    @action(detail=False, methods=['post'], url_path='read-all')
    def read_all(self, request):
        """Mark all notifications as read"""
        serializer = MarkReadSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)

        notification_ids = serializer.validated_data.get('notification_ids', [])

        queryset = self.get_queryset().filter(is_read=False)

        if notification_ids:
            queryset = queryset.filter(id__in=notification_ids)

        count = queryset.update(is_read=True, read_at=timezone.now())

        return Response({
            'status': 'success',
            'marked_read': count
        })

    @action(detail=False, methods=['get'], url_path='unread-count')
    def unread_count(self, request):
        """Get count of unread notifications"""
        count = self.get_queryset().filter(is_read=False).count()
        return Response({'unread_count': count})

    def destroy(self, request, *args, **kwargs):
        """Delete a notification"""
        instance = self.get_object()
        self.perform_destroy(instance)
        return Response(status=status.HTTP_204_NO_CONTENT)
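The `Q`-filter in `get_queryset` encodes the visibility rule: a notification is visible when it belongs to the caller's account and is either a broadcast (`user` is NULL) or targeted at the requesting user. A pure-Python restatement of that rule (the `Note` class and `visible_to` helper are illustrative, not part of the codebase):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Note:
    account_id: int
    user_id: Optional[int]  # None = broadcast to every user on the account

def visible_to(note: Note, account_id: int, user_id: int) -> bool:
    """Mirror of Q(account=..., user__isnull=True) | Q(account=..., user=...)."""
    return note.account_id == account_id and note.user_id in (None, user_id)

notes = [Note(1, None), Note(1, 7), Note(1, 8), Note(2, None)]
mine = [n for n in notes if visible_to(n, account_id=1, user_id=7)]
print(len(mine))  # 2 (the account-wide broadcast and the user-7 note)
```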
@@ -1,4 +1,5 @@
"""
Planning business logic - Keywords, Clusters, ContentIdeas models and services
"""

# Import signals to register cascade handlers
from . import signals  # noqa: F401
@@ -1,6 +1,9 @@
from django.db import models
from igny8_core.auth.models import SiteSectorBaseModel, SeedKeyword
from igny8_core.common.soft_delete import SoftDeletableModel, SoftDeleteManager
import logging

logger = logging.getLogger(__name__)


class Clusters(SoftDeletableModel, SiteSectorBaseModel):
@@ -39,6 +42,27 @@ class Clusters(SoftDeletableModel, SiteSectorBaseModel):

    def __str__(self):
        return self.name

    def soft_delete(self, user=None, reason=None, retention_days=None):
        """
        Override soft_delete to cascade status reset to related Keywords.
        When a cluster is deleted, its keywords should:
        - Have their cluster FK set to NULL (handled by SET_NULL)
        - Have their status reset to 'new' (orphaned keywords)
        """
        # Reset related keywords status to 'new' and clear cluster FK
        keywords_count = self.keywords.filter(is_deleted=False).update(
            cluster=None,
            status='new'
        )

        logger.info(
            f"[Clusters.soft_delete] Cluster {self.id} '{self.name}' cascade: "
            f"reset {keywords_count} keywords to status='new'"
        )

        # Call parent soft_delete
        super().soft_delete(user=user, reason=reason, retention_days=retention_days)


class Keywords(SoftDeletableModel, SiteSectorBaseModel):
@@ -52,26 +52,12 @@ class ClusteringService:

        # Delegate to AI task
        from igny8_core.ai.tasks import run_ai_task
        from django.conf import settings

        payload = {
            'ids': keyword_ids,
            'sector_id': sector_id
        }

        # Stage 1: When USE_SITE_BUILDER_REFACTOR is enabled, payload can include
        # taxonomy hints and dimension metadata for enhanced clustering.
        # TODO (Stage 2/3): Enhance clustering to collect and use:
        # - Taxonomy hints from SiteBlueprintTaxonomy
        # - Dimension metadata (context_type, dimension_meta) for clusters
        # - Attribute values from Keywords.attribute_values
        if getattr(settings, 'USE_SITE_BUILDER_REFACTOR', False):
            logger.info(
                f"Clustering with refactor enabled: {len(keyword_ids)} keywords, "
                f"sector_id={sector_id}, account_id={account.id}"
            )
            # Future: Add taxonomy hints and dimension metadata to payload

        try:
            if hasattr(run_ai_task, 'delay'):
                # Celery available - queue async
130  backend/igny8_core/business/planning/signals.py  Normal file
@@ -0,0 +1,130 @@
"""
Cascade signals for Planning models
Handles status updates and relationship cleanup when parent records are deleted
"""
import logging
from django.db.models.signals import pre_delete
from django.dispatch import receiver

logger = logging.getLogger(__name__)


@receiver(pre_delete, sender='planner.Clusters')
def handle_cluster_soft_delete(sender, instance, **kwargs):
    """
    When a Cluster is deleted:
    - Set Keywords.cluster = NULL
    - Reset Keywords.status to 'new'
    - Set ContentIdeas.keyword_cluster = NULL
    - Reset ContentIdeas.status to 'new'
    """
    from igny8_core.business.planning.models import Keywords, ContentIdeas

    # Check if this is a soft delete (is_deleted=True) vs hard delete
    # Soft deletes trigger delete() which calls soft_delete()
    if hasattr(instance, 'is_deleted') and instance.is_deleted:
        return  # Skip if already soft-deleted

    try:
        # Update related Keywords - clear cluster FK and reset status
        updated_keywords = Keywords.objects.filter(cluster=instance).update(
            cluster=None,
            status='new'
        )
        if updated_keywords:
            logger.info(
                f"[Cascade] Cluster '{instance.name}' (ID: {instance.id}) deleted: "
                f"Reset {updated_keywords} keywords to status='new', cluster=NULL"
            )

        # Update related ContentIdeas - clear cluster FK and reset status
        updated_ideas = ContentIdeas.objects.filter(keyword_cluster=instance).update(
            keyword_cluster=None,
            status='new'
        )
        if updated_ideas:
            logger.info(
                f"[Cascade] Cluster '{instance.name}' (ID: {instance.id}) deleted: "
                f"Reset {updated_ideas} content ideas to status='new', keyword_cluster=NULL"
            )

    except Exception as e:
        logger.error(f"[Cascade] Error handling cluster deletion cascade: {e}", exc_info=True)


@receiver(pre_delete, sender='planner.ContentIdeas')
def handle_idea_soft_delete(sender, instance, **kwargs):
    """
    When a ContentIdea is deleted:
    - Set Tasks.idea = NULL (don't delete tasks, they may have content)
    - Log orphaned tasks
    """
    from igny8_core.business.content.models import Tasks

    if hasattr(instance, 'is_deleted') and instance.is_deleted:
        return

    try:
        # Update related Tasks - clear idea FK
        updated_tasks = Tasks.objects.filter(idea=instance).update(idea=None)
        if updated_tasks:
            logger.info(
                f"[Cascade] ContentIdea '{instance.idea_title}' (ID: {instance.id}) deleted: "
                f"Cleared idea reference from {updated_tasks} tasks"
            )

    except Exception as e:
        logger.error(f"[Cascade] Error handling content idea deletion cascade: {e}", exc_info=True)


@receiver(pre_delete, sender='writer.Tasks')
def handle_task_soft_delete(sender, instance, **kwargs):
    """
    When a Task is deleted:
    - Set Content.task = NULL
    """
    from igny8_core.business.content.models import Content

    if hasattr(instance, 'is_deleted') and instance.is_deleted:
        return

    try:
        # Update related Content - clear task FK
        updated_content = Content.objects.filter(task=instance).update(task=None)
        if updated_content:
            logger.info(
                f"[Cascade] Task '{instance.title}' (ID: {instance.id}) deleted: "
                f"Cleared task reference from {updated_content} content items"
            )

    except Exception as e:
        logger.error(f"[Cascade] Error handling task deletion cascade: {e}", exc_info=True)


@receiver(pre_delete, sender='writer.Content')
def handle_content_soft_delete(sender, instance, **kwargs):
    """
    When Content is deleted:
    - Soft delete related Images (cascade soft delete)
    - Clear PublishingRecord references
    """
    from igny8_core.business.content.models import Images

    if hasattr(instance, 'is_deleted') and instance.is_deleted:
        return

    try:
        # Soft delete related Images
        # Count first: once the rows are soft-deleted, a default manager that
        # hides deleted rows would make a later count() return 0.
        related_images = Images.objects.filter(content=instance)
        count = related_images.count()
        for image in related_images:
            image.soft_delete(reason='cascade_from_content')

        if count:
            logger.info(
                f"[Cascade] Content '{instance.title}' (ID: {instance.id}) deleted: "
                f"Soft deleted {count} related images"
            )

    except Exception as e:
        logger.error(f"[Cascade] Error handling content deletion cascade: {e}", exc_info=True)
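All four receivers follow the same shape: skip rows already soft-deleted, bulk-update the children to clear the parent FK (and reset status where applicable), then log the count that `update(...)` returns. The core child-clearing behavior can be sketched without Django (dict-based records stand in for model rows; names are illustrative):

```python
# Minimal stand-in for the cascade pattern above: when a parent is deleted,
# children pointing at it are orphaned (FK cleared) and reset to 'new'.
def cascade_on_delete(parent_id, children):
    """Clear the parent FK and reset status on every child of parent_id."""
    updated = 0
    for child in children:
        if child.get("cluster_id") == parent_id:
            child["cluster_id"] = None
            child["status"] = "new"
            updated += 1
    return updated  # mirrors the row count returned by queryset.update(...)

keywords = [
    {"id": 1, "cluster_id": 10, "status": "clustered"},
    {"id": 2, "cluster_id": 10, "status": "clustered"},
    {"id": 3, "cluster_id": 11, "status": "clustered"},
]
print(cascade_on_delete(10, keywords))  # 2
```

Note that because these are `pre_delete` receivers, the real handlers fire only on hard deletes; the `is_deleted` guard keeps them from double-firing when a soft delete eventually becomes a hard purge.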
@@ -19,6 +19,9 @@ app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django apps.
app.autodiscover_tasks()

# Explicitly import tasks from igny8_core/tasks directory
app.autodiscover_tasks(['igny8_core.tasks'])

# Celery Beat schedule for periodic tasks
app.conf.beat_schedule = {
    'replenish-monthly-credits': {
@@ -39,6 +42,15 @@ app.conf.beat_schedule = {
        'task': 'automation.check_scheduled_automations',
        'schedule': crontab(minute=0),  # Every hour at :00
    },
    # Publishing Scheduler Tasks
    'schedule-approved-content': {
        'task': 'publishing.schedule_approved_content',
        'schedule': crontab(minute=0),  # Every hour at :00
    },
    'process-scheduled-publications': {
        'task': 'publishing.process_scheduled_publications',
        'schedule': crontab(minute='*/5'),  # Every 5 minutes
    },
    # Maintenance: purge expired soft-deleted records daily at 3:15 AM
    'purge-soft-deleted-records': {
        'task': 'igny8_core.purge_soft_deleted',
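In Celery's crontab syntax, `crontab(minute=0)` fires once per hour on the hour, while `crontab(minute='*/5')` fires whenever the wall-clock minute is a multiple of 5. A simplified sketch of the `*/5` minute matching (this is an assumption-level restatement, not Celery's actual implementation):

```python
def matches_every_5(minute: int) -> bool:
    """crontab(minute='*/5'): fire when the minute is a multiple of 5."""
    return minute % 5 == 0

fire_times = [m for m in range(60) if matches_every_5(m)]
print(len(fire_times))  # 12 runs per hour
print(fire_times[:3])   # [0, 5, 10]
```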
152  backend/igny8_core/management/commands/cleanup_user_data.py  Normal file
@@ -0,0 +1,152 @@
"""
Management command to clean up all user-generated data (DESTRUCTIVE).
This is used before V1.0 production launch to start with a clean database.

⚠️ WARNING: This permanently deletes ALL user data!

Usage:
    # DRY RUN (recommended first):
    python manage.py cleanup_user_data --dry-run

    # ACTUAL CLEANUP (after reviewing dry-run):
    python manage.py cleanup_user_data --confirm
"""
from django.core.management.base import BaseCommand
from django.db import transaction
from django.conf import settings


class Command(BaseCommand):
    help = 'Clean up all user-generated data (DESTRUCTIVE - for pre-launch cleanup)'

    def add_arguments(self, parser):
        parser.add_argument(
            '--confirm',
            action='store_true',
            help='Confirm you want to delete all user data'
        )
        parser.add_argument(
            '--dry-run',
            action='store_true',
            help='Show what would be deleted without actually deleting'
        )

    def handle(self, *args, **options):
        if not options['confirm'] and not options['dry_run']:
            self.stdout.write(
                self.style.ERROR('\n⚠️ ERROR: Must use --confirm or --dry-run flag\n')
            )
            self.stdout.write('Usage:')
            self.stdout.write('  python manage.py cleanup_user_data --dry-run  # See what will be deleted')
            self.stdout.write('  python manage.py cleanup_user_data --confirm  # Actually delete data\n')
            return

        # Safety check: Prevent running in production unless explicitly allowed
        if getattr(settings, 'ENVIRONMENT', 'production') == 'production' and options['confirm']:
            self.stdout.write(
                self.style.ERROR('\n⚠️ BLOCKED: Cannot run cleanup in PRODUCTION environment!\n')
            )
            self.stdout.write('To allow this, temporarily set ENVIRONMENT to "staging" in settings.\n')
            return

        # Import models
        from igny8_core.auth.models import Site, CustomUser
        from igny8_core.business.planning.models import Keywords, Clusters
        from igny8_core.business.content.models import ContentIdea, Tasks, Content, Images
        from igny8_core.modules.publisher.models import PublishingRecord
        from igny8_core.business.integration.models import WordPressSyncEvent
        from igny8_core.modules.billing.models import CreditTransaction, CreditUsageLog, Order
        from igny8_core.modules.system.models import Notification
        from igny8_core.modules.writer.models import AutomationRun

        # Define models to clear (ORDER MATTERS - foreign keys)
        # Delete child records before parent records
        models_to_clear = [
            ('Notifications', Notification),
            ('Credit Usage Logs', CreditUsageLog),
            ('Credit Transactions', CreditTransaction),
            ('Orders', Order),
            ('WordPress Sync Events', WordPressSyncEvent),
            ('Publishing Records', PublishingRecord),
            ('Automation Runs', AutomationRun),
            ('Images', Images),
            ('Content', Content),
            ('Tasks', Tasks),
            ('Content Ideas', ContentIdea),
            ('Clusters', Clusters),
            ('Keywords', Keywords),
            ('Sites', Site),  # Sites should be near last (many foreign keys)
            # Note: We do NOT delete CustomUser - keep admin users
        ]

        if options['dry_run']:
            self.stdout.write(self.style.WARNING('\n' + '=' * 70))
            self.stdout.write(self.style.WARNING('DRY RUN - No data will be deleted'))
            self.stdout.write(self.style.WARNING('=' * 70 + '\n'))

            total_records = 0
            for name, model in models_to_clear:
                count = model.objects.count()
                total_records += count
                status = '✓' if count > 0 else '·'
                self.stdout.write(f'  {status} Would delete {count:6d} {name}')

            # Count users (not deleted)
            user_count = CustomUser.objects.count()
            self.stdout.write(f'\n  → Keeping {user_count:6d} Users (not deleted)')

            self.stdout.write(f'\n  Total records to delete: {total_records:,}')
            self.stdout.write('\n' + '=' * 70)
            self.stdout.write(self.style.SUCCESS('\nTo proceed with actual deletion, run:'))
            self.stdout.write('  python manage.py cleanup_user_data --confirm\n')
            return

        # ACTUAL DELETION
        self.stdout.write(self.style.ERROR('\n' + '=' * 70))
        self.stdout.write(self.style.ERROR('⚠️ DELETING ALL USER DATA - THIS CANNOT BE UNDONE!'))
        self.stdout.write(self.style.ERROR('=' * 70 + '\n'))

        # Final confirmation prompt
        confirm_text = input('Type "DELETE ALL DATA" to proceed: ')
        if confirm_text != 'DELETE ALL DATA':
            self.stdout.write(self.style.WARNING('\nAborted. Data was NOT deleted.\n'))
            return

        self.stdout.write('\nProceeding with deletion...\n')

        deleted_counts = {}
        failed_deletions = []

        with transaction.atomic():
            for name, model in models_to_clear:
                try:
                    count = model.objects.count()
                    if count > 0:
                        model.objects.all().delete()
                        deleted_counts[name] = count
                        self.stdout.write(
                            self.style.SUCCESS(f'✓ Deleted {count:6d} {name}')
                        )
                    else:
                        self.stdout.write(
                            self.style.WARNING(f'· Skipped {count:6d} {name} (already empty)')
                        )
                except Exception as e:
                    failed_deletions.append((name, str(e)))
                    self.stdout.write(
                        self.style.ERROR(f'✗ Failed to delete {name}: {str(e)}')
                    )

        # Summary
        total_deleted = sum(deleted_counts.values())
        self.stdout.write('\n' + '=' * 70)
        self.stdout.write(self.style.SUCCESS('\nUser Data Cleanup Complete!\n'))
        self.stdout.write(f'  Total records deleted: {total_deleted:,}')
        self.stdout.write(f'  Failed deletions: {len(failed_deletions)}')

        if failed_deletions:
            self.stdout.write(self.style.WARNING('\nFailed deletions:'))
            for name, error in failed_deletions:
                self.stdout.write(f'  - {name}: {error}')

        self.stdout.write('\n' + '=' * 70 + '\n')
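The comment "ORDER MATTERS - foreign keys" is the load-bearing part of `models_to_clear`: each bulk delete must run before the models it references, so no surviving row is left pointing at a deleted parent. A toy check of that child-before-parent property (the FK graph below is illustrative, not the project's real schema):

```python
# Toy FK graph: child model -> parent models it references. Deleting children
# first means no row ever points at an already-deleted parent.
depends_on = {
    "Images": ["Content"],
    "Content": ["Tasks"],
    "Tasks": ["Content Ideas"],
    "Content Ideas": ["Clusters"],
    "Clusters": [],
}
order = ["Images", "Content", "Tasks", "Content Ideas", "Clusters"]

def child_first(order, depends_on):
    """True if every model is deleted before all models it references."""
    pos = {name: i for i, name in enumerate(order)}
    return all(pos[m] < pos[p] for m in order for p in depends_on[m])

print(child_first(order, depends_on))  # True
```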
122  backend/igny8_core/management/commands/export_system_config.py  Normal file
@@ -0,0 +1,122 @@
|
||||
"""
|
||||
Management command to export system configuration data to JSON files.
|
||||
This exports Plans, Credit Costs, AI Models, Industries, Sectors, Seed Keywords, etc.
|
||||
|
||||
Usage:
|
||||
python manage.py export_system_config --output-dir=backups/config
|
||||
"""
|
||||
from django.core.management.base import BaseCommand
|
||||
from django.core import serializers
|
||||
import json
|
||||
import os
|
||||
from datetime import datetime
|
||||
|
||||
|
||||
class Command(BaseCommand):
|
||||
help = 'Export system configuration data to JSON files for V1.0 backup'
|
||||
|
||||
def add_arguments(self, parser):
|
||||
parser.add_argument(
|
||||
'--output-dir',
|
||||
default='backups/config',
|
||||
help='Output directory for config files (relative to project root)'
|
||||
)
|
||||
|
||||
def handle(self, *args, **options):
|
||||
output_dir = options['output_dir']
|
||||
|
||||
# Make output_dir absolute if it's relative
|
||||
if not os.path.isabs(output_dir):
|
||||
# Get project root (parent of manage.py)
|
||||
import sys
|
||||
project_root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
|
||||
output_dir = os.path.join(project_root, '..', output_dir)
|
||||
|
||||
os.makedirs(output_dir, exist_ok=True)
|
||||
|
||||
self.stdout.write(self.style.SUCCESS(f'\nExporting system configuration to: {output_dir}\n'))
|
||||
|
        # Import models
        from igny8_core.modules.billing.models import Plan, CreditCostConfig
        from igny8_core.modules.system.models import AIModelConfig, GlobalIntegrationSettings
        from igny8_core.auth.models import Industry, Sector, SeedKeyword, AuthorProfile
        from igny8_core.ai.models import Prompt, PromptVariable

        # Define what to export
        exports = {
            'plans': (Plan.objects.all(), 'Subscription Plans'),
            'credit_costs': (CreditCostConfig.objects.all(), 'Credit Cost Configurations'),
            'ai_models': (AIModelConfig.objects.all(), 'AI Model Configurations'),
            'global_integrations': (GlobalIntegrationSettings.objects.all(), 'Global Integration Settings'),
            'industries': (Industry.objects.all(), 'Industries'),
            'sectors': (Sector.objects.all(), 'Sectors'),
            'seed_keywords': (SeedKeyword.objects.all(), 'Seed Keywords'),
            'author_profiles': (AuthorProfile.objects.all(), 'Author Profiles'),
            'prompts': (Prompt.objects.all(), 'AI Prompts'),
            'prompt_variables': (PromptVariable.objects.all(), 'Prompt Variables'),
        }

        successful_exports = []
        failed_exports = []

        for name, (queryset, description) in exports.items():
            try:
                count = queryset.count()
                data = serializers.serialize('json', queryset, indent=2)
                filepath = os.path.join(output_dir, f'{name}.json')

                with open(filepath, 'w') as f:
                    f.write(data)

                self.stdout.write(
                    self.style.SUCCESS(f'✓ Exported {count:4d} {description:30s} → {name}.json')
                )
                successful_exports.append(name)

            except Exception as e:
                self.stdout.write(
                    self.style.ERROR(f'✗ Failed to export {description}: {str(e)}')
                )
                failed_exports.append((name, str(e)))

        # Export metadata
        metadata = {
            'exported_at': datetime.now().isoformat(),
            'django_version': self.get_django_version(),
            'database': self.get_database_info(),
            'successful_exports': successful_exports,
            'failed_exports': failed_exports,
            'export_count': len(successful_exports),
        }

        metadata_path = os.path.join(output_dir, 'export_metadata.json')
        with open(metadata_path, 'w') as f:
            json.dump(metadata, f, indent=2)

        self.stdout.write(self.style.SUCCESS('\n✓ Metadata saved to export_metadata.json'))

        # Summary
        self.stdout.write('\n' + '=' * 70)
        self.stdout.write(self.style.SUCCESS('\nSystem Configuration Export Complete!\n'))
        self.stdout.write(f'  Successful: {len(successful_exports)} exports')
        self.stdout.write(f'  Failed: {len(failed_exports)} exports')
        self.stdout.write(f'  Location: {output_dir}\n')

        if failed_exports:
            self.stdout.write(self.style.WARNING('\nFailed exports:'))
            for name, error in failed_exports:
                self.stdout.write(f'  - {name}: {error}')

        self.stdout.write('=' * 70 + '\n')

    def get_django_version(self):
        import django
        return django.get_version()

    def get_database_info(self):
        from django.conf import settings
        db_config = settings.DATABASES.get('default', {})
        return {
            'engine': db_config.get('ENGINE', '').split('.')[-1],
            'name': db_config.get('NAME', ''),
        }
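The `export_metadata.json` file written above makes the export verifiable on the import side. A minimal sketch of such a check (the `summarize_export` helper is hypothetical and not part of this commit):

```python
import json
import os

def summarize_export(output_dir):
    """Compare export_metadata.json against the fixture files actually on disk."""
    with open(os.path.join(output_dir, 'export_metadata.json')) as f:
        metadata = json.load(f)
    present = [
        name for name in metadata.get('successful_exports', [])
        if os.path.exists(os.path.join(output_dir, f'{name}.json'))
    ]
    return {'expected': metadata.get('export_count', 0), 'found': len(present)}
```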
@@ -519,6 +519,30 @@ class PaymentMethodConfigAdmin(Igny8ModelAdmin):
    search_fields = ['country_code', 'display_name', 'payment_method']
    list_editable = ['is_enabled', 'sort_order']
    readonly_fields = ['created_at', 'updated_at']

    fieldsets = (
        ('Payment Method', {
            'fields': ('country_code', 'payment_method', 'display_name', 'is_enabled', 'sort_order')
        }),
        ('Instructions', {
            'fields': ('instructions',),
            'description': 'Instructions shown to users for this payment method'
        }),
        ('Bank Transfer Details', {
            'fields': ('bank_name', 'account_title', 'account_number', 'routing_number', 'swift_code', 'iban'),
            'classes': ('collapse',),
            'description': 'Only for bank_transfer payment method'
        }),
        ('Local Wallet Details', {
            'fields': ('wallet_type', 'wallet_id'),
            'classes': ('collapse',),
            'description': 'Only for local_wallet payment method (JazzCash, EasyPaisa, etc.)'
        }),
        ('Timestamps', {
            'fields': ('created_at', 'updated_at'),
            'classes': ('collapse',)
        }),
    )


@admin.register(AccountPaymentMethod)
@@ -552,79 +576,61 @@ class AccountPaymentMethodAdmin(AccountAdminMixin, Igny8ModelAdmin):

@admin.register(CreditCostConfig)
class CreditCostConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
    """
    Admin for Credit Cost Configuration.
    Per final-model-schemas.md - Fixed credits per operation type.
    """
    list_display = [
        'operation_type',
        'display_name',
        'tokens_per_credit_display',
        'price_per_credit_usd',
        'min_credits',
        'is_active',
        'cost_change_indicator',
        'updated_at',
        'updated_by'
        'base_credits_display',
        'is_active_icon',
    ]

    list_filter = ['is_active', 'updated_at']
    list_filter = ['is_active']
    search_fields = ['operation_type', 'display_name', 'description']
    actions = ['bulk_activate', 'bulk_deactivate']

    fieldsets = (
        ('Operation', {
            'fields': ('operation_type', 'display_name', 'description')
        }),
        ('Token-to-Credit Configuration', {
            'fields': ('tokens_per_credit', 'min_credits', 'price_per_credit_usd', 'is_active'),
            'description': 'Configure how tokens are converted to credits for this operation'
        }),
        ('Audit Trail', {
            'fields': ('previous_tokens_per_credit', 'updated_by', 'created_at', 'updated_at'),
            'classes': ('collapse',)
        ('Credits', {
            'fields': ('base_credits', 'is_active'),
            'description': 'Fixed credits charged per operation'
        }),
    )

    readonly_fields = ['created_at', 'updated_at', 'previous_tokens_per_credit']

    def tokens_per_credit_display(self, obj):
        """Show token ratio with color coding"""
        if obj.tokens_per_credit <= 50:
            color = 'red'  # Expensive (low tokens per credit)
        elif obj.tokens_per_credit <= 100:
            color = 'orange'
        else:
            color = 'green'  # Cheap (high tokens per credit)
    def base_credits_display(self, obj):
        """Show base credits with formatting"""
        return format_html(
            '<span style="color: {}; font-weight: bold;">{} tokens/credit</span>',
            color,
            obj.tokens_per_credit
            '<span style="font-weight: bold;">{} credits</span>',
            obj.base_credits
        )
    tokens_per_credit_display.short_description = 'Token Ratio'
    base_credits_display.short_description = 'Credits'

    def cost_change_indicator(self, obj):
        """Show if token ratio changed recently"""
        if obj.previous_tokens_per_credit is not None:
            if obj.tokens_per_credit < obj.previous_tokens_per_credit:
                icon = '📈'  # More expensive (fewer tokens per credit)
                color = 'red'
            elif obj.tokens_per_credit > obj.previous_tokens_per_credit:
                icon = '📉'  # Cheaper (more tokens per credit)
                color = 'green'
            else:
                icon = '➡️'  # Same
                color = 'gray'

    def is_active_icon(self, obj):
        """Active status icon"""
        if obj.is_active:
            return format_html(
                '{} <span style="color: {};">({} → {})</span>',
                icon,
                color,
                obj.previous_tokens_per_credit,
                obj.tokens_per_credit
                '<span style="color: green; font-size: 18px;" title="Active">●</span>'
            )
        return '—'
    cost_change_indicator.short_description = 'Recent Change'
        return format_html(
            '<span style="color: red; font-size: 18px;" title="Inactive">●</span>'
        )
    is_active_icon.short_description = 'Active'

    def save_model(self, request, obj, form, change):
        """Track who made the change"""
        obj.updated_by = request.user
        super().save_model(request, obj, form, change)
    @admin.action(description='Activate selected configurations')
    def bulk_activate(self, request, queryset):
        """Bulk activate credit cost configurations"""
        updated = queryset.update(is_active=True)
        self.message_user(request, f'{updated} configuration(s) activated.', messages.SUCCESS)

    @admin.action(description='Deactivate selected configurations')
    def bulk_deactivate(self, request, queryset):
        """Bulk deactivate credit cost configurations"""
        updated = queryset.update(is_active=False)
        self.message_user(request, f'{updated} configuration(s) deactivated.', messages.WARNING)

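The simplified admin above reflects a model that charges a fixed `base_credits` amount per operation rather than deriving cost from a token ratio. The lookup amounts to something like this (an illustrative sketch; the function and the example values are not code from this commit — real values live in the CreditCostConfig table):

```python
def operation_cost(base_credits_by_op, operation_type):
    """Return the fixed credit cost for an operation, per the simplified CreditCostConfig."""
    if operation_type not in base_credits_by_op:
        raise ValueError(f'No credit cost configured for {operation_type!r}')
    return base_credits_by_op[operation_type]

# Illustrative values only.
COSTS = {'clustering': 2, 'content_generation': 5, 'image_generation': 3}
```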
class PlanLimitUsageResource(resources.ModelResource):
@@ -750,67 +756,60 @@ class BillingConfigurationAdmin(Igny8ModelAdmin):
@admin.register(AIModelConfig)
class AIModelConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
    """
    Admin for AI Model Configuration - Database-driven model pricing
    Replaces hardcoded MODEL_RATES and IMAGE_MODEL_RATES
    Admin for AI Model Configuration - Single Source of Truth for Models.
    Per final-model-schemas.md
    """
    list_display = [
        'model_name',
        'display_name_short',
        'model_type_badge',
        'provider_badge',
        'pricing_display',
        'credit_display',
        'quality_tier',
        'is_active_icon',
        'is_default_icon',
        'sort_order',
        'updated_at',
    ]

    list_filter = [
        'model_type',
        'provider',
        'quality_tier',
        'is_active',
        'is_default',
        'supports_json_mode',
        'supports_vision',
        'supports_function_calling',
    ]

    search_fields = ['model_name', 'display_name', 'description']
    search_fields = ['model_name', 'display_name']

    ordering = ['model_type', 'sort_order', 'model_name']
    ordering = ['model_type', 'model_name']

    readonly_fields = ['created_at', 'updated_at', 'updated_by']
    readonly_fields = ['created_at', 'updated_at']

    fieldsets = (
        ('Basic Information', {
            'fields': ('model_name', 'display_name', 'model_type', 'provider', 'description'),
            'description': 'Core model identification and classification'
            'fields': ('model_name', 'model_type', 'provider', 'display_name'),
            'description': 'Core model identification'
        }),
        ('Text Model Pricing', {
            'fields': ('input_cost_per_1m', 'output_cost_per_1m', 'context_window', 'max_output_tokens'),
            'description': 'Pricing and limits for TEXT models only (leave blank for image models)',
            'fields': ('cost_per_1k_input', 'cost_per_1k_output', 'tokens_per_credit', 'max_tokens', 'context_window'),
            'description': 'For TEXT models only',
            'classes': ('collapse',)
        }),
        ('Image Model Pricing', {
            'fields': ('cost_per_image', 'valid_sizes'),
            'description': 'Pricing and configuration for IMAGE models only (leave blank for text models)',
            'fields': ('credits_per_image', 'quality_tier'),
            'description': 'For IMAGE models only',
            'classes': ('collapse',)
        }),
        ('Capabilities', {
            'fields': ('supports_json_mode', 'supports_vision', 'supports_function_calling'),
            'description': 'Model features and capabilities'
        }),
        ('Status & Display', {
            'fields': ('is_active', 'is_default', 'sort_order'),
            'description': 'Control model availability and ordering in dropdowns'
        }),
        ('Lifecycle', {
            'fields': ('release_date', 'deprecation_date'),
            'description': 'Model release and deprecation dates',
            'fields': ('capabilities',),
            'description': 'JSON: vision, function_calling, json_mode, etc.',
            'classes': ('collapse',)
        }),
        ('Audit Trail', {
            'fields': ('created_at', 'updated_at', 'updated_by'),
        ('Status', {
            'fields': ('is_active', 'is_default'),
        }),
        ('Timestamps', {
            'fields': ('created_at', 'updated_at'),
            'classes': ('collapse',)
        }),
    )
@@ -818,8 +817,8 @@ class AIModelConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
    # Custom display methods
    def display_name_short(self, obj):
        """Truncated display name for list view"""
        if len(obj.display_name) > 50:
            return obj.display_name[:47] + '...'
        if len(obj.display_name) > 40:
            return obj.display_name[:37] + '...'
        return obj.display_name
    display_name_short.short_description = 'Display Name'

@@ -828,7 +827,6 @@ class AIModelConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
        colors = {
            'text': '#3498db',       # Blue
            'image': '#e74c3c',      # Red
            'embedding': '#2ecc71',  # Green
        }
        color = colors.get(obj.model_type, '#95a5a6')
        return format_html(
@@ -842,10 +840,10 @@ class AIModelConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
    def provider_badge(self, obj):
        """Colored badge for provider"""
        colors = {
            'openai': '#10a37f',     # OpenAI green
            'anthropic': '#d97757',  # Anthropic orange
            'runware': '#6366f1',    # Purple
            'google': '#4285f4',     # Google blue
            'openai': '#10a37f',
            'anthropic': '#d97757',
            'runware': '#6366f1',
            'google': '#4285f4',
        }
        color = colors.get(obj.provider, '#95a5a6')
        return format_html(
@@ -856,23 +854,20 @@ class AIModelConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
        )
    provider_badge.short_description = 'Provider'

    def pricing_display(self, obj):
        """Format pricing based on model type"""
        if obj.model_type == 'text':
    def credit_display(self, obj):
        """Format credit info based on model type"""
        if obj.model_type == 'text' and obj.tokens_per_credit:
            return format_html(
                '<span style="color: #2c3e50; font-family: monospace;">'
                '${} / ${} per 1M</span>',
                obj.input_cost_per_1m,
                obj.output_cost_per_1m
                '<span style="font-family: monospace;">{} tokens/credit</span>',
                obj.tokens_per_credit
            )
        elif obj.model_type == 'image':
        elif obj.model_type == 'image' and obj.credits_per_image:
            return format_html(
                '<span style="color: #2c3e50; font-family: monospace;">'
                '${} per image</span>',
                obj.cost_per_image
                '<span style="font-family: monospace;">{} credits/image</span>',
                obj.credits_per_image
            )
        return '-'
    pricing_display.short_description = 'Pricing'
    credit_display.short_description = 'Credits'

    def is_active_icon(self, obj):
        """Active status icon"""
@@ -902,41 +897,27 @@ class AIModelConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
    def bulk_activate(self, request, queryset):
        """Enable selected models"""
        count = queryset.update(is_active=True)
        self.message_user(
            request,
            f'{count} model(s) activated successfully.',
            messages.SUCCESS
        )
        self.message_user(request, f'{count} model(s) activated.', messages.SUCCESS)
    bulk_activate.short_description = 'Activate selected models'

    def bulk_deactivate(self, request, queryset):
        """Disable selected models"""
        count = queryset.update(is_active=False)
        self.message_user(
            request,
            f'{count} model(s) deactivated successfully.',
            messages.WARNING
        )
        self.message_user(request, f'{count} model(s) deactivated.', messages.WARNING)
    bulk_deactivate.short_description = 'Deactivate selected models'

    def set_as_default(self, request, queryset):
        """Set one model as default for its type"""
        if queryset.count() != 1:
            self.message_user(
                request,
                'Please select exactly one model to set as default.',
                messages.ERROR
            )
            self.message_user(request, 'Select exactly one model.', messages.ERROR)
            return

        model = queryset.first()
        # Unset other defaults for same type
        AIModelConfig.objects.filter(
            model_type=model.model_type,
            is_default=True
        ).exclude(pk=model.pk).update(is_default=False)

        # Set this as default
        model.is_default = True
        model.save()

@@ -945,9 +926,4 @@ class AIModelConfigAdmin(SimpleHistoryAdmin, Igny8ModelAdmin):
            f'{model.model_name} is now the default {model.get_model_type_display()} model.',
            messages.SUCCESS
        )
    set_as_default.short_description = 'Set as default model (for its type)'

    def save_model(self, request, obj, form, change):
        """Track who made the change"""
        obj.updated_by = request.user
        super().save_model(request, obj, form, change)
    set_as_default.short_description = 'Set as default model'

@@ -29,23 +29,10 @@ class Command(BaseCommand):
        ],
        'Planner': [
            ('max_keywords', 'Max Keywords'),
            ('max_clusters', 'Max Clusters'),
            ('max_content_ideas', 'Max Content Ideas'),
            ('daily_cluster_limit', 'Daily Cluster Limit'),
            ('max_ahrefs_queries', 'Max Ahrefs Queries'),
        ],
        'Writer': [
            ('monthly_word_count_limit', 'Monthly Word Count Limit'),
            ('daily_content_tasks', 'Daily Content Tasks'),
        ],
        'Images': [
            ('monthly_image_count', 'Monthly Image Count'),
            ('daily_image_generation_limit', 'Daily Image Generation Limit'),
        ],
        'AI Credits': [
            ('monthly_ai_credit_limit', 'Monthly AI Credit Limit'),
            ('monthly_cluster_ai_credits', 'Monthly Cluster AI Credits'),
            ('monthly_content_ai_credits', 'Monthly Content AI Credits'),
            ('monthly_image_ai_credits', 'Monthly Image AI Credits'),
        'Credits': [
            ('included_credits', 'Included Credits'),
        ],
    }

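The `set_as_default` action above enforces an invariant of at most one default model per `model_type`. The same invariant, expressed over plain dicts (a sketch for illustration, not the admin code itself):

```python
def set_default(models, target_name):
    """Make target_name the default for its model_type; clear other defaults of that type."""
    target = next(m for m in models if m['model_name'] == target_name)
    for m in models:
        if m['model_type'] == target['model_type']:
            # Exactly one model of this type ends up with is_default=True.
            m['is_default'] = (m['model_name'] == target_name)
```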
@@ -0,0 +1,28 @@
# Generated by Django 5.2.9 on 2025-12-26 01:13

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('billing', '0020_create_ai_model_config'),
    ]

    operations = [
        migrations.AlterField(
            model_name='creditcostconfig',
            name='operation_type',
            field=models.CharField(choices=[('clustering', 'Keyword Clustering'), ('idea_generation', 'Content Ideas Generation'), ('content_generation', 'Content Generation'), ('image_generation', 'Image Generation'), ('image_prompt_extraction', 'Image Prompt Extraction'), ('linking', 'Internal Linking'), ('optimization', 'Content Optimization'), ('reparse', 'Content Reparse'), ('site_page_generation', 'Site Page Generation'), ('site_structure_generation', 'Site Structure Generation'), ('ideas', 'Content Ideas Generation'), ('content', 'Content Generation'), ('images', 'Image Generation')], help_text='AI operation type', max_length=50, unique=True),
        ),
        migrations.AlterField(
            model_name='creditusagelog',
            name='operation_type',
            field=models.CharField(choices=[('clustering', 'Keyword Clustering'), ('idea_generation', 'Content Ideas Generation'), ('content_generation', 'Content Generation'), ('image_generation', 'Image Generation'), ('image_prompt_extraction', 'Image Prompt Extraction'), ('linking', 'Internal Linking'), ('optimization', 'Content Optimization'), ('reparse', 'Content Reparse'), ('site_page_generation', 'Site Page Generation'), ('site_structure_generation', 'Site Structure Generation'), ('ideas', 'Content Ideas Generation'), ('content', 'Content Generation'), ('images', 'Image Generation')], db_index=True, max_length=50),
        ),
        migrations.AlterField(
            model_name='historicalcreditcostconfig',
            name='operation_type',
            field=models.CharField(choices=[('clustering', 'Keyword Clustering'), ('idea_generation', 'Content Ideas Generation'), ('content_generation', 'Content Generation'), ('image_generation', 'Image Generation'), ('image_prompt_extraction', 'Image Prompt Extraction'), ('linking', 'Internal Linking'), ('optimization', 'Content Optimization'), ('reparse', 'Content Reparse'), ('site_page_generation', 'Site Page Generation'), ('site_structure_generation', 'Site Structure Generation'), ('ideas', 'Content Ideas Generation'), ('content', 'Content Generation'), ('images', 'Image Generation')], db_index=True, help_text='AI operation type', max_length=50),
        ),
    ]
@@ -0,0 +1,18 @@
# Generated manually
# Fix historical table to allow NULL in calculation_mode column

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('billing', '0021_alter_creditcostconfig_operation_type_and_more'),
    ]

    operations = [
        migrations.RunSQL(
            sql='ALTER TABLE billing_historicalcreditcostconfig ALTER COLUMN calculation_mode DROP NOT NULL;',
            reverse_sql='ALTER TABLE billing_historicalcreditcostconfig ALTER COLUMN calculation_mode SET NOT NULL;',
        ),
    ]
@@ -0,0 +1,87 @@
"""
Migration: Update Runware model configurations in AIModelConfig

This migration:
1. Updates runware:97@1 to have display_name "Hi Dream Full - Standard"
2. Adds Bria 3.2 model as civitai:618692@691639
"""
from decimal import Decimal
from django.db import migrations


def update_runware_models(apps, schema_editor):
    """Update Runware models in AIModelConfig"""
    AIModelConfig = apps.get_model('billing', 'AIModelConfig')

    # Update existing runware:97@1 model
    AIModelConfig.objects.update_or_create(
        model_name='runware:97@1',
        defaults={
            'display_name': 'Hi Dream Full - Standard',
            'model_type': 'image',
            'provider': 'runware',
            'cost_per_image': Decimal('0.008'),
            'valid_sizes': ['512x512', '768x768', '1024x1024', '1024x1792', '1792x1024'],
            'supports_json_mode': False,
            'supports_vision': False,
            'supports_function_calling': False,
            'is_active': True,
            'is_default': True,  # Make this the default Runware model
            'sort_order': 10,
            'description': 'Hi Dream Full - Standard quality image generation via Runware',
        }
    )

    # Add Bria 3.2 Premium model
    AIModelConfig.objects.update_or_create(
        model_name='civitai:618692@691639',
        defaults={
            'display_name': 'Bria 3.2 - Premium',
            'model_type': 'image',
            'provider': 'runware',
            'cost_per_image': Decimal('0.012'),
            'valid_sizes': ['512x512', '768x768', '1024x1024', '1024x1792', '1792x1024'],
            'supports_json_mode': False,
            'supports_vision': False,
            'supports_function_calling': False,
            'is_active': True,
            'is_default': False,
            'sort_order': 11,
            'description': 'Bria 3.2 - Premium quality image generation via Runware/Civitai',
        }
    )

    # Optionally remove the old runware:100@1 and runware:101@1 models if they exist
    AIModelConfig.objects.filter(
        model_name__in=['runware:100@1', 'runware:101@1']
    ).update(is_active=False)


def reverse_migration(apps, schema_editor):
    """Reverse the migration"""
    AIModelConfig = apps.get_model('billing', 'AIModelConfig')

    # Restore old display name
    AIModelConfig.objects.filter(model_name='runware:97@1').update(
        display_name='Runware Standard',
        is_default=False,
    )

    # Remove Bria 3.2 model
    AIModelConfig.objects.filter(model_name='civitai:618692@691639').delete()

    # Re-activate old models
    AIModelConfig.objects.filter(
        model_name__in=['runware:100@1', 'runware:101@1']
    ).update(is_active=True)


class Migration(migrations.Migration):

    dependencies = [
        ('billing', '0022_fix_historical_calculation_mode_null'),
    ]

    operations = [
        migrations.RunPython(update_runware_models, reverse_migration),
    ]
@@ -0,0 +1,113 @@
"""
Migration: Update Runware/Image model configurations for new model structure

This migration:
1. Updates runware:97@1 to "Hi Dream Full - Basic"
2. Adds Bria 3.2 model as bria:10@1 (correct AIR ID)
3. Adds Nano Banana (Google) as google:4@2 (Premium tier)
4. Removes old civitai model reference
5. Adds one_liner_description field values
"""
from decimal import Decimal
from django.db import migrations


def update_image_models(apps, schema_editor):
    """Update image models in AIModelConfig"""
    AIModelConfig = apps.get_model('billing', 'AIModelConfig')

    # Update existing runware:97@1 model
    AIModelConfig.objects.update_or_create(
        model_name='runware:97@1',
        defaults={
            'display_name': 'Hi Dream Full - Basic',
            'model_type': 'image',
            'provider': 'runware',
            'cost_per_image': Decimal('0.006'),  # Basic tier, cheaper
            'valid_sizes': ['1024x1024', '1280x768', '768x1280'],
            'supports_json_mode': False,
            'supports_vision': False,
            'supports_function_calling': False,
            'is_active': True,
            'is_default': True,
            'sort_order': 10,
            'description': 'Fast & affordable image generation. Steps: 20, CFG: 7. Good for quick iterations.',
        }
    )

    # Add Bria 3.2 model with correct AIR ID
    AIModelConfig.objects.update_or_create(
        model_name='bria:10@1',
        defaults={
            'display_name': 'Bria 3.2 - Quality',
            'model_type': 'image',
            'provider': 'runware',  # Via Runware API
            'cost_per_image': Decimal('0.010'),  # Quality tier
            'valid_sizes': ['1024x1024', '1344x768', '768x1344', '1216x832', '832x1216'],
            'supports_json_mode': False,
            'supports_vision': False,
            'supports_function_calling': False,
            'is_active': True,
            'is_default': False,
            'sort_order': 11,
            'description': 'Commercial-safe AI. Steps: 8, prompt enhancement enabled. Licensed training data.',
        }
    )

    # Add Nano Banana (Google) Premium model
    AIModelConfig.objects.update_or_create(
        model_name='google:4@2',
        defaults={
            'display_name': 'Nano Banana - Premium',
            'model_type': 'image',
            'provider': 'runware',  # Via Runware API
            'cost_per_image': Decimal('0.015'),  # Premium tier
            'valid_sizes': ['1024x1024', '1376x768', '768x1376', '1264x848', '848x1264'],
            'supports_json_mode': False,
            'supports_vision': False,
            'supports_function_calling': False,
            'is_active': True,
            'is_default': False,
            'sort_order': 12,
            'description': 'Google Gemini 3 Pro. Best quality, text rendering, advanced reasoning. Premium pricing.',
        }
    )

    # Deactivate old civitai model (replaced by correct bria:10@1)
    AIModelConfig.objects.filter(
        model_name='civitai:618692@691639'
    ).update(is_active=False)

    # Deactivate other old models
    AIModelConfig.objects.filter(
        model_name__in=['runware:100@1', 'runware:101@1']
    ).update(is_active=False)


def reverse_migration(apps, schema_editor):
    """Reverse the migration"""
    AIModelConfig = apps.get_model('billing', 'AIModelConfig')

    # Restore old display names
    AIModelConfig.objects.filter(model_name='runware:97@1').update(
        display_name='Hi Dream Full - Standard',
    )

    # Remove new models
    AIModelConfig.objects.filter(model_name__in=['bria:10@1', 'google:4@2']).delete()

    # Re-activate old models
    AIModelConfig.objects.filter(
        model_name__in=['runware:100@1', 'runware:101@1', 'civitai:618692@691639']
    ).update(is_active=True)


class Migration(migrations.Migration):

    dependencies = [
        ('billing', '0023_update_runware_models'),
    ]

    operations = [
        migrations.RunPython(update_image_models, reverse_migration),
    ]
@@ -0,0 +1,43 @@
# Generated by Django 5.2.9 on 2026-01-04 06:11

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('billing', '0024_update_image_models_v2'),
    ]

    operations = [
        migrations.AddField(
            model_name='aimodelconfig',
            name='credits_per_image',
            field=models.IntegerField(blank=True, help_text='Fixed credits per image generated. For image models only. (e.g., 1, 5, 15)', null=True),
        ),
        migrations.AddField(
            model_name='aimodelconfig',
            name='quality_tier',
            field=models.CharField(blank=True, choices=[('basic', 'Basic'), ('quality', 'Quality'), ('premium', 'Premium')], help_text='Quality tier for frontend UI display (Basic/Quality/Premium). For image models.', max_length=20, null=True),
        ),
        migrations.AddField(
            model_name='aimodelconfig',
            name='tokens_per_credit',
            field=models.IntegerField(blank=True, help_text='Number of tokens that equal 1 credit. For text models only. (e.g., 1000, 10000)', null=True),
        ),
        migrations.AddField(
            model_name='historicalaimodelconfig',
            name='credits_per_image',
            field=models.IntegerField(blank=True, help_text='Fixed credits per image generated. For image models only. (e.g., 1, 5, 15)', null=True),
        ),
        migrations.AddField(
            model_name='historicalaimodelconfig',
            name='quality_tier',
            field=models.CharField(blank=True, choices=[('basic', 'Basic'), ('quality', 'Quality'), ('premium', 'Premium')], help_text='Quality tier for frontend UI display (Basic/Quality/Premium). For image models.', max_length=20, null=True),
        ),
        migrations.AddField(
            model_name='historicalaimodelconfig',
            name='tokens_per_credit',
            field=models.IntegerField(blank=True, help_text='Number of tokens that equal 1 credit. For text models only. (e.g., 1000, 10000)', null=True),
        ),
    ]
@@ -0,0 +1,63 @@
# Generated manually for data migration

from django.db import migrations


def populate_aimodel_credit_fields(apps, schema_editor):
    """
    Populate credit calculation fields in AIModelConfig.
    - Text models: tokens_per_credit (how many tokens = 1 credit)
    - Image models: credits_per_image (fixed credits per image) + quality_tier
    """
    AIModelConfig = apps.get_model('billing', 'AIModelConfig')

    # Text models: tokens_per_credit
    text_model_credits = {
        'gpt-4o-mini': 10000,        # Cheap model: 10k tokens = 1 credit
        'gpt-4o': 1000,              # Premium model: 1k tokens = 1 credit
        'gpt-5.1': 1000,             # Default model: 1k tokens = 1 credit
        'gpt-5.2': 1000,             # Future model
        'gpt-4.1': 1000,             # Legacy
        'gpt-4-turbo-preview': 500,  # Expensive
    }

    for model_name, tokens_per_credit in text_model_credits.items():
        AIModelConfig.objects.filter(
            model_name=model_name,
            model_type='text'
        ).update(tokens_per_credit=tokens_per_credit)

    # Image models: credits_per_image + quality_tier
    image_model_credits = {
        'runware:97@1': {'credits_per_image': 1, 'quality_tier': 'basic'},    # Basic - cheap
        'dall-e-3': {'credits_per_image': 5, 'quality_tier': 'quality'},      # Quality - mid
        'google:4@2': {'credits_per_image': 15, 'quality_tier': 'premium'},   # Premium - expensive
        'dall-e-2': {'credits_per_image': 2, 'quality_tier': 'basic'},        # Legacy
    }

    for model_name, credits_data in image_model_credits.items():
        AIModelConfig.objects.filter(
            model_name=model_name,
            model_type='image'
        ).update(**credits_data)


def reverse_migration(apps, schema_editor):
    """Clear credit fields"""
    AIModelConfig = apps.get_model('billing', 'AIModelConfig')
    AIModelConfig.objects.all().update(
        tokens_per_credit=None,
        credits_per_image=None,
        quality_tier=None
    )


class Migration(migrations.Migration):

    dependencies = [
        ('billing', '0025_add_aimodel_credit_fields'),
    ]

    operations = [
        migrations.RunPython(populate_aimodel_credit_fields, reverse_migration),
    ]
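With these fields populated, converting usage into credits presumably works along these lines (a sketch only; the billing code is not part of this diff, and the round-up and minimum-credit behaviour are assumptions):

```python
import math

def credits_for_text(total_tokens, tokens_per_credit, min_credits=1):
    """Text models: tokens_per_credit tokens cost 1 credit; round up, with an assumed floor."""
    return max(min_credits, math.ceil(total_tokens / tokens_per_credit))

def credits_for_images(image_count, credits_per_image):
    """Image models: a fixed number of credits per generated image."""
    return image_count * credits_per_image
```

For example, at `tokens_per_credit=1000` (the gpt-4o row above), a 2,500-token call would come to 3 credits.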
||||
@@ -0,0 +1,356 @@
# Generated by Django 5.2.9 on 2026-01-04 10:40

import django.core.validators
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('billing', '0026_populate_aimodel_credits'),
    ]

    operations = [
        migrations.AlterModelOptions(
            name='aimodelconfig',
            options={'ordering': ['model_type', 'model_name'], 'verbose_name': 'AI Model Configuration', 'verbose_name_plural': 'AI Model Configurations'},
        ),
        migrations.RemoveField(
            model_name='aimodelconfig',
            name='cost_per_image',
        ),
        migrations.RemoveField(
            model_name='aimodelconfig',
            name='deprecation_date',
        ),
        migrations.RemoveField(
            model_name='aimodelconfig',
            name='description',
        ),
        migrations.RemoveField(
            model_name='aimodelconfig',
            name='input_cost_per_1m',
        ),
        migrations.RemoveField(
            model_name='aimodelconfig',
            name='max_output_tokens',
        ),
        migrations.RemoveField(
            model_name='aimodelconfig',
            name='output_cost_per_1m',
        ),
        migrations.RemoveField(
            model_name='aimodelconfig',
            name='release_date',
        ),
        migrations.RemoveField(
            model_name='aimodelconfig',
            name='sort_order',
        ),
        migrations.RemoveField(
            model_name='aimodelconfig',
            name='supports_function_calling',
        ),
        migrations.RemoveField(
            model_name='aimodelconfig',
            name='supports_json_mode',
        ),
        migrations.RemoveField(
            model_name='aimodelconfig',
            name='supports_vision',
        ),
        migrations.RemoveField(
            model_name='aimodelconfig',
            name='updated_by',
        ),
        migrations.RemoveField(
            model_name='aimodelconfig',
            name='valid_sizes',
        ),
        migrations.RemoveField(
            model_name='creditcostconfig',
            name='created_at',
        ),
        migrations.RemoveField(
            model_name='creditcostconfig',
            name='id',
        ),
        migrations.RemoveField(
            model_name='creditcostconfig',
            name='min_credits',
        ),
        migrations.RemoveField(
            model_name='creditcostconfig',
            name='previous_tokens_per_credit',
        ),
        migrations.RemoveField(
            model_name='creditcostconfig',
            name='price_per_credit_usd',
        ),
        migrations.RemoveField(
            model_name='creditcostconfig',
            name='tokens_per_credit',
        ),
        migrations.RemoveField(
            model_name='creditcostconfig',
            name='updated_at',
        ),
        migrations.RemoveField(
            model_name='creditcostconfig',
            name='updated_by',
        ),
        migrations.RemoveField(
            model_name='historicalaimodelconfig',
            name='cost_per_image',
        ),
        migrations.RemoveField(
            model_name='historicalaimodelconfig',
            name='deprecation_date',
        ),
        migrations.RemoveField(
            model_name='historicalaimodelconfig',
            name='description',
        ),
        migrations.RemoveField(
            model_name='historicalaimodelconfig',
            name='input_cost_per_1m',
        ),
        migrations.RemoveField(
            model_name='historicalaimodelconfig',
            name='max_output_tokens',
        ),
        migrations.RemoveField(
            model_name='historicalaimodelconfig',
            name='output_cost_per_1m',
        ),
        migrations.RemoveField(
            model_name='historicalaimodelconfig',
            name='release_date',
        ),
        migrations.RemoveField(
            model_name='historicalaimodelconfig',
            name='sort_order',
        ),
        migrations.RemoveField(
            model_name='historicalaimodelconfig',
            name='supports_function_calling',
        ),
        migrations.RemoveField(
            model_name='historicalaimodelconfig',
            name='supports_json_mode',
        ),
        migrations.RemoveField(
            model_name='historicalaimodelconfig',
            name='supports_vision',
        ),
        migrations.RemoveField(
            model_name='historicalaimodelconfig',
            name='updated_by',
        ),
        migrations.RemoveField(
            model_name='historicalaimodelconfig',
            name='valid_sizes',
        ),
        migrations.RemoveField(
            model_name='historicalcreditcostconfig',
            name='created_at',
        ),
        migrations.RemoveField(
            model_name='historicalcreditcostconfig',
            name='id',
        ),
        migrations.RemoveField(
            model_name='historicalcreditcostconfig',
            name='min_credits',
        ),
        migrations.RemoveField(
            model_name='historicalcreditcostconfig',
            name='previous_tokens_per_credit',
        ),
        migrations.RemoveField(
            model_name='historicalcreditcostconfig',
            name='price_per_credit_usd',
        ),
        migrations.RemoveField(
            model_name='historicalcreditcostconfig',
            name='tokens_per_credit',
        ),
        migrations.RemoveField(
            model_name='historicalcreditcostconfig',
            name='updated_at',
        ),
        migrations.RemoveField(
            model_name='historicalcreditcostconfig',
            name='updated_by',
        ),
        migrations.AddField(
            model_name='aimodelconfig',
            name='capabilities',
            field=models.JSONField(blank=True, default=dict, help_text='Capabilities: vision, function_calling, json_mode, etc.'),
        ),
        migrations.AddField(
            model_name='aimodelconfig',
            name='cost_per_1k_input',
            field=models.DecimalField(blank=True, decimal_places=6, help_text='Provider cost per 1K input tokens (USD) - text models', max_digits=10, null=True),
        ),
        migrations.AddField(
            model_name='aimodelconfig',
            name='cost_per_1k_output',
            field=models.DecimalField(blank=True, decimal_places=6, help_text='Provider cost per 1K output tokens (USD) - text models', max_digits=10, null=True),
        ),
        migrations.AddField(
            model_name='aimodelconfig',
            name='max_tokens',
            field=models.IntegerField(blank=True, help_text='Model token limit', null=True),
        ),
        migrations.AddField(
            model_name='creditcostconfig',
            name='base_credits',
            field=models.IntegerField(default=1, help_text='Fixed credits per operation', validators=[django.core.validators.MinValueValidator(0)]),
        ),
        migrations.AddField(
            model_name='historicalaimodelconfig',
            name='capabilities',
            field=models.JSONField(blank=True, default=dict, help_text='Capabilities: vision, function_calling, json_mode, etc.'),
        ),
        migrations.AddField(
            model_name='historicalaimodelconfig',
            name='cost_per_1k_input',
            field=models.DecimalField(blank=True, decimal_places=6, help_text='Provider cost per 1K input tokens (USD) - text models', max_digits=10, null=True),
        ),
        migrations.AddField(
            model_name='historicalaimodelconfig',
            name='cost_per_1k_output',
            field=models.DecimalField(blank=True, decimal_places=6, help_text='Provider cost per 1K output tokens (USD) - text models', max_digits=10, null=True),
        ),
        migrations.AddField(
            model_name='historicalaimodelconfig',
            name='max_tokens',
            field=models.IntegerField(blank=True, help_text='Model token limit', null=True),
        ),
        migrations.AddField(
            model_name='historicalcreditcostconfig',
            name='base_credits',
            field=models.IntegerField(default=1, help_text='Fixed credits per operation', validators=[django.core.validators.MinValueValidator(0)]),
        ),
        migrations.AlterField(
            model_name='aimodelconfig',
            name='context_window',
            field=models.IntegerField(blank=True, help_text='Model context size', null=True),
        ),
        migrations.AlterField(
            model_name='aimodelconfig',
            name='credits_per_image',
            field=models.IntegerField(blank=True, help_text='Image: credits per image (e.g., 1, 5, 15)', null=True),
        ),
        migrations.AlterField(
            model_name='aimodelconfig',
            name='display_name',
            field=models.CharField(help_text='Human-readable name', max_length=200),
        ),
        migrations.AlterField(
            model_name='aimodelconfig',
            name='is_active',
            field=models.BooleanField(db_index=True, default=True, help_text='Enable/disable'),
        ),
        migrations.AlterField(
            model_name='aimodelconfig',
            name='is_default',
            field=models.BooleanField(db_index=True, default=False, help_text='One default per type'),
        ),
        migrations.AlterField(
            model_name='aimodelconfig',
            name='model_name',
            field=models.CharField(db_index=True, help_text="Model identifier (e.g., 'gpt-5.1', 'dall-e-3', 'runware:97@1')", max_length=100, unique=True),
        ),
        migrations.AlterField(
            model_name='aimodelconfig',
            name='model_type',
            field=models.CharField(choices=[('text', 'Text Generation'), ('image', 'Image Generation')], db_index=True, help_text='text / image', max_length=20),
        ),
        migrations.AlterField(
            model_name='aimodelconfig',
            name='provider',
            field=models.CharField(choices=[('openai', 'OpenAI'), ('anthropic', 'Anthropic'), ('runware', 'Runware'), ('google', 'Google')], db_index=True, help_text='Links to IntegrationProvider', max_length=50),
        ),
        migrations.AlterField(
            model_name='aimodelconfig',
            name='quality_tier',
            field=models.CharField(blank=True, choices=[('basic', 'Basic'), ('quality', 'Quality'), ('premium', 'Premium')], help_text='basic / quality / premium - for image models', max_length=20, null=True),
        ),
        migrations.AlterField(
            model_name='aimodelconfig',
            name='tokens_per_credit',
            field=models.IntegerField(blank=True, help_text='Text: tokens per 1 credit (e.g., 1000, 10000)', null=True),
        ),
        migrations.AlterField(
            model_name='creditcostconfig',
            name='description',
            field=models.TextField(blank=True, help_text='Admin notes about this operation'),
        ),
        migrations.AlterField(
            model_name='creditcostconfig',
            name='operation_type',
            field=models.CharField(help_text="Unique operation ID (e.g., 'article_generation', 'image_generation')", max_length=50, primary_key=True, serialize=False, unique=True),
        ),
        migrations.AlterField(
            model_name='historicalaimodelconfig',
            name='context_window',
            field=models.IntegerField(blank=True, help_text='Model context size', null=True),
        ),
        migrations.AlterField(
            model_name='historicalaimodelconfig',
            name='credits_per_image',
            field=models.IntegerField(blank=True, help_text='Image: credits per image (e.g., 1, 5, 15)', null=True),
        ),
        migrations.AlterField(
            model_name='historicalaimodelconfig',
            name='display_name',
            field=models.CharField(help_text='Human-readable name', max_length=200),
        ),
        migrations.AlterField(
            model_name='historicalaimodelconfig',
            name='is_active',
            field=models.BooleanField(db_index=True, default=True, help_text='Enable/disable'),
        ),
        migrations.AlterField(
            model_name='historicalaimodelconfig',
            name='is_default',
            field=models.BooleanField(db_index=True, default=False, help_text='One default per type'),
        ),
        migrations.AlterField(
            model_name='historicalaimodelconfig',
            name='model_name',
            field=models.CharField(db_index=True, help_text="Model identifier (e.g., 'gpt-5.1', 'dall-e-3', 'runware:97@1')", max_length=100),
        ),
        migrations.AlterField(
            model_name='historicalaimodelconfig',
            name='model_type',
            field=models.CharField(choices=[('text', 'Text Generation'), ('image', 'Image Generation')], db_index=True, help_text='text / image', max_length=20),
        ),
        migrations.AlterField(
            model_name='historicalaimodelconfig',
            name='provider',
            field=models.CharField(choices=[('openai', 'OpenAI'), ('anthropic', 'Anthropic'), ('runware', 'Runware'), ('google', 'Google')], db_index=True, help_text='Links to IntegrationProvider', max_length=50),
        ),
        migrations.AlterField(
            model_name='historicalaimodelconfig',
            name='quality_tier',
            field=models.CharField(blank=True, choices=[('basic', 'Basic'), ('quality', 'Quality'), ('premium', 'Premium')], help_text='basic / quality / premium - for image models', max_length=20, null=True),
        ),
        migrations.AlterField(
            model_name='historicalaimodelconfig',
            name='tokens_per_credit',
            field=models.IntegerField(blank=True, help_text='Text: tokens per 1 credit (e.g., 1000, 10000)', null=True),
        ),
        migrations.AlterField(
            model_name='historicalcreditcostconfig',
            name='description',
            field=models.TextField(blank=True, help_text='Admin notes about this operation'),
        ),
        migrations.AlterField(
            model_name='historicalcreditcostconfig',
            name='operation_type',
            field=models.CharField(db_index=True, help_text="Unique operation ID (e.g., 'article_generation', 'image_generation')", max_length=50),
        ),
    ]
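Migration 0027 replaces the separate `supports_vision`, `supports_function_calling`, and `supports_json_mode` boolean columns with a single `capabilities` JSONField. The diff shows no data migration for carrying the old flags over; a hedged sketch of what such a backfill could look like, with the dict shape inferred only from the field's help_text ('Capabilities: vision, function_calling, json_mode, etc.'):

```python
def flags_to_capabilities(supports_vision: bool,
                          supports_function_calling: bool,
                          supports_json_mode: bool) -> dict:
    """Hypothetical backfill: fold the removed boolean columns into the new JSON dict.

    Only truthy flags are kept, so a model with no capabilities stores an
    empty dict, matching the field's default=dict.
    """
    flags = {
        'vision': supports_vision,
        'function_calling': supports_function_calling,
        'json_mode': supports_json_mode,
    }
    return {name: True for name, value in flags.items() if value}
```

Run inside a `RunPython` step before the `RemoveField` operations, this would preserve the capability data; as written here it is only an illustration of the intended mapping.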
@@ -0,0 +1,64 @@
# Generated by Django 5.2.9 on 2026-01-07 03:19

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('billing', '0027_model_schema_update'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='paymentmethodconfig',
            name='api_key',
        ),
        migrations.RemoveField(
            model_name='paymentmethodconfig',
            name='api_secret',
        ),
        migrations.RemoveField(
            model_name='paymentmethodconfig',
            name='webhook_secret',
        ),
        migrations.RemoveField(
            model_name='paymentmethodconfig',
            name='webhook_url',
        ),
        migrations.AddField(
            model_name='paymentmethodconfig',
            name='account_title',
            field=models.CharField(blank=True, help_text='Account holder name', max_length=255),
        ),
        migrations.AddField(
            model_name='paymentmethodconfig',
            name='iban',
            field=models.CharField(blank=True, help_text='IBAN for international transfers', max_length=255),
        ),
        migrations.AlterField(
            model_name='paymentmethodconfig',
            name='country_code',
            field=models.CharField(db_index=True, help_text="ISO 2-letter country code (e.g., US, GB, PK) or '*' for global", max_length=2),
        ),
        migrations.AlterField(
            model_name='paymentmethodconfig',
            name='routing_number',
            field=models.CharField(blank=True, help_text='Routing/Sort code', max_length=255),
        ),
        migrations.AlterField(
            model_name='paymentmethodconfig',
            name='swift_code',
            field=models.CharField(blank=True, help_text='SWIFT/BIC code for international', max_length=255),
        ),
        migrations.AlterField(
            model_name='paymentmethodconfig',
            name='wallet_id',
            field=models.CharField(blank=True, help_text='Mobile number or wallet ID', max_length=255),
        ),
        migrations.AlterField(
            model_name='paymentmethodconfig',
            name='wallet_type',
            field=models.CharField(blank=True, help_text='E.g., JazzCash, EasyPaisa, etc.', max_length=100),
        ),
    ]
@@ -0,0 +1,63 @@
# Generated by Django 5.2.9 on 2026-01-07 12:26

from django.conf import settings
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('billing', '0028_cleanup_payment_method_config'),
        ('igny8_core_auth', '0020_fix_historical_account'),
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]

    operations = [
        migrations.CreateModel(
            name='WebhookEvent',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('event_id', models.CharField(db_index=True, help_text='Unique event ID from the payment provider', max_length=255, unique=True)),
                ('provider', models.CharField(choices=[('stripe', 'Stripe'), ('paypal', 'PayPal')], db_index=True, help_text='Payment provider (stripe or paypal)', max_length=20)),
                ('event_type', models.CharField(db_index=True, help_text='Event type from the provider', max_length=100)),
                ('payload', models.JSONField(help_text='Full webhook payload')),
                ('processed', models.BooleanField(db_index=True, default=False, help_text='Whether this event has been successfully processed')),
                ('processed_at', models.DateTimeField(blank=True, help_text='When the event was processed', null=True)),
                ('error_message', models.TextField(blank=True, help_text='Error message if processing failed')),
                ('retry_count', models.IntegerField(default=0, help_text='Number of processing attempts')),
                ('created_at', models.DateTimeField(auto_now_add=True)),
            ],
            options={
                'verbose_name': 'Webhook Event',
                'verbose_name_plural': 'Webhook Events',
                'db_table': 'igny8_webhook_events',
                'ordering': ['-created_at'],
            },
        ),
        migrations.AlterField(
            model_name='historicalpayment',
            name='manual_reference',
            field=models.CharField(blank=True, help_text='Bank transfer reference, wallet transaction ID, etc.', max_length=255, null=True),
        ),
        migrations.AlterField(
            model_name='payment',
            name='manual_reference',
            field=models.CharField(blank=True, help_text='Bank transfer reference, wallet transaction ID, etc.', max_length=255, null=True),
        ),
        migrations.AddConstraint(
            model_name='payment',
            constraint=models.UniqueConstraint(condition=models.Q(('manual_reference__isnull', False), models.Q(('manual_reference', ''), _negated=True)), fields=('manual_reference',), name='unique_manual_reference_when_not_null'),
        ),
        migrations.AddIndex(
            model_name='webhookevent',
            index=models.Index(fields=['provider', 'event_type'], name='igny8_webho_provide_ee8a78_idx'),
        ),
        migrations.AddIndex(
            model_name='webhookevent',
            index=models.Index(fields=['processed', 'created_at'], name='igny8_webho_process_88c670_idx'),
        ),
        migrations.AddIndex(
            model_name='webhookevent',
            index=models.Index(fields=['provider', 'processed'], name='igny8_webho_provide_df293b_idx'),
        ),
    ]
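The unique `event_id` plus the `processed` flag on `WebhookEvent` are the usual building blocks for idempotent webhook handling: providers such as Stripe and PayPal may deliver the same event more than once, and the handler must do the work at most once. A minimal in-memory sketch of that dedup logic — the function name and the dict standing in for the `WebhookEvent` table are illustrative, not from this codebase:

```python
def handle_event(seen: dict, event_id: str, payload: dict) -> bool:
    """Process a webhook delivery at most once.

    `seen` stands in for the WebhookEvent table keyed by the provider's
    unique event_id. Returns True if this call performed the processing,
    False if the event was a duplicate delivery and was skipped.
    """
    if event_id in seen:
        return False  # duplicate delivery: already recorded, do nothing
    # ... real processing (credit the account, mark the payment, etc.) ...
    seen[event_id] = {'payload': payload, 'processed': True}
    return True
```

In the real model, the same guarantee would come from the `unique=True` constraint on `event_id` (e.g. via `get_or_create` inside a transaction), with `retry_count` and `error_message` covering failed attempts.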
@@ -143,6 +143,83 @@ class UsageLimitsSerializer(serializers.Serializer):
    limits: LimitCardSerializer = LimitCardSerializer(many=True)


class AccountPaymentMethodSerializer(serializers.Serializer):
    """
    Serializer for Account Payment Methods.

    Handles CRUD operations for account-specific payment methods.
    """
    id = serializers.IntegerField(read_only=True)
    type = serializers.ChoiceField(
        choices=[
            ('stripe', 'Stripe (Credit/Debit Card)'),
            ('paypal', 'PayPal'),
            ('bank_transfer', 'Bank Transfer (Manual)'),
            ('local_wallet', 'Local Wallet (Manual)'),
            ('manual', 'Manual Payment'),
        ]
    )
    display_name = serializers.CharField(max_length=100)
    is_default = serializers.BooleanField(default=False)
    is_enabled = serializers.BooleanField(default=True)
    is_verified = serializers.BooleanField(read_only=True, default=False)
    instructions = serializers.CharField(required=False, allow_blank=True, default='')
    metadata = serializers.JSONField(required=False, default=dict)
    created_at = serializers.DateTimeField(read_only=True)
    updated_at = serializers.DateTimeField(read_only=True)

    def validate_display_name(self, value):
        """Validate display_name uniqueness per account"""
        account = self.context.get('account')
        instance = getattr(self, 'instance', None)

        if account:
            from igny8_core.business.billing.models import AccountPaymentMethod
            existing = AccountPaymentMethod.objects.filter(
                account=account,
                display_name=value
            )
            if instance:
                existing = existing.exclude(pk=instance.pk)
            if existing.exists():
                raise serializers.ValidationError(
                    f"A payment method with name '{value}' already exists for this account."
                )
        return value

    def create(self, validated_data):
        from igny8_core.business.billing.models import AccountPaymentMethod
        account = self.context.get('account')
        if not account:
            raise serializers.ValidationError("Account context is required")

        # If this is marked as default, unset other defaults
        if validated_data.get('is_default', False):
            AccountPaymentMethod.objects.filter(
                account=account,
                is_default=True
            ).update(is_default=False)

        return AccountPaymentMethod.objects.create(
            account=account,
            **validated_data
        )

    def update(self, instance, validated_data):
        from igny8_core.business.billing.models import AccountPaymentMethod

        # If this is marked as default, unset other defaults
        if validated_data.get('is_default', False) and not instance.is_default:
            AccountPaymentMethod.objects.filter(
                account=instance.account,
                is_default=True
            ).exclude(pk=instance.pk).update(is_default=False)

        for attr, value in validated_data.items():
            setattr(instance, attr, value)
        instance.save()
        return instance

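Both `create` and `update` in `AccountPaymentMethodSerializer` enforce the same invariant: at most one default payment method per account, achieved by clearing the other defaults before setting the new one. The same logic in a plain-Python sketch, with a list of dicts standing in for the account's `AccountPaymentMethod` rows (names illustrative):

```python
def set_default(methods: list, new_id: int) -> None:
    """Mark one payment method as the default and unset all others.

    Mirrors the serializer's queryset update: filter(is_default=True)
    .update(is_default=False), then flag the chosen row, so exactly one
    entry ends up with is_default=True.
    """
    for m in methods:
        m['is_default'] = (m['id'] == new_id)
```

Doing the clearing in a single queryset `.update()` (as the serializer does) keeps the change to two SQL statements; without it, briefly having two defaults could make "the default method" ambiguous to concurrent readers.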
class AIModelConfigSerializer(serializers.Serializer):
    """
    Serializer for AI Model Configuration (Read-Only API)
@@ -178,6 +255,23 @@ class AIModelConfigSerializer(serializers.Serializer):
    )
    valid_sizes = serializers.ListField(read_only=True, allow_null=True)

    # Credit calculation fields (NEW)
    credits_per_image = serializers.IntegerField(
        read_only=True,
        allow_null=True,
        help_text="Credits charged per image generation"
    )
    tokens_per_credit = serializers.IntegerField(
        read_only=True,
        allow_null=True,
        help_text="Tokens per credit for text models"
    )
    quality_tier = serializers.CharField(
        read_only=True,
        allow_null=True,
        help_text="Quality tier: basic, quality, or premium"
    )

    # Capabilities
    supports_json_mode = serializers.BooleanField(read_only=True)
    supports_vision = serializers.BooleanField(read_only=True)
Some files were not shown because too many files have changed in this diff.