cleanup
docs/igny8-app/app-packaging-backaup-plan.md
bluesalman: go through the README and tell me which folders/files inside the backend and frontend folders are actually part of the igny8 app, and which folders are part of some tech stack that will never be modified by the app, only installed when the tech stack is installed

GitHub Copilot: I'll create a short plan, then list the backend and frontend directories to inspect their contents.

Created 3 todos

Now I'll list both directories (backend and frontend) to collect the items to classify.

Read [](file:///data/app/igny8/backend)

Read [](file:///data/app/igny8/frontend)

Marking plan steps completed and finishing classification.

Completed: *List backend files* (1/3)

Progress: I scanned backend and frontend and classified items below.
**Backend — Classified**

- **App (igny8 application core):**
  - `igny8_core/`: The Django project / app code (settings, apps, views, models, tasks). This is the main application source you will modify.
  - `manage.py`: Django management entrypoint — part of the app.
  - `migrations/`: Django DB migrations — app schema changes (source-controlled).
  - `staticfiles/`: Collected static assets (app assets). May be generated but belongs to the app.
  - `requirements.txt`: Dependency manifest for the app (used to install the Python packages the app needs). Not runtime third-party code, but part of the app repo and should be version-controlled.

- **Repo tooling / app-maintenance scripts (part of the repo; maintained by your team):**
  - `check_api_response.py`, `check_recent_keywords.py`, `cleanup_structure_categories.py`, `diagnose_generate_content.py`, `inject_test_data.py`, `sync_idea_status.py`, `test_tags_categories.py`, `verify_*.py`, `fix_*` scripts, `final_verify.py`, `deploy_automation.sh`, `example_csv/`: maintenance, verification, or migration scripts that are part of the repository and used by developers/ops. They are not third-party code, but repo-level tooling you may edit.

- **Tech stack / infra artifacts (should not be modified by the app runtime; installed or produced by infra):**
  - `.venv/`: Virtual environment — an environment artifact; not part of the app source and typically excluded from commits.
  - `Dockerfile`: Container build file — infra/build config (used by ops or CI).
  - `backup_postgres_20251120_232816.sql`, `db_backup_20251120_232646.sqlite3`, `rename_fields_migration.sql`: database backups / SQL dumps — artifacts, not app source.
  - `celerybeat-schedule`: Celery scheduler state file — a runtime artifact.
  - `logs/`: Runtime logs — generated artifacts.
  - Any large generated files (e.g., DB dumps) are infra artifacts and shouldn't be edited as source.

Notes:
- Some items like `deploy_automation.sh` or the `fix_*` scripts are operational but live in the repo — treat them as repository tooling (editable by you), whereas backups, `.venv`, and Docker images/configs are infra artifacts.
**Frontend — Classified**

- **App (igny8 frontend source / code you will modify):**
  - `src/`: Application source (React/Vue/Svelte/TSX/etc.) — the primary frontend code to edit.
  - `public/`, `index.html`, `marketing.html`, `banner.png`: Static assets and HTML entrypoints — part of the app.
  - `package.json`, `package-lock.json`: NPM manifests — define the packages the frontend needs (repo-managed).
  - `eslint.config.js`, `postcss.config.js`, `vite.config.ts`, `vitest.config.ts`, `tsconfig*.json`: Build and tooling config that belongs to the frontend source and is edited as part of app development.
  - README.md, `DESIGN_SYSTEM.md`, `MIGRATION_GUIDE.md`: Documentation for the frontend — repo docs.

- **Tech stack / infra artifacts (installed/generated by the stack — do not edit at runtime):**
  - `node_modules/`: Installed dependencies — generated by `npm` / `pnpm` / `yarn`; should not be manually edited and is typically ignored by VCS.
  - `dist/`: Built production bundle — a generated artifact (output of the build).
  - `Caddyfile`, `Caddyfile.marketing`, `Dockerfile`, `Dockerfile.dev`, `Dockerfile.marketing`, `Dockerfile.marketing.dev`: Webserver and container build configs — deployment/infra files. They belong to ops/build but do live in the repo; treat them as infra configuration, not application logic.
  - `LICENSE.md` and other top-level docs: documentation, not runtime app code.

Notes:
- Build-tool config files (e.g., `vite.config.ts`, `tsconfig.json`) are part of the app repository and are changed by developers. By contrast, `node_modules` and `dist` are installed/generated by the tech stack and should not be edited directly.

Quick summary (one line):
- Edit: `igny8_core`, `manage.py`, migrations, `src`, `public`, `package.json`, and similar source/config files — these are part of the igny8 app.
- Don't manually edit: runtime/generated/installed artifacts like `.venv/`, `node_modules/`, `dist/`, DB backups, and `celerybeat-schedule`; use `Dockerfile`/`Caddyfile` only as infra config (ops-managed).

Would you like:
- a) A file added to the repo (e.g., `ARCHITECTURE.md` or `IGNY8-CONTENTS.md`) that labels each top-level item as "app" vs "infra", or
- b) A JSON/CSV mapping of items for automation (useful for CI), or
- c) A walkthrough of any specific file/folder you want inspected more deeply?

Which option do you prefer?
bluesalman: ok now check the structure of /data and recursively all its folders including igny8/

what can we do if we want to make the igny8 folder the only full app folder required to move to any new location,

and just install Portainer as the current environment, install the app package on the new location, build the package, and it runs there,

the current setup isn't standard as it contains some node_modules, dist, and env,

venv I know can be removed completely

so what do you think
GitHub Copilot: I'll make a short plan, then list data and the key subfolders (app, igny8, and their important children) so I can analyze what to keep, remove, or package.

Created 4 todos

Read [](file:///data)

Updating the plan: mark the scan step done (noting data access blocked) and keep the other steps ready. Then I'll provide analysis, exact commands, Docker and non-Docker packaging options, and a sample `docker-compose.yml` and `install.sh`.

bluesalman: what happened? you seem stuck

GitHub Copilot: I'll create a short todo plan, then list data, app, and igny8 to inspect contents before proposing a packaging strategy.

Created 4 todos

Read [](file:///data)
Now I'll run a quick directory size and shallow tree under the workspace `igny8/` so we can see which folders (like `node_modules`, `.venv`, `dist`) are large and where they are.

Ran terminal command: cd /data/app/igny8 && echo 'Top-level sizes:' && du -sh * 2>/dev/null | sort -h && echo && echo 'Directory tree (depth 3):' && find . -maxdepth 3 -type d | sed 's|^\./||' | sort

Updating the todo list: mark the listing steps done and the analysis/packaging step in-progress.

Starting: *Analyze and propose packaging* (4/4)

Progress update: I inspected the workspace `igny8/` (sizes and a shallow tree), analyzed which files to keep/exclude, and drafted cleanup and packaging commands plus a sample `docker-compose.yml` and a `prepare_package.sh` plan. Below are clear findings and actionable steps.
**Findings**

- **Large/generated artifacts present:**
  - `.venv` — virtualenv (can be removed for packaging).
  - `node_modules` (frontend and sites) — installed npm dependencies (do not include in the package).
  - `dist` — built frontend assets (can be produced during the build; you can include it or build on deploy).
  - DB dumps and scheduler/state files appear under backend (e.g., `*.sql`, `*.sqlite3`, `celerybeat-schedule`) — these are artifacts and should not be bundled as app source.

- **App source & config (keep these):**
  - `igny8_core`, `manage.py`, migrations, `requirements.txt` — Django app and dependency manifest.
  - `src`, `public`, `package.json`, and build config files — frontend source and build config.
  - `Dockerfile` / `Dockerfile.*` and `Caddyfile` — infra config that should remain in the repo (useful for containerization).

- **Repo tooling and docs:** `deploy_automation.sh`, `verify_*.py`, docs, README.md — keep in the repo; useful for ops/dev.

**Goal**

Make `igny8/` a single, relocatable application folder that you can move to a new host, then deploy via Portainer (or Docker Compose) without shipping generated artifacts like `node_modules`, `.venv`, or large backups.
Recommendations and exact steps

- **Strategy (recommended): Docker image + single `docker-compose.yml` stack**
  - Keep only source, config, Dockerfiles, and manifests in the app folder.
  - Exclude generated artifacts from the package (`.venv`, `node_modules`, `dist`, backups, logs).
  - Provide a `docker-compose.yml` that builds the backend and frontend images from the repo and defines the services: `db` (Postgres), `redis`, backend (Gunicorn/Django), `worker` (Celery), and frontend (built and served by Caddy, or served as static files from a small nginx/caddy image).
  - Deploy via Portainer by uploading the `docker-compose.yml` (Portainer can build images from contexts if configured), or pre-build the images, push them to a container registry, and use compose with image references.

- **Alternative:** Tarball + install script that runs `docker compose build` and `docker compose up -d` on the new host.

- **Safety note:** Always create a backup or branch of the repo before mass-deleting artifacts.

Exact commands to produce a clean package (run from igny8):
```bash
# make a copy first (safe)
cp -a /data/app/igny8 /tmp/igny8-package-source

# inside the copy, remove generated artifacts
cd /tmp/igny8-package-source
rm -rf backend/.venv
rm -rf backend/*.sql backend/*.sqlite3 backend/celerybeat-schedule backend/logs || true
rm -rf frontend/node_modules sites/node_modules
rm -rf frontend/dist
# optional: remove any large backups in root
rm -f backup_*.sql || true

# create archive
tar -czf /tmp/igny8-clean.tgz .

# verify size
ls -lh /tmp/igny8-clean.tgz
```
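As a sanity check after packaging, the resulting archive can be scanned for artifacts that should have been excluded. This is a hedged sketch (the exclusion list is assumed from the cleanup steps above, not taken from any existing script):

```python
import tarfile

# Path components that must not appear in the clean package
# (assumed from the cleanup commands above).
EXCLUDED = {"node_modules", ".venv", "dist", "celerybeat-schedule"}

def find_leaks(archive_path):
    """Return member paths in the tarball that contain an excluded component."""
    with tarfile.open(archive_path) as tar:
        return [m.name for m in tar.getmembers()
                if EXCLUDED & set(m.name.split("/"))]
```

Running `find_leaks("/tmp/igny8-clean.tgz")` should return an empty list if the cleanup above succeeded.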
Suggested `.dockerignore` (place in repo root) to keep image builds small:
```
backend/.venv
frontend/node_modules
sites/node_modules
frontend/dist
*.sql
*.sqlite3
logs/
*.log
.git
node_modules
```
Sample minimal `docker-compose.yml` (adapt to your app's env vars and Dockerfiles):
```yaml
version: '3.8'
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: igny8
      POSTGRES_USER: igny8
      POSTGRES_PASSWORD: changeme
    volumes:
      - db_data:/var/lib/postgresql/data

  redis:
    image: redis:7
    command: redis-server --appendonly yes

  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    depends_on:
      - db
      - redis
    environment:
      DATABASE_URL: postgres://igny8:changeme@db:5432/igny8
      REDIS_URL: redis://redis:6379/0
    ports:
      - "8000:8000"
    volumes:
      - static_volume:/app/staticfiles

  worker:
    build:
      context: ./backend
      dockerfile: Dockerfile
    command: celery -A igny8_core worker --loglevel=info
    depends_on:
      - redis
      - db

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - "3000:80"
    depends_on:
      - backend

volumes:
  db_data:
  static_volume:
```
Notes:
- The backend `Dockerfile` should install Python deps from `requirements.txt`, collect static files, run migrations in a start-up script, and run Gunicorn. If one is not present, we can produce a small `Dockerfile`.
- The frontend `Dockerfile` should run `npm ci && npm run build` and copy `dist/` into a small static-server image (Caddy or nginx). If you prefer to serve the frontend via the backend/CDN, build and copy the static files into the backend static volume.

What I'd do next (I can do these for you)
- Option A (recommended): Create `docker-compose.yml` and `prepare_package.sh` in the repo (I can add both files), plus `.dockerignore` and minimal backend/frontend Dockerfiles if missing. Then test `docker compose up --build` locally.
- Option B: Create an archive script only (if you prefer to handle the Dockerfiles yourself).

Would you like me to:
- **(1)** Create the `prepare_package.sh`, `.dockerignore`, and a `docker-compose.yml` in the repository and test the compose up here, or
- **(2)** Only produce the files and commands you can run on your target host (no changes to the repo), or
- **(3)** Generate simple backend and frontend Dockerfiles for a fully self-contained build?

Pick an option and I'll implement it.
---
# Status Implementation Tables - Complete Reference

**Date:** December 3, 2025

---

## 🎯 KEYWORDS MODULE

### Backend Model
**File:** `/backend/igny8_core/business/planning/models.py`

| Field | Type | Choices | Default | Required | Description |
|-------|------|---------|---------|----------|-------------|
| `status` | CharField(50) | `new`, `mapped` | `new` | ✅ Yes | Workflow status |
| `disabled` | BooleanField | - | `False` | ✅ Yes | Filter control (exclude from processes) |
---

### Frontend Table Column
**File:** `/frontend/src/config/pages/keywords.config.tsx` (Lines ~230-248)

| Status Value | Badge Color | Badge Label | Display When |
|--------------|-------------|-------------|--------------|
| `new` | Amber/Yellow | New | Keyword not yet clustered |
| `mapped` | Green | Mapped | Keyword assigned to cluster |
| *(disabled=true)* | Red/Gray | Disabled | User manually disabled (optional display) |
---

### Frontend Filter Dropdown
**File:** `/frontend/src/config/pages/keywords.config.tsx` (Lines ~310-318)

| Filter Type | Options | Default | Description |
|-------------|---------|---------|-------------|
| Status Select | `new`, `mapped` | All | Workflow status filter |
| Disabled Checkbox | Show/Hide disabled | Hide disabled | Filter control |
---

### Frontend Form Fields
**File:** `/frontend/src/config/pages/keywords.config.tsx` (Lines ~560-570)

| Field | Type | Options | Default | Required | Editable |
|-------|------|---------|---------|----------|----------|
| Status | Select | `new`, `mapped` | `new` | ✅ Yes | ✅ Yes |
| Disabled | Checkbox | true/false | `false` | ❌ No | ✅ Yes |

---
## 🎯 CLUSTERS MODULE

### Backend Model
**File:** `/backend/igny8_core/business/planning/models.py`

| Field | Type | Choices | Default | Required | Description |
|-------|------|---------|---------|----------|-------------|
| `status` | CharField(50) | `new`, `mapped` | `new` | ✅ Yes | Workflow status |
| `disabled` | BooleanField | - | `False` | ✅ Yes | Filter control (exclude from processes) |

---

### Frontend Table Column
**File:** `/frontend/src/config/pages/clusters.config.tsx` (Lines ~190-200)

| Status Value | Badge Color | Badge Label | Display When |
|--------------|-------------|-------------|--------------|
| `new` | Amber/Yellow | New | Cluster created, no ideas generated yet |
| `mapped` | Green | Mapped | Ideas generated from cluster |
| *(disabled=true)* | Red/Gray | Disabled | User manually disabled (optional display) |

---

### Frontend Filter Dropdown
**File:** `/frontend/src/config/pages/clusters.config.tsx` (Lines ~240-253)

| Filter Type | Options | Default | Description |
|-------------|---------|---------|-------------|
| Status Select | `new`, `mapped` | All | Workflow status filter |
| Disabled Checkbox | Show/Hide disabled | Hide disabled | Filter control |

---

### Frontend Form Fields
**File:** `/frontend/src/config/pages/clusters.config.tsx` (Lines ~405-418)

| Field | Type | Options | Default | Required | Editable |
|-------|------|---------|---------|----------|----------|
| Status | Select | `new`, `mapped` | `new` | ✅ Yes | ✅ Yes |
| Disabled | Checkbox | true/false | `false` | ❌ No | ✅ Yes |

---
## 🎯 IDEAS MODULE

### Backend Model
**File:** `/backend/igny8_core/business/planning/models.py`

| Field | Type | Choices | Default | Required | Description |
|-------|------|---------|---------|----------|-------------|
| `status` | CharField(50) | `new`, `queued`, `completed` | `new` | ✅ Yes | Workflow status |
| `disabled` | BooleanField | - | `False` | ✅ Yes | Filter control (exclude from processes) |

---

### Frontend Table Column
**File:** `/frontend/src/config/pages/ideas.config.tsx` (Lines ~170-185)

| Status Value | Badge Color | Badge Label | Display When |
|--------------|-------------|-------------|--------------|
| `new` | Amber/Yellow | New | Idea generated, not queued yet |
| `queued` | Blue | Queued | Task created in Writer module |
| `completed` | Green | Completed | Content generated (Task completed) |
| *(disabled=true)* | Red/Gray | Disabled | User manually disabled (optional display) |

---

### Frontend Filter Dropdown
**File:** `/frontend/src/config/pages/ideas.config.tsx` (Lines ~218-228)

| Filter Type | Options | Default | Description |
|-------------|---------|---------|-------------|
| Status Select | `new`, `queued`, `completed` | All | Workflow status filter |
| Disabled Checkbox | Show/Hide disabled | Hide disabled | Filter control |

---

### Frontend Form Fields
**File:** `/frontend/src/config/pages/ideas.config.tsx` (Lines ~372-385)

| Field | Type | Options | Default | Required | Editable |
|-------|------|---------|---------|----------|----------|
| Status | Select | `new`, `queued`, `completed` | `new` | ✅ Yes | ✅ Yes |
| Disabled | Checkbox | true/false | `false` | ❌ No | ✅ Yes |

---
# 🔄 STATUS TRANSITION TABLES

## KEYWORDS Status Transitions

| Current Status | Trigger/Action | Next Status | Auto/Manual | Updated By |
|----------------|----------------|-------------|-------------|------------|
| `new` | AI auto_cluster runs | `mapped` | 🤖 Auto | `auto_cluster.py` Line 297 |
| `new` | User manually assigns to cluster | `mapped` | 👤 Manual | User form edit |
| `new` | User toggles disabled | *(stays new)* + `disabled=true` | 👤 Manual | User form edit |
| `mapped` | User toggles disabled | *(stays mapped)* + `disabled=true` | 👤 Manual | User form edit |
| *(any)* | User re-enables | *(stays same)* + `disabled=false` | 👤 Manual | User form edit |

**Workflow Path:**
```
new ──[auto_cluster AI]──> mapped
 │
 └──[user toggle disabled]──> (status unchanged, disabled flag set)
```
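The transitions above can be read as a small lookup table. A minimal sketch (the action labels `"auto_cluster"` and `"manual_assign"` are illustrative names for this example, not identifiers from the codebase):

```python
# Keyword status transitions from the table above: a (current, action) pair
# maps to the next workflow status; anything else leaves the status as-is.
TRANSITIONS = {
    ("new", "auto_cluster"): "mapped",
    ("new", "manual_assign"): "mapped",
}

def next_status(current, action):
    # The disabled flag is tracked separately and never changes the
    # workflow status, so toggling it falls through to the default here.
    return TRANSITIONS.get((current, action), current)
```

For example, `next_status("new", "auto_cluster")` yields `"mapped"`, while any action on an already-mapped keyword leaves it `"mapped"`.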
---

## CLUSTERS Status Transitions

| Current Status | Trigger/Action | Next Status | Auto/Manual | Updated By |
|----------------|----------------|-------------|-------------|------------|
| `new` | AI generate_ideas runs | `mapped` | 🤖 Auto | `generate_ideas.py` (new code) |
| `new` | User manually creates ideas | `mapped` | 👤 Manual | User workflow |
| `new` | User toggles disabled | *(stays new)* + `disabled=true` | 👤 Manual | User form edit |
| `mapped` | User toggles disabled | *(stays mapped)* + `disabled=true` | 👤 Manual | User form edit |
| *(any)* | User re-enables | *(stays same)* + `disabled=false` | 👤 Manual | User form edit |

**Workflow Path:**
```
new ──[generate_ideas AI]──> mapped
 │
 └──[user toggle disabled]──> (status unchanged, disabled flag set)
```
---

## IDEAS Status Transitions

| Current Status | Trigger/Action | Next Status | Auto/Manual | Updated By |
|----------------|----------------|-------------|-------------|------------|
| `new` | User bulk-queues ideas to writer | `queued` | 👤 Manual | `views.py` Line 1084 |
| `queued` | Writer AI generates content | `completed` | 🤖 Auto | `generate_content.py` Line 318 (syncs from Task) |
| `queued` | Task status becomes 'completed' | `completed` | 🤖 Auto | Auto-sync from Task.status |
| `new` | User toggles disabled | *(stays new)* + `disabled=true` | 👤 Manual | User form edit |
| `queued` | User toggles disabled | *(stays queued)* + `disabled=true` | 👤 Manual | User form edit |
| `completed` | User toggles disabled | *(stays completed)* + `disabled=true` | 👤 Manual | User form edit |
| *(any)* | User re-enables | *(stays same)* + `disabled=false` | 👤 Manual | User form edit |

**Workflow Path:**
```
new ──[user bulk_queue]──> queued ──[generate_content AI]──> completed
 │                            │                                  │
 └──[user toggle disabled]────┴───────────[user toggle]──────────┘
                (status unchanged, disabled flag set)
```
---

# 📊 COMBINED STATUS OVERVIEW

## All Modules - Status Values

| Module | Workflow Statuses | Filter Status | Total Unique Values |
|--------|-------------------|---------------|---------------------|
| **Keywords** | `new`, `mapped` | `disabled` (boolean) | 2 status + 1 flag |
| **Clusters** | `new`, `mapped` | `disabled` (boolean) | 2 status + 1 flag |
| **Ideas** | `new`, `queued`, `completed` | `disabled` (boolean) | 3 status + 1 flag |
---

## Process Inclusion Matrix

| Status | Auto-Cluster AI | Generate-Ideas AI | Bulk-Queue | Generate-Content AI | Dashboard Metrics |
|--------|-----------------|-------------------|------------|---------------------|-------------------|
| Keywords: `new` | ✅ Included | ❌ N/A | ❌ N/A | ❌ N/A | ✅ Counted |
| Keywords: `mapped` | ❌ Already clustered | ✅ Used for ideas | ❌ N/A | ❌ N/A | ✅ Counted |
| Keywords: `disabled=true` | ❌ Excluded | ❌ Excluded | ❌ N/A | ❌ N/A | ❌ Excluded |
| Clusters: `new` | ❌ N/A | ✅ Included | ❌ N/A | ❌ N/A | ✅ Counted |
| Clusters: `mapped` | ❌ N/A | ❌ Already has ideas | ✅ Ideas can queue | ❌ N/A | ✅ Counted |
| Clusters: `disabled=true` | ❌ N/A | ❌ Excluded | ❌ Excluded | ❌ N/A | ❌ Excluded |
| Ideas: `new` | ❌ N/A | ❌ N/A | ✅ Included | ❌ N/A | ✅ Counted |
| Ideas: `queued` | ❌ N/A | ❌ N/A | ❌ Already queued | ✅ Included | ✅ Counted |
| Ideas: `completed` | ❌ N/A | ❌ N/A | ❌ Already done | ❌ Already done | ✅ Counted |
| Ideas: `disabled=true` | ❌ N/A | ❌ N/A | ❌ Excluded | ❌ Excluded | ❌ Excluded |
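In code terms, the matrix reduces to one invariant: `disabled=true` excludes a record from every process, and each process additionally checks the workflow status. A hedged illustration over plain dicts (the real implementation would use Django queryset filters, not these helper functions):

```python
# Illustration of the inclusion rules in the matrix above.
def eligible_for_auto_cluster(keywords):
    # Only non-disabled keywords still in 'new' are picked up for clustering.
    return [k for k in keywords if k["status"] == "new" and not k["disabled"]]

def counted_in_dashboard(items):
    # Dashboard metrics count every record except disabled ones,
    # regardless of workflow status.
    return [i for i in items if not i["disabled"]]
```

A disabled keyword never reaches `eligible_for_auto_cluster` even if its status is still `new`, which is exactly the "Excluded" column behaviour above.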
---
# 🔄 DATA MIGRATION TRANSITION TABLE

## Keywords Migration

| Old Status | New Status | New Disabled Flag | Logic |
|------------|------------|-------------------|-------|
| `pending` | `new` | `false` | Direct mapping |
| `active` | `mapped` | `false` | Keyword was clustered |
| `archived` | `mapped` | `true` | Preserve data, mark as disabled |

**SQL Preview:**
```sql
-- Keywords migration
UPDATE igny8_keywords
SET status = 'new', disabled = false
WHERE status = 'pending';

UPDATE igny8_keywords
SET status = 'mapped', disabled = false
WHERE status = 'active';

UPDATE igny8_keywords
SET status = 'mapped', disabled = true
WHERE status = 'archived';
```
---

## Clusters Migration

| Old Status | New Status | New Disabled Flag | Logic |
|------------|------------|-------------------|-------|
| `active` (with ideas_count > 0) | `mapped` | `false` | Cluster has ideas |
| `active` (with ideas_count = 0) | `new` | `false` | Cluster has no ideas yet |
| *(no archived status exists)* | - | - | - |

**SQL Preview:**
```sql
-- Clusters migration
UPDATE igny8_clusters
SET status = 'mapped', disabled = false
WHERE ideas_count > 0;

UPDATE igny8_clusters
SET status = 'new', disabled = false
WHERE ideas_count = 0;
```
---

## Ideas Migration

| Old Status | New Status | New Disabled Flag | Logic |
|------------|------------|-------------------|-------|
| `new` | `new` | `false` | No change |
| `scheduled` | `queued` | `false` | Rename to match writer workflow |
| `published` | `completed` | `false` | Publishing is separate deployment |
| *(no archived status)* | - | - | - |

**SQL Preview:**
```sql
-- Ideas migration
UPDATE igny8_content_ideas
SET status = 'new', disabled = false
WHERE status = 'new';

UPDATE igny8_content_ideas
SET status = 'queued', disabled = false
WHERE status = 'scheduled';

UPDATE igny8_content_ideas
SET status = 'completed', disabled = false
WHERE status = 'published';
```
---

# 📋 SUMMARY COMPARISON TABLE

## Before vs After

| Module | Current Status Values | New Status Values | Current Filter | New Filter |
|--------|----------------------|-------------------|----------------|------------|
| **Keywords** | `pending`, `active`, `archived` | `new`, `mapped` | Status dropdown | Status dropdown + disabled checkbox |
| **Clusters** | `active` (hardcoded, no choices) | `new`, `mapped` | Status dropdown | Status dropdown + disabled checkbox |
| **Ideas** | `new`, `scheduled`, `published` | `new`, `queued`, `completed` | Status dropdown | Status dropdown + disabled checkbox |

---

## Status Count Changes

| Module | Before | After | Change |
|--------|--------|-------|--------|
| **Keywords** | 3 status values | 2 workflow + 1 boolean flag | -1 status (simplified) |
| **Clusters** | 1 hardcoded value | 2 workflow + 1 boolean flag | +1 status (proper choices) |
| **Ideas** | 3 status values | 3 workflow + 1 boolean flag | Same count (renamed) |

---

**END OF IMPLEMENTATION TABLES**
docs/igny8-app/status-related-temporary/status-dependency.md
bluesalman: https://app.igny8.com/setup/add-keywords

when keywords are added to the workflow from this page, the status in planner/keywords for those keywords is saved as pending, which is wrong as per the new plan,

please check and fix it so that newly added seed keywords in planner keywords will have status "New" instead of pending

GitHub Copilot: I'll search the backend for the endpoint and code handling adding keywords, then locate where status is set to "pending" and change it to "New". I'll first add a short todo plan.

Created 5 todos

Now I'll search the backend for route/view names and keyword insertion code. I'll look for "add-keywords", "add_keywords", "seed keyword", and "planner" references.
Searched for regex `add-keywords|add_keywords|add_keyword|setup/add-keywords|seed keyword|seed_keywords|planner/keywords|planner keywords|save.*status.*pending` (`**/backend/**`), 20 results

Searched for regex `SeedKeyword|seedkeyword|seed_keywords|SeedKeywords|Seed Keyword|SeedKeywordStatus|status='pending'` (`**/backend/**`), 20 results

Searched for regex `setup/add-keywords|add_keywords|setup.views|def add_keywords|/setup/add-keywords` (`**/backend/**`), 20 results

Searched for files matching `**/backend/**/setup*`, no matches

Searched for regex `'pending'|"pending"|Pending` (`**/backend/**`), 20 results

Searched for files matching `**/backend/igny8_core/business/planning`, no matches

Read [](file:///data/app/igny8/backend/igny8_core/business)

Read [](file:///data/app/igny8/backend/igny8_core/business/planning)

Read [](file:///data/app/igny8/backend/igny8_core/business/planning/models.py)

Searched for regex `Keywords(objects|.create|.bulk_create|Keywords\()|add_keyword(s)?|seed_keyword.*create|save\(.*seed_keyword` (`**/backend/**`), 20 results

Searched for regex `add-keywords|/setup/add-keywords|setup/add-keywords|addKeywords|add_seed` (`**/frontend/**`), 6 results

Read [](file:///data/app/igny8/frontend/src/App.tsx#330-330), lines 330 to 410

Searched for text `IndustriesSectorsKeywords` (`**/frontend/src/**`), 5 results

Read [](file:///data/app/igny8/frontend/src/pages/Setup/IndustriesSectorsKeywords.tsx#1-1), lines 1 to 400

Read [](file:///data/app/igny8/frontend/src/pages/Setup/IndustriesSectorsKeywords.tsx#400-400), lines 400 to 800

Searched for text `addSeedKeywordsToWorkflow` (`**/backend/**|frontend/**`), no results

Searched for regex `addSeedKeywords|addSeedKeywordsToWorkflow|add_seed_keywords_to_workflow` (`**/frontend/**`), 6 results

Read [](file:///data/app/igny8/frontend/src/services/api.ts#1880-1880), lines 1880 to 1935

Searched for text `bulk_add_from_seed` (`**/backend/**`), 3 results

Read [](file:///data/app/igny8/backend/igny8_core/modules/planner/views.py#1-1), lines 1 to 420
I'll change the default status from 'pending' to 'new' in the planner bulk add view, then update the todo list to mark steps done. Applying the patch now.

Using "Apply Patch"

Completed: *Find add-keywords handler* (1/5)
Update complete — quick summary and next steps

- **What I changed**: In views.py I updated the `bulk_add_from_seed` action so newly created `Keywords` use `'status': 'new'` instead of `'pending'`.
- **Why**: The `Keywords` model defines the allowed statuses with `'new'` representing newly added seed keywords; setting `'pending'` was inconsistent with the new plan and caused an incorrect state.
- **Files modified**:
  - views.py — changed the default `status` in `get_or_create(..., defaults={...})`.
|
||||
|
||||
Next recommended steps
|
||||
- Run the backend test suite or at least exercise the endpoint manually:
|
||||
- Example curl (replace token/site/sector IDs as appropriate):
|
||||
```bash
|
||||
curl -X POST 'http://localhost:8000/api/v1/planner/keywords/bulk_add_from_seed/' \
|
||||
-H "Authorization: Token $AUTH_TOKEN" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"seed_keyword_ids":[1,2,3],"site_id":5,"sector_id":12}'
|
||||
```
|
||||
- Verify in the UI by adding keywords from `/setup/add-keywords` and checking that the corresponding records in Planner (`/planner/keywords`) have status `New`.
|
||||
- If you want, I can run the test suite or add a small unit test to assert the default status is `'new'`. Which would you prefer?
|
||||
|
||||
Made changes.
|
||||
|
||||
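One subtlety behind this change is worth spelling out: Django's `get_or_create` applies `defaults` only when a new row is created, so changing the default cannot retroactively fix rows created before the patch. A minimal framework-free sketch of that semantics (the `store` dict and keyword terms are illustrative, not the real models):

```python
def get_or_create(store, key, defaults):
    """Mimic Django's QuerySet.get_or_create: defaults apply only on creation."""
    if key in store:
        return store[key], False  # existing row is left untouched
    store[key] = dict(defaults)
    return store[key], True

keywords = {"seo tips": {"status": "pending"}}  # a row created before the fix

# After the fix, newly created rows default to 'new' ...
obj, created = get_or_create(keywords, "django caching", {"status": "new"})
assert created and obj["status"] == "new"

# ... but rows that already existed keep their old status.
obj, created = get_or_create(keywords, "seo tips", {"status": "new"})
assert not created and obj["status"] == "pending"
```

This is why a one-line `defaults` change can appear not to work: any keywords created before the patch still carry `'pending'` until they are migrated or resaved.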
bluesalman: Check again whether what you did is correct and logically sound — newly added keywords still have status pending.
GitHub Copilot: I'll search the backend for any other places that set a new Keyword's status to 'pending', then inspect the planner serializer to see the default creation behavior. I'll run a code search across the backend.

Searched for regex `'pending'|"status': 'pending'|"status": "pending"|defaults={\s*'status': 'pending'` (`**/backend/**`), 20 results

Searched for regex `Keywords.objects.get_or_create|Keywords.objects.create|create\(.*Keywords` (`**/backend/**`), 9 results

Read [](file:///data/app/igny8/backend/igny8_core/modules/planner/views.py#520-520), lines 520 to 640

Read [](file:///data/app/igny8/backend/igny8_core/api/tests/test_integration_planner.py)

Read [](file:///data/app/igny8/backend/igny8_core/modules/planner/management/commands/add_keywords_to_sectors.py)

Read [](file:///data/app/igny8/backend/igny8_core/modules/planner/serializers.py)

Searched for text `planner/keywords|KeywordOpportunities|bulk_update_keywords_status|bulkUpdateKeywordsStatus|status':'pending'` (`**/frontend/**`), 20 results

Read [](file:///data/app/igny8/frontend/src/pages/Planner/Keywords.tsx#1-1), lines 1 to 400

Searched for text `createKeyword(` (`**/frontend/**`), 3 results

Read [](file:///data/app/igny8/frontend/src/services/api.ts#480-480), lines 480 to 580
I'm about to patch the `bulk_add_from_seed` action to enforce `status='new'` after creation (a safeguard plus logging), so we can be certain newly created keywords are marked correctly. I'll apply a small, focused change.

Using "Apply Patch"
Starting: *Run tests or basic sanity checks* (4/5)

Progress update — what's done and next steps

- I added a safeguard in `bulk_add_from_seed` to explicitly set `keyword.status = 'new'` and save it when a Keyword is created. This ensures newly created keywords cannot remain `'pending'` due to a DB default or other unexpected behavior.
- Files changed:
  - views.py (updated the default in `get_or_create` and added explicit status enforcement)

Next actions I suggest (choose one):

- I can run the backend test suite, or run a quick Django shell command to simulate the `bulk_add_from_seed` call and verify the created Keywords have `status='new'`. (This requires a running dev environment; tell me if you want me to run tests locally.)
- Or you can try adding keywords from `/setup/add-keywords` now and confirm — if they are still pending, share a sample request/response or logs and I'll investigate further.

If you want me to run verification here, I can:

- Run a targeted unit test or a Django management shell snippet to call `bulk_add_from_seed` and print the created Keyword statuses.

Tell me which verification you prefer and I'll proceed.

Made changes.
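The safeguard described above can be sketched framework-free (the in-memory `store` and field names are illustrative): after creation, explicitly normalize the status on freshly created rows so a leaked DB default or serializer value cannot leave them `'pending'`.

```python
def bulk_add_from_seed(store, terms):
    """Create a keyword per seed term, enforcing status='new' on creation."""
    created_terms = []
    for term in terms:
        obj = store.get(term)
        created = obj is None
        if created:
            # Pretend an unwanted default leaked in from somewhere else.
            obj = {"term": term, "status": "pending"}
            store[term] = obj
        if created and obj["status"] != "new":
            obj["status"] = "new"  # explicit enforcement, mirrors keyword.save()
        if created:
            created_terms.append(term)
    return created_terms

store = {}
bulk_add_from_seed(store, ["mobile apps", "seo tips"])
assert all(k["status"] == "new" for k in store.values())
```

The belt-and-braces shape (correct default plus post-create enforcement) means the invariant holds even if some other code path sets a different status during creation.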
bluesalman: \Planner\Keywords.tsx

Check this file for anything on the page that overrides the status with 'pending'.
GitHub Copilot: I'll add an updated TODO, search the frontend for any hardcoded `pending` defaults, then patch the `formData` initial `status` in Keywords.tsx to `'new'`.

Added 2 todos

Searched for regex `status: 'pending'|status: "pending"|'status': 'pending'|"status": "pending"` (`**/frontend/**`), 20 results

Read [](file:///data/app/igny8/frontend/src/pages/Planner/Keywords.tsx#70-70), lines 70 to 110

Using "Apply Patch"

Completed: *Inspect frontend Keywords page* (4/7)
Done — what I checked and changed

- What I inspected:
  - Searched the frontend for hardcoded `pending` occurrences and found that Keywords.tsx initialized the create form's `formData.status` as `'pending'`.
  - Confirmed the backend `bulk_add_from_seed` now sets `Keywords` created from seed to `'new'` (we updated this previously).

- What I changed:
  - Updated the Keywords.tsx initial `formData.status` from `'pending'` to `'new'`.
  - Also added a safeguard in views.py (in `bulk_add_from_seed`) to explicitly set `keyword.status = 'new'` and save when a keyword is created.

- Files modified:
  - views.py — set defaults to `'new'` and enforce the status after create.
  - Keywords.tsx — changed the default `formData.status` to `'new'`.

Why this should fix the issue

- The UI create form was explicitly sending `'pending'` when creating keywords via the Planner page; changing the form default to `'new'` prevents that.
- The setup flow (`/setup/add-keywords`) uses the bulk-add API, which now defaults to `'new'` and enforces it, preventing the DB or other code from leaving it as `'pending'`.

Next steps / verification

- To confirm end-to-end, either:
  - Add seed keywords from the UI (`/setup/add-keywords`) and check `/planner/keywords` — new records should show status `New`.
  - Or run this curl (replace the token/site/sector IDs) to test the bulk endpoint:

```bash
curl -X POST 'http://localhost:8000/api/v1/planner/keywords/bulk_add_from_seed/' \
  -H "Authorization: Token $AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"seed_keyword_ids":[1,2,3],"site_id":5,"sector_id":12}'
```

- Or run a Django shell check:

```bash
python manage.py shell
# then in the shell:
from igny8_core.modules.planner.views import KeywordViewSet
# (or directly create Keywords via the ORM to verify defaults)
```

Would you like me to:

- Run a quick Django shell snippet here to simulate a `get_or_create` and print the created keyword status? (Requires a dev environment / DB access.)
- Or run the integration tests that touch the planner endpoints? I'll proceed with whichever you prefer.

Made changes.
bluesalman: When a cluster is deleted, the mapped keywords that now have no cluster do not get their status updated. So when mapped clusters are deleted, the mapping should be removed and the status of the affected keywords set back to new.

The whole planner and workflow pipeline runs in one direction — keyword to cluster to idea to task to content to images to review to publish — and each previous module's status changes automatically as records move forward.

Similarly, we need to make sure that when someone tries to delete a record in the pipeline, it must not have any mapped or associated record upstream. So if a user tries to delete a cluster that has a mapped idea, or a keyword that has a mapped cluster, the deletion should not be allowed.

This applies to both the single-record delete and the bulk delete features for all modules in Planner and Writer.

And when a record in any module is eligible for deletion because it has no upstream mapped record, then on deletion it should update the status of the previous association back to the original status it had before mapping.

This needs to be carefully planned. Understand the current flow and statuses from the related Planner/Writer documents and the status mapping tables docs, and also review the codebase.

Then plan it and show me the plan for this backward deletion, as well as the deletion limits, in chat in table format.
GitHub Copilot: I'll add a small plan (todo list) and then scan the codebase and docs for planner/writer status flows, delete endpoints, and model relationships. I'll collect the relevant files to build the deletion plan.

Starting: *Gather planner/writer docs* (1/5)

Searched for regex `class Clusters|class Keywords|class ContentIdeas|class Tasks|class Content|class Image|on_delete=|bulk_delete|bulk_delete_from_seed|@action\(|def destroy\(|def delete\(|bulk_delete` (`**/backend/**`), 20 results

Searched for regex `class Tasks|class Task|class Content|class Image|ContentIdeas|keyword_objects|keyword_cluster|writer` (`**/backend/**`), 20 results

Now I'll collect the status fields and relationships for the relevant models (planner + writer modules) to summarize the current flow. I'll search the models for status choices and FK relationships.

Searched for regex `class .*models.py|status = models.CharField|class .*\(|from .*modules.*writer.models|class Tasks|class Content` (`**/backend/**`), 20 results

Searched for files matching `**/backend/igny8_core/modules/writer/**/models.py`, 1 match

Read [](file:///data/app/igny8/backend/igny8_core/modules/writer/models.py)

Read [](file:///data/app/igny8/backend/igny8_core/business/content/models.py#1-1), lines 1 to 500
Next I'll outline what I found about the current flow, then propose a safe, testable plan (in table form) to implement the backward-deletion and deletion-limits behavior you described.

**Quick summary of current state (from code + docs)**

- Core pipeline (forward): Keyword -> Cluster -> ContentIdea -> Task -> Content -> Images -> (Review -> Publish).
- Relevant models & status fields:
  - `planner.Keywords` — statuses: `('new','New'), ('mapped','Mapped')` (field `status`, default `'new'`).
  - `planner.Clusters` — statuses: `('new','New'), ('mapped','Mapped')` (field `status`, default `'new'`).
  - `planner.ContentIdeas` — statuses: `('new','New'), ('queued','Queued'), ('completed','Completed')`.
  - `writer.Tasks` — statuses: `('queued','Queued'), ('completed','Completed')`.
  - `writer.Content` — statuses: `('draft','Draft'), ('review','Review'), ('published','Published')`.
  - `writer.Images` — status default `'pending'` (values: pending/generated/failed).
- Important relationships (from models):
  - `Keywords.seed_keyword` — FK(PROTECT) to the global SeedKeyword (prevents deleting a seed keyword).
  - `Keywords.cluster` — FK(SET_NULL) to `Clusters`. (Deleting a Cluster currently sets `Keywords.cluster = NULL`.)
  - `ContentIdeas.keyword_objects` — M2M to `Keywords`.
  - `ContentIdeas.keyword_cluster` — FK(SET_NULL) to `Clusters`.
  - `Tasks.cluster` — FK(SET_NULL) to `Clusters`.
  - `Content.cluster` — FK(SET_NULL) to `Clusters`.
  - `writer.ContentClusterMap.cluster` — FK(CASCADE) to `Clusters` (mappings are deleted when the cluster is deleted).
- Current behavior you observed:
  - When a `Cluster` is deleted, `Keywords.cluster` becomes NULL (because of SET_NULL) but `Keywords.status` often remains `'mapped'` (incorrect). You want those Keywords to revert to `'new'`.
  - You want deletions to be blocked if there are dependent/mapped downstream records (both single and bulk deletes).
  - You want the status rollback to restore the original pre-mapping status where possible, not just a generic `'new'`.

Plan (high-level) — what we will implement

- Enforce deletion guardrails: prevent deleting an entity if downstream mapped records exist (single + bulk).
- On allowed deletions, perform controlled rollback updates to related records (e.g., set Keywords back to `'new'` if their cluster was removed).
- Track previous statuses at mapping time so rollbacks can restore the original status instead of guessing.
- Add tests and API-level checks to make the behavior explicit and safe.

Action plan (step-by-step) — next work to do

- I will implement this plan in increments and add tests. If you want me to start coding, I will.
- Below is a detailed plan table describing each entity, what blocks its deletion, and what automatic rollback/update should happen when deletion is allowed.
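The default statuses listed above can be captured in one lookup table, which is the natural rollback target when an upstream record is deleted. A minimal sketch (the module names mirror the models above; the dict-based records are illustrative stand-ins for ORM rows):

```python
# Default "fresh record" status per module, taken from the model summary above;
# used as the fallback rollback target when an upstream record is deleted.
DEFAULT_STATUS = {
    "Keywords": "new",
    "Clusters": "new",
    "ContentIdeas": "new",
    "Tasks": "queued",
    "Content": "draft",
    "Images": "pending",
}

def rollback_status(module, records):
    """Reset a batch of child records to their module's default status."""
    fresh = DEFAULT_STATUS[module]
    for rec in records:
        rec["status"] = fresh
    return records

rows = [{"status": "mapped"}, {"status": "mapped"}]
assert all(r["status"] == "new" for r in rollback_status("Keywords", rows))
```

Centralizing the defaults in one place keeps the fallback policy consistent across every delete path instead of hardcoding `'new'` per view.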
**Plan table — Deletion rules & rollback behavior**

| **Entity** | **Blocks Deletion If** | **On Delete (allowed)** | **Implementation Notes / Where to change** |
|-----------|------------------------|-------------------------|-------------------------------------------|
| `Cluster` (`planner.Clusters`) | Any of: <br>- `ContentIdeas` with `keyword_cluster = cluster` (idea mapped) <br>- `writer.ContentClusterMap` entries referencing the cluster (content mapped) <br>- `writer.Tasks` where `cluster = cluster` (active tasks mapped) <br>(Also consider queued writer jobs referencing the cluster) | - If allowed (no blocking downstream): delete the cluster. <br>- After deletion, update `planner.Keywords` that referenced this cluster: set `status` → the previously recorded status (fallback `'new'`) and ensure `cluster` is NULL. | - Add a `pre_delete` signal or ViewSet checks to block deletion when blocking dependencies exist (return 400). <br>- Add a `post_delete` signal to update `Keywords` status for keywords that had `cluster=deleted_pk` (safe because `instance.pk` is still available). <br>- Prefer the pre_delete check (fail fast) plus the post_delete update (apply the rollback). <br>- `ContentIdeas.keyword_cluster` references are already SET_NULL, but we block deletion when ideas exist anyway. |
| `Keyword` (`planner.Keywords`) | Any of: <br>- `keyword.cluster` is not NULL (keyword mapped to a cluster) <br>- `ContentIdeas.keyword_objects` contains this keyword (an idea uses the keyword) <br>- Any other downstream references (e.g., queued processes referencing the keyword) | - If allowed (no blocking references): delete the keyword. <br>- On delete: if this keyword was previously mapped (and we recorded its prior status), restore that status on other objects that referenced it (rare). | - Add `pre_delete` on Keywords to check the `contentideas` M2M and `cluster`. <br>- For UI/bulk delete, the ViewSet should attempt deletion per keyword and return details about blocked vs deleted items. |
| `ContentIdea` (`planner.ContentIdeas`) | Any of: <br>- `writer.Tasks` referencing the idea (`Tasks.idea`) <br>- `writer.Content` or `ContentClusterMap` referencing the idea indirectly (if used) | - If allowed: delete the idea; if the idea was the reason a cluster was mapped, optionally revert the cluster status — but clusters are normally independent. | - Add `pre_delete` to check `Tasks.exists()` and block if tasks exist. <br>- On an allowed delete, remove any leftover associations (M2M to keywords) and optionally update the `Keywords` mapped status if the mapping implied a change. |
| `Task` (`writer.Tasks`) | Content (final content) or images already generated for that task, or the task is currently in progress/queued | - If allowed: delete the task; on delete, optionally set statuses on dependent `Images` or `Content` back to their previous states if they were generated as a result of this task. | - Check `Images` or `Content` referencing the task. Block if final content exists. |
| `Content` (`writer.Content`) | It is published, or external sync state would be impacted; or `ContentClusterMap` or `Images` depend on it | - If allowed: delete the content; update `ContentClusterMap` (cascade) and optionally change the `cluster` or other upstream statuses if appropriate. | - Block deletion if it is published or external state exists (optional policy). |
Notes:

- The "Blocks Deletion If" checks must be applied both for a single `DELETE` (ViewSet destroy) and for bulk delete endpoints. Bulk deletes should iterate over items and return a structured result: `{deleted: n, skipped: [(id, reason), ...]}`.
- Restoring the "original status" requires recording the previous status at the time of the mapping operation (e.g., when a Keyword is mapped to a Cluster we should save its prior status). Without that, we must fall back to safe defaults (Keywords -> `'new'`, Clusters -> `'new'`).
- There are two layers of enforcement:
  1. Model-level signals (pre_delete/post_delete) — catch deletes triggered anywhere (shell, admin, bulk queryset.delete). pre_delete allows blocking; post_delete can update related rows.
  2. API-level ViewSet checks (destroy/bulk_delete overrides) — give user-friendly error messages and prevent accidental DB operations. Both layers should be used together.
Implementation details / code plan (concrete changes)

1. Add status history (preferred): a new DB model and migration
   - Model: `StatusHistory` (generic), or add a `previous_status` JSONField on the mapping models:
     - Fields: `model_name`, `object_id`, `field_name`, `previous_value`, `changed_by`, `changed_at`, `context` (why it changed).
   - On mapping events (e.g., when `Keyword.cluster` or `ContentIdea.keyword_cluster` is set), record the prior status in `StatusHistory`.
   - Benefit: allows a precise rollback to the "original status".

2. Signals & checks
   - `pre_delete` signal for `Clusters`:
     - Query for blocking dependencies (ContentIdeas, ContentClusterMap, Tasks). If any exist, raise `ProtectedError` or return an API error.
   - `post_delete` signal for `Clusters`:
     - The cluster FK will already be NULL (SET_NULL) once the cluster is gone, so use the deleted pk to find the affected keywords and set their `status` back to `previous_value` (from StatusHistory) or the `'new'` fallback.
   - `pre_delete` for `Keywords`:
     - Check `contentideas_set.exists()` or `keyword.cluster is not None` and block the deletion if needed.
   - Similar pre_delete/post_delete logic for `ContentIdeas` and `Tasks` as required.

3. ViewSet changes (API UX)
   - Override `destroy` in `ClusterViewSet` / `KeywordViewSet` to:
     - Run the dependency check and return 400 with a helpful message if blocked.
   - Improve the `bulk_delete` actions so they iterate and, for each record:
     - Attempt deletion (or perform the same checks) and return a detailed report of skipped items and reasons (do not fail the whole batch unless you prefer that policy).

4. Tests
   - Add unit & integration tests verifying:
     - Deletion is blocked when downstream objects exist.
     - Deletion is allowed when no downstream objects exist; verify keyword statuses change to the recorded previous values (or the `'new'` fallback).
     - Bulk delete returns the expected report for mixed batches.

5. Data migration (if implementing StatusHistory)
   - Optionally scan existing mapped keywords/clusters and set `previous_status` for future rollbacks (best effort).
   - Communicate that existing historical pre-mapping states cannot be reconstructed and will default to safe values.
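The StatusHistory idea in step 1 can be sketched in a few lines. A real version would be a Django model with a migration; here it is an in-memory log, and all record shapes and field names are illustrative:

```python
# Minimal sketch of StatusHistory: record the prior value at mapping time
# so an exact rollback is possible later, instead of guessing 'new'.
history = []

def record(model_name, object_id, field, previous_value, context=""):
    """Append one history entry; in Django this would be a model row."""
    history.append({"model": model_name, "object_id": object_id,
                    "field": field, "previous": previous_value,
                    "context": context})

def last_previous(model_name, object_id, field, fallback):
    """Most recent recorded previous value, or the fallback when no history exists."""
    for entry in reversed(history):
        if (entry["model"], entry["object_id"], entry["field"]) == (model_name, object_id, field):
            return entry["previous"]
    return fallback  # e.g. 'new' for Keywords with no recorded history

keyword = {"id": 7, "status": "new"}
record("Keywords", 7, "status", keyword["status"], context="mapped to cluster 3")
keyword["status"] = "mapped"
# ... cluster 3 is later deleted, so roll the keyword back ...
keyword["status"] = last_previous("Keywords", 7, "status", "new")
assert keyword["status"] == "new"
```

The fallback argument is what keeps Phase 1 (no history yet) and Phase 2 (history recorded) compatible: the same rollback call works in both.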
Example pseudocode for cluster pre/post delete signals

- pre_delete (Cluster):
  - if ContentIdeas.objects.filter(keyword_cluster=cluster).exists(): raise ProtectedError("Cluster has mapped ideas")
  - if ContentClusterMap.objects.filter(cluster=cluster).exists(): raise ProtectedError("Cluster has mapped contents")
  - if Tasks.objects.filter(cluster=cluster).exists(): raise ProtectedError("Cluster has mapped tasks")
- post_delete (Cluster):
  - keywords_to_fix = Keywords that referenced the deleted pk (found via the stored history, since the FK is already NULL by now)
  - For each keyword: set status = previous_status if recorded, else 'new'; save()

Bulk delete endpoint pattern (safe)

- For each id in request.data['ids']:
  - Try: run the same pre-delete checks; if they fail, append the id to skipped with the reason
  - Else: perform instance.delete(); append the id to the deleted list
- Return success_response({deleted_count, skipped: [{id, reason}]})
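The bulk-delete pattern above can be sketched as a reusable, framework-free function. The `fetch`/`check_blockers`/`delete` callables stand in for the ORM lookups and instance deletion; all names are illustrative:

```python
def bulk_delete(ids, fetch, check_blockers, delete):
    """Per-id delete with a structured report; blocked items are skipped
    rather than failing the whole batch."""
    report = {"deleted": [], "skipped": []}
    for pk in ids:
        obj = fetch(pk)
        if obj is None:
            report["skipped"].append({"id": pk, "reason": "Not found"})
            continue
        blockers = check_blockers(obj)
        if blockers:
            report["skipped"].append({"id": pk, "reason": "Mapped records exist",
                                      "details": blockers})
            continue
        delete(obj)
        report["deleted"].append(pk)
    return report

db = {1: {"id": 1}, 2: {"id": 2}}
blocked = {2}
report = bulk_delete(
    [1, 2, 3],
    fetch=db.get,
    check_blockers=lambda o: [("ContentIdeas", 9, "Some idea")] if o["id"] in blocked else [],
    delete=lambda o: db.pop(o["id"]),
)
assert report["deleted"] == [1]
assert [s["id"] for s in report["skipped"]] == [2, 3]
```

Because each module only differs in its blocker checks, one function like this can back every planner and writer `bulk_delete` endpoint.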
Risk & considerations

- Race conditions: concurrent operations could interleave with the checks; use DB transactions and row locking for the critical checks if necessary.
- Legacy data: without historical statuses you cannot perfectly restore a "previous status" — the plan needs a small migration and a fallback policy.
- UX: blocking deletions can confuse users; consider adding a `force=true` option that deletes and also cascades/clears downstream records (with warnings).
- Performance: bulk-delete checks must be optimized (use `.exists()` queries and `prefetch_related`).

Estimated implementation steps & time (rough)

- Phase 1 (1–2 days): Add model-level `pre_delete` checks and ViewSet-level guards for `Clusters` and `Keywords`; add a post_delete handler to set `Keywords.status='new'` when their cluster was removed (fallback behavior). Add unit tests.
- Phase 2 (1–2 days): Add the StatusHistory model and update the mapping operations to record previous statuses. Extend the post_delete logic to restore recorded previous states. Add the migration + tests.
- Phase 3 (0.5–1 day): Apply the same pattern across the other modules (Ideas -> Tasks -> Content) with fine-grained policies. Add integration tests for bulk ops and UI acceptance tests.
- Phase 4 (0.5 day): Documentation update and release notes.

Concrete next steps I can take now (pick any):

- A: Implement Phase 1 now: add pre_delete + post_delete for `Clusters` & `Keywords` and update the `bulk_delete` endpoints to return detailed reports. I will write tests for the cluster delete behavior. (I can start coding immediately.)
- B: Draft the exact schema for `StatusHistory` and a migration; then implement mapping-time recording hooks (this is needed to fully restore previous statuses).
- C: Implement the API-level UX: `bulk_delete` returns a breakdown of deleted/skipped items and reasons for all planner & writer modules.

Which step do you want me to implement first? If you want me to proceed, I'll:

- start with Phase 1 (safe changes + tests), or
- prepare the `StatusHistory` schema and migration (Phase 2).

If you prefer, I can also open a PR-style patch and run the relevant unit tests locally — tell me whether to start with Phase 1 (the fast safety fix) or Phase 2 (the longer, reversible solution).
bluesalman: I asked for a document covering only the safety part: a record cannot be deleted if it currently has an associated/mapped record upstream. That is enough — no need for a huge backup database, we already have daily backups for that.

So here delete eligibility is enough: no upstream record present. If one exists, deletion gives an error with a message saying you cannot delete while associated, showing the name of the upstream module and the record title, and telling the user to delete that record in that module first.

And the second thing I asked for is updating the downstream module status when an upstream record is deleted.

Both can be tracked easily through the current status of a record: we know every module's default "new" status, and if a record's status differs from it we need to check upstream and give this error. And when a record is deleted we just need to set the previous downstream module record back to whatever the fresh new-record status is.

Simple, very simple.
GitHub Copilot: I'll add a short todo list for implementing the safety-only deletion policy and then present the requested document with rules, error messages, and implementation notes.

Completed: *Draft safety deletion policy doc* (1/6)
**Safety-Only Deletion Policy & Implementation Plan**

**Scope:**

- Minimal, safe behavior only: block deletes when there are currently associated/mapped upstream records. Provide a clear error message telling the user which upstream records block the deletion and what to remove first.
- When deletion is allowed (no upstream associations), perform the delete and set the downstream (child) records' statuses to the module default (e.g., `Keywords` → `'new'`) — a simple fallback, no full status-history tracking.
- Applies to single-record `DELETE` and API `bulk_delete` actions across the planner and writer modules.

**Assumptions:**

- Existing daily DB backups suffice; no historical status audit is required here.
- The default "new" statuses are known from the models (planner/writer models). Use those defaults for rollback.
- We implement the checks at the API level (ViewSets) — this prevents user-driven deletes via the UI/API and provides friendly messages. (Optional: add model `pre_delete` protections later.)

**Quick Implementation Summary (one-liner):**

- Add a reusable helper that inspects upstream dependencies and returns blocking info; call it from `destroy` and `bulk_delete` for each module; if deletion is allowed, delete and update downstream statuses to the default.
**Table: Entities, Block Conditions, Error Message, On-Delete Behavior, Implementation Location**

| Entity | Blocks Deletion If (upstream exists) | Example Error Message (returned to client) | On Delete (allowed) — downstream updates | Implementation Location |
|--------|--------------------------------------|---------------------------------------------|------------------------------------------|-------------------------|
| `Cluster` (`planner.Clusters`) | Any mapped: `ContentIdeas` with `keyword_cluster`, `writer.ContentClusterMap` entries, `writer.Tasks` where `cluster` is set, or `writer.Content` referencing the cluster | "Cannot delete Cluster 'NAME' — mapped records exist in: ContentIdeas: 'Idea title', Tasks: 'Task title'. Delete those first." | Keywords whose `cluster` becomes NULL should have `status` set to `'new'` (the model default) after cluster deletion. | `ClusterViewSet.destroy` + `ClusterViewSet.bulk_delete` (in views.py). Use the helper in shared utils. |
| `Keyword` (`planner.Keywords`) | A mapped cluster (`keyword.cluster != NULL`) OR `ContentIdeas` (M2M) referencing the keyword | "Cannot delete Keyword 'TERM' — mapped to Cluster 'CLUSTER_NAME' and used in Idea 'IDEA_TITLE'. Delete those first." | If allowed, remove the M2M references in `ContentIdeas` (deleting a keyword affects little downstream). | `KeywordViewSet.destroy` + `bulk_delete` (planner views). |
| `ContentIdea` (`planner.ContentIdeas`) | `writer.Tasks` with `idea` set, or `Content` that used this idea during generation | "Cannot delete Idea 'TITLE' — tasks exist: 'Task title'. Delete those tasks first." | If allowed, clear the M2M relations to `Keywords` (no status change needed); update clusters only if needed for consistency (usually no change). | `ContentIdeasViewSet.destroy` + `bulk_delete` (planner views). |
| `Task` (`writer.Tasks`) | `writer.Content` generated from this task, or `Images` attached to it, or the task is currently in progress/queued | "Cannot delete Task 'TITLE' — content exists: 'Content title'. Delete that content first." | If allowed, cascade-delete the images or set their `task` to NULL (the model currently uses CASCADE); policy: block if `Content` exists. | `TaskViewSet.destroy` + `bulk_delete` (writer views). |
| `Content` (`writer.Content`) | It is published, or `ContentClusterMap`/`Images` reference it (depending on policy) | "Cannot delete Content 'TITLE' — images or content mappings exist. Delete the images/mappings first." | If allowed, `ContentClusterMap` entries cascade; update any upstream mapping flags if needed (e.g., cluster mapped counts). | `ContentViewSet.destroy` + `bulk_delete` (writer views). |
**API behavior / Error format**

- Single delete (DELETE /{module}/{id}/): return HTTP 400 with JSON:
  - `{ "success": false, "error": "Cannot delete X: mapped records present", "details": [ { "module": "ContentIdeas", "id": 123, "title": "Idea title" }, ... ], "message": "Delete the listed records in module Y before deleting this." }`
- Bulk delete (POST /.../bulk_delete/): iterate over the IDs and return a per-id result:
  - `{ "deleted": [1,2], "skipped": [ { "id": 3, "reason": "Mapped records exist", "details": [...] }, ... ] }`
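Shaping that 400 payload from the helper's blocker tuples is a few lines of pure Python. A sketch (the `(module, id, title)` tuple shape follows the helper design in this document; the function name is illustrative):

```python
def blocked_delete_response(entity_label, blockers):
    """Build the error payload from a list of (module, id, title) blockers."""
    details = [{"module": m, "id": i, "title": t} for m, i, t in blockers]
    modules = ", ".join(sorted({d["module"] for d in details}))
    return {
        "success": False,
        "error": f"Cannot delete {entity_label}: mapped records present",
        "details": details,
        "message": f"Delete the listed records in {modules} before deleting this.",
    }

resp = blocked_delete_response(
    "Cluster 'Mobile Apps'",
    [("ContentIdeas", 412, "How to build mobile apps")],
)
assert resp["success"] is False
assert resp["details"][0]["id"] == 412
```

Keeping the payload builder separate from the ViewSets means every module's error responses stay identically shaped for the frontend.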
**Simple Helper (design)**

- Add a small helper util `check_delete_blockers(instance)` that:
  - Accepts a model instance (Cluster/Keyword/Idea/Task/Content).
  - Runs fast `.exists()` queries for the known upstream relationships.
  - Returns a list of blocking tuples, `[(module_name, id, title_field), ...]`, or an empty list if the delete is safe.
- Implement it in `igny8_core.modules.planner.utils` or `igny8_core.api.utils` for reuse.
Example (concise sketch — the `ContentIdeas` import path and the `content__title` lookup are illustrative, not confirmed against the codebase):

```python
from igny8_core.business.content.models import ContentClusterMap, Tasks, Content
from igny8_core.modules.planner.models import ContentIdeas  # illustrative path

def check_delete_blockers_cluster(cluster):
    """Return [(module_name, id, title), ...] for records blocking deletion."""
    blockers = []
    # Ideas mapped to this cluster
    ideas = ContentIdeas.objects.filter(keyword_cluster=cluster).values('id', 'idea_title')[:5]
    blockers += [('ContentIdeas', r['id'], r['idea_title']) for r in ideas]
    # Writer mappings (resolve content titles through the FK)
    maps = ContentClusterMap.objects.filter(cluster=cluster).values('id', 'content__title')[:5]
    blockers += [('ContentClusterMap', r['id'], r['content__title']) for r in maps]
    # Tasks and Content referencing the cluster directly
    tasks = Tasks.objects.filter(cluster=cluster).values('id', 'title')[:5]
    blockers += [('Tasks', r['id'], r['title']) for r in tasks]
    contents = Content.objects.filter(cluster=cluster).values('id', 'title')[:5]
    blockers += [('Content', r['id'], r['title']) for r in contents]
    return blockers
```
**Using the helper in ViewSets**

- In `destroy(self, request, *args, **kwargs)`:
  - `instance = self.get_object()`
  - `blockers = check_delete_blockers_{model}(instance)`
  - If blockers exist: return an error response with the details and message.
  - Else: update the downstream records first, then delete. E.g., for a Cluster: `Keywords.objects.filter(cluster=instance).update(cluster=None, status='new')`, then `instance.delete()`. (Because the cluster FK uses `SET_NULL`, the keywords would be hard to find after the delete; updating before deletion also avoids a race.)
**Bulk delete approach**

- For each id:
  - Try to fetch the object.
  - Run the same `check_delete_blockers` helper.
  - If blockers exist, add the id to `skipped` with reasons.
  - Else, perform the controlled update of downstream fields (set cluster → NULL, status → default), then delete and add the id to `deleted`.
- Return both lists; HTTP 200.

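The per-id loop above can be sketched framework-free; `fetch`, `check_blockers`, and `delete_one` are hypothetical callables standing in for the ORM lookup, the blocker helper, and the transactional update-then-delete, so the reporting logic can be reasoned about (and tested) without Django:

```python
def bulk_delete_report(ids, fetch, check_blockers, delete_one):
    """Apply the single-delete policy per id; never abort the whole batch."""
    deleted, skipped = [], []
    for obj_id in ids:
        obj = fetch(obj_id)
        if obj is None:
            skipped.append({'id': obj_id, 'reason': 'Not found', 'details': []})
            continue
        blockers = check_blockers(obj)
        if blockers:
            skipped.append({'id': obj_id, 'reason': 'Mapped records exist',
                            'details': blockers})
        else:
            delete_one(obj)  # update children to defaults, then delete
            deleted.append(obj_id)
    return {'deleted': deleted, 'skipped': skipped}
```

In a DRF `bulk_delete` action, the three callables would be bound to the real queryset, the blocker helper, and the transactional delete described above.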
**Downstream status update (simple):**

- Before deleting an upstream record, find its child records and set their status to the model default. Examples:
  - When deleting a `Cluster`:
    - Keywords: `Keywords.objects.filter(cluster=cluster).update(cluster=None, status='new')`
    - ContentIdeas: deletion is blocked while ideas exist, so none should remain; if the model ever used `SET_NULL`, ensure a consistent state.
  - When deleting a `Keyword`:
    - Remove it from the `ContentIdeas.keyword_objects` M2M (or block the delete if it is used in ideas, per policy).
- Important: run these updates inside a DB transaction so partial states cannot occur.

**User-friendly message examples**

- Single delete error:
  - "Cannot delete Cluster 'Mobile Apps' — mapped records exist: ContentIdeas: 'How to build mobile apps' (ID 412), Tasks: 'Write mobile apps guide' (ID 776). Delete those first."
- Bulk delete report:
  - `{ "deleted": [5,7], "skipped": [ {"id":3,"reason":"Mapped records exist","details":[{"module":"ContentIdeas","id":412,"title":"How to build mobile apps"}]} ] }`

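A minimal formatter that produces the single-delete message style shown above; the function name and the `(module, id, title)` blocker shape are assumptions for this sketch, not existing project code:

```python
def format_blocker_message(entity, name, blockers):
    """Build the single-delete error text from (module, id, title) blockers."""
    parts = [f"{module}: '{title}' (ID {obj_id})" for module, obj_id, title in blockers]
    return (f"Cannot delete {entity} '{name}' - mapped records exist: "
            + ", ".join(parts) + ". Delete those first.")
```

Keeping the formatting in one helper means the single-delete and bulk-delete paths emit identical wording.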
**Testing to add (minimal)**

- Unit tests:
  - Attempt to delete a `Cluster` with a `ContentIdea` pointing to it → assert 400 and that the error contains the idea title.
  - Delete a `Cluster` with no upstream → perform the deletion and assert that `Keywords` which referenced the cluster now have `status == 'new'` and `cluster == None`.
  - Bulk-delete a mix of deletable and blocked items → assert correct `deleted`/`skipped` lists.
- Integration tests for planner → writer interactions:
  - Create cluster → create content mapping → attempt cluster delete → assert blocked.

**Developer notes & safe rollout**

- Implement API-level checks first (fast). This prevents UI/API deletes and gives friendly messages. Tests are added before merging. No DB schema changes needed.
- Optional later improvement: add a small `StatusHistory` model or `previous_status` field if precise restoration is ever required.
- For backwards compatibility: existing code that calls `.delete()` outside the viewsets (e.g., admin or scripts) won't get the friendly error unless we also add `pre_delete` model handlers — recommend adding `pre_delete` signals after the API behavior is in place.

**Minimal code snippet to return an error in a ViewSet (DRF style)**

```python
from django.db import transaction
from rest_framework import status

from igny8_core.api.response import error_response, success_response


def destroy(self, request, *args, **kwargs):
    instance = self.get_object()
    blockers = check_delete_blockers_cluster(instance)
    if blockers:
        details = [{'module': m, 'id': obj_id, 'title': title}
                   for m, obj_id, title in blockers]
        return error_response(
            error=f"Cannot delete Cluster '{instance.name}' - mapped records exist",
            request=request,
            status_code=status.HTTP_400_BAD_REQUEST,
            data={'details': details, 'message': 'Delete the listed records first'}
        )
    # Safe path: update child records, then delete, inside one transaction
    with transaction.atomic():
        Keywords.objects.filter(cluster=instance).update(cluster=None, status='new')
        instance.delete()
    return success_response(data={'deleted_id': kwargs.get('pk')}, request=request)
```

**Next steps (recommended immediate actions)**

- I can implement Phase 1 changes (API-level checks + bulk_delete reporting + tests) now. This is low-risk and requires no migrations.
- If you want, I will:
  - Implement the helper utils, update `ClusterViewSet.destroy` and `bulk_delete` (and `KeywordViewSet` similarly), and add unit tests for the blocking behavior.
  - Run the tests locally and show results.

Which immediate action do you want me to take now:

- A) Implement Phase 1 (API-level checks + bulk delete + tests), or
- B) Only produce the final doc (above) for handoff and stop here?

bluesalman: only make the document

GitHub Copilot: **Safety Deletion Policy — Backward-Deletion Rules & Implementation Doc**

**Purpose:**

- Define a minimal, safe deletion policy: forbid deletes when currently associated/mapped upstream records exist; provide a clear error message listing the blocking records and how to resolve them; when a deletion is permitted, update downstream statuses to the module default (no historical audit).
- This is an API-level safety document only — no DB audit/history is required.

**Scope:**

- Planner + Writer modules (primary entities): `Clusters`, `Keywords`, `ContentIdeas`, `Tasks`, `Content`, `Images`.
- Applies to single-record `DELETE` and API `bulk_delete` actions.
- Behavior must be consistent and user-friendly (clear messages).

**High-level Rules (short):**

- Block the delete if any upstream/mapped record exists. Return HTTP 400 with structured details (module, id, title) and an instruction to delete the mapped records first.
- For an allowed delete: in a DB transaction, update affected downstream child records to their model defaults (e.g., `Keywords.status = 'new'`), then delete the record.
- Bulk delete: iterate over the IDs, perform the same checks per item, and do not abort the whole batch. Return per-id results (deleted/skipped with reason).

**Entity Rules Table**

| Entity | Blocks Deletion If (upstream exists) | Error Message (client) | On Delete (allowed) — downstream updates | Implementation Notes |
|---|---|---|---|---|
| Cluster (`planner.Clusters`) | Any mapped items: `ContentIdeas` (keyword_cluster), `writer.ContentClusterMap`, `writer.Tasks` with `cluster`, `writer.Content` referencing cluster | "Cannot delete Cluster 'NAME' — mapped records exist in: ContentIdeas: 'Idea title' (ID). Delete those first." | Before delete: set `Keywords` referencing cluster → `cluster = NULL` and `status = 'new'` (model default). Then delete the cluster. | Implement in `ClusterViewSet.destroy` and `bulk_delete`. Use helper `check_delete_blockers(cluster)`. Run updates inside `transaction.atomic()` and perform child updates before `instance.delete()` to avoid orphan visibility. |
| Keyword (`planner.Keywords`) | `keyword.cluster != NULL` OR M2M `ContentIdeas` referencing this keyword | "Cannot delete Keyword 'TERM' — mapped to Cluster 'CLUSTER' and used in Idea 'IDEA_TITLE'. Delete those first." | If allowed, remove M2M references (orphan removal). No status rollback needed upstream. | Implement in `KeywordViewSet.destroy` + `bulk_delete`. For M2M, block deletion unless explicit policy allows cascade. |
| ContentIdea (`planner.ContentIdeas`) | `writer.Tasks` where `idea` is set; `Content` generated from the idea | "Cannot delete Idea 'TITLE' — tasks exist: 'Task title'. Delete those tasks first." | If allowed, clear M2M `keyword_objects`. | Implement in `ContentIdeasViewSet`. Block if `Tasks.exists()`; otherwise delete inside a transaction. |
| Task (`writer.Tasks`) | `Content` generated from the task exists OR final images are attached | "Cannot delete Task 'TITLE' — content exists: 'Content title'. Delete that content first." | If allowed, delete or clear `Images` per policy (prefer blocking if final content exists). | Implement in the writer `TaskViewSet`. Prefer a strict block if `Content` exists. |
| Content (`writer.Content`) | Published content, or `ContentClusterMap`/`Images` referencing it | "Cannot delete Content 'TITLE' — images or mappings exist. Delete them first." | If allowed, delete `ContentClusterMap` entries (they `CASCADE`) and `Images` per current model behavior. Update cluster or counters if needed. | Implement in the writer `ContentViewSet`. Consider blocking published-content deletion by policy. |
| Images (`writer.Images`) | Typically safe to delete unless other policies apply (e.g., external sync) | "Cannot delete Image 'ID' — blocked by external sync." | If allowed, delete the image record. | Implement in `ImagesViewSet` if present; otherwise admin deletion is allowed. |

**API Response Examples**

- Single-delete error (HTTP 400):

  {
    "success": false,
    "error": "Cannot delete Cluster 'Mobile Apps' — mapped records exist",
    "details": [
      {"module": "ContentIdeas", "id": 412, "title": "How to build mobile apps"},
      {"module": "Tasks", "id": 776, "title": "Write mobile apps guide"}
    ],
    "message": "Delete the listed records in their modules first, then retry."
  }

- Bulk delete response (HTTP 200):

  {
    "deleted": [5, 7],
    "skipped": [
      {"id": 3, "reason": "Mapped records exist", "details": [{"module": "ContentIdeas", "id": 412, "title": "How to build mobile apps"}]}
    ],
    "message": "Some items skipped due to mapped records. See details."
  }

- Success (single delete):

  {
    "success": true,
    "data": {"deleted_id": 12}
  }

**Helper design (concise)**

- New small utility in `igny8_core.api.utils` (or `igny8_core.modules.planner.utils`) with functions:
  - `check_delete_blockers_for_cluster(cluster) -> List[dict]`
  - `check_delete_blockers_for_keyword(keyword) -> List[dict]`
  - (others as needed)
- Implementation pattern:
  - Fast `.exists()` queries and `.values('id', 'title_field')[:5]` to show up to N blockers (avoid heavy scans).
  - Return a short list of blockers with module name, id, and display title.

Example pseudocode:

    def check_delete_blockers_cluster(cluster):
        blockers = []
        ideas = ContentIdeas.objects.filter(keyword_cluster=cluster).values('id', 'idea_title')[:5]
        blockers += [{'module': 'ContentIdeas', 'id': r['id'], 'title': r['idea_title']} for r in ideas]
        tasks = Tasks.objects.filter(cluster=cluster).values('id', 'title')[:5]
        blockers += [{'module': 'Tasks', 'id': r['id'], 'title': r['title']} for r in tasks]
        # sample content references via the cluster map
        maps = ContentClusterMap.objects.filter(cluster=cluster).values('id', 'content_id')[:5]
        blockers += [{'module': 'ContentClusterMap', 'id': r['id'], 'title': f"Content #{r['content_id']}"} for r in maps]
        return blockers

**Delete execution flow (safe sequence)**

- For single delete in a ViewSet:
  1. Fetch the instance via `self.get_object()`.
  2. Run `blockers = check_delete_blockers_*`.
  3. If `blockers`: return `error_response` (HTTP 400) with `details`.
  4. Else, inside `transaction.atomic()`:
     - Update children to defaults BEFORE deleting upstream, e.g. `Keywords.objects.filter(cluster=instance).update(cluster=None, status='new')`.
     - Call `instance.delete()`.
     - Return a success response.
- For bulk delete:
  - Iterate over the ids; for each id, apply the same procedure; collect `deleted` and `skipped` with reasons; return both lists. Do not abort the entire batch on one failure.

**Downstream status update rules (simple fallback)**

- When an upstream mapping is removed (allowed delete), use the model default for the downstream status:
  - `Keywords` default: `'new'` → set status `'new'`.
  - `Clusters` default: `'new'` (if applicable).
  - `ContentIdeas` default: `'new'`.
  - `Tasks` default: `'queued'`.
  - `Content` default: `'draft'`.
  - `Images` default: `'pending'`.
- Implementation: set the downstream `status` explicitly in the pre-delete update, inside the same transaction.

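The defaults listed above can be centralized in one mapping so viewsets and tests don't hard-code the strings independently; this constant and helper are illustrative sketches, not existing project code:

```python
# Assumed defaults, taken from the rules above; keep in one place so
# pre-delete updates and tests agree on the fallback status per model.
DEFAULT_STATUS = {
    'Keywords': 'new',
    'Clusters': 'new',
    'ContentIdeas': 'new',
    'Tasks': 'queued',
    'Content': 'draft',
    'Images': 'pending',
}


def default_status_for(model_name):
    """Return the fallback status for a model, or raise if unknown."""
    try:
        return DEFAULT_STATUS[model_name]
    except KeyError:
        raise ValueError(f"No default status defined for {model_name!r}")
```

A pre-delete update would then read, e.g., `update(cluster=None, status=default_status_for('Keywords'))`.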
**DB & Concurrency notes**

- Run the updates + delete in a single `transaction.atomic()` block to avoid transient inconsistent states.
- Prefer updating children before the deletion to avoid races where FK `ON DELETE` behavior may set NULL after the delete. Example: call `Keywords.objects.filter(cluster=instance).update(cluster=None, status='new')`, then `instance.delete()`.

**Testing checklist (minimal, priority)**

- Block tests:
  - Attempt to delete a `Cluster` that has `ContentIdeas` → assert HTTP 400 and that the response includes the idea title.
  - Bulk delete a mix of deletable and blocked clusters → assert deleted/skipped counts & reasons.
- Allowed-delete tests:
  - Delete a `Cluster` with no blockers → assert that `Keywords` previously referencing it have `status == 'new'` and `cluster == None`.
- Regression check:
  - Ensure existing `bulk_delete` endpoints preserve their format but return the enhanced report.

**Developer implementation notes**

- Where to change:
  - Planner viewset: views.py — implement the `destroy` override and the enhanced `bulk_delete`.
  - Writer viewsets: `backend/igny8_core/modules/writer/*` (or the business/content viewsets) — add `destroy` + `bulk_delete` checks similarly.
  - Add a reusable helper module: `backend/igny8_core/api/utils/delete_checks.py` (or similar).
- No DB migrations are required for the minimal policy.
- Optional later improvement: add a `StatusHistory` model if precise original-status restoration is desired.

**UX / Messaging guidance**

- Keep messages short and actionable: include the blocking module, the record title, and the next step (delete the blocking records first). Example:
  - "Cannot delete Cluster 'NAME' — mapped records exist in ContentIdeas: 'Idea title' (ID). Delete those items in Planner → Ideas first, then retry."
- For bulk operations, provide counts and a link or list to the specific blocking records.

**Rollout / Deployment**

- Implement the API-level checks and tests, then deploy to staging. Verify the UI flows (single and bulk deletes) show messages and blocked items correctly.
- After staging verification, deploy to production. No DB schema migration is required.

---

If you want, I can now:

- Produce a ready-to-copy developer ticket/PR patch that adds the helper and one example ViewSet implementation (e.g., `ClusterViewSet.destroy` + `bulk_delete`) plus unit tests — a small, safe Phase 1 patch.
- Or stop here with this document for handoff.

Which do you prefer?