658 lines
45 KiB
Markdown
bluesalman: https://app.igny8.com/setup/add-keywords

When keywords are added to the workflow from this page, the status in Planner/Keywords for those keywords is saved as "pending", which is wrong per the new plan.

Please check and fix it so that newly added seed keywords in Planner/Keywords have status "New" instead of "pending".
GitHub Copilot: I'll search the backend for the endpoint and code handling adding keywords, then locate where status is set to "pending" and change it to "New". I'll first add a short todo plan.
Created 5 todos

Now I'll search the backend for route/view names and keywords insertion code. I'll look for "add-keywords", "add_keywords", "seed keyword", and "planner" references.

Searched for regex `add-keywords|add_keywords|add_keyword|setup/add-keywords|seed keyword|seed_keywords|planner/keywords|planner keywords|save.*status.*pending` (`**/backend/**`), 20 results

Searched for regex `SeedKeyword|seedkeyword|seed_keywords|SeedKeywords|Seed Keyword|SeedKeywordStatus|status='pending'` (`**/backend/**`), 20 results

Searched for regex `setup/add-keywords|add_keywords|setup.views|def add_keywords|/setup/add-keywords` (`**/backend/**`), 20 results

Searched for files matching `**/backend/**/setup*`, no matches

Searched for regex `'pending'|"pending"|Pending` (`**/backend/**`), 20 results

Searched for files matching `**/backend/igny8_core/business/planning`, no matches

Read [](file:///data/app/igny8/backend/igny8_core/business)

Read [](file:///data/app/igny8/backend/igny8_core/business/planning)

Read [](file:///data/app/igny8/backend/igny8_core/business/planning/models.py)

Searched for regex `Keywords(objects|.create|.bulk_create|Keywords\()|add_keyword(s)?|seed_keyword.*create|save\(.*seed_keyword` (`**/backend/**`), 20 results

Searched for regex `add-keywords|/setup/add-keywords|setup/add-keywords|addKeywords|add_seed` (`**/frontend/**`), 6 results

Read [](file:///data/app/igny8/frontend/src/App.tsx#330-330), lines 330 to 410

Searched for text `IndustriesSectorsKeywords` (`**/frontend/src/**`), 5 results

Read [](file:///data/app/igny8/frontend/src/pages/Setup/IndustriesSectorsKeywords.tsx#1-1), lines 1 to 400

Read [](file:///data/app/igny8/frontend/src/pages/Setup/IndustriesSectorsKeywords.tsx#400-400), lines 400 to 800

Searched for text `addSeedKeywordsToWorkflow` (`**/backend/**|frontend/**`), no results

Searched for regex `addSeedKeywords|addSeedKeywordsToWorkflow|add_seed_keywords_to_workflow` (`**/frontend/**`), 6 results

Read [](file:///data/app/igny8/frontend/src/services/api.ts#1880-1880), lines 1880 to 1935

Searched for text `bulk_add_from_seed` (`**/backend/**`), 3 results

Read [](file:///data/app/igny8/backend/igny8_core/modules/planner/views.py#1-1), lines 1 to 420
I'll change the default status from 'pending' to 'new' in the planner bulk add view, then update the todo list to mark steps done. Applying the patch now.
Using "Apply Patch"
Completed: *Find add-keywords handler* (1/5)
Update complete — quick summary and next steps

- **What I changed**: In views.py I updated the `bulk_add_from_seed` action so newly created `Keywords` use `'status': 'new'` instead of `'pending'`.
- **Why**: The `Keywords` model defines allowed statuses with `'new'` representing newly added seed keywords; setting `'pending'` was inconsistent with the new plan and caused incorrect state.
- **Files modified**:
  - views.py — changed default `status` in `get_or_create(..., defaults={...})`.
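One pitfall worth keeping in mind here (a generic illustration of Django semantics, not this project's code): `get_or_create` applies `defaults` only when a new row is created, so keywords that already existed with `'pending'` keep their old status. A minimal pure-Python emulation of that behavior:

```python
def get_or_create(store, key, defaults):
    """Emulates Django's get_or_create(): `defaults` are applied only
    when the record does not exist yet; existing rows are untouched."""
    if key in store:
        return store[key], False  # existing row: defaults are ignored
    store[key] = {"keyword": key, **defaults}
    return store[key], True

# a keyword created before the fix, still carrying the old status
store = {"seo tools": {"keyword": "seo tools", "status": "pending"}}

row, created = get_or_create(store, "seo tools", defaults={"status": "new"})
# row["status"] is still "pending" — the defaults change only affects new rows

new_row, created2 = get_or_create(store, "link building", defaults={"status": "new"})
# new_row["status"] is "new"
```

This is why previously added keywords can still show `pending` after the fix; they would need a one-off data update (e.g., a management command or a queryset `update()`).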
Next recommended steps

- Run the backend test suite, or at least exercise the endpoint manually.
- Example curl (replace token/site/sector IDs as appropriate):
```bash
curl -X POST 'http://localhost:8000/api/v1/planner/keywords/bulk_add_from_seed/' \
  -H "Authorization: Token $AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"seed_keyword_ids":[1,2,3],"site_id":5,"sector_id":12}'
```
- Verify in the UI by adding keywords from `/setup/add-keywords` and checking that the corresponding records in Planner (`/planner/keywords`) have status `New`.
- If you want, I can run the test suite or add a small unit test to assert the default status is `'new'`. Which would you prefer?

Made changes.
bluesalman: Check again whether what you did is correct and logically sound, since newly added keywords still have status "pending".
GitHub Copilot: I'll search the backend for any other places that set a new Keyword's status to 'pending', then inspect the planner serializer to see default creation behavior. I'll run a code search across the backend.
Searched for regex `'pending'|"status': 'pending'|"status": "pending"|defaults={\s*'status': 'pending'` (`**/backend/**`), 20 results

Searched for regex `Keywords.objects.get_or_create|Keywords.objects.create|create\(.*Keywords` (`**/backend/**`), 9 results

Read [](file:///data/app/igny8/backend/igny8_core/modules/planner/views.py#520-520), lines 520 to 640

Read [](file:///data/app/igny8/backend/igny8_core/api/tests/test_integration_planner.py)

Read [](file:///data/app/igny8/backend/igny8_core/modules/planner/management/commands/add_keywords_to_sectors.py)

Read [](file:///data/app/igny8/backend/igny8_core/modules/planner/serializers.py)

Searched for text `planner/keywords|KeywordOpportunities|bulk_update_keywords_status|bulkUpdateKeywordsStatus|status':'pending'` (`**/frontend/**`), 20 results

Read [](file:///data/app/igny8/frontend/src/pages/Planner/Keywords.tsx#1-1), lines 1 to 400

Searched for text `createKeyword(` (`**/frontend/**`), 3 results

Read [](file:///data/app/igny8/frontend/src/services/api.ts#480-480), lines 480 to 580
I'm about to patch the `bulk_add_from_seed` action to enforce `status='new'` after creation (safeguard and log), so we can be certain newly created keywords are marked correctly. I'll apply a small, focused change.
Using "Apply Patch"
Starting: *Run tests or basic sanity checks* (4/5)
Progress update — what's done and next steps

- I added a safeguard in `bulk_add_from_seed` to explicitly set `keyword.status = 'new'` and save it when a Keyword is created. This ensures newly created keywords cannot remain `'pending'` due to DB or other unexpected defaults.
- Files changed:
  - views.py (updated default in `get_or_create` and added explicit status enforcement)

Next actions I suggest (choose one):

- I can run the backend test suite or run a quick Django shell command to simulate the `bulk_add_from_seed` call and verify the created Keywords have `status='new'`. (This requires a running dev environment; tell me if you want me to run tests locally.)
- Or you can try adding keywords from `/setup/add-keywords` now and confirm — if it is still pending, provide a sample request/response or logs and I'll investigate further.

If you want me to run verification here, I can:

- Run a targeted unit test or a Django management shell snippet to call `bulk_add_from_seed` and print the created Keyword statuses.

Tell me which verification you prefer and I'll proceed.

Made changes.
bluesalman: \Planner\Keywords.tsx

Check this file to see whether something on the page is overriding it with the "pending" status.
GitHub Copilot: I'll add an updated TODO, search the frontend for any hardcoded `pending` defaults, then patch the `formData` initial `status` in Keywords.tsx to `'new'`.

Added 2 todos

Searched for regex `status: 'pending'|status: "pending"|'status': 'pending'|"status": "pending"` (`**/frontend/**`), 20 results

Read [](file:///data/app/igny8/frontend/src/pages/Planner/Keywords.tsx#70-70), lines 70 to 110

Using "Apply Patch"

Completed: *Inspect frontend Keywords page* (4/7)
Done — what I checked and changed

- What I inspected:
  - Searched the frontend for hardcoded `pending` occurrences and found that Keywords.tsx initialized the create form's `formData.status` as `'pending'`.
  - Confirmed backend `bulk_add_from_seed` now sets `Keywords` created from seed to `'new'` (we previously updated this).
- What I changed:
  - Updated Keywords.tsx initial `formData.status` from `'pending'` to `'new'`.
  - Also added a safeguard in views.py (in `bulk_add_from_seed`) to explicitly set `keyword.status = 'new'` and save when a keyword is created.
- Files modified:
  - views.py — set defaults to `'new'` and enforce status after create.
  - Keywords.tsx — changed default `formData.status` to `'new'`.

Why this should fix the issue

- The UI create form was explicitly sending `'pending'` when creating keywords via the Planner page; changing the form default to `'new'` prevents that.
- The setup flow (`/setup/add-keywords`) uses the bulk-add API, which now defaults to `'new'` and enforces it, preventing the DB or other code from leaving it as `'pending'`.
Next steps / verification

- To confirm end-to-end, either:
  - Add seed keywords from the UI (`/setup/add-keywords`) and check `/planner/keywords` — new records should show status `New`.
  - Or run this curl (replace token/site/sector IDs) to test the bulk endpoint:
```bash
curl -X POST 'http://localhost:8000/api/v1/planner/keywords/bulk_add_from_seed/' \
  -H "Authorization: Token $AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"seed_keyword_ids":[1,2,3],"site_id":5,"sector_id":12}'
```
- Or run a Django shell check:
```bash
python manage.py shell
# then in shell:
from igny8_core.modules.planner.views import KeywordViewSet
# (or directly create Keywords via the ORM to verify defaults)
```
Would you like me to:

- Run a quick Django shell snippet here to simulate a `get_or_create` and print the created keyword status? (Requires dev environment / DB access)
- Or run the integration tests that touch the planner endpoints? I'll proceed with whichever you prefer.

Made changes.
bluesalman: When a cluster is deleted, the mapped keywords' status is not updated, even though they no longer have a cluster mapped. So when mapped clusters are deleted, the mapping status should be removed and the keywords' status should change back to "new".

Similarly, the whole planner and workflow pipeline works in one dimension, going forward from keyword to cluster to idea to task, to content, to images, to review, to publish, and the previous module's status changes automatically.

Likewise, we need to make sure that when someone tries to delete a record in the pipeline, it should not have any mapped or associated record upstream.

So if a user tries to delete a cluster that has a mapped idea, or similarly tries to delete a keyword that has a mapped cluster, it should not allow the delete.

This applies to both single-record delete and bulk-delete features, for all modules in Planner and Writer.

And when a record in any module is eligible to be deleted because it has no upstream matching/mapped record, then when deleted it should update the status of the previous association back to the original status it had before mapping.

This needs to be carefully planned.

Understand the current flow and statuses from the related Planner/Writer documents and the status mapping tables docs; also look at the codebase.

Then plan it and show me the plan for this backward deletion as well as the delete restrictions — show it to me in chat in table format.
GitHub Copilot: I'll add a small plan (todo list) and then scan the codebase and docs for planner/writer status flows, delete endpoints, and model relationships. I'll collect relevant files to build the deletion plan.
Starting: *Gather planner/writer docs* (1/5)

Searched for regex `class Clusters|class Keywords|class ContentIdeas|class Tasks|class Content|class Image|on_delete=|bulk_delete|bulk_delete_from_seed|@action\(|def destroy\(|def delete\(|bulk_delete` (`**/backend/**`), 20 results

Searched for regex `class Tasks|class Task|class Content|class Image|ContentIdeas|keyword_objects|keyword_cluster|writer` (`**/backend/**`), 20 results

Now I'll collect the status fields and relationships for relevant models (planner + writer modules) to summarize the current flow. I'll search models for status choices and FK relationships.

Searched for regex `class .*models.py|status = models.CharField|class .*\(|from .*modules.*writer.models|class Tasks|class Content` (`**/backend/**`), 20 results

Searched for files matching `**/backend/igny8_core/modules/writer/**/models.py`, 1 match

Read [](file:///data/app/igny8/backend/igny8_core/modules/writer/models.py)

Read [](file:///data/app/igny8/backend/igny8_core/business/content/models.py#1-1), lines 1 to 500
Next I'll outline what I found about the current flow, then propose a safe, testable plan (in table form) to implement the backward-deletion and deletion-limits behavior you described.

**Quick summary of current state (from code + docs)**

- Core pipeline (forward): Keyword -> Cluster -> ContentIdea -> Task -> Content -> Images -> (Review -> Publish).
- Relevant models & status fields:
  - `planner.Keywords` — statuses: `('new','New'), ('mapped','Mapped')` (field `status`, default `'new'`).
  - `planner.Clusters` — statuses: `('new','New'), ('mapped','Mapped')` (field `status`, default `'new'`).
  - `planner.ContentIdeas` — statuses: `('new','New'), ('queued','Queued'), ('completed','Completed')`.
  - `writer.Tasks` — statuses: `('queued','Queued'), ('completed','Completed')`.
  - `writer.Content` — statuses: `('draft','Draft'), ('review','Review'), ('published','Published')`.
  - `writer.Images` — status default `'pending'` (values: pending/generated/failed).
- Important relationships (from models):
  - `Keywords.seed_keyword` — FK(PROTECT) to global SeedKeyword (prevents deleting the seed keyword).
  - `Keywords.cluster` — FK(SET_NULL) to `Clusters`. (Deleting a Cluster currently sets `Keywords.cluster = NULL`.)
  - `ContentIdeas.keyword_objects` — M2M to `Keywords`.
  - `ContentIdeas.keyword_cluster` — FK(SET_NULL) to `Clusters`.
  - `Tasks.cluster` — FK(SET_NULL) to `Clusters`.
  - `Content.cluster` — FK(SET_NULL) to `Clusters`.
  - `writer.ContentClusterMap.cluster` — FK(CASCADE) to `Clusters` (this will delete mappings when the cluster is deleted).
- Current behavior you observed:
  - When a `Cluster` is deleted, `Keywords.cluster` becomes NULL (because of SET_NULL) but the `Keywords.status` often remains `'mapped'` (incorrect). You want those Keywords to revert to `'new'`.
  - You want deletions to be blocked if there are dependent/mapped downstream records (both single and bulk deletes).
  - You want status rollback to the original pre-mapping status where possible, not just a generic `'new'`.
Plan (high-level) — what we will implement

- Enforce deletion guardrails: prevent deleting an entity if downstream mapped records exist (single + bulk).
- On allowed deletions, perform controlled rollback updates to upstream/related records (e.g., set Keywords back to `'new'` if their cluster was removed).
- Track previous statuses at mapping time so rollbacks can restore the original status instead of guessing.
- Add tests and API-level checks to make behavior explicit and safe.

Action plan (step-by-step) — next work to do

- I will implement this plan in increments and add tests. If you want me to start coding, I will.
- Below I present a detailed plan table describing each entity, what blocks deletion, and what automatic rollback/update should happen when deletion is allowed.

**Plan table — Deletion rules & rollback behavior**

- Columns: **Entity** | **Blocks Deletion If** | **On Delete (allowed)** | **Implementation Notes / Where to change**
| **Entity** | **Blocks Deletion If** | **On Delete (allowed)** | **Implementation Notes / Where to change** |
|-----------|------------------------|-------------------------|-------------------------------------------|
| `Cluster` (`planner.Clusters`) | Any of: <br>- `ContentIdeas` with `keyword_cluster = cluster` (idea mapped) <br>- `writer.ContentClusterMap` entries referencing cluster (content mapped) <br>- `writer.Tasks` where `cluster = cluster` (active tasks mapped) <br>(Also consider queued writer jobs referencing cluster) | - If allowed (no blocking downstream): delete cluster. <br>- After deletion, update `planner.Keywords` that referenced this cluster to set `status` → previously-recorded status (fallback `'new'`) and ensure `cluster` is NULL. | - Add `pre_delete` signal or ViewSet checks to block deletion if blocking dependencies exist (return 400). <br>- Add `post_delete` signal to update `Keywords` status for keywords that had `cluster=deleted_pk` (safe because instance.pk is available). <br>- Prefer pre_delete check (fail fast) and post_delete update (apply rollback). <br>- `ContentIdeas` keyword-cluster references are already SET_NULL — but we will block deletion if ideas exist. |
| `Keyword` (`planner.Keywords`) | Any of: <br>- `keyword.cluster` is not NULL (keyword mapped to a cluster) <br>- `ContentIdeas.keyword_objects` contains this keyword (idea uses keyword) <br>- Any other downstream references (e.g., queued processes referencing keyword) | - If allowed (no blocking references): delete keyword. <br>- On delete: if this keyword was previously mapped (we recorded previous status), restore previous status on other objects that referenced it (rare). | - Add `pre_delete` on Keywords to check M2M `contentideas` and `cluster`. <br>- For UI/bulk delete, the ViewSet should call logic that attempts deletion per-keyword and returns details about blocked vs deleted items. |
| `ContentIdea` (`planner.ContentIdeas`) | Any of: <br>- `writer.Tasks` referencing the idea (`Tasks.idea`) <br>- `writer.Content` or `ContentClusterMap` referencing the idea indirectly (if used) | - If allowed: delete idea; if the idea was the reason a cluster was mapped to something, optionally revert cluster status — but clusters are normally independent. | - Add `pre_delete` to check `Tasks.exists()` and block if it exists. <br>- On delete allowed, remove any leftover associations (M2M to keywords) and optionally update `Keywords` mapped status if the mapping implied a change. |
| `Task` (`writer.Tasks`) | Content (final content) or images already generated for that task; or a Task is currently in-progress/queued | - If allowed: delete task; on delete, optionally set statuses on dependent `Images` or `Content` back to previous states if they were generated as a result of this task. | - Check `Images` or `Content` referencing the task. Block if final content exists. |
| `Content` (`writer.Content`) | If published or external sync state would be impacted; if `ContentClusterMap` or `Images` depend on it | - If allowed: delete content; update `ContentClusterMap` (cascade) and optionally change `cluster` or other upstream statuses if appropriate. | - Block deletion if it's published or external state exists (optional policy). |
Notes:

- “Blocks Deletion If” checks must be applied both for single `DELETE` (ViewSet destroy) and bulk delete endpoints. Bulk deletes should iterate items and return a structured result: {deleted: n, skipped: [(id, reason), ...]}.
- Restoration of the "original status" requires recording the previous status at the time of the mapping operation (e.g., when a Keyword is mapped to a Cluster we should save the prior status). Without that, we must fall back to safe defaults (Keywords -> `'new'`, Clusters -> `'new'`).
- There are two layers for enforcement:
  1. Model-level signals (pre_delete/post_delete) — catch deletes triggered anywhere (shell, admin, bulk queryset.delete). pre_delete allows blocking; post_delete can update related rows.
  2. API-level ViewSet checks (destroy/bulk_delete override) — give user-friendly error messages and prevent accidental DB operations. Both should be used together.
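The "record the previous status at mapping time, restore it on rollback" idea can be sketched in plain Python. This is an in-memory stand-in for the proposed `StatusHistory` model — the names and dict shapes here are illustrative, not existing code:

```python
# Minimal in-memory sketch of the StatusHistory idea.
# In a real implementation this would be a Django model written at mapping time.
history = []  # each entry: (model_name, object_id, field, previous_value)

def map_keyword_to_cluster(keyword, cluster_id):
    # record the pre-mapping status so it can be restored later
    history.append(("Keywords", keyword["id"], "status", keyword["status"]))
    keyword["cluster_id"] = cluster_id
    keyword["status"] = "mapped"

def rollback_keyword(keyword):
    # restore the most recent recorded status; fall back to the model default
    for model, oid, field, prev in reversed(history):
        if model == "Keywords" and oid == keyword["id"] and field == "status":
            keyword[field] = prev
            break
    else:
        keyword["status"] = "new"  # safe fallback when no history exists
    keyword["cluster_id"] = None

kw = {"id": 1, "status": "new", "cluster_id": None}
map_keyword_to_cluster(kw, cluster_id=7)  # status becomes "mapped"
rollback_keyword(kw)                      # status restored to "new"
```

The fallback branch is what Phase 1 alone would do; the history lookup is what Phase 2 adds on top.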
Implementation details / code plan (concrete changes)

1. Add status history (preferred): new DB model and migration
   - Model: `StatusHistory` (generic), or add a `previous_status` JSONField on mapping models.
   - Fields: `model_name`, `object_id`, `field_name`, `previous_value`, `changed_by`, `changed_at`, `context` (why changed).
   - On mapping events (e.g., when `Keyword.cluster` is set or `ContentIdea.keyword_cluster` is set), record the prior status in `StatusHistory`.
   - Benefit: allows precise rollback to the "original status".
2. Signals & checks
   - `pre_delete` signal for `Clusters`:
     - Query for blocking dependencies (ContentIdeas, ContentClusterMap, Tasks). If any exist, raise `ProtectedError` or return an API error.
   - `post_delete` signal for `Clusters`:
     - Update `Keywords.objects.filter(cluster=deleted_pk)` — but the cluster FK will already have become NULL (SET_NULL) when the cluster is deleted; use the previous pk to find keywords and set `status` back to `previous_value` (from StatusHistory) or the `'new'` fallback.
   - `pre_delete` for `Keywords`:
     - Check `contentideas_set.exists()` or `keyword.cluster is not None` and block deletion if needed.
   - Similar pre_delete/post_delete logic for `ContentIdeas` and `Tasks` as required.
3. ViewSet changes (API UX)
   - Override `destroy` in `ClusterViewSet` / `KeywordViewSet` to:
     - Run the dependency check and return 400 with a helpful message if blocked.
   - Improve `bulk_delete` actions so they iterate and, for each record:
     - Attempt deletion (or perform the same checks) and return a detailed report of skipped items and reasons (do not fail the whole batch unless you prefer that policy).
4. Tests
   - Add unit & integration tests verifying:
     - Deletion is blocked when downstream objects exist.
     - Deletion is allowed when no downstream objects exist; verify keyword statuses changed to recorded previous values (or the `'new'` fallback).
     - Bulk delete returns the expected report for mixed batches.
5. Data migration (if implementing StatusHistory)
   - Optionally scan existing mapped keywords/clusters and set `previous_status` for future rollbacks (best-effort).
   - Communicate that existing historical pre-mapping states cannot be reconstructed and will default to safe values.
Example pseudocode for cluster pre/post delete signals

- pre_delete (Cluster):
  - if ContentIdeas.objects.filter(keyword_cluster=cluster).exists(): raise ProtectedError("Cluster has mapped ideas")
  - if ContentClusterMap.objects.filter(cluster=cluster).exists(): raise ProtectedError("Cluster has mapped contents")
  - if Tasks.objects.filter(cluster=cluster).exists(): raise ProtectedError("Cluster has mapped tasks")
- post_delete (Cluster):
  - keywords_to_fix = Keywords.objects.filter(cluster__isnull=True, previous_cluster_id=deleted_pk) or Keywords.objects.filter(previous_cluster_id=deleted_pk)  # or use stored history
  - For each keyword: set status = previous_status if present else 'new'; save()
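A runnable, framework-free sketch of that pre/post-delete logic (plain Python, with lists of dicts standing in for querysets; `ProtectedError` and the record shapes are placeholders, not this project's real code):

```python
class ProtectedError(Exception):
    """Stands in for django.db.models.ProtectedError in this sketch."""

def guard_cluster_delete(cluster_id, ideas, content_maps, tasks):
    # pre_delete: block deletion while any downstream record still maps the cluster
    if any(i["cluster_id"] == cluster_id for i in ideas):
        raise ProtectedError("Cluster has mapped ideas")
    if any(m["cluster_id"] == cluster_id for m in content_maps):
        raise ProtectedError("Cluster has mapped contents")
    if any(t["cluster_id"] == cluster_id for t in tasks):
        raise ProtectedError("Cluster has mapped tasks")

def rollback_keywords_after_cluster_delete(cluster_id, keywords):
    # post_delete: revert keywords that pointed at the deleted cluster
    for kw in keywords:
        if kw["cluster_id"] == cluster_id:
            kw["cluster_id"] = None
            kw["status"] = "new"  # fallback when no recorded previous status

keywords = [{"id": 1, "cluster_id": 7, "status": "mapped"},
            {"id": 2, "cluster_id": 8, "status": "mapped"}]
guard_cluster_delete(7, ideas=[], content_maps=[], tasks=[])  # no blockers: delete allowed
rollback_keywords_after_cluster_delete(7, keywords)
# keywords[0] reverts to status "new" with no cluster; keywords[1] is untouched
```

In Django the two functions would be wired to `pre_delete` / `post_delete` receivers for `Clusters`, with the blocker checks done via `.exists()` queries.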
Bulk delete endpoint pattern (safe)

- For each id in request.data['ids']:
  - Try: run the same pre-delete checks; if they fail, append to skipped with the reason
  - Else: perform instance.delete(); append to the deleted list
- Return success_response({deleted_count, skipped: [{id, reason}]})
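The same pattern as a runnable sketch (plain Python; `check_delete_blockers` is a stand-in for the shared helper proposed later, not an existing function):

```python
def check_delete_blockers(record, tasks):
    # illustrative blocker check: a record is blocked while any task maps it
    return [("Tasks", t["id"], t["title"]) for t in tasks
            if t["cluster_id"] == record["id"]]

def bulk_delete(ids, records, tasks):
    deleted, skipped = [], []
    for rid in ids:
        record = records.get(rid)
        if record is None:
            skipped.append({"id": rid, "reason": "Not found"})
            continue
        blockers = check_delete_blockers(record, tasks)
        if blockers:
            skipped.append({"id": rid, "reason": "Mapped records exist",
                            "details": blockers})
            continue
        del records[rid]  # stands in for instance.delete()
        deleted.append(rid)
    return {"deleted": deleted, "skipped": skipped}

records = {1: {"id": 1}, 2: {"id": 2}}
tasks = [{"id": 10, "title": "Write draft", "cluster_id": 2}]
report = bulk_delete([1, 2, 3], records, tasks)
# record 1 is deleted; record 2 is skipped (mapped task); id 3 is skipped (not found)
```

The key design point is that a blocked or missing id never aborts the whole batch — the caller always gets a full deleted/skipped breakdown.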
Risk & considerations

- Race conditions: concurrent operations could race past the checks; use DB transactions and row locking for critical checks if necessary.
- Legacy data: without historical statuses you cannot perfectly restore the "previous status" — the plan needs a small migration & fallback policy.
- UX: blocking deletions can confuse users; consider adding a `force=true` option that deletes and also cascades/clears downstream (with warnings).
- Performance: bulk-delete checks must be optimized (use `.exists()` queries and `prefetch_related`).
Estimated implementation steps & time (rough)

- Phase 1 (1–2 days): Add model-level `pre_delete` checks and ViewSet-level guards for `Clusters` and `Keywords`; add a post_delete handler to set `Keywords.status='new'` when their cluster was removed (fallback behavior). Add unit tests.
- Phase 2 (1–2 days): Add the StatusHistory model and update mapping operations to record previous statuses. Extend the post_delete logic to restore recorded previous states. Add migration + tests.
- Phase 3 (0.5–1 day): Apply the same pattern across the other modules (Ideas -> Tasks -> Content) with fine-grained policies. Add integration tests for bulk ops and UI acceptance tests.
- Phase 4 (0.5 day): Documentation update and release notes.
Concrete next steps I can take now (pick any):

- A: Implement Phase 1 now: add pre_delete + post_delete for `Clusters` & `Keywords` and update `bulk_delete` endpoints to return detailed reports. I will write tests for cluster delete behavior. (I can start coding immediately.)
- B: Draft the exact schema for `StatusHistory` and a migration; then implement mapping-time recording hooks (this is needed to fully restore previous statuses).
- C: Implement API-level UX: `bulk_delete` returns a breakdown of deleted/skipped and reasons for all planner & writer modules.
Which step do you want me to implement first? If you want me to proceed, I’ll:

- start with Phase 1 (safe changes + tests), or
- prepare the `StatusHistory` schema and migration (Phase 2).

If you prefer, I can also open a PR-style patch and run the relevant unit tests locally — tell me to proceed and whether to start with Phase 1 (fast safety fix) or Phase 2 (longer, reversible solution).
bluesalman: I asked for a document with only the safety part: a record cannot be deleted if it currently has an associated/mapped record upstream.

That is enough — no need to build a huge backup database; we already have daily backups for that.

So here, delete eligibility alone is enough: no upstream record present. If there is one, deletion gives an error with a message saying you cannot delete while associated, showing the name of the upstream module and record title, and saying "delete that other record in module XYZ first to delete this one".

And the second thing I asked for is updating the downstream module status when an upstream record is deleted.

Both can be tracked easily through the current status of a record. We know all the modules' default "new" statuses; if a status differs from that, we need to check upstream and give this error.

And when deleted, we just need to set the previous downstream module record to whatever the fresh new-record status is.

Simple, very simple.
GitHub Copilot: I'll add a short todo list for implementing the safety-only deletion policy and then present the requested document with rules, error messages, and implementation notes.
Completed: *Draft safety deletion policy doc* (1/6)
**Safety-Only Deletion Policy & Implementation Plan**

**Scope:**

- Minimal, safe behavior only: block deletes when there are currently associated/mapped upstream records. Provide a clear error message telling the user which upstream records block deletion and what to remove first.
- When deletion is allowed (no upstream associations), perform the delete and set downstream (child) records' statuses to the module default (e.g., `Keywords` → `'new'`) — a simple fallback, no full status-history tracking.
- Applies to single-record `DELETE` and API `bulk_delete` actions across planner and writer modules.

**Assumptions:**

- Existing daily DB backups suffice; no historical status audit is required here.
- Default “new” statuses are known from the models (planner/writer models). Use those defaults for rollback.
- We implement checks at the API level (ViewSets) — this prevents user-driven deletes via UI/API and provides friendly messages. (Optional: add model `pre_delete` protections later.)

**Quick Implementation Summary (one-liner):**

- Add a reusable helper that inspects upstream dependencies and returns blocking info; call it from `destroy` and `bulk_delete` for each module; if allowed, delete and update downstream statuses to the default.
**Table: Entities, Block Conditions, Error Message, On-Delete Behavior, Implementation Location**

| Entity | Blocks Deletion If (upstream exists) | Example Error Message (returned to client) | On Delete (allowed) — downstream updates | Implementation Location |
|--------|--------------------------------------|---------------------------------------------|------------------------------------------|-------------------------|
| `Cluster` (`planner.Clusters`) | Any mapped: `ContentIdeas` with `keyword_cluster`, `writer.ContentClusterMap` entries, `writer.Tasks` where `cluster` set, or `writer.Content` referencing cluster | "Cannot delete Cluster 'NAME' — mapped records exist in: ContentIdeas: 'Idea title', Tasks: 'Task title'. Delete those first." | Keywords whose `cluster` becomes NULL should have `status` set to `'new'` (model default) after cluster deletion. | `ClusterViewSet.destroy` + `ClusterViewSet.bulk_delete` (in views.py). Use helper in shared utils. |
| `Keyword` (`planner.Keywords`) | Mapped cluster (keyword.cluster != NULL) OR `ContentIdeas` (M2M) referencing the keyword | "Cannot delete Keyword 'TERM' — mapped to Cluster 'CLUSTER_NAME' and used in Idea 'IDEA_TITLE'. Delete those first." | N/A (deleting a keyword affects fewer downstream things); if allowed, remove M2M references in `ContentIdeas`. | `KeywordViewSet.destroy` + `bulk_delete` (in planner views). |
| `ContentIdea` (`planner.ContentIdeas`) | `writer.Tasks` with `idea` set, or `Content` that used this idea during generation | "Cannot delete Idea 'TITLE' — tasks exist: 'Task title'. Delete those tasks first." | If allowed, clear M2M relations to `Keywords` (no status change needed); update any clusters if appropriate to stay consistent (usually no change). | `ContentIdeasViewSet.destroy` + `bulk_delete` (planner views). |
| `Task` (`writer.Tasks`) | `writer.Content` generated from this task (`content` referencing it) or `Images` attached to it | "Cannot delete Task 'TITLE' — content exists: 'Content title'. Delete that content first." | If allowed, cascade-delete or set images' `task` to NULL (model currently CASCADE); adopt policy: block if `Content` exists. | `TaskViewSet.destroy` + `bulk_delete` (writer views). |
| `Content` (`writer.Content`) | Published content or `ContentClusterMap`/`Images` referencing it (depending on policy) | "Cannot delete Content 'TITLE' — images or content mappings exist. Delete images/mappings first." | If allowed, `ContentClusterMap` entries cascade; update any upstream mapping flags if needed (e.g., cluster mapped counts). | `ContentViewSet.destroy` + `bulk_delete` (writer views). |
**API behavior / Error format**

- Single delete (DELETE /{module}/{id}/): return HTTP 400 with JSON:
  - { "success": false, "error": "Cannot delete X: mapped records present", "details": [ { "module": "ContentIdeas", "id": 123, "title": "Idea title" }, ... ], "message": "Delete the listed records in module Y before deleting this." }
- Bulk delete (POST /.../bulk_delete/): iterate IDs and return a per-id result:
  - { "deleted": [1,2], "skipped": [ { "id": 3, "reason": "Mapped records exist", "details": [...] }, ... ] }
**Simple Helper (design, pseudo-code)**

- Add a small helper util `check_delete_blockers(instance)` that:
  - Accepts a model instance (Cluster/Keyword/Idea/Task/Content).
  - Runs fast `.exists()` queries for known upstream relationships.
  - Returns a list of blocking tuples, `[(module_name, id, title_field), ...]`, or an empty list if the delete is safe.
- Implement in `igny8_core.modules.planner.utils` or `igny8_core.api.utils` for reuse.

Example (concise Python-like pseudocode):

```python
def check_delete_blockers_cluster(cluster):
    from igny8_core.business.content.models import ContentClusterMap, Tasks, Content

    blockers = []

    # Ideas mapped to this cluster
    ideas = ContentIdeas.objects.filter(keyword_cluster=cluster).values('id', 'idea_title')[:5]
    blockers += [('ContentIdeas', r['id'], r['idea_title']) for r in ideas]

    # Writer mappings (sample content ids; map to titles if needed)
    maps = ContentClusterMap.objects.filter(cluster=cluster).values('id', 'content_id')[:5]
    blockers += [('ContentClusterMap', r['id'], f"content {r['content_id']}") for r in maps]

    tasks = Tasks.objects.filter(cluster=cluster).values('id', 'title')[:5]
    blockers += [('Tasks', r['id'], r['title']) for r in tasks]

    content = Content.objects.filter(cluster=cluster).values('id', 'title')[:5]
    blockers += [('Content', r['id'], r['title']) for r in content]

    return blockers
```

**Using the helper in ViewSets**

- In `destroy(self, request, *args, **kwargs)`:
  - `instance = self.get_object()`
  - `blockers = check_delete_blockers_{model}(instance)`
  - If `blockers`: return an error response with details and a resolution message.
  - Else: proceed with `instance.delete()` and afterwards run the simple downstream updates:
    - e.g., for Cluster: `Keywords.objects.filter(cluster=instance).update(status='new', cluster=None)`
    - Because the cluster FK uses `SET_NULL`, keywords lose their cluster reference at delete time, so you would need to find keywords that had `cluster == pk` before deletion. Best approach: perform the updates before deletion (`Keywords.objects.filter(cluster=instance).update(cluster=None, status='new')`, then delete the cluster) to avoid a race.

**Bulk delete approach**

- For each id:
  - Try to fetch the object.
  - Run the same `check_delete_blockers` check.
  - If blockers exist, add the id to `skipped` with reasons.
  - Else, perform the controlled update of downstream fields (set cluster to NULL, status to default), then delete and add the id to `deleted`.
- Return both lists; HTTP 200.
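
The per-id loop above can be sketched framework-agnostically. Here `fetch_object`, `check_delete_blockers`, and `delete_object` are hypothetical callables standing in for the real ORM lookups and viewset logic:

```python
def bulk_delete_report(ids, fetch_object, check_delete_blockers, delete_object):
    """Apply the per-id deletion policy and build the deleted/skipped report."""
    deleted, skipped = [], []
    for obj_id in ids:
        obj = fetch_object(obj_id)          # e.g., queryset.filter(pk=obj_id).first()
        if obj is None:
            skipped.append({"id": obj_id, "reason": "Not found", "details": []})
            continue
        blockers = check_delete_blockers(obj)
        if blockers:
            skipped.append({"id": obj_id, "reason": "Mapped records exist", "details": blockers})
            continue
        delete_object(obj)                  # downstream updates + delete, in a transaction
        deleted.append(obj_id)
    return {"deleted": deleted, "skipped": skipped}
```

Note that one blocked or missing id never aborts the batch; it only adds an entry to `skipped`.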
**Downstream status update (simple):**

- Before deleting an upstream record, find child records and set their status to the default from the model definitions. Examples:
  - When deleting a `Cluster`:
    - Keywords: `Keywords.objects.filter(cluster=cluster).update(cluster=None, status='new')`
    - ContentIdeas: cluster deletion is blocked while ideas exist; if the model used SET_NULL, ensure a consistent state.
  - When deleting a `Keyword`:
    - Remove it from the `ContentIdeas.keyword_objects` M2M (or block if it is used in ideas, per policy).
- Important: run these updates inside a DB transaction so partial states do not occur.

**User-friendly message examples**

- Single delete error:
  - "Cannot delete Cluster 'Mobile Apps' — mapped records exist: ContentIdeas: 'How to build mobile apps' (ID 412), Tasks: 'Write mobile apps guide' (ID 776). Delete those first."
- Bulk delete report:
  - { "deleted": [5,7], "skipped": [ {"id":3,"reason":"Mapped records exist","details":[{"module":"ContentIdeas","id":412,"title":"How to build mobile apps"}]} ] }

**Testing to add (minimal)**

- Unit tests:
  - Attempt to delete a `Cluster` with a `ContentIdea` pointing to it → assert 400 and the error contains the idea title.
  - Delete a `Cluster` with no upstream records → assert the deletion succeeds and that `Keywords` which referenced the cluster now have `status == 'new'` and `cluster == None`.
  - Bulk-delete a mix of deletable and blocked items → assert correct `deleted`/`skipped` lists.
- Integration tests for planner → writer interactions:
  - Create cluster → create content mapping → attempt cluster delete → assert blocked.

**Developer notes & safe rollout**

- Implement API-level checks first (fast). This prevents UI/API deletes and gives friendly messages. Add tests before merging. No DB schema changes needed.
- Optional later improvement: add a small `StatusHistory` model or a `previous_status` field if precise restoration is ever required.
- Backwards compatibility: existing code that calls `.delete()` outside the viewsets (e.g., admin or scripts) won't get the friendly error unless we also add `pre_delete` model handlers — recommend adding `pre_delete` signals after the API behavior is in place.
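
The `pre_delete` guard mentioned above could share one plain-Python core with the API checks. A minimal sketch, where `DeleteBlockedError` and `guard_delete` are illustrative names and the Django signal wiring itself is omitted:

```python
class DeleteBlockedError(Exception):
    """Raised when a record still has mapped upstream records."""


def guard_delete(instance_label, blockers):
    """Raise a friendly error if any (module, id, title) blockers exist."""
    if blockers:
        names = ", ".join(f"{m}: '{t}' (ID {i})" for m, i, t in blockers)
        raise DeleteBlockedError(
            f"Cannot delete {instance_label} - mapped records exist: {names}. Delete those first."
        )
```

A `pre_delete` signal receiver would then call `guard_delete(str(instance), check_delete_blockers(instance))`, so admin and script deletes hit the same policy as the API.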
**Minimal code snippet to return an error in a ViewSet (DRF style)**

```python
from django.db import transaction
from rest_framework import status

from igny8_core.api.response import error_response, success_response
# Keywords and check_delete_blockers_cluster come from the planner models/utils


def destroy(self, request, *args, **kwargs):
    instance = self.get_object()
    blockers = check_delete_blockers_cluster(instance)
    if blockers:
        details = [{'module': m, 'id': pk, 'title': title} for m, pk, title in blockers]
        return error_response(
            error=f"Cannot delete Cluster '{instance.name}' - mapped records exist",
            request=request,
            status_code=status.HTTP_400_BAD_REQUEST,
            data={'details': details, 'message': 'Delete the listed records first'}
        )
    # Safe path: update child records, then delete, inside one transaction
    with transaction.atomic():
        Keywords.objects.filter(cluster=instance).update(cluster=None, status='new')
        instance.delete()
    return success_response(data={'deleted_id': kwargs.get('pk')}, request=request)
```

**Next steps (recommended immediate actions)**

- I can implement the Phase 1 changes (API-level checks + bulk_delete reporting + tests) now. This is low-risk and requires no migrations.
- If you want, I will:
  - Implement the helper utils, update `ClusterViewSet.destroy`, `bulk_delete`, and `KeywordViewSet` similarly, and add unit tests for the blocking behavior.
  - Run the tests locally and show the results.

Which immediate action do you want me to take now:

- A) Implement Phase 1 (API-level checks + bulk delete + tests), or
- B) Only produce the final doc (above) for handoff and stop here?

bluesalman: only make the document

GitHub Copilot: **Safety Deletion Policy — Backward-Deletion Rules & Implementation Doc**

**Purpose:**

- Define a minimal, safe deletion policy: forbid deletes when currently associated/mapped upstream records exist; provide a clear error message listing the blocking upstream records and how to resolve them; when a deletion is permitted, update downstream statuses to the module default (no historical audit).
- This is an API-level safety document only — no DB audit/history required.

**Scope:**

- Planner + Writer modules (primary entities): `Clusters`, `Keywords`, `ContentIdeas`, `Tasks`, `Content`, `Images`.
- Applies to single-record `DELETE` and API `bulk_delete` actions.
- Behavior must be consistent and user-friendly (clear messages).

**High-level Rules (short):**

- Block the delete if any upstream/mapped record exists. Return HTTP 400 with structured details (module, id, title) and an instruction to delete the upstream records first.
- For an allowed delete: in a DB transaction, update affected downstream child records to model defaults (e.g., `Keywords.status = 'new'`), then delete the record.
- Bulk delete: iterate IDs, perform the same checks per item, and do not abort the whole batch. Return per-id results (deleted/skipped with reason).

**Entity Rules Table**

| Entity | Blocks Deletion If (upstream exists) | Error Message (client) | On Delete (allowed) — downstream updates | Implementation Notes |
|---|---|---|---|---|
| Cluster (`planner.Clusters`) | Any mapped items: `ContentIdeas` (keyword_cluster), `writer.ContentClusterMap`, `writer.Tasks` with `cluster`, `writer.Content` referencing cluster | "Cannot delete Cluster 'NAME' — mapped records exist in: ContentIdeas: 'Idea title' (ID). Delete those first." | Before delete: set `Keywords` referencing cluster → `cluster = NULL` and `status = 'new'` (model default). Then delete cluster. | Implement in `ClusterViewSet.destroy` and `bulk_delete`. Use helper `check_delete_blockers(cluster)`. Run updates inside a `transaction.atomic()` and perform child updates before `instance.delete()` to avoid orphan visibility. |
| Keyword (`planner.Keywords`) | `keyword.cluster != NULL` OR M2M `ContentIdeas` referencing this keyword | "Cannot delete Keyword 'TERM' — mapped to Cluster 'CLUSTER' and used in Idea 'IDEA_TITLE'. Delete those first." | If allowed, remove M2M references (orphan removal). No status rollback needed for upstream. | Implement in `KeywordViewSet.destroy` + `bulk_delete`. For M2M, block deletion unless explicit policy allows cascade. |
| ContentIdea (`planner.ContentIdeas`) | `writer.Tasks` where `idea` set; `Content` generated from idea | "Cannot delete Idea 'TITLE' — tasks exist: 'Task title'. Delete those tasks first." | If allowed, clear M2M `keyword_objects`. | Implement in `ContentIdeasViewSet`. Block if `Tasks.exists()`; otherwise delete inside a transaction. |
| Task (`writer.Tasks`) | `Content` generated from the task OR final images attached | "Cannot delete Task 'TITLE' — content exists: 'Content title'. Delete that content first." | If allowed, delete or clear `Images` per policy (prefer blocking if final content exists). | Implement in writer `TaskViewSet`. Prefer a strict block if `Content` exists. |
| Content (`writer.Content`) | Published content or `ContentClusterMap`/`Images` referencing it | "Cannot delete Content 'TITLE' — images or mappings exist. Delete them first." | If allowed, delete `ContentClusterMap` entries (they `CASCADE`) and `Images` as per current model behavior. Update cluster or counters if needed. | Implement in writer `ContentViewSet`. Consider blocking published-content deletion by policy. |
| Images (`writer.Images`) | Typically safe to delete unless other policies apply (e.g., external sync) | "Cannot delete Image 'ID' — blocked by external sync." | If allowed, delete the image record. | Implement in `ImagesViewSet` if present; otherwise admin allowed. |

**API Response Examples**

- Single-delete error (HTTP 400):

      {
        "success": false,
        "error": "Cannot delete Cluster 'Mobile Apps' — mapped records exist",
        "details": [
          {"module": "ContentIdeas", "id": 412, "title": "How to build mobile apps"},
          {"module": "Tasks", "id": 776, "title": "Write mobile apps guide"}
        ],
        "message": "Delete the listed records in their modules first, then retry."
      }

- Bulk delete response (HTTP 200):

      {
        "deleted": [5, 7],
        "skipped": [
          {"id": 3, "reason": "Mapped records exist", "details": [{"module": "ContentIdeas", "id": 412, "title": "How to build mobile apps"}]}
        ],
        "message": "Some items skipped due to mapped records. See details."
      }

- Success (single delete):

      {
        "success": true,
        "data": {"deleted_id": 12}
      }
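
A minimal sketch of a pure helper that assembles the single-delete error payload in the shape shown above; `build_delete_error_payload` is an assumed name, not an existing function in the codebase:

```python
def build_delete_error_payload(entity_label, name, blockers):
    """Build the structured HTTP 400 body for a blocked delete."""
    return {
        "success": False,
        "error": f"Cannot delete {entity_label} '{name}' — mapped records exist",
        "details": blockers,  # list of {"module": ..., "id": ..., "title": ...}
        "message": "Delete the listed records in their modules first, then retry.",
    }
```

Keeping the payload construction in one pure function makes the response shape trivially unit-testable without a database.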
**Helper design (concise)**

- New small utility in `igny8_core.api.utils` (or `igny8_core.modules.planner.utils`) with functions:
  - `check_delete_blockers_for_cluster(cluster) -> List[dict]`
  - `check_delete_blockers_for_keyword(keyword) -> List[dict]`
  - (others as needed)

- Implementation pattern:
  - Fast `.exists()` queries and `.values('id', 'title_field')[:5]` to show up to N blockers (avoid heavy scans).
  - Return a short list of blockers with module name, id, and display title.

Example pseudocode:

    def check_delete_blockers_cluster(cluster):
        blockers = []
        ideas = ContentIdeas.objects.filter(keyword_cluster=cluster).values('id', 'idea_title')[:5]
        blockers += [{'module': 'ContentIdeas', 'id': r['id'], 'title': r['idea_title']} for r in ideas]
        tasks = Tasks.objects.filter(cluster=cluster).values('id', 'title')[:5]
        blockers += [{'module': 'Tasks', 'id': r['id'], 'title': r['title']} for r in tasks]
        if ContentClusterMap.objects.filter(cluster=cluster).exists():
            # get sample content titles via the map and append them as blockers
            pass
        return blockers

**Delete execution flow (safe sequence)**

- For a single delete in a ViewSet:
  1. Fetch the instance via `self.get_object()`.
  2. Run `blockers = check_delete_blockers_*`.
  3. If `blockers`: return `error_response` (HTTP 400) with `details`.
  4. Else, inside `transaction.atomic()`:
     - Update children to defaults BEFORE deleting upstream:
       - Example: `Keywords.objects.filter(cluster=instance).update(cluster=None, status='new')`
     - Call `instance.delete()`.
     - Return a success response.

- For bulk delete:
  - Iterate the ids; for each id, apply the same procedure; collect `deleted` and `skipped` with reasons; return both lists. Do not abort the entire batch on one failure.

**Downstream status update rules (simple fallback)**

- When an upstream mapping is removed (allowed delete), use the model default for the downstream status:
  - `Keywords` default: `'new'` → set status `'new'`.
  - `Clusters` default: `'new'` (if applicable).
  - `ContentIdeas` default: `'new'`.
  - `Tasks` default: `'queued'`.
  - `Content` default: `'draft'`.
  - `Images` default: `'pending'`.
- Implementation: set the downstream `status` explicitly in the pre-delete update inside the same transaction.
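
One way to keep those defaults in a single place is a small lookup table; the sketch below mirrors the list above, and the names `DEFAULT_STATUS` and `default_status_for` are illustrative only:

```python
# Per-model default statuses, as listed in the document above.
DEFAULT_STATUS = {
    "Keywords": "new",
    "Clusters": "new",
    "ContentIdeas": "new",
    "Tasks": "queued",
    "Content": "draft",
    "Images": "pending",
}


def default_status_for(model_name):
    """Return the default status for a model, failing loudly on unknown names."""
    try:
        return DEFAULT_STATUS[model_name]
    except KeyError:
        raise ValueError(f"No default status registered for {model_name}")
```

The pre-delete update would then read `.update(cluster=None, status=default_status_for("Keywords"))` instead of hard-coding the string at each call site.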
**DB & Concurrency notes**

- Run the updates + delete in a single `transaction.atomic()` to avoid transient inconsistent states.
- Prefer updating children before deletion to avoid races where FK ON DELETE behavior may set NULL after the delete. Example: call `Keywords.objects.filter(cluster=instance).update(cluster=None, status='new')`, then `instance.delete()`.

**Testing checklist (minimal, priority)**

- Block tests:
  - Attempt to delete a `Cluster` that has `ContentIdeas` → assert HTTP 400 and that the response includes the idea title.
  - Bulk-delete a mix of deletable and blocked clusters → assert deleted/skipped counts and reasons.
- Allowed-delete tests:
  - Delete a `Cluster` with no blockers → assert `Keywords` previously referencing it have `status == 'new'` and `cluster == None`.
- Regression check:
  - Ensure existing `bulk_delete` endpoints preserve their format but return the enhanced report.

**Developer implementation notes**

- Where to change:
  - Planner viewset: views.py — implement the `destroy` override and enhanced `bulk_delete`.
  - Writer viewsets: `backend/igny8_core/modules/writer/*` (or business/content viewsets) — add the same `destroy` + `bulk_delete` checks.
  - Add a reusable helper module: `backend/igny8_core/api/utils/delete_checks.py` (or similar).
- No DB migrations are required for the minimal policy.
- Optional later improvement: add a `StatusHistory` model if precise original-status restoration is desired.

**UX / Messaging guidance**

- Keep messages short and actionable: include the blocking module, record title, and next step (delete the blocking records first). Example:
  - "Cannot delete Cluster 'NAME' — mapped records exist in ContentIdeas: 'Idea title' (ID). Delete those items in Planner → Ideas first, then retry."
- For bulk operations, provide counts and a link or list to the specific blocking records.

**Rollout / Deployment**

- Implement the API-level checks and tests, then deploy to staging. Verify the UI flows (single and bulk deletes) show messages and blocked items correctly.
- After staging verification, deploy to production. No DB schema migration required.

---

If you want, I can now:

- Produce a ready-to-copy developer ticket/PR patch that adds the helper and one example ViewSet implementation (e.g., `ClusterViewSet.destroy` + `bulk_delete`) plus unit tests — a small, safe Phase 1 patch.
- Or stop here with this document for handoff.

Which do you prefer?