docs re-org

This commit is contained in:
IGNY8 VPS (Salman)
2025-12-09 13:26:35 +00:00
parent 4d13a57068
commit 6a4f95c35a
231 changed files with 11353 additions and 31152 deletions

# Backup and Recovery
## Purpose
Describe what needs to be backed up and how to restore it, based on the runtime components present (Postgres DB, Redis, logs, sites data). No automated backup scripts are versioned here; this guidance is derived from files in the repo.
## Code Locations (exact paths)
- Database defaults: `backend/igny8_core/settings.py` (Postgres/SQLite selection via env)
- Compose mounts: `docker-compose.app.yml` (backend logs volume, sites data mounts)
- Example DB dumps present: `backend/backup_postgres_20251120_232816.sql`, `backend/db_backup_20251120_232646.sqlite3`
## High-Level Responsibilities
- Preserve database state (Postgres primary target).
- Preserve published sites data (`/data/app/sites-data`) consumed by sites renderer.
- Preserve application logs for audit/debug.
## Detailed Behavior
- Primary datastore: Postgres (preferred). `settings.py` falls back to SQLite only in DEBUG/no-env scenarios; production should use Postgres to enable reliable backups.
- Redis: used as Celery broker/result backend; not a system of record, so no backups are required beyond availability.
- Files to keep:
- Postgres DB: dump regularly (e.g., `pg_dump`); sample dumps in `backend/` illustrate format.
- Sites data: mounted read-only into sites renderer from `/data/app/sites-data`; back up that directory to retain deployed static sites.
- Logs: stored under `/data/app/logs` and `backend/logs/publish-sync-logs`; optional for troubleshooting.
- Static assets: `backend/staticfiles` can be regenerated via `collectstatic`; not required to back up if build pipeline exists.
## Data Structures / Models Involved (no code)
- All Django models persist in Postgres; automation logs/files live in the `logs/` directory.
## Execution Flow (Suggested)
- Postgres backup: `pg_dump -Fc -h <db_host> -U <user> <db_name> > igny8_<date>.dump`
- Sites data backup: archive `/data/app/sites-data` (and any uploads stored under `/data/app/igny8/sites` if used).
- Logs backup: optional tar of `/data/app/logs` and `backend/logs/publish-sync-logs`.
- Restore Postgres: `pg_restore -c -d <db_name> -h <db_host> -U <user> igny8_<date>.dump` (ensure DB created and app stopped before restore).
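The steps above can be collected into a small script. A minimal sketch, not a versioned tool: the DB connection variables and directories are placeholders, the `pg_dump`/`pg_restore` invocations mirror the commands quoted in this section, and the Postgres step is shown commented out because it needs a reachable database.

```shell
#!/usr/bin/env sh
# Sketch of the backup flow described above. DB_HOST/DB_USER/DB_NAME and
# the directories are placeholders for your environment.
set -eu

# Postgres: dump in custom format; restore later with
#   pg_restore -c -d "$DB_NAME" -h "$DB_HOST" -U "$DB_USER" <dumpfile>
dump_postgres() {  # usage: dump_postgres <backup-dir>
  pg_dump -Fc -h "$DB_HOST" -U "$DB_USER" "$DB_NAME" \
    > "$1/igny8_$(date +%Y%m%d_%H%M%S).dump"
}

# Sites data: archive the directory, then verify the archive is readable.
backup_sites() {  # usage: backup_sites <sites-dir> <backup-dir>
  stamp=$(date +%Y%m%d_%H%M%S)
  tar -czf "$2/sites-data_${stamp}.tar.gz" \
    -C "$(dirname "$1")" "$(basename "$1")"
  tar -tzf "$2/sites-data_${stamp}.tar.gz" > /dev/null \
    && echo "sites-data_${stamp}.tar.gz OK"
}

# Example (uncomment with real env):
# dump_postgres /data/backups
# backup_sites /data/app/sites-data /data/backups
```

The `tar -tzf` listing is a cheap integrity check; for the Postgres side, `pg_restore --list` serves the same purpose as noted under Error Handling.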
## Cross-Module Interactions
- Billing, tenancy, automation data all live in the DB; restoring DB restores these states.
- Sites renderer consumes `/data/app/sites-data`; ensure it matches DB state if site URLs/records reference deployed assets.
## State Transitions (if applicable)
- After restore, run `python manage.py migrate` to align schema, then restart backend/worker/beat.
## Error Handling
- Backups should be verified via `pg_restore --list` or test restores; failed restores will surface at DB connection time.
## Tenancy Rules
- Restoring DB restores tenant isolation; ensure backups are tenant-wide and protected.
## Billing Rules (if applicable)
- Billing data (credits, invoices, payments) is DB-resident; backups must be handled securely due to financial data sensitivity.
## Background Tasks / Schedulers (if applicable)
- Stop Celery beat/worker during restore to avoid tasks running on partial data.
## Key Design Considerations
- Use Postgres in all non-local environments to avoid SQLite backups.
- Keep DB credentials and dumps secure; avoid storing secrets in dumps.
## How Developers Should Work With This Module
- Add automated backup scripts (cron/Portainer stacks) external to this repo; document their paths and retention when added.
- Remove legacy `.sql` dumps from repo in production to avoid stale data exposure; rely on managed backups instead.
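Until automated backup scripts are versioned, a host-level cron entry is one option. A hypothetical crontab fragment, assuming a `backup.sh` wrapper around the `pg_dump` and archive commands from the Execution Flow section; the path, schedule, and retention window are examples, not repo conventions:

```shell
# Hypothetical crontab (crontab -e on the host).
# Nightly at 02:30: dump Postgres and archive sites data.
30 2 * * * /data/app/scripts/backup.sh >> /data/app/logs/backup.log 2>&1
# Weekly on Sunday: prune dumps older than 30 days.
0 3 * * 0 find /data/backups -name 'igny8_*.dump' -mtime +30 -delete
```

Whatever scheduler is chosen, document its path and retention here when added, as the section above recommends.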

# CI/CD Pipeline
## Purpose
Document the build-and-deploy sequence using the artifacts present in the repo. No CI config is versioned here; this describes the expected pipeline based on Dockerfiles and compose.
## Code Locations (exact paths)
- Compose stack definition: `docker-compose.app.yml`
- Backend image definition: `backend/Dockerfile`
- Frontend image definition: `frontend/Dockerfile` (production build via Caddy)
- Backend config: `backend/igny8_core/settings.py`
## High-Level Responsibilities
- Build backend and frontend images from repo sources.
- Push images to registry (external step, not defined here).
- Deploy via `docker-compose.app.yml` consuming those images on the target host/Portainer.
## Detailed Behavior
- Build steps (as indicated in compose comments):
- Backend: `docker build -t igny8-backend:latest -f backend/Dockerfile backend`
- Frontend app: `docker build -t igny8-frontend-dev:latest -f frontend/Dockerfile.dev frontend`
- Marketing: `docker build -t igny8-marketing-dev:latest -f frontend/Dockerfile.marketing.dev frontend`
- Sites renderer: `docker build -t igny8-sites-dev:latest -f sites/Dockerfile.dev sites`
- Deploy steps:
- Ensure external infra stack (Postgres/Redis/network `igny8_net`) is running.
- Pull/receive built images on target.
- Run `docker compose -f docker-compose.app.yml -p igny8-app up -d`.
- Healthcheck: backend service health gated by `/api/v1/system/status/`; frontend depends_on backend health.
- No automated migrations are defined in compose; run `python manage.py migrate` inside backend container before switching traffic if schema changes are present.
## Data Structures / Models Involved (no code)
- None; pipeline operates on container images.
## Execution Flow
- CI: build images → (optional tests not defined here) → push to registry.
- CD: pull images on host → `docker compose up -d` using new tags → healthcheck ensures backend ready.
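The CI → CD flow above could be scripted roughly as follows. This is a sketch of a pipeline that is not versioned in the repo: the registry host is a placeholder, the docker commands mirror the build/deploy steps documented above (shown commented out because they need a Docker host), and the migrate step reflects the note that compose does not run migrations automatically.

```shell
#!/usr/bin/env sh
# Hypothetical CI/CD driver. REGISTRY is a placeholder; override via env.
set -eu

image_ref() {  # usage: image_ref <name> <tag> -> full registry reference
  echo "${REGISTRY:-registry.example.com}/$1:$2"
}

# CI: build and push (tests and the push target are external steps, per the doc).
# docker build -t "$(image_ref igny8-backend latest)" -f backend/Dockerfile backend
# docker push "$(image_ref igny8-backend latest)"

# CD: on the host, with the infra stack (Postgres/Redis/igny8_net) running:
# docker compose -f docker-compose.app.yml -p igny8-app pull
# docker compose -f docker-compose.app.yml -p igny8-app up -d
# If schema changes shipped, migrate before switching traffic:
# docker compose -f docker-compose.app.yml -p igny8-app exec backend \
#   python manage.py migrate
```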
## Cross-Module Interactions
- Celery worker/beat use the same backend image; ensure it is rebuilt alongside backend changes.
## State Transitions (if applicable)
- Service rollout occurs when new containers start; because the current compose uses code bind mounts, the host filesystem remains the source of truth for code.
## Error Handling
- Build failures visible during `docker build`.
- Deploy failures captured via Docker/Portainer logs; unhealthy backend blocks dependent services.
## Tenancy Rules
- Not altered by CI/CD; enforced at runtime by application.
## Billing Rules (if applicable)
- None in pipeline.
## Background Tasks / Schedulers (if applicable)
- Celery beat/worker containers start via compose; ensure images include latest task code.
## Key Design Considerations
- Current compose uses local bind mounts (`/data/app/igny8/...`) which expect code present on host; in an immutable-image pipeline, remove bind mounts and bake code into images instead.
- External infra stack separation requires coordination to ensure DB/Redis available during deploy.
## How Developers Should Work With This Module
- If introducing automated CI, codify the build steps above and add migrations/test stages.
- When changing service names or ports, update compose and healthcheck references consistently.

# Deployment Guide
## Purpose
Describe how to deploy the IGNY8 stack using the provided Dockerfiles and `docker-compose.app.yml`, including service wiring and required external dependencies.
## Code Locations (exact paths)
- App compose stack: `docker-compose.app.yml`
- Backend image: `backend/Dockerfile`
- Frontend image: `frontend/Dockerfile`
- Backend settings/env: `backend/igny8_core/settings.py`
## High-Level Responsibilities
- Build images for backend, frontend, marketing, and sites renderer.
- Bring up the app stack (backend, frontend, marketing, sites, Celery worker, Celery beat) on the shared external network.
- Rely on external infra services (Postgres, Redis) defined outside this repo (referenced in compose comments).
## Detailed Behavior
- Backend container:
- Image `igny8-backend:latest` from `backend/Dockerfile` (Python 3.11 slim, installs `requirements.txt`, runs Gunicorn on 8010).
- Mounted volumes: `/data/app/igny8/backend` (code), `/data/app/igny8` (shared), `/data/app/logs` (logs).
- Env vars for DB/Redis and security flags (USE_SECURE_COOKIES/PROXY, DEBUG, SECRET_KEY).
- Healthcheck hits `http://localhost:8010/api/v1/system/status/`.
- Frontend container:
- Image `igny8-frontend-dev:latest` from `frontend/Dockerfile.dev` (built separately; serves via Vite dev server on 5173 exposed as 8021).
- Env `VITE_BACKEND_URL`.
- Marketing dev and Sites renderer containers: images `igny8-marketing-dev:latest` and `igny8-sites-dev:latest`, ports 8023→5174 and 8024→5176; Sites mounts `/data/app/igny8/sites` and `/data/app/sites-data`.
- Celery worker/beat:
- Use `igny8-backend:latest`, commands `celery -A igny8_core worker` and `celery -A igny8_core beat`.
- Share same DB/Redis env and code volumes.
- Network: `igny8_net` marked `external: true`; compose expects Postgres and Redis running in another stack (`/data/app/docker-compose.yml` per comment).
- Ports: backend 8011→8010, frontend 8021→5173, marketing 8023→5174, sites 8024→5176.
## Data Structures / Models Involved (no code)
- Not model-specific; relies on runtime env (Postgres DB, Redis broker).
## Execution Flow
- Build images:
- `docker build -t igny8-backend:latest -f backend/Dockerfile backend`
- `docker build -t igny8-frontend-dev:latest -f frontend/Dockerfile.dev frontend`
- `docker build -t igny8-marketing-dev:latest -f frontend/Dockerfile.marketing.dev frontend`
- `docker build -t igny8-sites-dev:latest -f sites/Dockerfile.dev sites`
- Ensure external infra stack (Postgres, Redis, network `igny8_net`) is up.
- Run: `docker compose -f docker-compose.app.yml -p igny8-app up -d`.
- Healthcheck will mark backend healthy before frontend depends_on proceeds.
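When scripting a deploy around this healthcheck, a small retry helper is useful for waiting on the backend before proceeding. A generic sketch; the curl line against `/api/v1/system/status/` (host port 8011 per the mapping above) is an example usage, commented out because it needs a running backend:

```shell
#!/usr/bin/env sh
# Generic retry helper: run a command up to N times with a delay between
# attempts; returns non-zero if it never succeeds.
set -u

retry() {  # usage: retry <attempts> <delay-seconds> <command...>
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Example: wait for the backend status endpoint after `up -d`.
# retry 30 2 curl -fsS http://localhost:8011/api/v1/system/status/ > /dev/null
```

This duplicates what the compose healthcheck already does internally; it is only needed when an external script must block on readiness.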
## Cross-Module Interactions
- Backend depends on Postgres/Redis; Celery worker/beat rely on same env to process automation/AI tasks.
- Frontend depends on backend health; sites renderer reads deployed sites from `/data/app/sites-data`.
## State Transitions (if applicable)
- Container lifecycle managed by Docker restart policy (`restart: always`).
## Error Handling
- Backend healthcheck fails if status endpoint not reachable; container marked unhealthy causing depends_on wait.
- Gunicorn exit surfaces via Docker logs; no auto-restart beyond Docker restart policy.
## Tenancy Rules
- Enforced in application layer via middleware; deployment does not alter tenancy.
## Billing Rules (if applicable)
- None at deployment layer.
## Background Tasks / Schedulers (if applicable)
- Celery beat runs schedules (e.g., automation scheduler) using same image and env.
## Key Design Considerations
- Compose uses images (not builds) to avoid accidental rebuilds in Portainer; images must exist beforehand.
- External network requirement means infra stack must pre-create `igny8_net` and services.
- Backend is served by Gunicorn with 4 workers per the compose command; raise the worker count in compose to scale vertically, or run additional containers to scale horizontally.
## How Developers Should Work With This Module
- When changing env vars, update `docker-compose.app.yml` and keep parity with `settings.py`.
- For new services (e.g., monitoring), add to compose and attach to `igny8_net`.
- Keep healthcheck endpoint stable (`/api/v1/system/status/`) or update compose accordingly.

# Environment Setup
## Purpose
Outline required runtime dependencies, environment variables, and local setup steps derived from the codebase configuration.
## Code Locations (exact paths)
- Django settings/env: `backend/igny8_core/settings.py`
- Backend dependencies: `backend/requirements.txt`
- Backend image provisioning: `backend/Dockerfile`
- Frontend env/build: `frontend/Dockerfile`, `frontend/package.json`, `frontend/vite.config.ts`
- Compose stack env: `docker-compose.app.yml`
## High-Level Responsibilities
- Provide prerequisites for backend (Python 3.11, Postgres, Redis) and frontend (Node 18).
- Enumerate environment variables consumed by backend and compose files.
- Describe local or containerized setup flows.
## Detailed Behavior
- Backend settings require:
- `SECRET_KEY`, `DEBUG`, `USE_SECURE_COOKIES`, `USE_SECURE_PROXY_HEADER`.
- Database: `DATABASE_URL` or `DB_HOST`, `DB_NAME`, `DB_USER`, `DB_PASSWORD`, `DB_PORT`; falls back to SQLite in DEBUG if none provided.
- Redis/Celery: `CELERY_BROKER_URL`, `CELERY_RESULT_BACKEND` default to `redis://{REDIS_HOST}:{REDIS_PORT}/0`.
- JWT: `JWT_SECRET_KEY`, expiry defaults (15m access, 30d refresh).
- CORS: allowed origins include local ports (5173/5174/5176/8024) and `app.igny8.com`.
- Stripe/PayPal keys optional (`STRIPE_PUBLIC_KEY`, `STRIPE_SECRET_KEY`, `PAYPAL_*`).
- Backend Dockerfile installs system deps (gcc, libpq-dev) and pip installs `requirements.txt`; runs `collectstatic` (best-effort).
- Frontend expects `VITE_BACKEND_URL` (compose sets to `https://api.igny8.com/api`); build via `npm install` then `npm run build` (Dockerfile).
- Compose injects DB/Redis env to backend/worker/beat and secure cookie/proxy flags for production use.
## Data Structures / Models Involved (no code)
- Not model-specific; environment affects DB connections, auth, CORS, Celery, billing keys.
## Execution Flow
- Local (backend):
- `python -m venv .venv && source .venv/bin/activate`
- `pip install -r backend/requirements.txt`
- Set env vars (DB/REDIS/JWT/CORS/SECRET_KEY).
- `python backend/manage.py migrate` then `python backend/manage.py runserver 8010`.
- Local (frontend):
- `npm install` in `frontend/`
- Set `VITE_BACKEND_URL`
- `npm run dev -- --host --port 5173`
- Containerized:
- Build images per Dockerfiles; run `docker compose -f docker-compose.app.yml up -d` with external Postgres/Redis and network `igny8_net`.
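For the local flow, the variables listed above can be kept in a sourceable env file. A sketch with placeholder values only; the variable names come from `settings.py` and the compose file as described in this section:

```shell
# local.env -- placeholder values for local development only.
# Source with `. ./local.env` before running manage.py or npm.
export SECRET_KEY="dev-only-not-for-production"
export DEBUG="true"
export DB_HOST="localhost"
export DB_PORT="5432"
export DB_NAME="igny8"
export DB_USER="igny8"
export DB_PASSWORD="change-me"
export REDIS_HOST="localhost"
export REDIS_PORT="6379"
export CELERY_BROKER_URL="redis://${REDIS_HOST}:${REDIS_PORT}/0"
export CELERY_RESULT_BACKEND="redis://${REDIS_HOST}:${REDIS_PORT}/0"
export JWT_SECRET_KEY="dev-jwt-key"
export VITE_BACKEND_URL="http://localhost:8010/api"
```

Keep production values out of the repo; as noted below, SECRET_KEY and JWT keys must be distinct and secret outside local development.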
## Cross-Module Interactions
- Celery uses same Redis host/port as defined in env; automation tasks rely on this.
- CORS/secure cookie flags must align with frontend host to allow auth.
## State Transitions (if applicable)
- `DEBUG` toggles throttling bypass (`IGNY8_DEBUG_THROTTLE`) and SQLite fallback.
## Error Handling
- Missing DB env → SQLite fallback in DEBUG; in production the Postgres vars must be set.
- Healthcheck in compose will fail if env misconfigured and backend cannot start.
## Tenancy Rules
- Unchanged by env; account context enforced in middleware once app runs.
## Billing Rules (if applicable)
- Stripe/PayPal keys optional; without them payment flows are disabled/pending.
## Background Tasks / Schedulers (if applicable)
- Celery broker/backend must be reachable; worker/beat require the same env set.
## Key Design Considerations
- Prefer Postgres in all shared/test/prod; SQLite only for local development.
- Keep SECRET_KEY/JWT keys distinct and secret in production.
## How Developers Should Work With This Module
- Add new env variables in `settings.py` with safe defaults; document in this file.
- Mirror envs in compose and deployment systems (Portainer/CI) to avoid drift.

# Logging and Monitoring
## Purpose
Explain runtime logging, request tracing, and resource tracking implemented in code, plus where logs are written.
## Code Locations (exact paths)
- Django logging config: `backend/igny8_core/settings.py` (`LOGGING`, `PUBLISH_SYNC_LOG_DIR`)
- Request ID middleware: `backend/igny8_core/middleware/request_id.py`
- Resource tracking middleware: `backend/igny8_core/middleware/resource_tracker.py`
- Publish/sync logging categories: `publish_sync`, `wordpress_api`, `webhooks` loggers in `settings.py`
## High-Level Responsibilities
- Attach request IDs to each request/response for traceability.
- Optionally track CPU/memory/I/O per request for authenticated admin/developer users.
- Persist publish/sync logs to rotating files and console.
## Detailed Behavior
- Request ID:
- `RequestIDMiddleware` assigns a UUID per request; adds header `X-Request-ID` (exposed via CORS `x-resource-tracking-id`).
- Resource tracking:
- `ResourceTrackingMiddleware` measures CPU/memory/I/O deltas per request for authenticated admin/developer users; stores metrics in cache; can be toggled via custom header `x-debug-resource-tracking`.
- Logging configuration (`settings.py`):
- Log directory: `logs/publish-sync-logs` (auto-created).
- Handlers: console + rotating file handlers (10 MB, 10 backups) for `publish_sync.log`, `wordpress-api.log`, `webhooks.log`.
- Formatters: verbose general formatter; publish_sync formatter with timestamp/level/message.
- Loggers: `publish_sync`, `wordpress_api`, `webhooks` all INFO level, non-propagating.
- Static files and general Django logging fall back to default console (not customized beyond above).
## Data Structures / Models Involved (no code)
- None; logging is operational.
## Execution Flow
- Incoming request → RequestIDMiddleware → AccountContextMiddleware → ResourceTrackingMiddleware → view.
- Views/services log using standard `logging.getLogger('publish_sync')` etc.; output to console and rotating files.
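Because every response carries `X-Request-ID`, a failed request can be correlated with server logs by grepping for its ID across the log files named above. A minimal helper sketch; the example paths mirror this section, and the function itself works on any log file:

```shell
#!/usr/bin/env sh
# Correlate a request with log lines by its X-Request-ID value.
set -eu

find_request() {  # usage: find_request <request-id> <logfile...>
  rid=$1; shift
  grep -F -H "$rid" "$@"   # -F: literal match, -H: prefix with filename
}

# Example against the rotating publish/sync logs (paths from this doc):
# find_request "$REQUEST_ID" \
#   backend/logs/publish-sync-logs/publish_sync.log \
#   backend/logs/publish-sync-logs/wordpress-api.log
```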
## Cross-Module Interactions
- WordPress integration and publish flows should log to `wordpress_api`/`publish_sync` to capture sync diagnostics.
- CORS exposes request ID header so frontend can correlate with logs.
## State Transitions (if applicable)
- Log rotation occurs when files exceed 10 MB (backupCount 10).
## Error Handling
- Logging failures fall back to console; middleware errors propagate via unified exception handler.
## Tenancy Rules
- Request ID and resource tracking apply to all requests; tenant data appears in application logs only within message content, scoped by account.
## Billing Rules (if applicable)
- None.
## Background Tasks / Schedulers (if applicable)
- Celery tasks can log using same loggers; no dedicated Celery handler configured beyond console.
## Key Design Considerations
- Request ID early in middleware to ensure downstream logs include correlation ID.
- Resource tracking limited to privileged users to avoid overhead for all traffic.
## How Developers Should Work With This Module
- Use named loggers (`publish_sync`, `wordpress_api`, `webhooks`) for relevant flows to keep logs organized.
- When adding new log domains, extend `LOGGING` in `settings.py` with dedicated handlers/formatters as needed.
- Surface request IDs in error responses to aid log correlation (already enabled via unified handler).

# Scaling and Load Balancing
## Purpose
Outline how services can be scaled based on existing Docker/Gunicorn/Celery configuration and external dependencies.
## Code Locations (exact paths)
- Compose stack: `docker-compose.app.yml`
- Backend process model: `docker-compose.app.yml` (Gunicorn command), `backend/Dockerfile`
- Celery worker/beat commands: `docker-compose.app.yml`
- Celery settings: `backend/igny8_core/settings.py` (Celery config)
## High-Level Responsibilities
- Describe horizontal/vertical scaling levers for backend and workers.
- Note reliance on external Postgres/Redis and shared network.
## Detailed Behavior
- Backend container runs Gunicorn with `--workers 4 --timeout 120` (from compose). Scaling options:
- Increase workers via compose command args.
- Run additional backend containers on the same `igny8_net` behind a reverse proxy (proxy not defined here; compose assumes external Caddy/infra stack handles routing).
- Celery:
- Worker command `celery -A igny8_core worker --loglevel=info --concurrency=4`; concurrency can be increased per container or by adding more worker replicas.
- Beat runs separately to schedule tasks.
- Broker/backend: Redis from external infra; settings enforce prefetch multiplier 1 and task soft/hard time limits (25/30 min).
- Frontend/marketing/sites:
- Served via dev servers in current compose; for production, build static assets (frontend Dockerfile uses Caddy). Can scale by running additional containers behind the proxy.
- Network:
- `igny8_net` is external; load balancer (e.g., Caddy/infra) not defined in this repo but must front multiple backend/frontend replicas if added.
## Data Structures / Models Involved (no code)
- None; operational scaling only.
## Execution Flow
- Scaling out: start additional containers with the same images on `igny8_net`; configure external proxy to round-robin to backend instances.
- Scaling up: raise Gunicorn worker count or Celery concurrency in compose overrides.
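The scale-up levers above are typically applied through a compose override rather than by editing the base file. A hypothetical `docker-compose.override.yml` fragment: the service names are illustrative and must be matched to `docker-compose.app.yml`, and the Gunicorn WSGI path is assumed from the project name, not confirmed by the repo.

```yaml
# Hypothetical override -- verify service names against docker-compose.app.yml.
services:
  backend:
    # wsgi path assumed from the igny8_core project name
    command: gunicorn igny8_core.wsgi:application --bind 0.0.0.0:8010 --workers 8 --timeout 120
  celery-worker:
    command: celery -A igny8_core worker --loglevel=info --concurrency=8
```

Apply with `docker compose -f docker-compose.app.yml -f docker-compose.override.yml -p igny8-app up -d`; keep Celery beat at a single instance as noted below.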
## Cross-Module Interactions
- Celery workload includes automation/AI tasks; ensure Redis/Postgres sized accordingly when increasing concurrency.
- Request ID and resource tracking middleware remain per-instance; logs aggregate by container.
## State Transitions (if applicable)
- New replicas join network immediately; no shared session storage configured (stateless JWT APIs), so backend replicas are safe behind load balancer.
## Error Handling
- If backend not reachable, healthcheck fails and depends_on blocks frontend start.
- Celery tasks exceeding time limits are terminated per settings.
## Tenancy Rules
- Unchanged by scaling; tenancy enforced per request in each instance.
## Billing Rules (if applicable)
- None; credit deductions occur in application code regardless of scale.
## Background Tasks / Schedulers (if applicable)
- Celery beat should remain single active scheduler; if running multiple beats, use external lock (not implemented) to avoid duplicate schedules.
## Key Design Considerations
- Ensure reverse proxy/ingress (outside this repo) balances across backend replicas and terminates TLS.
- Keep Redis/Postgres highly available and sized for additional connections when scaling workers/backends.
## How Developers Should Work With This Module
- Use compose override or Portainer to adjust worker counts; validate resource limits.
- Avoid multiple Celery beat instances unless coordination is added.
- When introducing production-ready load balancing, add proxy config to infra repo and keep ports consistent with compose.