# IGNY8 Phase 0: Document 00E - Legacy Cleanup

**Status:** Pre-Implementation
**Phase:** Phase 0 - Foundation & Infrastructure
**Document ID:** 00E
**Version:** 1.0
**Created:** 2026-03-23

---

## 1. Current State

### 1.1 Legacy VPS Infrastructure

The old VPS running the original IGNY8 deployment contains multiple containerized services that have been superseded by the Phase 0 migration activities. The following services are currently running:

- **Gitea Container**: Self-hosted Git repository manager, serving as the primary git repository backend
- **Docker Containers (4-5 additional)**: Likely including test environments, development services, and other supporting infrastructure
- **Docker Images & Volumes**: Associated image layers and persistent data volumes
- **System Resources**: Consuming approximately 1.5GB RAM

### 1.2 Dependencies and Prior Work

This task depends on successful completion of prior Phase 0 documents:

- **00A - Repository Consolidation**: All git repositories have been migrated from Gitea to GitHub (verified in the GitHub organization)
- **00B - Version Matrix**: Reference for infrastructure component versions currently in production
- **00C - 3-Stage Migration Flow**: All production services have been migrated to new infrastructure and are operational; DNS has been flipped to the new VPS with 24-48+ hours of stable production operation confirmed
- **Network Verification**: Production DNS records have been updated and traffic is flowing to the new VPS; test DNS records (created during 00C validation) remain but will be cleaned up in Phase 3

### 1.3 Current Risk State

**High Risk Items:**
- Data loss: Gitea database and volumes may contain historical metadata not yet captured
- Service interruption: Any lingering production DNS records pointing to the old VPS could cause outages
- Recovery complexity: Once containers are destroyed, data recovery becomes significantly more difficult
- Premature cleanup: The old VPS must not be decommissioned until the DNS migration is complete and stable

**Mitigations in Place:**
- All repositories verified on GitHub before Gitea destruction
- All production workloads confirmed running on the new VPS
- DNS verified as fully migrated (per 00C completion)
- New VPS confirmed stable for 24-48+ hours before old VPS cleanup begins
- Grace period available (1-2 weeks) for monitoring and verification
- Test DNS records will be identified and removed as cleanup items

### 1.4 Resource Recovery Opportunity

- **RAM Recovery**: ~1.5GB when all legacy containers are stopped
- **Disk Space Recovery**: Dependent on container image sizes and volume data
- **Cost Reduction**: VPS subscription can be cancelled after the grace period
- **Operational Simplification**: Reduced container inventory and support burden

---
## 2. What to Build

### 2.1 Cleanup Objectives

**Primary Goals:**
1. Safely decommission all legacy containers on the old VPS (ONLY after DNS has been flipped and the new VPS has been stable for 24-48+ hours)
2. Verify complete migration of all critical data to new infrastructure
3. Document the legacy system state for historical reference
4. Optionally capture a snapshot of the old VPS before final teardown
5. Recover compute and storage resources for cost optimization
6. Remove test DNS records (test-app.igny8.com, test-api.igny8.com, test-marketing.igny8.com) that were created for validation

**Secondary Goals:**
1. Establish a documented decommission process for future infrastructure cleanup
2. Create an audit trail of what services existed on legacy infrastructure
3. Validate that no orphaned DNS records remain pointing to the old VPS (production DNS already migrated per 00C)
4. Ensure zero data loss during the cleanup process

### 2.2 Service Inventory to Remove

| Service | Container Type | Status | Impact | Action |
|---------|----------------|--------|--------|--------|
| Gitea | Docker Container | Running | High - contains git history | Verify all repos on GitHub, then destroy |
| Legacy Test Environment | Docker Container | Running/Dormant | Low | Stop and remove |
| Development Service 1 | Docker Container | Running/Dormant | Low | Stop and remove |
| Development Service 2 | Docker Container | Running/Dormant | Low | Stop and remove |
| Additional Service | Docker Container | TBD | TBD | Inventory and remove as needed |

### 2.2a DNS Test Records to Remove

The following test DNS records were created during the 3-stage migration validation (documented in 00C) and must be removed as part of legacy cleanup:

| Record | Type | Created For | Status | Action |
|--------|------|-------------|--------|--------|
| test-app.igny8.com | CNAME/A | Validation testing | Active | Remove from DNS provider |
| test-api.igny8.com | CNAME/A | Validation testing | Active | Remove from DNS provider |
| test-marketing.igny8.com | CNAME/A | Validation testing | Active | Remove from DNS provider |

**Timing:** These records should be removed ONLY after the new VPS has been in stable production for 24-48+ hours and DNS has been fully migrated to point all production subdomains (app.igny8.com, api.igny8.com, marketing.igny8.com) to the new VPS.
### 2.3 Cleanup Phases

**Phase 1: Pre-Cleanup Verification (Days 1-3)**
- Verify all GitHub repositories are complete and accessible (per 00A - Repository Consolidation)
- Confirm no production dependencies on the legacy VPS
- Monitor new infrastructure for stability
- Document what exists on the legacy VPS
- Verify the DNS migration is complete and production traffic is flowing to the new VPS (per 00C - 3-Stage Migration Flow)

**Phase 2: Grace Period (Days 4-14)**
- Keep the legacy VPS running as a fallback (minimum 24-48+ hours after the DNS flip, per 00C)
- Monitor for any unexpected traffic or requests
- Verify no production DNS records resolve to the old VPS
- Identify test DNS records for later removal (test-app.igny8.com, test-api.igny8.com, test-marketing.igny8.com)
- Create an optional snapshot before destruction

**Phase 3: Service Cleanup (Day 15+)**
- Stop and remove non-critical containers
- Remove test DNS records created during the validation phase (per 00C)
- Verify disk space recovery
- Create post-cleanup documentation

**Phase 4: VPS Decommission (Day 21+)**
- After the grace period expires and all verifications pass
- Confirm all DNS records removed from the legacy provider
- Cancel the VPS subscription
- Archive decommission documentation

---
## 3. Data Models / APIs

### 3.1 Verification Checklist Schema

Before any destructive action, complete this verification:

```
VERIFICATION_CHECKLIST = {
  "phase_name": string,
  "execution_date": ISO8601_timestamp,
  "verified_items": [
    {
      "item_id": string,
      "service_name": string,
      "verification_type": "repository|endpoint|dependency|dns",
      "verification_status": "passed|failed|blocked",
      "verification_details": string,
      "timestamp": ISO8601_timestamp
    }
  ],
  "blocking_issues": [string],
  "authorized_by": string,
  "can_proceed": boolean
}
```
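The `can_proceed` gate should be derived mechanically from the checklist rather than set by hand. A minimal sketch in Python — the field names follow the schema above; the `can_proceed` helper itself is illustrative, not part of any existing tooling:

```python
def can_proceed(checklist: dict) -> bool:
    """A phase may proceed only when at least one item was verified,
    every verified item passed, and no blocking issues remain open."""
    items = checklist.get("verified_items", [])
    all_passed = bool(items) and all(
        item.get("verification_status") == "passed" for item in items
    )
    no_blockers = not checklist.get("blocking_issues")
    return all_passed and no_blockers

# Example: one failed check blocks the whole phase
checklist = {
    "phase_name": "Phase 3: Service Cleanup",
    "verified_items": [
        {"item_id": "v1", "verification_status": "passed"},
        {"item_id": "v2", "verification_status": "failed"},
    ],
    "blocking_issues": [],
}
print(can_proceed(checklist))  # False until every item passes
```

An empty `verified_items` list also yields `False`, so a phase cannot slip through with no checks recorded.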
### 3.2 Service Inventory Schema

Document each service on the legacy VPS:

```
SERVICE_INVENTORY = {
  "service_id": string,
  "service_name": string,
  "container_id": string (Docker),
  "image_id": string,
  "status": "running|stopped|unknown",
  "resource_usage": {
    "memory_mb": number,
    "disk_mb": number,
    "cpu_usage_percent": number
  },
  "volume_mounts": [
    {
      "volume_name": string,
      "mount_path": string,
      "size_mb": number
    }
  ],
  "dependencies": [string],
  "data_migrated": boolean,
  "cleanup_status": "pending|completed",
  "notes": string
}
```
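Summing `resource_usage.memory_mb` over running entries gives the expected RAM recovery figure referenced in Section 1.4 (~1.5GB). A small sketch, assuming inventory entries follow the schema above; the service names and numbers are illustrative:

```python
def expected_ram_recovery_mb(inventory: list[dict]) -> int:
    """Total memory held by legacy containers that are still running."""
    return sum(
        svc["resource_usage"]["memory_mb"]
        for svc in inventory
        if svc.get("status") == "running"
    )

inventory = [
    {"service_name": "gitea", "status": "running",
     "resource_usage": {"memory_mb": 600, "disk_mb": 4096, "cpu_usage_percent": 2.0}},
    {"service_name": "legacy-test-env", "status": "running",
     "resource_usage": {"memory_mb": 512, "disk_mb": 2048, "cpu_usage_percent": 0.5}},
    {"service_name": "dev-service-1", "status": "stopped",
     "resource_usage": {"memory_mb": 0, "disk_mb": 1024, "cpu_usage_percent": 0.0}},
]
print(expected_ram_recovery_mb(inventory))  # 1112
```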
### 3.3 DNS Verification Schema

Verify all DNS records are updated:

```
DNS_VERIFICATION = {
  "verification_date": ISO8601_timestamp,
  "records_checked": number,
  "records_pointing_to_old_vps": number,
  "old_vps_ip": string,
  "new_vps_ip": string,
  "blocking_records": [
    {
      "record_name": string,
      "current_ip": string,
      "expected_ip": string,
      "status": "correct|incorrect"
    }
  ],
  "all_clear": boolean,
  "verified_by": string
}
```
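`all_clear` and `records_pointing_to_old_vps` can likewise be computed from the per-record checks. A minimal sketch assuming the record layout above (record names and IPs are placeholder values):

```python
def dns_verification(records: list[dict], old_vps_ip: str) -> dict:
    """Classify each record and report whether any still point at the old VPS."""
    blocking = []
    for rec in records:
        status = "correct" if rec["current_ip"] == rec["expected_ip"] else "incorrect"
        if status == "incorrect" or rec["current_ip"] == old_vps_ip:
            blocking.append({**rec, "status": status})
    return {
        "records_checked": len(records),
        "records_pointing_to_old_vps": sum(
            1 for r in records if r["current_ip"] == old_vps_ip
        ),
        "blocking_records": blocking,
        "all_clear": not blocking,
    }

result = dns_verification(
    [
        {"record_name": "app.igny8.com", "current_ip": "2.2.2.2", "expected_ip": "2.2.2.2"},
        {"record_name": "git.igny8.com", "current_ip": "1.1.1.1", "expected_ip": "2.2.2.2"},
    ],
    old_vps_ip="1.1.1.1",
)
print(result["all_clear"], result["records_pointing_to_old_vps"])  # False 1
```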
### 3.4 Data Backup Schema

Document the backup created before cleanup:

```
BACKUP_METADATA = {
  "backup_id": string,
  "backup_type": "full_snapshot|selective_export",
  "creation_date": ISO8601_timestamp,
  "vps_identifier": string,
  "storage_location": string,
  "size_gb": number,
  "checksums": {
    "sha256": string
  },
  "services_included": [string],
  "retention_policy": "7days|30days|permanent",
  "can_restore_from": boolean
}
```
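The `checksums.sha256` field can be filled with a standard streaming hash so large snapshot files are never loaded into memory at once. A minimal sketch (the sample file path and contents are placeholders):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks and return the hex SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example against a small throwaway file
with open("/tmp/backup_sample.bin", "wb") as fh:
    fh.write(b"igny8 legacy snapshot sample")
print(sha256_of_file("/tmp/backup_sample.bin"))
```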
---

## 4. Implementation Steps

### 4.1 Phase 1: Pre-Cleanup Verification (Days 1-3)

#### Step 1.1: Inventory All Services on Legacy VPS

**Objective:** Create complete documentation of all services currently running

**Commands:**
```bash
# SSH into legacy VPS
ssh user@old-vps-ip

# List all Docker containers
docker ps -a

# List all Docker images
docker images

# Check Docker volumes
docker volume ls

# Get detailed information on each container
docker inspect $(docker ps -aq) > /tmp/container_details.json

# Export a pipe-delimited service list for documentation
docker ps -a --format "{{.ID}}|{{.Names}}|{{.Image}}|{{.Status}}" > /tmp/service_list.txt
```
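The pipe-delimited `/tmp/service_list.txt` produced above maps directly onto SERVICE_INVENTORY entries (Section 3.2). A small parsing sketch — the field handling assumes the exact `--format` string used above, and the sample lines are made up:

```python
def parse_service_list(text: str) -> list[dict]:
    """Turn `ID|Names|Image|Status` lines into skeleton inventory entries."""
    entries = []
    for line in text.splitlines():
        if not line.strip():
            continue
        container_id, name, image, status = line.split("|", 3)
        entries.append({
            "service_id": container_id,
            "service_name": name,
            "container_id": container_id,
            "image_id": image,
            # Docker status strings start with "Up" for running containers
            "status": "running" if status.startswith("Up") else "stopped",
            "cleanup_status": "pending",
        })
    return entries

sample = "abc123|gitea|gitea/gitea:1.21|Up 42 days\ndef456|dev-svc|node:18|Exited (0) 3 weeks ago\n"
for entry in parse_service_list(sample):
    print(entry["service_name"], entry["status"])  # gitea running / dev-svc stopped
```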
**Documentation Outcome:**
- Create SERVICE_INVENTORY entries for each identified service
- Record resource usage for each container
- Note any inter-container dependencies
- Store output in `/tmp/legacy_vps_inventory/`

**Acceptance Criteria:**
- [ ] All containers identified and documented
- [ ] No services are unknown or undocumented
- [ ] Resource usage baseline captured
- [ ] Service interdependencies mapped

---

#### Step 1.2: Verify All GitHub Repositories

**Objective:** Confirm all repositories migrated from Gitea to GitHub are complete and accessible

**Commands:**
```bash
# List all repositories in the GitHub organization
gh repo list YOUR-ORG --limit 1000 --json name,createdAt,description

# For each repository, verify key details:
# - Commit history is present
# - Branches are complete
# - Tags are present
# - Access is functional

# Sample verification for a single repo:
gh repo view YOUR-ORG/repo-name --json description,defaultBranchRef,isArchived

# Verify large repos synced correctly (check commit count)
git clone https://github.com/YOUR-ORG/repo-name /tmp/verify-repo
cd /tmp/verify-repo
git rev-list --count HEAD  # Should match the original commit count
```
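Comparing the Gitea and GitHub repository name lists catches gaps that a raw count alone would miss. A sketch, assuming each list is one repository name per line (for instance from `gh repo list` and a listing of `/data/gitea-repositories`); the repo names are placeholders:

```python
def missing_repos(gitea_names: str, github_names: str) -> set[str]:
    """Repositories present in Gitea but absent from GitHub."""
    gitea = {n.strip() for n in gitea_names.splitlines() if n.strip()}
    github = {n.strip() for n in github_names.splitlines() if n.strip()}
    return gitea - github

gitea_list = "igny8-app\nigny8-api\nigny8-infra\n"
github_list = "igny8-app\nigny8-api\n"
print(sorted(missing_repos(gitea_list, github_list)))  # ['igny8-infra']
```

An empty result set is the "no data loss" signal required by the acceptance criteria above.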
**Documentation Outcome:**
- List all repositories verified
- Note any discrepancies or incomplete migrations
- Record commit counts for each major repo
- Verify branch structure matches the original

**Acceptance Criteria:**
- [ ] All expected repositories present in GitHub
- [ ] Commit history complete for all repos
- [ ] All branches migrated correctly
- [ ] Tags/releases present if applicable
- [ ] No data loss detected
- [ ] Cross-reference 00A - Repository Consolidation for full migration details

---

#### Step 1.3: Verify Production Services are Stable

**Objective:** Confirm the new VPS has been running successfully and requires no fallback to legacy services

**Commands:**
```bash
# SSH into new VPS
ssh user@new-vps-ip

# Check all critical services are running
docker ps -a | grep -v Exit

# Verify application health endpoints
curl -v https://app.igny8.local/health
curl -v https://api.igny8.local/status

# Check recent logs for errors
docker logs --tail 100 [service-name] | grep -i error

# Verify database connectivity
docker exec [database-container] status_command

# Check disk space on new VPS
df -h

# Check memory usage
free -h
```
**Documentation Outcome:**
- Record uptime of all critical services
- Note any error conditions
- Document current resource usage on the new VPS
- Baseline performance metrics

**Acceptance Criteria:**
- [ ] All production services running without errors (>99% uptime for past 7 days)
- [ ] No critical errors in recent logs
- [ ] Database connectivity verified
- [ ] All health endpoints responding normally
- [ ] New VPS has adequate resource headroom (>30% free memory, >20% free disk)

---

#### Step 1.4: Verify DNS Records are Fully Migrated

**Objective:** Confirm all production DNS records point to the new VPS and identify test DNS records to be removed (per the 00C 3-stage validation flow)

**Commands:**
```bash
# Get old VPS IP address from provider
OLD_VPS_IP="x.x.x.x"

# Get new VPS IP address
NEW_VPS_IP="y.y.y.y"

# Check all production DNS records for the organization
nslookup igny8.local
nslookup api.igny8.local
nslookup app.igny8.local
nslookup git.igny8.local  # Should NOT resolve to old VPS

# Use dig for more detailed DNS information
dig igny8.local +short
dig @8.8.8.8 igny8.local +short  # Check public DNS

# Search local host entries for any remaining old VPS references
getent hosts | grep "$OLD_VPS_IP"

# Verify all subdomains point to the new VPS
for domain in api app git cdn mail; do
  echo "Checking $domain.igny8.local..."
  dig $domain.igny8.local +short
done

# IMPORTANT: Identify test DNS records created during 00C validation that must be removed
echo "=== TEST DNS RECORDS (to be removed during Phase 3) ==="
nslookup test-app.igny8.com        # Should exist at this point (created during 00C)
nslookup test-api.igny8.com        # Should exist at this point (created during 00C)
nslookup test-marketing.igny8.com  # Should exist at this point (created during 00C)

# Record these test records for removal in Step 3.4
```
**Documentation Outcome:**
- Create a DNS_VERIFICATION record with all findings
- List all production DNS records and their current IP targets
- Note any records still pointing to the old VPS
- Record TTL values for each record
- Document test DNS records found (test-app.igny8.com, test-api.igny8.com, test-marketing.igny8.com) for Phase 3 removal

**Acceptance Criteria:**
- [ ] No production DNS records resolve to the old VPS IP address
- [ ] All critical services point to the new VPS IP
- [ ] DNS propagation complete (verified on public DNS resolvers)
- [ ] No CNAME or A records for git/gitea services pointing to the old VPS
- [ ] TTL values appropriate for stability
- [ ] Test DNS records identified and documented for removal in Phase 3

---

#### Step 1.5: Document Legacy VPS Contents

**Objective:** Create a historical record of what existed on legacy infrastructure

**Commands:**
```bash
# SSH into legacy VPS
ssh user@old-vps-ip

# Create comprehensive inventory report
# (unquoted EOF so the $(date ...) substitution expands)
cat > /tmp/legacy_vps_report.md << EOF
# Legacy VPS Decommission Report
Date: $(date -u +%Y-%m-%dT%H:%M:%SZ)

## System Information
EOF

uname -a >> /tmp/legacy_vps_report.md
echo "" >> /tmp/legacy_vps_report.md
echo "## Docker Information" >> /tmp/legacy_vps_report.md
docker --version >> /tmp/legacy_vps_report.md
docker ps -a >> /tmp/legacy_vps_report.md
echo "" >> /tmp/legacy_vps_report.md
echo "## Disk Usage" >> /tmp/legacy_vps_report.md
df -h >> /tmp/legacy_vps_report.md
echo "" >> /tmp/legacy_vps_report.md
echo "## Memory Usage" >> /tmp/legacy_vps_report.md
free -h >> /tmp/legacy_vps_report.md

# Export all container configurations
docker ps -a --format '{{.ID}}' | while read container_id; do
  echo "=== Container: $container_id ===" >> /tmp/legacy_vps_report.md
  docker inspect "$container_id" >> /tmp/legacy_vps_report.md
done

# Copy the report to the documentation location
```

**Documentation Outcome:**
- System configuration snapshot
- All running services documented
- Resource usage baseline captured
- Service configuration details exported

**Acceptance Criteria:**
- [ ] Comprehensive report created and stored
- [ ] All containers documented in detail
- [ ] System specs recorded for reference
- [ ] Report location: `/tmp/legacy_vps_inventory/final_report.md`

---
### 4.2 Phase 2: Grace Period and Snapshot (Days 4-14)

#### Step 2.1: Create Optional VPS Snapshot

**Objective:** Create a backup of the legacy VPS before any destructive actions, as an insurance policy

**Commands:**
```bash
# Use your VPS provider's snapshot facility (DigitalOcean, Linode, AWS, etc.)

# DigitalOcean example (using doctl):
doctl compute droplet-action snapshot $LEGACY_VPS_ID \
  --snapshot-name "igny8-legacy-vps-backup-$(date +%Y%m%d)" \
  --wait

# AWS example:
aws ec2 create-image \
  --instance-id $LEGACY_INSTANCE_ID \
  --name "igny8-legacy-vps-backup-$(date +%Y%m%d)" \
  --no-reboot

# Alternative: Create full disk backup
ssh user@old-vps-ip "sudo dd if=/dev/sda | gzip" > /backup/legacy_vps_full_backup.img.gz

# Or selective backup of critical volumes
# (the loop must run on the legacy VPS, where the volumes live)
ssh user@old-vps-ip 'docker volume ls -q | while read vol; do
  docker volume inspect "$vol"
  docker run --rm -v "$vol":/volume -v /tmp:/backup \
    alpine tar czf /backup/"$vol".tar.gz -C /volume .
done'
```

**Documentation Outcome:**
- Create a BACKUP_METADATA entry with snapshot details
- Record the snapshot ID and storage location
- Note the restoration procedure if needed
- Document the retention policy (keep for 30+ days minimum)

**Acceptance Criteria:**
- [ ] Snapshot created successfully
- [ ] Snapshot size recorded
- [ ] Checksum computed and stored
- [ ] Restoration procedure tested (optional but recommended)
- [ ] Snapshot retention policy: minimum 30 days

---
#### Step 2.2: Monitor for Unexpected Activity

**Objective:** Verify the legacy VPS receives no traffic during the grace period

**Commands:**
```bash
# SSH into legacy VPS
ssh user@old-vps-ip

# Monitor network traffic to old VPS
sudo tcpdump -i eth0 -n 'tcp port 80 or tcp port 443' -c 100

# Check container logs for any incoming requests
docker logs gitea | grep "GET\|POST" | tail -20
docker logs [other-service] | grep "GET\|POST"

# Monitor system processes
ps aux | grep -E "docker|nginx|apache"

# Check firewall rules (if applicable)
sudo iptables -L -n

# Monitor for any SSH access
grep "Accepted" /var/log/auth.log | tail -20
```

**Documentation Outcome:**
- Record any unexpected traffic detected
- Note any access attempts to legacy services
- Document the network baseline for reference

**Acceptance Criteria:**
- [ ] No traffic to HTTP/HTTPS ports during the grace period
- [ ] No unexpected SSH login attempts
- [ ] All containers remain in expected state (running/stopped)
- [ ] Monitor for a minimum of 7 days

---
#### Step 2.3: Final DNS Verification and Stability Confirmation

**Objective:** Re-verify DNS records are stable and production has been running error-free for 24-48+ hours before the cleanup phase (per 00C migration requirements)

**Prerequisites:**
- New VPS has been running in production for a minimum of 24-48 hours
- All production traffic successfully routed to the new VPS
- No rollback occurred during the grace period
- Production monitoring shows stable metrics

**Commands:**
```bash
# Repeat DNS verification from Step 1.4
nslookup igny8.local
dig api.igny8.local +short
dig app.igny8.local +short

# Check for any CNAME chains
dig igny8.local CNAME

# Verify mail records don't point to old VPS
dig igny8.local MX
dig igny8.local NS

# Use an external DNS checker
curl "https://dns.google/resolve?name=igny8.local&type=A" | jq .

# Verify test DNS records still exist (to be removed in Step 3.4)
nslookup test-app.igny8.com
nslookup test-api.igny8.com
nslookup test-marketing.igny8.com
```

**Documentation Outcome:**
- Updated DNS_VERIFICATION record
- Confirm no changes since the first verification in Step 1.4
- Timestamp when the 24-48 hour stability requirement was met
- Final sign-off on DNS migration complete

**Acceptance Criteria:**
- [ ] All production DNS records verified pointing to the new VPS
- [ ] No new records added pointing to the old VPS
- [ ] DNS TTL values appropriate
- [ ] External DNS resolvers report correct IPs
- [ ] Test DNS records documented for Phase 3 removal
- [ ] New VPS confirmed stable for a minimum of 24-48 hours
- [ ] No production issues detected during the grace period
- [ ] Authorized approval to proceed to Phase 3

---
### 4.3 Phase 3: Service Cleanup (Day 15+)

#### Step 3.1: Stop Non-Critical Containers First

**Objective:** Stop and verify the impact of stopping non-critical services before removing Gitea

**Commands:**
```bash
# SSH into legacy VPS
ssh user@old-vps-ip

# Identify test/development containers
docker ps -a | grep -E "test|dev|sandbox|local"

# Stop a non-critical service (example)
docker stop [dev-service-container-id]

# Wait 5 minutes and verify no errors
sleep 300

# Check new VPS services are still operational
ssh user@new-vps-ip "docker ps -a"

# If all appears normal, remove the container
docker rm [dev-service-container-id]

# Repeat for each non-critical service
```

**Documentation Outcome:**
- Record which containers were stopped first
- Note any cascading failures (should be none)
- Track disk space recovered with each removal

**Acceptance Criteria:**
- [ ] Each non-critical container stopped individually
- [ ] Production services remain unaffected
- [ ] 5-minute verification window between each stop
- [ ] No errors in new VPS logs after each stop
- [ ] Services removed documented in order

---
#### Step 3.2: Verify Gitea Data Before Removal

**Objective:** Final comprehensive verification before destroying Gitea and its data

**Commands:**
```bash
# SSH into legacy VPS
ssh user@old-vps-ip

# Export a Gitea database backup as final insurance (PostgreSQL backend)
docker exec gitea /bin/bash -c "pg_dump -U gitea gitea > /tmp/gitea_backup.sql" 2>/dev/null
docker cp gitea:/tmp/gitea_backup.sql /tmp/gitea_backup.sql

# Or for SQLite:
docker cp gitea:/data/gitea.db /tmp/gitea.db.backup

# Verify Gitea is still running
docker ps | grep gitea

# Check Gitea logs for any errors
docker logs gitea | tail -50

# List all repositories in Gitea (bare repos are <owner>/<repo>.git directories)
docker exec gitea find /data/gitea-repositories -mindepth 2 -maxdepth 2 -type d -name '*.git'

# Cross-reference with GitHub
echo "Gitea repositories:"
docker exec gitea find /data/gitea-repositories -mindepth 2 -maxdepth 2 -type d -name '*.git' | wc -l

echo "GitHub repositories:"
gh repo list YOUR-ORG --limit 1000 | wc -l

# Verify both counts match
```

**Documentation Outcome:**
- Final Gitea backup created and stored
- Repository count verification
- Gitea configuration exported for reference
- User and access control documentation

**Acceptance Criteria:**
- [ ] Gitea database backed up successfully
- [ ] Repository count matches the GitHub count
- [ ] All critical Gitea data verified complete
- [ ] No errors detected in Gitea logs
- [ ] Backup stored in `/tmp/legacy_vps_inventory/gitea_backup/`
- [ ] Cross-reference 00A - Repository Consolidation for GitHub migration verification

---
#### Step 3.3: Stop and Remove Gitea Container

**Objective:** Remove Gitea and its associated resources

**Commands:**
```bash
# SSH into legacy VPS
ssh user@old-vps-ip

# First, verify once more that everything is on GitHub
# (Step 3.2 verification must be complete)

# Display current state
docker ps | grep gitea

# Stop the Gitea container
docker stop gitea

# Wait 30 seconds
sleep 30

# Verify it's stopped
docker ps -a | grep gitea

# Remove the container
docker rm gitea

# Remove associated volumes
docker volume ls | grep gitea
docker volume rm gitea-data gitea-config  # Adjust names as needed

# Remove the Gitea image
docker images | grep gitea
docker rmi [gitea-image-id]

# Verify removal
docker ps -a | grep gitea      # Should show nothing
docker volume ls | grep gitea  # Should show nothing
```

**Safety Checkpoint:**
Before executing stop/remove:
- [ ] All repositories confirmed on GitHub
- [ ] Gitea database backup created
- [ ] No services depend on Gitea
- [ ] Grace period has completed (minimum 7 days)
- [ ] Written authorization obtained (if required by policy)

**Documentation Outcome:**
- Timestamp of Gitea removal
- Final state verification
- Disk space recovered from Gitea volumes

**Acceptance Criteria:**
- [ ] Gitea container removed
- [ ] Gitea volumes removed
- [ ] Gitea image removed
- [ ] No references to Gitea remain in the Docker system
- [ ] Disk space available verified
- [ ] No impact on new VPS services

---
#### Step 3.4: Remove Test DNS Records

**Objective:** Remove test DNS records created during the 00C validation phase now that production has been stable on the new VPS for 24-48+ hours

**Test Records to Remove:**
- test-app.igny8.com
- test-api.igny8.com
- test-marketing.igny8.com

**Instructions:**

1. **If using Cloudflare DNS:**
   ```bash
   # Log in to the Cloudflare dashboard
   # Navigate to DNS records for the igny8.com domain
   # Find records: test-app, test-api, test-marketing
   # Delete each record
   # Wait for propagation (typically 5-30 minutes)
   ```

2. **If using another DNS provider (Route53, GoDaddy, etc.):**
   ```bash
   # Access the DNS provider dashboard
   # Locate test-app.igny8.com record - DELETE
   # Locate test-api.igny8.com record - DELETE
   # Locate test-marketing.igny8.com record - DELETE
   # Save changes and allow propagation
   ```

3. **Verify removal:**
   ```bash
   # Wait 5 minutes for DNS propagation
   sleep 300

   # Verify test records are gone
   nslookup test-app.igny8.com        # Should return NXDOMAIN or not found
   nslookup test-api.igny8.com        # Should return NXDOMAIN or not found
   nslookup test-marketing.igny8.com  # Should return NXDOMAIN or not found

   # Verify production records still resolve
   nslookup app.igny8.com        # Should resolve to new VPS IP
   nslookup api.igny8.com        # Should resolve to new VPS IP
   nslookup marketing.igny8.com  # Should resolve to new VPS IP
   ```

**Documentation Outcome:**
- Record the date/time of test DNS record deletion
- Screenshot or log of the DNS provider showing records removed
- Verification that production DNS records are still active

**Acceptance Criteria:**
- [ ] test-app.igny8.com removed from DNS
- [ ] test-api.igny8.com removed from DNS
- [ ] test-marketing.igny8.com removed from DNS
- [ ] Removal verified with nslookup/dig
- [ ] Production DNS records (app.igny8.com, api.igny8.com, etc.) still active
- [ ] Deletion timestamps recorded

---
#### Step 3.5: Remove Remaining Containers and Cleanup

**Objective:** Remove all remaining legacy containers and associated images/volumes

**Commands:**
```bash
# SSH into legacy VPS
ssh user@old-vps-ip

# List remaining containers
docker ps -a

# For each remaining container:
docker stop [container-id]
docker rm [container-id]

# List and remove unused volumes
docker volume ls
docker volume prune -f

# List and remove unused images
docker images
docker image prune -f

# Deep cleanup (remove dangling images and layers)
docker system prune -f

# Verify cleanup
docker ps -a      # Should be empty
docker volume ls  # Should be empty
docker images     # Should show only essential OS/utility images

# Check disk space recovery
df -h
```

**Documentation Outcome:**
- List of all containers removed
- Disk space before/after cleanup
- Total recovery in GB

**Acceptance Criteria:**
- [ ] All legacy containers removed
- [ ] All unused volumes removed
- [ ] All unused images removed
- [ ] Docker system is clean
- [ ] Disk space recovered quantified
- [ ] Verify ~1.5GB RAM savings with all containers stopped

---
#### Step 3.6: Verify Network Access to Legacy VPS is Not Required

**Objective:** Confirm that all applications function correctly without the legacy VPS online

**Commands:**
```bash
# From the new VPS, verify all critical functions work
ssh user@new-vps-ip

# Run smoke tests for all critical services
curl -v https://api.igny8.local/health
curl -v https://app.igny8.local/

# Run database operations
docker exec [app-container] /app/bin/test-db-connection

# Verify all external integrations work
# - GitHub API connectivity
# - Email notifications
# - Any other external service calls

# Check application logs for any errors
docker logs [app-container] | grep -i error | tail -20

# Test critical user workflows
# - Login
# - Create resource
# - Update resource
# - Delete resource
# - Export data

# Monitor application performance
docker stats --no-stream
```

**Documentation Outcome:**
- Results of smoke tests
- Application functionality verification
- Performance metrics baseline

**Acceptance Criteria:**
- [ ] All API endpoints responding normally
- [ ] No errors related to missing legacy services
- [ ] Database operations successful
- [ ] External integrations functional
- [ ] Application performance acceptable
- [ ] No cascading failures from legacy VPS removal

---
|
||
|
||
### 4.4 Phase 4: VPS Decommission (Day 21+)

#### Step 4.1: Final System Verification

**Objective:** Confirm the grace period is complete and everything is working before decommission

**Checklist:**
- [ ] Minimum 14 days have passed since cleanup started
- [ ] No production incidents reported
- [ ] All monitoring dashboards show green
- [ ] New VPS operating normally for 14+ consecutive days
- [ ] Legacy VPS receives zero traffic (monitor last 7 days)
- [ ] All backups successfully created and tested
- [ ] Snapshot retention policy set (minimum 30 days)
- [ ] Documentation complete and archived

---

#### Step 4.2: Archive Legacy System Documentation

**Objective:** Store final documentation for compliance and future reference

**Procedures:**
```bash
# Create archive of all legacy documentation
mkdir -p /backup/igny8_legacy_archive/
cp /tmp/legacy_vps_inventory/* /backup/igny8_legacy_archive/
cp /tmp/legacy_vps_report.md /backup/igny8_legacy_archive/
cp /tmp/legacy_vps_inventory/gitea_backup/* /backup/igny8_legacy_archive/

# Create checksum file (exclude the manifest itself so later verification is stable)
cd /backup/igny8_legacy_archive/
find . -type f ! -name CHECKSUMS.sha256 -exec sha256sum {} \; > CHECKSUMS.sha256

# Store in centralized documentation repository
# Upload to: Internal wiki / GitHub / Document management system
# Retention: Permanent or per compliance policy
```

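Before relying on the archive later, the manifest can be round-tripped at read time; a minimal sketch using a throwaway directory (in practice the target is the real archive path):

```bash
#!/usr/bin/env bash
# Sketch: round-trip check of a CHECKSUMS.sha256 manifest.
set -e
archive_dir=$(mktemp -d)
echo "container inventory" > "$archive_dir/containers.txt"

cd "$archive_dir"
# Build the manifest, excluding itself so re-runs stay stable.
find . -type f ! -name CHECKSUMS.sha256 -exec sha256sum {} \; > CHECKSUMS.sha256

# Any later reader re-hashes every file and compares against the manifest.
sha256sum -c CHECKSUMS.sha256   # prints "./containers.txt: OK"
```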
**Documentation Outcome:**
- Complete archive of legacy system
- Checksums for integrity verification
- Retention policy documented

**Acceptance Criteria:**
- [ ] All legacy documentation archived
- [ ] Archive checksums computed and stored
- [ ] Archive location documented
- [ ] Retention policy defined

---

#### Step 4.3: Cancel VPS Subscription

**Objective:** Terminate the legacy VPS service and recover monthly costs

**Procedures:**
```bash
# Contact VPS provider and request cancellation
# Typical process:
# 1. Log into provider account (DigitalOcean/Linode/AWS/etc.)
# 2. Navigate to Droplets/Instances
# 3. Select legacy VPS instance
# 4. Choose "Destroy" or "Terminate"
# 5. Confirm action
# 6. Verify cancellation in billing

# Example DigitalOcean CLI:
doctl compute droplet delete $LEGACY_DROPLET_ID --force

# Verify droplet is destroyed
doctl compute droplet list | grep $LEGACY_DROPLET_ID   # Should return nothing

# Verify snapshot was retained (if created)
doctl compute image list --type snapshot | grep igny8-legacy

# Verify final billing shows cancellation
# (Check provider's billing portal, should show service ended)
```

**Documentation Outcome:**
- Cancellation date recorded
- Confirmation of service termination
- Monthly cost savings calculated
- Final billing statement

**Acceptance Criteria:**
- [ ] VPS service terminated at provider
- [ ] Billing shows cancellation
- [ ] Snapshot retained (if applicable)
- [ ] Cost recovery documented

---

#### Step 4.4: Update Infrastructure Documentation

**Objective:** Update all infrastructure diagrams and documentation to reflect legacy VPS removal

**Procedures:**
- Update infrastructure architecture diagrams
- Remove all references to legacy VPS IP addresses
- Update runbooks and operational procedures
- Update disaster recovery plans
- Update network diagrams
- Remove old VPS from inventory management systems
- Update knowledge base articles

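A quick sweep for lingering IP references can back the checklist above; a sketch (the throwaway directory and the TEST-NET example address 203.0.113.10 are illustrative):

```bash
#!/usr/bin/env bash
# stale_refs: list files under a directory that still mention an address.
stale_refs() {
  local ip=$1 dir=$2
  grep -rlF "$ip" "$dir" || true   # -F: literal match, so dots are not wildcards
}

# Illustration with a throwaway docs tree:
demo=$(mktemp -d)
echo "legacy host: 203.0.113.10" > "$demo/runbook.md"
stale_refs "203.0.113.10" "$demo"   # prints the path of runbook.md
```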
**Documentation Outcome:**
- Updated architecture diagrams
- Cleaned infrastructure documentation
- Updated operational procedures

**Acceptance Criteria:**
- [ ] All infrastructure diagrams updated
- [ ] No references to legacy VPS remain in public documentation
- [ ] IP ranges documented only for new VPS
- [ ] Runbooks reflect current infrastructure only

---

## 5. Acceptance Criteria

### 5.1 Phase 1 Completion Criteria

- [ ] All services on legacy VPS documented in SERVICE_INVENTORY format
- [ ] All GitHub repositories verified complete and accessible
- [ ] Production services confirmed stable (>99% uptime, no errors)
- [ ] All DNS records verified pointing to new VPS only
- [ ] Comprehensive legacy VPS documentation created
- [ ] No blocking issues identified
- [ ] Sign-off obtained from infrastructure team

### 5.2 Phase 2 Completion Criteria

- [ ] VPS snapshot created and verified (if chosen)
- [ ] BACKUP_METADATA recorded for all backups
- [ ] Minimum 7-day grace period completed with no issues
- [ ] Zero unexpected traffic to legacy VPS during monitoring
- [ ] Final DNS verification shows no changes
- [ ] All backup retention policies documented

### 5.3 Phase 3 Completion Criteria

- [ ] All non-critical containers stopped and removed
- [ ] Gitea database backed up successfully
- [ ] Gitea repository count matches GitHub count
- [ ] Gitea container, volumes, and images removed
- [ ] All remaining legacy containers removed
- [ ] All unused volumes and images pruned
- [ ] Disk space recovery quantified (minimum 1.5GB)
- [ ] Smoke tests pass on new VPS (zero errors)
- [ ] Production services unaffected by cleanup
- [ ] No cascading failures reported

### 5.4 Phase 4 Completion Criteria

- [ ] Minimum 14-day grace period completed successfully
- [ ] All production metrics normal for past 7 days
- [ ] Zero legacy VPS traffic for past 7 days
- [ ] Legacy system documentation archived
- [ ] Archive checksums verified
- [ ] VPS subscription cancelled
- [ ] Billing shows service cancellation
- [ ] Infrastructure documentation updated
- [ ] All Phase 0 tasks complete

### 5.5 Overall Project Acceptance

- [ ] Phase 00A - Repository Consolidation complete (verified in 00A document)
- [ ] Phase 00C - Production Migration complete (verified in 00C document)
- [ ] Phase 00E - Legacy Cleanup complete (this document)
- [ ] Zero data loss throughout migration
- [ ] No production outages caused by migration
- [ ] New VPS operating at full capacity
- [ ] Cost recovery from VPS cancellation achieved
- [ ] All backups retained per compliance policy

---

## 6. Claude Code Instructions

### 6.1 Execution Workflow

This section provides step-by-step instructions for executing the legacy cleanup using Claude Code.

#### Pre-Execution Checklist

Before starting, verify:
```bash
# Verify you have access to both VPS instances
ssh -o ConnectTimeout=5 user@old-vps-ip "echo 'Legacy VPS accessible'"
ssh -o ConnectTimeout=5 user@new-vps-ip "echo 'New VPS accessible'"

# Verify you have required tools installed
which gh       # GitHub CLI
which docker   # Docker CLI
which curl     # For health checks
which dig      # For DNS verification

# Verify GitHub access
gh auth status
gh repo list YOUR-ORG --limit 1

# Set environment variables
export OLD_VPS_IP="x.x.x.x"
export NEW_VPS_IP="y.y.y.y"
export GITHUB_ORG="YOUR-ORG"
export LEGACY_BACKUP_DIR="/backup/igny8_legacy_archive"
```

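The individual `which` checks above can be collapsed into one fail-fast loop; a sketch (the `check_tools` name is illustrative):

```bash
#!/usr/bin/env bash
# check_tools: return non-zero if any named CLI is missing from PATH.
check_tools() {
  local missing=0 tool
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool"; missing=1; }
  done
  return "$missing"
}

check_tools gh docker curl dig || echo "install the tools above before continuing"
```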
---

#### Phase 1 Execution (Days 1-3)

**Step 1: Inventory Services**
```bash
# Clone this repository to your local machine
git clone https://github.com/$GITHUB_ORG/igny8-infrastructure.git
cd igny8-infrastructure

# Create inventory script
cat > scripts/inventory_legacy_vps.sh << 'EOF'
#!/bin/bash
set -e

VPS_IP=$1
OUTPUT_DIR=$2

echo "Inventorying legacy VPS at $VPS_IP..."
mkdir -p "$OUTPUT_DIR"

ssh user@$VPS_IP "docker ps -a --format 'table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Status}}'" > $OUTPUT_DIR/containers.txt
ssh user@$VPS_IP "docker images" > $OUTPUT_DIR/images.txt
ssh user@$VPS_IP "docker volume ls" > $OUTPUT_DIR/volumes.txt
ssh user@$VPS_IP "docker inspect \$(docker ps -aq)" > $OUTPUT_DIR/container_details.json
ssh user@$VPS_IP "df -h" > $OUTPUT_DIR/disk_usage.txt
ssh user@$VPS_IP "free -h" > $OUTPUT_DIR/memory_usage.txt

echo "Inventory saved to $OUTPUT_DIR"
EOF

chmod +x scripts/inventory_legacy_vps.sh

# Execute inventory
./scripts/inventory_legacy_vps.sh $OLD_VPS_IP ./tmp/legacy_inventory/

# Review results
cat ./tmp/legacy_inventory/containers.txt
```

**Step 2: Verify GitHub Repositories**
```bash
# Create verification script
cat > scripts/verify_github_repos.sh << 'EOF'
#!/bin/bash
GITHUB_ORG=$1

echo "Verifying GitHub repositories..."
gh repo list $GITHUB_ORG --limit 1000 --json name,createdAt > repos.json

# For each repo, verify the commit count (full clone: a shallow --depth 1
# clone would always report a count of 1)
jq -r '.[].name' repos.json | while read -r repo; do
  echo "Checking $repo..."
  git clone --quiet https://github.com/$GITHUB_ORG/$repo /tmp/verify_$repo

  commit_count=$(cd /tmp/verify_$repo && git rev-list --count HEAD)
  echo "  Commits: $commit_count"

  rm -rf /tmp/verify_$repo
done

echo "GitHub verification complete"
EOF

chmod +x scripts/verify_github_repos.sh
./scripts/verify_github_repos.sh $GITHUB_ORG
```

**Step 3: Verify New VPS Production Readiness**
```bash
# Create health check script
cat > scripts/verify_new_vps.sh << 'EOF'
#!/bin/bash
VPS_IP=$1

echo "Checking new VPS health..."

# Check services
ssh user@$VPS_IP "docker ps -a" | grep -E "Up|Exited"

# Check endpoints
curl -s https://api.igny8.local/health | jq .
curl -s https://app.igny8.local/ > /dev/null && echo "App endpoint OK"

# Check resources
ssh user@$VPS_IP "free -h | awk 'NR==2'"
ssh user@$VPS_IP "df -h | grep '/$'"

# Check logs for errors (docker logs takes one container at a time)
ssh user@$VPS_IP 'for c in $(docker ps -q); do docker logs "$c" 2>&1; done | grep -ci error'

echo "Health check complete"
EOF

chmod +x scripts/verify_new_vps.sh
./scripts/verify_new_vps.sh $NEW_VPS_IP
```

**Step 4: Verify DNS**
```bash
# Create DNS verification script
cat > scripts/verify_dns.sh << 'EOF'
#!/bin/bash
OLD_IP=$1
NEW_IP=$2

echo "Verifying DNS records..."

domains=("igny8.local" "api.igny8.local" "app.igny8.local" "git.igny8.local")

for domain in "${domains[@]}"; do
  current_ip=$(dig +short $domain @8.8.8.8 | head -1)

  echo "$domain -> $current_ip"

  if [ "$current_ip" = "$OLD_IP" ]; then
    echo "  WARNING: Still pointing to old VPS!"
  elif [ "$current_ip" = "$NEW_IP" ]; then
    echo "  OK: Pointing to new VPS"
  else
    echo "  WARNING: Unexpected IP address"
  fi
done

echo "DNS verification complete"
EOF

chmod +x scripts/verify_dns.sh
./scripts/verify_dns.sh $OLD_VPS_IP $NEW_VPS_IP
```

**Step 5: Generate Phase 1 Report**
```bash
# Consolidate findings (unquoted delimiter so $(date) and the IP variables expand)
cat > Phase1_Verification_Report.md << EOF
# Phase 1 Verification Report

**Date:** $(date)
**Old VPS IP:** $OLD_VPS_IP
**New VPS IP:** $NEW_VPS_IP

## Services Inventory
[COPY CONTENTS OF tmp/legacy_inventory/containers.txt]

## GitHub Verification
[COPY RESULTS OF verify_github_repos.sh]

## New VPS Health
[COPY RESULTS OF verify_new_vps.sh]

## DNS Verification
[COPY RESULTS OF verify_dns.sh]

## Sign-Off
- [ ] Infrastructure team verified all items
- [ ] No blocking issues identified
- [ ] Approved to proceed to Phase 2

**Signed by:** _______________
**Date:** _______________
EOF

echo "Phase 1 Report Generated"
git add Phase1_Verification_Report.md
git commit -m "Phase 1: Legacy cleanup verification complete"
```

---

#### Phase 2 Execution (Days 4-14)

**Step 1: Create VPS Snapshot (Optional)**
```bash
# Example for DigitalOcean
echo "Creating snapshot of legacy VPS..."
doctl compute droplet-action snapshot $LEGACY_DROPLET_ID \
  --snapshot-name "igny8-legacy-backup-$(date +%Y%m%d)" \
  --wait

# Verify snapshot created
doctl compute image list --type snapshot | grep igny8-legacy

# Record snapshot details
doctl compute image get [snapshot-id] --output json > snapshot_metadata.json
```

**Step 2: Monitor Grace Period**
```bash
# Create monitoring script
cat > scripts/monitor_legacy_vps.sh << 'EOF'
#!/bin/bash
VPS_IP=$1
DAYS=$2

echo "Monitoring legacy VPS for $DAYS days..."

for day in $(seq 1 $DAYS); do
  echo "=== Day $day ==="

  # Check for traffic (timeout caps the capture so the loop cannot block)
  ssh user@$VPS_IP "timeout 60 tcpdump -i eth0 'tcp port 80 or tcp port 443' -c 10 -n 2>/dev/null || echo 'No traffic detected'"

  # Check container status
  ssh user@$VPS_IP "docker ps -a --format 'table {{.Names}}\t{{.Status}}'"

  # Check for errors in logs
  ssh user@$VPS_IP "docker logs gitea 2>&1 | tail -5"

  # Check disk/memory
  ssh user@$VPS_IP "echo 'Disk:' && df -h | grep '/$' && echo 'Memory:' && free -h | awk 'NR==2'"

  echo ""
  sleep 86400   # Wait 24 hours
done

echo "Monitoring complete"
EOF

chmod +x scripts/monitor_legacy_vps.sh
# Run in background or scheduled job
./scripts/monitor_legacy_vps.sh $OLD_VPS_IP 7 &
```

**Step 3: Final DNS Verification Before Cleanup**
```bash
# Repeat DNS checks
./scripts/verify_dns.sh $OLD_VPS_IP $NEW_VPS_IP

# Create final verification report
cat > Phase2_Grace_Period_Report.md << 'EOF'
# Phase 2: Grace Period Report

**Period:** 7-14 days
**Monitoring Status:** Complete
**Incidents:** None
**Traffic to Legacy VPS:** Zero

## Findings
- All DNS records confirmed pointing to new VPS
- No unexpected traffic detected
- All new VPS services stable
- No cascading failures

**Approved to proceed to Phase 3**

**Signed by:** _______________
**Date:** _______________
EOF
```

---

#### Phase 3 Execution (Day 15+)

**Step 1: Backup Gitea**
```bash
# Create backup script
cat > scripts/backup_gitea.sh << 'EOF'
#!/bin/bash
VPS_IP=$1
BACKUP_DIR=$2

mkdir -p "$BACKUP_DIR"

echo "Backing up Gitea..."

# Backup database (gitea dump writes a zip inside the container, so copy it out)
ssh user@$VPS_IP "docker exec gitea /app/gitea/gitea dump -c /etc/gitea/app.ini -f /tmp/gitea_dump.zip"
ssh user@$VPS_IP "docker cp gitea:/tmp/gitea_dump.zip /tmp/gitea_dump.zip"
scp user@$VPS_IP:/tmp/gitea_dump.zip $BACKUP_DIR/gitea_dump.zip

# Backup configuration (docker cp lands on the remote host, so fetch it via scp)
ssh user@$VPS_IP "docker cp gitea:/etc/gitea /tmp/gitea_config"
scp -r user@$VPS_IP:/tmp/gitea_config $BACKUP_DIR/gitea_config

# Record repository count (bare repos are '*.git' directories; the repositories
# themselves already live on GitHub after 00A)
ssh user@$VPS_IP "docker exec gitea find /data/gitea-repositories -name '*.git' -type d | wc -l" > $BACKUP_DIR/repo_count.txt

# Compute checksums (exclude the manifest itself)
cd "$BACKUP_DIR"
find . -type f ! -name CHECKSUMS.sha256 -exec sha256sum {} \; > CHECKSUMS.sha256

echo "Gitea backup complete"
ls -lh .
EOF

chmod +x scripts/backup_gitea.sh
./scripts/backup_gitea.sh $OLD_VPS_IP ./backup/gitea/
```

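The recorded count gates Phase 3: Gitea may only be removed once it matches the GitHub count exactly. A small comparison helper (the `counts_match` name is illustrative):

```bash
#!/usr/bin/env bash
# counts_match: succeed only when the Gitea and GitHub repo counts agree.
counts_match() {
  local gitea=$1 github=$2
  if [ "$gitea" -eq "$github" ]; then
    echo "OK: $gitea repositories on both sides"
  else
    echo "MISMATCH: gitea=$gitea github=$github"
    return 1
  fi
}

counts_match 50 50              # prints "OK: 50 repositories on both sides"
counts_match 48 50 || echo "do NOT remove Gitea yet"
```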
**Step 2: Stop Gitea**
```bash
# Stop Gitea container
echo "Stopping Gitea container..."
ssh user@$OLD_VPS_IP "docker stop gitea"
sleep 30

# Verify stopped
ssh user@$OLD_VPS_IP "docker ps | grep gitea" || echo "Gitea stopped successfully"

# Remove container
ssh user@$OLD_VPS_IP "docker rm gitea"
echo "Gitea container removed"
```

**Step 3: Remove Other Containers**
```bash
# Create cleanup script
cat > scripts/cleanup_legacy_containers.sh << 'EOF'
#!/bin/bash
VPS_IP=$1

echo "Cleaning up legacy containers..."

# Get list of all containers
containers=$(ssh user@$VPS_IP "docker ps -aq")

for container in $containers; do
  name=$(ssh user@$VPS_IP "docker ps -a --filter id=$container --format '{{.Names}}'")
  echo "Stopping $name..."
  ssh user@$VPS_IP "docker stop $container"
  ssh user@$VPS_IP "docker rm $container"
done

# Prune volumes and images
echo "Pruning volumes..."
ssh user@$VPS_IP "docker volume prune -f"

echo "Pruning images..."
ssh user@$VPS_IP "docker image prune -a -f"   # -a removes all unused images, not just dangling ones

echo "System cleanup..."
ssh user@$VPS_IP "docker system prune -f"

# Verify cleanup
echo ""
echo "Verification:"
ssh user@$VPS_IP "docker ps -a"
ssh user@$VPS_IP "docker volume ls"
ssh user@$VPS_IP "docker images"

echo "Cleanup complete"
EOF

chmod +x scripts/cleanup_legacy_containers.sh
./scripts/cleanup_legacy_containers.sh $OLD_VPS_IP
```

**Step 4: Verify New VPS Still Operational**
```bash
# Create smoke test script
cat > scripts/smoke_test.sh << 'EOF'
#!/bin/bash
NEW_VPS_IP=$1

echo "Running smoke tests on new VPS..."

# Test APIs
echo "Testing API health..."
curl -s https://api.igny8.local/health | jq . || echo "FAILED"

# Test app (-f makes curl fail on HTTP errors so FAILED actually triggers)
echo "Testing web app..."
curl -sf -o /dev/null -w "%{http_code}\n" https://app.igny8.local/ || echo "FAILED"

# Test database
echo "Testing database..."
ssh user@$NEW_VPS_IP "docker exec app-container /app/bin/db-health-check" || echo "FAILED"

# Check logs
echo "Checking for errors in logs..."
ssh user@$NEW_VPS_IP "docker logs --tail=50 app-container | grep -i error" || echo "No errors found"

echo "Smoke tests complete"
EOF

chmod +x scripts/smoke_test.sh
./scripts/smoke_test.sh $NEW_VPS_IP
```

**Step 5: Generate Phase 3 Report**
```bash
# Unquoted delimiter so $(date) expands
cat > Phase3_Cleanup_Report.md << EOF
# Phase 3: Service Cleanup Report

**Date:** $(date)

## Services Removed
- Gitea Container: REMOVED
- Additional Containers: [LIST]
- Unused Volumes: PRUNED
- Unused Images: PRUNED

## Disk Space Recovery
**Before:** [RECORD FROM PHASE 1]
**After:** [RECORD FROM PHASE 3]
**Recovered:** [CALCULATED]

## Smoke Test Results
**Status:** ALL PASS ✓

## Sign-Off
- [ ] All containers successfully removed
- [ ] Gitea backup verified
- [ ] New VPS fully operational
- [ ] No production impact
- [ ] Ready for Phase 4

**Signed by:** _______________
**Date:** _______________
EOF
```

---

#### Phase 4 Execution (Day 21+)

**Step 1: Final System Verification**
```bash
cat > Phase4_Final_Checklist.md << 'EOF'
# Phase 4: Final Verification Checklist

**Pre-Decommission Checks:**
- [ ] Minimum 14 days passed since Phase 3 cleanup
- [ ] New VPS uptime: _______ days (Target: 14+)
- [ ] Zero production incidents related to cleanup
- [ ] All monitoring dashboards green for past 7 days
- [ ] Legacy VPS receives zero traffic (past 7 days)
- [ ] All backups created and retention policies set
- [ ] Git backup/snapshots tested (if applicable)
- [ ] Infrastructure documentation updated

**Proceed to VPS Cancellation:** YES / NO

EOF
```

**Step 2: Archive Legacy Documentation**
```bash
# Create archive
mkdir -p $LEGACY_BACKUP_DIR
cp ./backup/gitea/* $LEGACY_BACKUP_DIR/
cp ./Phase*_Report.md $LEGACY_BACKUP_DIR/
cp ./tmp/legacy_inventory/* $LEGACY_BACKUP_DIR/

# Compute checksums (exclude the manifest itself)
cd $LEGACY_BACKUP_DIR
find . -type f ! -name CHECKSUMS.sha256 -exec sha256sum {} \; > CHECKSUMS.sha256

# Create archive tarball
tar -czf igny8_legacy_archive_$(date +%Y%m%d).tar.gz *

echo "Archive created: $(ls -lh *.tar.gz)"
```

**Step 3: Cancel VPS Subscription**
```bash
# Example for DigitalOcean
doctl compute droplet delete $LEGACY_DROPLET_ID --force

# Verify cancellation
doctl compute droplet list | grep -i legacy || echo "VPS successfully deleted"

# Verify billing
# (Check provider portal for cancellation confirmation)
echo "Verify in DigitalOcean Billing: Account > Billing > History"
```

**Step 4: Update Infrastructure Documentation**
```bash
# Remove legacy VPS from all documentation
# (parentheses group the -name tests so -type f applies to every extension)
find . -type f \( -name "*.md" -o -name "*.yml" -o -name "*.yaml" \) | \
  xargs grep -l "$OLD_VPS_IP" | \
  while read file; do
    echo "Found reference in: $file"
  done

# Update architecture diagrams
# (Remove legacy VPS from all diagrams)
# Commit changes
git add -A
git commit -m "Remove legacy VPS references from documentation"
```

**Step 5: Generate Final Report**
```bash
# Unquoted delimiter so $(date) and $LEGACY_BACKUP_DIR expand; the literal
# $[...] placeholders are escaped so the shell leaves them alone.
cat > Phase4_Decommission_Report.md << EOF
# Phase 4: VPS Decommission Report

**Completion Date:** $(date)

## Actions Completed
- [x] VPS subscription cancelled
- [x] Service termination confirmed
- [x] Legacy documentation archived
- [x] Infrastructure documentation updated
- [x] Monitoring removed for legacy VPS

## Archive Location
**Path:** $LEGACY_BACKUP_DIR
**Size:** [SIZE]
**Checksum:** [HASH]
**Retention:** [POLICY]

## Cost Recovery
**Monthly Savings:** \$[AMOUNT]
**Annual Savings:** \$[AMOUNT]

## Project Completion
All Phase 0 tasks complete:
- [x] 00A - Repository Consolidation
- [x] 00C - Production Migration
- [x] 00E - Legacy Cleanup

**Project Status:** COMPLETE ✓

**Final Sign-Off by:** _______________
**Date:** _______________
**Title:** _______________
EOF

# Create final commit
git add -A
git commit -m "Phase 4: Legacy VPS decommissioned - Phase 0 complete"
git push origin main
```

---

### 6.2 Troubleshooting Guide

#### Problem: DNS Records Still Point to Old VPS

**Diagnosis:**
```bash
dig igny8.local +short
# Returns: x.x.x.x (old VPS IP)
```

**Resolution:**
1. Check DNS provider settings (Route53, Cloudflare, etc.)
2. Verify TTL has expired (may need to wait 24+ hours)
3. Manually update DNS records if needed
4. Flush local DNS cache: `sudo systemctl restart systemd-resolved`
5. Re-verify from external DNS: `dig @8.8.8.8 igny8.local`

---

#### Problem: Gitea Container Won't Stop

**Diagnosis:**
```bash
ssh user@$OLD_VPS_IP "docker stop gitea"
# Hangs or times out
```

**Resolution:**
```bash
# Use force kill
ssh user@$OLD_VPS_IP "docker kill gitea"

# Or check why it's not stopping
ssh user@$OLD_VPS_IP "docker logs gitea | tail -20"

# Check if processes are stuck
ssh user@$OLD_VPS_IP "docker top gitea"
```

---

#### Problem: Repository Count Mismatch

**Diagnosis:**
```bash
# GitHub repos: 50
# Gitea repos: 48
```

**Resolution:**
1. Identify missing repositories in GitHub
2. Manually migrate missing repos
3. Check for fork relationships (forks may not be migrated by default)
4. Review Gitea UI for archived/hidden repositories
5. Do NOT proceed until counts match exactly

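Which repositories are missing can be pinned down by diffing the two name lists; a sketch with `comm` (the filenames and repo names are illustrative):

```bash
#!/usr/bin/env bash
# List repos present in Gitea but absent from GitHub (comm needs sorted input).
printf '%s\n' alpha beta gamma > /tmp/gitea_repos.txt
printf '%s\n' alpha gamma      > /tmp/github_repos.txt

comm -23 <(sort /tmp/gitea_repos.txt) <(sort /tmp/github_repos.txt)   # prints beta
```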

---

#### Problem: New VPS Errors After Container Removal

**Diagnosis:**
```bash
curl https://api.igny8.local/health
# Returns: 500 Internal Server Error
```

**Resolution:**
1. Check if the removed container was a dependency
2. Review error logs: `docker logs [app-container]`
3. If critical: restore the snapshot and retry the removal process
4. Verify container dependencies before removal

---

### 6.3 Rollback Procedures

If critical issues emerge during cleanup, follow these procedures:

#### Rollback During Phase 1-2 (Pre-Cleanup)
- No rollback needed; no changes have been made yet
- Continue normal operations

#### Rollback During Phase 3 (Container Removal)
- If the new VPS experiences failures:
  1. Stop all removals immediately
  2. Verify the new VPS state
  3. If a snapshot exists, restore it
  4. Investigate the root cause before continuing
  5. Resume cleanup once the root cause is fixed

#### Rollback During Phase 4 (VPS Decommission)
- If the VPS is already cancelled:
  1. Create a new VPS with the same IP if possible
  2. Restore from the snapshot (if created in Phase 2)
  3. Restore from backups in the archive
  4. Update DNS to point back to the legacy VPS if needed
  5. Contact the VPS provider for an emergency restore if needed

---

### 6.4 Success Criteria Checklist

Use this checklist to verify successful completion:

```
PHASE 1: Pre-Cleanup Verification
[ ] All services documented in SERVICE_INVENTORY format
[ ] All GitHub repositories verified and accessible
[ ] New VPS confirmed stable and operational
[ ] DNS records verified pointing only to new VPS
[ ] Legacy VPS documentation complete
[ ] Infrastructure team sign-off obtained

PHASE 2: Grace Period Monitoring
[ ] VPS snapshot created (if chosen)
[ ] 7-day monitoring period complete
[ ] Zero unexpected traffic to legacy VPS
[ ] All backup retention policies set
[ ] Final DNS verification shows no changes

PHASE 3: Service Cleanup
[ ] All non-critical containers removed
[ ] Gitea database backed up
[ ] Gitea repository count matches GitHub
[ ] Gitea container and volumes removed
[ ] All legacy containers removed
[ ] Unused volumes and images pruned
[ ] >1.5GB disk space recovered
[ ] Smoke tests pass with zero errors
[ ] Production services unaffected

PHASE 4: VPS Decommission
[ ] Minimum 14-day grace period complete
[ ] All production metrics normal
[ ] Legacy documentation archived
[ ] Archive checksums verified
[ ] VPS subscription cancelled
[ ] Billing shows cancellation
[ ] Infrastructure documentation updated
[ ] All Phase 0 tasks complete

FINAL PROJECT STATUS
[ ] Phase 00A: Repository Consolidation - COMPLETE
[ ] Phase 00C: Production Migration - COMPLETE
[ ] Phase 00E: Legacy Cleanup - COMPLETE
[ ] Zero data loss throughout project
[ ] Zero production outages caused by migration
[ ] Cost recovery from VPS cancellation achieved
[ ] New infrastructure operating at full capacity
[ ] All compliance requirements met
```

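Progress against a plain-text checklist like this can be tallied mechanically; a sketch (the filename and `[x]` marker convention are illustrative):

```bash
#!/usr/bin/env bash
# Count completed vs. total checklist items ([x] = done, [ ] = open).
checklist=$(mktemp)
printf '%s\n' '[x] step one' '[ ] step two' '[x] step three' > "$checklist"

total=$(grep -c '^\[[ x]\]' "$checklist")
done_count=$(grep -c '^\[x\]' "$checklist")
echo "$done_count/$total complete"   # prints 2/3 complete
```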

---

## 7. Appendix

### 7.1 Related Documents

- **00A - Repository Consolidation**: Gitea to GitHub migration details; verify all repositories migrated before Gitea removal
- **00B - Version Matrix**: Reference infrastructure component versions (PostgreSQL, Redis, Caddy, OS versions) for new production environment
- **00C - 3-Stage Migration Flow**: DNS migration strategy, test DNS records created, timing requirements for old VPS decommission (24-48+ hours stability required)
- **Infrastructure Architecture Diagrams**: Updated after Phase 0 completion
- **Backup and Disaster Recovery Plan**: Legacy backup procedures

### 7.2 Environment Variables Reference

```bash
# Legacy VPS
export OLD_VPS_IP="x.x.x.x"          # Old VPS IP address
export OLD_VPS_USER="user"           # SSH user
export OLD_SSH_KEY="/path/to/key"    # SSH private key

# New VPS
export NEW_VPS_IP="y.y.y.y"          # New VPS IP address
export NEW_VPS_USER="user"           # SSH user
export NEW_SSH_KEY="/path/to/key"    # SSH private key

# GitHub
export GITHUB_ORG="your-org-name"    # GitHub organization
export GITHUB_TOKEN="[token]"        # GitHub personal access token

# Backups
export LEGACY_BACKUP_DIR="/backup/igny8_legacy_archive"
export SNAPSHOT_RETENTION_DAYS="30"  # Keep snapshots for 30 days

# VPS Provider (adjust based on provider)
export LEGACY_DROPLET_ID="[id]"      # DigitalOcean droplet ID
export LEGACY_INSTANCE_ID="[id]"     # AWS instance ID
```

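Scripts that depend on these variables can guard against running half-configured; a sketch (the `require_env` helper is illustrative and relies on bash indirect expansion):

```bash
#!/usr/bin/env bash
# require_env: fail with a message if any named variable is unset or empty.
require_env() {
  local v
  for v in "$@"; do
    [ -n "${!v:-}" ] || { echo "unset: $v"; return 1; }
  done
}

export OLD_VPS_IP="x.x.x.x"
require_env OLD_VPS_IP && echo "ok"              # prints ok
require_env IGNY8_UNSET_DEMO_VAR || echo "configure IGNY8_UNSET_DEMO_VAR first"
```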

### 7.3 Script Repository

All scripts referenced in Section 6 are available in:
```
https://github.com/$GITHUB_ORG/igny8-infrastructure
└── scripts/
    ├── inventory_legacy_vps.sh
    ├── verify_github_repos.sh
    ├── verify_new_vps.sh
    ├── verify_dns.sh
    ├── monitor_legacy_vps.sh
    ├── backup_gitea.sh
    ├── cleanup_legacy_containers.sh
    └── smoke_test.sh
```

### 7.4 Risk Assessment

| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|------------|
| Data loss during Gitea removal | Low | Critical | Backup before removal, verify GitHub completion |
| DNS still pointing to old VPS | Low | Medium | Verify DNS before cleanup, monitor for 7+ days |
| Production service dependency on legacy containers | Very Low | Critical | Smoke tests after each removal, grace period |
| Incomplete repository migration | Very Low | High | Cross-reference repository counts, test clones |
| Unable to cancel VPS subscription | Very Low | Low | Keep snapshot, can request reactivation from provider |
| Insufficient disk space recovery | Very Low | Low | Prune unused images/volumes, calculate before cleanup |

### 7.5 Cost Analysis

**Monthly VPS Cost:** $[AMOUNT]
**Estimated Annual Savings:** $[AMOUNT × 12]

**Justification:**
- Old VPS no longer needed after production migration
- All services running on new, more efficient infrastructure
- Resource recovery enables potential future consolidation

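The annual figure is just the monthly cost times twelve; a one-line helper makes the arithmetic reproducible (the $24.99 example rate is illustrative):

```bash
#!/usr/bin/env bash
# annual_savings: multiply a monthly dollar amount by 12.
annual_savings() {
  awk -v m="$1" 'BEGIN { printf "%.2f\n", m * 12 }'
}

annual_savings 24.99   # prints 299.88
```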

---

## Document History

| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0 | 2026-03-23 | Infrastructure Team | Initial creation |

---

## Approval Sign-Off

**Project Manager:** ________________________ **Date:** __________

**Infrastructure Lead:** ________________________ **Date:** __________

**Operations Manager:** ________________________ **Date:** __________

---

**Document Status:** Ready for Phase 1 Execution
**Last Updated:** 2026-03-23