# Quick Start Guide - Automation Runs Detail View

## 🚀 How to Test the New Features

### 1. Start the Application

**Backend:**

```bash
cd /data/app/igny8/backend
python manage.py runserver
```

**Frontend:**

```bash
cd /data/app/igny8/frontend
npm run dev
```

### 2. Access the Overview Page

Navigate to: `http://localhost:5173/automation/overview`

You should see:

- ✅ **Run Statistics Summary** - Cards showing total/completed/failed/running runs
- ✅ **Predictive Cost Analysis** - Donut chart with estimated credits for next run
- ✅ **Attention Items Alert** - Warning if there are failed/skipped items
- ✅ **Enhanced Run History** - Table with clickable run titles

### 3. Explore the Detail Page

**Option A: Click a Run Title**

- Click any run title in the history table (e.g., "mysite.com #42")
- You'll navigate to `/automation/runs/{run_id}`

**Option B: Direct URL**

- Find a run_id from the backend
- Navigate to: `http://localhost:5173/automation/runs/run_20260117_140523_manual`
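
The run_id in the example URL appears to encode a date, a time, and a trigger type. A hypothetical parser, assuming a `run_<YYYYMMDD>_<HHMMSS>_<trigger>` format (verify against your backend's actual scheme before relying on it):

```python
from datetime import datetime

def parse_run_id(run_id: str):
    """Split a run_id like 'run_20260117_140523_manual' into its parts.

    Assumes the 'run_<YYYYMMDD>_<HHMMSS>_<trigger>' format seen in the
    example URL above; adjust if the backend uses a different scheme.
    """
    prefix, date_part, time_part, trigger = run_id.split("_", 3)
    if prefix != "run":
        raise ValueError(f"unexpected run_id prefix: {prefix!r}")
    started_at = datetime.strptime(date_part + time_part, "%Y%m%d%H%M%S")
    return started_at, trigger

# parse_run_id("run_20260117_140523_manual")
# → (datetime(2026, 1, 17, 14, 5, 23), "manual")
```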

You should see:

- ✅ **Run Summary Card** - Status, dates, duration, credits
- ✅ **Insights Panel** - Auto-generated alerts and recommendations
- ✅ **Credit Breakdown Chart** - Donut chart showing credit distribution
- ✅ **Efficiency Metrics** - Performance stats with historical comparison
- ✅ **Stage Accordion** - Expandable sections for all 7 stages

### 4. Test Different Scenarios

#### Scenario 1: Site with No Runs

- Create a new site or use one with 0 automation runs
- Visit `/automation/overview`
- **Expected:** "No automation runs yet" message

#### Scenario 2: Site with Few Runs (< 3 completed)

- Use a site with 1-2 completed runs
- **Expected:** Predictive analysis shows "Low confidence"

#### Scenario 3: Site with Many Runs (> 10)

- Use a site with 10+ completed runs
- **Expected:** Full historical averages, "High confidence" predictions
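
Scenarios 2 and 3 imply a confidence rule keyed on the number of completed runs. A minimal sketch of that rule, using the thresholds stated above; the "Medium confidence" bucket for 3-9 runs is an assumption, not something this guide specifies:

```python
def prediction_confidence(completed_runs: int) -> str:
    """Map a completed-run count to a predictive-analysis confidence label.

    Thresholds from the scenarios above: fewer than 3 completed runs gives
    "Low confidence", 10 or more gives "High confidence". The middle
    "Medium confidence" bucket is an assumption for illustration.
    """
    if completed_runs < 3:
        return "Low confidence"
    if completed_runs >= 10:
        return "High confidence"
    return "Medium confidence"
```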

#### Scenario 4: Failed Run

- Find a run with status='failed'
- View its detail page
- **Expected:** Error insights, red status badge, error messages in stages

#### Scenario 5: Running Run

- Trigger a new automation run (if possible)
- View the overview page while it's running
- **Expected:** "Running Runs: 1" in statistics

### 5. Test Interactions

- [ ] Click run title → navigates to detail page
- [ ] Expand/collapse stage accordion sections
- [ ] Change page in history pagination
- [ ] Hover over chart sections to see tooltips
- [ ] Toggle dark mode (if available in app)

### 6. Verify Data Accuracy

#### Backend API Tests

```bash
# Get overview stats
curl -H "Authorization: Bearer <token>" \
  "http://localhost:8000/api/v1/automation/overview_stats/?site_id=1"

# Get enhanced history
curl -H "Authorization: Bearer <token>" \
  "http://localhost:8000/api/v1/automation/history/?site_id=1&page=1&page_size=10"

# Get run detail
curl -H "Authorization: Bearer <token>" \
  "http://localhost:8000/api/v1/automation/run_detail/?site_id=1&run_id=run_xxx"
```

#### Verify Calculations

- Check that run numbers are sequential (1, 2, 3...)
- Verify historical averages match manual calculations
- Confirm predictive estimates align with pending items
- Ensure stage status icons match actual stage results
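
The "historical averages match manual calculations" check is easier with a small helper. A sketch, assuming variance % is computed as (current − historical average) / historical average × 100 — verify that formula against the Efficiency Metrics panel before relying on it:

```python
def historical_average(values: list[float]) -> float:
    """Mean of values from previous completed runs (e.g. credits per stage)."""
    return sum(values) / len(values)

def variance_pct(current: float, historical_avg: float) -> float:
    """Percent difference of the current run vs. the historical average.

    Assumes the UI's variance % is (current - avg) / avg * 100; this is
    an illustrative guess, not the confirmed backend formula.
    """
    if historical_avg == 0:
        raise ValueError("no historical baseline to compare against")
    return (current - historical_avg) / historical_avg * 100

# e.g. credits spent in one stage over the last 3 completed runs:
avg = historical_average([100, 120, 110])   # 110.0
delta = variance_pct(121, avg)              # 10.0 (i.e. +10%)
```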

### 7. Mobile Responsive Testing

**Test on different screen sizes:**

- 320px (iPhone SE)
- 768px (iPad)
- 1024px (Desktop)
- 1920px (Large Desktop)

**What to check:**

- Cards stack properly on mobile
- Tables scroll horizontally if needed
- Charts resize appropriately
- Text remains readable
- Buttons are touch-friendly

### 8. Dark Mode Testing

If your app supports dark mode:

- [ ] Toggle to dark mode
- [ ] Verify all text is readable
- [ ] Check chart colors are visible
- [ ] Ensure borders/dividers are visible
- [ ] Confirm badge colors have good contrast

### 9. Performance Check

Open browser DevTools:

- **Network tab**: Check API response times
  - overview_stats should be < 1s
  - run_detail should be < 500ms
  - history should be < 300ms
- **Performance tab**: Record page load
  - Initial render should be < 2s
  - Chart rendering should be < 100ms
- **Console**: Check for errors or warnings
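
The timing thresholds above can be checked programmatically once you have measurements. A sketch that flags endpoints over budget; the endpoint names mirror the API tests earlier, and the sample timings are illustrative, not real measurements:

```python
# Response-time budgets from the list above, in seconds.
BUDGETS = {
    "overview_stats": 1.0,
    "run_detail": 0.5,
    "history": 0.3,
}

def over_budget(timings: dict[str, float]) -> dict[str, float]:
    """Return the endpoints whose measured time exceeds their budget.

    `timings` would come from the DevTools Network tab or something like
    `curl -o /dev/null -s -w '%{time_total}' <url>`; unknown endpoints
    are ignored rather than flagged.
    """
    return {
        name: elapsed
        for name, elapsed in timings.items()
        if elapsed > BUDGETS.get(name, float("inf"))
    }

# over_budget({"overview_stats": 0.8, "run_detail": 0.7, "history": 0.2})
# → {"run_detail": 0.7}
```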

### 10. Browser Compatibility

Test in multiple browsers:

- [ ] Chrome/Edge (Chromium)
- [ ] Firefox
- [ ] Safari (if on Mac)

---

## 🐛 Common Issues & Solutions

### Issue: "No data available"

**Solution:** Ensure the site has at least one automation run in the database.

### Issue: Charts not rendering

**Solution:** Check that ApexCharts is installed: `npm list react-apexcharts`

### Issue: 404 on detail page

**Solution:** Verify the route is added in App.tsx and the run_id is valid.

### Issue: Historical averages showing 0

**Solution:** At least 3 completed runs are required before historical data appears.

### Issue: Predictive analysis shows "Low confidence"

**Solution:** This is expected when fewer than 3 completed runs exist.

### Issue: Dark mode colors look wrong

**Solution:** Verify Tailwind `dark:` classes are applied correctly.

---

## 📸 Screenshots to Capture

For documentation/demo purposes:

1. **Overview Page - Full View**
   - Shows all 4 components
   - With real data

2. **Predictive Analysis Chart**
   - Donut chart with 7 stages
   - Credit breakdown visible

3. **Run History Table**
   - Multiple runs visible
   - Stage status icons clear

4. **Detail Page - Run Summary**
   - Top section with status and metrics

5. **Stage Accordion - Expanded**
   - One stage expanded showing details
   - Historical comparison visible

6. **Credit Breakdown Chart**
   - Donut chart on detail page

7. **Insights Panel**
   - With actual insights displayed

8. **Mobile View**
   - Both overview and detail pages
---
|
|
|
|
## ✅ Final Verification Checklist
|
|
|
|
Before marking complete:
|
|
- [ ] All 3 new endpoints return data
|
|
- [ ] Overview page loads without errors
|
|
- [ ] Detail page loads without errors
|
|
- [ ] Routing works (click run title)
|
|
- [ ] Pagination works in history
|
|
- [ ] Charts render correctly
|
|
- [ ] Stage accordion expands/collapses
|
|
- [ ] Historical comparisons show variance %
|
|
- [ ] Auto-generated insights appear
|
|
- [ ] Dark mode looks good
|
|
- [ ] Mobile layout is usable
|
|
- [ ] No console errors
|
|
- [ ] TypeScript compiles without errors
|
|
- [ ] Backend tests pass (if any)
|
|
|
|
---
|
|
|
|
## 🎉 Success!
|
|
|
|
If all above items work, the implementation is complete and ready for:
|
|
1. User acceptance testing (UAT)
|
|
2. Staging deployment
|
|
3. Production deployment
|
|
4. User training/documentation
|
|
|
|
---
|
|
|
|
**Need help?** Check:
|
|
- `/docs/plans/AUTOMATION_RUNS_DETAIL_VIEW_UX_PLAN.md` - Full specification
|
|
- `/docs/plans/AUTOMATION_RUNS_IMPLEMENTATION_LOG.md` - Detailed implementation notes
|
|
- `/docs/plans/AUTOMATION_RUNS_IMPLEMENTATION_SUMMARY.md` - High-level overview
|