4 Commits

Author SHA1 Message Date
Paul R Kartchner
91146e1219 ci: automate Prisma migrations in pipeline and deploy [skip-deploy]
Some checks failed
Basil CI/CD Pipeline / Shared Package Tests (push) Successful in 3m18s
Basil CI/CD Pipeline / Code Linting (push) Successful in 3m39s
Basil CI/CD Pipeline / Web Tests (push) Successful in 4m1s
Basil CI/CD Pipeline / API Tests (push) Failing after 4m7s
Basil CI/CD Pipeline / Trigger Deployment (push) Has been skipped
Basil CI/CD Pipeline / Security Scanning (push) Successful in 3m42s
Basil CI/CD Pipeline / Build All Packages (push) Has been skipped
Basil CI/CD Pipeline / E2E Tests (push) Has been skipped
Basil CI/CD Pipeline / Build & Push Docker Images (push) Has been skipped
Moves migration handling into the pipeline and production deploy so
schema changes ship atomically with the code that depends on them.
Previously migrations were manual and the migrations/ directory was
gitignored, which caused silent drift between environments.

- Track packages/api/prisma/migrations/ in git (including the baseline
  20260416000000_init and the family-tenant delta).
- Add `prisma:deploy` script that runs `prisma migrate deploy` (the
  non-interactive, CI-safe command). `prisma:migrate` still maps to
  `migrate dev` for local authoring.
- Pipeline test-api and e2e-tests jobs now use `prisma:deploy` and
  test-api adds a drift check (`prisma migrate diff --exit-code`) that
  fails the build if schema.prisma has changes without a corresponding
  migration.
- deploy.sh runs migrations against prod using `docker run --rm` with
  the freshly pulled API image before restarting containers, so a
  failing migration aborts the deploy with the old containers still
  serving traffic.
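The abort-on-failure ordering described above can be sketched as follows (a minimal sketch, not the actual deploy.sh; the image name and restart command shown in the comments are assumptions):

```bash
#!/usr/bin/env bash
# Sketch of the deploy.sh ordering: migrations run first, and the container
# restart is reached only if they exit 0. The real migrate command would be
# roughly: docker run --rm -e DATABASE_URL "$API_IMAGE" npx prisma migrate deploy
deploy() {
  local migrate_cmd="$1" restart_cmd="$2"
  if ! eval "$migrate_cmd"; then
    echo "migration failed: aborting deploy; old containers keep serving traffic" >&2
    return 1
  fi
  eval "$restart_cmd"
}
```

On failure the function returns before the restart command runs, so the previously running containers are untouched.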

The [skip-deploy] tag avoids re-triggering deployment for this
infrastructure commit; the updated deploy.sh must be pulled to the
production host out-of-band before the next deployment benefits from
the new migration step.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-17 08:16:51 -06:00
Paul R Kartchner
c3e3d66fef feat: add family-based multi-tenant access control
Some checks failed
Basil CI/CD Pipeline / Code Linting (push) Successful in 3m18s
Basil CI/CD Pipeline / Web Tests (push) Successful in 3m31s
Basil CI/CD Pipeline / Security Scanning (push) Has been cancelled
Basil CI/CD Pipeline / API Tests (push) Failing after 3m56s
Basil CI/CD Pipeline / Shared Package Tests (push) Successful in 3m11s
Basil CI/CD Pipeline / Trigger Deployment (push) Has been cancelled
Basil CI/CD Pipeline / Build All Packages (push) Has been cancelled
Basil CI/CD Pipeline / E2E Tests (push) Has been cancelled
Basil CI/CD Pipeline / Build & Push Docker Images (push) Has been cancelled
Introduces Family as the tenant boundary so recipes and cookbooks can be
scoped per household instead of every user seeing everything. Adds a
centralized access filter, an invite/membership UI, a first-login prompt
to create a family, and locks down the previously unauthenticated backup
routes to admin only.

- Family and FamilyMember models with OWNER/MEMBER roles; familyId on
  Recipe and Cookbook (ON DELETE SET NULL so deleting a family orphans
  content rather than destroying it).
- access.service.ts composes a single WhereInput covering owner, family,
  PUBLIC visibility, and direct share; admins short-circuit to full
  access.
- recipes/cookbooks routes now require auth, strip client-supplied
  userId/familyId on create, and gate mutations with canMutate checks.
  Auto-filter helpers are scoped to the same family to prevent cross-tenant
  leakage via shared tag names.
- families.routes.ts exposes list/create/get/rename/delete plus
  add/remove member, with last-owner protection on removal.
- FamilyGate component blocks the authenticated UI with a modal if the
  user has zero memberships, prompting them to create their first
  family; Family page provides ongoing management.
- backup.routes.ts now requires admin; it had no auth at all before.
- Bumps version to 2026.04.008 and documents the monotonic PPP counter
  in CLAUDE.md.

Migration SQL is generated locally but not tracked (per existing
.gitignore); apply 20260416010000_add_family_tenant to prod during
deploy. Run backfill-family-tenant.ts once post-migration to assign
existing content to a default owner's family.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-17 08:08:10 -06:00
Paul R Kartchner
fb18caa3c2 feat: add comprehensive PostgreSQL backup and restore scripts
All checks were successful
Basil CI/CD Pipeline / Shared Package Tests (push) Successful in 1m10s
Basil CI/CD Pipeline / Code Linting (push) Successful in 1m18s
Basil CI/CD Pipeline / Web Tests (push) Successful in 1m29s
Basil CI/CD Pipeline / Security Scanning (push) Successful in 1m14s
Basil CI/CD Pipeline / API Tests (push) Successful in 1m45s
Basil CI/CD Pipeline / Trigger Deployment (push) Successful in 12s
Basil CI/CD Pipeline / Build All Packages (push) Successful in 1m31s
Basil CI/CD Pipeline / E2E Tests (push) Has been skipped
Basil CI/CD Pipeline / Build & Push Docker Images (push) Successful in 14m27s
Added production-grade backup and restore scripts for PostgreSQL servers
that back up all databases automatically, with retention management.

New scripts:
- scripts/backup-all-postgres-databases.sh - Backs up all databases on a
  PostgreSQL server with automatic retention, compression, verification,
  and notification support
- scripts/restore-postgres-database.sh - Restores individual databases
  with safety backups and verification
- scripts/README-POSTGRES-BACKUP.md - Complete documentation with examples,
  best practices, and troubleshooting

Features:
- Automatic detection and backup of all user databases
- Excludes system databases (postgres, template0, template1)
- Backs up global objects (roles, tablespaces)
- Optional gzip compression (80-90% space savings)
- Automatic retention management (configurable days)
- Integrity verification (gzip -t for compressed files)
- Safety backups before restore operations
- Detailed logging with color-coded output
- Backup summary reporting
- Email/Slack notification support (optional)
- Interactive restore with confirmation prompts
- Force mode for automation
- Verbose debugging mode
- Comprehensive error handling
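The enumerate-dump-verify core of this flow can be sketched like so (a simplified sketch: flag parsing, retention, logging, and notifications are omitted, and the catalog query is an assumption about how the script detects user databases):

```bash
#!/usr/bin/env bash
# Simplified sketch of the backup loop: dump globals, then every user
# database (system databases excluded), compressing and verifying each file.
# Assumes passwordless access, e.g. via ~/.pgpass.
backup_all() {
  local dest="$1" stamp
  stamp=$(date +%Y%m%d_%H%M%S)
  mkdir -p "$dest"

  # Global objects (roles, tablespaces)
  pg_dumpall --globals-only | gzip > "$dest/globals_${stamp}.sql.gz"

  # Every non-template database except the postgres maintenance DB
  psql -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate AND datname <> 'postgres'" |
  while IFS= read -r db; do
    pg_dump "$db" | gzip > "$dest/${db}_${stamp}.sql.gz"
    gzip -t "$dest/${db}_${stamp}.sql.gz"   # integrity verification
  done
}
```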

Backup directory structure:
  /var/backups/postgresql/YYYYMMDD/
    - globals_YYYYMMDD_HHMMSS.sql.gz
    - database1_YYYYMMDD_HHMMSS.sql.gz
    - database2_YYYYMMDD_HHMMSS.sql.gz

Usage examples:
  # Backup all databases with compression
  ./backup-all-postgres-databases.sh -c

  # Custom configuration
  ./backup-all-postgres-databases.sh -h db.server.com -U backup_user -d /mnt/backups -r 60 -c

  # Restore database
  ./restore-postgres-database.sh /var/backups/postgresql/20260120/mydb_20260120_020001.sql.gz

  # Force restore (skip confirmation)
  ./restore-postgres-database.sh backup.sql.gz -d mydb -f

Automation:
  # Add to crontab for daily backups at 2 AM
  0 2 * * * /path/to/backup-all-postgres-databases.sh -c >> /var/log/postgres-backup.log 2>&1

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 21:39:32 -07:00
Paul R Kartchner
883b7820ed docs: add comprehensive database migration and backup documentation
All checks were successful
Basil CI/CD Pipeline / Shared Package Tests (push) Successful in 1m38s
Basil CI/CD Pipeline / Security Scanning (push) Successful in 1m55s
Basil CI/CD Pipeline / Web Tests (push) Successful in 2m9s
Basil CI/CD Pipeline / Build All Packages (push) Successful in 1m31s
Basil CI/CD Pipeline / Code Linting (push) Successful in 1m57s
Basil CI/CD Pipeline / API Tests (push) Successful in 2m34s
Basil CI/CD Pipeline / E2E Tests (push) Has been skipped
Basil CI/CD Pipeline / Build & Push Docker Images (push) Successful in 5m5s
Basil CI/CD Pipeline / Trigger Deployment (push) Successful in 12s
Added a complete guide for migrating from containerized PostgreSQL to a
standalone server, with production-grade backup strategies.

New files:
- docs/DATABASE-MIGRATION-GUIDE.md - Complete migration guide with step-by-step
  instructions, troubleshooting, and rollback procedures
- scripts/backup-standalone-postgres.sh - Automated backup script with daily,
  weekly, and monthly retention policies
- scripts/restore-standalone-postgres.sh - Safe restore script with verification
  and pre-restore safety backup

Features:
- Hybrid backup strategy (PostgreSQL native + Basil API)
- Automated retention policy (30/90/365 days)
- Integrity verification
- Safety backups before restore
- Complete troubleshooting guide
- Rollback procedures

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 15:29:35 -07:00
31 changed files with 3893 additions and 68 deletions


@@ -87,8 +87,17 @@ jobs:
       - name: Generate Prisma Client
         run: cd packages/api && npm run prisma:generate
-      - name: Run database migrations
-        run: cd packages/api && npm run prisma:migrate
+      - name: Apply database migrations
+        run: cd packages/api && npm run prisma:deploy
+        env:
+          DATABASE_URL: postgresql://basil:basil@postgres:5432/basil_test?schema=public
+      - name: Check for schema drift
+        run: |
+          cd packages/api && npx prisma migrate diff \
+            --from-url "$DATABASE_URL" \
+            --to-schema-datamodel ./prisma/schema.prisma \
+            --exit-code && echo "✓ schema.prisma matches applied migrations"
         env:
           DATABASE_URL: postgresql://basil:basil@postgres:5432/basil_test?schema=public
@@ -276,8 +285,8 @@ jobs:
       - name: Build application
         run: npm run build
-      - name: Run database migrations
-        run: cd packages/api && npm run prisma:migrate
+      - name: Apply database migrations
+        run: cd packages/api && npm run prisma:deploy
         env:
           DATABASE_URL: postgresql://basil:basil@postgres:5432/basil?schema=public

.gitignore (2 lines changed)

@@ -62,5 +62,5 @@ backups/
 docker-compose.override.yml
 # Prisma
-packages/api/prisma/migrations/
+# Migrations are tracked. Applied automatically by deploy.sh (via `prisma migrate deploy`).
 # Pipeline Test


@@ -279,13 +279,13 @@ Basil includes a complete CI/CD pipeline with Gitea Actions for automated testing
 Basil uses calendar versioning with the format: `YYYY.MM.PPP`
 - `YYYY` - Four-digit year (e.g., 2026)
 - `MM` - Two-digit month with zero-padding (e.g., 01 for January, 12 for December)
-- `PPP` - Three-digit patch number with zero-padding that increases with each deployment in a month
+- `PPP` - Three-digit patch number with zero-padding that increases with every deployment. **Does not reset at month boundaries** — it is a monotonically increasing counter across the lifetime of the project.
 ### Examples
-- `2026.01.001` - First deployment in January 2026
-- `2026.01.002` - Second deployment in January 2026
-- `2026.02.001` - First deployment in February 2026 (patch resets to 001)
-- `2026.02.003` - Third deployment in February 2026
+- `2026.01.006` - Sixth deployment (in January 2026)
+- `2026.04.007` - Seventh deployment (in April 2026 — patch continues from previous month, does not reset)
+- `2026.04.008` - Eighth deployment (still in April 2026)
+- `2026.05.009` - Ninth deployment (in May 2026 — patch continues, does not reset)
 ### Version Update Process
 When deploying to production:
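A helper computing the next tag under this monotonic-PPP rule could look like this (purely illustrative; no such script exists in the repo, and the function name is made up):

```bash
# Illustrative only: compute the next YYYY.MM.PPP tag from the previous tag
# and the target year-month, with PPP never resetting at month boundaries.
next_version() {
  local prev="$1" yearmonth="$2"    # e.g. next_version 2026.04.008 2026.05
  local ppp="${prev##*.}"
  printf '%s.%03d\n' "$yearmonth" "$((10#$ppp + 1))"
}
```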


@@ -0,0 +1,465 @@
# Database Migration Guide: Container → Standalone PostgreSQL
This guide covers migrating Basil from containerized PostgreSQL to a standalone PostgreSQL server and setting up production-grade backups.
## Table of Contents
1. [Why Migrate?](#why-migrate)
2. [Pre-Migration Checklist](#pre-migration-checklist)
3. [Migration Steps](#migration-steps)
4. [Backup Strategy](#backup-strategy)
5. [Testing & Verification](#testing--verification)
6. [Rollback Plan](#rollback-plan)
---
## Why Migrate?
### Standalone PostgreSQL Advantages
- ✅ Dedicated database resources (no competition with app containers)
- ✅ Standard PostgreSQL backup/restore tools
- ✅ Point-in-time recovery (PITR) capabilities
- ✅ Better monitoring and administration
- ✅ Industry best practice for production
- ✅ Easier to scale independently
### When to Keep Containerized
- Local development environments
- Staging/test environments
- Simple single-server deployments
- Environments where simplicity > resilience
---
## Pre-Migration Checklist
- [ ] Standalone PostgreSQL server is installed and accessible
- [ ] PostgreSQL version is 13 or higher (check: `psql --version`)
- [ ] Network connectivity from app server to DB server
- [ ] Firewall rules allow PostgreSQL port (default: 5432)
- [ ] You have PostgreSQL superuser credentials
- [ ] Current Basil data is backed up
- [ ] Maintenance window scheduled (expect ~15-30 min downtime)
---
## Migration Steps
### Step 1: Create Backup of Current Data
**Option A: Use Basil's Built-in API (Recommended)**
```bash
# Create full backup (database + uploaded images)
curl -X POST http://localhost:3001/api/backup
# List available backups
curl http://localhost:3001/api/backup
# Download the latest backup
curl -O http://localhost:3001/api/backup/basil-backup-YYYY-MM-DDTHH-MM-SS.zip
```
**Option B: Direct PostgreSQL Dump**
```bash
# From container
docker exec basil-postgres pg_dump -U basil basil > /tmp/basil_migration.sql
# Verify backup
head -20 /tmp/basil_migration.sql
```
### Step 2: Prepare Standalone PostgreSQL Server
SSH into your PostgreSQL server:
```bash
ssh your-postgres-server
# Switch to postgres user
sudo -u postgres psql
```
Create database and user:
```sql
-- Create database
CREATE DATABASE basil;
-- Create user with password
CREATE USER basil WITH ENCRYPTED PASSWORD 'your-secure-password-here';
-- Grant privileges
GRANT ALL PRIVILEGES ON DATABASE basil TO basil;
-- Connect to basil database
\c basil
-- Grant schema permissions
GRANT ALL ON SCHEMA public TO basil;
-- Exit
\q
```
**Security Best Practices:**
```bash
# Generate strong password
openssl rand -base64 32
# Store in password manager or .pgpass file
echo "your-postgres-server:5432:basil:basil:your-password" >> ~/.pgpass
chmod 600 ~/.pgpass
```
### Step 3: Update Firewall Rules
On PostgreSQL server:
```bash
# Allow app server to connect
sudo ufw allow from <app-server-ip> to any port 5432
# Or edit pg_hba.conf
sudo nano /etc/postgresql/15/main/pg_hba.conf
```
Add line:
```
host basil basil <app-server-ip>/32 scram-sha-256
```
Reload PostgreSQL:
```bash
sudo systemctl reload postgresql
```
### Step 4: Test Connectivity
From app server:
```bash
# Test connection
psql -h your-postgres-server -U basil -d basil -c "SELECT version();"
# Should show PostgreSQL version
```
### Step 5: Update Basil Configuration
**On app server**, update environment configuration:
```bash
# Edit .env file
cd /srv/docker-compose/basil
nano .env
```
Add or update:
```bash
DATABASE_URL=postgresql://basil:your-password@your-postgres-server-ip:5432/basil?schema=public
```
**Update docker-compose.yml:**
```yaml
services:
  api:
    environment:
      - DATABASE_URL=${DATABASE_URL}
      # ... other variables

# Comment out postgres service
# postgres:
#   image: postgres:15
#   ...
```
### Step 6: Run Prisma Migrations
This creates the schema on your new database:
```bash
cd /home/pkartch/development/basil/packages/api
# Generate Prisma client
npm run prisma:generate
# Deploy migrations (non-interactive; `npm run prisma:migrate` maps to
# `prisma migrate dev`, which is for local development only)
npx prisma migrate deploy
```
### Step 7: Restore Data
**Option A: Use Basil's Restore API**
```bash
# Copy backup to server (if needed)
scp basil-backup-*.zip app-server:/tmp/
# Restore via API
curl -X POST http://localhost:3001/api/backup/restore \
-F "backup=@/tmp/basil-backup-YYYY-MM-DDTHH-MM-SS.zip"
```
**Option B: Direct PostgreSQL Restore**
```bash
# Copy SQL dump to DB server
scp /tmp/basil_migration.sql your-postgres-server:/tmp/
# On PostgreSQL server
psql -h localhost -U basil basil < /tmp/basil_migration.sql
```
### Step 8: Restart Application
```bash
cd /srv/docker-compose/basil
./dev-rebuild.sh
# Or
docker-compose down
docker-compose up -d
```
### Step 9: Verify Migration
```bash
# Check API logs
docker-compose logs api | grep -i "database\|connected"
# Test API
curl http://localhost:3001/api/recipes
curl http://localhost:3001/api/cookbooks
# Check database directly
psql -h your-postgres-server -U basil basil -c "SELECT COUNT(*) FROM \"Recipe\";"
psql -h your-postgres-server -U basil basil -c "SELECT COUNT(*) FROM \"Cookbook\";"
```
---
## Backup Strategy
### Daily Automated Backups
**On PostgreSQL server:**
```bash
# Copy backup script to server
scp scripts/backup-standalone-postgres.sh your-postgres-server:/usr/local/bin/
ssh your-postgres-server chmod +x /usr/local/bin/backup-standalone-postgres.sh
# Set up cron job
ssh your-postgres-server
sudo crontab -e
```
Add:
```cron
# Daily backup at 2 AM
0 2 * * * /usr/local/bin/backup-standalone-postgres.sh >> /var/log/basil-backup.log 2>&1
```
### Weekly Application Backups
**On app server:**
```bash
sudo crontab -e
```
Add:
```cron
# Weekly full backup (DB + images) on Sundays at 3 AM
0 3 * * 0 curl -X POST http://localhost:3001/api/backup >> /var/log/basil-api-backup.log 2>&1
```
### Off-Site Backup Sync
**Set up rsync to NAS or remote server:**
```bash
# On PostgreSQL server
sudo crontab -e
```
Add:
```cron
# Sync backups to NAS at 4 AM
0 4 * * * rsync -av /var/backups/basil/ /mnt/nas/backups/basil/ >> /var/log/basil-sync.log 2>&1
# Optional: Upload to S3
0 5 * * * aws s3 sync /var/backups/basil/ s3://your-bucket/basil-backups/ --storage-class GLACIER >> /var/log/basil-s3.log 2>&1
```
### Backup Retention
The backup script automatically maintains:
- **Daily backups:** 30 days
- **Weekly backups:** 90 days (12 weeks)
- **Monthly backups:** 365 days (12 months)
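The pruning behind those windows can be sketched as follows (a sketch only; the tier directory layout and file naming are assumptions based on the paths used elsewhere in this guide):

```bash
# Sketch of tiered retention: delete backups older than each tier's window.
# Directory layout /var/backups/basil/{daily,weekly,monthly} is assumed.
prune_tier() {
  local dir="$1" keep_days="$2"
  find "$dir" -name 'basil-*.sql.gz' -mtime +"$keep_days" -delete
}
# prune_tier /var/backups/basil/daily   30
# prune_tier /var/backups/basil/weekly  90
# prune_tier /var/backups/basil/monthly 365
```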
---
## Testing & Verification
### Test Backup Process
```bash
# Run backup manually
/usr/local/bin/backup-standalone-postgres.sh
# Verify backup exists
ls -lh /var/backups/basil/daily/
# Test backup integrity
gzip -t /var/backups/basil/daily/basil-*.sql.gz
```
### Test Restore Process
**On a test server (NOT production!):**
```bash
# Copy restore script
scp scripts/restore-standalone-postgres.sh test-server:/tmp/
# Run restore
/tmp/restore-standalone-postgres.sh /var/backups/basil/daily/basil-YYYYMMDD.sql.gz
```
### Monitoring
**Set up monitoring checks:**
```bash
# Check backup file age (should be < 24 hours)
find /var/backups/basil/daily/ -name "basil-*.sql.gz" -mtime -1 | grep -q . || echo "ALERT: No recent backup!"
# Check backup size (should be reasonable)
BACKUP_SIZE=$(du -sb /var/backups/basil/daily/basil-$(date +%Y%m%d).sql.gz 2>/dev/null | cut -f1)
if [ "${BACKUP_SIZE:-0}" -lt 1000000 ]; then
  echo "ALERT: Backup missing or suspiciously small!"
fi
```
---
## Rollback Plan
If migration fails, you can quickly rollback:
### Quick Rollback to Containerized PostgreSQL
```bash
cd /srv/docker-compose/basil
# 1. Restore old docker-compose.yml (uncomment postgres service)
nano docker-compose.yml
# 2. Remove DATABASE_URL override
nano .env # Comment out or remove DATABASE_URL
# 3. Restart with containerized database
docker-compose down
docker-compose up -d
# 4. Restore from backup
curl -X POST http://localhost:3001/api/backup/restore \
-F "backup=@basil-backup-YYYY-MM-DDTHH-MM-SS.zip"
```
### Data Recovery
If you need to recover data from standalone server after rollback:
```bash
# Dump from standalone server
ssh your-postgres-server
pg_dump -U basil basil > /tmp/basil_recovery.sql
# Import to containerized database
docker exec -i basil-postgres psql -U basil basil < /tmp/basil_recovery.sql
```
---
## Troubleshooting
### Connection Issues
**Error: "Connection refused"**
```bash
# Check PostgreSQL is listening on network
sudo netstat -tlnp | grep 5432
# Verify postgresql.conf
grep "listen_addresses" /etc/postgresql/*/main/postgresql.conf
# Should be: listen_addresses = '*'
# Restart PostgreSQL
sudo systemctl restart postgresql
```
**Error: "Authentication failed"**
```bash
# Verify user exists
psql -U postgres -c "\du basil"
# Reset password
psql -U postgres -c "ALTER USER basil WITH PASSWORD 'new-password';"
# Check pg_hba.conf authentication method
sudo cat /etc/postgresql/*/main/pg_hba.conf | grep basil
```
### Migration Issues
**Error: "Relation already exists"**
```bash
# Drop and recreate database
psql -U postgres -c "DROP DATABASE basil;"
psql -U postgres -c "CREATE DATABASE basil;"
psql -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE basil TO basil;"
# Re-run migrations
cd packages/api
npx prisma migrate deploy
```
**Error: "Foreign key constraint violation"**
```bash
# pg_restore requires a custom-format archive (created with `pg_dump -Fc`);
# use --no-owner --no-privileges to skip ownership/ACL statements
pg_restore --no-owner --no-privileges -U basil -d basil backup.dump
# (for plain-text .sql dumps, restore through psql instead)
```
---
## Additional Resources
- [PostgreSQL Backup Documentation](https://www.postgresql.org/docs/current/backup.html)
- [Prisma Migration Guide](https://www.prisma.io/docs/concepts/components/prisma-migrate)
- [Docker PostgreSQL Volume Management](https://docs.docker.com/storage/volumes/)
---
## Summary Checklist
Post-migration verification:
- [ ] Application connects to standalone PostgreSQL
- [ ] All recipes visible in UI
- [ ] All cookbooks visible in UI
- [ ] Recipe import works
- [ ] Image uploads work
- [ ] Daily backups running
- [ ] Weekly API backups running
- [ ] Backup integrity verified
- [ ] Restore process tested (on test server)
- [ ] Monitoring alerts configured
- [ ] Old containerized database backed up (for safety)
- [ ] Documentation updated with new DATABASE_URL
**Congratulations! You've successfully migrated to standalone PostgreSQL! 🎉**


@@ -13,6 +13,7 @@
     "test:coverage": "vitest run --coverage",
     "prisma:generate": "prisma generate",
     "prisma:migrate": "prisma migrate dev",
+    "prisma:deploy": "prisma migrate deploy",
     "prisma:studio": "prisma studio",
     "create-admin": "tsx src/scripts/create-admin.ts",
     "lint": "eslint src --ext .ts"


@@ -0,0 +1,455 @@
-- CreateEnum
CREATE TYPE "TokenType" AS ENUM ('EMAIL_VERIFICATION', 'PASSWORD_RESET');
-- CreateEnum
CREATE TYPE "Role" AS ENUM ('USER', 'ADMIN');
-- CreateEnum
CREATE TYPE "Visibility" AS ENUM ('PRIVATE', 'SHARED', 'PUBLIC');
-- CreateEnum
CREATE TYPE "MealType" AS ENUM ('BREAKFAST', 'LUNCH', 'DINNER', 'SNACK', 'DESSERT', 'OTHER');
-- CreateTable
CREATE TABLE "User" (
"id" TEXT NOT NULL,
"email" TEXT NOT NULL,
"username" TEXT,
"passwordHash" TEXT,
"name" TEXT,
"avatar" TEXT,
"provider" TEXT NOT NULL DEFAULT 'local',
"providerId" TEXT,
"role" "Role" NOT NULL DEFAULT 'USER',
"emailVerified" BOOLEAN NOT NULL DEFAULT false,
"emailVerifiedAt" TIMESTAMP(3),
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL,
CONSTRAINT "User_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "VerificationToken" (
"id" TEXT NOT NULL,
"userId" TEXT NOT NULL,
"token" TEXT NOT NULL,
"type" "TokenType" NOT NULL,
"expiresAt" TIMESTAMP(3) NOT NULL,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "VerificationToken_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "RefreshToken" (
"id" TEXT NOT NULL,
"userId" TEXT NOT NULL,
"token" TEXT NOT NULL,
"expiresAt" TIMESTAMP(3) NOT NULL,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "RefreshToken_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "Recipe" (
"id" TEXT NOT NULL,
"title" TEXT NOT NULL,
"description" TEXT,
"prepTime" INTEGER,
"cookTime" INTEGER,
"totalTime" INTEGER,
"servings" INTEGER,
"imageUrl" TEXT,
"sourceUrl" TEXT,
"author" TEXT,
"cuisine" TEXT,
"categories" TEXT[] DEFAULT ARRAY[]::TEXT[],
"rating" DOUBLE PRECISION,
"userId" TEXT,
"visibility" "Visibility" NOT NULL DEFAULT 'PRIVATE',
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL,
CONSTRAINT "Recipe_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "RecipeSection" (
"id" TEXT NOT NULL,
"recipeId" TEXT NOT NULL,
"name" TEXT NOT NULL,
"order" INTEGER NOT NULL,
"timing" TEXT,
CONSTRAINT "RecipeSection_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "Ingredient" (
"id" TEXT NOT NULL,
"recipeId" TEXT,
"sectionId" TEXT,
"name" TEXT NOT NULL,
"amount" TEXT,
"unit" TEXT,
"notes" TEXT,
"order" INTEGER NOT NULL,
CONSTRAINT "Ingredient_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "Instruction" (
"id" TEXT NOT NULL,
"recipeId" TEXT,
"sectionId" TEXT,
"step" INTEGER NOT NULL,
"text" TEXT NOT NULL,
"imageUrl" TEXT,
"timing" TEXT,
CONSTRAINT "Instruction_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "IngredientInstructionMapping" (
"id" TEXT NOT NULL,
"ingredientId" TEXT NOT NULL,
"instructionId" TEXT NOT NULL,
"order" INTEGER NOT NULL,
CONSTRAINT "IngredientInstructionMapping_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "RecipeImage" (
"id" TEXT NOT NULL,
"recipeId" TEXT NOT NULL,
"url" TEXT NOT NULL,
"order" INTEGER NOT NULL,
CONSTRAINT "RecipeImage_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "Tag" (
"id" TEXT NOT NULL,
"name" TEXT NOT NULL,
CONSTRAINT "Tag_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "RecipeTag" (
"recipeId" TEXT NOT NULL,
"tagId" TEXT NOT NULL,
CONSTRAINT "RecipeTag_pkey" PRIMARY KEY ("recipeId","tagId")
);
-- CreateTable
CREATE TABLE "CookbookTag" (
"cookbookId" TEXT NOT NULL,
"tagId" TEXT NOT NULL,
CONSTRAINT "CookbookTag_pkey" PRIMARY KEY ("cookbookId","tagId")
);
-- CreateTable
CREATE TABLE "RecipeShare" (
"id" TEXT NOT NULL,
"recipeId" TEXT NOT NULL,
"userId" TEXT NOT NULL,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "RecipeShare_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "Cookbook" (
"id" TEXT NOT NULL,
"name" TEXT NOT NULL,
"description" TEXT,
"coverImageUrl" TEXT,
"userId" TEXT,
"autoFilterCategories" TEXT[] DEFAULT ARRAY[]::TEXT[],
"autoFilterTags" TEXT[] DEFAULT ARRAY[]::TEXT[],
"autoFilterCookbookTags" TEXT[] DEFAULT ARRAY[]::TEXT[],
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL,
CONSTRAINT "Cookbook_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "CookbookRecipe" (
"id" TEXT NOT NULL,
"cookbookId" TEXT NOT NULL,
"recipeId" TEXT NOT NULL,
"addedAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "CookbookRecipe_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "CookbookInclusion" (
"id" TEXT NOT NULL,
"parentCookbookId" TEXT NOT NULL,
"childCookbookId" TEXT NOT NULL,
"addedAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "CookbookInclusion_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "MealPlan" (
"id" TEXT NOT NULL,
"userId" TEXT,
"date" TIMESTAMP(3) NOT NULL,
"notes" TEXT,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL,
CONSTRAINT "MealPlan_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "Meal" (
"id" TEXT NOT NULL,
"mealPlanId" TEXT NOT NULL,
"mealType" "MealType" NOT NULL,
"order" INTEGER NOT NULL,
"servings" INTEGER,
"notes" TEXT,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL,
CONSTRAINT "Meal_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "MealRecipe" (
"mealId" TEXT NOT NULL,
"recipeId" TEXT NOT NULL,
CONSTRAINT "MealRecipe_pkey" PRIMARY KEY ("mealId")
);
-- CreateIndex
CREATE UNIQUE INDEX "User_email_key" ON "User"("email");
-- CreateIndex
CREATE UNIQUE INDEX "User_username_key" ON "User"("username");
-- CreateIndex
CREATE INDEX "User_email_idx" ON "User"("email");
-- CreateIndex
CREATE INDEX "User_provider_providerId_idx" ON "User"("provider", "providerId");
-- CreateIndex
CREATE UNIQUE INDEX "VerificationToken_token_key" ON "VerificationToken"("token");
-- CreateIndex
CREATE INDEX "VerificationToken_userId_idx" ON "VerificationToken"("userId");
-- CreateIndex
CREATE INDEX "VerificationToken_token_idx" ON "VerificationToken"("token");
-- CreateIndex
CREATE UNIQUE INDEX "RefreshToken_token_key" ON "RefreshToken"("token");
-- CreateIndex
CREATE INDEX "RefreshToken_userId_idx" ON "RefreshToken"("userId");
-- CreateIndex
CREATE INDEX "RefreshToken_token_idx" ON "RefreshToken"("token");
-- CreateIndex
CREATE INDEX "Recipe_title_idx" ON "Recipe"("title");
-- CreateIndex
CREATE INDEX "Recipe_cuisine_idx" ON "Recipe"("cuisine");
-- CreateIndex
CREATE INDEX "Recipe_userId_idx" ON "Recipe"("userId");
-- CreateIndex
CREATE INDEX "Recipe_visibility_idx" ON "Recipe"("visibility");
-- CreateIndex
CREATE INDEX "RecipeSection_recipeId_idx" ON "RecipeSection"("recipeId");
-- CreateIndex
CREATE INDEX "Ingredient_recipeId_idx" ON "Ingredient"("recipeId");
-- CreateIndex
CREATE INDEX "Ingredient_sectionId_idx" ON "Ingredient"("sectionId");
-- CreateIndex
CREATE INDEX "Instruction_recipeId_idx" ON "Instruction"("recipeId");
-- CreateIndex
CREATE INDEX "Instruction_sectionId_idx" ON "Instruction"("sectionId");
-- CreateIndex
CREATE INDEX "IngredientInstructionMapping_instructionId_idx" ON "IngredientInstructionMapping"("instructionId");
-- CreateIndex
CREATE INDEX "IngredientInstructionMapping_ingredientId_idx" ON "IngredientInstructionMapping"("ingredientId");
-- CreateIndex
CREATE UNIQUE INDEX "IngredientInstructionMapping_ingredientId_instructionId_key" ON "IngredientInstructionMapping"("ingredientId", "instructionId");
-- CreateIndex
CREATE INDEX "RecipeImage_recipeId_idx" ON "RecipeImage"("recipeId");
-- CreateIndex
CREATE UNIQUE INDEX "Tag_name_key" ON "Tag"("name");
-- CreateIndex
CREATE INDEX "RecipeTag_recipeId_idx" ON "RecipeTag"("recipeId");
-- CreateIndex
CREATE INDEX "RecipeTag_tagId_idx" ON "RecipeTag"("tagId");
-- CreateIndex
CREATE INDEX "CookbookTag_cookbookId_idx" ON "CookbookTag"("cookbookId");
-- CreateIndex
CREATE INDEX "CookbookTag_tagId_idx" ON "CookbookTag"("tagId");
-- CreateIndex
CREATE INDEX "RecipeShare_recipeId_idx" ON "RecipeShare"("recipeId");
-- CreateIndex
CREATE INDEX "RecipeShare_userId_idx" ON "RecipeShare"("userId");
-- CreateIndex
CREATE UNIQUE INDEX "RecipeShare_recipeId_userId_key" ON "RecipeShare"("recipeId", "userId");
-- CreateIndex
CREATE INDEX "Cookbook_name_idx" ON "Cookbook"("name");
-- CreateIndex
CREATE INDEX "Cookbook_userId_idx" ON "Cookbook"("userId");
-- CreateIndex
CREATE INDEX "CookbookRecipe_cookbookId_idx" ON "CookbookRecipe"("cookbookId");
-- CreateIndex
CREATE INDEX "CookbookRecipe_recipeId_idx" ON "CookbookRecipe"("recipeId");
-- CreateIndex
CREATE UNIQUE INDEX "CookbookRecipe_cookbookId_recipeId_key" ON "CookbookRecipe"("cookbookId", "recipeId");
-- CreateIndex
CREATE INDEX "CookbookInclusion_parentCookbookId_idx" ON "CookbookInclusion"("parentCookbookId");
-- CreateIndex
CREATE INDEX "CookbookInclusion_childCookbookId_idx" ON "CookbookInclusion"("childCookbookId");
-- CreateIndex
CREATE UNIQUE INDEX "CookbookInclusion_parentCookbookId_childCookbookId_key" ON "CookbookInclusion"("parentCookbookId", "childCookbookId");
-- CreateIndex
CREATE INDEX "MealPlan_userId_idx" ON "MealPlan"("userId");
-- CreateIndex
CREATE INDEX "MealPlan_date_idx" ON "MealPlan"("date");
-- CreateIndex
CREATE INDEX "MealPlan_userId_date_idx" ON "MealPlan"("userId", "date");
-- CreateIndex
CREATE UNIQUE INDEX "MealPlan_userId_date_key" ON "MealPlan"("userId", "date");
-- CreateIndex
CREATE INDEX "Meal_mealPlanId_idx" ON "Meal"("mealPlanId");
-- CreateIndex
CREATE INDEX "Meal_mealType_idx" ON "Meal"("mealType");
-- CreateIndex
CREATE INDEX "MealRecipe_recipeId_idx" ON "MealRecipe"("recipeId");
-- AddForeignKey
ALTER TABLE "VerificationToken" ADD CONSTRAINT "VerificationToken_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "RefreshToken" ADD CONSTRAINT "RefreshToken_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Recipe" ADD CONSTRAINT "Recipe_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE SET NULL ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "RecipeSection" ADD CONSTRAINT "RecipeSection_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Ingredient" ADD CONSTRAINT "Ingredient_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Ingredient" ADD CONSTRAINT "Ingredient_sectionId_fkey" FOREIGN KEY ("sectionId") REFERENCES "RecipeSection"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Instruction" ADD CONSTRAINT "Instruction_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Instruction" ADD CONSTRAINT "Instruction_sectionId_fkey" FOREIGN KEY ("sectionId") REFERENCES "RecipeSection"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "IngredientInstructionMapping" ADD CONSTRAINT "IngredientInstructionMapping_ingredientId_fkey" FOREIGN KEY ("ingredientId") REFERENCES "Ingredient"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "IngredientInstructionMapping" ADD CONSTRAINT "IngredientInstructionMapping_instructionId_fkey" FOREIGN KEY ("instructionId") REFERENCES "Instruction"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "RecipeImage" ADD CONSTRAINT "RecipeImage_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "RecipeTag" ADD CONSTRAINT "RecipeTag_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "RecipeTag" ADD CONSTRAINT "RecipeTag_tagId_fkey" FOREIGN KEY ("tagId") REFERENCES "Tag"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "CookbookTag" ADD CONSTRAINT "CookbookTag_cookbookId_fkey" FOREIGN KEY ("cookbookId") REFERENCES "Cookbook"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "CookbookTag" ADD CONSTRAINT "CookbookTag_tagId_fkey" FOREIGN KEY ("tagId") REFERENCES "Tag"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "RecipeShare" ADD CONSTRAINT "RecipeShare_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "RecipeShare" ADD CONSTRAINT "RecipeShare_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Cookbook" ADD CONSTRAINT "Cookbook_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE SET NULL ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "CookbookRecipe" ADD CONSTRAINT "CookbookRecipe_cookbookId_fkey" FOREIGN KEY ("cookbookId") REFERENCES "Cookbook"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "CookbookRecipe" ADD CONSTRAINT "CookbookRecipe_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "CookbookInclusion" ADD CONSTRAINT "CookbookInclusion_parentCookbookId_fkey" FOREIGN KEY ("parentCookbookId") REFERENCES "Cookbook"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "CookbookInclusion" ADD CONSTRAINT "CookbookInclusion_childCookbookId_fkey" FOREIGN KEY ("childCookbookId") REFERENCES "Cookbook"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "MealPlan" ADD CONSTRAINT "MealPlan_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE SET NULL ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Meal" ADD CONSTRAINT "Meal_mealPlanId_fkey" FOREIGN KEY ("mealPlanId") REFERENCES "MealPlan"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "MealRecipe" ADD CONSTRAINT "MealRecipe_mealId_fkey" FOREIGN KEY ("mealId") REFERENCES "Meal"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "MealRecipe" ADD CONSTRAINT "MealRecipe_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;

View File

@@ -0,0 +1,59 @@
-- CreateEnum
CREATE TYPE "FamilyRole" AS ENUM ('OWNER', 'MEMBER');
-- AlterTable
ALTER TABLE "Cookbook" ADD COLUMN "familyId" TEXT;
-- AlterTable
ALTER TABLE "Recipe" ADD COLUMN "familyId" TEXT;
-- CreateTable
CREATE TABLE "Family" (
"id" TEXT NOT NULL,
"name" TEXT NOT NULL,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL,
CONSTRAINT "Family_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "FamilyMember" (
"id" TEXT NOT NULL,
"userId" TEXT NOT NULL,
"familyId" TEXT NOT NULL,
"role" "FamilyRole" NOT NULL DEFAULT 'MEMBER',
"joinedAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "FamilyMember_pkey" PRIMARY KEY ("id")
);
-- CreateIndex
CREATE INDEX "Family_name_idx" ON "Family"("name");
-- CreateIndex
CREATE INDEX "FamilyMember_userId_idx" ON "FamilyMember"("userId");
-- CreateIndex
CREATE INDEX "FamilyMember_familyId_idx" ON "FamilyMember"("familyId");
-- CreateIndex
CREATE UNIQUE INDEX "FamilyMember_userId_familyId_key" ON "FamilyMember"("userId", "familyId");
-- CreateIndex
CREATE INDEX "Cookbook_familyId_idx" ON "Cookbook"("familyId");
-- CreateIndex
CREATE INDEX "Recipe_familyId_idx" ON "Recipe"("familyId");
-- AddForeignKey
ALTER TABLE "FamilyMember" ADD CONSTRAINT "FamilyMember_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "FamilyMember" ADD CONSTRAINT "FamilyMember_familyId_fkey" FOREIGN KEY ("familyId") REFERENCES "Family"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Recipe" ADD CONSTRAINT "Recipe_familyId_fkey" FOREIGN KEY ("familyId") REFERENCES "Family"("id") ON DELETE SET NULL ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Cookbook" ADD CONSTRAINT "Cookbook_familyId_fkey" FOREIGN KEY ("familyId") REFERENCES "Family"("id") ON DELETE SET NULL ON UPDATE CASCADE;

View File

@@ -0,0 +1,3 @@
# Please do not edit this file manually
# It should be added in your version-control system (e.g., Git)
provider = "postgresql"

View File

@@ -29,11 +29,45 @@ model User {
refreshTokens RefreshToken[]
verificationTokens VerificationToken[]
mealPlans MealPlan[]
familyMemberships FamilyMember[]
@@index([email])
@@index([provider, providerId])
}
enum FamilyRole {
OWNER
MEMBER
}
model Family {
id String @id @default(cuid())
name String
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
members FamilyMember[]
recipes Recipe[]
cookbooks Cookbook[]
@@index([name])
}
model FamilyMember {
id String @id @default(cuid())
userId String
familyId String
role FamilyRole @default(MEMBER)
joinedAt DateTime @default(now())
user User @relation(fields: [userId], references: [id], onDelete: Cascade)
family Family @relation(fields: [familyId], references: [id], onDelete: Cascade)
@@unique([userId, familyId])
@@index([userId])
@@index([familyId])
}
model VerificationToken {
id String @id @default(cuid())
userId String
@@ -91,12 +125,14 @@ model Recipe {
cuisine String?
categories String[] @default([]) // Changed from single category to array
rating Float?
userId String? // Recipe owner (creator)
familyId String? // Owning family (tenant scope)
visibility Visibility @default(PRIVATE)
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
user User? @relation(fields: [userId], references: [id], onDelete: SetNull)
family Family? @relation(fields: [familyId], references: [id], onDelete: SetNull)
sections RecipeSection[]
ingredients Ingredient[]
instructions Instruction[]
@@ -109,6 +145,7 @@ model Recipe {
@@index([title])
@@index([cuisine])
@@index([userId])
@@index([familyId])
@@index([visibility])
}
@@ -236,7 +273,8 @@ model Cookbook {
name String
description String?
coverImageUrl String?
userId String? // Cookbook owner (creator)
familyId String? // Owning family (tenant scope)
autoFilterCategories String[] @default([]) // Auto-add recipes matching these categories
autoFilterTags String[] @default([]) // Auto-add recipes matching these tags
autoFilterCookbookTags String[] @default([]) // Auto-add cookbooks matching these tags
@@ -244,6 +282,7 @@ model Cookbook {
updatedAt DateTime @updatedAt
user User? @relation(fields: [userId], references: [id], onDelete: SetNull)
family Family? @relation(fields: [familyId], references: [id], onDelete: SetNull)
recipes CookbookRecipe[]
tags CookbookTag[]
includedCookbooks CookbookInclusion[] @relation("ParentCookbook")
@@ -251,6 +290,7 @@ model Cookbook {
@@index([name])
@@index([userId])
@@index([familyId])
}
model CookbookRecipe {

View File

@@ -10,6 +10,7 @@ import tagsRoutes from './routes/tags.routes';
import backupRoutes from './routes/backup.routes';
import authRoutes from './routes/auth.routes';
import mealPlansRoutes from './routes/meal-plans.routes';
import familiesRoutes from './routes/families.routes';
import './config/passport'; // Initialize passport strategies
import { testEmailConfig } from './services/email.service';
import { APP_VERSION } from './version';
@@ -40,6 +41,7 @@ app.use('/api/cookbooks', cookbooksRoutes);
app.use('/api/tags', tagsRoutes);
app.use('/api/backup', backupRoutes);
app.use('/api/meal-plans', mealPlansRoutes);
app.use('/api/families', familiesRoutes);
// Health check
app.get('/health', (req, res) => {

View File

@@ -2,10 +2,13 @@ import express, { Request, Response } from 'express';
import path from 'path';
import fs from 'fs/promises';
import { createBackup, restoreBackup, listBackups, deleteBackup } from '../services/backup.service';
import { requireAuth, requireAdmin } from '../middleware/auth.middleware';
import multer from 'multer';
const router = express.Router();
router.use(requireAuth, requireAdmin);
// Configure multer for backup file uploads
const upload = multer({
dest: '/tmp/basil-restore/',

View File

@@ -2,8 +2,16 @@ import { Router, Request, Response } from 'express';
import multer from 'multer';
import prisma from '../config/database';
import { StorageService } from '../services/storage.service';
import {
getAccessContext,
buildCookbookAccessFilter,
canMutateCookbook,
getPrimaryFamilyId,
} from '../services/access.service';
import { requireAuth } from '../middleware/auth.middleware';
const router = Router();
router.use(requireAuth);
const upload = multer({
storage: multer.memoryStorage(),
limits: {
@@ -57,9 +65,11 @@ async function applyFiltersToExistingRecipes(cookbookId: string) {
});
}
// Find matching recipes within the same family (tenant scope).
if (!cookbook.familyId) return;
const matchingRecipes = await prisma.recipe.findMany({
where: {
familyId: cookbook.familyId,
OR: whereConditions
},
select: { id: true }
@@ -107,11 +117,13 @@ async function applyFiltersToExistingCookbooks(cookbookId: string) {
return;
}
// Find matching cookbooks within the same family (tenant scope).
if (!cookbook.familyId) return;
const matchingCookbooks = await prisma.cookbook.findMany({
where: {
AND: [
{ id: { not: cookbookId } },
{ familyId: cookbook.familyId },
{
tags: {
some: {
@@ -166,11 +178,14 @@ async function autoAddToParentCookbooks(cookbookId: string) {
const cookbookTags = cookbook.tags.map((ct: any) => ct.tag.name);
if (cookbookTags.length === 0) return;
// Find parent cookbooks with filters matching this cookbook's tags,
// scoped to the same family.
if (!cookbook.familyId) return;
const parentCookbooks = await prisma.cookbook.findMany({
where: {
AND: [
{ id: { not: cookbookId } },
{ familyId: cookbook.familyId },
{ autoFilterCookbookTags: { hasSome: cookbookTags } }
]
}
@@ -203,6 +218,8 @@ async function autoAddToParentCookbooks(cookbookId: string) {
router.get('/', async (req: Request, res: Response) => {
try {
const { includeChildren = 'false' } = req.query;
const ctx = await getAccessContext(req.user!);
const accessFilter = buildCookbookAccessFilter(ctx);
// Get child cookbook IDs to exclude from main listing (unless includeChildren is true)
const childCookbookIds = includeChildren === 'true' ? [] : (
@@ -213,8 +230,11 @@ router.get('/', async (req: Request, res: Response) => {
).map((ci: any) => ci.childCookbookId);
const cookbooks = await prisma.cookbook.findMany({
where: {
AND: [
accessFilter,
includeChildren === 'true' ? {} : { id: { notIn: childCookbookIds } },
],
},
include: {
_count: {
@@ -256,9 +276,10 @@ router.get('/', async (req: Request, res: Response) => {
router.get('/:id', async (req: Request, res: Response) => {
try {
const { id } = req.params;
const ctx = await getAccessContext(req.user!);
const cookbook = await prisma.cookbook.findFirst({
where: { AND: [{ id }, buildCookbookAccessFilter(ctx)] },
include: {
recipes: {
include: {
@@ -342,11 +363,15 @@ router.post('/', async (req: Request, res: Response) => {
return res.status(400).json({ error: 'Name is required' });
}
const familyId = await getPrimaryFamilyId(req.user!.id);
const cookbook = await prisma.cookbook.create({
data: {
name,
description,
coverImageUrl,
userId: req.user!.id,
familyId,
autoFilterCategories: autoFilterCategories || [],
autoFilterTags: autoFilterTags || [],
autoFilterCookbookTags: autoFilterCookbookTags || [],
@@ -388,6 +413,16 @@ router.put('/:id', async (req: Request, res: Response) => {
const { id } = req.params;
const { name, description, coverImageUrl, autoFilterCategories, autoFilterTags, autoFilterCookbookTags, tags } = req.body;
const ctx = await getAccessContext(req.user!);
const existing = await prisma.cookbook.findUnique({
where: { id },
select: { userId: true, familyId: true },
});
if (!existing) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, existing)) {
return res.status(403).json({ error: 'Forbidden' });
}
const updateData: any = {};
if (name !== undefined) updateData.name = name;
if (description !== undefined) updateData.description = description;
@@ -460,6 +495,15 @@ router.put('/:id', async (req: Request, res: Response) => {
router.delete('/:id', async (req: Request, res: Response) => {
try {
const { id } = req.params;
const ctx = await getAccessContext(req.user!);
const cookbook = await prisma.cookbook.findUnique({
where: { id },
select: { userId: true, familyId: true },
});
if (!cookbook) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, cookbook)) {
return res.status(403).json({ error: 'Forbidden' });
}
await prisma.cookbook.delete({
where: { id }
@@ -476,6 +520,26 @@ router.delete('/:id', async (req: Request, res: Response) => {
router.post('/:id/recipes/:recipeId', async (req: Request, res: Response) => {
try {
const { id, recipeId } = req.params;
const ctx = await getAccessContext(req.user!);
const cookbook = await prisma.cookbook.findUnique({
where: { id },
select: { userId: true, familyId: true },
});
if (!cookbook) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, cookbook)) {
return res.status(403).json({ error: 'Forbidden' });
}
// Prevent pulling recipes from other tenants into this cookbook.
const recipe = await prisma.recipe.findUnique({
where: { id: recipeId },
select: { userId: true, familyId: true, visibility: true },
});
if (!recipe) return res.status(404).json({ error: 'Recipe not found' });
const sameFamily = !!recipe.familyId && recipe.familyId === cookbook.familyId;
const ownedByUser = recipe.userId === ctx.userId;
if (ctx.role !== 'ADMIN' && !sameFamily && !ownedByUser) {
return res.status(403).json({ error: 'Cannot add recipe from a different tenant' });
}
// Check if recipe is already in cookbook
const existing = await prisma.cookbookRecipe.findUnique({
@@ -509,6 +573,15 @@ router.post('/:id/recipes/:recipeId', async (req: Request, res: Response) => {
router.delete('/:id/recipes/:recipeId', async (req: Request, res: Response) => {
try {
const { id, recipeId } = req.params;
const ctx = await getAccessContext(req.user!);
const cookbook = await prisma.cookbook.findUnique({
where: { id },
select: { userId: true, familyId: true },
});
if (!cookbook) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, cookbook)) {
return res.status(403).json({ error: 'Forbidden' });
}
await prisma.cookbookRecipe.delete({
where: {
@@ -536,6 +609,26 @@ router.post('/:id/cookbooks/:childCookbookId', async (req: Request, res: Respons
return res.status(400).json({ error: 'Cannot add cookbook to itself' });
}
const ctx = await getAccessContext(req.user!);
const parent = await prisma.cookbook.findUnique({
where: { id },
select: { userId: true, familyId: true },
});
if (!parent) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, parent)) {
return res.status(403).json({ error: 'Forbidden' });
}
const child = await prisma.cookbook.findUnique({
where: { id: childCookbookId },
select: { userId: true, familyId: true },
});
if (!child) return res.status(404).json({ error: 'Cookbook not found' });
const sameFamily = !!child.familyId && child.familyId === parent.familyId;
const ownedByUser = child.userId === ctx.userId;
if (ctx.role !== 'ADMIN' && !sameFamily && !ownedByUser) {
return res.status(403).json({ error: 'Cannot nest a cookbook from a different tenant' });
}
// Check if cookbook is already included
const existing = await prisma.cookbookInclusion.findUnique({
where: {
@@ -568,6 +661,15 @@ router.post('/:id/cookbooks/:childCookbookId', async (req: Request, res: Respons
router.delete('/:id/cookbooks/:childCookbookId', async (req: Request, res: Response) => {
try {
const { id, childCookbookId } = req.params;
const ctx = await getAccessContext(req.user!);
const parent = await prisma.cookbook.findUnique({
where: { id },
select: { userId: true, familyId: true },
});
if (!parent) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, parent)) {
return res.status(403).json({ error: 'Forbidden' });
}
await prisma.cookbookInclusion.delete({
where: {
@@ -594,10 +696,14 @@ router.post('/:id/image', upload.single('image'), async (req: Request, res: Resp
return res.status(400).json({ error: 'No image provided' });
}
const ctx = await getAccessContext(req.user!);
const cookbook = await prisma.cookbook.findUnique({
where: { id }
});
if (!cookbook) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, cookbook)) {
return res.status(403).json({ error: 'Forbidden' });
}
if (cookbook?.coverImageUrl) {
await storageService.deleteFile(cookbook.coverImageUrl);
@@ -629,10 +735,14 @@ router.post('/:id/image-from-url', async (req: Request, res: Response) => {
return res.status(400).json({ error: 'URL is required' });
}
const ctx = await getAccessContext(req.user!);
const cookbook = await prisma.cookbook.findUnique({
where: { id }
});
if (!cookbook) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, cookbook)) {
return res.status(403).json({ error: 'Forbidden' });
}
if (cookbook?.coverImageUrl) {
await storageService.deleteFile(cookbook.coverImageUrl);
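The helpers this file imports from `access.service` (`getAccessContext`, `buildCookbookAccessFilter`, `canMutateCookbook`) are not part of this diff. A minimal sketch of what such helpers might look like, assuming a context of user id, role, and family memberships — the type names and shapes here are assumptions for illustration, not the actual `access.service` implementation:

```typescript
// Hypothetical shapes — the real access.service is not shown in this commit.
type Role = 'ADMIN' | 'USER';

interface AccessContext {
  userId: string;
  role: Role;
  familyIds: string[]; // families the user belongs to
}

// Prisma-style `where` fragment: a cookbook is visible if it belongs to one of
// the user's families or is owned by the user directly; admins see everything.
function buildCookbookAccessFilter(ctx: AccessContext): object {
  if (ctx.role === 'ADMIN') return {};
  return {
    OR: [
      { familyId: { in: ctx.familyIds } },
      { userId: ctx.userId },
    ],
  };
}

// Mutation is allowed for admins, the creating user, or members of the owning family.
function canMutateCookbook(
  ctx: AccessContext,
  cookbook: { userId: string | null; familyId: string | null }
): boolean {
  if (ctx.role === 'ADMIN') return true;
  if (cookbook.userId !== null && cookbook.userId === ctx.userId) return true;
  return cookbook.familyId !== null && ctx.familyIds.includes(cookbook.familyId);
}
```

Returning a plain filter object keeps the tenant check composable: the routes above AND it with their own conditions instead of scattering `familyId` comparisons through every query.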

View File

@@ -0,0 +1,237 @@
import { Router, Request, Response } from 'express';
import prisma from '../config/database';
import { requireAuth } from '../middleware/auth.middleware';
import { FamilyRole } from '@prisma/client';
const router = Router();
router.use(requireAuth);
async function getMembership(userId: string, familyId: string) {
return prisma.familyMember.findUnique({
where: { userId_familyId: { userId, familyId } },
});
}
// List the current user's families.
router.get('/', async (req: Request, res: Response) => {
try {
const userId = req.user!.id;
const memberships = await prisma.familyMember.findMany({
where: { userId },
include: {
family: { include: { _count: { select: { members: true } } } },
},
orderBy: { joinedAt: 'asc' },
});
res.json({
data: memberships.map((m) => ({
id: m.family.id,
name: m.family.name,
role: m.role,
memberCount: m.family._count.members,
joinedAt: m.joinedAt,
})),
});
} catch (error) {
console.error('Error fetching families:', error);
res.status(500).json({ error: 'Failed to fetch families' });
}
});
// Create a new family (caller becomes OWNER).
router.post('/', async (req: Request, res: Response) => {
try {
const { name } = req.body;
if (!name || typeof name !== 'string' || !name.trim()) {
return res.status(400).json({ error: 'Name is required' });
}
const family = await prisma.family.create({
data: {
name: name.trim(),
members: { create: { userId: req.user!.id, role: 'OWNER' } },
},
});
res.status(201).json({ data: family });
} catch (error) {
console.error('Error creating family:', error);
res.status(500).json({ error: 'Failed to create family' });
}
});
// Get a family including its members. Must be a member.
router.get('/:id', async (req: Request, res: Response) => {
try {
const userId = req.user!.id;
const membership = await getMembership(userId, req.params.id);
if (!membership && req.user!.role !== 'ADMIN') {
return res.status(404).json({ error: 'Family not found' });
}
const family = await prisma.family.findUnique({
where: { id: req.params.id },
include: {
members: {
include: { user: { select: { id: true, email: true, name: true, avatar: true } } },
orderBy: { joinedAt: 'asc' },
},
},
});
if (!family) return res.status(404).json({ error: 'Family not found' });
res.json({
data: {
id: family.id,
name: family.name,
createdAt: family.createdAt,
updatedAt: family.updatedAt,
myRole: membership?.role ?? null,
members: family.members.map((m) => ({
userId: m.userId,
email: m.user.email,
name: m.user.name,
avatar: m.user.avatar,
role: m.role,
joinedAt: m.joinedAt,
})),
},
});
} catch (error) {
console.error('Error fetching family:', error);
res.status(500).json({ error: 'Failed to fetch family' });
}
});
// Rename a family. OWNER only.
router.put('/:id', async (req: Request, res: Response) => {
try {
const userId = req.user!.id;
const membership = await getMembership(userId, req.params.id);
const isAdmin = req.user!.role === 'ADMIN';
if (!membership || (membership.role !== 'OWNER' && !isAdmin)) {
return res.status(403).json({ error: 'Owner access required' });
}
const { name } = req.body;
if (!name || typeof name !== 'string' || !name.trim()) {
return res.status(400).json({ error: 'Name is required' });
}
const family = await prisma.family.update({
where: { id: req.params.id },
data: { name: name.trim() },
});
res.json({ data: family });
} catch (error) {
console.error('Error updating family:', error);
res.status(500).json({ error: 'Failed to update family' });
}
});
// Delete a family. OWNER only. Recipes/cookbooks in this family get familyId=NULL.
router.delete('/:id', async (req: Request, res: Response) => {
try {
const userId = req.user!.id;
const membership = await getMembership(userId, req.params.id);
const isAdmin = req.user!.role === 'ADMIN';
if (!membership || (membership.role !== 'OWNER' && !isAdmin)) {
return res.status(403).json({ error: 'Owner access required' });
}
await prisma.family.delete({ where: { id: req.params.id } });
res.json({ message: 'Family deleted' });
} catch (error) {
console.error('Error deleting family:', error);
res.status(500).json({ error: 'Failed to delete family' });
}
});
// Add an existing user to a family by email. OWNER only.
router.post('/:id/members', async (req: Request, res: Response) => {
try {
const userId = req.user!.id;
const membership = await getMembership(userId, req.params.id);
const isAdmin = req.user!.role === 'ADMIN';
if (!membership || (membership.role !== 'OWNER' && !isAdmin)) {
return res.status(403).json({ error: 'Owner access required' });
}
const { email, role } = req.body;
if (!email || typeof email !== 'string') {
return res.status(400).json({ error: 'Email is required' });
}
const invitedRole: FamilyRole = role === 'OWNER' ? 'OWNER' : 'MEMBER';
const invitee = await prisma.user.findUnique({
where: { email: email.toLowerCase() },
select: { id: true, email: true, name: true, avatar: true },
});
if (!invitee) {
return res.status(404).json({ error: 'No user with that email exists on this server' });
}
const existing = await getMembership(invitee.id, req.params.id);
if (existing) {
return res.status(409).json({ error: 'User is already a member' });
}
const newMember = await prisma.familyMember.create({
data: { userId: invitee.id, familyId: req.params.id, role: invitedRole },
});
res.status(201).json({
data: {
userId: invitee.id,
email: invitee.email,
name: invitee.name,
avatar: invitee.avatar,
role: newMember.role,
joinedAt: newMember.joinedAt,
},
});
} catch (error) {
console.error('Error adding member:', error);
res.status(500).json({ error: 'Failed to add member' });
}
});
// Remove a member (or leave as self). OWNER can remove anyone; a member can only remove themselves.
router.delete('/:id/members/:userId', async (req: Request, res: Response) => {
try {
const currentUserId = req.user!.id;
const targetUserId = req.params.userId;
const membership = await getMembership(currentUserId, req.params.id);
const isAdmin = req.user!.role === 'ADMIN';
if (!membership && !isAdmin) {
return res.status(403).json({ error: 'Not a member of this family' });
}
const isOwner = membership?.role === 'OWNER';
const isSelf = targetUserId === currentUserId;
if (!isOwner && !isSelf && !isAdmin) {
return res.status(403).json({ error: 'Only owners can remove other members' });
}
const target = await getMembership(targetUserId, req.params.id);
if (!target) {
return res.status(404).json({ error: 'Member not found' });
}
// Don't let the last OWNER leave/be removed — would orphan the family.
if (target.role === 'OWNER') {
const ownerCount = await prisma.familyMember.count({
where: { familyId: req.params.id, role: 'OWNER' },
});
if (ownerCount <= 1) {
return res.status(400).json({ error: 'Cannot remove the last owner; transfer ownership or delete the family first' });
}
}
await prisma.familyMember.delete({
where: { userId_familyId: { userId: targetUserId, familyId: req.params.id } },
});
res.json({ message: 'Member removed' });
} catch (error) {
console.error('Error removing member:', error);
res.status(500).json({ error: 'Failed to remove member' });
}
});
export default router;

View File

@@ -4,9 +4,17 @@ import prisma from '../config/database';
 import { StorageService } from '../services/storage.service';
 import { ScraperService } from '../services/scraper.service';
 import { autoMapIngredients, saveIngredientMappings } from '../services/ingredientMatcher.service';
+import {
+  getAccessContext,
+  buildRecipeAccessFilter,
+  canMutateRecipe,
+  getPrimaryFamilyId,
+} from '../services/access.service';
+import { requireAuth } from '../middleware/auth.middleware';
 import { ApiResponse, RecipeImportRequest } from '@basil/shared';
 const router = Router();
+router.use(requireAuth);
 const upload = multer({
   storage: multer.memoryStorage(),
   limits: {
@@ -23,7 +31,8 @@ const upload = multer({
 const storageService = StorageService.getInstance();
 const scraperService = new ScraperService();
-// Helper function to auto-add recipe to cookbooks based on their filters
+// Helper function to auto-add recipe to cookbooks based on their filters.
+// Scoped to same family to prevent cross-tenant leakage via shared tag names.
 async function autoAddToCookbooks(recipeId: string) {
   try {
     // Get the recipe with its category and tags
@@ -43,9 +52,11 @@ async function autoAddToCookbooks(recipeId: string) {
     const recipeTags = recipe.tags.map((rt: any) => rt.tag.name);
     const recipeCategories = recipe.categories || [];
-    // Get all cookbooks with auto-filters
+    // Get cookbooks in the same family with auto-filters. Skip unscoped recipes.
+    if (!recipe.familyId) return;
     const cookbooks = await prisma.cookbook.findMany({
       where: {
+        familyId: recipe.familyId,
         OR: [
           { autoFilterCategories: { isEmpty: false } },
           { autoFilterTags: { isEmpty: false } }
@@ -107,36 +118,35 @@ router.get('/', async (req, res) => {
     const limitNum = parseInt(limit as string);
     const skip = (pageNum - 1) * limitNum;
-    const where: any = {};
+    const ctx = await getAccessContext(req.user!);
+    const where: any = { AND: [buildRecipeAccessFilter(ctx)] };
     if (search) {
-      where.OR = [
-        { title: { contains: search as string, mode: 'insensitive' } },
-        { description: { contains: search as string, mode: 'insensitive' } },
-        {
-          tags: {
-            some: {
-              tag: {
-                name: { contains: search as string, mode: 'insensitive' }
-              }
-            }
-          }
-        },
-      ];
+      where.AND.push({
+        OR: [
+          { title: { contains: search as string, mode: 'insensitive' } },
+          { description: { contains: search as string, mode: 'insensitive' } },
+          {
+            tags: {
+              some: {
+                tag: {
+                  name: { contains: search as string, mode: 'insensitive' }
+                }
+              }
+            }
+          },
+        ],
+      });
     }
-    if (cuisine) where.cuisine = cuisine;
-    if (category) {
-      where.categories = {
-        has: category as string
-      };
-    }
+    if (cuisine) where.AND.push({ cuisine });
+    if (category) where.AND.push({ categories: { has: category as string } });
     if (tag) {
-      where.tags = {
-        some: {
-          tag: {
-            name: { equals: tag as string, mode: 'insensitive' }
-          }
-        }
-      };
+      where.AND.push({
+        tags: {
+          some: {
+            tag: { name: { equals: tag as string, mode: 'insensitive' } },
+          },
+        },
+      });
     }
     const [recipes, total] = await Promise.all([
@@ -215,8 +225,9 @@ router.get('/', async (req, res) => {
 // Get single recipe
 router.get('/:id', async (req, res) => {
   try {
-    const recipe = await prisma.recipe.findUnique({
-      where: { id: req.params.id },
+    const ctx = await getAccessContext(req.user!);
+    const recipe = await prisma.recipe.findFirst({
+      where: { AND: [{ id: req.params.id }, buildRecipeAccessFilter(ctx)] },
       include: {
         sections: {
           orderBy: { order: 'asc' },
@@ -285,11 +296,17 @@ router.get('/:id', async (req, res) => {
 router.post('/', async (req, res) => {
   try {
     const { title, description, sections, ingredients, instructions, tags, ...recipeData } = req.body;
+    // Strip any client-supplied ownership — always derive server-side.
+    delete recipeData.userId;
+    delete recipeData.familyId;
+    const familyId = await getPrimaryFamilyId(req.user!.id);
     const recipe = await prisma.recipe.create({
       data: {
         title,
         description,
+        userId: req.user!.id,
+        familyId,
         ...recipeData,
         sections: sections
           ? {
@@ -361,7 +378,20 @@ router.post('/', async (req, res) => {
 // Update recipe
 router.put('/:id', async (req, res) => {
   try {
+    const ctx = await getAccessContext(req.user!);
+    const existing = await prisma.recipe.findUnique({
+      where: { id: req.params.id },
+      select: { userId: true, familyId: true, visibility: true },
+    });
+    if (!existing) return res.status(404).json({ error: 'Recipe not found' });
+    if (!canMutateRecipe(ctx, existing)) {
+      return res.status(403).json({ error: 'Forbidden' });
+    }
     const { sections, ingredients, instructions, tags, ...recipeData } = req.body;
+    // Block client from reassigning ownership via update.
+    delete recipeData.userId;
+    delete recipeData.familyId;
     // Only delete relations that are being updated (not undefined)
     if (sections !== undefined) {
@@ -465,20 +495,23 @@ router.put('/:id', async (req, res) => {
 // Delete recipe
 router.delete('/:id', async (req, res) => {
   try {
+    const ctx = await getAccessContext(req.user!);
     // Get recipe to delete associated images
     const recipe = await prisma.recipe.findUnique({
       where: { id: req.params.id },
       include: { images: true },
     });
-    if (recipe) {
-      // Delete images from storage
-      if (recipe.imageUrl) {
-        await storageService.deleteFile(recipe.imageUrl);
-      }
-      for (const image of recipe.images) {
-        await storageService.deleteFile(image.url);
-      }
+    if (!recipe) return res.status(404).json({ error: 'Recipe not found' });
+    if (!canMutateRecipe(ctx, recipe)) {
+      return res.status(403).json({ error: 'Forbidden' });
+    }
+    // Delete images from storage
+    if (recipe.imageUrl) {
+      await storageService.deleteFile(recipe.imageUrl);
+    }
+    for (const image of recipe.images) {
+      await storageService.deleteFile(image.url);
     }
     await prisma.recipe.delete({ where: { id: req.params.id } });
@@ -505,16 +538,20 @@ router.post('/:id/images', upload.single('image'), async (req, res) => {
       return res.status(400).json({ error: 'No image provided' });
     }
+    const ctx = await getAccessContext(req.user!);
+    const existingRecipe = await prisma.recipe.findUnique({
+      where: { id: req.params.id },
+      select: { imageUrl: true, userId: true, familyId: true, visibility: true },
+    });
+    if (!existingRecipe) return res.status(404).json({ error: 'Recipe not found' });
+    if (!canMutateRecipe(ctx, existingRecipe)) {
+      return res.status(403).json({ error: 'Forbidden' });
+    }
     console.log('Saving file to storage...');
     const imageUrl = await storageService.saveFile(req.file, 'recipes');
     console.log('File saved, URL:', imageUrl);
-    // Get existing recipe to delete old image
-    const existingRecipe = await prisma.recipe.findUnique({
-      where: { id: req.params.id },
-      select: { imageUrl: true },
-    });
     // Delete old image from storage if it exists
     if (existingRecipe?.imageUrl) {
       console.log('Deleting old image:', existingRecipe.imageUrl);
@@ -550,12 +587,17 @@ router.post('/:id/images', upload.single('image'), async (req, res) => {
 // Delete recipe image
 router.delete('/:id/image', async (req, res) => {
   try {
+    const ctx = await getAccessContext(req.user!);
     const recipe = await prisma.recipe.findUnique({
       where: { id: req.params.id },
-      select: { imageUrl: true },
+      select: { imageUrl: true, userId: true, familyId: true, visibility: true },
     });
-    if (!recipe?.imageUrl) {
+    if (!recipe) return res.status(404).json({ error: 'Recipe not found' });
+    if (!canMutateRecipe(ctx, recipe)) {
+      return res.status(403).json({ error: 'Forbidden' });
+    }
+    if (!recipe.imageUrl) {
       return res.status(404).json({ error: 'No image to delete' });
     }
@@ -606,6 +648,16 @@ router.post('/:id/ingredient-mappings', async (req, res) => {
       return res.status(400).json({ error: 'Mappings must be an array' });
     }
+    const ctx = await getAccessContext(req.user!);
+    const recipe = await prisma.recipe.findUnique({
+      where: { id: req.params.id },
+      select: { userId: true, familyId: true, visibility: true },
+    });
+    if (!recipe) return res.status(404).json({ error: 'Recipe not found' });
+    if (!canMutateRecipe(ctx, recipe)) {
+      return res.status(403).json({ error: 'Forbidden' });
+    }
     await saveIngredientMappings(mappings);
     res.json({ message: 'Mappings updated successfully' });
@@ -618,6 +670,16 @@ router.post('/:id/ingredient-mappings', async (req, res) => {
 // Regenerate ingredient-instruction mappings
 router.post('/:id/regenerate-mappings', async (req, res) => {
   try {
+    const ctx = await getAccessContext(req.user!);
+    const recipe = await prisma.recipe.findUnique({
+      where: { id: req.params.id },
+      select: { userId: true, familyId: true, visibility: true },
+    });
+    if (!recipe) return res.status(404).json({ error: 'Recipe not found' });
+    if (!canMutateRecipe(ctx, recipe)) {
+      return res.status(403).json({ error: 'Forbidden' });
+    }
     await autoMapIngredients(req.params.id);
     res.json({ message: 'Mappings regenerated successfully' });

View File

@@ -0,0 +1,155 @@
#!/usr/bin/env node
/**
* Backfill default families for existing data.
*
* For every user, ensure they have a personal Family (as OWNER).
* Any Recipe or Cookbook that they own (userId = them) but has no familyId
* is assigned to that family.
*
* Orphan content (userId IS NULL) is assigned to --owner (default: first ADMIN user)
* so existing legacy records don't disappear behind the access filter.
*
* Idempotent — safe to re-run.
*
* Usage:
* npx tsx src/scripts/backfill-family-tenant.ts
* npx tsx src/scripts/backfill-family-tenant.ts --owner admin@basil.local
* npx tsx src/scripts/backfill-family-tenant.ts --dry-run
*/
import { PrismaClient, User, Family } from '@prisma/client';
const prisma = new PrismaClient();
interface Options {
ownerEmail?: string;
dryRun: boolean;
}
function parseArgs(): Options {
const args = process.argv.slice(2);
const opts: Options = { dryRun: false };
for (let i = 0; i < args.length; i++) {
if (args[i] === '--dry-run') opts.dryRun = true;
else if (args[i] === '--owner' && args[i + 1]) {
opts.ownerEmail = args[++i];
}
}
return opts;
}
async function ensurePersonalFamily(user: User, dryRun: boolean): Promise<Family> {
const existing = await prisma.familyMember.findFirst({
where: { userId: user.id, role: 'OWNER' },
include: { family: true },
});
if (existing) return existing.family;
const name = `${user.name || user.email.split('@')[0]}'s Family`;
if (dryRun) {
console.log(` [dry-run] would create Family "${name}" for ${user.email}`);
return { id: '<dry-run>', name, createdAt: new Date(), updatedAt: new Date() };
}
const family = await prisma.family.create({
data: {
name,
members: {
create: { userId: user.id, role: 'OWNER' },
},
},
});
console.log(` Created Family "${family.name}" (${family.id}) for ${user.email}`);
return family;
}
async function main() {
const opts = parseArgs();
console.log(`\n🌿 Family tenant backfill${opts.dryRun ? ' [DRY RUN]' : ''}\n`);
// 1. Pick legacy owner for orphan records.
let legacyOwner: User | null = null;
if (opts.ownerEmail) {
legacyOwner = await prisma.user.findUnique({ where: { email: opts.ownerEmail.toLowerCase() } });
if (!legacyOwner) {
console.error(`❌ No user with email ${opts.ownerEmail}`);
process.exit(1);
}
} else {
legacyOwner = await prisma.user.findFirst({
where: { role: 'ADMIN' },
orderBy: { createdAt: 'asc' },
});
}
if (!legacyOwner) {
console.warn('⚠️ No admin user found; orphan recipes/cookbooks will be left with familyId = NULL');
} else {
console.log(`Legacy owner for orphan content: ${legacyOwner.email}\n`);
}
// 2. Ensure every user has a personal family.
const users = await prisma.user.findMany({ orderBy: { createdAt: 'asc' } });
console.log(`Processing ${users.length} user(s):`);
const userFamily = new Map<string, Family>();
for (const u of users) {
const fam = await ensurePersonalFamily(u, opts.dryRun);
userFamily.set(u.id, fam);
}
// 3. Backfill Recipe.familyId and Cookbook.familyId.
const targets = [
{ label: 'Recipe', model: prisma.recipe },
{ label: 'Cookbook', model: prisma.cookbook },
] as const;
let totalUpdated = 0;
for (const { label, model } of targets) {
// Owned content without a familyId — assign to owner's family.
const ownedRows: { id: string; userId: string | null }[] = await (model as any).findMany({
where: { familyId: null, userId: { not: null } },
select: { id: true, userId: true },
});
for (const row of ownedRows) {
const fam = userFamily.get(row.userId!);
if (!fam) continue;
if (!opts.dryRun) {
await (model as any).update({ where: { id: row.id }, data: { familyId: fam.id } });
}
totalUpdated++;
}
console.log(` ${label}: ${ownedRows.length} owned row(s) assigned to owner's family`);
// Orphan content — assign to legacy owner's family if configured.
if (legacyOwner) {
const legacyFam = userFamily.get(legacyOwner.id)!;
const orphans: { id: string }[] = await (model as any).findMany({
where: { familyId: null, userId: null },
select: { id: true },
});
for (const row of orphans) {
if (!opts.dryRun) {
await (model as any).update({
where: { id: row.id },
data: { familyId: legacyFam.id, userId: legacyOwner.id },
});
}
totalUpdated++;
}
console.log(` ${label}: ${orphans.length} orphan row(s) assigned to ${legacyOwner.email}'s family`);
}
}
console.log(`\n✅ Backfill complete (${totalUpdated} row(s) ${opts.dryRun ? 'would be ' : ''}updated)\n`);
}
main()
.catch((err) => {
console.error('❌ Backfill failed:', err);
process.exit(1);
})
.finally(async () => {
await prisma.$disconnect();
});

View File

@@ -0,0 +1,108 @@
import type { Prisma, User } from '@prisma/client';
import prisma from '../config/database';
export interface AccessContext {
userId: string;
role: 'USER' | 'ADMIN';
familyIds: string[];
}
export async function getAccessContext(user: User): Promise<AccessContext> {
const memberships = await prisma.familyMember.findMany({
where: { userId: user.id },
select: { familyId: true },
});
return {
userId: user.id,
role: user.role,
familyIds: memberships.map((m) => m.familyId),
};
}
export function buildRecipeAccessFilter(ctx: AccessContext): Prisma.RecipeWhereInput {
if (ctx.role === 'ADMIN') return {};
return {
OR: [
{ userId: ctx.userId },
{ familyId: { in: ctx.familyIds } },
{ visibility: 'PUBLIC' },
{ sharedWith: { some: { userId: ctx.userId } } },
],
};
}
export function buildCookbookAccessFilter(ctx: AccessContext): Prisma.CookbookWhereInput {
if (ctx.role === 'ADMIN') return {};
return {
OR: [
{ userId: ctx.userId },
{ familyId: { in: ctx.familyIds } },
],
};
}
type RecipeAccessSubject = {
userId: string | null;
familyId: string | null;
visibility: 'PRIVATE' | 'SHARED' | 'PUBLIC';
};
type CookbookAccessSubject = {
userId: string | null;
familyId: string | null;
};
export function canReadRecipe(
ctx: AccessContext,
recipe: RecipeAccessSubject,
sharedUserIds: string[] = [],
): boolean {
if (ctx.role === 'ADMIN') return true;
if (recipe.userId === ctx.userId) return true;
if (recipe.familyId && ctx.familyIds.includes(recipe.familyId)) return true;
if (recipe.visibility === 'PUBLIC') return true;
if (sharedUserIds.includes(ctx.userId)) return true;
return false;
}
export function canMutateRecipe(
ctx: AccessContext,
recipe: RecipeAccessSubject,
): boolean {
if (ctx.role === 'ADMIN') return true;
if (recipe.userId === ctx.userId) return true;
if (recipe.familyId && ctx.familyIds.includes(recipe.familyId)) return true;
return false;
}
export function canReadCookbook(
ctx: AccessContext,
cookbook: CookbookAccessSubject,
): boolean {
if (ctx.role === 'ADMIN') return true;
if (cookbook.userId === ctx.userId) return true;
if (cookbook.familyId && ctx.familyIds.includes(cookbook.familyId)) return true;
return false;
}
export function canMutateCookbook(
ctx: AccessContext,
cookbook: CookbookAccessSubject,
): boolean {
return canReadCookbook(ctx, cookbook);
}
export async function getPrimaryFamilyId(userId: string): Promise<string | null> {
const owner = await prisma.familyMember.findFirst({
where: { userId, role: 'OWNER' },
orderBy: { joinedAt: 'asc' },
select: { familyId: true },
});
if (owner) return owner.familyId;
const any = await prisma.familyMember.findFirst({
where: { userId },
orderBy: { joinedAt: 'asc' },
select: { familyId: true },
});
return any?.familyId ?? null;
}

View File

@@ -3,4 +3,4 @@
  * Example: 2026.01.002 (January 2026, patch 2), 2026.02.003 (February 2026, patch 3)
  * Month and patch are zero-padded. Patch increments with each deployment in a month.
  */
-export const APP_VERSION = '2026.01.006';
+export const APP_VERSION = '2026.04.008';

View File

@@ -4,6 +4,7 @@ import { ThemeProvider } from './contexts/ThemeContext';
 import ProtectedRoute from './components/ProtectedRoute';
 import UserMenu from './components/UserMenu';
 import ThemeToggle from './components/ThemeToggle';
+import FamilyGate from './components/FamilyGate';
 import Login from './pages/Login';
 import Register from './pages/Register';
 import AuthCallback from './pages/AuthCallback';
@@ -16,6 +17,7 @@ import RecipeImport from './pages/RecipeImport';
 import NewRecipe from './pages/NewRecipe';
 import UnifiedEditRecipe from './pages/UnifiedEditRecipe';
 import CookingMode from './pages/CookingMode';
+import Family from './pages/Family';
 import { APP_VERSION } from './version';
 import './App.css';
@@ -24,6 +26,7 @@ function App() {
     <Router>
       <ThemeProvider>
         <AuthProvider>
+          <FamilyGate>
           <div className="app">
             <header className="header">
               <div className="container">
@@ -64,6 +67,7 @@ function App() {
               <Route path="/recipes/:id/cook" element={<ProtectedRoute><CookingMode /></ProtectedRoute>} />
               <Route path="/recipes/new" element={<ProtectedRoute><NewRecipe /></ProtectedRoute>} />
               <Route path="/recipes/import" element={<ProtectedRoute><RecipeImport /></ProtectedRoute>} />
+              <Route path="/family" element={<ProtectedRoute><Family /></ProtectedRoute>} />
             </Routes>
           </div>
         </main>
@@ -74,6 +78,7 @@ function App() {
           </div>
         </footer>
       </div>
+          </FamilyGate>
         </AuthProvider>
       </ThemeProvider>
     </Router>

View File

@@ -0,0 +1,101 @@
import { useEffect, useState, FormEvent, ReactNode } from 'react';
import { familiesApi } from '../services/api';
import { useAuth } from '../contexts/AuthContext';
import '../styles/FamilyGate.css';
interface FamilyGateProps {
children: ReactNode;
}
type CheckState = 'idle' | 'checking' | 'needs-family' | 'ready';
export default function FamilyGate({ children }: FamilyGateProps) {
const { isAuthenticated, loading: authLoading, logout } = useAuth();
const [state, setState] = useState<CheckState>('idle');
const [name, setName] = useState('');
const [submitting, setSubmitting] = useState(false);
const [error, setError] = useState<string | null>(null);
useEffect(() => {
if (authLoading) return;
if (!isAuthenticated) {
setState('idle');
return;
}
let cancelled = false;
(async () => {
setState('checking');
try {
const res = await familiesApi.list();
if (cancelled) return;
const count = res.data?.length ?? 0;
setState(count === 0 ? 'needs-family' : 'ready');
} catch {
if (!cancelled) setState('ready');
}
})();
return () => { cancelled = true; };
}, [isAuthenticated, authLoading]);
async function handleCreate(e: FormEvent) {
e.preventDefault();
const trimmed = name.trim();
if (!trimmed) return;
setSubmitting(true);
setError(null);
try {
await familiesApi.create(trimmed);
setState('ready');
} catch (e: any) {
setError(e?.response?.data?.error || 'Failed to create family');
} finally {
setSubmitting(false);
}
}
const showModal = isAuthenticated && state === 'needs-family';
return (
<>
{children}
{showModal && (
<div className="family-gate-overlay" role="dialog" aria-modal="true">
<div className="family-gate-modal">
<h2>Create your family</h2>
<p>
To keep recipes organized and shareable, every account belongs to
a family. Name yours to get started; you can invite others later.
</p>
<form onSubmit={handleCreate}>
<label htmlFor="family-gate-name">Family name</label>
<input
id="family-gate-name"
type="text"
value={name}
onChange={(e) => setName(e.target.value)}
placeholder="e.g. Smith Family"
autoFocus
disabled={submitting}
required
/>
{error && <div className="family-gate-error">{error}</div>}
<div className="family-gate-actions">
<button
type="button"
className="family-gate-secondary"
onClick={logout}
disabled={submitting}
>
Sign out
</button>
<button type="submit" disabled={submitting || !name.trim()}>
{submitting ? 'Creating…' : 'Create family'}
</button>
</div>
</form>
</div>
</div>
)}
</>
);
}

View File

@@ -96,6 +96,13 @@ const UserMenu: React.FC = () => {
           >
             My Cookbooks
           </Link>
+          <Link
+            to="/family"
+            className="user-menu-link"
+            onClick={() => setIsOpen(false)}
+          >
+            Family
+          </Link>
           {isAdmin && (
             <>
               <div className="user-menu-divider"></div>

View File

@@ -0,0 +1,245 @@
import { useEffect, useState, FormEvent } from 'react';
import {
familiesApi,
FamilySummary,
FamilyDetail,
FamilyMemberInfo,
} from '../services/api';
import { useAuth } from '../contexts/AuthContext';
import '../styles/Family.css';
export default function Family() {
const { user } = useAuth();
const [families, setFamilies] = useState<FamilySummary[]>([]);
const [selectedId, setSelectedId] = useState<string | null>(null);
const [detail, setDetail] = useState<FamilyDetail | null>(null);
const [loading, setLoading] = useState(true);
const [error, setError] = useState<string | null>(null);
const [newFamilyName, setNewFamilyName] = useState('');
const [inviteEmail, setInviteEmail] = useState('');
const [inviteRole, setInviteRole] = useState<'MEMBER' | 'OWNER'>('MEMBER');
const [busy, setBusy] = useState(false);
async function loadFamilies() {
setError(null);
try {
const res = await familiesApi.list();
const list = res.data ?? [];
setFamilies(list);
if (!selectedId && list.length > 0) setSelectedId(list[0].id);
if (selectedId && !list.find((f) => f.id === selectedId)) {
setSelectedId(list[0]?.id ?? null);
}
} catch (e: any) {
setError(e?.response?.data?.error || e?.message || 'Failed to load families');
}
}
async function loadDetail(id: string) {
try {
const res = await familiesApi.get(id);
setDetail(res.data ?? null);
} catch (e: any) {
setError(e?.response?.data?.error || e?.message || 'Failed to load family');
setDetail(null);
}
}
useEffect(() => {
(async () => {
setLoading(true);
await loadFamilies();
setLoading(false);
})();
}, []);
useEffect(() => {
if (selectedId) loadDetail(selectedId);
else setDetail(null);
}, [selectedId]);
async function handleCreateFamily(e: FormEvent) {
e.preventDefault();
if (!newFamilyName.trim()) return;
setBusy(true);
setError(null);
try {
const res = await familiesApi.create(newFamilyName.trim());
setNewFamilyName('');
if (res.data) setSelectedId(res.data.id);
await loadFamilies();
} catch (e: any) {
setError(e?.response?.data?.error || 'Failed to create family');
} finally {
setBusy(false);
}
}
async function handleInvite(e: FormEvent) {
e.preventDefault();
if (!selectedId || !inviteEmail.trim()) return;
setBusy(true);
setError(null);
try {
await familiesApi.addMember(selectedId, inviteEmail.trim(), inviteRole);
setInviteEmail('');
setInviteRole('MEMBER');
await loadDetail(selectedId);
await loadFamilies();
} catch (e: any) {
setError(e?.response?.data?.error || 'Failed to add member');
} finally {
setBusy(false);
}
}
async function handleRemoveMember(member: FamilyMemberInfo) {
if (!selectedId) return;
const isSelf = member.userId === user?.id;
const confirmMsg = isSelf
? `Leave "${detail?.name}"?`
: `Remove ${member.name || member.email} from this family?`;
if (!confirm(confirmMsg)) return;
setBusy(true);
setError(null);
try {
await familiesApi.removeMember(selectedId, member.userId);
await loadFamilies();
if (isSelf) {
setSelectedId(null);
} else {
await loadDetail(selectedId);
}
} catch (e: any) {
setError(e?.response?.data?.error || 'Failed to remove member');
} finally {
setBusy(false);
}
}
async function handleDeleteFamily() {
if (!selectedId || !detail) return;
if (!confirm(`Delete family "${detail.name}"? Recipes and cookbooks in this family will lose their family assignment (they won't be deleted).`)) return;
setBusy(true);
setError(null);
try {
await familiesApi.remove(selectedId);
setSelectedId(null);
await loadFamilies();
} catch (e: any) {
setError(e?.response?.data?.error || 'Failed to delete family');
} finally {
setBusy(false);
}
}
if (loading) return <div className="family-page">Loading</div>;
const isOwner = detail?.myRole === 'OWNER';
return (
<div className="family-page">
<h2>Families</h2>
{error && <div className="family-error">{error}</div>}
<section className="family-create">
<form onSubmit={handleCreateFamily} className="family-create-form">
<label>
Create a new family:
<input
type="text"
value={newFamilyName}
placeholder="e.g. Smith Family"
onChange={(e) => setNewFamilyName(e.target.value)}
disabled={busy}
/>
</label>
<button type="submit" disabled={busy || !newFamilyName.trim()}>Create</button>
</form>
</section>
<div className="family-layout">
<aside className="family-list">
<h3>Your families</h3>
{families.length === 0 && <p className="muted">You're not in any family yet.</p>}
<ul>
{families.map((f) => (
<li key={f.id} className={f.id === selectedId ? 'active' : ''}>
<button onClick={() => setSelectedId(f.id)}>
<strong>{f.name}</strong>
<span className="family-meta">{f.role} · {f.memberCount} member{f.memberCount === 1 ? '' : 's'}</span>
</button>
</li>
))}
</ul>
</aside>
<main className="family-detail">
{!detail && <p className="muted">Select a family to see its members.</p>}
{detail && (
<>
<div className="family-detail-header">
<h3>{detail.name}</h3>
{isOwner && (
<button className="danger" onClick={handleDeleteFamily} disabled={busy}>
Delete family
</button>
)}
</div>
<h4>Members</h4>
<table className="family-members">
<thead>
<tr><th>Name</th><th>Email</th><th>Role</th><th></th></tr>
</thead>
<tbody>
{detail.members.map((m) => (
<tr key={m.userId}>
<td>{m.name || ''}</td>
<td>{m.email}</td>
<td>{m.role}</td>
<td>
{(isOwner || m.userId === user?.id) && (
<button onClick={() => handleRemoveMember(m)} disabled={busy}>
{m.userId === user?.id ? 'Leave' : 'Remove'}
</button>
)}
</td>
</tr>
))}
</tbody>
</table>
{isOwner && (
<>
<h4>Invite a member</h4>
<p className="muted">User must already have a Basil account on this server.</p>
<form onSubmit={handleInvite} className="family-invite-form">
<input
type="email"
placeholder="email@example.com"
value={inviteEmail}
onChange={(e) => setInviteEmail(e.target.value)}
disabled={busy}
required
/>
<select
value={inviteRole}
onChange={(e) => setInviteRole(e.target.value as 'MEMBER' | 'OWNER')}
disabled={busy}
>
<option value="MEMBER">Member</option>
<option value="OWNER">Owner</option>
</select>
<button type="submit" disabled={busy || !inviteEmail.trim()}>Add</button>
</form>
</>
)}
</>
)}
</main>
</div>
</div>
);
}

View File

@@ -237,4 +237,67 @@ export const mealPlansApi = {
  },
};
export type FamilyRole = 'OWNER' | 'MEMBER';
export interface FamilySummary {
id: string;
name: string;
role: FamilyRole;
memberCount: number;
joinedAt: string;
}
export interface FamilyMemberInfo {
userId: string;
email: string;
name: string | null;
avatar: string | null;
role: FamilyRole;
joinedAt: string;
}
export interface FamilyDetail {
id: string;
name: string;
createdAt: string;
updatedAt: string;
myRole: FamilyRole | null;
members: FamilyMemberInfo[];
}
export const familiesApi = {
list: async (): Promise<ApiResponse<FamilySummary[]>> => {
const response = await api.get('/families');
return response.data;
},
create: async (name: string): Promise<ApiResponse<{ id: string; name: string }>> => {
const response = await api.post('/families', { name });
return response.data;
},
get: async (id: string): Promise<ApiResponse<FamilyDetail>> => {
const response = await api.get(`/families/${id}`);
return response.data;
},
rename: async (id: string, name: string): Promise<ApiResponse<{ id: string; name: string }>> => {
const response = await api.put(`/families/${id}`, { name });
return response.data;
},
remove: async (id: string): Promise<ApiResponse<void>> => {
const response = await api.delete(`/families/${id}`);
return response.data;
},
addMember: async (
familyId: string,
email: string,
role: FamilyRole = 'MEMBER',
): Promise<ApiResponse<FamilyMemberInfo>> => {
const response = await api.post(`/families/${familyId}/members`, { email, role });
return response.data;
},
removeMember: async (familyId: string, userId: string): Promise<ApiResponse<void>> => {
const response = await api.delete(`/families/${familyId}/members/${userId}`);
return response.data;
},
};
export default api;


@@ -0,0 +1,173 @@
.family-page {
padding: 1rem 0;
}
.family-page h2 {
margin-bottom: 1rem;
color: var(--text-primary);
}
.family-page h3,
.family-page h4 {
color: var(--text-primary);
}
.family-error {
background-color: #ffebee;
color: #d32f2f;
border: 1px solid #f5c2c7;
border-radius: 4px;
padding: 0.75rem 1rem;
margin-bottom: 1rem;
}
.family-create {
margin-bottom: 1.5rem;
}
.family-create-form {
display: flex;
gap: 0.75rem;
align-items: flex-end;
flex-wrap: wrap;
}
.family-create-form label {
display: flex;
flex-direction: column;
gap: 0.35rem;
flex: 1 1 260px;
color: var(--text-secondary);
font-size: 0.9rem;
}
.family-create-form input,
.family-invite-form input,
.family-invite-form select {
padding: 0.6rem 0.75rem;
border: 1px solid var(--border-color);
border-radius: 4px;
background-color: var(--bg-secondary);
color: var(--text-primary);
font-size: 1rem;
}
.family-layout {
display: grid;
grid-template-columns: 260px 1fr;
gap: 1.5rem;
}
@media (max-width: 720px) {
.family-layout {
grid-template-columns: 1fr;
}
}
.family-list h3,
.family-detail h3 {
margin-top: 0;
}
.family-list ul {
list-style: none;
padding: 0;
margin: 0;
}
.family-list li {
margin-bottom: 0.5rem;
}
.family-list li button {
width: 100%;
text-align: left;
padding: 0.75rem 1rem;
border: 1px solid var(--border-color);
border-radius: 6px;
background-color: var(--bg-secondary);
color: var(--text-primary);
cursor: pointer;
display: flex;
flex-direction: column;
gap: 0.25rem;
transition: border-color 0.2s, background-color 0.2s;
}
.family-list li button:hover {
border-color: var(--brand-primary);
background-color: var(--bg-tertiary);
}
.family-list li.active button {
border-color: var(--brand-primary);
background-color: var(--bg-tertiary);
box-shadow: inset 3px 0 0 var(--brand-primary);
}
.family-meta {
font-size: 0.8rem;
color: var(--text-secondary);
}
.family-detail-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 0.5rem;
}
.family-members {
width: 100%;
border-collapse: collapse;
margin-bottom: 1.5rem;
}
.family-members th,
.family-members td {
text-align: left;
padding: 0.6rem 0.75rem;
border-bottom: 1px solid var(--border-light);
color: var(--text-primary);
}
.family-members th {
color: var(--text-secondary);
font-weight: 600;
font-size: 0.85rem;
text-transform: uppercase;
letter-spacing: 0.03em;
}
.family-invite-form {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
}
.family-invite-form input[type="email"] {
flex: 1 1 240px;
}
.family-page button.danger {
background-color: #d32f2f;
color: white;
border: none;
padding: 0.5rem 1rem;
border-radius: 4px;
font-size: 0.9rem;
}
.family-page button.danger:hover {
background-color: #b71c1c;
}
.family-members button {
padding: 0.4rem 0.8rem;
font-size: 0.85rem;
}
.muted {
color: var(--text-secondary);
font-style: italic;
}


@@ -0,0 +1,77 @@
.family-gate-overlay {
position: fixed;
inset: 0;
background: rgba(0, 0, 0, 0.55);
display: flex;
align-items: center;
justify-content: center;
z-index: 2000;
padding: 1rem;
}
.family-gate-modal {
background: var(--bg-secondary);
color: var(--text-primary);
border-radius: 8px;
max-width: 440px;
width: 100%;
padding: 1.75rem;
box-shadow: 0 12px 40px rgba(0, 0, 0, 0.25);
}
.family-gate-modal h2 {
margin: 0 0 0.5rem;
color: var(--brand-primary);
}
.family-gate-modal p {
margin: 0 0 1.25rem;
color: var(--text-secondary);
line-height: 1.45;
}
.family-gate-modal label {
display: block;
font-size: 0.9rem;
color: var(--text-secondary);
margin-bottom: 0.35rem;
}
.family-gate-modal input {
width: 100%;
padding: 0.6rem 0.75rem;
border: 1px solid var(--border-color);
border-radius: 4px;
background-color: var(--bg-primary);
color: var(--text-primary);
font-size: 1rem;
margin-bottom: 1rem;
box-sizing: border-box;
}
.family-gate-error {
background-color: #ffebee;
color: #d32f2f;
border: 1px solid #f5c2c7;
border-radius: 4px;
padding: 0.5rem 0.75rem;
margin-bottom: 1rem;
font-size: 0.9rem;
}
.family-gate-actions {
display: flex;
justify-content: flex-end;
gap: 0.75rem;
}
.family-gate-secondary {
background-color: transparent;
color: var(--text-secondary);
border: 1px solid var(--border-color);
}
.family-gate-secondary:hover {
background-color: var(--bg-tertiary);
color: var(--text-primary);
}


@@ -3,4 +3,4 @@
* Example: 2026.01.002 (January 2026, patch 2), 2026.02.003 (February 2026, patch 3)
* Month and patch are zero-padded. Patch increments with each deployment in a month.
*/
export const APP_VERSION = '2026.04.008';


@@ -0,0 +1,458 @@
# PostgreSQL Backup Scripts
Comprehensive backup and restore scripts for PostgreSQL databases.
## Scripts Overview
### 1. `backup-all-postgres-databases.sh`
Backs up all databases on a PostgreSQL server (excluding system databases).
**Features:**
- ✅ Backs up all user databases automatically
- ✅ Includes global objects (roles, tablespaces)
- ✅ Optional gzip compression
- ✅ Automatic retention management
- ✅ Integrity verification
- ✅ Detailed logging with color output
- ✅ Backup summary reporting
- ✅ Email/Slack notification support (optional)
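The integrity verification those last bullets describe amounts to two tests per file, sketched here as a standalone helper (hypothetical name `verify_backup_file`; the real script inlines the same checks after each dump):

```bash
# A backup passes if it exists, is non-empty, and (when gzipped)
# survives a gzip -t integrity test.
verify_backup_file() {
  local f="$1"
  [ -s "$f" ] || return 1   # missing or zero-byte file
  case "$f" in
    *.gz) gzip -t "$f" 2>/dev/null || return 1 ;;
  esac
  return 0
}
```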
### 2. `restore-postgres-database.sh`
Restores a single database from backup.
**Features:**
- ✅ Safety backup before restore
- ✅ Interactive confirmation
- ✅ Automatic database name detection
- ✅ Compressed file support
- ✅ Integrity verification
- ✅ Post-restore verification
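Automatic database name detection works by stripping the timestamp suffix from the backup filename (`dbname_YYYYMMDD_HHMMSS.sql[.gz]`); the parsing the restore script uses boils down to:

```bash
# Derive the database name from a backup filename such as
# mydb_20260120_020001.sql.gz -> mydb
db_name_from_backup() {
  basename "$1" | sed -E 's/_[0-9]{8}_[0-9]{6}\.sql(\.gz)?$//'
}
```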
---
## Quick Start
### Backup All Databases
```bash
# Basic usage
./backup-all-postgres-databases.sh
# With compression (recommended)
./backup-all-postgres-databases.sh -c
# Custom configuration
./backup-all-postgres-databases.sh \
-h db.example.com \
-U postgres \
-d /mnt/backups \
-r 60 \
-c
```
### Restore a Database
```bash
# Interactive restore (with confirmation)
./restore-postgres-database.sh /var/backups/postgresql/20260120/mydb_20260120_020001.sql.gz
# Force restore (skip confirmation)
./restore-postgres-database.sh backup.sql.gz -d mydb -f
```
---
## Detailed Usage
### Backup Script Options
```bash
./backup-all-postgres-databases.sh [options]
Options:
-h HOST Database host (default: localhost)
-p PORT Database port (default: 5432)
-U USER Database user (default: postgres)
-d BACKUP_DIR Backup directory (default: /var/backups/postgresql)
-r DAYS Retention days (default: 30)
-c Enable compression (gzip)
-v Verbose output
-H Show help
```
### Restore Script Options
```bash
./restore-postgres-database.sh <backup_file> [options]
Options:
-h HOST Database host (default: localhost)
-p PORT Database port (default: 5432)
-U USER Database user (default: postgres)
-d DBNAME Target database name (default: from filename)
-f Force restore (skip confirmation)
-v Verbose output
-H Show help
```
---
## Automated Backups with Cron
### Daily Backups (Recommended)
```bash
# Edit crontab
sudo crontab -e
# Add daily backup at 2 AM with compression
0 2 * * * /path/to/backup-all-postgres-databases.sh -c >> /var/log/postgres-backup.log 2>&1
```
### Alternative Schedules
```bash
# Every 6 hours
0 */6 * * * /path/to/backup-all-postgres-databases.sh -c
# Twice daily (2 AM and 2 PM)
0 2,14 * * * /path/to/backup-all-postgres-databases.sh -c
# Weekly on Sundays at 3 AM
0 3 * * 0 /path/to/backup-all-postgres-databases.sh -c -r 90
```
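If a backup run can outlast the interval between cron fires (large databases on a 6-hour schedule, say), wrapping the job in `flock` keeps runs from overlapping. A sketch, with a hypothetical lock path:

```bash
# Skip this run if a previous backup still holds the lock (-n = non-blocking)
0 */6 * * * flock -n /var/lock/postgres-backup.lock /path/to/backup-all-postgres-databases.sh -c
```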
---
## Backup Directory Structure
```
/var/backups/postgresql/
├── 20260120/ # Date-based subdirectory
│ ├── globals_20260120_020001.sql.gz # Global objects backup
│ ├── basil_20260120_020001.sql.gz # Database backup
│ ├── myapp_20260120_020001.sql.gz # Database backup
│ └── wiki_20260120_020001.sql.gz # Database backup
├── 20260121/
│ ├── globals_20260121_020001.sql.gz
│ └── ...
└── 20260122/
└── ...
```
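The naming scheme above can be reproduced with a small helper (hypothetical function, shown only to make the layout concrete):

```bash
# Build the path the backup script would write for a given database:
# <root>/<YYYYMMDD>/<dbname>_<YYYYMMDD_HHMMSS>.sql.gz
backup_path_for() {
  local db="$1" root="${2:-/var/backups/postgresql}"
  printf '%s/%s/%s_%s.sql.gz\n' "$root" "$(date +%Y%m%d)" "$db" "$(date +%Y%m%d_%H%M%S)"
}
```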
---
## Configuration Examples
### Local PostgreSQL Server
```bash
./backup-all-postgres-databases.sh \
-h localhost \
-U postgres \
-c
```
### Remote PostgreSQL Server
```bash
./backup-all-postgres-databases.sh \
-h db.example.com \
-p 5432 \
-U backup_user \
-d /mnt/network/backups \
-r 60 \
-c \
-v
```
### High-Frequency Backups
```bash
# Short retention for frequent backups
./backup-all-postgres-databases.sh \
-r 7 \
-c
```
---
## Authentication Setup
### Option 1: .pgpass File (Recommended)
Create `~/.pgpass` with connection credentials:
```bash
echo "localhost:5432:*:postgres:your-password" >> ~/.pgpass
chmod 600 ~/.pgpass
```
Format: `hostname:port:database:username:password`
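libpq ignores `~/.pgpass` (with a warning) when it is group- or world-readable, which is a common reason scripted backups unexpectedly prompt for a password. A quick check, as a hypothetical helper (`stat -c` is the GNU form, `stat -f` the BSD fallback):

```bash
# Succeed only if the pgpass file exists with owner-only permissions.
pgpass_ok() {
  local f="${1:-$HOME/.pgpass}"
  [ -f "$f" ] || return 1
  local mode
  mode=$(stat -c '%a' "$f" 2>/dev/null || stat -f '%Lp' "$f")
  case "$mode" in 600|400) return 0 ;; *) return 1 ;; esac
}
```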
### Option 2: Environment Variables
```bash
export PGPASSWORD="your-password"
./backup-all-postgres-databases.sh
```
### Option 3: Peer Authentication (Local Only)
Run as the postgres system user:
```bash
sudo -u postgres ./backup-all-postgres-databases.sh
```
---
## Monitoring and Notifications
### Email Notifications
Edit the scripts and uncomment the email notification section:
```bash
# In backup-all-postgres-databases.sh, uncomment:
if command -v mail &> /dev/null; then
echo "$summary" | mail -s "PostgreSQL Backup $status - $(hostname)" admin@example.com
fi
```
### Slack Notifications
Set webhook URL and uncomment:
```bash
export SLACK_WEBHOOK_URL="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
# In script, uncomment:
if [ -n "$SLACK_WEBHOOK_URL" ]; then
curl -X POST "$SLACK_WEBHOOK_URL" \
-H 'Content-Type: application/json' \
-d "{\"text\":\"PostgreSQL Backup $status\n$summary\"}"
fi
```
### Log Rotation
Create `/etc/logrotate.d/postgres-backup`:
```
/var/log/postgres-backup.log {
daily
rotate 30
compress
delaycompress
missingok
notifempty
}
```
---
## Backup Verification
### Manual Verification
```bash
# List backups
ls -lh /var/backups/postgresql/$(date +%Y%m%d)/
# Verify compressed file integrity
gzip -t /var/backups/postgresql/20260120/basil_20260120_020001.sql.gz
# Preview backup contents
gunzip -c backup.sql.gz | head -50
```
### Test Restore (Recommended Monthly)
```bash
# Restore to a test database
./restore-postgres-database.sh backup.sql.gz -d test_restore -f
# Verify
psql -d test_restore -c "\dt"
# Cleanup
dropdb test_restore
```
---
## Disaster Recovery
### Full Server Restore
1. **Install PostgreSQL** on new server
2. **Restore global objects first**:
```bash
gunzip -c globals_YYYYMMDD_HHMMSS.sql.gz | psql -U postgres -d postgres
```
3. **Restore each database**:
```bash
./restore-postgres-database.sh basil_20260120_020001.sql.gz
./restore-postgres-database.sh myapp_20260120_020001.sql.gz
```
### Point-in-Time Recovery
For PITR capabilities, enable WAL archiving in `postgresql.conf`:
```
wal_level = replica
archive_mode = on
archive_command = 'cp %p /var/lib/postgresql/wal_archive/%f'
max_wal_senders = 3
```
Then use `pg_basebackup` and WAL replay for PITR.
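On the recovery side (PostgreSQL 12+), replay is driven by a `restore_command` plus an empty `recovery.signal` file in the data directory. A minimal sketch matching the `archive_command` above (the target time is an illustrative placeholder):

```
restore_command = 'cp /var/lib/postgresql/wal_archive/%f %p'
recovery_target_time = '2026-01-20 02:00:00'   # optional: stop replay here
```

After restoring the base backup, create `recovery.signal` in the data directory and start the server; it replays archived WAL up to the target.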
---
## Troubleshooting
### Permission Denied
```bash
# Fix backup directory permissions
sudo chown -R postgres:postgres /var/backups/postgresql
sudo chmod 755 /var/backups/postgresql
# Fix script permissions
chmod +x backup-all-postgres-databases.sh
```
### Connection Failed
```bash
# Test connection manually
psql -h localhost -U postgres -c "SELECT version();"
# Check pg_hba.conf
sudo cat /etc/postgresql/*/main/pg_hba.conf
# Ensure proper authentication line exists:
# local all postgres peer
# host all all 127.0.0.1/32 scram-sha-256
```
### Out of Disk Space
```bash
# Check disk usage
df -h /var/backups
# Clean old backups manually
find /var/backups/postgresql -maxdepth 1 -type d -name "????????" -mtime +30 -exec rm -rf {} \;
# Reduce retention period
./backup-all-postgres-databases.sh -r 7
```
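A pre-flight guard can catch low disk space before `pg_dump` fails partway through a run. A hypothetical helper using `df -Pm` (POSIX-format output, sizes in MB):

```bash
# Succeed only if the filesystem holding "$1" has at least "$2" MB free.
require_free_space() {
  local dir="$1" min_mb="${2:-1024}"
  local free_mb
  free_mb=$(df -Pm "$dir" | awk 'NR==2 {print $4}')
  [ "$free_mb" -ge "$min_mb" ]
}
```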
### Backup File Corrupted
```bash
# Verify integrity
gzip -t backup.sql.gz
# If corrupted, use previous backup
ls -lt /var/backups/postgresql/*/basil_*.sql.gz | head
```
---
## Performance Optimization
### Large Databases
For very large databases, consider:
```bash
# Parallel dump (PostgreSQL 9.3+)
pg_dump -Fd -j 4 -f backup_directory mydb
# Custom format (smaller, faster restore)
pg_dump -Fc mydb > backup.custom
# Restore from custom format
pg_restore -d mydb backup.custom
```
### Network Backups
```bash
# Direct SSH backup (no local storage)
pg_dump mydb | gzip | ssh backup-server "cat > /backups/mydb.sql.gz"
```
---
## Best Practices
1. **Always test restores** - Backups are worthless if you can't restore
2. **Monitor backup completion** - Set up alerts for failed backups
3. **Use compression** - Plain SQL dumps typically compress by 80-90%
4. **Multiple backup locations** - Local + remote/cloud storage
5. **Verify backup integrity** - Run gzip -t on compressed backups
6. **Document procedures** - Keep runbooks for disaster recovery
7. **Encrypt sensitive backups** - Use gpg for encryption if needed
8. **Regular retention review** - Adjust based on compliance requirements
---
## Security Considerations
### Encryption at Rest
```bash
# Encrypt backup with GPG
pg_dump mydb | gzip | gpg --encrypt --recipient admin@example.com > backup.sql.gz.gpg
# Decrypt for restore
gpg --decrypt backup.sql.gz.gpg | gunzip | psql mydb
```
### Secure Transfer
```bash
# Use SCP with key authentication
scp -i ~/.ssh/backup_key backup.sql.gz backup-server:/secure/backups/
# Or rsync over SSH
rsync -av -e "ssh -i ~/.ssh/backup_key" \
/var/backups/postgresql/ \
backup-server:/secure/backups/
```
### Access Control
```bash
# Restrict backup directory permissions
chmod 700 /var/backups/postgresql
chown postgres:postgres /var/backups/postgresql
# Restrict script permissions
chmod 750 backup-all-postgres-databases.sh
chown root:postgres backup-all-postgres-databases.sh
```
---
## Additional Resources
- [PostgreSQL Backup Documentation](https://www.postgresql.org/docs/current/backup.html)
- [pg_dump Manual](https://www.postgresql.org/docs/current/app-pgdump.html)
- [pg_restore Manual](https://www.postgresql.org/docs/current/app-pgrestore.html)
- [Continuous Archiving and PITR](https://www.postgresql.org/docs/current/continuous-archiving.html)
---
## Support
For issues or questions:
- Check script help: `./backup-all-postgres-databases.sh -H`
- Review logs: `tail -f /var/log/postgres-backup.log`
- Test connection: `psql -h localhost -U postgres`


@@ -0,0 +1,402 @@
#!/bin/bash
#
# PostgreSQL All Databases Backup Script
# Backs up all databases on a PostgreSQL server using pg_dump
#
# Usage:
# ./backup-all-postgres-databases.sh [options]
#
# Options:
# -h HOST Database host (default: localhost)
# -p PORT Database port (default: 5432)
# -U USER Database user (default: postgres)
# -d BACKUP_DIR Backup directory (default: /var/backups/postgresql)
# -r DAYS Retention days (default: 30)
# -c Enable compression (gzip)
# -v Verbose output
#
# Cron example (daily at 2 AM):
# 0 2 * * * /path/to/backup-all-postgres-databases.sh -c >> /var/log/postgres-backup.log 2>&1
set -e
set -o pipefail
# Default configuration
DB_HOST="localhost"
DB_PORT="5432"
DB_USER="postgres"
BACKUP_DIR="/var/backups/postgresql"
RETENTION_DAYS=30
COMPRESS=false
VERBOSE=false
# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Parse command line arguments
while getopts "h:p:U:d:r:cvH" opt; do
case $opt in
h) DB_HOST="$OPTARG" ;;
p) DB_PORT="$OPTARG" ;;
U) DB_USER="$OPTARG" ;;
d) BACKUP_DIR="$OPTARG" ;;
r) RETENTION_DAYS="$OPTARG" ;;
c) COMPRESS=true ;;
v) VERBOSE=true ;;
H)
echo "PostgreSQL All Databases Backup Script"
echo ""
echo "Usage: $0 [options]"
echo ""
echo "Options:"
echo " -h HOST Database host (default: localhost)"
echo " -p PORT Database port (default: 5432)"
echo " -U USER Database user (default: postgres)"
echo " -d BACKUP_DIR Backup directory (default: /var/backups/postgresql)"
echo " -r DAYS Retention days (default: 30)"
echo " -c Enable compression (gzip)"
echo " -v Verbose output"
echo " -H Show this help"
echo ""
exit 0
;;
\?)
echo "Invalid option: -$OPTARG" >&2
exit 1
;;
esac
done
# Logging functions
log_info() {
echo -e "${GREEN}[INFO]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1"
}
log_warn() {
echo -e "${YELLOW}[WARN]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1" >&2
}
log_debug() {
if [ "$VERBOSE" = true ]; then
# Write debug output to stderr so functions whose stdout is captured via
# command substitution (create_backup_dirs, get_databases) stay clean.
echo -e "${BLUE}[DEBUG]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1" >&2
fi
}
# Check dependencies
check_dependencies() {
log_debug "Checking dependencies..."
if ! command -v psql &> /dev/null; then
log_error "psql not found. Please install PostgreSQL client tools."
exit 1
fi
if ! command -v pg_dump &> /dev/null; then
log_error "pg_dump not found. Please install PostgreSQL client tools."
exit 1
fi
if [ "$COMPRESS" = true ] && ! command -v gzip &> /dev/null; then
log_error "gzip not found. Please install gzip or disable compression."
exit 1
fi
log_debug "All dependencies satisfied"
}
# Test database connection
test_connection() {
log_debug "Testing database connection..."
if ! psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d postgres -c "SELECT version();" &> /dev/null; then
log_error "Cannot connect to PostgreSQL server at $DB_HOST:$DB_PORT"
log_error "Check credentials, network connectivity, and pg_hba.conf settings"
exit 1
fi
log_debug "Database connection successful"
}
# Create backup directory structure
create_backup_dirs() {
local timestamp=$(date +%Y%m%d)
local backup_subdir="$BACKUP_DIR/$timestamp"
log_debug "Creating backup directory: $backup_subdir"
mkdir -p "$backup_subdir"
if [ ! -w "$backup_subdir" ]; then
log_error "Backup directory is not writable: $backup_subdir"
exit 1
fi
echo "$backup_subdir"
}
# Get list of databases to backup
get_databases() {
log_debug "Retrieving database list..."
# Get all databases except system databases
local databases=$(psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d postgres -t -c \
"SELECT datname FROM pg_database
WHERE datname NOT IN ('postgres', 'template0', 'template1')
AND datistemplate = false
ORDER BY datname;")
if [ -z "$databases" ]; then
log_warn "No user databases found to backup"
return 1
fi
echo "$databases"
}
# Backup a single database
backup_database() {
local db_name="$1"
local backup_dir="$2"
local timestamp=$(date +%Y%m%d_%H%M%S)
local backup_file="$backup_dir/${db_name}_${timestamp}.sql"
log_info "Backing up database: $db_name"
# Add compression extension if enabled
if [ "$COMPRESS" = true ]; then
backup_file="${backup_file}.gz"
fi
# Perform backup
local start_time=$(date +%s)
if [ "$COMPRESS" = true ]; then
if pg_dump -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$db_name" \
--no-owner --no-privileges --create --clean | gzip > "$backup_file"; then
local status="SUCCESS"
else
local status="FAILED"
fi
else
if pg_dump -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$db_name" \
--no-owner --no-privileges --create --clean > "$backup_file"; then
local status="SUCCESS"
else
local status="FAILED"
fi
fi
local end_time=$(date +%s)
local duration=$((end_time - start_time))
if [ "$status" = "SUCCESS" ]; then
# Verify backup file exists and has content
if [ ! -s "$backup_file" ]; then
log_error "Backup file is empty: $backup_file"
return 1
fi
# Verify compressed file integrity if compression is enabled
if [ "$COMPRESS" = true ]; then
if ! gzip -t "$backup_file" 2>/dev/null; then
log_error "Backup file is corrupted: $backup_file"
return 1
fi
fi
local file_size=$(du -h "$backup_file" | cut -f1)
log_info "$db_name backup completed - Size: $file_size, Duration: ${duration}s"
log_debug " File: $backup_file"
return 0
else
log_error "$db_name backup failed"
# Remove failed backup file
rm -f "$backup_file"
return 1
fi
}
# Backup global objects (roles, tablespaces, etc.)
backup_globals() {
local backup_dir="$1"
local timestamp=$(date +%Y%m%d_%H%M%S)
local backup_file="$backup_dir/globals_${timestamp}.sql"
log_info "Backing up global objects (roles, tablespaces)..."
if [ "$COMPRESS" = true ]; then
backup_file="${backup_file}.gz"
fi
if [ "$COMPRESS" = true ]; then
if pg_dumpall -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" --globals-only | gzip > "$backup_file"; then
local status="SUCCESS"
else
local status="FAILED"
fi
else
if pg_dumpall -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" --globals-only > "$backup_file"; then
local status="SUCCESS"
else
local status="FAILED"
fi
fi
if [ "$status" = "SUCCESS" ]; then
local file_size=$(du -h "$backup_file" | cut -f1)
log_info "✓ Global objects backup completed - Size: $file_size"
return 0
else
log_error "✗ Global objects backup failed"
rm -f "$backup_file"
return 1
fi
}
# Clean up old backups
cleanup_old_backups() {
log_info "Cleaning up backups older than $RETENTION_DAYS days..."
local deleted_count=0
# Find and delete old backup directories
while IFS= read -r old_dir; do
log_debug "Deleting old backup directory: $old_dir"
rm -rf "$old_dir"
deleted_count=$((deleted_count + 1)) # ((deleted_count++)) exits 1 when the count is 0, aborting under set -e
done < <(find "$BACKUP_DIR" -maxdepth 1 -type d -name "????????" -mtime +$RETENTION_DAYS 2>/dev/null)
if [ $deleted_count -gt 0 ]; then
log_info "Deleted $deleted_count old backup directories"
else
log_debug "No old backups to delete"
fi
}
# Generate backup summary
generate_summary() {
local backup_dir="$1"
local total_dbs="$2"
local successful_dbs="$3"
local failed_dbs="$4"
local total_size=$(du -sh "$backup_dir" 2>/dev/null | cut -f1)
echo ""
log_info "================================================"
log_info "Backup Summary"
log_info "================================================"
log_info "Backup Directory: $backup_dir"
log_info "Total Databases: $total_dbs"
log_info "Successful: $successful_dbs"
log_info "Failed: $failed_dbs"
log_info "Total Size: $total_size"
log_info "Retention: $RETENTION_DAYS days"
log_info "Compression: $([ "$COMPRESS" = true ] && echo "Enabled" || echo "Disabled")"
log_info "================================================"
echo ""
}
# Send notification (optional)
send_notification() {
local status="$1"
local summary="$2"
# Uncomment and configure to enable email notifications
# if command -v mail &> /dev/null; then
# echo "$summary" | mail -s "PostgreSQL Backup $status - $(hostname)" your-email@example.com
# fi
# Uncomment and configure to enable Slack notifications
# if [ -n "$SLACK_WEBHOOK_URL" ]; then
# curl -X POST "$SLACK_WEBHOOK_URL" \
# -H 'Content-Type: application/json' \
# -d "{\"text\":\"PostgreSQL Backup $status\n$summary\"}"
# fi
}
# Main execution
main() {
local start_time=$(date +%s)
log_info "================================================"
log_info "PostgreSQL All Databases Backup Script"
log_info "================================================"
log_info "Host: $DB_HOST:$DB_PORT"
log_info "User: $DB_USER"
log_info "Backup Directory: $BACKUP_DIR"
log_info "Compression: $([ "$COMPRESS" = true ] && echo "Enabled" || echo "Disabled")"
log_info "Retention: $RETENTION_DAYS days"
log_info "================================================"
echo ""
# Perform checks
check_dependencies
test_connection
# Create backup directory
local backup_subdir=$(create_backup_dirs)
# Get list of databases
local databases=$(get_databases)
if [ -z "$databases" ]; then
log_warn "No databases to backup. Exiting."
exit 0
fi
# Backup global objects first
backup_globals "$backup_subdir" || log_warn "Global objects backup failed; continuing with database backups"
# Backup each database
local total_dbs=0
local successful_dbs=0
local failed_dbs=0
while IFS= read -r db; do
# Trim whitespace
db=$(echo "$db" | xargs)
if [ -n "$db" ]; then
# Use $((...)) assignment instead of ((var++)); the latter exits nonzero
# when the pre-increment value is 0 and would abort the script under set -e.
total_dbs=$((total_dbs + 1))
if backup_database "$db" "$backup_subdir"; then
successful_dbs=$((successful_dbs + 1))
else
failed_dbs=$((failed_dbs + 1))
fi
fi
done <<< "$databases"
# Cleanup old backups
cleanup_old_backups
# Calculate total execution time
local end_time=$(date +%s)
local total_duration=$((end_time - start_time))
# Generate summary
generate_summary "$backup_subdir" "$total_dbs" "$successful_dbs" "$failed_dbs"
log_info "Total execution time: ${total_duration}s"
# Send notification
if [ $failed_dbs -gt 0 ]; then
send_notification "COMPLETED WITH ERRORS" "$(generate_summary "$backup_subdir" "$total_dbs" "$successful_dbs" "$failed_dbs")"
exit 1
else
send_notification "SUCCESS" "$(generate_summary "$backup_subdir" "$total_dbs" "$successful_dbs" "$failed_dbs")"
log_info "All backups completed successfully! ✓"
exit 0
fi
}
# Run main function
main


@@ -0,0 +1,74 @@
#!/bin/bash
#
# Basil Backup Script for Standalone PostgreSQL
# Place on database server and run via cron
#
# Cron example (daily at 2 AM):
# 0 2 * * * /path/to/backup-standalone-postgres.sh
set -e
# Configuration
DB_HOST="localhost"
DB_PORT="5432"
DB_NAME="basil"
DB_USER="basil"
BACKUP_DIR="/var/backups/basil"
RETENTION_DAYS=30
# Create backup directories
mkdir -p "$BACKUP_DIR/daily"
mkdir -p "$BACKUP_DIR/weekly"
mkdir -p "$BACKUP_DIR/monthly"
# Timestamp
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
DATE=$(date +%Y%m%d)
DAY_OF_WEEK=$(date +%u) # 1=Monday, 7=Sunday
DAY_OF_MONTH=$(date +%d)
# Daily backup
echo "Starting daily backup: $TIMESTAMP"
DAILY_BACKUP="$BACKUP_DIR/daily/basil-$DATE.sql.gz"
pg_dump -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" "$DB_NAME" | gzip > "$DAILY_BACKUP"
echo "Daily backup completed: $DAILY_BACKUP"
# Weekly backup (on Sundays)
if [ "$DAY_OF_WEEK" -eq 7 ]; then
echo "Creating weekly backup"
WEEK=$(date +%V)
WEEKLY_BACKUP="$BACKUP_DIR/weekly/basil-week$WEEK-$DATE.sql.gz"
cp "$DAILY_BACKUP" "$WEEKLY_BACKUP"
echo "Weekly backup completed: $WEEKLY_BACKUP"
fi
# Monthly backup (on 1st of month)
if [ "$DAY_OF_MONTH" -eq 01 ]; then
echo "Creating monthly backup"
MONTH=$(date +%Y%m)
MONTHLY_BACKUP="$BACKUP_DIR/monthly/basil-$MONTH.sql.gz"
cp "$DAILY_BACKUP" "$MONTHLY_BACKUP"
echo "Monthly backup completed: $MONTHLY_BACKUP"
fi
# Cleanup old backups
echo "Cleaning up old backups..."
find "$BACKUP_DIR/daily" -name "basil-*.sql.gz" -mtime +$RETENTION_DAYS -delete
find "$BACKUP_DIR/weekly" -name "basil-*.sql.gz" -mtime +90 -delete
find "$BACKUP_DIR/monthly" -name "basil-*.sql.gz" -mtime +365 -delete
# Verify backup integrity
echo "Verifying backup integrity..."
if gzip -t "$DAILY_BACKUP"; then
BACKUP_SIZE=$(du -h "$DAILY_BACKUP" | cut -f1)
echo "Backup verification successful. Size: $BACKUP_SIZE"
else
echo "ERROR: Backup verification failed!" >&2
exit 1
fi
# Optional: Send notification (uncomment to enable)
# echo "Basil backup completed successfully on $(hostname) at $(date)" | \
# mail -s "Basil Backup Success" your-email@example.com
echo "Backup process completed successfully"


@@ -131,6 +131,32 @@ EOF
log "Docker Compose override file created"
}
# Apply database migrations using the newly-pulled API image.
# Runs before restart so a failed migration leaves the old containers running.
run_migrations() {
log "Applying database migrations..."
if [ -z "$DATABASE_URL" ]; then
error "DATABASE_URL not set in .env — cannot apply migrations"
exit 1
fi
local API_IMAGE="${DOCKER_REGISTRY}/${DOCKER_USERNAME}/basil-api:${IMAGE_TAG}"
# Use --network=host so the container can reach the same DB host the app uses.
# schema.prisma and migrations/ ship inside the API image.
docker run --rm \
--network host \
-e DATABASE_URL="$DATABASE_URL" \
"$API_IMAGE" \
npx prisma migrate deploy || {
error "Migration failed — aborting deploy. Old containers are still running."
exit 1
}
log "Migrations applied successfully"
}
# Restart containers
restart_containers() {
log "Restarting containers..."
@@ -219,6 +245,7 @@ main() {
login_to_harbor
create_backup
pull_images
run_migrations
update_docker_compose
restart_containers
health_check


@@ -0,0 +1,396 @@
#!/bin/bash
#
# PostgreSQL Database Restore Script
# Restores a single database from backup created by backup-all-postgres-databases.sh
#
# Usage:
# ./restore-postgres-database.sh <backup_file> [options]
#
# Options:
# -h HOST Database host (default: localhost)
# -p PORT Database port (default: 5432)
# -U USER Database user (default: postgres)
# -d DBNAME Target database name (default: extracted from backup filename)
# -f Force restore (skip confirmation)
# -v Verbose output
#
# Examples:
# ./restore-postgres-database.sh /var/backups/postgresql/20260120/mydb_20260120_020001.sql.gz
# ./restore-postgres-database.sh backup.sql -d mydb -f
set -e
set -o pipefail
# Default configuration
DB_HOST="localhost"
DB_PORT="5432"
DB_USER="postgres"
DB_NAME=""
FORCE=false
VERBOSE=false
# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Logging functions
log_info() {
echo -e "${GREEN}[INFO]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1"
}
log_warn() {
echo -e "${YELLOW}[WARN]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1" >&2
}
log_debug() {
if [ "$VERBOSE" = true ]; then
# Write debug output to stderr so it never mixes with values a function
# echoes for command substitution (e.g. create_safety_backup).
echo -e "${BLUE}[DEBUG]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1" >&2
fi
}
# Show usage
show_usage() {
echo "PostgreSQL Database Restore Script"
echo ""
echo "Usage: $0 <backup_file> [options]"
echo ""
echo "Options:"
echo " -h HOST Database host (default: localhost)"
echo " -p PORT Database port (default: 5432)"
echo " -U USER Database user (default: postgres)"
echo " -d DBNAME Target database name (default: extracted from filename)"
echo " -f Force restore (skip confirmation)"
echo " -v Verbose output"
echo " -H Show this help"
echo ""
echo "Examples:"
echo " $0 /var/backups/postgresql/20260120/mydb_20260120_020001.sql.gz"
echo " $0 backup.sql -d mydb -f"
echo ""
}
# Extract database name from backup filename
extract_db_name() {
local filename=$(basename "$1")
# Remove extension(s) and timestamp
# Format: dbname_YYYYMMDD_HHMMSS.sql[.gz]
echo "$filename" | sed -E 's/_[0-9]{8}_[0-9]{6}\.sql(\.gz)?$//'
}
# Check if file is compressed
is_compressed() {
[[ "$1" == *.gz ]]
}
# Verify backup file
verify_backup() {
local backup_file="$1"
log_debug "Verifying backup file: $backup_file"
if [ ! -f "$backup_file" ]; then
log_error "Backup file not found: $backup_file"
exit 1
fi
if [ ! -r "$backup_file" ]; then
log_error "Backup file is not readable: $backup_file"
exit 1
fi
if [ ! -s "$backup_file" ]; then
log_error "Backup file is empty: $backup_file"
exit 1
fi
# Verify compressed file integrity
if is_compressed "$backup_file"; then
log_debug "Verifying gzip integrity..."
if ! gzip -t "$backup_file" 2>/dev/null; then
log_error "Backup file is corrupted (gzip test failed)"
exit 1
fi
fi
log_debug "Backup file verification passed"
}
# Test database connection
test_connection() {
log_debug "Testing database connection..."
if ! psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d postgres -c "SELECT version();" &> /dev/null; then
log_error "Cannot connect to PostgreSQL server at $DB_HOST:$DB_PORT"
log_error "Check credentials, network connectivity, and pg_hba.conf settings"
exit 1
fi
log_debug "Database connection successful"
}
# Check if database exists
database_exists() {
local db_name="$1"
psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d postgres -t -c \
"SELECT 1 FROM pg_database WHERE datname='$db_name';" | grep -q 1
}
# Create safety backup
create_safety_backup() {
local db_name="$1"
local timestamp=$(date +%Y%m%d_%H%M%S)
local safety_file="/tmp/${db_name}_pre-restore_${timestamp}.sql.gz"
# Route log lines to stderr: this function's stdout carries only the
# safety-backup path, so callers can capture it with $(...).
log_info "Creating safety backup before restore..." >&2
if pg_dump -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$db_name" | gzip > "$safety_file"; then
log_info "Safety backup created: $safety_file" >&2
echo "$safety_file"
return 0
else
log_error "Failed to create safety backup"
return 1
fi
}
# Drop and recreate database
recreate_database() {
local db_name="$1"
log_info "Dropping and recreating database: $db_name"
# Terminate existing connections
psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d postgres <<EOF
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = '$db_name' AND pid <> pg_backend_pid();
EOF
# Drop and recreate
psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d postgres <<EOF
DROP DATABASE IF EXISTS $db_name;
CREATE DATABASE $db_name;
EOF
log_debug "Database recreated successfully"
}
# Restore database
restore_database() {
local backup_file="$1"
local db_name="$2"
local status
log_info "Restoring database from: $backup_file"
local start_time=$(date +%s)
# Restore into the target database. Backups here are plain-format
# pg_dump output (no -C), so they contain no CREATE DATABASE or
# \connect statements and must be applied to the recreated database.
if is_compressed "$backup_file"; then
if gunzip -c "$backup_file" | psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$db_name" -v ON_ERROR_STOP=1; then
status="SUCCESS"
else
status="FAILED"
fi
else
if psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$db_name" -f "$backup_file" -v ON_ERROR_STOP=1; then
status="SUCCESS"
else
status="FAILED"
fi
fi
local end_time=$(date +%s)
local duration=$((end_time - start_time))
if [ "$status" = "SUCCESS" ]; then
log_info "✓ Database restore completed in ${duration}s"
return 0
else
log_error "✗ Database restore failed"
return 1
fi
}
# Verify restore
verify_restore() {
local db_name="$1"
log_info "Verifying restored database..."
# Check if database exists
if ! database_exists "$db_name"; then
log_error "Database not found after restore: $db_name"
return 1
fi
# Get table count
local table_count=$(psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$db_name" -t -c \
"SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = 'public';")
table_count=$(echo "$table_count" | xargs)
# Get row count estimate
local row_count=$(psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$db_name" -t -c \
"SELECT SUM(n_live_tup) FROM pg_stat_user_tables;")
row_count=$(echo "$row_count" | xargs)
row_count=${row_count:-0}
log_info "Database: $db_name"
log_info "Tables: $table_count"
log_info "Approximate rows: $row_count"
return 0
}
# Parse command line arguments
BACKUP_FILE=""
while [[ $# -gt 0 ]]; do
case $1 in
-h)
DB_HOST="$2"
shift 2
;;
-p)
DB_PORT="$2"
shift 2
;;
-U)
DB_USER="$2"
shift 2
;;
-d)
DB_NAME="$2"
shift 2
;;
-f)
FORCE=true
shift
;;
-v)
VERBOSE=true
shift
;;
-H)
show_usage
exit 0
;;
-*)
log_error "Unknown option: $1"
show_usage
exit 1
;;
*)
if [ -z "$BACKUP_FILE" ]; then
BACKUP_FILE="$1"
else
log_error "Multiple backup files specified"
show_usage
exit 1
fi
shift
;;
esac
done
# Main execution
main() {
# Check if backup file was provided
if [ -z "$BACKUP_FILE" ]; then
log_error "No backup file specified"
show_usage
exit 1
fi
# Verify backup file
verify_backup "$BACKUP_FILE"
# Extract database name if not provided
if [ -z "$DB_NAME" ]; then
DB_NAME=$(extract_db_name "$BACKUP_FILE")
log_debug "Extracted database name: $DB_NAME"
fi
log_info "================================================"
log_info "PostgreSQL Database Restore"
log_info "================================================"
log_info "Backup File: $BACKUP_FILE"
log_info "Target Database: $DB_NAME"
log_info "Host: $DB_HOST:$DB_PORT"
log_info "User: $DB_USER"
log_info "================================================"
echo ""
# Test connection
test_connection
# Check if database exists
local db_exists=false
if database_exists "$DB_NAME"; then
db_exists=true
log_warn "Database '$DB_NAME' already exists and will be DROPPED"
fi
# Confirmation prompt (unless force flag is set)
if [ "$FORCE" != true ]; then
echo ""
echo -e "${RED}WARNING: This will destroy all current data in database: $DB_NAME${NC}"
echo ""
read -p "Are you sure you want to continue? (type 'yes' to confirm): " CONFIRM
if [ "$CONFIRM" != "yes" ]; then
log_info "Restore cancelled by user"
exit 0
fi
fi
# Create safety backup if database exists
local safety_file=""
if [ "$db_exists" = true ]; then
if ! safety_file=$(create_safety_backup "$DB_NAME"); then
log_error "Aborting restore: could not create a safety backup"
exit 1
fi
fi
# Recreate database
recreate_database "$DB_NAME"
# Restore from backup
if restore_database "$BACKUP_FILE" "$DB_NAME"; then
verify_restore "$DB_NAME"
echo ""
log_info "================================================"
log_info "Restore completed successfully! ✓"
log_info "================================================"
if [ -n "$safety_file" ]; then
echo ""
log_info "A safety backup was created before restore:"
log_info " $safety_file"
echo ""
log_info "To rollback to the previous state, run:"
log_info " $0 $safety_file -d $DB_NAME -f"
echo ""
fi
exit 0
else
log_error "Restore failed!"
if [ -n "$safety_file" ]; then
echo ""
log_warn "You can restore the previous state using:"
log_warn " $0 $safety_file -d $DB_NAME -f"
fi
exit 1
fi
}
# Run main function
main

#!/bin/bash
#
# Basil Restore Script for Standalone PostgreSQL
# Run manually when you need to restore from backup
#
# Usage: ./restore-standalone-postgres.sh /path/to/backup.sql.gz
# Abort on any error, including failures on the left side of a pipeline
# (pg_dump | gzip, gunzip | psql)
set -eo pipefail
# Configuration
DB_HOST="localhost"
DB_PORT="5432"
DB_NAME="basil"
DB_USER="basil"
# Check arguments
if [ $# -eq 0 ]; then
echo "Usage: $0 /path/to/backup.sql.gz"
echo ""
echo "Available backups:"
echo "Daily:"
ls -lh /var/backups/basil/daily/ 2>/dev/null | tail -5
echo ""
echo "Weekly:"
ls -lh /var/backups/basil/weekly/ 2>/dev/null | tail -5
exit 1
fi
BACKUP_FILE="$1"
# Verify backup file exists
if [ ! -f "$BACKUP_FILE" ]; then
echo "ERROR: Backup file not found: $BACKUP_FILE"
exit 1
fi
# Verify backup integrity
echo "Verifying backup integrity..."
if ! gzip -t "$BACKUP_FILE"; then
echo "ERROR: Backup file is corrupted!"
exit 1
fi
# Confirm restore
echo "===== WARNING ====="
echo "This will DESTROY all current data in database: $DB_NAME"
echo "Backup file: $BACKUP_FILE"
echo "Database: $DB_USER@$DB_HOST:$DB_PORT/$DB_NAME"
echo ""
read -p "Are you sure you want to continue? (type 'yes' to confirm): " CONFIRM
if [ "$CONFIRM" != "yes" ]; then
echo "Restore cancelled."
exit 0
fi
# Create backup of current database before restore
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
PRE_RESTORE_BACKUP="/tmp/basil-pre-restore-$TIMESTAMP.sql.gz"
echo "Creating safety backup of current database..."
pg_dump -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" "$DB_NAME" | gzip > "$PRE_RESTORE_BACKUP"
echo "Safety backup created: $PRE_RESTORE_BACKUP"
# Drop and recreate database
echo "Dropping existing database..."
psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" postgres <<EOF
DROP DATABASE IF EXISTS $DB_NAME;
CREATE DATABASE $DB_NAME;
GRANT ALL PRIVILEGES ON DATABASE $DB_NAME TO $DB_USER;
EOF
# Restore from backup
echo "Restoring from backup..."
# ON_ERROR_STOP makes psql exit non-zero on the first SQL error instead
# of continuing, so set -e aborts the script on a partial restore
gunzip -c "$BACKUP_FILE" | psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -v ON_ERROR_STOP=1 "$DB_NAME"
# Verify restore
echo "Verifying restore..."
RECIPE_COUNT=$(psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" "$DB_NAME" -t -c "SELECT COUNT(*) FROM \"Recipe\";" | xargs)
COOKBOOK_COUNT=$(psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" "$DB_NAME" -t -c "SELECT COUNT(*) FROM \"Cookbook\";" | xargs)
echo ""
echo "===== Restore Complete ====="
echo "Recipes: $RECIPE_COUNT"
echo "Cookbooks: $COOKBOOK_COUNT"
echo "Pre-restore backup saved at: $PRE_RESTORE_BACKUP"
echo ""
echo "If something went wrong, you can restore from the safety backup:"
echo " gunzip -c $PRE_RESTORE_BACKUP | psql -h $DB_HOST -U $DB_USER $DB_NAME"