19 Commits

Author SHA1 Message Date
Paul R Kartchner
91146e1219 ci: automate Prisma migrations in pipeline and deploy [skip-deploy]
Some checks failed
Basil CI/CD Pipeline / Shared Package Tests (push) Successful in 3m18s
Basil CI/CD Pipeline / Code Linting (push) Successful in 3m39s
Basil CI/CD Pipeline / Web Tests (push) Successful in 4m1s
Basil CI/CD Pipeline / API Tests (push) Failing after 4m7s
Basil CI/CD Pipeline / Trigger Deployment (push) Has been skipped
Basil CI/CD Pipeline / Security Scanning (push) Successful in 3m42s
Basil CI/CD Pipeline / Build All Packages (push) Has been skipped
Basil CI/CD Pipeline / E2E Tests (push) Has been skipped
Basil CI/CD Pipeline / Build & Push Docker Images (push) Has been skipped
Moves migration handling into the pipeline and production deploy so
schema changes ship atomically with the code that depends on them.
Previously migrations were manual and the migrations/ directory was
gitignored, which caused silent drift between environments.

- Track packages/api/prisma/migrations/ in git (including the baseline
  20260416000000_init and the family-tenant delta).
- Add `prisma:deploy` script that runs `prisma migrate deploy` (the
  non-interactive, CI-safe command). `prisma:migrate` still maps to
  `migrate dev` for local authoring.
- Pipeline test-api and e2e-tests jobs now use `prisma:deploy` and
  test-api adds a drift check (`prisma migrate diff --exit-code`) that
  fails the build if schema.prisma has changes without a corresponding
  migration.
- deploy.sh runs migrations against prod using `docker run --rm` with
  the freshly pulled API image before restarting containers, so a
  failing migration aborts the deploy with the old containers still
  serving traffic.
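
A minimal sketch of that deploy-time flow (image name, env wiring, and the
`DOCKER` override are illustrative assumptions; the real deploy.sh may differ):

```shell
#!/usr/bin/env bash
# Apply migrations from a throwaway container built on the freshly pulled
# API image, and only replace the running containers if that succeeds.
set -u

DOCKER="${DOCKER:-docker}"   # overridable so the flow can be dry-run/tested

run_migrations() {
  # $1 = freshly pulled API image tag (name is illustrative)
  "$DOCKER" run --rm -e DATABASE_URL="$DATABASE_URL" "$1" \
    npx prisma migrate deploy
}

deploy_with_migrations() {
  if ! run_migrations "$1"; then
    echo "migration failed; aborting deploy (old containers keep serving)" >&2
    return 1
  fi
  "$DOCKER" compose up -d   # restart containers only after migrations succeed
}
```

Because the migration runs before `docker compose up`, a failed migration
leaves the previous containers untouched.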

The [skip-deploy] tag avoids re-triggering deployment for this
infrastructure commit; the updated deploy.sh must be pulled to the
production host out-of-band before the next deployment benefits from
the new migration step.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-17 08:16:51 -06:00
Paul R Kartchner
c3e3d66fef feat: add family-based multi-tenant access control
Some checks failed
Basil CI/CD Pipeline / Code Linting (push) Successful in 3m18s
Basil CI/CD Pipeline / Web Tests (push) Successful in 3m31s
Basil CI/CD Pipeline / Security Scanning (push) Has been cancelled
Basil CI/CD Pipeline / API Tests (push) Failing after 3m56s
Basil CI/CD Pipeline / Shared Package Tests (push) Successful in 3m11s
Basil CI/CD Pipeline / Trigger Deployment (push) Has been cancelled
Basil CI/CD Pipeline / Build All Packages (push) Has been cancelled
Basil CI/CD Pipeline / E2E Tests (push) Has been cancelled
Basil CI/CD Pipeline / Build & Push Docker Images (push) Has been cancelled
Introduces Family as the tenant boundary so recipes and cookbooks can be
scoped per household instead of every user seeing everything. Adds a
centralized access filter, an invite/membership UI, and a first-login
prompt to create a family; it also locks down the previously
unauthenticated backup routes to admin only.

- Family and FamilyMember models with OWNER/MEMBER roles; familyId on
  Recipe and Cookbook (ON DELETE SET NULL so deleting a family orphans
  content rather than destroying it).
- access.service.ts composes a single WhereInput covering owner, family,
  PUBLIC visibility, and direct share; admins short-circuit to full
  access.
- recipes/cookbooks routes now require auth, strip client-supplied
  userId/familyId on create, and gate mutations with canMutate checks.
  Auto-filter helpers are scoped to the same family to prevent
  cross-tenant leakage via shared tag names.
- families.routes.ts exposes list/create/get/rename/delete plus
  add/remove member, with last-owner protection on removal.
- FamilyGate component blocks the authenticated UI with a modal if the
  user has zero memberships, prompting them to create their first
  family; Family page provides ongoing management.
- backup.routes.ts now requires admin; it had no auth at all before.
- Bumps version to 2026.04.008 and documents the monotonic PPP counter
  in CLAUDE.md.

Migration SQL is generated locally but not tracked (per existing
.gitignore); apply 20260416010000_add_family_tenant to prod during
deploy. Run backfill-family-tenant.ts once post-migration to assign
existing content to a default owner's family.
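
The monotonic PPP counter mentioned above can be sketched as a tiny helper
(illustrative only; Basil's actual release tooling is not part of this commit):

```shell
# next_version PREV YEAR MONTH -> the next YYYY.MM.PPP tag.
# PPP increments on every deployment and never resets at month boundaries.
next_version() {
  local ppp="${1##*.}"                       # e.g. 007 from 2026.04.007
  printf '%04d.%02d.%03d\n' "$2" "$((10#$3))" "$((10#$ppp + 1))"
}
```

For example, `next_version 2026.04.007 2026 5` yields `2026.05.008` — the
patch number carries over into the new month.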

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-17 08:08:10 -06:00
Paul R Kartchner
fb18caa3c2 feat: add comprehensive PostgreSQL backup and restore scripts
All checks were successful
Basil CI/CD Pipeline / Shared Package Tests (push) Successful in 1m10s
Basil CI/CD Pipeline / Code Linting (push) Successful in 1m18s
Basil CI/CD Pipeline / Web Tests (push) Successful in 1m29s
Basil CI/CD Pipeline / Security Scanning (push) Successful in 1m14s
Basil CI/CD Pipeline / API Tests (push) Successful in 1m45s
Basil CI/CD Pipeline / Trigger Deployment (push) Successful in 12s
Basil CI/CD Pipeline / Build All Packages (push) Successful in 1m31s
Basil CI/CD Pipeline / E2E Tests (push) Has been skipped
Basil CI/CD Pipeline / Build & Push Docker Images (push) Successful in 14m27s
Added production-grade backup and restore scripts for PostgreSQL servers
that can back up all databases automatically, with retention management.

New scripts:
- scripts/backup-all-postgres-databases.sh - Backs up all databases on a
  PostgreSQL server with automatic retention, compression, verification,
  and notification support
- scripts/restore-postgres-database.sh - Restores individual databases
  with safety backups and verification
- scripts/README-POSTGRES-BACKUP.md - Complete documentation with examples,
  best practices, and troubleshooting

Features:
- Automatic detection and backup of all user databases
- Excludes system databases (postgres, template0, template1)
- Backs up global objects (roles, tablespaces)
- Optional gzip compression (80-90% space savings)
- Automatic retention management (configurable days)
- Integrity verification (gzip -t for compressed files)
- Safety backups before restore operations
- Detailed logging with color-coded output
- Backup summary reporting
- Email/Slack notification support (optional)
- Interactive restore with confirmation prompts
- Force mode for automation
- Verbose debugging mode
- Comprehensive error handling
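
As an illustration, the retention step could look like the following, assuming
the date-based directory layout this commit describes (the real script's flags
and variable names may differ):

```shell
#!/usr/bin/env bash
# Prune compressed dumps older than the retention window, then drop any
# date directories (YYYYMMDD) left empty. Paths are illustrative.
set -u
BACKUP_DIR="${BACKUP_DIR:-/var/backups/postgresql}"
RETENTION_DAYS="${RETENTION_DAYS:-30}"

prune_old_backups() {
  find "$BACKUP_DIR" -type f -name '*.sql.gz' -mtime +"$RETENTION_DAYS" -delete
  find "$BACKUP_DIR" -mindepth 1 -type d -empty -delete
}
```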

Backup directory structure:
  /var/backups/postgresql/YYYYMMDD/
    - globals_YYYYMMDD_HHMMSS.sql.gz
    - database1_YYYYMMDD_HHMMSS.sql.gz
    - database2_YYYYMMDD_HHMMSS.sql.gz

Usage examples:
  # Backup all databases with compression
  ./backup-all-postgres-databases.sh -c

  # Custom configuration
  ./backup-all-postgres-databases.sh -h db.server.com -U backup_user -d /mnt/backups -r 60 -c

  # Restore database
  ./restore-postgres-database.sh /var/backups/postgresql/20260120/mydb_20260120_020001.sql.gz

  # Force restore (skip confirmation)
  ./restore-postgres-database.sh backup.sql.gz -d mydb -f

Automation:
  # Add to crontab for daily backups at 2 AM
  0 2 * * * /path/to/backup-all-postgres-databases.sh -c >> /var/log/postgres-backup.log 2>&1
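
The integrity verification feature (`gzip -t`) can be wrapped as a small
helper along these lines (a sketch; the shipped script's output format may
differ):

```shell
#!/usr/bin/env bash
# Verify a compressed dump is readable end-to-end; gzip -t decompresses
# and checks the CRC without writing anything to disk.
verify_backup() {
  if gzip -t "$1" 2>/dev/null; then
    echo "OK: $1"
  else
    echo "CORRUPT: $1" >&2
    return 1
  fi
}
```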

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 21:39:32 -07:00
Paul R Kartchner
883b7820ed docs: add comprehensive database migration and backup documentation
All checks were successful
Basil CI/CD Pipeline / Shared Package Tests (push) Successful in 1m38s
Basil CI/CD Pipeline / Security Scanning (push) Successful in 1m55s
Basil CI/CD Pipeline / Web Tests (push) Successful in 2m9s
Basil CI/CD Pipeline / Build All Packages (push) Successful in 1m31s
Basil CI/CD Pipeline / Code Linting (push) Successful in 1m57s
Basil CI/CD Pipeline / API Tests (push) Successful in 2m34s
Basil CI/CD Pipeline / E2E Tests (push) Has been skipped
Basil CI/CD Pipeline / Build & Push Docker Images (push) Successful in 5m5s
Basil CI/CD Pipeline / Trigger Deployment (push) Successful in 12s
Added a complete guide for migrating from containerized PostgreSQL to a
standalone server, with production-grade backup strategies.

New files:
- docs/DATABASE-MIGRATION-GUIDE.md - Complete migration guide with step-by-step
  instructions, troubleshooting, and rollback procedures
- scripts/backup-standalone-postgres.sh - Automated backup script with daily,
  weekly, and monthly retention policies
- scripts/restore-standalone-postgres.sh - Safe restore script with verification
  and pre-restore safety backup

Features:
- Hybrid backup strategy (PostgreSQL native + Basil API)
- Automated retention policy (30/90/365 days)
- Integrity verification
- Safety backups before restore
- Complete troubleshooting guide
- Rollback procedures

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 15:29:35 -07:00
Paul R Kartchner
0e941db4e6 chore: bump version to 2026.01.006
All checks were successful
Basil CI/CD Pipeline / Code Linting (push) Successful in 1m20s
Basil CI/CD Pipeline / Web Tests (push) Successful in 1m21s
Basil CI/CD Pipeline / Shared Package Tests (push) Successful in 1m16s
Basil CI/CD Pipeline / Security Scanning (push) Successful in 1m37s
Basil CI/CD Pipeline / API Tests (push) Successful in 1m42s
Basil CI/CD Pipeline / E2E Tests (push) Has been skipped
Basil CI/CD Pipeline / Build All Packages (push) Successful in 1m27s
Basil CI/CD Pipeline / Build & Push Docker Images (push) Successful in 5m1s
Basil CI/CD Pipeline / Trigger Deployment (push) Successful in 13s
2026-01-19 21:38:55 -07:00
Paul R Kartchner
8d6ddd7e8f fix: remove conflicting cookbook-count CSS rule causing styling issues
Some checks failed
Basil CI/CD Pipeline / Code Linting (push) Has started running
Basil CI/CD Pipeline / API Tests (push) Has started running
Basil CI/CD Pipeline / Security Scanning (push) Has been cancelled
Basil CI/CD Pipeline / Build All Packages (push) Has been cancelled
Basil CI/CD Pipeline / E2E Tests (push) Has been cancelled
Basil CI/CD Pipeline / Build & Push Docker Images (push) Has been cancelled
Basil CI/CD Pipeline / Trigger Deployment (push) Has been cancelled
Basil CI/CD Pipeline / Shared Package Tests (push) Has been cancelled
Basil CI/CD Pipeline / Web Tests (push) Has been cancelled
2026-01-19 21:37:56 -07:00
Paul R Kartchner
05cf8d7c00 chore: bump version to 2026.01.005
All checks were successful
Basil CI/CD Pipeline / Code Linting (push) Successful in 1m14s
Basil CI/CD Pipeline / Shared Package Tests (push) Successful in 1m29s
Basil CI/CD Pipeline / API Tests (push) Successful in 1m56s
Basil CI/CD Pipeline / Web Tests (push) Successful in 1m36s
Basil CI/CD Pipeline / E2E Tests (push) Has been skipped
Basil CI/CD Pipeline / Build & Push Docker Images (push) Successful in 4m57s
Basil CI/CD Pipeline / Security Scanning (push) Successful in 1m37s
Basil CI/CD Pipeline / Build All Packages (push) Successful in 1m33s
Basil CI/CD Pipeline / Trigger Deployment (push) Successful in 12s
2026-01-19 21:19:43 -07:00
7a02017c69 Merge pull request 'feat: implement responsive column-based styling for all thumbnail cards' (#9) from feature/cookbook-pagination into main
All checks were successful
Basil CI/CD Pipeline / Shared Package Tests (push) Successful in 1m29s
Basil CI/CD Pipeline / Code Linting (push) Successful in 1m41s
Basil CI/CD Pipeline / Web Tests (push) Successful in 1m59s
Basil CI/CD Pipeline / Security Scanning (push) Successful in 1m44s
Basil CI/CD Pipeline / API Tests (push) Successful in 2m4s
Basil CI/CD Pipeline / Build All Packages (push) Successful in 1m34s
Basil CI/CD Pipeline / E2E Tests (push) Has been skipped
Basil CI/CD Pipeline / Build & Push Docker Images (push) Successful in 5m9s
Basil CI/CD Pipeline / Trigger Deployment (push) Successful in 12s
Reviewed-on: #9
2026-01-20 04:05:02 +00:00
Paul R Kartchner
0e611c379e feat: implement responsive column-based styling for all thumbnail cards
All checks were successful
Basil CI/CD Pipeline / Shared Package Tests (pull_request) Successful in 1m10s
Basil CI/CD Pipeline / Code Linting (pull_request) Successful in 1m15s
Basil CI/CD Pipeline / Web Tests (pull_request) Successful in 1m29s
Basil CI/CD Pipeline / API Tests (pull_request) Successful in 1m41s
Basil CI/CD Pipeline / Security Scanning (pull_request) Successful in 1m9s
Basil CI/CD Pipeline / Build All Packages (pull_request) Successful in 1m32s
Basil CI/CD Pipeline / E2E Tests (pull_request) Has been skipped
Basil CI/CD Pipeline / Build & Push Docker Images (pull_request) Has been skipped
Basil CI/CD Pipeline / Trigger Deployment (pull_request) Has been skipped
Implemented consistent responsive styling across all recipe and cookbook thumbnail displays
with column-specific font sizes and description visibility rules.

Changes:
- Added responsive font sizing for 3, 5, 7, and 9 column layouts
- Hide descriptions at 7+ columns to prevent text cutoff
- Show 2-line descriptions for 3 and 5 columns with proper truncation
- Applied consistent card styling (1px border, 8px radius) across all pages
- Updated RecipeList, Cookbooks, and CookbookDetail pages
- Documented all 7 thumbnail display locations in CLAUDE.md

Styling rules:
- Column 3: Larger fonts (0.95rem title, 0.8rem desc, 0.75rem meta)
- Column 5: Medium fonts (0.85rem title, 0.75rem desc, 0.7rem meta)
- Column 7-9: Smallest fonts, descriptions hidden

Pages affected:
- Recipe List (My Recipes)
- Cookbooks page (Recent Recipes section)
- Cookbooks page (Main grid)
- Cookbook Detail (Recipes section)
- Cookbook Detail (Nested cookbooks)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 17:00:16 -07:00
Paul R Kartchner
a20dfd848c feat: unify all card styling to match working RecipeList pattern
Using RecipeList.css as the gold standard, this change applies consistent
styling across all cookbook and recipe card locations in Basil.

## Changes Summary

All cards now use RecipeList.css pattern:
- aspect-ratio: 1 / 1 (square cards)
- Image: height: 60% (not aspect-ratio 16/9)
- Padding: 0.5rem (not 1.25rem)
- Title: 0.75rem, 2-line clamp
- Description: 0.65rem, 1-line clamp
- Meta/stats: 0.6rem
- Tags: 0.55-0.6rem with minimal padding

## Files Updated

### CookbookDetail.css
**Recipes section:**
- Title: 0.9rem → 0.75rem, single-line → 2-line clamp
- Description: 0.75rem → 0.65rem
- Meta: 0.7rem → 0.6rem
- Tags: 0.65rem → 0.55rem with smaller padding

**Nested/included cookbooks section:**
- Title: 0.9rem → 0.75rem, nowrap → 2-line clamp
- Stats: 0.7rem → 0.6rem
- Cover placeholder: 2.5rem icon
- Padding: 0.5rem

### Cookbooks.css
**Main cookbook cards:**
- Title: 0.9rem → 0.75rem, nowrap → 2-line clamp
- Stats: 0.7rem → 0.6rem
- Cover: height 50%, 2.5rem icon
- Padding: 0.5rem

**Recent recipes section:**
- Card: height: 100% → aspect-ratio: 1/1
- Image: aspect-ratio: 16/9 → height: 60%
- Placeholder icon: 4rem → 3rem
- Padding: 1.25rem → 0.5rem
- Title: 1.2rem → 0.75rem
- Description: 0.9rem → 0.65rem, 2-line → 1-line clamp
- Meta: 0.85rem → 0.6rem

## Result

All cookbook and recipe displays now have:
- Consistent square cards across all column counts (3, 5, 7, 9)
- No text cutoff: all titles fit within 2 lines
- Proper text scaling at all column counts
- Same visual appearance as the working RecipeList page

## Locations Fixed

1. All Recipes page (/recipes) - Already working 
2. My Cookbooks page (/cookbooks) - Fixed 
3. Cookbook Detail: Nested cookbooks - Fixed 
4. Cookbook Detail: Recipes section - Fixed 

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 10:03:32 -07:00
Paul R Kartchner
f1e790bb35 fix: make nested cookbooks and recipes truly square with compact text
Nested Cookbook Cards:
- Reduced cover height to 50% and icon to 2.5rem
- Reduced padding to 0.5rem
- Made title single-line with nowrap (0.9rem font)
- Reduced stats font to 0.7rem
- Hidden description to save space
- Added overflow: hidden to prevent text spillover

Recipe Cards in Cookbook Detail:
- Changed from height: 100% to aspect-ratio: 1/1 for square shape
- Changed image from aspect-ratio 16/9 to height: 60%
- Reduced placeholder icon from 4rem to 3rem
- Reduced padding from 1.25rem to 0.5rem
- Made title single-line with nowrap (0.9rem font, was 1.2rem)
- Reduced description to 1 line clamp (0.75rem font, was 0.9rem)
- Reduced meta font to 0.7rem (was 0.85rem)
- Made tags smaller: 0.65rem font with reduced padding
- Added overflow: hidden to recipe-info

Result: Both nested cookbooks and recipes display as proper
squares with no text overflow, maintaining proportions across
all column settings (3, 5, 7, 9).

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 08:58:38 -07:00
Paul R Kartchner
33a857c456 feat: make nested cookbooks responsive and redesign compact toolbar UI
Nested Cookbooks Fix:
- Added dynamic gridStyle to .cookbooks-grid in CookbookDetail.tsx
- Removed hardcoded 5-column grid from CSS, now respects column selector
- Nested cookbooks now respond to column count changes (3, 5, 7, 9)

Toolbar UI Redesign (CookbookDetail.css & Cookbooks.css):
- Reduced toolbar padding from 1.5rem to 0.75rem 1rem
- Changed alignment from flex-end to center for cleaner layout
- Made buttons more compact:
  - Reduced padding to 0.35rem 0.6rem (was 0.5rem 0.75rem)
  - Reduced font size to 0.8rem (was 0.9rem)
  - Reduced min-width to 2rem (was 2.5rem)
- Grouped buttons with subtle border styling instead of individual borders
- Reduced gaps between controls from 2rem/1.5rem to 1.5rem/1rem
- Made labels smaller and lighter weight (0.8rem, 500 weight)
- Updated page navigation with lighter borders and subtle hover states
- Changed colors to more subtle grays (#d0d0d0, #555) instead of bold green
- Reduced box-shadow for subtler appearance
- Added 1px border for better definition

Result: Consistent, compact, user-friendly controls across all recipe
and cookbook list pages.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-19 08:53:16 -07:00
Paul R Kartchner
766307050c fix: add grid layout for nested cookbooks in cookbook detail
Added proper grid styling for included/nested cookbooks:
- Added .cookbooks-grid with 5-column grid layout and 1.5rem gap
- Made .cookbook-card.nested explicitly square with aspect-ratio: 1/1
- Added flexbox display to nested cards for proper content layout
- Added responsive mobile styling (1 column on mobile)

This prevents nested cookbooks from displaying as huge squares
that don't respect column sizing.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-18 23:51:32 -07:00
Paul R Kartchner
822dd036d4 fix: restore square aspect ratio for recipe cards
Reverted recipe cards on All Recipes page back to square:
- Restored aspect-ratio: 1 / 1 on .recipe-card
- Changed image from aspect-ratio: 16/9 back to height: 60%

This ensures recipe cards match the square appearance of
cookbook cards and don't display as tall rectangles.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-18 23:23:24 -07:00
Paul R Kartchner
41789fee80 fix: make cookbook cards truly square with aggressive sizing
Changes to force square aspect ratio:
- Reduced cover from 60% to 50% height
- Reduced icon from 4rem to 2.5rem
- Reduced padding from 0.75rem to 0.5rem
- Changed title from 2-line clamp to single line with nowrap
- Reduced title font from 1rem to 0.9rem
- Reduced recipe/cookbook count from 0.8rem to 0.7rem
- Added overflow:hidden to cookbook-info
- Hidden cookbook tags completely
- Styled cookbook-stats container for compact display

These aggressive reductions ensure all content fits within
the 1:1 aspect ratio without expanding the card vertically.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-18 21:52:00 -07:00
Paul R Kartchner
4633f7c0cc fix: make cookbook cards square and more compact
Changes to cookbook cards:
- Set aspect-ratio: 1 / 1 on cards to maintain square shape
- Changed cover height from 16:9 ratio to 60% fixed height
- Hidden description to reduce card height
- Reduced padding from 1.25rem to 0.75rem
- Reduced title font from 1.3rem to 1rem
- Reduced recipe count font from 0.9rem to 0.8rem

This makes cookbook cards display as squares similar to recipe cards,
preventing them from becoming too tall.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-18 21:46:22 -07:00
Paul R Kartchner
4ce62d5d3e fix: improve card sizing consistency across all pages
- Use flexbox layout with height: 100% for all cards
- Replace fixed heights with aspect-ratio: 16/9 for images
- Add text clamping (2 lines) for titles and descriptions
- Use margin-top: auto to push metadata to bottom
- Ensures cards maintain proportional box shapes

Files updated:
- Cookbooks.css: cookbook and recipe cards
- CookbookDetail.css: recipe cards
- RecipeList.css: recipe cards (removed 1:1 aspect ratio)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-18 21:41:37 -07:00
Paul R Kartchner
70c9f8b751 feat: add pagination and column controls to My Cookbooks page
- Add pagination controls (12, 24, 48, All items per page)
- Add column count selector (3, 5, 7, 9 columns)
- Add prev/next page navigation
- Save preferences to localStorage
- Update URL params for page and limit
- Add responsive toolbar styling
- Show results count with pagination info
- Match UI/UX of All Recipes and Cookbook Detail pages
2026-01-18 21:33:27 -07:00
Paul R Kartchner
be98d20713 feat: add pagination and column controls to cookbook detail page
- Add pagination controls (12, 24, 48, All items per page)
- Add column count selector (3, 5, 7, 9 columns)
- Add prev/next page navigation
- Save preferences to localStorage
- Update URL params for page and limit
- Add responsive toolbar styling
- Match UI/UX of All Recipes page
2026-01-17 08:06:27 -07:00
37 changed files with 4906 additions and 169 deletions


@@ -87,8 +87,17 @@ jobs:
- name: Generate Prisma Client
run: cd packages/api && npm run prisma:generate
- name: Run database migrations
run: cd packages/api && npm run prisma:migrate
- name: Apply database migrations
run: cd packages/api && npm run prisma:deploy
env:
DATABASE_URL: postgresql://basil:basil@postgres:5432/basil_test?schema=public
- name: Check for schema drift
run: |
cd packages/api && npx prisma migrate diff \
--from-url "$DATABASE_URL" \
--to-schema-datamodel ./prisma/schema.prisma \
--exit-code && echo "✓ schema.prisma matches applied migrations"
env:
DATABASE_URL: postgresql://basil:basil@postgres:5432/basil_test?schema=public
@@ -276,8 +285,8 @@ jobs:
- name: Build application
run: npm run build
- name: Run database migrations
run: cd packages/api && npm run prisma:migrate
- name: Apply database migrations
run: cd packages/api && npm run prisma:deploy
env:
DATABASE_URL: postgresql://basil:basil@postgres:5432/basil?schema=public

2
.gitignore vendored

@@ -62,5 +62,5 @@ backups/
docker-compose.override.yml
# Prisma
packages/api/prisma/migrations/
# Migrations are tracked. Applied automatically by deploy.sh (via `prisma migrate deploy`).
# Pipeline Test

107
CLAUDE.md

@@ -279,13 +279,13 @@ Basil includes a complete CI/CD pipeline with Gitea Actions for automated testin
Basil uses calendar versioning with the format: `YYYY.MM.PPP`
- `YYYY` - Four-digit year (e.g., 2026)
- `MM` - Two-digit month with zero-padding (e.g., 01 for January, 12 for December)
- `PPP` - Three-digit patch number with zero-padding that increases with each deployment in a month
- `PPP` - Three-digit patch number with zero-padding that increases with every deployment. **Does not reset at month boundaries** — it is a monotonically increasing counter across the lifetime of the project.
### Examples
- `2026.01.001` - First deployment in January 2026
- `2026.01.002` - Second deployment in January 2026
- `2026.02.001` - First deployment in February 2026 (patch resets to 001)
- `2026.02.003` - Third deployment in February 2026
- `2026.01.006` - Sixth deployment (in January 2026)
- `2026.04.007` - Seventh deployment (in April 2026 — patch continues from previous month, does not reset)
- `2026.04.008` - Eighth deployment (still in April 2026)
- `2026.05.009` - Ninth deployment (in May 2026 — patch continues, does not reset)
### Version Update Process
When deploying to production:
@@ -324,3 +324,100 @@ The current version is displayed in:
- API: `GET /api/version` endpoint returns `{ version: '2026.01.002' }`
- Web: Footer or about section shows current version
- Both packages export `APP_VERSION` constant for internal use
## UI Design System - Thumbnail Cards
### Responsive Column Layout System
All recipe and cookbook thumbnail displays support a responsive column system (3, 5, 7, or 9 columns) with column-specific styling for optimal readability at different densities.
**Column-Responsive Font Sizes:**
- **Column 3** (Largest cards): Title 0.95rem, Description 0.8rem (2 lines), Meta 0.75rem
- **Column 5** (Medium cards): Title 0.85rem, Description 0.75rem (2 lines), Meta 0.7rem
- **Column 7** (Compact): Title 0.75rem, Description hidden, Meta 0.6rem
- **Column 9** (Most compact): Title 0.75rem, Description hidden, Meta 0.6rem
**Implementation Pattern:**
1. Add `gridClassName = \`recipes-grid columns-${columnCount}\`` or `\`cookbooks-grid columns-${columnCount}\``
2. Apply className to grid container: `<div className={gridClassName} style={gridStyle}>`
3. Use column-specific CSS selectors: `.columns-3 .recipe-info h3 { font-size: 0.95rem; }`
### Recipe Thumbnail Display Locations
All locations use square aspect ratio (1:1) cards with 60% image height.
1. **Recipe List Page** (`packages/web/src/pages/RecipeList.tsx`)
- Class: `recipe-grid-enhanced columns-{3|5|7|9}`
- CSS: `packages/web/src/styles/RecipeList.css`
- Features: Main recipe browsing with pagination, search, filtering
- Displays: Image, title, description, time, rating
- Status: ✅ Responsive column styling applied
2. **Cookbooks Page - Recent Recipes** (`packages/web/src/pages/Cookbooks.tsx`)
- Class: `recipes-grid columns-{3|5|7|9}`
- CSS: `packages/web/src/styles/Cookbooks.css`
- Features: Shows 6 most recent recipes below cookbook list
- Displays: Image, title, description, time, rating
- Status: ✅ Responsive column styling applied
3. **Cookbook Detail - Recipes Section** (`packages/web/src/pages/CookbookDetail.tsx`)
- Class: `recipes-grid columns-{3|5|7|9}`
- CSS: `packages/web/src/styles/CookbookDetail.css`
- Features: Paginated recipes within a cookbook, with remove button
- Displays: Image, title, description, time, rating, remove button
- Status: ✅ Responsive column styling applied
4. **Add Meal Modal - Recipe Selection** (`packages/web/src/components/meal-planner/AddMealModal.tsx`)
- Class: `recipe-list` with `recipe-item`
- CSS: `packages/web/src/styles/AddMealModal.css`
- Features: Selectable recipe list for adding to meal plan
- Displays: Small thumbnail, title, description
- Status: ⚠️ Needs responsive column styling review
5. **Meal Card Component** (`packages/web/src/components/meal-planner/MealCard.tsx`)
- Class: `meal-card` with `meal-card-image`
- CSS: `packages/web/src/styles/MealCard.css`
- Features: Recipe thumbnail in meal planner (compact & full views)
- Displays: Recipe image as part of meal display
- Status: ⚠️ Different use case - calendar/list view, not grid-based
### Cookbook Thumbnail Display Locations
All locations use square aspect ratio (1:1) cards with 50% image height.
1. **Cookbooks Page - Main Grid** (`packages/web/src/pages/Cookbooks.tsx`)
- Class: `cookbooks-grid`
- CSS: `packages/web/src/styles/Cookbooks.css`
- Features: Main cookbook browsing with pagination
- Displays: Cover image, name, recipe count, cookbook count
- Status: ✅ Already has compact styling (description/tags hidden)
- Note: Could benefit from column-responsive font sizes
2. **Cookbook Detail - Nested Cookbooks** (`packages/web/src/pages/CookbookDetail.tsx`)
- Class: `cookbooks-grid` with `cookbook-card nested`
- CSS: `packages/web/src/styles/CookbookDetail.css`
- Features: Child cookbooks within parent cookbook
- Displays: Cover image, name, recipe count, cookbook count
- Status: ✅ Already has compact styling (description/tags hidden)
- Note: Could benefit from column-responsive font sizes
### Key CSS Classes
- `recipe-card` - Individual recipe card
- `recipe-grid-enhanced` or `recipes-grid` - Recipe grid container
- `cookbook-card` - Individual cookbook card
- `cookbooks-grid` - Cookbook grid container
- `columns-{3|5|7|9}` - Dynamic column count modifier class
### Styling Consistency Rules
1. **Image Heights**: Recipes 60%, Cookbooks 50%
2. **Aspect Ratio**: All cards are square (1:1)
3. **Border**: 1px solid #e0e0e0 (not box-shadow)
4. **Border Radius**: 8px
5. **Hover Effect**: `translateY(-2px)` with `box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1)`
6. **Description Display**:
- Columns 3 & 5: Show 2 lines
- Columns 7 & 9: Hide completely
7. **Font Scaling**: Larger fonts for fewer columns, smaller for more columns
8. **Text Truncation**: Use `-webkit-line-clamp` with `text-overflow: ellipsis`


@@ -0,0 +1,465 @@
# Database Migration Guide: Container → Standalone PostgreSQL
This guide covers migrating Basil from containerized PostgreSQL to a standalone PostgreSQL server and setting up production-grade backups.
## Table of Contents
1. [Why Migrate?](#why-migrate)
2. [Pre-Migration Checklist](#pre-migration-checklist)
3. [Migration Steps](#migration-steps)
4. [Backup Strategy](#backup-strategy)
5. [Testing & Verification](#testing--verification)
6. [Rollback Plan](#rollback-plan)
---
## Why Migrate?
### Standalone PostgreSQL Advantages
- ✅ Dedicated database resources (no competition with app containers)
- ✅ Standard PostgreSQL backup/restore tools
- ✅ Point-in-time recovery (PITR) capabilities
- ✅ Better monitoring and administration
- ✅ Industry best practice for production
- ✅ Easier to scale independently
### When to Keep Containerized
- Local development environments
- Staging/test environments
- Simple single-server deployments
- Environments where simplicity > resilience
---
## Pre-Migration Checklist
- [ ] Standalone PostgreSQL server is installed and accessible
- [ ] PostgreSQL version is 13 or higher (check: `psql --version`)
- [ ] Network connectivity from app server to DB server
- [ ] Firewall rules allow PostgreSQL port (default: 5432)
- [ ] You have PostgreSQL superuser credentials
- [ ] Current Basil data is backed up
- [ ] Maintenance window scheduled (expect ~15-30 min downtime)
---
## Migration Steps
### Step 1: Create Backup of Current Data
**Option A: Use Basil's Built-in API (Recommended)**
```bash
# Create full backup (database + uploaded images)
curl -X POST http://localhost:3001/api/backup
# List available backups
curl http://localhost:3001/api/backup
# Download the latest backup
curl -O http://localhost:3001/api/backup/basil-backup-YYYY-MM-DDTHH-MM-SS.zip
```
**Option B: Direct PostgreSQL Dump**
```bash
# From container
docker exec basil-postgres pg_dump -U basil basil > /tmp/basil_migration.sql
# Verify backup
head -20 /tmp/basil_migration.sql
```
### Step 2: Prepare Standalone PostgreSQL Server
SSH into your PostgreSQL server:
```bash
ssh your-postgres-server
# Switch to postgres user
sudo -u postgres psql
```
Create database and user:
```sql
-- Create database
CREATE DATABASE basil;
-- Create user with password
CREATE USER basil WITH ENCRYPTED PASSWORD 'your-secure-password-here';
-- Grant privileges
GRANT ALL PRIVILEGES ON DATABASE basil TO basil;
-- Connect to basil database
\c basil
-- Grant schema permissions
GRANT ALL ON SCHEMA public TO basil;
-- Exit
\q
```
**Security Best Practices:**
```bash
# Generate strong password
openssl rand -base64 32
# Store in password manager or .pgpass file
echo "your-postgres-server:5432:basil:basil:your-password" >> ~/.pgpass
chmod 600 ~/.pgpass
```
### Step 3: Update Firewall Rules
On PostgreSQL server:
```bash
# Allow app server to connect
sudo ufw allow from <app-server-ip> to any port 5432
# Or edit pg_hba.conf
sudo nano /etc/postgresql/15/main/pg_hba.conf
```
Add line:
```
host basil basil <app-server-ip>/32 scram-sha-256
```
Reload PostgreSQL:
```bash
sudo systemctl reload postgresql
```
### Step 4: Test Connectivity
From app server:
```bash
# Test connection
psql -h your-postgres-server -U basil -d basil -c "SELECT version();"
# Should show PostgreSQL version
```
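If `psql` is not installed on the app server, a raw TCP check can still confirm that the firewall and listener are reachable. A bash-only sketch using the `/dev/tcp` pseudo-device (hypothetical helper, requires bash):

```shell
# wait_for_port: poll host:port until a TCP connection succeeds, using
# bash's /dev/tcp pseudo-device. Returns non-zero after <tries> attempts.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-10} i
  for ((i = 1; i <= tries; i++)); do
    # Opening fd 3 in a subshell attempts the connection, then closes it.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Example:
# wait_for_port your-postgres-server 5432 5 || echo "port 5432 unreachable"
```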
### Step 5: Update Basil Configuration
**On app server**, update environment configuration:
```bash
# Edit .env file
cd /srv/docker-compose/basil
nano .env
```
Add or update:
```bash
DATABASE_URL=postgresql://basil:your-password@your-postgres-server-ip:5432/basil?schema=public
```
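If the database password contains URL-reserved characters (`@`, `:`, `/`, `#`), they must be percent-encoded inside `DATABASE_URL`, or the connection string will be mis-parsed. A small bash helper (illustrative, not shipped with Basil):

```shell
# urlencode: percent-encode a string for safe embedding in a connection URL.
# Unreserved characters (RFC 3986) pass through; everything else is encoded.
urlencode() {
  local s=$1 out='' c i
  for ((i = 0; i < ${#s}; i++)); do
    c=${s:i:1}
    case $c in
      [a-zA-Z0-9.~_-]) out+=$c ;;
      *) printf -v c '%%%02X' "'$c"; out+=$c ;;
    esac
  done
  printf '%s\n' "$out"
}

# Example:
# DATABASE_URL="postgresql://basil:$(urlencode 'p@ss:word')@db-host:5432/basil?schema=public"
```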
**Update docker-compose.yml:**
```yaml
services:
  api:
    environment:
      - DATABASE_URL=${DATABASE_URL}
      # ... other variables

  # Comment out the postgres service:
  # postgres:
  #   image: postgres:15
  #   ...
```
### Step 6: Run Prisma Migrations
This creates the schema on your new database:
```bash
cd /home/pkartch/development/basil/packages/api
# Generate Prisma client
npm run prisma:generate
# Deploy migrations (non-interactive; maps to `prisma migrate deploy`)
npm run prisma:deploy
```
### Step 7: Restore Data
**Option A: Use Basil's Restore API**
```bash
# Copy backup to server (if needed)
scp basil-backup-*.zip app-server:/tmp/
# Restore via API
curl -X POST http://localhost:3001/api/backup/restore \
  -F "backup=@/tmp/basil-backup-YYYY-MM-DDTHH-MM-SS.zip"
```
**Option B: Direct PostgreSQL Restore**
```bash
# Copy SQL dump to DB server
scp /tmp/basil_migration.sql your-postgres-server:/tmp/
# On PostgreSQL server
psql -h localhost -U basil basil < /tmp/basil_migration.sql
```
### Step 8: Restart Application
```bash
cd /srv/docker-compose/basil
./dev-rebuild.sh
# Or
docker-compose down
docker-compose up -d
```
### Step 9: Verify Migration
```bash
# Check API logs
docker-compose logs api | grep -i "database\|connected"
# Test API
curl http://localhost:3001/api/recipes
curl http://localhost:3001/api/cookbooks
# Check database directly
psql -h your-postgres-server -U basil basil -c "SELECT COUNT(*) FROM \"Recipe\";"
psql -h your-postgres-server -U basil basil -c "SELECT COUNT(*) FROM \"Cookbook\";"
```
---
## Backup Strategy
### Daily Automated Backups
**On PostgreSQL server:**
```bash
# Copy backup script to server
scp scripts/backup-standalone-postgres.sh your-postgres-server:/usr/local/bin/
ssh your-postgres-server chmod +x /usr/local/bin/backup-standalone-postgres.sh
# Set up cron job
ssh your-postgres-server
sudo crontab -e
```
Add:
```cron
# Daily backup at 2 AM
0 2 * * * /usr/local/bin/backup-standalone-postgres.sh >> /var/log/basil-backup.log 2>&1
```
### Weekly Application Backups
**On app server:**
```bash
sudo crontab -e
```
Add:
```cron
# Weekly full backup (DB + images) on Sundays at 3 AM
0 3 * * 0 curl -X POST http://localhost:3001/api/backup >> /var/log/basil-api-backup.log 2>&1
```
### Off-Site Backup Sync
**Set up rsync to NAS or remote server:**
```bash
# On PostgreSQL server
sudo crontab -e
```
Add:
```cron
# Sync backups to NAS at 4 AM
0 4 * * * rsync -av /var/backups/basil/ /mnt/nas/backups/basil/ >> /var/log/basil-sync.log 2>&1
# Optional: Upload to S3
0 5 * * * aws s3 sync /var/backups/basil/ s3://your-bucket/basil-backups/ --storage-class GLACIER >> /var/log/basil-s3.log 2>&1
```
### Backup Retention
The backup script automatically maintains:
- **Daily backups:** 30 days
- **Weekly backups:** 90 days (12 weeks)
- **Monthly backups:** 365 days (12 months)
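The retention tiers are implemented by `backup-standalone-postgres.sh`; as a rough sketch of the pruning pass it performs (directory layout assumed from this guide):

```shell
# Sketch of the retention pruning logic: delete backups older than each
# tier's window. The shipped backup-standalone-postgres.sh implements the
# real version.
prune_tier() {  # usage: prune_tier <directory> <max-age-days>
  [ -d "$1" ] && find "$1" -name 'basil-*.sql.gz' -mtime "+$2" -delete
  return 0
}

BACKUP_ROOT=${BACKUP_ROOT:-/var/backups/basil}
prune_tier "$BACKUP_ROOT/daily"   30
prune_tier "$BACKUP_ROOT/weekly"  90
prune_tier "$BACKUP_ROOT/monthly" 365
```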
---
## Testing & Verification
### Test Backup Process
```bash
# Run backup manually
/usr/local/bin/backup-standalone-postgres.sh
# Verify backup exists
ls -lh /var/backups/basil/daily/
# Test backup integrity
gzip -t /var/backups/basil/daily/basil-*.sql.gz
```
### Test Restore Process
**On a test server (NOT production!):**
```bash
# Copy restore script
scp scripts/restore-standalone-postgres.sh test-server:/tmp/
# Run restore
chmod +x /tmp/restore-standalone-postgres.sh
/tmp/restore-standalone-postgres.sh /var/backups/basil/daily/basil-YYYYMMDD.sql.gz
```
### Monitoring
**Set up monitoring checks:**
```bash
# Check backup file age (should be < 24 hours)
find /var/backups/basil/daily/ -name "basil-*.sql.gz" -mtime -1 | grep -q . || echo "ALERT: No recent backup!"
# Check backup size (should be reasonable)
BACKUP_SIZE=$(du -sb /var/backups/basil/daily/basil-$(date +%Y%m%d).sql.gz 2>/dev/null | cut -f1)
if [ "${BACKUP_SIZE:-0}" -lt 1000000 ]; then
  echo "ALERT: Backup missing or suspiciously small!"
fi
```
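For unattended use, the same checks can be wrapped in a function that returns non-zero, so cron's `MAILTO` or an alerting wrapper catches failures. A sketch; the 1 MB floor and filename pattern are assumptions carried over from this guide:

```shell
# check_backup_health: cron-friendly variant of the freshness and size
# checks; returns non-zero on failure so a wrapper can alert.
check_backup_health() {  # usage: check_backup_health <daily-dir> [min-bytes]
  local dir=$1 min=${2:-1000000} latest size
  latest=$(find "$dir" -name 'basil-*.sql.gz' -mtime -1 2>/dev/null | head -n 1)
  if [ -z "$latest" ]; then
    echo "ALERT: no backup newer than 24h in $dir" >&2
    return 1
  fi
  size=$(wc -c < "$latest")
  if [ "$size" -lt "$min" ]; then
    echo "ALERT: $latest is only $size bytes" >&2
    return 1
  fi
  echo "OK: $latest ($size bytes)"
}

# Example (notification command is hypothetical):
# check_backup_health /var/backups/basil/daily || mail -s "Basil backup alert" admin@example.com
```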
---
## Rollback Plan
If the migration fails, you can quickly roll back:
### Quick Rollback to Containerized PostgreSQL
```bash
cd /srv/docker-compose/basil
# 1. Restore old docker-compose.yml (uncomment postgres service)
nano docker-compose.yml
# 2. Remove DATABASE_URL override
nano .env # Comment out or remove DATABASE_URL
# 3. Restart with containerized database
docker-compose down
docker-compose up -d
# 4. Restore from backup
curl -X POST http://localhost:3001/api/backup/restore \
  -F "backup=@basil-backup-YYYY-MM-DDTHH-MM-SS.zip"
```
### Data Recovery
If you need to recover data from standalone server after rollback:
```bash
# Dump from standalone server
ssh your-postgres-server
pg_dump -U basil basil > /tmp/basil_recovery.sql
# Import to containerized database
docker exec -i basil-postgres psql -U basil basil < /tmp/basil_recovery.sql
```
---
## Troubleshooting
### Connection Issues
**Error: "Connection refused"**
```bash
# Check PostgreSQL is listening on network
sudo netstat -tlnp | grep 5432
# Verify postgresql.conf
grep "listen_addresses" /etc/postgresql/*/main/postgresql.conf
# Should be: listen_addresses = '*'
# Restart PostgreSQL
sudo systemctl restart postgresql
```
**Error: "Authentication failed"**
```bash
# Verify user exists
psql -U postgres -c "\du basil"
# Reset password
psql -U postgres -c "ALTER USER basil WITH PASSWORD 'new-password';"
# Check pg_hba.conf authentication method
sudo cat /etc/postgresql/*/main/pg_hba.conf | grep basil
```
### Migration Issues
**Error: "Relation already exists"**
```bash
# Drop and recreate database (WARNING: destroys all data in it;
# make sure you have a verified backup first)
psql -U postgres -c "DROP DATABASE basil;"
psql -U postgres -c "CREATE DATABASE basil;"
psql -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE basil TO basil;"
# Re-run migrations
cd packages/api
npm run prisma:deploy
```
**Error: "Foreign key constraint violation"**
Note that plain SQL dumps (as created in Step 1) must be restored with `psql`; `pg_restore` only reads custom-format dumps. If ownership or ordering statements cause constraint errors, re-dump in custom format and restore while skipping owner/privilege statements:
```bash
# Create a custom-format dump, then restore it without owner/privilege statements
pg_dump -Fc -U basil basil > /tmp/basil_migration.dump
pg_restore --no-owner --no-privileges -U basil -d basil /tmp/basil_migration.dump
```
---
## Additional Resources
- [PostgreSQL Backup Documentation](https://www.postgresql.org/docs/current/backup.html)
- [Prisma Migration Guide](https://www.prisma.io/docs/concepts/components/prisma-migrate)
- [Docker PostgreSQL Volume Management](https://docs.docker.com/storage/volumes/)
---
## Summary Checklist
Post-migration verification:
- [ ] Application connects to standalone PostgreSQL
- [ ] All recipes visible in UI
- [ ] All cookbooks visible in UI
- [ ] Recipe import works
- [ ] Image uploads work
- [ ] Daily backups running
- [ ] Weekly API backups running
- [ ] Backup integrity verified
- [ ] Restore process tested (on test server)
- [ ] Monitoring alerts configured
- [ ] Old containerized database backed up (for safety)
- [ ] Documentation updated with new DATABASE_URL
**Congratulations! You've successfully migrated to standalone PostgreSQL! 🎉**

View File

@@ -13,6 +13,7 @@
"test:coverage": "vitest run --coverage",
"prisma:generate": "prisma generate",
"prisma:migrate": "prisma migrate dev",
"prisma:deploy": "prisma migrate deploy",
"prisma:studio": "prisma studio",
"create-admin": "tsx src/scripts/create-admin.ts",
"lint": "eslint src --ext .ts"

View File

@@ -0,0 +1,455 @@
-- CreateEnum
CREATE TYPE "TokenType" AS ENUM ('EMAIL_VERIFICATION', 'PASSWORD_RESET');
-- CreateEnum
CREATE TYPE "Role" AS ENUM ('USER', 'ADMIN');
-- CreateEnum
CREATE TYPE "Visibility" AS ENUM ('PRIVATE', 'SHARED', 'PUBLIC');
-- CreateEnum
CREATE TYPE "MealType" AS ENUM ('BREAKFAST', 'LUNCH', 'DINNER', 'SNACK', 'DESSERT', 'OTHER');
-- CreateTable
CREATE TABLE "User" (
"id" TEXT NOT NULL,
"email" TEXT NOT NULL,
"username" TEXT,
"passwordHash" TEXT,
"name" TEXT,
"avatar" TEXT,
"provider" TEXT NOT NULL DEFAULT 'local',
"providerId" TEXT,
"role" "Role" NOT NULL DEFAULT 'USER',
"emailVerified" BOOLEAN NOT NULL DEFAULT false,
"emailVerifiedAt" TIMESTAMP(3),
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL,
CONSTRAINT "User_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "VerificationToken" (
"id" TEXT NOT NULL,
"userId" TEXT NOT NULL,
"token" TEXT NOT NULL,
"type" "TokenType" NOT NULL,
"expiresAt" TIMESTAMP(3) NOT NULL,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "VerificationToken_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "RefreshToken" (
"id" TEXT NOT NULL,
"userId" TEXT NOT NULL,
"token" TEXT NOT NULL,
"expiresAt" TIMESTAMP(3) NOT NULL,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "RefreshToken_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "Recipe" (
"id" TEXT NOT NULL,
"title" TEXT NOT NULL,
"description" TEXT,
"prepTime" INTEGER,
"cookTime" INTEGER,
"totalTime" INTEGER,
"servings" INTEGER,
"imageUrl" TEXT,
"sourceUrl" TEXT,
"author" TEXT,
"cuisine" TEXT,
"categories" TEXT[] DEFAULT ARRAY[]::TEXT[],
"rating" DOUBLE PRECISION,
"userId" TEXT,
"visibility" "Visibility" NOT NULL DEFAULT 'PRIVATE',
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL,
CONSTRAINT "Recipe_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "RecipeSection" (
"id" TEXT NOT NULL,
"recipeId" TEXT NOT NULL,
"name" TEXT NOT NULL,
"order" INTEGER NOT NULL,
"timing" TEXT,
CONSTRAINT "RecipeSection_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "Ingredient" (
"id" TEXT NOT NULL,
"recipeId" TEXT,
"sectionId" TEXT,
"name" TEXT NOT NULL,
"amount" TEXT,
"unit" TEXT,
"notes" TEXT,
"order" INTEGER NOT NULL,
CONSTRAINT "Ingredient_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "Instruction" (
"id" TEXT NOT NULL,
"recipeId" TEXT,
"sectionId" TEXT,
"step" INTEGER NOT NULL,
"text" TEXT NOT NULL,
"imageUrl" TEXT,
"timing" TEXT,
CONSTRAINT "Instruction_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "IngredientInstructionMapping" (
"id" TEXT NOT NULL,
"ingredientId" TEXT NOT NULL,
"instructionId" TEXT NOT NULL,
"order" INTEGER NOT NULL,
CONSTRAINT "IngredientInstructionMapping_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "RecipeImage" (
"id" TEXT NOT NULL,
"recipeId" TEXT NOT NULL,
"url" TEXT NOT NULL,
"order" INTEGER NOT NULL,
CONSTRAINT "RecipeImage_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "Tag" (
"id" TEXT NOT NULL,
"name" TEXT NOT NULL,
CONSTRAINT "Tag_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "RecipeTag" (
"recipeId" TEXT NOT NULL,
"tagId" TEXT NOT NULL,
CONSTRAINT "RecipeTag_pkey" PRIMARY KEY ("recipeId","tagId")
);
-- CreateTable
CREATE TABLE "CookbookTag" (
"cookbookId" TEXT NOT NULL,
"tagId" TEXT NOT NULL,
CONSTRAINT "CookbookTag_pkey" PRIMARY KEY ("cookbookId","tagId")
);
-- CreateTable
CREATE TABLE "RecipeShare" (
"id" TEXT NOT NULL,
"recipeId" TEXT NOT NULL,
"userId" TEXT NOT NULL,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "RecipeShare_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "Cookbook" (
"id" TEXT NOT NULL,
"name" TEXT NOT NULL,
"description" TEXT,
"coverImageUrl" TEXT,
"userId" TEXT,
"autoFilterCategories" TEXT[] DEFAULT ARRAY[]::TEXT[],
"autoFilterTags" TEXT[] DEFAULT ARRAY[]::TEXT[],
"autoFilterCookbookTags" TEXT[] DEFAULT ARRAY[]::TEXT[],
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL,
CONSTRAINT "Cookbook_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "CookbookRecipe" (
"id" TEXT NOT NULL,
"cookbookId" TEXT NOT NULL,
"recipeId" TEXT NOT NULL,
"addedAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "CookbookRecipe_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "CookbookInclusion" (
"id" TEXT NOT NULL,
"parentCookbookId" TEXT NOT NULL,
"childCookbookId" TEXT NOT NULL,
"addedAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "CookbookInclusion_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "MealPlan" (
"id" TEXT NOT NULL,
"userId" TEXT,
"date" TIMESTAMP(3) NOT NULL,
"notes" TEXT,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL,
CONSTRAINT "MealPlan_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "Meal" (
"id" TEXT NOT NULL,
"mealPlanId" TEXT NOT NULL,
"mealType" "MealType" NOT NULL,
"order" INTEGER NOT NULL,
"servings" INTEGER,
"notes" TEXT,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL,
CONSTRAINT "Meal_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "MealRecipe" (
"mealId" TEXT NOT NULL,
"recipeId" TEXT NOT NULL,
CONSTRAINT "MealRecipe_pkey" PRIMARY KEY ("mealId")
);
-- CreateIndex
CREATE UNIQUE INDEX "User_email_key" ON "User"("email");
-- CreateIndex
CREATE UNIQUE INDEX "User_username_key" ON "User"("username");
-- CreateIndex
CREATE INDEX "User_email_idx" ON "User"("email");
-- CreateIndex
CREATE INDEX "User_provider_providerId_idx" ON "User"("provider", "providerId");
-- CreateIndex
CREATE UNIQUE INDEX "VerificationToken_token_key" ON "VerificationToken"("token");
-- CreateIndex
CREATE INDEX "VerificationToken_userId_idx" ON "VerificationToken"("userId");
-- CreateIndex
CREATE INDEX "VerificationToken_token_idx" ON "VerificationToken"("token");
-- CreateIndex
CREATE UNIQUE INDEX "RefreshToken_token_key" ON "RefreshToken"("token");
-- CreateIndex
CREATE INDEX "RefreshToken_userId_idx" ON "RefreshToken"("userId");
-- CreateIndex
CREATE INDEX "RefreshToken_token_idx" ON "RefreshToken"("token");
-- CreateIndex
CREATE INDEX "Recipe_title_idx" ON "Recipe"("title");
-- CreateIndex
CREATE INDEX "Recipe_cuisine_idx" ON "Recipe"("cuisine");
-- CreateIndex
CREATE INDEX "Recipe_userId_idx" ON "Recipe"("userId");
-- CreateIndex
CREATE INDEX "Recipe_visibility_idx" ON "Recipe"("visibility");
-- CreateIndex
CREATE INDEX "RecipeSection_recipeId_idx" ON "RecipeSection"("recipeId");
-- CreateIndex
CREATE INDEX "Ingredient_recipeId_idx" ON "Ingredient"("recipeId");
-- CreateIndex
CREATE INDEX "Ingredient_sectionId_idx" ON "Ingredient"("sectionId");
-- CreateIndex
CREATE INDEX "Instruction_recipeId_idx" ON "Instruction"("recipeId");
-- CreateIndex
CREATE INDEX "Instruction_sectionId_idx" ON "Instruction"("sectionId");
-- CreateIndex
CREATE INDEX "IngredientInstructionMapping_instructionId_idx" ON "IngredientInstructionMapping"("instructionId");
-- CreateIndex
CREATE INDEX "IngredientInstructionMapping_ingredientId_idx" ON "IngredientInstructionMapping"("ingredientId");
-- CreateIndex
CREATE UNIQUE INDEX "IngredientInstructionMapping_ingredientId_instructionId_key" ON "IngredientInstructionMapping"("ingredientId", "instructionId");
-- CreateIndex
CREATE INDEX "RecipeImage_recipeId_idx" ON "RecipeImage"("recipeId");
-- CreateIndex
CREATE UNIQUE INDEX "Tag_name_key" ON "Tag"("name");
-- CreateIndex
CREATE INDEX "RecipeTag_recipeId_idx" ON "RecipeTag"("recipeId");
-- CreateIndex
CREATE INDEX "RecipeTag_tagId_idx" ON "RecipeTag"("tagId");
-- CreateIndex
CREATE INDEX "CookbookTag_cookbookId_idx" ON "CookbookTag"("cookbookId");
-- CreateIndex
CREATE INDEX "CookbookTag_tagId_idx" ON "CookbookTag"("tagId");
-- CreateIndex
CREATE INDEX "RecipeShare_recipeId_idx" ON "RecipeShare"("recipeId");
-- CreateIndex
CREATE INDEX "RecipeShare_userId_idx" ON "RecipeShare"("userId");
-- CreateIndex
CREATE UNIQUE INDEX "RecipeShare_recipeId_userId_key" ON "RecipeShare"("recipeId", "userId");
-- CreateIndex
CREATE INDEX "Cookbook_name_idx" ON "Cookbook"("name");
-- CreateIndex
CREATE INDEX "Cookbook_userId_idx" ON "Cookbook"("userId");
-- CreateIndex
CREATE INDEX "CookbookRecipe_cookbookId_idx" ON "CookbookRecipe"("cookbookId");
-- CreateIndex
CREATE INDEX "CookbookRecipe_recipeId_idx" ON "CookbookRecipe"("recipeId");
-- CreateIndex
CREATE UNIQUE INDEX "CookbookRecipe_cookbookId_recipeId_key" ON "CookbookRecipe"("cookbookId", "recipeId");
-- CreateIndex
CREATE INDEX "CookbookInclusion_parentCookbookId_idx" ON "CookbookInclusion"("parentCookbookId");
-- CreateIndex
CREATE INDEX "CookbookInclusion_childCookbookId_idx" ON "CookbookInclusion"("childCookbookId");
-- CreateIndex
CREATE UNIQUE INDEX "CookbookInclusion_parentCookbookId_childCookbookId_key" ON "CookbookInclusion"("parentCookbookId", "childCookbookId");
-- CreateIndex
CREATE INDEX "MealPlan_userId_idx" ON "MealPlan"("userId");
-- CreateIndex
CREATE INDEX "MealPlan_date_idx" ON "MealPlan"("date");
-- CreateIndex
CREATE INDEX "MealPlan_userId_date_idx" ON "MealPlan"("userId", "date");
-- CreateIndex
CREATE UNIQUE INDEX "MealPlan_userId_date_key" ON "MealPlan"("userId", "date");
-- CreateIndex
CREATE INDEX "Meal_mealPlanId_idx" ON "Meal"("mealPlanId");
-- CreateIndex
CREATE INDEX "Meal_mealType_idx" ON "Meal"("mealType");
-- CreateIndex
CREATE INDEX "MealRecipe_recipeId_idx" ON "MealRecipe"("recipeId");
-- AddForeignKey
ALTER TABLE "VerificationToken" ADD CONSTRAINT "VerificationToken_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "RefreshToken" ADD CONSTRAINT "RefreshToken_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Recipe" ADD CONSTRAINT "Recipe_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE SET NULL ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "RecipeSection" ADD CONSTRAINT "RecipeSection_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Ingredient" ADD CONSTRAINT "Ingredient_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Ingredient" ADD CONSTRAINT "Ingredient_sectionId_fkey" FOREIGN KEY ("sectionId") REFERENCES "RecipeSection"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Instruction" ADD CONSTRAINT "Instruction_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Instruction" ADD CONSTRAINT "Instruction_sectionId_fkey" FOREIGN KEY ("sectionId") REFERENCES "RecipeSection"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "IngredientInstructionMapping" ADD CONSTRAINT "IngredientInstructionMapping_ingredientId_fkey" FOREIGN KEY ("ingredientId") REFERENCES "Ingredient"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "IngredientInstructionMapping" ADD CONSTRAINT "IngredientInstructionMapping_instructionId_fkey" FOREIGN KEY ("instructionId") REFERENCES "Instruction"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "RecipeImage" ADD CONSTRAINT "RecipeImage_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "RecipeTag" ADD CONSTRAINT "RecipeTag_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "RecipeTag" ADD CONSTRAINT "RecipeTag_tagId_fkey" FOREIGN KEY ("tagId") REFERENCES "Tag"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "CookbookTag" ADD CONSTRAINT "CookbookTag_cookbookId_fkey" FOREIGN KEY ("cookbookId") REFERENCES "Cookbook"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "CookbookTag" ADD CONSTRAINT "CookbookTag_tagId_fkey" FOREIGN KEY ("tagId") REFERENCES "Tag"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "RecipeShare" ADD CONSTRAINT "RecipeShare_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "RecipeShare" ADD CONSTRAINT "RecipeShare_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Cookbook" ADD CONSTRAINT "Cookbook_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE SET NULL ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "CookbookRecipe" ADD CONSTRAINT "CookbookRecipe_cookbookId_fkey" FOREIGN KEY ("cookbookId") REFERENCES "Cookbook"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "CookbookRecipe" ADD CONSTRAINT "CookbookRecipe_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "CookbookInclusion" ADD CONSTRAINT "CookbookInclusion_parentCookbookId_fkey" FOREIGN KEY ("parentCookbookId") REFERENCES "Cookbook"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "CookbookInclusion" ADD CONSTRAINT "CookbookInclusion_childCookbookId_fkey" FOREIGN KEY ("childCookbookId") REFERENCES "Cookbook"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "MealPlan" ADD CONSTRAINT "MealPlan_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE SET NULL ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Meal" ADD CONSTRAINT "Meal_mealPlanId_fkey" FOREIGN KEY ("mealPlanId") REFERENCES "MealPlan"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "MealRecipe" ADD CONSTRAINT "MealRecipe_mealId_fkey" FOREIGN KEY ("mealId") REFERENCES "Meal"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "MealRecipe" ADD CONSTRAINT "MealRecipe_recipeId_fkey" FOREIGN KEY ("recipeId") REFERENCES "Recipe"("id") ON DELETE CASCADE ON UPDATE CASCADE;

View File

@@ -0,0 +1,59 @@
-- CreateEnum
CREATE TYPE "FamilyRole" AS ENUM ('OWNER', 'MEMBER');
-- AlterTable
ALTER TABLE "Cookbook" ADD COLUMN "familyId" TEXT;
-- AlterTable
ALTER TABLE "Recipe" ADD COLUMN "familyId" TEXT;
-- CreateTable
CREATE TABLE "Family" (
"id" TEXT NOT NULL,
"name" TEXT NOT NULL,
"createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updatedAt" TIMESTAMP(3) NOT NULL,
CONSTRAINT "Family_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "FamilyMember" (
"id" TEXT NOT NULL,
"userId" TEXT NOT NULL,
"familyId" TEXT NOT NULL,
"role" "FamilyRole" NOT NULL DEFAULT 'MEMBER',
"joinedAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "FamilyMember_pkey" PRIMARY KEY ("id")
);
-- CreateIndex
CREATE INDEX "Family_name_idx" ON "Family"("name");
-- CreateIndex
CREATE INDEX "FamilyMember_userId_idx" ON "FamilyMember"("userId");
-- CreateIndex
CREATE INDEX "FamilyMember_familyId_idx" ON "FamilyMember"("familyId");
-- CreateIndex
CREATE UNIQUE INDEX "FamilyMember_userId_familyId_key" ON "FamilyMember"("userId", "familyId");
-- CreateIndex
CREATE INDEX "Cookbook_familyId_idx" ON "Cookbook"("familyId");
-- CreateIndex
CREATE INDEX "Recipe_familyId_idx" ON "Recipe"("familyId");
-- AddForeignKey
ALTER TABLE "FamilyMember" ADD CONSTRAINT "FamilyMember_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "FamilyMember" ADD CONSTRAINT "FamilyMember_familyId_fkey" FOREIGN KEY ("familyId") REFERENCES "Family"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Recipe" ADD CONSTRAINT "Recipe_familyId_fkey" FOREIGN KEY ("familyId") REFERENCES "Family"("id") ON DELETE SET NULL ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "Cookbook" ADD CONSTRAINT "Cookbook_familyId_fkey" FOREIGN KEY ("familyId") REFERENCES "Family"("id") ON DELETE SET NULL ON UPDATE CASCADE;

View File

@@ -0,0 +1,3 @@
# Please do not edit this file manually
# It should be added in your version-control system (e.g., Git)
provider = "postgresql"

View File

@@ -29,11 +29,45 @@ model User {
refreshTokens RefreshToken[]
verificationTokens VerificationToken[]
mealPlans MealPlan[]
familyMemberships FamilyMember[]
@@index([email])
@@index([provider, providerId])
}
enum FamilyRole {
OWNER
MEMBER
}
model Family {
id String @id @default(cuid())
name String
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
members FamilyMember[]
recipes Recipe[]
cookbooks Cookbook[]
@@index([name])
}
model FamilyMember {
id String @id @default(cuid())
userId String
familyId String
role FamilyRole @default(MEMBER)
joinedAt DateTime @default(now())
user User @relation(fields: [userId], references: [id], onDelete: Cascade)
family Family @relation(fields: [familyId], references: [id], onDelete: Cascade)
@@unique([userId, familyId])
@@index([userId])
@@index([familyId])
}
model VerificationToken {
id String @id @default(cuid())
userId String
@@ -91,12 +125,14 @@ model Recipe {
cuisine String?
categories String[] @default([]) // Changed from single category to array
rating Float?
userId String? // Recipe owner
userId String? // Recipe owner (creator)
familyId String? // Owning family (tenant scope)
visibility Visibility @default(PRIVATE)
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
user User? @relation(fields: [userId], references: [id], onDelete: SetNull)
family Family? @relation(fields: [familyId], references: [id], onDelete: SetNull)
sections RecipeSection[]
ingredients Ingredient[]
instructions Instruction[]
@@ -109,6 +145,7 @@ model Recipe {
@@index([title])
@@index([cuisine])
@@index([userId])
@@index([familyId])
@@index([visibility])
}
@@ -236,7 +273,8 @@ model Cookbook {
name String
description String?
coverImageUrl String?
userId String? // Cookbook owner
userId String? // Cookbook owner (creator)
familyId String? // Owning family (tenant scope)
autoFilterCategories String[] @default([]) // Auto-add recipes matching these categories
autoFilterTags String[] @default([]) // Auto-add recipes matching these tags
autoFilterCookbookTags String[] @default([]) // Auto-add cookbooks matching these tags
@@ -244,6 +282,7 @@ model Cookbook {
updatedAt DateTime @updatedAt
user User? @relation(fields: [userId], references: [id], onDelete: SetNull)
family Family? @relation(fields: [familyId], references: [id], onDelete: SetNull)
recipes CookbookRecipe[]
tags CookbookTag[]
includedCookbooks CookbookInclusion[] @relation("ParentCookbook")
@@ -251,6 +290,7 @@ model Cookbook {
@@index([name])
@@index([userId])
@@index([familyId])
}
model CookbookRecipe {

View File

@@ -10,6 +10,7 @@ import tagsRoutes from './routes/tags.routes';
import backupRoutes from './routes/backup.routes';
import authRoutes from './routes/auth.routes';
import mealPlansRoutes from './routes/meal-plans.routes';
import familiesRoutes from './routes/families.routes';
import './config/passport'; // Initialize passport strategies
import { testEmailConfig } from './services/email.service';
import { APP_VERSION } from './version';
@@ -40,6 +41,7 @@ app.use('/api/cookbooks', cookbooksRoutes);
app.use('/api/tags', tagsRoutes);
app.use('/api/backup', backupRoutes);
app.use('/api/meal-plans', mealPlansRoutes);
app.use('/api/families', familiesRoutes);
// Health check
app.get('/health', (req, res) => {

View File

@@ -2,10 +2,13 @@ import express, { Request, Response } from 'express';
import path from 'path';
import fs from 'fs/promises';
import { createBackup, restoreBackup, listBackups, deleteBackup } from '../services/backup.service';
import { requireAuth, requireAdmin } from '../middleware/auth.middleware';
import multer from 'multer';
const router = express.Router();
router.use(requireAuth, requireAdmin);
// Configure multer for backup file uploads
const upload = multer({
dest: '/tmp/basil-restore/',

View File

@@ -2,8 +2,16 @@ import { Router, Request, Response } from 'express';
import multer from 'multer';
import prisma from '../config/database';
import { StorageService } from '../services/storage.service';
import {
getAccessContext,
buildCookbookAccessFilter,
canMutateCookbook,
getPrimaryFamilyId,
} from '../services/access.service';
import { requireAuth } from '../middleware/auth.middleware';
const router = Router();
router.use(requireAuth);
const upload = multer({
storage: multer.memoryStorage(),
limits: {
@@ -57,9 +65,11 @@ async function applyFiltersToExistingRecipes(cookbookId: string) {
});
}
// Find matching recipes
// Find matching recipes within the same family (tenant scope).
if (!cookbook.familyId) return;
const matchingRecipes = await prisma.recipe.findMany({
where: {
familyId: cookbook.familyId,
OR: whereConditions
},
select: { id: true }
@@ -107,11 +117,13 @@ async function applyFiltersToExistingCookbooks(cookbookId: string) {
return;
}
// Find matching cookbooks (excluding self)
// Find matching cookbooks within the same family (tenant scope).
if (!cookbook.familyId) return;
const matchingCookbooks = await prisma.cookbook.findMany({
where: {
AND: [
{ id: { not: cookbookId } },
{ familyId: cookbook.familyId },
{
tags: {
some: {
@@ -166,11 +178,14 @@ async function autoAddToParentCookbooks(cookbookId: string) {
const cookbookTags = cookbook.tags.map((ct: any) => ct.tag.name);
if (cookbookTags.length === 0) return;
// Find parent cookbooks with filters matching this cookbook's tags
// Find parent cookbooks with filters matching this cookbook's tags,
// scoped to the same family.
if (!cookbook.familyId) return;
const parentCookbooks = await prisma.cookbook.findMany({
where: {
AND: [
{ id: { not: cookbookId } },
{ familyId: cookbook.familyId },
{ autoFilterCookbookTags: { hasSome: cookbookTags } }
]
}
@@ -203,6 +218,8 @@ async function autoAddToParentCookbooks(cookbookId: string) {
router.get('/', async (req: Request, res: Response) => {
try {
const { includeChildren = 'false' } = req.query;
const ctx = await getAccessContext(req.user!);
const accessFilter = buildCookbookAccessFilter(ctx);
// Get child cookbook IDs to exclude from main listing (unless includeChildren is true)
const childCookbookIds = includeChildren === 'true' ? [] : (
@@ -213,8 +230,11 @@ router.get('/', async (req: Request, res: Response) => {
).map((ci: any) => ci.childCookbookId);
const cookbooks = await prisma.cookbook.findMany({
where: includeChildren === 'true' ? {} : {
id: { notIn: childCookbookIds }
where: {
AND: [
accessFilter,
includeChildren === 'true' ? {} : { id: { notIn: childCookbookIds } },
],
},
include: {
_count: {
@@ -256,9 +276,10 @@ router.get('/', async (req: Request, res: Response) => {
router.get('/:id', async (req: Request, res: Response) => {
try {
const { id } = req.params;
const ctx = await getAccessContext(req.user!);
const cookbook = await prisma.cookbook.findUnique({
where: { id },
const cookbook = await prisma.cookbook.findFirst({
where: { AND: [{ id }, buildCookbookAccessFilter(ctx)] },
include: {
recipes: {
include: {
@@ -342,11 +363,15 @@ router.post('/', async (req: Request, res: Response) => {
return res.status(400).json({ error: 'Name is required' });
}
const familyId = await getPrimaryFamilyId(req.user!.id);
const cookbook = await prisma.cookbook.create({
data: {
name,
description,
coverImageUrl,
userId: req.user!.id,
familyId,
autoFilterCategories: autoFilterCategories || [],
autoFilterTags: autoFilterTags || [],
autoFilterCookbookTags: autoFilterCookbookTags || [],
@@ -388,6 +413,16 @@ router.put('/:id', async (req: Request, res: Response) => {
const { id } = req.params;
const { name, description, coverImageUrl, autoFilterCategories, autoFilterTags, autoFilterCookbookTags, tags } = req.body;
const ctx = await getAccessContext(req.user!);
const existing = await prisma.cookbook.findUnique({
where: { id },
select: { userId: true, familyId: true },
});
if (!existing) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, existing)) {
return res.status(403).json({ error: 'Forbidden' });
}
const updateData: any = {};
if (name !== undefined) updateData.name = name;
if (description !== undefined) updateData.description = description;
@@ -460,6 +495,15 @@ router.put('/:id', async (req: Request, res: Response) => {
router.delete('/:id', async (req: Request, res: Response) => {
try {
const { id } = req.params;
const ctx = await getAccessContext(req.user!);
const cookbook = await prisma.cookbook.findUnique({
where: { id },
select: { userId: true, familyId: true },
});
if (!cookbook) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, cookbook)) {
return res.status(403).json({ error: 'Forbidden' });
}
await prisma.cookbook.delete({
where: { id }
@@ -476,6 +520,26 @@ router.delete('/:id', async (req: Request, res: Response) => {
router.post('/:id/recipes/:recipeId', async (req: Request, res: Response) => {
try {
const { id, recipeId } = req.params;
const ctx = await getAccessContext(req.user!);
const cookbook = await prisma.cookbook.findUnique({
where: { id },
select: { userId: true, familyId: true },
});
if (!cookbook) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, cookbook)) {
return res.status(403).json({ error: 'Forbidden' });
}
// Prevent pulling recipes from other tenants into this cookbook.
const recipe = await prisma.recipe.findUnique({
where: { id: recipeId },
select: { userId: true, familyId: true, visibility: true },
});
if (!recipe) return res.status(404).json({ error: 'Recipe not found' });
const sameFamily = !!recipe.familyId && recipe.familyId === cookbook.familyId;
const ownedByUser = recipe.userId === ctx.userId;
if (ctx.role !== 'ADMIN' && !sameFamily && !ownedByUser) {
return res.status(403).json({ error: 'Cannot add recipe from a different tenant' });
}
// Check if recipe is already in cookbook
const existing = await prisma.cookbookRecipe.findUnique({
@@ -509,6 +573,15 @@ router.post('/:id/recipes/:recipeId', async (req: Request, res: Response) => {
router.delete('/:id/recipes/:recipeId', async (req: Request, res: Response) => {
try {
const { id, recipeId } = req.params;
const ctx = await getAccessContext(req.user!);
const cookbook = await prisma.cookbook.findUnique({
where: { id },
select: { userId: true, familyId: true },
});
if (!cookbook) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, cookbook)) {
return res.status(403).json({ error: 'Forbidden' });
}
await prisma.cookbookRecipe.delete({
where: {
@@ -536,6 +609,26 @@ router.post('/:id/cookbooks/:childCookbookId', async (req: Request, res: Respons
return res.status(400).json({ error: 'Cannot add cookbook to itself' });
}
const ctx = await getAccessContext(req.user!);
const parent = await prisma.cookbook.findUnique({
where: { id },
select: { userId: true, familyId: true },
});
if (!parent) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, parent)) {
return res.status(403).json({ error: 'Forbidden' });
}
const child = await prisma.cookbook.findUnique({
where: { id: childCookbookId },
select: { userId: true, familyId: true },
});
if (!child) return res.status(404).json({ error: 'Cookbook not found' });
const sameFamily = !!child.familyId && child.familyId === parent.familyId;
const ownedByUser = child.userId === ctx.userId;
if (ctx.role !== 'ADMIN' && !sameFamily && !ownedByUser) {
return res.status(403).json({ error: 'Cannot nest a cookbook from a different tenant' });
}
// Check if cookbook is already included
const existing = await prisma.cookbookInclusion.findUnique({
where: {
@@ -568,6 +661,15 @@ router.post('/:id/cookbooks/:childCookbookId', async (req: Request, res: Respons
router.delete('/:id/cookbooks/:childCookbookId', async (req: Request, res: Response) => {
try {
const { id, childCookbookId } = req.params;
const ctx = await getAccessContext(req.user!);
const parent = await prisma.cookbook.findUnique({
where: { id },
select: { userId: true, familyId: true },
});
if (!parent) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, parent)) {
return res.status(403).json({ error: 'Forbidden' });
}
await prisma.cookbookInclusion.delete({
where: {
@@ -594,10 +696,14 @@ router.post('/:id/image', upload.single('image'), async (req: Request, res: Resp
return res.status(400).json({ error: 'No image provided' });
}
// Delete old cover image if it exists
const ctx = await getAccessContext(req.user!);
const cookbook = await prisma.cookbook.findUnique({
where: { id }
});
if (!cookbook) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, cookbook)) {
return res.status(403).json({ error: 'Forbidden' });
}
if (cookbook?.coverImageUrl) {
await storageService.deleteFile(cookbook.coverImageUrl);
@@ -629,10 +735,14 @@ router.post('/:id/image-from-url', async (req: Request, res: Response) => {
return res.status(400).json({ error: 'URL is required' });
}
// Delete old cover image if it exists
const ctx = await getAccessContext(req.user!);
const cookbook = await prisma.cookbook.findUnique({
where: { id }
});
if (!cookbook) return res.status(404).json({ error: 'Cookbook not found' });
if (!canMutateCookbook(ctx, cookbook)) {
return res.status(403).json({ error: 'Forbidden' });
}
if (cookbook?.coverImageUrl) {
await storageService.deleteFile(cookbook.coverImageUrl);


@@ -0,0 +1,237 @@
import { Router, Request, Response } from 'express';
import prisma from '../config/database';
import { requireAuth } from '../middleware/auth.middleware';
import { FamilyRole } from '@prisma/client';
const router = Router();
router.use(requireAuth);
async function getMembership(userId: string, familyId: string) {
return prisma.familyMember.findUnique({
where: { userId_familyId: { userId, familyId } },
});
}
// List the current user's families.
router.get('/', async (req: Request, res: Response) => {
try {
const userId = req.user!.id;
const memberships = await prisma.familyMember.findMany({
where: { userId },
include: {
family: { include: { _count: { select: { members: true } } } },
},
orderBy: { joinedAt: 'asc' },
});
res.json({
data: memberships.map((m) => ({
id: m.family.id,
name: m.family.name,
role: m.role,
memberCount: m.family._count.members,
joinedAt: m.joinedAt,
})),
});
} catch (error) {
console.error('Error fetching families:', error);
res.status(500).json({ error: 'Failed to fetch families' });
}
});
// Create a new family (caller becomes OWNER).
router.post('/', async (req: Request, res: Response) => {
try {
const { name } = req.body;
if (!name || typeof name !== 'string' || !name.trim()) {
return res.status(400).json({ error: 'Name is required' });
}
const family = await prisma.family.create({
data: {
name: name.trim(),
members: { create: { userId: req.user!.id, role: 'OWNER' } },
},
});
res.status(201).json({ data: family });
} catch (error) {
console.error('Error creating family:', error);
res.status(500).json({ error: 'Failed to create family' });
}
});
// Get a family including its members. Must be a member.
router.get('/:id', async (req: Request, res: Response) => {
try {
const userId = req.user!.id;
const membership = await getMembership(userId, req.params.id);
if (!membership && req.user!.role !== 'ADMIN') {
return res.status(404).json({ error: 'Family not found' });
}
const family = await prisma.family.findUnique({
where: { id: req.params.id },
include: {
members: {
include: { user: { select: { id: true, email: true, name: true, avatar: true } } },
orderBy: { joinedAt: 'asc' },
},
},
});
if (!family) return res.status(404).json({ error: 'Family not found' });
res.json({
data: {
id: family.id,
name: family.name,
createdAt: family.createdAt,
updatedAt: family.updatedAt,
myRole: membership?.role ?? null,
members: family.members.map((m) => ({
userId: m.userId,
email: m.user.email,
name: m.user.name,
avatar: m.user.avatar,
role: m.role,
joinedAt: m.joinedAt,
})),
},
});
} catch (error) {
console.error('Error fetching family:', error);
res.status(500).json({ error: 'Failed to fetch family' });
}
});
// Rename a family. OWNER only.
router.put('/:id', async (req: Request, res: Response) => {
try {
const userId = req.user!.id;
const membership = await getMembership(userId, req.params.id);
const isAdmin = req.user!.role === 'ADMIN';
if (!membership || (membership.role !== 'OWNER' && !isAdmin)) {
return res.status(403).json({ error: 'Owner access required' });
}
const { name } = req.body;
if (!name || typeof name !== 'string' || !name.trim()) {
return res.status(400).json({ error: 'Name is required' });
}
const family = await prisma.family.update({
where: { id: req.params.id },
data: { name: name.trim() },
});
res.json({ data: family });
} catch (error) {
console.error('Error updating family:', error);
res.status(500).json({ error: 'Failed to update family' });
}
});
// Delete a family. OWNER only. Recipes/cookbooks in this family get familyId=NULL.
router.delete('/:id', async (req: Request, res: Response) => {
try {
const userId = req.user!.id;
const membership = await getMembership(userId, req.params.id);
const isAdmin = req.user!.role === 'ADMIN';
if (!membership || (membership.role !== 'OWNER' && !isAdmin)) {
return res.status(403).json({ error: 'Owner access required' });
}
await prisma.family.delete({ where: { id: req.params.id } });
res.json({ message: 'Family deleted' });
} catch (error) {
console.error('Error deleting family:', error);
res.status(500).json({ error: 'Failed to delete family' });
}
});
// Add an existing user to a family by email. OWNER only.
router.post('/:id/members', async (req: Request, res: Response) => {
try {
const userId = req.user!.id;
const membership = await getMembership(userId, req.params.id);
const isAdmin = req.user!.role === 'ADMIN';
if (!membership || (membership.role !== 'OWNER' && !isAdmin)) {
return res.status(403).json({ error: 'Owner access required' });
}
const { email, role } = req.body;
if (!email || typeof email !== 'string') {
return res.status(400).json({ error: 'Email is required' });
}
const invitedRole: FamilyRole = role === 'OWNER' ? 'OWNER' : 'MEMBER';
const invitee = await prisma.user.findUnique({
where: { email: email.toLowerCase() },
select: { id: true, email: true, name: true, avatar: true },
});
if (!invitee) {
return res.status(404).json({ error: 'No user with that email exists on this server' });
}
const existing = await getMembership(invitee.id, req.params.id);
if (existing) {
return res.status(409).json({ error: 'User is already a member' });
}
const newMember = await prisma.familyMember.create({
data: { userId: invitee.id, familyId: req.params.id, role: invitedRole },
});
res.status(201).json({
data: {
userId: invitee.id,
email: invitee.email,
name: invitee.name,
avatar: invitee.avatar,
role: newMember.role,
joinedAt: newMember.joinedAt,
},
});
} catch (error) {
console.error('Error adding member:', error);
res.status(500).json({ error: 'Failed to add member' });
}
});
// Remove a member (or leave as self). OWNER can remove anyone; a member can only remove themselves.
router.delete('/:id/members/:userId', async (req: Request, res: Response) => {
try {
const currentUserId = req.user!.id;
const targetUserId = req.params.userId;
const membership = await getMembership(currentUserId, req.params.id);
const isAdmin = req.user!.role === 'ADMIN';
if (!membership && !isAdmin) {
return res.status(403).json({ error: 'Not a member of this family' });
}
const isOwner = membership?.role === 'OWNER';
const isSelf = targetUserId === currentUserId;
if (!isOwner && !isSelf && !isAdmin) {
return res.status(403).json({ error: 'Only owners can remove other members' });
}
const target = await getMembership(targetUserId, req.params.id);
if (!target) {
return res.status(404).json({ error: 'Member not found' });
}
// Don't let the last OWNER leave/be removed — would orphan the family.
if (target.role === 'OWNER') {
const ownerCount = await prisma.familyMember.count({
where: { familyId: req.params.id, role: 'OWNER' },
});
if (ownerCount <= 1) {
return res.status(400).json({ error: 'Cannot remove the last owner; transfer ownership or delete the family first' });
}
}
await prisma.familyMember.delete({
where: { userId_familyId: { userId: targetUserId, familyId: req.params.id } },
});
res.json({ message: 'Member removed' });
} catch (error) {
console.error('Error removing member:', error);
res.status(500).json({ error: 'Failed to remove member' });
}
});
export default router;


@@ -4,9 +4,17 @@ import prisma from '../config/database';
import { StorageService } from '../services/storage.service';
import { ScraperService } from '../services/scraper.service';
import { autoMapIngredients, saveIngredientMappings } from '../services/ingredientMatcher.service';
import {
getAccessContext,
buildRecipeAccessFilter,
canMutateRecipe,
getPrimaryFamilyId,
} from '../services/access.service';
import { requireAuth } from '../middleware/auth.middleware';
import { ApiResponse, RecipeImportRequest } from '@basil/shared';
const router = Router();
router.use(requireAuth);
const upload = multer({
storage: multer.memoryStorage(),
limits: {
@@ -23,7 +31,8 @@ const upload = multer({
const storageService = StorageService.getInstance();
const scraperService = new ScraperService();
// Helper function to auto-add recipe to cookbooks based on their filters
// Helper function to auto-add recipe to cookbooks based on their filters.
// Scoped to same family to prevent cross-tenant leakage via shared tag names.
async function autoAddToCookbooks(recipeId: string) {
try {
// Get the recipe with its category and tags
@@ -43,9 +52,11 @@ async function autoAddToCookbooks(recipeId: string) {
const recipeTags = recipe.tags.map((rt: any) => rt.tag.name);
const recipeCategories = recipe.categories || [];
// Get all cookbooks with auto-filters
// Get cookbooks in the same family with auto-filters. Skip unscoped recipes.
if (!recipe.familyId) return;
const cookbooks = await prisma.cookbook.findMany({
where: {
familyId: recipe.familyId,
OR: [
{ autoFilterCategories: { isEmpty: false } },
{ autoFilterTags: { isEmpty: false } }
@@ -107,36 +118,35 @@ router.get('/', async (req, res) => {
const limitNum = parseInt(limit as string);
const skip = (pageNum - 1) * limitNum;
const where: any = {};
const ctx = await getAccessContext(req.user!);
const where: any = { AND: [buildRecipeAccessFilter(ctx)] };
if (search) {
where.OR = [
{ title: { contains: search as string, mode: 'insensitive' } },
{ description: { contains: search as string, mode: 'insensitive' } },
{
tags: {
some: {
tag: {
name: { contains: search as string, mode: 'insensitive' }
where.AND.push({
OR: [
{ title: { contains: search as string, mode: 'insensitive' } },
{ description: { contains: search as string, mode: 'insensitive' } },
{
tags: {
some: {
tag: {
name: { contains: search as string, mode: 'insensitive' }
}
}
}
}
},
];
}
if (cuisine) where.cuisine = cuisine;
if (category) {
where.categories = {
has: category as string
};
},
],
});
}
if (cuisine) where.AND.push({ cuisine });
if (category) where.AND.push({ categories: { has: category as string } });
if (tag) {
where.tags = {
some: {
tag: {
name: { equals: tag as string, mode: 'insensitive' }
}
}
};
where.AND.push({
tags: {
some: {
tag: { name: { equals: tag as string, mode: 'insensitive' } },
},
},
});
}
const [recipes, total] = await Promise.all([
@@ -215,8 +225,9 @@ router.get('/', async (req, res) => {
// Get single recipe
router.get('/:id', async (req, res) => {
try {
const recipe = await prisma.recipe.findUnique({
where: { id: req.params.id },
const ctx = await getAccessContext(req.user!);
const recipe = await prisma.recipe.findFirst({
where: { AND: [{ id: req.params.id }, buildRecipeAccessFilter(ctx)] },
include: {
sections: {
orderBy: { order: 'asc' },
@@ -285,11 +296,17 @@ router.get('/:id', async (req, res) => {
router.post('/', async (req, res) => {
try {
const { title, description, sections, ingredients, instructions, tags, ...recipeData } = req.body;
// Strip any client-supplied ownership — always derive server-side.
delete recipeData.userId;
delete recipeData.familyId;
const familyId = await getPrimaryFamilyId(req.user!.id);
const recipe = await prisma.recipe.create({
data: {
title,
description,
userId: req.user!.id,
familyId,
...recipeData,
sections: sections
? {
@@ -361,7 +378,20 @@ router.post('/', async (req, res) => {
// Update recipe
router.put('/:id', async (req, res) => {
try {
const ctx = await getAccessContext(req.user!);
const existing = await prisma.recipe.findUnique({
where: { id: req.params.id },
select: { userId: true, familyId: true, visibility: true },
});
if (!existing) return res.status(404).json({ error: 'Recipe not found' });
if (!canMutateRecipe(ctx, existing)) {
return res.status(403).json({ error: 'Forbidden' });
}
const { sections, ingredients, instructions, tags, ...recipeData } = req.body;
// Block client from reassigning ownership via update.
delete recipeData.userId;
delete recipeData.familyId;
// Only delete relations that are being updated (not undefined)
if (sections !== undefined) {
@@ -465,20 +495,23 @@ router.put('/:id', async (req, res) => {
// Delete recipe
router.delete('/:id', async (req, res) => {
try {
const ctx = await getAccessContext(req.user!);
// Get recipe to delete associated images
const recipe = await prisma.recipe.findUnique({
where: { id: req.params.id },
include: { images: true },
});
if (!recipe) return res.status(404).json({ error: 'Recipe not found' });
if (!canMutateRecipe(ctx, recipe)) {
return res.status(403).json({ error: 'Forbidden' });
}
if (recipe) {
// Delete images from storage
if (recipe.imageUrl) {
await storageService.deleteFile(recipe.imageUrl);
}
for (const image of recipe.images) {
await storageService.deleteFile(image.url);
}
// Delete images from storage
if (recipe.imageUrl) {
await storageService.deleteFile(recipe.imageUrl);
}
for (const image of recipe.images) {
await storageService.deleteFile(image.url);
}
await prisma.recipe.delete({ where: { id: req.params.id } });
@@ -505,16 +538,20 @@ router.post('/:id/images', upload.single('image'), async (req, res) => {
return res.status(400).json({ error: 'No image provided' });
}
const ctx = await getAccessContext(req.user!);
const existingRecipe = await prisma.recipe.findUnique({
where: { id: req.params.id },
select: { imageUrl: true, userId: true, familyId: true, visibility: true },
});
if (!existingRecipe) return res.status(404).json({ error: 'Recipe not found' });
if (!canMutateRecipe(ctx, existingRecipe)) {
return res.status(403).json({ error: 'Forbidden' });
}
console.log('Saving file to storage...');
const imageUrl = await storageService.saveFile(req.file, 'recipes');
console.log('File saved, URL:', imageUrl);
// Get existing recipe to delete old image
const existingRecipe = await prisma.recipe.findUnique({
where: { id: req.params.id },
select: { imageUrl: true },
});
// Delete old image from storage if it exists
if (existingRecipe?.imageUrl) {
console.log('Deleting old image:', existingRecipe.imageUrl);
@@ -550,12 +587,17 @@ router.post('/:id/images', upload.single('image'), async (req, res) => {
// Delete recipe image
router.delete('/:id/image', async (req, res) => {
try {
const ctx = await getAccessContext(req.user!);
const recipe = await prisma.recipe.findUnique({
where: { id: req.params.id },
select: { imageUrl: true },
select: { imageUrl: true, userId: true, familyId: true, visibility: true },
});
if (!recipe) return res.status(404).json({ error: 'Recipe not found' });
if (!canMutateRecipe(ctx, recipe)) {
return res.status(403).json({ error: 'Forbidden' });
}
if (!recipe?.imageUrl) {
if (!recipe.imageUrl) {
return res.status(404).json({ error: 'No image to delete' });
}
@@ -606,6 +648,16 @@ router.post('/:id/ingredient-mappings', async (req, res) => {
return res.status(400).json({ error: 'Mappings must be an array' });
}
const ctx = await getAccessContext(req.user!);
const recipe = await prisma.recipe.findUnique({
where: { id: req.params.id },
select: { userId: true, familyId: true, visibility: true },
});
if (!recipe) return res.status(404).json({ error: 'Recipe not found' });
if (!canMutateRecipe(ctx, recipe)) {
return res.status(403).json({ error: 'Forbidden' });
}
await saveIngredientMappings(mappings);
res.json({ message: 'Mappings updated successfully' });
@@ -618,6 +670,16 @@ router.post('/:id/ingredient-mappings', async (req, res) => {
// Regenerate ingredient-instruction mappings
router.post('/:id/regenerate-mappings', async (req, res) => {
try {
const ctx = await getAccessContext(req.user!);
const recipe = await prisma.recipe.findUnique({
where: { id: req.params.id },
select: { userId: true, familyId: true, visibility: true },
});
if (!recipe) return res.status(404).json({ error: 'Recipe not found' });
if (!canMutateRecipe(ctx, recipe)) {
return res.status(403).json({ error: 'Forbidden' });
}
await autoMapIngredients(req.params.id);
res.json({ message: 'Mappings regenerated successfully' });


@@ -0,0 +1,155 @@
#!/usr/bin/env node
/**
* Backfill default families for existing data.
*
* For every user, ensure they have a personal Family (as OWNER).
* Any Recipe or Cookbook that they own (userId = them) but has no familyId
* is assigned to that family.
*
* Orphan content (userId IS NULL) is assigned to --owner (default: first ADMIN user)
* so existing legacy records don't disappear behind the access filter.
*
* Idempotent — safe to re-run.
*
* Usage:
* npx tsx src/scripts/backfill-family-tenant.ts
* npx tsx src/scripts/backfill-family-tenant.ts --owner admin@basil.local
* npx tsx src/scripts/backfill-family-tenant.ts --dry-run
*/
import { PrismaClient, User, Family } from '@prisma/client';
const prisma = new PrismaClient();
interface Options {
ownerEmail?: string;
dryRun: boolean;
}
function parseArgs(): Options {
const args = process.argv.slice(2);
const opts: Options = { dryRun: false };
for (let i = 0; i < args.length; i++) {
if (args[i] === '--dry-run') opts.dryRun = true;
else if (args[i] === '--owner' && args[i + 1]) {
opts.ownerEmail = args[++i];
}
}
return opts;
}
async function ensurePersonalFamily(user: User, dryRun: boolean): Promise<Family> {
const existing = await prisma.familyMember.findFirst({
where: { userId: user.id, role: 'OWNER' },
include: { family: true },
});
if (existing) return existing.family;
const name = `${user.name || user.email.split('@')[0]}'s Family`;
if (dryRun) {
console.log(` [dry-run] would create Family "${name}" for ${user.email}`);
return { id: '<dry-run>', name, createdAt: new Date(), updatedAt: new Date() };
}
const family = await prisma.family.create({
data: {
name,
members: {
create: { userId: user.id, role: 'OWNER' },
},
},
});
console.log(` Created Family "${family.name}" (${family.id}) for ${user.email}`);
return family;
}
async function main() {
const opts = parseArgs();
console.log(`\n🌿 Family tenant backfill${opts.dryRun ? ' [DRY RUN]' : ''}\n`);
// 1. Pick legacy owner for orphan records.
let legacyOwner: User | null = null;
if (opts.ownerEmail) {
legacyOwner = await prisma.user.findUnique({ where: { email: opts.ownerEmail.toLowerCase() } });
if (!legacyOwner) {
console.error(`❌ No user with email ${opts.ownerEmail}`);
process.exit(1);
}
} else {
legacyOwner = await prisma.user.findFirst({
where: { role: 'ADMIN' },
orderBy: { createdAt: 'asc' },
});
}
if (!legacyOwner) {
console.warn('⚠️ No admin user found; orphan recipes/cookbooks will be left with familyId = NULL');
} else {
console.log(`Legacy owner for orphan content: ${legacyOwner.email}\n`);
}
// 2. Ensure every user has a personal family.
const users = await prisma.user.findMany({ orderBy: { createdAt: 'asc' } });
console.log(`Processing ${users.length} user(s):`);
const userFamily = new Map<string, Family>();
for (const u of users) {
const fam = await ensurePersonalFamily(u, opts.dryRun);
userFamily.set(u.id, fam);
}
// 3. Backfill Recipe.familyId and Cookbook.familyId.
const targets = [
{ label: 'Recipe', model: prisma.recipe },
{ label: 'Cookbook', model: prisma.cookbook },
] as const;
let totalUpdated = 0;
for (const { label, model } of targets) {
// Owned content without a familyId — assign to owner's family.
const ownedRows: { id: string; userId: string | null }[] = await (model as any).findMany({
where: { familyId: null, userId: { not: null } },
select: { id: true, userId: true },
});
for (const row of ownedRows) {
const fam = userFamily.get(row.userId!);
if (!fam) continue;
if (!opts.dryRun) {
await (model as any).update({ where: { id: row.id }, data: { familyId: fam.id } });
}
totalUpdated++;
}
console.log(` ${label}: ${ownedRows.length} owned row(s) assigned to owner's family`);
// Orphan content — assign to legacy owner's family if configured.
if (legacyOwner) {
const legacyFam = userFamily.get(legacyOwner.id)!;
const orphans: { id: string }[] = await (model as any).findMany({
where: { familyId: null, userId: null },
select: { id: true },
});
for (const row of orphans) {
if (!opts.dryRun) {
await (model as any).update({
where: { id: row.id },
data: { familyId: legacyFam.id, userId: legacyOwner.id },
});
}
totalUpdated++;
}
console.log(` ${label}: ${orphans.length} orphan row(s) assigned to ${legacyOwner.email}'s family`);
}
}
console.log(`\n✅ Backfill complete (${totalUpdated} row(s) ${opts.dryRun ? 'would be ' : ''}updated)\n`);
}
main()
.catch((err) => {
console.error('❌ Backfill failed:', err);
process.exit(1);
})
.finally(async () => {
await prisma.$disconnect();
});


@@ -0,0 +1,108 @@
import type { Prisma, User } from '@prisma/client';
import prisma from '../config/database';
export interface AccessContext {
userId: string;
role: 'USER' | 'ADMIN';
familyIds: string[];
}
export async function getAccessContext(user: User): Promise<AccessContext> {
const memberships = await prisma.familyMember.findMany({
where: { userId: user.id },
select: { familyId: true },
});
return {
userId: user.id,
role: user.role,
familyIds: memberships.map((m) => m.familyId),
};
}
export function buildRecipeAccessFilter(ctx: AccessContext): Prisma.RecipeWhereInput {
if (ctx.role === 'ADMIN') return {};
return {
OR: [
{ userId: ctx.userId },
{ familyId: { in: ctx.familyIds } },
{ visibility: 'PUBLIC' },
{ sharedWith: { some: { userId: ctx.userId } } },
],
};
}
export function buildCookbookAccessFilter(ctx: AccessContext): Prisma.CookbookWhereInput {
if (ctx.role === 'ADMIN') return {};
return {
OR: [
{ userId: ctx.userId },
{ familyId: { in: ctx.familyIds } },
],
};
}
type RecipeAccessSubject = {
userId: string | null;
familyId: string | null;
visibility: 'PRIVATE' | 'SHARED' | 'PUBLIC';
};
type CookbookAccessSubject = {
userId: string | null;
familyId: string | null;
};
export function canReadRecipe(
ctx: AccessContext,
recipe: RecipeAccessSubject,
sharedUserIds: string[] = [],
): boolean {
if (ctx.role === 'ADMIN') return true;
if (recipe.userId === ctx.userId) return true;
if (recipe.familyId && ctx.familyIds.includes(recipe.familyId)) return true;
if (recipe.visibility === 'PUBLIC') return true;
if (sharedUserIds.includes(ctx.userId)) return true;
return false;
}
export function canMutateRecipe(
ctx: AccessContext,
recipe: RecipeAccessSubject,
): boolean {
if (ctx.role === 'ADMIN') return true;
if (recipe.userId === ctx.userId) return true;
if (recipe.familyId && ctx.familyIds.includes(recipe.familyId)) return true;
return false;
}
export function canReadCookbook(
ctx: AccessContext,
cookbook: CookbookAccessSubject,
): boolean {
if (ctx.role === 'ADMIN') return true;
if (cookbook.userId === ctx.userId) return true;
if (cookbook.familyId && ctx.familyIds.includes(cookbook.familyId)) return true;
return false;
}
export function canMutateCookbook(
ctx: AccessContext,
cookbook: CookbookAccessSubject,
): boolean {
return canReadCookbook(ctx, cookbook);
}
export async function getPrimaryFamilyId(userId: string): Promise<string | null> {
const owner = await prisma.familyMember.findFirst({
where: { userId, role: 'OWNER' },
orderBy: { joinedAt: 'asc' },
select: { familyId: true },
});
if (owner) return owner.familyId;
const any = await prisma.familyMember.findFirst({
where: { userId },
orderBy: { joinedAt: 'asc' },
select: { familyId: true },
});
return any?.familyId ?? null;
}
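For review purposes, the predicate half of the new access layer can be exercised without a database. A minimal standalone sketch follows: the rule order mirrors `canReadRecipe`/`canMutateRecipe` from `access.service.ts` above (re-stated here without the Prisma types), and the sample user/family IDs are hypothetical.

```typescript
// Standalone re-statement of the pure access predicates from access.service.ts,
// runnable without Prisma. Sample IDs below are made up for illustration.
type Visibility = 'PRIVATE' | 'SHARED' | 'PUBLIC';

interface AccessContext {
  userId: string;
  role: 'USER' | 'ADMIN';
  familyIds: string[];
}

interface RecipeSubject {
  userId: string | null;
  familyId: string | null;
  visibility: Visibility;
}

// Read access: admin, owner, family member, PUBLIC, or explicit share.
function canReadRecipe(ctx: AccessContext, r: RecipeSubject, sharedUserIds: string[] = []): boolean {
  if (ctx.role === 'ADMIN') return true;
  if (r.userId === ctx.userId) return true;
  if (r.familyId && ctx.familyIds.includes(r.familyId)) return true;
  if (r.visibility === 'PUBLIC') return true;
  return sharedUserIds.includes(ctx.userId);
}

// Mutation is stricter: PUBLIC visibility grants read but never write.
function canMutateRecipe(ctx: AccessContext, r: RecipeSubject): boolean {
  if (ctx.role === 'ADMIN') return true;
  if (r.userId === ctx.userId) return true;
  return !!r.familyId && ctx.familyIds.includes(r.familyId);
}

const owner: AccessContext = { userId: 'u1', role: 'USER', familyIds: ['fam1'] };
const outsider: AccessContext = { userId: 'u9', role: 'USER', familyIds: ['fam2'] };
const publicRecipe: RecipeSubject = { userId: 'u1', familyId: 'fam1', visibility: 'PUBLIC' };

console.log(canReadRecipe(outsider, publicRecipe));   // true  (public read)
console.log(canMutateRecipe(outsider, publicRecipe)); // false (no write via PUBLIC)
console.log(canMutateRecipe(owner, publicRecipe));    // true  (owner)
```

This is why the image and mapping routes in the recipe diff check `canMutateRecipe` after their own `findUnique` rather than reusing `buildRecipeAccessFilter`: the read filter would admit PUBLIC recipes, which must stay read-only for non-members.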


@@ -3,4 +3,4 @@
* Example: 2026.01.002 (January 2026, patch 2), 2026.02.003 (February 2026, patch 3)
* Month and patch are zero-padded. Patch increments with each deployment in a month.
*/
export const APP_VERSION = '2026.01.004';
export const APP_VERSION = '2026.04.008';


@@ -4,6 +4,7 @@ import { ThemeProvider } from './contexts/ThemeContext';
import ProtectedRoute from './components/ProtectedRoute';
import UserMenu from './components/UserMenu';
import ThemeToggle from './components/ThemeToggle';
import FamilyGate from './components/FamilyGate';
import Login from './pages/Login';
import Register from './pages/Register';
import AuthCallback from './pages/AuthCallback';
@@ -16,6 +17,7 @@ import RecipeImport from './pages/RecipeImport';
import NewRecipe from './pages/NewRecipe';
import UnifiedEditRecipe from './pages/UnifiedEditRecipe';
import CookingMode from './pages/CookingMode';
import Family from './pages/Family';
import { APP_VERSION } from './version';
import './App.css';
@@ -24,6 +26,7 @@ function App() {
<Router>
<ThemeProvider>
<AuthProvider>
<FamilyGate>
<div className="app">
<header className="header">
<div className="container">
@@ -64,6 +67,7 @@ function App() {
<Route path="/recipes/:id/cook" element={<ProtectedRoute><CookingMode /></ProtectedRoute>} />
<Route path="/recipes/new" element={<ProtectedRoute><NewRecipe /></ProtectedRoute>} />
<Route path="/recipes/import" element={<ProtectedRoute><RecipeImport /></ProtectedRoute>} />
<Route path="/family" element={<ProtectedRoute><Family /></ProtectedRoute>} />
</Routes>
</div>
</main>
@@ -74,6 +78,7 @@ function App() {
</div>
</footer>
</div>
</FamilyGate>
</AuthProvider>
</ThemeProvider>
</Router>


@@ -0,0 +1,101 @@
import { useEffect, useState, FormEvent, ReactNode } from 'react';
import { familiesApi } from '../services/api';
import { useAuth } from '../contexts/AuthContext';
import '../styles/FamilyGate.css';
interface FamilyGateProps {
children: ReactNode;
}
type CheckState = 'idle' | 'checking' | 'needs-family' | 'ready';
export default function FamilyGate({ children }: FamilyGateProps) {
const { isAuthenticated, loading: authLoading, logout } = useAuth();
const [state, setState] = useState<CheckState>('idle');
const [name, setName] = useState('');
const [submitting, setSubmitting] = useState(false);
const [error, setError] = useState<string | null>(null);
useEffect(() => {
if (authLoading) return;
if (!isAuthenticated) {
setState('idle');
return;
}
let cancelled = false;
(async () => {
setState('checking');
try {
const res = await familiesApi.list();
if (cancelled) return;
const count = res.data?.length ?? 0;
setState(count === 0 ? 'needs-family' : 'ready');
} catch {
if (!cancelled) setState('ready');
}
})();
return () => { cancelled = true; };
}, [isAuthenticated, authLoading]);
async function handleCreate(e: FormEvent) {
e.preventDefault();
const trimmed = name.trim();
if (!trimmed) return;
setSubmitting(true);
setError(null);
try {
await familiesApi.create(trimmed);
setState('ready');
} catch (e: any) {
setError(e?.response?.data?.error || 'Failed to create family');
} finally {
setSubmitting(false);
}
}
const showModal = isAuthenticated && state === 'needs-family';
return (
<>
{children}
{showModal && (
<div className="family-gate-overlay" role="dialog" aria-modal="true">
<div className="family-gate-modal">
<h2>Create your family</h2>
<p>
To keep recipes organized and shareable, every account belongs to
a family. Name yours to get started; you can invite others later.
</p>
<form onSubmit={handleCreate}>
<label htmlFor="family-gate-name">Family name</label>
<input
id="family-gate-name"
type="text"
value={name}
onChange={(e) => setName(e.target.value)}
placeholder="e.g. Smith Family"
autoFocus
disabled={submitting}
required
/>
{error && <div className="family-gate-error">{error}</div>}
<div className="family-gate-actions">
<button
type="button"
className="family-gate-secondary"
onClick={logout}
disabled={submitting}
>
Sign out
</button>
<button type="submit" disabled={submitting || !name.trim()}>
{submitting ? 'Creating…' : 'Create family'}
</button>
</div>
</form>
</div>
</div>
)}
</>
);
}


@@ -96,6 +96,13 @@ const UserMenu: React.FC = () => {
>
My Cookbooks
</Link>
<Link
to="/family"
className="user-menu-link"
onClick={() => setIsOpen(false)}
>
Family
</Link>
{isAdmin && (
<>
<div className="user-menu-divider"></div>

View File

@@ -1,9 +1,15 @@
import { useState, useEffect } from 'react';
import { useParams, useNavigate } from 'react-router-dom';
import { useParams, useNavigate, useSearchParams } from 'react-router-dom';
import { CookbookWithRecipes, Recipe } from '@basil/shared';
import { cookbooksApi } from '../services/api';
import '../styles/CookbookDetail.css';
const ITEMS_PER_PAGE_OPTIONS = [12, 24, 48, -1]; // -1 = All
// LocalStorage keys
const LS_ITEMS_PER_PAGE = 'basil_cookbook_itemsPerPage';
const LS_COLUMN_COUNT = 'basil_cookbook_columnCount';
// Helper function to extract tag name from string or RecipeTag object
const getTagName = (tag: string | { tag: { name: string } }): string => {
return typeof tag === 'string' ? tag : tag.tag.name;
@@ -12,10 +18,33 @@ const getTagName = (tag: string | { tag: { name: string } }): string => {
function CookbookDetail() {
const { id } = useParams<{ id: string }>();
const navigate = useNavigate();
const [searchParams, setSearchParams] = useSearchParams();
const [cookbook, setCookbook] = useState<CookbookWithRecipes | null>(null);
const [loading, setLoading] = useState(true);
const [error, setError] = useState<string | null>(null);
// Pagination state
const [currentPage, setCurrentPage] = useState(() => {
const page = searchParams.get('page');
return page ? parseInt(page) : 1;
});
const [itemsPerPage, setItemsPerPage] = useState(() => {
const saved = localStorage.getItem(LS_ITEMS_PER_PAGE);
if (saved) return parseInt(saved);
const param = searchParams.get('limit');
return param ? parseInt(param) : 24;
});
// Display controls state
const [columnCount, setColumnCount] = useState<3 | 5 | 7 | 9>(() => {
const saved = localStorage.getItem(LS_COLUMN_COUNT);
if (saved) {
const val = parseInt(saved);
if (val === 3 || val === 5 || val === 7 || val === 9) return val;
}
return 5;
});
// Filters
const [searchQuery, setSearchQuery] = useState('');
const [selectedTags, setSelectedTags] = useState<string[]>([]);
@@ -27,6 +56,28 @@ function CookbookDetail() {
}
}, [id]);
// Save preferences to localStorage
useEffect(() => {
localStorage.setItem(LS_ITEMS_PER_PAGE, itemsPerPage.toString());
}, [itemsPerPage]);
useEffect(() => {
localStorage.setItem(LS_COLUMN_COUNT, columnCount.toString());
}, [columnCount]);
// Update URL params
useEffect(() => {
const params = new URLSearchParams();
if (currentPage > 1) params.set('page', currentPage.toString());
if (itemsPerPage !== 24) params.set('limit', itemsPerPage.toString());
setSearchParams(params, { replace: true });
}, [currentPage, itemsPerPage, setSearchParams]);
// Reset page when filters change
useEffect(() => {
setCurrentPage(1);
}, [searchQuery, selectedTags, selectedCuisine]);
const loadCookbook = async (cookbookId: string) => {
try {
setLoading(true);
@@ -129,6 +180,24 @@ function CookbookDetail() {
setSelectedCuisine('');
};
const handlePageChange = (newPage: number) => {
setCurrentPage(newPage);
window.scrollTo({ top: 0, behavior: 'smooth' });
};
const handleItemsPerPageChange = (value: number) => {
setItemsPerPage(value);
setCurrentPage(1);
};
// Apply pagination to filtered recipes
const getPaginatedRecipes = (filteredRecipes: Recipe[]): Recipe[] => {
if (itemsPerPage === -1) return filteredRecipes;
const startIndex = (currentPage - 1) * itemsPerPage;
const endIndex = startIndex + itemsPerPage;
return filteredRecipes.slice(startIndex, endIndex);
};
if (loading) {
return (
<div className="cookbook-detail-page">
@@ -147,9 +216,19 @@ function CookbookDetail() {
}
const filteredRecipes = getFilteredRecipes();
const paginatedRecipes = getPaginatedRecipes(filteredRecipes);
const allTags = getAllTags();
const allCuisines = getAllCuisines();
const hasActiveFilters = searchQuery || selectedTags.length > 0 || selectedCuisine;
const totalPages = itemsPerPage === -1 ? 1 : Math.ceil(filteredRecipes.length / itemsPerPage);
// Inline grid style derived from the selected column count
const gridStyle: React.CSSProperties = {
gridTemplateColumns: `repeat(${columnCount}, 1fr)`,
};
const recipesGridClassName = `recipes-grid columns-${columnCount}`;
const cookbooksGridClassName = `cookbooks-grid columns-${columnCount}`;
return (
<div className="cookbook-detail-page">
@@ -227,11 +306,66 @@ function CookbookDetail() {
</div>
</div>
{/* Display and Pagination Controls */}
<div className="cookbook-toolbar">
<div className="display-controls">
<div className="control-group">
<label>Columns:</label>
<div className="column-buttons">
{([3, 5, 7, 9] as const).map((count) => (
<button
key={count}
className={columnCount === count ? 'active' : ''}
onClick={() => setColumnCount(count)}
>
{count}
</button>
))}
</div>
</div>
</div>
<div className="pagination-controls">
<div className="control-group">
<label>Per page:</label>
<div className="items-per-page">
{ITEMS_PER_PAGE_OPTIONS.map((count) => (
<button
key={count}
className={itemsPerPage === count ? 'active' : ''}
onClick={() => handleItemsPerPageChange(count)}
>
{count === -1 ? 'All' : count}
</button>
))}
</div>
</div>
<div className="page-navigation">
<button
onClick={() => handlePageChange(currentPage - 1)}
disabled={currentPage <= 1}
>
Prev
</button>
<span className="page-info">
Page {currentPage} of {totalPages}
</span>
<button
onClick={() => handlePageChange(currentPage + 1)}
disabled={currentPage >= totalPages}
>
Next
</button>
</div>
</div>
</div>
{/* Included Cookbooks */}
{cookbook.cookbooks && cookbook.cookbooks.length > 0 && (
<section className="included-cookbooks-section">
<h2>Included Cookbooks ({cookbook.cookbooks.length})</h2>
<div className="cookbooks-grid">
<div className={cookbooksGridClassName} style={gridStyle}>
{cookbook.cookbooks.map((childCookbook) => (
<div
key={childCookbook.id}
@@ -272,7 +406,12 @@ function CookbookDetail() {
<div className="results-section">
<h2>Recipes</h2>
<p className="results-count">
Showing {filteredRecipes.length} of {cookbook.recipes.length} recipes
{itemsPerPage === -1 ? (
`Showing all ${filteredRecipes.length} recipes`
) : (
`Showing ${(currentPage - 1) * itemsPerPage + 1}-${Math.min(currentPage * itemsPerPage, filteredRecipes.length)} of ${filteredRecipes.length} recipes`
)}
{filteredRecipes.length < cookbook.recipes.length && ` (filtered from ${cookbook.recipes.length} total)`}
</p>
{filteredRecipes.length === 0 ? (
@@ -284,8 +423,8 @@ function CookbookDetail() {
)}
</div>
) : (
<div className="recipes-grid">
{filteredRecipes.map(recipe => (
<div className={recipesGridClassName} style={gridStyle}>
{paginatedRecipes.map(recipe => (
<div key={recipe.id} className="recipe-card">
<div onClick={() => navigate(`/recipes/${recipe.id}`)}>
{recipe.imageUrl ? (
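
The client-side pagination added in this file boils down to two pure helpers, sketched here standalone (generic `paginate` and `rangeLabel` are illustrative names; `-1` means "show all", matching `ITEMS_PER_PAGE_OPTIONS`):

```typescript
// Slice one page out of an already-filtered list; -1 perPage disables paging.
function paginate<T>(items: T[], page: number, perPage: number): T[] {
  if (perPage === -1) return items;
  const start = (page - 1) * perPage;
  return items.slice(start, start + perPage);
}

// Build the "Showing X-Y of N" label used in the results count.
function rangeLabel(page: number, perPage: number, total: number): string {
  if (perPage === -1) return `Showing all ${total} recipes`;
  const start = (page - 1) * perPage + 1;
  const end = Math.min(page * perPage, total);
  return `Showing ${start}-${end} of ${total} recipes`;
}
```

Because filtering happens before pagination, the page count is derived from the filtered length, which is why the component resets `currentPage` to 1 whenever a filter changes.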

View File

@@ -1,11 +1,18 @@
import { useState, useEffect } from 'react';
import { useNavigate } from 'react-router-dom';
import { useNavigate, useSearchParams } from 'react-router-dom';
import { Cookbook, Recipe, Tag } from '@basil/shared';
import { cookbooksApi, recipesApi, tagsApi } from '../services/api';
import '../styles/Cookbooks.css';
const ITEMS_PER_PAGE_OPTIONS = [12, 24, 48, -1]; // -1 = All
// LocalStorage keys
const LS_ITEMS_PER_PAGE = 'basil_cookbooks_itemsPerPage';
const LS_COLUMN_COUNT = 'basil_cookbooks_columnCount';
function Cookbooks() {
const navigate = useNavigate();
const [searchParams, setSearchParams] = useSearchParams();
const [cookbooks, setCookbooks] = useState<Cookbook[]>([]);
const [recentRecipes, setRecentRecipes] = useState<Recipe[]>([]);
const [loading, setLoading] = useState(true);
@@ -22,10 +29,49 @@ function Cookbooks() {
const [availableTags, setAvailableTags] = useState<Tag[]>([]);
const [autoAddCollapsed, setAutoAddCollapsed] = useState(true);
// Pagination state
const [currentPage, setCurrentPage] = useState(() => {
const page = searchParams.get('page');
return page ? parseInt(page) : 1;
});
const [itemsPerPage, setItemsPerPage] = useState(() => {
const saved = localStorage.getItem(LS_ITEMS_PER_PAGE);
if (saved) return parseInt(saved);
const param = searchParams.get('limit');
return param ? parseInt(param) : 24;
});
// Display controls state
const [columnCount, setColumnCount] = useState<3 | 5 | 7 | 9>(() => {
const saved = localStorage.getItem(LS_COLUMN_COUNT);
if (saved) {
const val = parseInt(saved);
if (val === 3 || val === 5 || val === 7 || val === 9) return val;
}
return 5;
});
useEffect(() => {
loadData();
}, []);
// Save preferences to localStorage
useEffect(() => {
localStorage.setItem(LS_ITEMS_PER_PAGE, itemsPerPage.toString());
}, [itemsPerPage]);
useEffect(() => {
localStorage.setItem(LS_COLUMN_COUNT, columnCount.toString());
}, [columnCount]);
// Update URL params
useEffect(() => {
const params = new URLSearchParams();
if (currentPage > 1) params.set('page', currentPage.toString());
if (itemsPerPage !== 24) params.set('limit', itemsPerPage.toString());
setSearchParams(params, { replace: true });
}, [currentPage, itemsPerPage, setSearchParams]);
const loadData = async () => {
try {
setLoading(true);
@@ -117,6 +163,35 @@ function Cookbooks() {
setAutoFilterCookbookTags(autoFilterCookbookTags.filter(t => t !== tag));
};
const handlePageChange = (newPage: number) => {
setCurrentPage(newPage);
window.scrollTo({ top: 0, behavior: 'smooth' });
};
const handleItemsPerPageChange = (value: number) => {
setItemsPerPage(value);
setCurrentPage(1);
};
// Apply pagination to cookbooks
const getPaginatedCookbooks = (): Cookbook[] => {
if (itemsPerPage === -1) return cookbooks;
const startIndex = (currentPage - 1) * itemsPerPage;
const endIndex = startIndex + itemsPerPage;
return cookbooks.slice(startIndex, endIndex);
};
const paginatedCookbooks = getPaginatedCookbooks();
const totalPages = itemsPerPage === -1 ? 1 : Math.ceil(cookbooks.length / itemsPerPage);
// Inline grid style derived from the selected column count
const gridStyle: React.CSSProperties = {
gridTemplateColumns: `repeat(${columnCount}, 1fr)`,
};
const recipesGridClassName = `recipes-grid columns-${columnCount}`;
const cookbooksGridClassName = `cookbooks-grid columns-${columnCount}`;
if (loading) {
return (
<div className="cookbooks-page">
@@ -150,9 +225,30 @@ function Cookbooks() {
</div>
</header>
{/* Page-level Controls */}
<div className="page-toolbar">
<div className="display-controls">
<div className="control-group">
<label>Columns:</label>
<div className="column-buttons">
{([3, 5, 7, 9] as const).map((count) => (
<button
key={count}
className={columnCount === count ? 'active' : ''}
onClick={() => setColumnCount(count)}
>
{count}
</button>
))}
</div>
</div>
</div>
</div>
{/* Cookbooks Grid */}
<section className="cookbooks-section">
<h2>Cookbooks</h2>
{cookbooks.length === 0 ? (
<div className="empty-state">
<p>No cookbooks yet. Create your first cookbook to organize your recipes!</p>
@@ -161,8 +257,56 @@ function Cookbooks() {
</button>
</div>
) : (
<div className="cookbooks-grid">
{cookbooks.map((cookbook) => (
<>
{/* Pagination Controls */}
<div className="pagination-toolbar">
<div className="pagination-controls">
<div className="control-group">
<label>Per page:</label>
<div className="items-per-page">
{ITEMS_PER_PAGE_OPTIONS.map((count) => (
<button
key={count}
className={itemsPerPage === count ? 'active' : ''}
onClick={() => handleItemsPerPageChange(count)}
>
{count === -1 ? 'All' : count}
</button>
))}
</div>
</div>
<div className="page-navigation">
<button
onClick={() => handlePageChange(currentPage - 1)}
disabled={currentPage <= 1}
>
Prev
</button>
<span className="page-info">
Page {currentPage} of {totalPages}
</span>
<button
onClick={() => handlePageChange(currentPage + 1)}
disabled={currentPage >= totalPages}
>
Next
</button>
</div>
</div>
</div>
{/* Results count */}
<p className="results-count">
{itemsPerPage === -1 ? (
`Showing all ${cookbooks.length} cookbooks`
) : (
`Showing ${(currentPage - 1) * itemsPerPage + 1}-${Math.min(currentPage * itemsPerPage, cookbooks.length)} of ${cookbooks.length} cookbooks`
)}
</p>
<div className={cookbooksGridClassName} style={gridStyle}>
{paginatedCookbooks.map((cookbook) => (
<div
key={cookbook.id}
className="cookbook-card"
@@ -195,12 +339,13 @@ function Cookbooks() {
</div>
))}
</div>
</>
)}
</section>
{/* Recent Recipes */}
<section className="recent-recipes-section">
<div className="section-header">
<div className="section-title-row">
<h2>Recent Recipes</h2>
<button onClick={() => navigate('/recipes')} className="btn-link">
View all
@@ -209,7 +354,7 @@ function Cookbooks() {
{recentRecipes.length === 0 ? (
<p className="empty-state">No recipes yet.</p>
) : (
<div className="recipes-grid">
<div className={recipesGridClassName} style={gridStyle}>
{recentRecipes.map((recipe) => (
<div
key={recipe.id}

View File

@@ -0,0 +1,245 @@
import { useEffect, useState, FormEvent } from 'react';
import {
familiesApi,
FamilySummary,
FamilyDetail,
FamilyMemberInfo,
} from '../services/api';
import { useAuth } from '../contexts/AuthContext';
import '../styles/Family.css';
export default function Family() {
const { user } = useAuth();
const [families, setFamilies] = useState<FamilySummary[]>([]);
const [selectedId, setSelectedId] = useState<string | null>(null);
const [detail, setDetail] = useState<FamilyDetail | null>(null);
const [loading, setLoading] = useState(true);
const [error, setError] = useState<string | null>(null);
const [newFamilyName, setNewFamilyName] = useState('');
const [inviteEmail, setInviteEmail] = useState('');
const [inviteRole, setInviteRole] = useState<'MEMBER' | 'OWNER'>('MEMBER');
const [busy, setBusy] = useState(false);
async function loadFamilies() {
setError(null);
try {
const res = await familiesApi.list();
const list = res.data ?? [];
setFamilies(list);
if (!selectedId && list.length > 0) setSelectedId(list[0].id);
if (selectedId && !list.find((f) => f.id === selectedId)) {
setSelectedId(list[0]?.id ?? null);
}
} catch (e: any) {
setError(e?.response?.data?.error || e?.message || 'Failed to load families');
}
}
async function loadDetail(id: string) {
try {
const res = await familiesApi.get(id);
setDetail(res.data ?? null);
} catch (e: any) {
setError(e?.response?.data?.error || e?.message || 'Failed to load family');
setDetail(null);
}
}
useEffect(() => {
(async () => {
setLoading(true);
await loadFamilies();
setLoading(false);
})();
}, []);
useEffect(() => {
if (selectedId) loadDetail(selectedId);
else setDetail(null);
}, [selectedId]);
async function handleCreateFamily(e: FormEvent) {
e.preventDefault();
if (!newFamilyName.trim()) return;
setBusy(true);
setError(null);
try {
const res = await familiesApi.create(newFamilyName.trim());
setNewFamilyName('');
if (res.data) setSelectedId(res.data.id);
await loadFamilies();
} catch (e: any) {
setError(e?.response?.data?.error || 'Failed to create family');
} finally {
setBusy(false);
}
}
async function handleInvite(e: FormEvent) {
e.preventDefault();
if (!selectedId || !inviteEmail.trim()) return;
setBusy(true);
setError(null);
try {
await familiesApi.addMember(selectedId, inviteEmail.trim(), inviteRole);
setInviteEmail('');
setInviteRole('MEMBER');
await loadDetail(selectedId);
await loadFamilies();
} catch (e: any) {
setError(e?.response?.data?.error || 'Failed to add member');
} finally {
setBusy(false);
}
}
async function handleRemoveMember(member: FamilyMemberInfo) {
if (!selectedId) return;
const isSelf = member.userId === user?.id;
const confirmMsg = isSelf
? `Leave "${detail?.name}"?`
: `Remove ${member.name || member.email} from this family?`;
if (!confirm(confirmMsg)) return;
setBusy(true);
setError(null);
try {
await familiesApi.removeMember(selectedId, member.userId);
await loadFamilies();
if (isSelf) {
setSelectedId(null);
} else {
await loadDetail(selectedId);
}
} catch (e: any) {
setError(e?.response?.data?.error || 'Failed to remove member');
} finally {
setBusy(false);
}
}
async function handleDeleteFamily() {
if (!selectedId || !detail) return;
if (!confirm(`Delete family "${detail.name}"? Recipes and cookbooks in this family will lose their family assignment (they won't be deleted).`)) return;
setBusy(true);
setError(null);
try {
await familiesApi.remove(selectedId);
setSelectedId(null);
await loadFamilies();
} catch (e: any) {
setError(e?.response?.data?.error || 'Failed to delete family');
} finally {
setBusy(false);
}
}
if (loading) return <div className="family-page">Loading…</div>;
const isOwner = detail?.myRole === 'OWNER';
return (
<div className="family-page">
<h2>Families</h2>
{error && <div className="family-error">{error}</div>}
<section className="family-create">
<form onSubmit={handleCreateFamily} className="family-create-form">
<label>
Create a new family:
<input
type="text"
value={newFamilyName}
placeholder="e.g. Smith Family"
onChange={(e) => setNewFamilyName(e.target.value)}
disabled={busy}
/>
</label>
<button type="submit" disabled={busy || !newFamilyName.trim()}>Create</button>
</form>
</section>
<div className="family-layout">
<aside className="family-list">
<h3>Your families</h3>
{families.length === 0 && <p className="muted">You're not in any family yet.</p>}
<ul>
{families.map((f) => (
<li key={f.id} className={f.id === selectedId ? 'active' : ''}>
<button onClick={() => setSelectedId(f.id)}>
<strong>{f.name}</strong>
<span className="family-meta">{f.role} · {f.memberCount} member{f.memberCount === 1 ? '' : 's'}</span>
</button>
</li>
))}
</ul>
</aside>
<main className="family-detail">
{!detail && <p className="muted">Select a family to see its members.</p>}
{detail && (
<>
<div className="family-detail-header">
<h3>{detail.name}</h3>
{isOwner && (
<button className="danger" onClick={handleDeleteFamily} disabled={busy}>
Delete family
</button>
)}
</div>
<h4>Members</h4>
<table className="family-members">
<thead>
<tr><th>Name</th><th>Email</th><th>Role</th><th></th></tr>
</thead>
<tbody>
{detail.members.map((m) => (
<tr key={m.userId}>
<td>{m.name || ''}</td>
<td>{m.email}</td>
<td>{m.role}</td>
<td>
{(isOwner || m.userId === user?.id) && (
<button onClick={() => handleRemoveMember(m)} disabled={busy}>
{m.userId === user?.id ? 'Leave' : 'Remove'}
</button>
)}
</td>
</tr>
))}
</tbody>
</table>
{isOwner && (
<>
<h4>Invite a member</h4>
<p className="muted">User must already have a Basil account on this server.</p>
<form onSubmit={handleInvite} className="family-invite-form">
<input
type="email"
placeholder="email@example.com"
value={inviteEmail}
onChange={(e) => setInviteEmail(e.target.value)}
disabled={busy}
required
/>
<select
value={inviteRole}
onChange={(e) => setInviteRole(e.target.value as 'MEMBER' | 'OWNER')}
disabled={busy}
>
<option value="MEMBER">Member</option>
<option value="OWNER">Owner</option>
</select>
<button type="submit" disabled={busy || !inviteEmail.trim()}>Add</button>
</form>
</>
)}
</>
)}
</main>
</div>
</div>
);
}
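
The selection-reconciliation rule inside `loadFamilies` above can be expressed as one pure function (a sketch with a hypothetical name, not an export of the source file): keep the current selection if it still exists in the refreshed list, otherwise fall back to the first family, or `null` when the list is empty.

```typescript
// Decide which family stays selected after the list is reloaded.
function reconcileSelection(
  selectedId: string | null,
  families: { id: string }[],
): string | null {
  if (selectedId && families.some((f) => f.id === selectedId)) {
    return selectedId;
  }
  return families[0]?.id ?? null;
}
```

This covers both branches in the effect: first load (no selection yet) and the case where the selected family was deleted, or the user left it, between reloads.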

View File

@@ -132,6 +132,8 @@ function RecipeList() {
gridTemplateColumns: `repeat(${columnCount}, 1fr)`,
};
const gridClassName = `recipe-grid-enhanced columns-${columnCount}`;
const handlePageChange = (newPage: number) => {
setCurrentPage(newPage);
window.scrollTo({ top: 0, behavior: 'smooth' });
@@ -243,7 +245,7 @@ function RecipeList() {
)}
</div>
) : (
<div className="recipe-grid-enhanced" style={gridStyle}>
<div className={gridClassName} style={gridStyle}>
{recipes.map((recipe) => (
<div
key={recipe.id}

View File

@@ -237,4 +237,67 @@ export const mealPlansApi = {
},
};
export type FamilyRole = 'OWNER' | 'MEMBER';
export interface FamilySummary {
id: string;
name: string;
role: FamilyRole;
memberCount: number;
joinedAt: string;
}
export interface FamilyMemberInfo {
userId: string;
email: string;
name: string | null;
avatar: string | null;
role: FamilyRole;
joinedAt: string;
}
export interface FamilyDetail {
id: string;
name: string;
createdAt: string;
updatedAt: string;
myRole: FamilyRole | null;
members: FamilyMemberInfo[];
}
export const familiesApi = {
list: async (): Promise<ApiResponse<FamilySummary[]>> => {
const response = await api.get('/families');
return response.data;
},
create: async (name: string): Promise<ApiResponse<{ id: string; name: string }>> => {
const response = await api.post('/families', { name });
return response.data;
},
get: async (id: string): Promise<ApiResponse<FamilyDetail>> => {
const response = await api.get(`/families/${id}`);
return response.data;
},
rename: async (id: string, name: string): Promise<ApiResponse<{ id: string; name: string }>> => {
const response = await api.put(`/families/${id}`, { name });
return response.data;
},
remove: async (id: string): Promise<ApiResponse<void>> => {
const response = await api.delete(`/families/${id}`);
return response.data;
},
addMember: async (
familyId: string,
email: string,
role: FamilyRole = 'MEMBER',
): Promise<ApiResponse<FamilyMemberInfo>> => {
const response = await api.post(`/families/${familyId}/members`, { email, role });
return response.data;
},
removeMember: async (familyId: string, userId: string): Promise<ApiResponse<void>> => {
const response = await api.delete(`/families/${familyId}/members/${userId}`);
return response.data;
},
};
export default api;

View File

@@ -261,6 +261,118 @@
background-color: #616161;
}
/* Toolbar and Pagination Controls */
.cookbook-toolbar {
display: flex;
flex-wrap: wrap;
gap: 1.5rem;
align-items: center;
justify-content: space-between;
background: white;
padding: 0.75rem 1rem;
border-radius: 8px;
margin-bottom: 1.5rem;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.08);
border: 1px solid #e0e0e0;
}
.display-controls,
.pagination-controls {
display: flex;
gap: 1rem;
align-items: center;
flex-wrap: wrap;
}
.control-group {
display: flex;
align-items: center;
gap: 0.5rem;
}
.control-group label {
font-size: 0.8rem;
font-weight: 500;
color: #666;
white-space: nowrap;
}
.column-buttons,
.items-per-page {
display: flex;
gap: 0.25rem;
border: 1px solid #d0d0d0;
border-radius: 6px;
overflow: hidden;
}
.column-buttons button,
.items-per-page button {
min-width: 2rem;
padding: 0.35rem 0.6rem;
border: none;
background: white;
font-size: 0.8rem;
font-weight: 500;
cursor: pointer;
transition: all 0.15s;
color: #555;
}
.column-buttons button:not(:last-child),
.items-per-page button:not(:last-child) {
border-right: 1px solid #d0d0d0;
}
.column-buttons button:hover,
.items-per-page button:hover {
background-color: #f5f5f5;
}
.column-buttons button.active,
.items-per-page button.active {
background-color: #2e7d32;
color: white;
}
.page-navigation {
display: flex;
gap: 0.5rem;
align-items: center;
}
.page-navigation button {
padding: 0.35rem 0.75rem;
border: 1px solid #d0d0d0;
background: white;
color: #555;
border-radius: 6px;
font-size: 0.8rem;
font-weight: 500;
cursor: pointer;
transition: all 0.15s;
}
.page-navigation button:hover:not(:disabled) {
background-color: #f5f5f5;
border-color: #2e7d32;
color: #2e7d32;
}
.page-navigation button:disabled {
opacity: 0.4;
cursor: not-allowed;
}
.page-info {
font-size: 0.75rem;
font-weight: 500;
color: #666;
white-space: nowrap;
margin: 0 0.25rem;
}
/* Results Section */
.results-section {
@@ -275,82 +387,131 @@
.recipes-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
gap: 1.5rem;
}
.recipe-card {
background: white;
border-radius: 12px;
cursor: pointer;
border: 1px solid #e0e0e0;
border-radius: 8px;
overflow: hidden;
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.1);
background: white;
position: relative;
transition: transform 0.2s, box-shadow 0.2s;
display: flex;
flex-direction: column;
aspect-ratio: 1 / 1;
}
.recipe-card:hover {
transform: translateY(-4px);
box-shadow: 0 4px 16px rgba(0, 0, 0, 0.15);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);
}
.recipe-card > div:first-child {
cursor: pointer;
display: flex;
flex-direction: column;
flex: 1;
min-height: 0;
}
.recipe-image {
.recipe-card img.recipe-image {
width: 100%;
height: 200px;
height: 60%;
object-fit: cover;
display: block;
flex-shrink: 0;
}
.recipe-image-placeholder {
width: 100%;
height: 200px;
height: 60%;
background: linear-gradient(135deg, #ffb74d 0%, #ff9800 100%);
display: flex;
align-items: center;
justify-content: center;
font-size: 4rem;
font-size: 3rem;
flex-shrink: 0;
}
.recipe-info {
padding: 1.25rem;
padding: 0.5rem;
flex: 1;
display: flex;
flex-direction: column;
justify-content: space-between;
overflow: hidden;
min-height: 0;
}
.recipe-info h3 {
font-size: 1.2rem;
color: #212121;
margin: 0 0 0.5rem 0;
margin: 0 0 0.25rem 0;
font-size: 0.75rem;
line-height: 1.2;
overflow: hidden;
text-overflow: ellipsis;
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
flex-shrink: 0;
}
.recipe-info .description {
font-size: 0.9rem;
margin: 0;
font-size: 0.65rem;
color: #666;
margin: 0 0 0.75rem 0;
line-height: 1.4;
overflow: hidden;
text-overflow: ellipsis;
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
flex-shrink: 1;
}
.recipe-meta {
display: flex;
gap: 1rem;
font-size: 0.85rem;
color: #757575;
margin-bottom: 0.75rem;
gap: 0.4rem;
font-size: 0.6rem;
color: #888;
flex-shrink: 0;
margin-top: auto;
}
.recipe-tags {
display: flex;
flex-wrap: wrap;
gap: 0.5rem;
display: none;
}
.recipe-tags .tag {
padding: 0.25rem 0.75rem;
background-color: #e8f5e9;
color: #2e7d32;
border-radius: 12px;
/* Column-specific styles for recipes */
.columns-3 .recipe-info h3 {
font-size: 0.95rem;
}
.columns-3 .recipe-info .description {
font-size: 0.8rem;
font-weight: 500;
-webkit-line-clamp: 2;
}
.columns-3 .recipe-meta {
font-size: 0.75rem;
}
.columns-5 .recipe-info h3 {
font-size: 0.85rem;
}
.columns-5 .recipe-info .description {
font-size: 0.75rem;
-webkit-line-clamp: 2;
}
.columns-5 .recipe-meta {
font-size: 0.7rem;
}
.columns-7 .recipe-info .description,
.columns-9 .recipe-info .description {
display: none;
}
.remove-recipe-btn {
@@ -427,9 +588,26 @@
width: 100%;
}
.cookbook-toolbar {
flex-direction: column;
align-items: stretch;
padding: 1rem;
}
.display-controls,
.pagination-controls {
flex-direction: column;
align-items: stretch;
gap: 1rem;
}
.recipes-grid {
grid-template-columns: 1fr;
}
.included-cookbooks-section .cookbooks-grid {
grid-template-columns: 1fr;
}
}
/* Included Cookbooks Section */
@@ -446,11 +624,19 @@
font-size: 1.5rem;
}
.included-cookbooks-section .cookbooks-grid {
display: grid;
gap: 1.5rem;
}
.cookbook-card.nested {
border: 2px solid #e0e0e0;
background: white;
cursor: pointer;
transition: all 0.2s ease;
aspect-ratio: 1 / 1;
display: flex;
flex-direction: column;
}
.cookbook-card.nested:hover {
@@ -458,3 +644,75 @@
box-shadow: 0 4px 12px rgba(0,0,0,0.15);
transform: translateY(-2px);
}
.cookbook-card.nested .cookbook-cover,
.cookbook-card.nested .cookbook-cover-placeholder {
height: 50%;
font-size: 2.5rem;
}
.cookbook-card.nested .cookbook-info {
padding: 0.5rem;
display: flex;
flex-direction: column;
flex: 1;
min-height: 0;
overflow: hidden;
}
.cookbook-card.nested .cookbook-info h3 {
font-size: 0.75rem;
color: #212121;
margin: 0 0 0.25rem 0;
line-height: 1.2;
overflow: hidden;
text-overflow: ellipsis;
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
flex-shrink: 0;
}
.cookbook-card.nested .cookbook-info .description {
display: none;
}
.cookbook-card.nested .cookbook-stats {
margin-top: auto;
display: flex;
flex-direction: column;
gap: 0.1rem;
}
.cookbook-card.nested .recipe-count,
.cookbook-card.nested .cookbook-count {
font-size: 0.6rem;
color: #2e7d32;
font-weight: 600;
margin: 0;
line-height: 1.2;
white-space: nowrap;
}
.cookbook-card.nested .cookbook-tags {
display: none;
}
/* Column-specific styles for nested cookbooks */
.cookbooks-grid.columns-3 .cookbook-card.nested .cookbook-info h3 {
font-size: 0.95rem;
}
.cookbooks-grid.columns-3 .cookbook-card.nested .recipe-count,
.cookbooks-grid.columns-3 .cookbook-card.nested .cookbook-count {
font-size: 0.75rem;
}
.cookbooks-grid.columns-5 .cookbook-card.nested .cookbook-info h3 {
font-size: 0.85rem;
}
.cookbooks-grid.columns-5 .cookbook-card.nested .recipe-count,
.cookbooks-grid.columns-5 .cookbook-card.nested .cookbook-count {
font-size: 0.7rem;
}

View File

@@ -26,6 +26,18 @@
gap: 1rem;
}
/* Page-level Controls */
.page-toolbar {
display: flex;
justify-content: flex-start;
background: white;
padding: 0.75rem 1rem;
border-radius: 8px;
margin-bottom: 2rem;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.08);
border: 1px solid #e0e0e0;
}
/* Cookbooks Section */
.cookbooks-section {
margin-bottom: 3rem;
@@ -37,9 +49,124 @@
margin-bottom: 1.5rem;
}
/* Pagination Controls */
.pagination-toolbar {
display: flex;
flex-wrap: wrap;
align-items: center;
justify-content: flex-end;
background: white;
padding: 0.75rem 1rem;
border-radius: 8px;
margin-bottom: 1.5rem;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.08);
border: 1px solid #e0e0e0;
}
.display-controls,
.pagination-controls {
display: flex;
gap: 1rem;
align-items: center;
flex-wrap: wrap;
}
.control-group {
display: flex;
align-items: center;
gap: 0.5rem;
}
.control-group label {
font-size: 0.8rem;
font-weight: 500;
color: #666;
white-space: nowrap;
}
.column-buttons,
.items-per-page {
display: flex;
gap: 0.25rem;
border: 1px solid #d0d0d0;
border-radius: 6px;
overflow: hidden;
}
.column-buttons button,
.items-per-page button {
min-width: 2rem;
padding: 0.35rem 0.6rem;
border: none;
background: white;
font-size: 0.8rem;
font-weight: 500;
cursor: pointer;
transition: all 0.15s;
color: #555;
}
.column-buttons button:not(:last-child),
.items-per-page button:not(:last-child) {
border-right: 1px solid #d0d0d0;
}
.column-buttons button:hover,
.items-per-page button:hover {
background-color: #f5f5f5;
}
.column-buttons button.active,
.items-per-page button.active {
background-color: #2e7d32;
color: white;
}
.page-navigation {
display: flex;
gap: 0.5rem;
align-items: center;
}
.page-navigation button {
padding: 0.35rem 0.75rem;
border: 1px solid #d0d0d0;
background: white;
color: #555;
border-radius: 6px;
font-size: 0.8rem;
font-weight: 500;
cursor: pointer;
transition: all 0.15s;
}
.page-navigation button:hover:not(:disabled) {
background-color: #f5f5f5;
border-color: #2e7d32;
color: #2e7d32;
}
.page-navigation button:disabled {
opacity: 0.4;
cursor: not-allowed;
}
.page-info {
font-size: 0.75rem;
font-weight: 500;
color: #666;
white-space: nowrap;
margin: 0 0.25rem;
}
.results-count {
font-size: 0.95rem;
color: #757575;
margin-bottom: 1.5rem;
}
.cookbooks-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(280px, 1fr));
gap: 1.5rem;
}
@@ -50,6 +177,9 @@
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.1);
cursor: pointer;
transition: transform 0.2s, box-shadow 0.2s;
display: flex;
flex-direction: column;
aspect-ratio: 1 / 1;
}
.cookbook-card:hover {
@@ -59,42 +189,86 @@
.cookbook-cover {
width: 100%;
height: 200px;
height: 50%;
object-fit: cover;
flex-shrink: 0;
}
.cookbook-cover-placeholder {
width: 100%;
height: 200px;
height: 50%;
background: linear-gradient(135deg, #81c784 0%, #4caf50 100%);
display: flex;
align-items: center;
justify-content: center;
font-size: 4rem;
font-size: 2.5rem;
flex-shrink: 0;
}
.cookbook-info {
padding: 1.25rem;
padding: 0.5rem;
display: flex;
flex-direction: column;
flex: 1;
min-height: 0;
overflow: hidden;
}
.cookbook-info h3 {
font-size: 1.3rem;
font-size: 0.75rem;
color: #212121;
margin: 0 0 0.5rem 0;
margin: 0 0 0.25rem 0;
line-height: 1.2;
overflow: hidden;
text-overflow: ellipsis;
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
flex-shrink: 0;
}
.cookbook-info .description {
font-size: 0.95rem;
color: #666;
margin: 0 0 0.75rem 0;
line-height: 1.4;
display: none;
}
.cookbook-info .recipe-count {
font-size: 0.9rem;
.cookbook-info .cookbook-stats {
margin-top: auto;
display: flex;
flex-direction: column;
gap: 0.1rem;
}
.cookbook-info .recipe-count,
.cookbook-info .cookbook-count {
font-size: 0.6rem;
color: #2e7d32;
font-weight: 600;
margin: 0;
line-height: 1.2;
white-space: nowrap;
}
.cookbook-info .cookbook-tags {
display: none;
}
/* Column-specific styles for Cookbooks */
.cookbooks-grid.columns-3 .cookbook-info h3 {
font-size: 0.95rem;
}
.cookbooks-grid.columns-3 .recipe-count,
.cookbooks-grid.columns-3 .cookbook-count {
font-size: 0.75rem;
}
.cookbooks-grid.columns-5 .cookbook-info h3 {
font-size: 0.85rem;
}
.cookbooks-grid.columns-5 .recipe-count,
.cookbooks-grid.columns-5 .cookbook-count {
font-size: 0.7rem;
}
/* Recent Recipes Section */
@@ -102,77 +276,133 @@
margin-top: 3rem;
}
.section-header {
.recent-recipes-section h2 {
font-size: 1.8rem;
color: #1b5e20;
margin: 0;
}
.section-title-row {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 1.5rem;
}
.section-header h2 {
font-size: 1.8rem;
color: #1b5e20;
margin: 0;
}
.recipes-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
gap: 1.5rem;
}
.recipe-card {
background: white;
border-radius: 12px;
overflow: hidden;
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.1);
.recent-recipes-section .recipe-card {
cursor: pointer;
border: 1px solid #e0e0e0;
border-radius: 8px;
overflow: hidden;
background: white;
transition: transform 0.2s, box-shadow 0.2s;
display: flex;
flex-direction: column;
aspect-ratio: 1 / 1;
}
.recipe-card:hover {
transform: translateY(-4px);
box-shadow: 0 4px 16px rgba(0, 0, 0, 0.15);
.recent-recipes-section .recipe-card:hover {
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);
}
.recipe-image {
.recent-recipes-section .recipe-card img {
width: 100%;
height: 200px;
height: 60%;
object-fit: cover;
display: block;
flex-shrink: 0;
}
.recipe-image-placeholder {
width: 100%;
height: 200px;
height: 60%;
background: linear-gradient(135deg, #ffb74d 0%, #ff9800 100%);
display: flex;
align-items: center;
justify-content: center;
font-size: 4rem;
font-size: 3rem;
flex-shrink: 0;
}
.recipe-info {
padding: 1.25rem;
}
.recipe-info h3 {
font-size: 1.2rem;
color: #212121;
margin: 0 0 0.5rem 0;
}
.recipe-info .description {
font-size: 0.9rem;
color: #666;
margin: 0 0 0.75rem 0;
line-height: 1.4;
}
.recipe-meta {
.recent-recipes-section .recipe-info {
padding: 0.5rem;
flex: 1;
display: flex;
gap: 1rem;
flex-direction: column;
justify-content: space-between;
overflow: hidden;
min-height: 0;
}
.recent-recipes-section .recipe-info h3 {
margin: 0 0 0.25rem 0;
font-size: 0.75rem;
line-height: 1.2;
overflow: hidden;
text-overflow: ellipsis;
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
flex-shrink: 0;
}
.recent-recipes-section .recipe-info .description {
margin: 0;
font-size: 0.65rem;
color: #666;
overflow: hidden;
text-overflow: ellipsis;
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
flex-shrink: 1;
}
.recent-recipes-section .recipe-meta {
display: flex;
gap: 0.4rem;
font-size: 0.6rem;
color: #888;
flex-shrink: 0;
margin-top: auto;
}
/* Column-specific styles for Recent Recipes */
.recent-recipes-section .columns-3 .recipe-info h3 {
font-size: 0.95rem;
}
.recent-recipes-section .columns-3 .recipe-info .description {
font-size: 0.8rem;
-webkit-line-clamp: 2;
}
.recent-recipes-section .columns-3 .recipe-meta {
font-size: 0.75rem;
}
.recent-recipes-section .columns-5 .recipe-info h3 {
font-size: 0.85rem;
color: #757575;
}
.recent-recipes-section .columns-5 .recipe-info .description {
font-size: 0.75rem;
-webkit-line-clamp: 2;
}
.recent-recipes-section .columns-5 .recipe-meta {
font-size: 0.7rem;
}
.recent-recipes-section .columns-7 .recipe-info .description,
.recent-recipes-section .columns-9 .recipe-info .description {
display: none;
}
/* Empty State */
@@ -521,6 +751,20 @@
width: 100%;
}
.page-toolbar,
.pagination-toolbar {
flex-direction: column;
align-items: stretch;
padding: 1rem;
}
.display-controls,
.pagination-controls {
flex-direction: column;
align-items: stretch;
gap: 1rem;
}
.cookbooks-grid,
.recipes-grid {
grid-template-columns: 1fr;
@@ -534,11 +778,6 @@
margin-top: 0.5rem;
}
.cookbook-count {
font-size: 0.875rem;
color: #666;
}
/* Cookbook tags */
.cookbook-tags {
display: flex;

View File

@@ -0,0 +1,173 @@
.family-page {
padding: 1rem 0;
}
.family-page h2 {
margin-bottom: 1rem;
color: var(--text-primary);
}
.family-page h3,
.family-page h4 {
color: var(--text-primary);
}
.family-error {
background-color: #ffebee;
color: #d32f2f;
border: 1px solid #f5c2c7;
border-radius: 4px;
padding: 0.75rem 1rem;
margin-bottom: 1rem;
}
.family-create {
margin-bottom: 1.5rem;
}
.family-create-form {
display: flex;
gap: 0.75rem;
align-items: flex-end;
flex-wrap: wrap;
}
.family-create-form label {
display: flex;
flex-direction: column;
gap: 0.35rem;
flex: 1 1 260px;
color: var(--text-secondary);
font-size: 0.9rem;
}
.family-create-form input,
.family-invite-form input,
.family-invite-form select {
padding: 0.6rem 0.75rem;
border: 1px solid var(--border-color);
border-radius: 4px;
background-color: var(--bg-secondary);
color: var(--text-primary);
font-size: 1rem;
}
.family-layout {
display: grid;
grid-template-columns: 260px 1fr;
gap: 1.5rem;
}
@media (max-width: 720px) {
.family-layout {
grid-template-columns: 1fr;
}
}
.family-list h3,
.family-detail h3 {
margin-top: 0;
}
.family-list ul {
list-style: none;
padding: 0;
margin: 0;
}
.family-list li {
margin-bottom: 0.5rem;
}
.family-list li button {
width: 100%;
text-align: left;
padding: 0.75rem 1rem;
border: 1px solid var(--border-color);
border-radius: 6px;
background-color: var(--bg-secondary);
color: var(--text-primary);
cursor: pointer;
display: flex;
flex-direction: column;
gap: 0.25rem;
transition: border-color 0.2s, background-color 0.2s;
}
.family-list li button:hover {
border-color: var(--brand-primary);
background-color: var(--bg-tertiary);
}
.family-list li.active button {
border-color: var(--brand-primary);
background-color: var(--bg-tertiary);
box-shadow: inset 3px 0 0 var(--brand-primary);
}
.family-meta {
font-size: 0.8rem;
color: var(--text-secondary);
}
.family-detail-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 0.5rem;
}
.family-members {
width: 100%;
border-collapse: collapse;
margin-bottom: 1.5rem;
}
.family-members th,
.family-members td {
text-align: left;
padding: 0.6rem 0.75rem;
border-bottom: 1px solid var(--border-light);
color: var(--text-primary);
}
.family-members th {
color: var(--text-secondary);
font-weight: 600;
font-size: 0.85rem;
text-transform: uppercase;
letter-spacing: 0.03em;
}
.family-invite-form {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
}
.family-invite-form input[type="email"] {
flex: 1 1 240px;
}
.family-page button.danger {
background-color: #d32f2f;
color: white;
border: none;
padding: 0.5rem 1rem;
border-radius: 4px;
font-size: 0.9rem;
}
.family-page button.danger:hover {
background-color: #b71c1c;
}
.family-members button {
padding: 0.4rem 0.8rem;
font-size: 0.85rem;
}
.muted {
color: var(--text-secondary);
font-style: italic;
}

View File

@@ -0,0 +1,77 @@
.family-gate-overlay {
position: fixed;
inset: 0;
background: rgba(0, 0, 0, 0.55);
display: flex;
align-items: center;
justify-content: center;
z-index: 2000;
padding: 1rem;
}
.family-gate-modal {
background: var(--bg-secondary);
color: var(--text-primary);
border-radius: 8px;
max-width: 440px;
width: 100%;
padding: 1.75rem;
box-shadow: 0 12px 40px rgba(0, 0, 0, 0.25);
}
.family-gate-modal h2 {
margin: 0 0 0.5rem;
color: var(--brand-primary);
}
.family-gate-modal p {
margin: 0 0 1.25rem;
color: var(--text-secondary);
line-height: 1.45;
}
.family-gate-modal label {
display: block;
font-size: 0.9rem;
color: var(--text-secondary);
margin-bottom: 0.35rem;
}
.family-gate-modal input {
width: 100%;
padding: 0.6rem 0.75rem;
border: 1px solid var(--border-color);
border-radius: 4px;
background-color: var(--bg-primary);
color: var(--text-primary);
font-size: 1rem;
margin-bottom: 1rem;
box-sizing: border-box;
}
.family-gate-error {
background-color: #ffebee;
color: #d32f2f;
border: 1px solid #f5c2c7;
border-radius: 4px;
padding: 0.5rem 0.75rem;
margin-bottom: 1rem;
font-size: 0.9rem;
}
.family-gate-actions {
display: flex;
justify-content: flex-end;
gap: 0.75rem;
}
.family-gate-secondary {
background-color: transparent;
color: var(--text-secondary);
border: 1px solid var(--border-color);
}
.family-gate-secondary:hover {
background-color: var(--bg-tertiary);
color: var(--text-primary);
}

View File

@@ -266,9 +266,9 @@
overflow: hidden;
background: var(--bg-primary, #ffffff);
transition: transform 0.2s, box-shadow 0.2s;
aspect-ratio: 1 / 1;
display: flex;
flex-direction: column;
aspect-ratio: 1 / 1;
}
.recipe-grid-enhanced .recipe-card:hover {
@@ -278,7 +278,7 @@
.recipe-grid-enhanced .recipe-card img {
width: 100%;
height: 65%;
height: 60%;
object-fit: cover;
display: block;
flex-shrink: 0;
@@ -327,6 +327,38 @@
margin-top: auto;
}
/* Column-specific styles for recipe grid */
.recipe-grid-enhanced.columns-3 .recipe-card-content h3 {
font-size: 0.95rem;
}
.recipe-grid-enhanced.columns-3 .recipe-card-content p {
font-size: 0.8rem;
-webkit-line-clamp: 2;
}
.recipe-grid-enhanced.columns-3 .recipe-meta {
font-size: 0.75rem;
}
.recipe-grid-enhanced.columns-5 .recipe-card-content h3 {
font-size: 0.85rem;
}
.recipe-grid-enhanced.columns-5 .recipe-card-content p {
font-size: 0.75rem;
-webkit-line-clamp: 2;
}
.recipe-grid-enhanced.columns-5 .recipe-meta {
font-size: 0.7rem;
}
.recipe-grid-enhanced.columns-7 .recipe-card-content p,
.recipe-grid-enhanced.columns-9 .recipe-card-content p {
display: none;
}
/* Empty state */
.empty-state {
text-align: center;

View File

@@ -3,4 +3,4 @@
* Example: 2026.01.002 (January 2026, patch 2), 2026.02.003 (February 2026, patch 3)
* Month and patch are zero-padded. Patch increments with each deployment in a month.
*/
export const APP_VERSION = '2026.01.004';
export const APP_VERSION = '2026.04.008';
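The `YYYY.MM.PPP` scheme described in the comment above (zero-padded month, per-month patch counter) can be sketched as a small shell helper; the `next_version` function and its usage are illustrative, not part of the repository:

```bash
# Hypothetical helper: compute the next APP_VERSION under the
# YYYY.MM.PPP scheme (zero-padded month, patch resets each month).
next_version() {
  local current="$1" ym="$2" patch
  if [[ "$current" == "$ym".* ]]; then
    # Same month: bump the patch (10# forces base 10 despite leading zeros)
    patch=$((10#${current##*.} + 1))
  else
    # New month: reset the patch counter
    patch=1
  fi
  printf '%s.%03d\n' "$ym" "$patch"
}

next_version "2026.04.008" "2026.04"   # → 2026.04.009
next_version "2026.04.008" "2026.05"   # → 2026.05.001
```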

View File

@@ -0,0 +1,458 @@
# PostgreSQL Backup Scripts
Comprehensive backup and restore scripts for PostgreSQL databases.
## Scripts Overview
### 1. `backup-all-postgres-databases.sh`
Backs up all databases on a PostgreSQL server (excluding system databases).
**Features:**
- ✅ Backs up all user databases automatically
- ✅ Includes global objects (roles, tablespaces)
- ✅ Optional gzip compression
- ✅ Automatic retention management
- ✅ Integrity verification
- ✅ Detailed logging with color output
- ✅ Backup summary reporting
- ✅ Email/Slack notification support (optional)
### 2. `restore-postgres-database.sh`
Restores a single database from backup.
**Features:**
- ✅ Safety backup before restore
- ✅ Interactive confirmation
- ✅ Automatic database name detection
- ✅ Compressed file support
- ✅ Integrity verification
- ✅ Post-restore verification
---
## Quick Start
### Backup All Databases
```bash
# Basic usage
./backup-all-postgres-databases.sh
# With compression (recommended)
./backup-all-postgres-databases.sh -c
# Custom configuration
./backup-all-postgres-databases.sh \
-h db.example.com \
-U postgres \
-d /mnt/backups \
-r 60 \
-c
```
### Restore a Database
```bash
# Interactive restore (with confirmation)
./restore-postgres-database.sh /var/backups/postgresql/20260120/mydb_20260120_020001.sql.gz
# Force restore (skip confirmation)
./restore-postgres-database.sh backup.sql.gz -d mydb -f
```
---
## Detailed Usage
### Backup Script Options
```bash
./backup-all-postgres-databases.sh [options]
Options:
-h HOST Database host (default: localhost)
-p PORT Database port (default: 5432)
-U USER Database user (default: postgres)
-d BACKUP_DIR Backup directory (default: /var/backups/postgresql)
-r DAYS Retention days (default: 30)
-c Enable compression (gzip)
-v Verbose output
-H Show help
```
### Restore Script Options
```bash
./restore-postgres-database.sh <backup_file> [options]
Options:
-h HOST Database host (default: localhost)
-p PORT Database port (default: 5432)
-U USER Database user (default: postgres)
-d DBNAME Target database name (default: from filename)
-f Force restore (skip confirmation)
-v Verbose output
-H Show help
```
---
## Automated Backups with Cron
### Daily Backups (Recommended)
```bash
# Edit crontab
sudo crontab -e
# Add daily backup at 2 AM with compression
0 2 * * * /path/to/backup-all-postgres-databases.sh -c >> /var/log/postgres-backup.log 2>&1
```
### Alternative Schedules
```bash
# Every 6 hours
0 */6 * * * /path/to/backup-all-postgres-databases.sh -c
# Twice daily (2 AM and 2 PM)
0 2,14 * * * /path/to/backup-all-postgres-databases.sh -c
# Weekly on Sundays at 3 AM
0 3 * * 0 /path/to/backup-all-postgres-databases.sh -c -r 90
```
---
## Backup Directory Structure
```
/var/backups/postgresql/
├── 20260120/ # Date-based subdirectory
│ ├── globals_20260120_020001.sql.gz # Global objects backup
│ ├── basil_20260120_020001.sql.gz # Database backup
│ ├── myapp_20260120_020001.sql.gz # Database backup
│ └── wiki_20260120_020001.sql.gz # Database backup
├── 20260121/
│ ├── globals_20260121_020001.sql.gz
│ └── ...
└── 20260122/
└── ...
```
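Given the dated layout above, a small helper can locate the most recent backup of a given database; `latest_backup` is a hypothetical name, assuming only the directory structure shown:

```bash
# Hypothetical helper: print the newest backup file for a database,
# assuming the dated /var/backups/postgresql/YYYYMMDD/ layout above.
latest_backup() {
  local backup_dir="$1" db="$2"
  # ls -1t sorts by modification time, newest first
  ls -1t "$backup_dir"/*/"${db}"_*.sql.gz 2>/dev/null | head -n 1
}

# Example: latest_backup /var/backups/postgresql basil
```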
---
## Configuration Examples
### Local PostgreSQL Server
```bash
./backup-all-postgres-databases.sh \
-h localhost \
-U postgres \
-c
```
### Remote PostgreSQL Server
```bash
./backup-all-postgres-databases.sh \
-h db.example.com \
-p 5432 \
-U backup_user \
-d /mnt/network/backups \
-r 60 \
-c \
-v
```
### High-Frequency Backups
```bash
# Short retention for frequent backups
./backup-all-postgres-databases.sh \
-r 7 \
-c
```
---
## Authentication Setup
### Option 1: .pgpass File (Recommended)
Create `~/.pgpass` with connection credentials:
```bash
echo "localhost:5432:*:postgres:your-password" >> ~/.pgpass
chmod 600 ~/.pgpass
```
Format: `hostname:port:database:username:password`
### Option 2: Environment Variables
```bash
export PGPASSWORD="your-password"
./backup-all-postgres-databases.sh
```
### Option 3: Peer Authentication (Local Only)
Run as the postgres system user:
```bash
sudo -u postgres ./backup-all-postgres-databases.sh
```
---
## Monitoring and Notifications
### Email Notifications
Edit the scripts and uncomment the email notification section:
```bash
# In backup-all-postgres-databases.sh, uncomment:
if command -v mail &> /dev/null; then
echo "$summary" | mail -s "PostgreSQL Backup $status - $(hostname)" admin@example.com
fi
```
### Slack Notifications
Set webhook URL and uncomment:
```bash
export SLACK_WEBHOOK_URL="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
# In script, uncomment:
if [ -n "$SLACK_WEBHOOK_URL" ]; then
curl -X POST "$SLACK_WEBHOOK_URL" \
-H 'Content-Type: application/json' \
-d "{\"text\":\"PostgreSQL Backup $status\n$summary\"}"
fi
```
### Log Rotation
Create `/etc/logrotate.d/postgres-backup`:
```
/var/log/postgres-backup.log {
daily
rotate 30
compress
delaycompress
missingok
notifempty
}
```
---
## Backup Verification
### Manual Verification
```bash
# List backups
ls -lh /var/backups/postgresql/$(date +%Y%m%d)/
# Verify compressed file integrity
gzip -t /var/backups/postgresql/20260120/basil_20260120_020001.sql.gz
# Preview backup contents
gunzip -c backup.sql.gz | head -50
```
### Test Restore (Recommended Monthly)
```bash
# Restore to a test database
./restore-postgres-database.sh backup.sql.gz -d test_restore -f
# Verify
psql -d test_restore -c "\dt"
# Cleanup
dropdb test_restore
```
---
## Disaster Recovery
### Full Server Restore
1. **Install PostgreSQL** on new server
2. **Restore global objects first**:
```bash
gunzip -c globals_YYYYMMDD_HHMMSS.sql.gz | psql -U postgres -d postgres
```
3. **Restore each database**:
```bash
./restore-postgres-database.sh basil_20260120_020001.sql.gz
./restore-postgres-database.sh myapp_20260120_020001.sql.gz
```
### Point-in-Time Recovery
For PITR capabilities, enable WAL archiving in `postgresql.conf`:
```
wal_level = replica
archive_mode = on
archive_command = 'cp %p /var/lib/postgresql/wal_archive/%f'
max_wal_senders = 3
```
Then use `pg_basebackup` and WAL replay for PITR.
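A minimal recovery sketch under the archiving settings above; the paths, the recovery target time, and the data directory location are illustrative assumptions, and the commands require a live server (PostgreSQL 12+ recovery configuration):

```bash
# Take a base backup from the running server (illustrative paths)
pg_basebackup -h localhost -U postgres -D /var/lib/postgresql/base_backup -X stream -P

# To recover: copy the base backup into the data directory, then point
# PostgreSQL at the WAL archive and choose a stop point before starting it.
cat >> /var/lib/postgresql/data/postgresql.auto.conf <<'EOF'
restore_command = 'cp /var/lib/postgresql/wal_archive/%f %p'
recovery_target_time = '2026-01-20 01:55:00'
EOF
touch /var/lib/postgresql/data/recovery.signal   # signals recovery mode on startup
```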
---
## Troubleshooting
### Permission Denied
```bash
# Fix backup directory permissions
sudo chown -R postgres:postgres /var/backups/postgresql
sudo chmod 755 /var/backups/postgresql
# Fix script permissions
chmod +x backup-all-postgres-databases.sh
```
### Connection Failed
```bash
# Test connection manually
psql -h localhost -U postgres -c "SELECT version();"
# Check pg_hba.conf
sudo cat /etc/postgresql/*/main/pg_hba.conf
# Ensure proper authentication line exists:
# local all postgres peer
# host all all 127.0.0.1/32 scram-sha-256
```
### Out of Disk Space
```bash
# Check disk usage
df -h /var/backups
# Clean old backups manually
find /var/backups/postgresql -type d -name "????????" -mtime +30 -exec rm -rf {} \;
# Reduce retention period
./backup-all-postgres-databases.sh -r 7
```
### Backup File Corrupted
```bash
# Verify integrity
gzip -t backup.sql.gz
# If corrupted, use previous backup
ls -lt /var/backups/postgresql/*/basil_*.sql.gz | head
```
---
## Performance Optimization
### Large Databases
For very large databases, consider:
```bash
# Parallel dump (PostgreSQL 9.3+)
pg_dump -Fd -j 4 -f backup_directory mydb
# Custom format (smaller, faster restore)
pg_dump -Fc mydb > backup.custom
# Restore from custom format
pg_restore -d mydb backup.custom
```
### Network Backups
```bash
# Direct SSH backup (no local storage)
pg_dump mydb | gzip | ssh backup-server "cat > /backups/mydb.sql.gz"
```
---
## Best Practices
1. **Always test restores** - Backups are worthless if you can't restore
2. **Monitor backup completion** - Set up alerts for failed backups
3. **Use compression** - Saves 80-90% of disk space
4. **Multiple backup locations** - Local + remote/cloud storage
5. **Verify backup integrity** - Run gzip -t on compressed backups
6. **Document procedures** - Keep runbooks for disaster recovery
7. **Encrypt sensitive backups** - Use gpg for encryption if needed
8. **Regular retention review** - Adjust based on compliance requirements
---
## Security Considerations
### Encryption at Rest
```bash
# Encrypt backup with GPG
pg_dump mydb | gzip | gpg --encrypt --recipient admin@example.com > backup.sql.gz.gpg
# Decrypt for restore
gpg --decrypt backup.sql.gz.gpg | gunzip | psql mydb
```
### Secure Transfer
```bash
# Use SCP with key authentication
scp -i ~/.ssh/backup_key backup.sql.gz backup-server:/secure/backups/
# Or rsync over SSH
rsync -av -e "ssh -i ~/.ssh/backup_key" \
/var/backups/postgresql/ \
backup-server:/secure/backups/
```
### Access Control
```bash
# Restrict backup directory permissions
chmod 700 /var/backups/postgresql
chown postgres:postgres /var/backups/postgresql
# Restrict script permissions
chmod 750 backup-all-postgres-databases.sh
chown root:postgres backup-all-postgres-databases.sh
```
---
## Additional Resources
- [PostgreSQL Backup Documentation](https://www.postgresql.org/docs/current/backup.html)
- [pg_dump Manual](https://www.postgresql.org/docs/current/app-pgdump.html)
- [pg_restore Manual](https://www.postgresql.org/docs/current/app-pgrestore.html)
- [Continuous Archiving and PITR](https://www.postgresql.org/docs/current/continuous-archiving.html)
---
## Support
For issues or questions:
- Check script help: `./backup-all-postgres-databases.sh -H`
- Review logs: `tail -f /var/log/postgres-backup.log`
- Test connection: `psql -h localhost -U postgres`

View File

@@ -0,0 +1,402 @@
#!/bin/bash
#
# PostgreSQL All Databases Backup Script
# Backs up all databases on a PostgreSQL server using pg_dump
#
# Usage:
# ./backup-all-postgres-databases.sh [options]
#
# Options:
# -h HOST Database host (default: localhost)
# -p PORT Database port (default: 5432)
# -U USER Database user (default: postgres)
# -d BACKUP_DIR Backup directory (default: /var/backups/postgresql)
# -r DAYS Retention days (default: 30)
# -c Enable compression (gzip)
# -v Verbose output
#
# Cron example (daily at 2 AM):
# 0 2 * * * /path/to/backup-all-postgres-databases.sh -c >> /var/log/postgres-backup.log 2>&1
set -e
set -o pipefail
# Default configuration
DB_HOST="localhost"
DB_PORT="5432"
DB_USER="postgres"
BACKUP_DIR="/var/backups/postgresql"
RETENTION_DAYS=30
COMPRESS=false
VERBOSE=false
# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Parse command line arguments
while getopts "h:p:U:d:r:cvH" opt; do
case $opt in
h) DB_HOST="$OPTARG" ;;
p) DB_PORT="$OPTARG" ;;
U) DB_USER="$OPTARG" ;;
d) BACKUP_DIR="$OPTARG" ;;
r) RETENTION_DAYS="$OPTARG" ;;
c) COMPRESS=true ;;
v) VERBOSE=true ;;
H)
echo "PostgreSQL All Databases Backup Script"
echo ""
echo "Usage: $0 [options]"
echo ""
echo "Options:"
echo " -h HOST Database host (default: localhost)"
echo " -p PORT Database port (default: 5432)"
echo " -U USER Database user (default: postgres)"
echo " -d BACKUP_DIR Backup directory (default: /var/backups/postgresql)"
echo " -r DAYS Retention days (default: 30)"
echo " -c Enable compression (gzip)"
echo " -v Verbose output"
echo " -H Show this help"
echo ""
exit 0
;;
\?)
echo "Invalid option: -$OPTARG" >&2
exit 1
;;
esac
done
# Logging functions
log_info() {
echo -e "${GREEN}[INFO]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1"
}
log_warn() {
echo -e "${YELLOW}[WARN]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1" >&2
}
log_debug() {
if [ "$VERBOSE" = true ]; then
# Send debug output to stderr so it is not captured by command
# substitutions such as $(create_backup_dirs) and $(get_databases).
echo -e "${BLUE}[DEBUG]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1" >&2
fi
}
# Check dependencies
check_dependencies() {
log_debug "Checking dependencies..."
if ! command -v psql &> /dev/null; then
log_error "psql not found. Please install PostgreSQL client tools."
exit 1
fi
if ! command -v pg_dump &> /dev/null; then
log_error "pg_dump not found. Please install PostgreSQL client tools."
exit 1
fi
if [ "$COMPRESS" = true ] && ! command -v gzip &> /dev/null; then
log_error "gzip not found. Please install gzip or disable compression."
exit 1
fi
log_debug "All dependencies satisfied"
}
# Test database connection
test_connection() {
log_debug "Testing database connection..."
if ! psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d postgres -c "SELECT version();" &> /dev/null; then
log_error "Cannot connect to PostgreSQL server at $DB_HOST:$DB_PORT"
log_error "Check credentials, network connectivity, and pg_hba.conf settings"
exit 1
fi
log_debug "Database connection successful"
}
# Create backup directory structure
create_backup_dirs() {
local timestamp=$(date +%Y%m%d)
local backup_subdir="$BACKUP_DIR/$timestamp"
log_debug "Creating backup directory: $backup_subdir"
mkdir -p "$backup_subdir"
if [ ! -w "$backup_subdir" ]; then
log_error "Backup directory is not writable: $backup_subdir"
exit 1
fi
echo "$backup_subdir"
}
# Get list of databases to backup
get_databases() {
log_debug "Retrieving database list..."
# Get all databases except system databases
local databases=$(psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d postgres -t -c \
"SELECT datname FROM pg_database
WHERE datname NOT IN ('postgres', 'template0', 'template1')
AND datistemplate = false
ORDER BY datname;")
if [ -z "$databases" ]; then
log_warn "No user databases found to backup"
return 1
fi
echo "$databases"
}
# Backup a single database
backup_database() {
local db_name="$1"
local backup_dir="$2"
local timestamp=$(date +%Y%m%d_%H%M%S)
local backup_file="$backup_dir/${db_name}_${timestamp}.sql"
log_info "Backing up database: $db_name"
# Add compression extension if enabled
if [ "$COMPRESS" = true ]; then
backup_file="${backup_file}.gz"
fi
# Perform backup
local start_time=$(date +%s)
if [ "$COMPRESS" = true ]; then
if pg_dump -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$db_name" \
--no-owner --no-privileges --create --clean | gzip > "$backup_file"; then
local status="SUCCESS"
else
local status="FAILED"
fi
else
if pg_dump -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$db_name" \
--no-owner --no-privileges --create --clean > "$backup_file"; then
local status="SUCCESS"
else
local status="FAILED"
fi
fi
local end_time=$(date +%s)
local duration=$((end_time - start_time))
if [ "$status" = "SUCCESS" ]; then
# Verify backup file exists and has content
if [ ! -s "$backup_file" ]; then
log_error "Backup file is empty: $backup_file"
return 1
fi
# Verify compressed file integrity if compression is enabled
if [ "$COMPRESS" = true ]; then
if ! gzip -t "$backup_file" 2>/dev/null; then
log_error "Backup file is corrupted: $backup_file"
return 1
fi
fi
local file_size=$(du -h "$backup_file" | cut -f1)
log_info "$db_name backup completed - Size: $file_size, Duration: ${duration}s"
log_debug " File: $backup_file"
return 0
else
log_error "$db_name backup failed"
# Remove failed backup file
rm -f "$backup_file"
return 1
fi
}
# Backup global objects (roles, tablespaces, etc.)
backup_globals() {
local backup_dir="$1"
local timestamp=$(date +%Y%m%d_%H%M%S)
local backup_file="$backup_dir/globals_${timestamp}.sql"
log_info "Backing up global objects (roles, tablespaces)..."
if [ "$COMPRESS" = true ]; then
backup_file="${backup_file}.gz"
fi
if [ "$COMPRESS" = true ]; then
if pg_dumpall -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" --globals-only | gzip > "$backup_file"; then
local status="SUCCESS"
else
local status="FAILED"
fi
else
if pg_dumpall -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" --globals-only > "$backup_file"; then
local status="SUCCESS"
else
local status="FAILED"
fi
fi
if [ "$status" = "SUCCESS" ]; then
local file_size=$(du -h "$backup_file" | cut -f1)
log_info "✓ Global objects backup completed - Size: $file_size"
return 0
else
log_error "✗ Global objects backup failed"
rm -f "$backup_file"
return 1
fi
}
# Clean up old backups
cleanup_old_backups() {
log_info "Cleaning up backups older than $RETENTION_DAYS days..."
local deleted_count=0
# Find and delete old backup directories
while IFS= read -r old_dir; do
log_debug "Deleting old backup directory: $old_dir"
rm -rf "$old_dir"
deleted_count=$((deleted_count + 1)) # ((var++)) returns status 1 when var is 0, which would abort under set -e
done < <(find "$BACKUP_DIR" -maxdepth 1 -type d -name "????????" -mtime +$RETENTION_DAYS 2>/dev/null)
if [ $deleted_count -gt 0 ]; then
log_info "Deleted $deleted_count old backup directories"
else
log_debug "No old backups to delete"
fi
}
# Generate backup summary
generate_summary() {
local backup_dir="$1"
local total_dbs="$2"
local successful_dbs="$3"
local failed_dbs="$4"
local total_size=$(du -sh "$backup_dir" 2>/dev/null | cut -f1)
echo ""
log_info "================================================"
log_info "Backup Summary"
log_info "================================================"
log_info "Backup Directory: $backup_dir"
log_info "Total Databases: $total_dbs"
log_info "Successful: $successful_dbs"
log_info "Failed: $failed_dbs"
log_info "Total Size: $total_size"
log_info "Retention: $RETENTION_DAYS days"
log_info "Compression: $([ "$COMPRESS" = true ] && echo "Enabled" || echo "Disabled")"
log_info "================================================"
echo ""
}
# Send notification (optional)
send_notification() {
local status="$1"
local summary="$2"
# Uncomment and configure to enable email notifications
# if command -v mail &> /dev/null; then
# echo "$summary" | mail -s "PostgreSQL Backup $status - $(hostname)" your-email@example.com
# fi
# Uncomment and configure to enable Slack notifications
# if [ -n "$SLACK_WEBHOOK_URL" ]; then
# curl -X POST "$SLACK_WEBHOOK_URL" \
# -H 'Content-Type: application/json' \
# -d "{\"text\":\"PostgreSQL Backup $status\n$summary\"}"
# fi
}
# Main execution
main() {
local start_time=$(date +%s)
log_info "================================================"
log_info "PostgreSQL All Databases Backup Script"
log_info "================================================"
log_info "Host: $DB_HOST:$DB_PORT"
log_info "User: $DB_USER"
log_info "Backup Directory: $BACKUP_DIR"
log_info "Compression: $([ "$COMPRESS" = true ] && echo "Enabled" || echo "Disabled")"
log_info "Retention: $RETENTION_DAYS days"
log_info "================================================"
echo ""
# Perform checks
check_dependencies
test_connection
# Create backup directory
local backup_subdir
backup_subdir=$(create_backup_dirs) # separate declaration so set -e catches a failure inside the substitution
# Get list of databases
local databases=$(get_databases)
if [ -z "$databases" ]; then
log_warn "No databases to backup. Exiting."
exit 0
fi
# Backup global objects first
backup_globals "$backup_subdir"
# Backup each database
local total_dbs=0
local successful_dbs=0
local failed_dbs=0
while IFS= read -r db; do
# Trim whitespace
db=$(echo "$db" | xargs)
if [ -n "$db" ]; then
total_dbs=$((total_dbs + 1)) # ((var++)) returns status 1 when var is 0, which would abort under set -e
if backup_database "$db" "$backup_subdir"; then
successful_dbs=$((successful_dbs + 1))
else
failed_dbs=$((failed_dbs + 1))
fi
fi
done <<< "$databases"
# Cleanup old backups
cleanup_old_backups
# Calculate total execution time
local end_time=$(date +%s)
local total_duration=$((end_time - start_time))
# Generate summary
generate_summary "$backup_subdir" "$total_dbs" "$successful_dbs" "$failed_dbs"
log_info "Total execution time: ${total_duration}s"
# Send notification
if [ $failed_dbs -gt 0 ]; then
send_notification "COMPLETED WITH ERRORS" "$(generate_summary "$backup_subdir" "$total_dbs" "$successful_dbs" "$failed_dbs")"
exit 1
else
send_notification "SUCCESS" "$(generate_summary "$backup_subdir" "$total_dbs" "$successful_dbs" "$failed_dbs")"
log_info "All backups completed successfully! ✓"
exit 0
fi
}
# Run main function
main

View File

@@ -0,0 +1,74 @@
#!/bin/bash
#
# Basil Backup Script for Standalone PostgreSQL
# Place on database server and run via cron
#
# Cron example (daily at 2 AM):
# 0 2 * * * /path/to/backup-standalone-postgres.sh
set -e
set -o pipefail # without pipefail, a pg_dump failure is masked by gzip succeeding in the pipeline below
# Configuration
DB_HOST="localhost"
DB_PORT="5432"
DB_NAME="basil"
DB_USER="basil"
BACKUP_DIR="/var/backups/basil"
RETENTION_DAYS=30
# Create backup directories
mkdir -p "$BACKUP_DIR/daily"
mkdir -p "$BACKUP_DIR/weekly"
mkdir -p "$BACKUP_DIR/monthly"
# Timestamp
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
DATE=$(date +%Y%m%d)
DAY_OF_WEEK=$(date +%u) # 1=Monday, 7=Sunday
DAY_OF_MONTH=$(date +%d)
# Daily backup
echo "Starting daily backup: $TIMESTAMP"
DAILY_BACKUP="$BACKUP_DIR/daily/basil-$DATE.sql.gz"
pg_dump -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" "$DB_NAME" | gzip > "$DAILY_BACKUP"
echo "Daily backup completed: $DAILY_BACKUP"
# Weekly backup (on Sundays)
if [ "$DAY_OF_WEEK" -eq 7 ]; then
echo "Creating weekly backup"
WEEK=$(date +%V)
WEEKLY_BACKUP="$BACKUP_DIR/weekly/basil-week$WEEK-$DATE.sql.gz"
cp "$DAILY_BACKUP" "$WEEKLY_BACKUP"
echo "Weekly backup completed: $WEEKLY_BACKUP"
fi
# Monthly backup (on 1st of month)
if [ "$DAY_OF_MONTH" -eq 01 ]; then
echo "Creating monthly backup"
MONTH=$(date +%Y%m)
MONTHLY_BACKUP="$BACKUP_DIR/monthly/basil-$MONTH.sql.gz"
cp "$DAILY_BACKUP" "$MONTHLY_BACKUP"
echo "Monthly backup completed: $MONTHLY_BACKUP"
fi
# Cleanup old backups
echo "Cleaning up old backups..."
find "$BACKUP_DIR/daily" -name "basil-*.sql.gz" -mtime +$RETENTION_DAYS -delete
find "$BACKUP_DIR/weekly" -name "basil-*.sql.gz" -mtime +90 -delete
find "$BACKUP_DIR/monthly" -name "basil-*.sql.gz" -mtime +365 -delete
# Verify backup integrity
echo "Verifying backup integrity..."
if gzip -t "$DAILY_BACKUP"; then
BACKUP_SIZE=$(du -h "$DAILY_BACKUP" | cut -f1)
echo "Backup verification successful. Size: $BACKUP_SIZE"
else
echo "ERROR: Backup verification failed!" >&2
exit 1
fi
# Optional: Send notification (uncomment to enable)
# echo "Basil backup completed successfully on $(hostname) at $(date)" | \
# mail -s "Basil Backup Success" your-email@example.com
echo "Backup process completed successfully"

View File

@@ -131,6 +131,32 @@ EOF
log "Docker Compose override file created"
}
# Apply database migrations using the newly-pulled API image.
# Runs before restart so a failed migration leaves the old containers running.
run_migrations() {
log "Applying database migrations..."
if [ -z "$DATABASE_URL" ]; then
error "DATABASE_URL not set in .env — cannot apply migrations"
exit 1
fi
local API_IMAGE="${DOCKER_REGISTRY}/${DOCKER_USERNAME}/basil-api:${IMAGE_TAG}"
# Use --network=host so the container can reach the same DB host the app uses.
# schema.prisma and migrations/ ship inside the API image.
docker run --rm \
--network host \
-e DATABASE_URL="$DATABASE_URL" \
"$API_IMAGE" \
npx prisma migrate deploy || {
error "Migration failed — aborting deploy. Old containers are still running."
exit 1
}
log "Migrations applied successfully"
}
# Restart containers
restart_containers() {
log "Restarting containers..."
@@ -219,6 +245,7 @@ main() {
login_to_harbor
create_backup
pull_images
run_migrations
update_docker_compose
restart_containers
health_check

View File

@@ -0,0 +1,396 @@
#!/bin/bash
#
# PostgreSQL Database Restore Script
# Restores a single database from backup created by backup-all-postgres-databases.sh
#
# Usage:
#   ./restore-postgres-database.sh <backup_file> [options]
#
# Options:
#   -h HOST    Database host (default: localhost)
#   -p PORT    Database port (default: 5432)
#   -U USER    Database user (default: postgres)
#   -d DBNAME  Target database name (default: extracted from backup filename)
#   -f         Force restore (skip confirmation)
#   -v         Verbose output
#   -H         Show this help
#
# Examples:
#   ./restore-postgres-database.sh /var/backups/postgresql/20260120/mydb_20260120_020001.sql.gz
#   ./restore-postgres-database.sh backup.sql -d mydb -f
set -e
set -o pipefail
# Default configuration
DB_HOST="localhost"
DB_PORT="5432"
DB_USER="postgres"
DB_NAME=""
FORCE=false
VERBOSE=false
# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Logging functions
log_info() {
    echo -e "${GREEN}[INFO]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1"
}
log_warn() {
    echo -e "${YELLOW}[WARN]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1"
}
log_error() {
    echo -e "${RED}[ERROR]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1" >&2
}
log_debug() {
    if [ "$VERBOSE" = true ]; then
        echo -e "${BLUE}[DEBUG]${NC} $(date '+%Y-%m-%d %H:%M:%S') - $1"
    fi
}
# Show usage
show_usage() {
    echo "PostgreSQL Database Restore Script"
    echo ""
    echo "Usage: $0 <backup_file> [options]"
    echo ""
    echo "Options:"
    echo "  -h HOST    Database host (default: localhost)"
    echo "  -p PORT    Database port (default: 5432)"
    echo "  -U USER    Database user (default: postgres)"
    echo "  -d DBNAME  Target database name (default: extracted from filename)"
    echo "  -f         Force restore (skip confirmation)"
    echo "  -v         Verbose output"
    echo "  -H         Show this help"
    echo ""
    echo "Examples:"
    echo "  $0 /var/backups/postgresql/20260120/mydb_20260120_020001.sql.gz"
    echo "  $0 backup.sql -d mydb -f"
    echo ""
}
# Extract database name from backup filename
extract_db_name() {
    local filename=$(basename "$1")
    # Remove extension(s) and timestamp
    # Format: dbname_YYYYMMDD_HHMMSS.sql[.gz]
    echo "$filename" | sed -E 's/_[0-9]{8}_[0-9]{6}\.sql(\.gz)?$//'
}
# Check if file is compressed
is_compressed() {
    [[ "$1" == *.gz ]]
}
# Verify backup file
verify_backup() {
    local backup_file="$1"
    log_debug "Verifying backup file: $backup_file"
    if [ ! -f "$backup_file" ]; then
        log_error "Backup file not found: $backup_file"
        exit 1
    fi
    if [ ! -r "$backup_file" ]; then
        log_error "Backup file is not readable: $backup_file"
        exit 1
    fi
    if [ ! -s "$backup_file" ]; then
        log_error "Backup file is empty: $backup_file"
        exit 1
    fi
    # Verify compressed file integrity
    if is_compressed "$backup_file"; then
        log_debug "Verifying gzip integrity..."
        if ! gzip -t "$backup_file" 2>/dev/null; then
            log_error "Backup file is corrupted (gzip test failed)"
            exit 1
        fi
    fi
    log_debug "Backup file verification passed"
}
# Test database connection
test_connection() {
    log_debug "Testing database connection..."
    if ! psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d postgres -c "SELECT version();" &> /dev/null; then
        log_error "Cannot connect to PostgreSQL server at $DB_HOST:$DB_PORT"
        log_error "Check credentials, network connectivity, and pg_hba.conf settings"
        exit 1
    fi
    log_debug "Database connection successful"
}
# Check if database exists
database_exists() {
    local db_name="$1"
    psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d postgres -t -c \
        "SELECT 1 FROM pg_database WHERE datname='$db_name';" | grep -q 1
}
# Create safety backup
create_safety_backup() {
    local db_name="$1"
    local timestamp=$(date +%Y%m%d_%H%M%S)
    local safety_file="/tmp/${db_name}_pre-restore_${timestamp}.sql.gz"
    # Log to stderr: callers capture this function's stdout via command
    # substitution, so stdout must contain only the backup path.
    log_info "Creating safety backup before restore..." >&2
    if pg_dump -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$db_name" | gzip > "$safety_file"; then
        log_info "Safety backup created: $safety_file" >&2
        echo "$safety_file"
        return 0
    else
        log_error "Failed to create safety backup"
        return 1
    fi
}
# Drop and recreate database
recreate_database() {
    local db_name="$1"
    log_info "Dropping and recreating database: $db_name"
    # Terminate existing connections
    psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d postgres <<EOF
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = '$db_name' AND pid <> pg_backend_pid();
EOF
    # Drop and recreate. ON_ERROR_STOP makes psql exit non-zero on SQL
    # errors so a failed DROP/CREATE aborts the script under set -e.
    psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d postgres -v ON_ERROR_STOP=1 <<EOF
DROP DATABASE IF EXISTS "$db_name";
CREATE DATABASE "$db_name";
EOF
    log_debug "Database recreated successfully"
}
# Restore database
restore_database() {
local backup_file="$1"
local db_name="$2"
log_info "Restoring database from: $backup_file"
local start_time=$(date +%s)
# Restore based on compression
if is_compressed "$backup_file"; then
if gunzip -c "$backup_file" | psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d postgres -v ON_ERROR_STOP=1; then
local status="SUCCESS"
else
local status="FAILED"
fi
else
if psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d postgres -f "$backup_file" -v ON_ERROR_STOP=1; then
local status="SUCCESS"
else
local status="FAILED"
fi
fi
local end_time=$(date +%s)
local duration=$((end_time - start_time))
if [ "$status" = "SUCCESS" ]; then
log_info "✓ Database restore completed in ${duration}s"
return 0
else
log_error "✗ Database restore failed"
return 1
fi
}
# Verify restore
verify_restore() {
    local db_name="$1"
    log_info "Verifying restored database..."
    # Check if database exists
    if ! database_exists "$db_name"; then
        log_error "Database not found after restore: $db_name"
        return 1
    fi
    # Get table count
    local table_count=$(psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$db_name" -t -c \
        "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = 'public';")
    table_count=$(echo "$table_count" | xargs)
    # Get row count estimate
    local row_count=$(psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$db_name" -t -c \
        "SELECT SUM(n_live_tup) FROM pg_stat_user_tables;")
    row_count=$(echo "$row_count" | xargs)
    row_count=${row_count:-0}
    log_info "Database: $db_name"
    log_info "Tables: $table_count"
    log_info "Approximate rows: $row_count"
    return 0
}
# Parse command line arguments
BACKUP_FILE=""
while [[ $# -gt 0 ]]; do
    case $1 in
        -h)
            DB_HOST="$2"
            shift 2
            ;;
        -p)
            DB_PORT="$2"
            shift 2
            ;;
        -U)
            DB_USER="$2"
            shift 2
            ;;
        -d)
            DB_NAME="$2"
            shift 2
            ;;
        -f)
            FORCE=true
            shift
            ;;
        -v)
            VERBOSE=true
            shift
            ;;
        -H)
            show_usage
            exit 0
            ;;
        -*)
            log_error "Unknown option: $1"
            show_usage
            exit 1
            ;;
        *)
            if [ -z "$BACKUP_FILE" ]; then
                BACKUP_FILE="$1"
            else
                log_error "Multiple backup files specified"
                show_usage
                exit 1
            fi
            shift
            ;;
    esac
done
# Main execution
main() {
    # Check if backup file was provided
    if [ -z "$BACKUP_FILE" ]; then
        log_error "No backup file specified"
        show_usage
        exit 1
    fi
    # Verify backup file
    verify_backup "$BACKUP_FILE"
    # Extract database name if not provided
    if [ -z "$DB_NAME" ]; then
        DB_NAME=$(extract_db_name "$BACKUP_FILE")
        log_debug "Extracted database name: $DB_NAME"
    fi
    log_info "================================================"
    log_info "PostgreSQL Database Restore"
    log_info "================================================"
    log_info "Backup File: $BACKUP_FILE"
    log_info "Target Database: $DB_NAME"
    log_info "Host: $DB_HOST:$DB_PORT"
    log_info "User: $DB_USER"
    log_info "================================================"
    echo ""
    # Test connection
    test_connection
    # Check if database exists
    local db_exists=false
    if database_exists "$DB_NAME"; then
        db_exists=true
        log_warn "Database '$DB_NAME' already exists and will be DROPPED"
    fi
    # Confirmation prompt (unless force flag is set)
    if [ "$FORCE" != true ]; then
        echo ""
        echo -e "${RED}WARNING: This will destroy all current data in database: $DB_NAME${NC}"
        echo ""
        read -r -p "Are you sure you want to continue? (type 'yes' to confirm): " CONFIRM
        if [ "$CONFIRM" != "yes" ]; then
            log_info "Restore cancelled by user"
            exit 0
        fi
    fi
    # Create safety backup if database exists
    local safety_file=""
    if [ "$db_exists" = true ]; then
        safety_file=$(create_safety_backup "$DB_NAME")
    fi
    # Recreate database
    recreate_database "$DB_NAME"
    # Restore from backup
    if restore_database "$BACKUP_FILE" "$DB_NAME"; then
        verify_restore "$DB_NAME"
        echo ""
        log_info "================================================"
        log_info "Restore completed successfully! ✓"
        log_info "================================================"
        if [ -n "$safety_file" ]; then
            echo ""
            log_info "A safety backup was created before restore:"
            log_info "  $safety_file"
            echo ""
            log_info "To rollback to the previous state, run:"
            log_info "  $0 $safety_file -d $DB_NAME -f"
            echo ""
        fi
        exit 0
    else
        log_error "Restore failed!"
        if [ -n "$safety_file" ]; then
            echo ""
            log_warn "You can restore the previous state using:"
            log_warn "  $0 $safety_file -d $DB_NAME -f"
        fi
        exit 1
    fi
}
# Run main function
main

@@ -0,0 +1,88 @@
#!/bin/bash
#
# Basil Restore Script for Standalone PostgreSQL
# Run manually when you need to restore from backup
#
# Usage: ./restore-standalone-postgres.sh /path/to/backup.sql.gz
set -e
set -o pipefail
# Configuration
DB_HOST="localhost"
DB_PORT="5432"
DB_NAME="basil"
DB_USER="basil"
# Check arguments
if [ $# -eq 0 ]; then
    echo "Usage: $0 /path/to/backup.sql.gz"
    echo ""
    echo "Available backups:"
    echo "Daily:"
    ls -lh /var/backups/basil/daily/ 2>/dev/null | tail -5
    echo ""
    echo "Weekly:"
    ls -lh /var/backups/basil/weekly/ 2>/dev/null | tail -5
    exit 1
fi
BACKUP_FILE="$1"
# Verify backup file exists
if [ ! -f "$BACKUP_FILE" ]; then
    echo "ERROR: Backup file not found: $BACKUP_FILE"
    exit 1
fi
# Verify backup integrity
echo "Verifying backup integrity..."
if ! gzip -t "$BACKUP_FILE"; then
    echo "ERROR: Backup file is corrupted!"
    exit 1
fi
# Confirm restore
echo "===== WARNING ====="
echo "This will DESTROY all current data in database: $DB_NAME"
echo "Backup file: $BACKUP_FILE"
echo "Database: $DB_USER@$DB_HOST:$DB_PORT/$DB_NAME"
echo ""
read -r -p "Are you sure you want to continue? (type 'yes' to confirm): " CONFIRM
if [ "$CONFIRM" != "yes" ]; then
    echo "Restore cancelled."
    exit 0
fi
# Create backup of current database before restore
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
PRE_RESTORE_BACKUP="/tmp/basil-pre-restore-$TIMESTAMP.sql.gz"
echo "Creating safety backup of current database..."
pg_dump -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" "$DB_NAME" | gzip > "$PRE_RESTORE_BACKUP"
echo "Safety backup created: $PRE_RESTORE_BACKUP"
# Drop and recreate database
echo "Dropping existing database..."
# ON_ERROR_STOP makes psql exit non-zero on SQL errors so set -e aborts.
psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -v ON_ERROR_STOP=1 postgres <<EOF
DROP DATABASE IF EXISTS $DB_NAME;
CREATE DATABASE $DB_NAME;
GRANT ALL PRIVILEGES ON DATABASE $DB_NAME TO $DB_USER;
EOF
# Restore from backup
echo "Restoring from backup..."
gunzip -c "$BACKUP_FILE" | psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -v ON_ERROR_STOP=1 "$DB_NAME"
# Verify restore
echo "Verifying restore..."
RECIPE_COUNT=$(psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" "$DB_NAME" -t -c "SELECT COUNT(*) FROM \"Recipe\";" | xargs)
COOKBOOK_COUNT=$(psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" "$DB_NAME" -t -c "SELECT COUNT(*) FROM \"Cookbook\";" | xargs)
echo ""
echo "===== Restore Complete ====="
echo "Recipes: $RECIPE_COUNT"
echo "Cookbooks: $COOKBOOK_COUNT"
echo "Pre-restore backup saved at: $PRE_RESTORE_BACKUP"
echo ""
echo "If something went wrong, you can restore from the safety backup:"
echo " gunzip -c $PRE_RESTORE_BACKUP | psql -h $DB_HOST -U $DB_USER $DB_NAME"