2 Commits

Paul R Kartchner
022d0c9529 chore: bump version to 2026.01.003
All checks were successful
Basil CI/CD Pipeline / Code Linting (push) Successful in 1m19s
Basil CI/CD Pipeline / Shared Package Tests (push) Successful in 1m19s
Basil CI/CD Pipeline / Web Tests (push) Successful in 1m32s
Basil CI/CD Pipeline / API Tests (push) Successful in 1m39s
Basil CI/CD Pipeline / Security Scanning (push) Successful in 1m11s
Basil CI/CD Pipeline / Build All Packages (push) Successful in 1m32s
Basil CI/CD Pipeline / E2E Tests (push) Has been skipped
Basil CI/CD Pipeline / Build & Push Docker Images (push) Has been skipped
Basil CI/CD Pipeline / Trigger Deployment (push) Has been skipped
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-16 23:49:50 -07:00
Paul R Kartchner
e20be988ce fix: recipe import from unsupported websites and external URL deletion
Some checks failed
Basil CI/CD Pipeline / Build All Packages (push) Has been cancelled
Basil CI/CD Pipeline / E2E Tests (push) Has been cancelled
Basil CI/CD Pipeline / Build & Push Docker Images (push) Has been cancelled
Basil CI/CD Pipeline / Trigger Deployment (push) Has been cancelled
Basil CI/CD Pipeline / Web Tests (push) Has been cancelled
Basil CI/CD Pipeline / Shared Package Tests (push) Has been cancelled
Basil CI/CD Pipeline / API Tests (push) Has been cancelled
Basil CI/CD Pipeline / Security Scanning (push) Has been cancelled
Basil CI/CD Pipeline / Code Linting (push) Has been cancelled
- Enable wild mode in recipe scraper (supported_only=False) to work with any
  website that uses schema.org structured data, not just officially supported sites
- Fix storage service to skip deletion of external URLs (imported recipe images)
  instead of treating them as local file paths

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-16 23:49:38 -07:00
4 changed files with 10 additions and 5 deletions

View File

@@ -2,7 +2,7 @@
 """
 Recipe scraper script using the recipe-scrapers library.
 This script is called by the Node.js API to scrape recipes from URLs.
-Uses wild mode (supported_only=False) to work with any website, not just officially supported ones.
+Uses wild mode (supported_only=False) to work with any website that uses schema.org structured data.
 """

 import sys
@@ -52,8 +52,8 @@ def scrape_recipe(url):
     html = fetch_html(url)

     # Use scrape_html to scrape the recipe
-    # Works with officially supported websites
-    scraper = scrape_html(html, org_url=url)
+    # supported_only=False enables wild mode for any website with schema.org data
+    scraper = scrape_html(html, org_url=url, supported_only=False)

     # Extract recipe data with safe extraction
     recipe_data = {
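Wild mode works because many recipe pages embed a schema.org `Recipe` object as JSON-LD, whether or not the site is on the library's supported list. A minimal sketch of that idea (hypothetical helper, sample HTML, and regex for illustration only; this is not the `recipe-scrapers` internals):

```python
import json
import re

# Illustrative page: a schema.org Recipe embedded as JSON-LD.
SAMPLE_HTML = """<html><head>
<script type="application/ld+json">
{"@type": "Recipe", "name": "Pancakes", "recipeIngredient": ["flour", "milk"]}
</script>
</head></html>"""


def extract_recipe_jsonld(html: str):
    """Return the first schema.org Recipe object found in JSON-LD, else None."""
    pattern = r'<script type="application/ld\+json">(.*?)</script>'
    for match in re.finditer(pattern, html, re.S):
        data = json.loads(match.group(1))
        if data.get("@type") == "Recipe":
            return data
    return None


recipe = extract_recipe_jsonld(SAMPLE_HTML)
print(recipe["name"])  # Pancakes
```

A real parser (and `recipe-scrapers` itself) also handles `@graph` wrappers, type lists, and malformed JSON; this only shows why sites outside the supported list can still be scraped.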

View File

@@ -51,6 +51,11 @@ export class StorageService {
   }

   async deleteFile(fileUrl: string): Promise<void> {
+    // Skip deletion if this is an external URL (from imported recipes)
+    if (fileUrl.startsWith('http://') || fileUrl.startsWith('https://')) {
+      return;
+    }
+
     if (storageConfig.type === 'local') {
       const basePath = storageConfig.localPath || './uploads';
       const filePath = path.join(basePath, fileUrl.replace('/uploads/', ''));
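The guard matters because imported recipes store the remote image URL verbatim; without it, the URL would be joined onto the local uploads directory and the service would try to unlink a nonexistent path. The same check, sketched in Python for illustration (the storage root and helper name are hypothetical; the real method is the TypeScript diff above):

```python
from pathlib import PurePosixPath

# Hypothetical local storage root, standing in for storageConfig.localPath.
UPLOADS_ROOT = PurePosixPath("/srv/uploads")


def resolve_for_deletion(file_url: str):
    """Return the local path to delete, or None when the URL is external."""
    # External URLs (imported recipe images) are not local files: skip them.
    if file_url.startswith(("http://", "https://")):
        return None
    # Local uploads are stored relative to the uploads root.
    return UPLOADS_ROOT / file_url.removeprefix("/uploads/").lstrip("/")


print(resolve_for_deletion("https://example.com/img.jpg"))  # None
print(resolve_for_deletion("/uploads/recipes/1.jpg"))  # /srv/uploads/recipes/1.jpg
```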

View File

@@ -3,4 +3,4 @@
  * Example: 2026.01.002 (January 2026, patch 2), 2026.02.003 (February 2026, patch 3)
  * Month and patch are zero-padded. Patch increments with each deployment in a month.
  */
-export const APP_VERSION = '2026.01.002';
+export const APP_VERSION = '2026.01.003';
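The constant follows the calendar scheme described in the comment. A hypothetical bump helper consistent with it (assuming, as the examples imply, that the patch resets to 001 when the month rolls over):

```python
from datetime import date


def next_version(current: str, today: date) -> str:
    """Bump a YYYY.MM.PPP calendar version, resetting the patch in a new month."""
    year, month, patch = current.split(".")
    if (int(year), int(month)) == (today.year, today.month):
        # Same month: increment the deployment patch counter.
        year, month, patch = int(year), int(month), int(patch) + 1
    else:
        # New month (or year): restart the patch counter at 1.
        year, month, patch = today.year, today.month, 1
    return f"{year}.{month:02d}.{patch:03d}"


print(next_version("2026.01.002", date(2026, 1, 16)))  # 2026.01.003
print(next_version("2026.01.003", date(2026, 2, 1)))  # 2026.02.001
```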

View File

@@ -3,4 +3,4 @@
* Example: 2026.01.002 (January 2026, patch 2), 2026.02.003 (February 2026, patch 3)
* Month and patch are zero-padded. Patch increments with each deployment in a month.
*/
export const APP_VERSION = '2026.01.002';
export const APP_VERSION = '2026.01.003';