You are an AI assistant specializing in this specific project. Adhere to the following architectural patterns, conventions, and operational procedures.
This is a web application that processes 3D mesh files into 2D DXF profiles. It consists of a Python backend and a React frontend. The overall architecture is asynchronous and job-based, designed to handle potentially long-running processing tasks without blocking the user interface.
The backend is composed of two main processes: a FastAPI web server (app/main.py) and a background worker (app/worker.py).
When a job is created, its metadata is written as a JSON file to the /data/jobs_metadata directory, and a corresponding .trigger file is placed in /data/job_queue. The worker polls this directory for new jobs. All job state lives in the JSON files under /data/jobs_metadata. Do not use a database.

The mesh-processing code in processing.py can have silent failures (e.g., from the trimesh library), so error handling must be robust. Always prefer broad except Exception blocks within the per-layer processing loops to capture all possible errors, log them as warnings, and allow the job to complete if possible. The COMPLETE status should only be set once the output file is verified to exist on disk using os.path.exists().

Backend tests live in backend/app/tests. Run them with the ./run_tests.sh script from the backend directory; this script activates the Python virtual environment (backend/venv) and executes pytest. The API tests (test_api.py) require httpx and use TestClient. Pay close attention to Python import paths (sys.path) to avoid ImportError.
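A minimal sketch of the job hand-off described above. The directories come from this document, but the field names (job_id, status, input_file), the PENDING status, and the filename conventions are hypothetical and may not match what app/main.py and app/worker.py actually do:

```python
import json
import os
import uuid

METADATA_DIR = "/data/jobs_metadata"  # assumed to exist in the container/host
QUEUE_DIR = "/data/job_queue"

def create_job(input_path: str) -> str:
    """Persist initial job state as a JSON file, then drop a .trigger file for the worker."""
    job_id = str(uuid.uuid4())
    metadata = {"job_id": job_id, "status": "PENDING", "input_file": input_path}  # assumed fields
    with open(os.path.join(METADATA_DIR, f"{job_id}.json"), "w") as fh:
        json.dump(metadata, fh)
    # The empty trigger file in the queue directory is what the worker polls for.
    open(os.path.join(QUEUE_DIR, f"{job_id}.trigger"), "w").close()
    return job_id

def poll_queue_once() -> list[str]:
    """Worker side: return the job ids of any .trigger files waiting in the queue."""
    return [name.removesuffix(".trigger")
            for name in os.listdir(QUEUE_DIR)
            if name.endswith(".trigger")]
```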
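The error-handling convention can be illustrated with the hedged sketch below; run_layers, process_layer, and the FAILED status are illustrative stand-ins, not the real processing.py API:

```python
import logging
import os
from typing import Callable, Iterable

logger = logging.getLogger(__name__)

def run_layers(job_id: str, layers: Iterable, process_layer: Callable, output_path: str) -> str:
    """Process every layer, tolerating individual failures, then verify the output file."""
    for layer in layers:
        try:
            process_layer(layer)
        except Exception as exc:  # deliberately broad: trimesh and friends can fail in many ways
            logger.warning("Job %s: layer %r failed, skipping: %s", job_id, layer, exc)

    # Report COMPLETE only when the output file verifiably exists on disk.
    return "COMPLETE" if os.path.exists(output_path) else "FAILED"
```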
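An illustrative API test in the spirit of test_api.py; the /api/jobs route and the exact sys.path adjustment are assumptions, not the project's actual test code:

```python
import os
import sys

# Make the backend package importable regardless of where pytest is invoked from,
# mirroring the sys.path caveat above (tests live in backend/app/tests).
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))

from fastapi.testclient import TestClient  # requires httpx to be installed
from app.main import app

client = TestClient(app)

def test_jobs_endpoint_responds():
    response = client.get("/api/jobs")  # hypothetical endpoint path
    assert response.status_code == 200
```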
The frontend is a standard Create React App. The top-level UI lives in the App.js component, with reusable pieces split into their own components (JobItem.js, DxfViewer.js). Continue this pattern.

The 3D viewer is built with @react-three/fiber. It fetches parsed DXF data as JSON from the backend's /api/jobs/{job_id}/view endpoint.

Run frontend tests with npm test; in non-interactive CI environments, use CI=true npm test. Testing @react-three/fiber is complex, so when testing components that use it, focus on the data fetching and conditional rendering logic, not the WebGL output itself.

For local development, use the start_local_dev.sh script in the project root. It starts the backend and frontend servers independently.

The Dockerfile is a multi-stage build, and the deploy.sh script handles deployment. The container runs the application as a non-root user (appuser). The start.sh script (the container's entrypoint) runs as root to chown the /app/data volume mount, then uses gosu to step down to appuser before launching the application. When modifying deployment scripts, this permission-handling pattern must be maintained.
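For context on the viewer's data contract, here is a hedged sketch of what the backend side of the /api/jobs/{job_id}/view endpoint could look like. The output directory, the ezdxf dependency, and the JSON shape are all assumptions rather than the project's actual implementation in app/main.py:

```python
import os

import ezdxf  # assumed DXF-parsing dependency; the real backend may use something else
from fastapi import FastAPI, HTTPException

app = FastAPI()
OUTPUT_DIR = "/data/output"  # hypothetical location of generated DXF files

@app.get("/api/jobs/{job_id}/view")
def get_job_view(job_id: str):
    """Return parsed DXF line segments as JSON for the frontend viewer."""
    dxf_path = os.path.join(OUTPUT_DIR, f"{job_id}.dxf")
    if not os.path.exists(dxf_path):
        raise HTTPException(status_code=404, detail="Output DXF not found")
    doc = ezdxf.readfile(dxf_path)
    lines = [
        {"start": [e.dxf.start.x, e.dxf.start.y], "end": [e.dxf.end.x, e.dxf.end.y]}
        for e in doc.modelspace().query("LINE")
    ]
    return {"job_id": job_id, "lines": lines}
```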