133 changes: 133 additions & 0 deletions DEPLOYMENT.md
@@ -0,0 +1,133 @@
# Cloud Run Deployment Guide

This guide will help you deploy the webpage replicator application to Google Cloud Run.

## Prerequisites

1. **Google Cloud Account**: You need a Google Cloud account with billing enabled
2. **Google Cloud CLI**: Install the `gcloud` CLI tool
3. **Docker**: Ensure Docker is installed and running (for local testing)
4. **Project Setup**: Create a Google Cloud project

## Quick Deployment

### Option 1: Using the deployment script (Recommended)

1. **Update the deployment script**:
```bash
# Edit deploy.sh and replace 'your-project-id' with your actual project ID
nano deploy.sh
```

2. **Make the script executable and run it**:
```bash
chmod +x deploy.sh
./deploy.sh
```
Comment on lines +16 to +26
🛠️ Refactor suggestion

Avoid editing scripts; accept PROJECT_ID/REGION as inputs

Instructing users to edit the script invites drift and mistakes. Recommend passing inputs via env vars or flags so the same script is reusable across projects and CI.

Apply this doc tweak to show a safer invocation:

-1. **Update the deployment script**:
-   ```bash
-   # Edit deploy.sh and replace 'your-project-id' with your actual project ID
-   nano deploy.sh
-   ```
+1. **Provide inputs and run**:
+   ```bash
+   export PROJECT_ID="your-project-id"
+   export REGION="us-central1" # or your preferred region
+   chmod +x deploy.sh
+   ./deploy.sh
+   ```

I’ve proposed matching script changes in deploy.sh to support this flow. See my deploy.sh comments.

🤖 Prompt for AI Agents
In DEPLOYMENT.md around lines 16 to 26, the doc currently tells users to edit
deploy.sh which encourages drift; replace that section to instruct users to
provide PROJECT_ID and REGION via environment variables or CLI flags and show a
safer invocation (export PROJECT_ID and REGION, chmod +x deploy.sh, ./deploy.sh)
instead of opening the script for manual edits; also mention the alternative of
passing flags and note that deploy.sh must be updated (as you’ve done in
deploy.sh) to read PROJECT_ID/REGION from env or flags and fail fast with a
clear error if they are missing.


### Option 2: Manual deployment using YAML files

1. **Set your project ID**:
```bash
export PROJECT_ID="your-project-id"
gcloud config set project $PROJECT_ID
```

2. **Enable required APIs**:
```bash
gcloud services enable cloudbuild.googleapis.com
gcloud services enable run.googleapis.com
```

3. **Update YAML files**:
- Replace `PROJECT_ID` in both `frontend/cloudrun.yaml` and `backend/cloudrun.yaml` with your actual project ID
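
The placeholder swap in step 3 can be scripted instead of edited by hand. A minimal sketch (the image path matches the manifests in this PR; reading and writing the real YAML files is left out, since the substitution is the point):

```javascript
// Sketch: replace the PROJECT_ID placeholder in a Cloud Run manifest string.
// split/join avoids regex-escaping concerns with the literal placeholder.
function substituteProjectId(yamlText, projectId) {
  return yamlText.split('PROJECT_ID').join(projectId);
}

const demo = 'image: gcr.io/PROJECT_ID/webpage-replicator-backend:latest';
console.log(substituteProjectId(demo, 'my-project'));
// → image: gcr.io/my-project/webpage-replicator-backend:latest
```

The same effect can be had with `sed -i "s/PROJECT_ID/$PROJECT_ID/g" backend/cloudrun.yaml frontend/cloudrun.yaml`.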

4. **Build and deploy backend**:
```bash
cd backend
gcloud builds submit --tag gcr.io/$PROJECT_ID/webpage-replicator-backend
gcloud run services replace cloudrun.yaml --region=us-central1
cd ..
```

5. **Build and deploy frontend**:
```bash
cd frontend
gcloud builds submit --tag gcr.io/$PROJECT_ID/webpage-replicator-frontend
gcloud run services replace cloudrun.yaml --region=us-central1
cd ..
```

## Configuration

### Backend Configuration

The backend service may require environment variables for API keys and other configuration. You can set these using:

```bash
gcloud run services update webpage-replicator-backend \
--region=us-central1 \
--set-env-vars="GEMINI_API_KEY=your-api-key-here"
```

Or use Google Secret Manager for sensitive data:

```bash
# Create a secret
gcloud secrets create gemini-api-key --data-file=api-key.txt

# Update the service to use the secret
gcloud run services update webpage-replicator-backend \
--region=us-central1 \
--set-secrets="GEMINI_API_KEY=gemini-api-key:latest"
```

### Frontend Configuration

If your frontend needs to communicate with the backend, update any API endpoint URLs in your frontend code to use the deployed backend URL.
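
A common pattern is to inject the backend URL at build time rather than hard-coding it. A sketch, assuming a bundler-exposed variable (the name `VITE_API_URL` is hypothetical; use whatever your build tool exposes):

```javascript
// Sketch: resolve the backend base URL from a build-time variable,
// falling back to the local dev server. VITE_API_URL is a hypothetical name.
function apiBaseUrl(env) {
  return env.VITE_API_URL || 'http://localhost:3001';
}

// After deployment, the variable would hold the Cloud Run service URL
// (the hostname below is illustrative, not a real deployment).
console.log(apiBaseUrl({ VITE_API_URL: 'https://webpage-replicator-backend-abc123-uc.a.run.app' }));
```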

## Monitoring and Logs

- **View logs**: `gcloud run services logs read webpage-replicator-backend --region=us-central1`
- **Monitor metrics**: Visit the Cloud Console > Cloud Run to view metrics and performance

## Costs

Cloud Run pricing is based on:
- CPU and memory allocation
- Number of requests
- Request duration

The current configuration uses:
- **Frontend**: 1 vCPU, 512Mi memory
- **Backend**: 2 vCPU, 1Gi memory

Both services scale to zero when not in use, so you only pay for actual usage.
Comment on lines +96 to +105

🛠️ Refactor suggestion

Resource mismatch between docs and script (backend CPU 2 vs 1)

The doc states Backend: 2 vCPU, 1Gi memory (Lines 101–104) but deploy.sh deploys the backend with --cpu 1. This misleads capacity planning and load expectations.

Choose one:

  • Update docs to 1 vCPU, or
  • Update the script to 2 vCPU to match the docs. Suggested script fix is provided in deploy.sh comments.
🤖 Prompt for AI Agents
In DEPLOYMENT.md around lines 96–105 the docs claim "Backend: 2 vCPU, 1Gi
memory" but deploy.sh currently deploys the backend with --cpu 1, creating a
mismatch; either update DEPLOYMENT.md to state "Backend: 1 vCPU, 1Gi memory"
(edit lines 101–104) or change the backend deployment in deploy.sh to use --cpu
2 to match the docs (update the backend gcloud/Cloud Run deploy command to pass
--cpu 2), and ensure the chosen change is applied consistently across docs and
scripts and any related comments.


## Troubleshooting

### Common Issues

1. **Build failures**: Check that all dependencies are properly listed in `package.json`
2. **Port issues**: Ensure your application listens on the port specified in the `PORT` environment variable
3. **Health check failures**: Make sure your backend has a `/health` endpoint or update the health check path

### Useful Commands

```bash
# View service details
gcloud run services describe webpage-replicator-backend --region=us-central1

# View recent deployments
gcloud run revisions list --service=webpage-replicator-backend --region=us-central1

# Delete a service
gcloud run services delete webpage-replicator-backend --region=us-central1
```

## Security Considerations

- Both services are currently configured to allow unauthenticated access
- For production, consider implementing authentication
- Use IAM roles to control access to your services
- Store sensitive configuration in Google Secret Manager
14 changes: 14 additions & 0 deletions backend/.dockerignore
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.env.local
.env.development.local
.env.test.local
.env.production.local
.DS_Store
*.log
uploads/
temp/
20 changes: 20 additions & 0 deletions backend/Dockerfile
Original file line number Diff line number Diff line change
@@ -0,0 +1,20 @@
# Use Node.js official image
FROM node:18-alpine

⚠️ Potential issue

Upgrade base image: Node.js 18 is EOL; move to active LTS (20-alpine) and pin digest.

Running on an end-of-life runtime increases security risk and blocks security updates. Recommend Node 20 LTS and pin by digest for reproducibility.

Apply:

-FROM node:18-alpine
+FROM node:20-alpine@sha256:<pin-a-known-good-digest>

If you don’t want to pin yet, at least move to node:20-alpine. I can help fetch and insert the current digest if desired.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
FROM node:18-alpine
# backend/Dockerfile
FROM node:20-alpine@sha256:<pin-a-known-good-digest>
🤖 Prompt for AI Agents
In backend/Dockerfile at line 2, the base image uses Node 18 which is EOL;
update the FROM line to use Node 20 Alpine and pin to a digest for reproducible,
secure builds. Replace "node:18-alpine" with "node:20-alpine@sha256:<DIGEST>"
(or at minimum "node:20-alpine"), where <DIGEST> is the current digest you fetch
from Docker Hub or via "docker pull" / "docker buildx imagetools inspect" and
paste the exact sha256; commit the updated FROM line so CI/builds use the new
LTS image.


# Set working directory
WORKDIR /app

# Copy package files
COPY package*.json ./

# Install dependencies
RUN npm ci --only=production

🛠️ Refactor suggestion

Use modern npm flag for production installs (--omit=dev instead of --only=production).

--only=production is deprecated in newer npm; --omit=dev is the supported equivalent.

-RUN npm ci --only=production
+RUN npm ci --omit=dev
🤖 Prompt for AI Agents
In backend/Dockerfile around line 11, the Dockerfile uses the deprecated npm
flag `--only=production`; update the command to use the modern equivalent
`--omit=dev` so the production install excludes devDependencies (e.g., replace
the `npm ci --only=production` invocation with `npm ci --omit=dev`) and ensure
the image build still runs under CI by keeping the same `npm ci` usage.


# Copy application files
COPY . .

# Expose port (assuming Express server runs on port 3001 or process.env.PORT)
EXPOSE 3001

# Start the application
CMD ["npm", "start"]
Comment on lines +13 to +20

🛠️ Refactor suggestion

Run as non-root and set NODE_ENV=production for safer, leaner runtime.

Cloud Run recommends non-root. The Node image has a node user; drop privileges and set NODE_ENV.

 # Copy application files
 COPY . .
 
-# Expose port (assuming Express server runs on port 3001 or process.env.PORT)
-EXPOSE 3001
+# Ensure correct env and non-root user
+ENV NODE_ENV=production
+USER node
+
+# Expose port (Express on 3001 or process.env.PORT)
+EXPOSE 3001
📝 Committable suggestion


Suggested change
# Copy application files
COPY . .
# Expose port (assuming Express server runs on port 3001 or process.env.PORT)
EXPOSE 3001
# Start the application
CMD ["npm", "start"]
# Copy application files
COPY . .
# Ensure correct env and non-root user
ENV NODE_ENV=production
USER node
# Expose port (Express on 3001 or process.env.PORT)
EXPOSE 3001
# Start the application
CMD ["npm", "start"]
🤖 Prompt for AI Agents
In backend/Dockerfile around lines 13 to 20, the container currently runs as
root and doesn’t set NODE_ENV; change it to run as the Node non-root user and
set production env: add ENV NODE_ENV=production, ensure the app files are owned
by the node user (chown -R node:node /usr/src/app or copy files into a directory
owned by node), and switch to USER node before the EXPOSE/CMD lines so the
process runs with reduced privileges while preserving the existing EXPOSE and
CMD.

57 changes: 57 additions & 0 deletions backend/cloudrun.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,57 @@
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: webpage-replicator-backend
annotations:
run.googleapis.com/ingress: all
run.googleapis.com/ingress-status: all
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/maxScale: "100"
run.googleapis.com/cpu-throttling: "false"
run.googleapis.com/execution-environment: gen2
spec:
containerConcurrency: 80
timeoutSeconds: 300
containers:
- image: gcr.io/PROJECT_ID/webpage-replicator-backend:latest

⚠️ Potential issue

Switch from Container Registry (gcr.io) to Artifact Registry (docker.pkg.dev). Container Registry was sunset in 2025.

Publishing/pulling from gcr.io may fail now. Use Artifact Registry: REGION-docker.pkg.dev/PROJECT_ID/REPO/webpage-replicator-backend:TAG.

Example change:

-      - image: gcr.io/PROJECT_ID/webpage-replicator-backend:latest
+      - image: REGION-docker.pkg.dev/PROJECT_ID/REPO/webpage-replicator-backend:latest

Ensure your deploy script builds/pushes to the AR repository and that Cloud Run has permission to pull from it.

🤖 Prompt for AI Agents
In backend/cloudrun.yaml around line 19, the container image is pointing to the
deprecated Container Registry (gcr.io); update the image reference to Artifact
Registry format
(REGION-docker.pkg.dev/PROJECT_ID/REPO/webpage-replicator-backend:TAG), update
any deploy/build scripts to tag and push the image to the Artifact Registry
repository (ensure proper REGION, PROJECT_ID, REPO and TAG values), and grant
Cloud Run the Artifact Registry read permission (or configure the service
account) so Cloud Run can pull the image.

ports:
- name: http1
containerPort: 3001
env:
- name: PORT
value: "3001"
- name: NODE_ENV
value: "production"
# Add your environment variables here
# - name: GEMINI_API_KEY
# valueFrom:
# secretKeyRef:
# name: gemini-secrets
# key: api-key
resources:
limits:
cpu: 2000m
memory: 1Gi
requests:
cpu: 200m
memory: 256Mi
livenessProbe:
httpGet:
path: /health
port: 3001
initialDelaySeconds: 30
periodSeconds: 30
failureThreshold: 3
readinessProbe:
httpGet:
path: /health
port: 3001
initialDelaySeconds: 5
Comment on lines +41 to +52

Cloud Run probes still hit /health but the new server only exposes /api/health, so both liveness/readiness probes will 404 and the revision will never become ready; can we point the probes at /api/health (or add /health handler)?

Finding type: Logical Bugs

Prompt for AI Agents:

In backend/cloudrun.yaml around lines 41-52, the livenessProbe and readinessProbe are
hitting /health but the service only exposes /api/health, causing 404s and preventing
readiness. Modify both probes so their httpGet.path is /api/health (i.e., change the
path under livenessProbe and the path under readinessProbe to /api/health). Ensure the
port remains 3001 and keep the existing probe timings; alternatively, if you prefer to
change the application, add a /health endpoint that proxies to /api/health instead.


periodSeconds: 10
failureThreshold: 3
traffic:
- percent: 100
latestRevision: true
157 changes: 157 additions & 0 deletions backend/gcs-service.js
Original file line number Diff line number Diff line change
@@ -0,0 +1,157 @@
import { Storage } from '@google-cloud/storage';

class GCSService {
constructor() {
this.storage = new Storage();
this.bucketName = process.env.GCS_BUCKET_NAME;

if (!this.bucketName) {
throw new Error('GCS_BUCKET_NAME environment variable is required');
}

this.bucket = this.storage.bucket(this.bucketName);
}

/**
* Upload a file to GCS
* @param {string} fileName - The name/path of the file in the bucket
* @param {Buffer} fileBuffer - The file content as a buffer
* @param {string} contentType - The MIME type of the file
* @returns {Promise<string>} - The public URL of the uploaded file
*/
async uploadFile(fileName, fileBuffer, contentType = 'application/octet-stream') {
try {
const file = this.bucket.file(fileName);

const stream = file.createWriteStream({
metadata: {
contentType: contentType,
},
resumable: false,
});

return new Promise((resolve, reject) => {
stream.on('error', (error) => {
console.error('Error uploading to GCS:', error);
reject(error);
});

stream.on('finish', () => {
// Make the file publicly readable
file.makePublic().then(() => {
const publicUrl = `https://storage.googleapis.com/${this.bucketName}/${fileName}`;
resolve(publicUrl);
}).catch(reject);
});

stream.end(fileBuffer);
});
} catch (error) {
Comment on lines +22 to +49

🛠️ Refactor suggestion

Avoid unconditional makePublic; support Uniform Bucket-Level Access and signed URLs.

Calling file.makePublic() fails on buckets with UBLA enabled and broadly exposes objects. Make public should be opt-in with a safe fallback to a V4 signed URL.

Apply this diff to make public-read behavior configurable and robust:

   async uploadFile(fileName, fileBuffer, contentType = 'application/octet-stream') {
     try {
-      const file = this.bucket.file(fileName);
+      const file = this.bucket.file(fileName);
+      const makePublic = (process.env.GCS_PUBLIC_READ || 'false').toLowerCase() === 'true';
       
       const stream = file.createWriteStream({
         metadata: {
           contentType: contentType,
+          cacheControl: process.env.GCS_CACHE_CONTROL || 'public, max-age=3600',
         },
         resumable: false,
       });

       return new Promise((resolve, reject) => {
         stream.on('error', (error) => {
           console.error('Error uploading to GCS:', error);
           reject(error);
         });

         stream.on('finish', () => {
-          // Make the file publicly readable
-          file.makePublic().then(() => {
-            const publicUrl = `https://storage.googleapis.com/${this.bucketName}/${fileName}`;
-            resolve(publicUrl);
-          }).catch(reject);
+          // Optionally make the file public. If it fails (e.g., UBLA), fall back to a signed URL.
+          const publicUrl = `https://storage.googleapis.com/${this.bucketName}/${fileName}`;
+          const resolveWithSigned = async () => {
+            try {
+              const signed = await this.getSignedUrl(fileName, Number(process.env.GCS_SIGNED_URL_TTL_MINUTES || 60));
+              resolve(signed);
+            } catch (e) {
+              reject(e);
+            }
+          };
+          if (makePublic) {
+            file.makePublic()
+              .then(() => resolve(publicUrl))
+              .catch((err) => {
+                console.warn('makePublic failed; returning signed URL instead:', err?.message || err);
+                resolveWithSigned();
+              });
+          } else {
+            resolveWithSigned();
+          }
         });

         stream.end(fileBuffer);
       });
     } catch (error) {
       console.error('Error in uploadFile:', error);
       throw error;
     }
   }
📝 Committable suggestion


Suggested change
async uploadFile(fileName, fileBuffer, contentType = 'application/octet-stream') {
try {
const file = this.bucket.file(fileName);
const stream = file.createWriteStream({
metadata: {
contentType: contentType,
},
resumable: false,
});
return new Promise((resolve, reject) => {
stream.on('error', (error) => {
console.error('Error uploading to GCS:', error);
reject(error);
});
stream.on('finish', () => {
// Make the file publicly readable
file.makePublic().then(() => {
const publicUrl = `https://storage.googleapis.com/${this.bucketName}/${fileName}`;
resolve(publicUrl);
}).catch(reject);
});
stream.end(fileBuffer);
});
} catch (error) {
async uploadFile(fileName, fileBuffer, contentType = 'application/octet-stream') {
try {
const file = this.bucket.file(fileName);
const makePublic = (process.env.GCS_PUBLIC_READ || 'false').toLowerCase() === 'true';
const stream = file.createWriteStream({
metadata: {
contentType: contentType,
cacheControl: process.env.GCS_CACHE_CONTROL || 'public, max-age=3600',
},
resumable: false,
});
return new Promise((resolve, reject) => {
stream.on('error', (error) => {
console.error('Error uploading to GCS:', error);
reject(error);
});
stream.on('finish', () => {
// Optionally make the file public. If it fails (e.g., UBLA), fall back to a signed URL.
const publicUrl = `https://storage.googleapis.com/${this.bucketName}/${fileName}`;
const resolveWithSigned = async () => {
try {
const signed = await this.getSignedUrl(
fileName,
Number(process.env.GCS_SIGNED_URL_TTL_MINUTES || 60)
);
resolve(signed);
} catch (e) {
reject(e);
}
};
if (makePublic) {
file.makePublic()
.then(() => resolve(publicUrl))
.catch((err) => {
console.warn(
'makePublic failed; returning signed URL instead:',
err?.message || err
);
resolveWithSigned();
});
} else {
resolveWithSigned();
}
});
stream.end(fileBuffer);
});
} catch (error) {
console.error('Error in uploadFile:', error);
throw error;
}
}
🤖 Prompt for AI Agents
In backend/gcs-service.js around lines 22-49, the uploadFile implementation
unconditionally calls file.makePublic(), which fails for buckets with Uniform
Bucket-Level Access (UBLA) and needlessly exposes objects; change it to accept a
configurable option (or read from env/config) to control public-read behavior,
avoid calling makePublic by default, and on upload success either: 1) if
config.makePublic is true attempt file.makePublic() and if that call rejects due
to UBLA or permission error fall back to generating a V4 signed URL via
file.getSignedUrl({ action: 'read', version: 'v4', expires: <reasonable
expiration> }) and resolve with that URL; or 2) if config.makePublic is false
always return a signed V4 URL; ensure errors from makePublic are caught and do
not abort the upload promise but instead trigger the signed-URL fallback, and
expose the config (or function parameter) and expiry time so callers can opt-in
to public objects when allowed.

console.error('Error in uploadFile:', error);
throw error;
}
}

/**
* Download a file from GCS
* @param {string} fileName - The name/path of the file in the bucket
* @returns {Promise<Buffer>} - The file content as a buffer
*/
async downloadFile(fileName) {
try {
const file = this.bucket.file(fileName);
const [fileBuffer] = await file.download();
return fileBuffer;
} catch (error) {
console.error('Error downloading from GCS:', error);
throw error;
}
}

/**
* Check if a file exists in GCS
* @param {string} fileName - The name/path of the file in the bucket
* @returns {Promise<boolean>} - Whether the file exists
*/
async fileExists(fileName) {
try {
const file = this.bucket.file(fileName);
const [exists] = await file.exists();
return exists;
} catch (error) {
console.error('Error checking file existence:', error);
return false;
}
}

/**
* List files in a directory (prefix)
* @param {string} prefix - The directory prefix to list
* @returns {Promise<Array>} - Array of file objects
*/
async listFiles(prefix = '') {
try {
const [files] = await this.bucket.getFiles({
prefix: prefix,
});

return files.map(file => ({
name: file.name,
size: file.metadata.size,
created: file.metadata.timeCreated,
contentType: file.metadata.contentType,
publicUrl: `https://storage.googleapis.com/${this.bucketName}/${file.name}`
}));
} catch (error) {
console.error('Error listing files:', error);
throw error;
}
}

/**
* Delete a file from GCS
* @param {string} fileName - The name/path of the file in the bucket
* @returns {Promise<boolean>} - Whether the deletion was successful
*/
async deleteFile(fileName) {
try {
const file = this.bucket.file(fileName);
await file.delete();
return true;
} catch (error) {
console.error('Error deleting file:', error);
return false;
}
}

/**
* Get a public URL for a file
* @param {string} fileName - The name/path of the file in the bucket
* @returns {string} - The public URL
*/
getPublicUrl(fileName) {
return `https://storage.googleapis.com/${this.bucketName}/${fileName}`;
}

/**
* Generate a signed URL for temporary access
* @param {string} fileName - The name/path of the file in the bucket
* @param {number} expiresInMinutes - Expiration time in minutes (default: 60)
* @returns {Promise<string>} - The signed URL
*/
async getSignedUrl(fileName, expiresInMinutes = 60) {
try {
const file = this.bucket.file(fileName);
const [signedUrl] = await file.getSignedUrl({
action: 'read',
expires: Date.now() + (expiresInMinutes * 60 * 1000),
});
return signedUrl;
} catch (error) {
console.error('Error generating signed URL:', error);
throw error;
}
}
}

export default GCSService;
1 change: 1 addition & 0 deletions backend/package.json
Original file line number Diff line number Diff line change
@@ -9,6 +9,7 @@
"dev": "node --watch server.js"
},
"dependencies": {
Comment on lines 9 to 11

start script still runs node server.js even though only server-gcs.js exists now, so npm start in Docker CMD will fail with "Cannot find module 'server.js'".

Finding type: Logical Bugs

Prompt for AI Agents:

In backend/package.json around lines 9 to 11, the npm scripts still reference node
server.js but that file was removed/renamed, causing npm start in Docker to fail. Update
the "start" script to run "node server-gcs.js" (and update the "dev" script from "node
--watch server.js" to "node --watch server-gcs.js" if you want hot-reload during
development) so the scripts point to the existing entry file.


"@google-cloud/storage": "^7.13.0",
"@google/genai": "^1.15.0",
"cors": "^2.8.5",
"dotenv": "^17.2.1",