Manage concurrent RFD handling #429

@augustuswm

Description

We have seen that processing multiple RFDs containing many large (multi-MB) images stresses the CPU and memory of an AWS t3.medium. Note that we are not hitting burst CPU caps. There are two root issues here:

  1. We over-process RFDs that have included documents on the main branch. Instead, we should generate jobs only when at least one required file has changed in the RFD's subdirectory.
  2. PDF generation should use a queue with a maximum concurrency threshold, as it is the primary CPU and memory consumer.
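A minimal sketch of the change-detection check in item 1. This assumes a flat `rfd/<number>/...` path layout and a list of changed file paths from the push event; the function name and layout are illustrative, not the repo's actual API:

```python
from typing import Iterable

def rfd_needs_processing(rfd_dir: str, changed_files: Iterable[str]) -> bool:
    """Generate a job only if at least one changed file lives in the
    RFD's subdirectory (hypothetical helper, not the real implementation)."""
    prefix = rfd_dir.rstrip("/") + "/"
    return any(path.startswith(prefix) for path in changed_files)
```

With this guard in place, a commit touching only `rfd/0429/` would no longer enqueue jobs for every other RFD.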
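For item 2, one way to bound concurrency is a fixed-size worker pool: jobs beyond the threshold simply wait in the pool's queue until a worker frees up. The sketch below uses Python's `ThreadPoolExecutor` to illustrate the idea; the threshold value and `render_pdf` body are placeholder assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

# Assumed threshold; would be tuned to the t3.medium's CPU/memory headroom.
MAX_CONCURRENT_PDF_JOBS = 2

def render_pdf(rfd_number: int) -> str:
    # Placeholder for the real, CPU- and memory-heavy PDF rendering step.
    return f"rfd-{rfd_number}.pdf"

def render_all(rfd_numbers):
    # The pool's bounded worker count is the concurrency threshold; excess
    # jobs queue up instead of all rendering at once.
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_PDF_JOBS) as pool:
        return list(pool.map(render_pdf, rfd_numbers))
```

The same shape applies with an async semaphore if the processing pipeline is already asynchronous.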
