From 00a42d3f18e005994ac797d0fe85d7b2d25c7923 Mon Sep 17 00:00:00 2001 From: Benjamin ACH Date: Tue, 5 May 2026 17:56:40 +0200 Subject: [PATCH 1/5] fix(docs): correct event name from `app_deploy` to `app_deployed` --- .../platform/app/troubleshooting/2000-01-01-runtime-issues.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/_posts/platform/app/troubleshooting/2000-01-01-runtime-issues.md b/src/_posts/platform/app/troubleshooting/2000-01-01-runtime-issues.md index 031820d95..628f8c575 100644 --- a/src/_posts/platform/app/troubleshooting/2000-01-01-runtime-issues.md +++ b/src/_posts/platform/app/troubleshooting/2000-01-01-runtime-issues.md @@ -112,5 +112,5 @@ when a Timeout Error occurs). You can modify this behavior by tweaking your [Notifier's configuration]({% post_url platform/app/2000-01-01-notifiers %}). -The `app_crashed`, `app_crashed_repeated` and the `app_deploy` events can be +The `app_crashed`, `app_crashed_repeated` and the `app_deployed` events can be particularly worth considering. From 7280e1a9dd7d39be93ed5416ad2c85f0a6b82f80 Mon Sep 17 00:00:00 2001 From: Benjamin ACH Date: Tue, 5 May 2026 18:36:47 +0200 Subject: [PATCH 2/5] Table is converted to markdown and various changes --- .../internals/2000-01-01-container-sizes.md | 89 ++++++------------- 1 file changed, 26 insertions(+), 63 deletions(-) diff --git a/src/_posts/platform/internals/2000-01-01-container-sizes.md b/src/_posts/platform/internals/2000-01-01-container-sizes.md index f92bbd290..1fb693e5e 100644 --- a/src/_posts/platform/internals/2000-01-01-container-sizes.md +++ b/src/_posts/platform/internals/2000-01-01-container-sizes.md @@ -1,83 +1,46 @@ --- title: Container Sizes -modified_at: 2015-12-02 00:00:00 -tags: internals containers sizes +modified_at: 2026-05-05 00:00:00 +tags: containers sizes index: 2 --- ## Comparative Table -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameMemoryCPU PriorityPID LimitPrice
S - Small256MBLow1280.01€/h
M - Medium (Default)512MBStandard2560.02€/h
L - Large1GBStandard5120.04€/h
XL - eXtra Large2GBHigh10240.08€/h
2XL - eXtra eXtra Large4GBHigh20480.16€/h
+
+| Name | Memory | CPU Priority | PID Limit[^pid-limit] | +| -------------------------- | ------- | ------------ | --------- | +| S - Small | 256 MB | Low | 128 | +| M - Medium (Default) | 512 MB | Standard | 256 | +| L - Large | 1 GB | Standard | 512 | +| XL - eXtra Large | 2 GB | High | 1024 | +| 2XL - eXtra eXtra Large | 4 GB | High | 2048 | +{: .table }
-Bigger container sizes are available upon request on the support.
-As a note, each new process requires a PID. And inside each process, each thread needs one too.
+Prices are available on the [Scalingo pricing page](https://scalingo.com/pricing).
+Bigger container sizes are available upon request by contacting our support team.

-## Availability of the Sizes
-
-Our 30 days free trial only gives you access to small and medium containers, if you want
-to use another kind of size, please [fill your billing profile and payment
-method](https://dashboard.scalingo.com/billing).
+{% note %}
+Limits apply when using Scalingo under the free trial. For more information,
+see [what you can do under the free trial][free-trial-limits].
+{% endnote %}

 ## Container Limits

 Containers have various limits depending on their size. Here is a comprehensive
 list:

-- RAM: cf. above-mentioned table
-- Swap: twice the amount of RAM.
-- CPU access: all containers have access to all CPU cores. But higher priority
-means twice as much priority compared to standard priority. For example,
-consider three containers, one has a high priority and two others have a
-standard priority. When processes in all three containers attempt to use 100%
-of CPU, the first container would receive 50% of the total CPU time and the
-two others would receive 25%.
-- PID limits: from 128 (S) to 2048 (2XL).
-- Ulimit nofile: 1048576. Maximum number of files an application can open.
+- **Memory**: see the comparative table above.
+- **Swap**: twice the amount of RAM.
+- **CPU**: all containers have access to all CPU cores, but a high priority
+grants twice as much CPU time as a standard priority. For example, consider
+three containers: one has a high priority and the two others have a standard
+priority. When processes in all three containers attempt to use 100% of the
+CPU, the first container would receive 50% of the total CPU time and the two
+others would receive 25% each.
+- **PID limit**: see the comparative table above.
+- **Open file limit** (`nofile`): 1,048,576. This is the maximum number of files an application can open.
+
+[^pid-limit]: Each new process requires a PID, and each thread inside a process needs one too.
+ +[free-trial-limits]: {% post_url platform/getting-started/2000-01-01-free-trial %}#what-can-i-do-under-the-free-trial From 650382aceef24d339da10d6059c2f6ec7e80a219 Mon Sep 17 00:00:00 2001 From: Benjamin ACH Date: Tue, 5 May 2026 18:37:37 +0200 Subject: [PATCH 3/5] Update the page position --- src/_posts/platform/app/2000-01-01-filesystem.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/_posts/platform/app/2000-01-01-filesystem.md b/src/_posts/platform/app/2000-01-01-filesystem.md index e4981abda..38adb0e94 100644 --- a/src/_posts/platform/app/2000-01-01-filesystem.md +++ b/src/_posts/platform/app/2000-01-01-filesystem.md @@ -2,7 +2,7 @@ title: File System and File Storage modified_at: 2023-07-27 00:00:00 tags: app runtime file system disk storage -index: 1 +index: 2 --- ## Introduction From a70546372589f4e42da12cb8be6456880f4d3a99 Mon Sep 17 00:00:00 2001 From: Benjamin ACH Date: Tue, 5 May 2026 18:37:55 +0200 Subject: [PATCH 4/5] Precisions about OOM Crash --- .../platform/app/troubleshooting/2000-01-01-runtime-issues.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/_posts/platform/app/troubleshooting/2000-01-01-runtime-issues.md b/src/_posts/platform/app/troubleshooting/2000-01-01-runtime-issues.md index 628f8c575..a9baba245 100644 --- a/src/_posts/platform/app/troubleshooting/2000-01-01-runtime-issues.md +++ b/src/_posts/platform/app/troubleshooting/2000-01-01-runtime-issues.md @@ -25,7 +25,8 @@ The most common causes are: - Configuration issues - Bugs in your application code - Uncaught exception in your code (especially with non-compiled languages) -- Insufficient resources +- Insufficient resources, such as an Out of Memory (OOM) crash when the + application consumes all its allocated memory - Temporary error/unavailability of an external resource A Runtime Error can have several consequences, depending on the severity of the From 4211433836cc54e23a15cb499ca8ca51dad4c352 Mon Sep 17 00:00:00 2001 From: 
Benjamin ACH Date: Tue, 5 May 2026 18:38:30 +0200 Subject: [PATCH 5/5] add Application Resources documentation and move the OOM section from Metrics to this new page --- .../app/2000-01-01-application-resources.md | 68 +++++++++++++++++++ src/_posts/platform/app/2000-01-01-metrics.md | 51 ++++---------- 2 files changed, 81 insertions(+), 38 deletions(-) create mode 100644 src/_posts/platform/app/2000-01-01-application-resources.md diff --git a/src/_posts/platform/app/2000-01-01-application-resources.md b/src/_posts/platform/app/2000-01-01-application-resources.md new file mode 100644 index 000000000..694bccaed --- /dev/null +++ b/src/_posts/platform/app/2000-01-01-application-resources.md @@ -0,0 +1,68 @@ +--- +title: Application Resources +nav: Resources +modified_at: 2026-05-05 00:00:00 +tags: app resources cpu memory ram swap storage oom +index: 1 +--- + +Application resources define the CPU, memory, swap, and storage available to +each container running your application. + +## Container Sizing + + + +For the full list of available container sizes and their limits, see +[Container Sizes][container-sizes]. + + +## CPU + +## Memory + +### RAM + +### Swap + +### Out of Memory Crashes + +When an application consumes all its allocated memory (RAM + swap), the system +applies a protection mechanism called the **OOM Killer** (Out of Memory Killer). + +#### Sequence of events + +1. The application progressively uses all available RAM +2. The system starts using swap space +3. When memory and swap reach 100% usage, the OOM Killer intervenes +4. 
The application is immediately terminated by the system + +#### Observable consequences + +* **Abrupt termination:** The application stops without a graceful shutdown process +* **Automatic restart:** The container restarts according to its configuration +* **Restart event:** A "Restart" event appears in the metrics timeline +* **Data loss:** All non-persisted data in memory is lost + +#### Prevention and monitoring + +To avoid this scenario: + +* Regularly monitor memory charts in the [Metrics tab][metrics] +* Set up [alerts][alerts] before reaching memory limits +* Analyze usage spikes in correlation with deployment events +* Consider upgrading to a larger [container size][container-sizes] if needed + +**Note:** The OOM Killer is a system protection mechanism. If your application regularly experiences OOM events, it typically indicates a need for code optimization or increased allocated resources. + +## Storage + +### Ephemeral Filesystem + +## Monitoring Resource Usage + +## Preventing Resource Exhaustion + +[alerts]: {% post_url platform/app/2000-01-01-alerts %} +[container-sizes]: {% post_url platform/internals/2000-01-01-container-sizes %} +[metrics]: {% post_url platform/app/2000-01-01-metrics %} diff --git a/src/_posts/platform/app/2000-01-01-metrics.md b/src/_posts/platform/app/2000-01-01-metrics.md index 882b91079..f172c69a8 100644 --- a/src/_posts/platform/app/2000-01-01-metrics.md +++ b/src/_posts/platform/app/2000-01-01-metrics.md @@ -1,7 +1,7 @@ --- title: Application Metrics nav: Metrics -modified_at: 2026-01-02 12:00:00 +modified_at: 2026-05-05 00:00:00 tags: app metrics index: 35 --- @@ -23,17 +23,20 @@ The application chart displays global data that are not container specific: events and routing metrics. The **Requests per minute** chart show the number of requests the application -receives per minute, the famous **RPM**. 
The number of server error responses generated by the application (HTTP responses in the 500 range) is displayed on the same chart as red bars.
+receives per minute, the famous **RPM**. The number of server error responses
+generated by the application (HTTP responses in the 500 range) is displayed on
+the same chart as red bars.

 **Note**: 504 and 503 errors can be generated by our reverse proxy. More
 information is available in the [routing documentation][routing-errors].

 On top of this chart, all the events that happened during the
-viewing period are displayed. This can help you link the application behaviour with events
-that happened on the platform, e.g. spot a deployment that contains a memory
-leak or follow your application behaviour after a scale operation.
+viewing period are displayed. This can help you link the application behaviour
+with events that happened on the platform, e.g. spot a deployment that contains
+a memory leak or follow your application behaviour after a scale operation.

-A lot of events are available on the application timeline but only a few relevant are displayed on the metrics view:
+A lot of events are available on the application timeline but only the most
+relevant ones are displayed on the metrics view:

 - Restart event
 - Deploy event
@@ -61,10 +64,11 @@ The container charts use the container types defined in your
 [Procfile]({% post_url platform/app/2000-01-01-procfile %}).

 For each container type, two charts are shown. The first one shows the **CPU
-usage** and the second one the **memory** and **swap** usage of this type of
-container.
+usage** and the second one the **memory usage** and **swap usage** of this
+type of container.

-The CPU chart may exceed 100% if the application uses more than one core of the CPU.
+The CPU chart may exceed 100% if the application uses more than one core of the
+CPU.

 For the memory chart, the memory (in blue) and swap usage (in red) are stacked.
That way the total memory usage of the application can be @@ -94,35 +98,6 @@ platform/internals/2000-01-01-container-sizes %}). If the application has more than one container of a specific type, these charts show the mean CPU usage / memory consumption of all containers of the same type. -## Behavior when memory and swap are fully consumed - -When an application consumes all its allocated memory (RAM + swap), the system applies a protection mechanism called the **OOM Killer** (Out of Memory Killer). - -### Sequence of events - -1. The application progressively uses all available RAM -2. The system starts using swap space (visible in red on the memory chart) -3. When memory and swap reach 100% usage, the OOM Killer intervenes -4. The application is immediately terminated by the system - -### Observable consequences - -* **Abrupt termination:** The application stops without a graceful shutdown process -* **Automatic restart:** The container restarts according its configuration -* **Restart event:** A "Restart" event appears in the metrics timeline -* **Data loss:** All non-persisted data in memory is lost - -### Prevention and monitoring - -To avoid this scenario: - -* Regularly monitor memory charts in the Metrics tab -* Set up alerts before reaching memory limits -* Analyse usage spikes in correlation with deployment events -* Consider upgrading to a larger [container size](/platform/internals/container-sizes) if needed - -**Note:** The OOM Killer is a system protection mechanism. If your application regularly experiences OOM events, it typically indicates a need for code optimization or increased allocated resources. - ## Detailed View If the application has more than one container of a type defined in its
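Reviewer note on PATCH 2: the CPU-priority arithmetic in the new Container Limits bullet (one high-priority container and two standard ones splitting the CPU 50% / 25% / 25%) can be sanity-checked by modeling priorities as proportional scheduler weights. This is an illustrative sketch only: the `cpu_split` helper and the `WEIGHTS` table are hypothetical, and the only assumption taken from the patch text is that high priority carries twice the weight of standard priority.

```python
# Illustrative sketch, not a platform API: priorities modeled as proportional
# CPU weights, with High = 2x Standard as stated in the Container Limits bullet.
WEIGHTS = {"high": 2, "standard": 1}

def cpu_split(priorities):
    """Share of total CPU time each container gets when all saturate the CPU."""
    total = sum(WEIGHTS[p] for p in priorities)
    return [WEIGHTS[p] / total for p in priorities]

# One high-priority container plus two standard ones, as in the documentation:
print(cpu_split(["high", "standard", "standard"]))  # [0.5, 0.25, 0.25]
```

With weights 2 + 1 + 1 = 4 shares, the high-priority container gets 2/4 = 50% and each standard one 1/4 = 25%, matching the example in the patch.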