feat(logging): add support for key-value pairs in LambdaJsonEncoder #2377
cmtjk wants to merge 9 commits into aws-powertools:main
Conversation
We've decided to permit null and duplicate keys. The only issue we see is that the string "true" is currently serialized as the boolean true. I'll provide updated code later in my free time.

```java
String s = null;
log.atInfo().setMessage("Processing event")
        .addKeyValue(null, s)
        .addKeyValue("eventId", 54321)
        .addKeyValue("eventId", "abcde")
        .addKeyValue(null, "fghij")
        .addKeyValue("klmnop", s)
        .addKeyValue("qrstu", "true")
        .addKeyValue("vwxyz", false)
        .log();
```

This currently produces:

```json
{
    "level": "INFO",
    "message": "Processing event",
    "cold_start": false,
    "function_arn": null,
    "function_memory_size": 0,
    "function_name": null,
    "function_request_id": "null",
    "function_version": null,
    "service": "service_undefined",
    "kvpList": [
        { "null": "null" },
        { "eventId": 54321 },
        { "eventId": "abcde" },
        { "null": "fghij" },
        { "klmnop": "null" },
        { "qrstu": true },
        { "vwxyz": false }
    ],
    "timestamp": "2026-02-03T08:17:13.464Z"
}
```
Hey @cmtjk! Awesome work so far. I think adding this feature will be beneficial for the open-source community and a valuable contribution to the project. Here are some of my thoughts:
- We need to add the same behavior to the Log4j2 backend and the LambdaEcsEncoder in the logback backend.
- We need to add thorough tests, e.g. what happens if I set a key via the fluent API that clashes with a previously set MDC key? I think it is ok to overwrite for that log statement and to document this behavior in the docs.
- We should update the E2E tests with an SLF4J fluent API test. https://github.com/cmtjk/powertools-lambda-java/blob/improv/logback-logger-key-value-pairs/powertools-e2e-tests/src/test/java/software/amazon/lambda/powertools/LoggingE2ET.java#L73
> The only issue we see is that the string "true" is currently serialized as the boolean true, which is often equivalent but not always correct.
Good catch. We should fix this and make sure it is consistent for both MDC and fluent API keys.
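One way to keep the two apart is to serialize by runtime type, so the String "true" stays quoted while the Boolean true becomes a JSON literal. A minimal sketch follows; the `JsonValueSketch` class and `toJsonValue` helper are hypothetical illustrations, not the actual Powertools encoder code:

```java
// Hypothetical sketch, not the actual Powertools encoder: serialize a
// value by its runtime type so the String "true" stays a JSON string
// while the Boolean true stays a JSON boolean.
public class JsonValueSketch {
    static String toJsonValue(Object value) {
        if (value == null) {
            return "null";
        }
        // Numbers and booleans are valid JSON literals as-is.
        if (value instanceof Number || value instanceof Boolean) {
            return value.toString();
        }
        // Everything else, including the String "true", is quoted.
        return "\"" + value.toString().replace("\"", "\\\"") + "\"";
    }

    public static void main(String[] args) {
        System.out.println(toJsonValue("true")); // "true"  (JSON string)
        System.out.println(toJsonValue(true));   // true    (JSON boolean)
        System.out.println(toJsonValue(54321));  // 54321   (JSON number)
    }
}
```

Applying the same helper to both MDC entries and fluent-API pairs would give the consistency asked for above.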
...gback/src/main/java/software/amazon/lambda/powertools/logging/logback/LambdaJsonEncoder.java
.../src/test/java/software/amazon/lambda/powertools/logging/internal/LambdaJsonEncoderTest.java
Hi @phipag, I just want to confirm that we should drop the solution mentioned in #2377 (comment) and instead add proper documentation.
Yes, let's do it, unless you have any strong arguments against it. I agree that this would be what the user expects.
Force-pushed from 31c8a93 to 2b4b6a3.
I think the PowertoolsResolver for Log4j does not need changes to display SLF4J fluent key/value pairs — those pairs are already placed into the Log4j ThreadContext and PowertoolsResolver reads the ThreadContext. SLF4J fluent key/value pairs are put into Log4j's context by Log4jEventBuilder via CloseableThreadContext.putAll, the same mechanism used by Log4jMDCAdapter. Because PowertoolsResolver reads MDC/ThreadContext entries, it will pick up and serialize those key/value pairs.
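As a dependency-free sketch of that flow (the `SimulatedThreadContext` class below is an illustrative stand-in for Log4j's ThreadContext, not real Log4j API):

```java
import java.util.HashMap;
import java.util.Map;

// Simulation of the mechanism described above: the event builder copies
// its key/value pairs into the thread-bound context map (cf. Log4j's
// CloseableThreadContext.putAll), so a resolver that reads that map sees
// the fluent-API pairs alongside ordinary MDC entries.
public class SimulatedThreadContext {
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String value) {     // MDC-adapter-style put
        CONTEXT.get().put(key, value);
    }

    static void putAll(Map<String, String> pairs) { // fluent-API pairs
        CONTEXT.get().putAll(pairs);
    }

    static Map<String, String> snapshot() {         // what a resolver reads
        return new HashMap<>(CONTEXT.get());
    }

    public static void main(String[] args) {
        put("service", "orders");                   // ordinary MDC entry
        putAll(Map.of("eventId", "abcde"));         // fluent-API pair
        System.out.println(snapshot());             // both keys visible
    }
}
```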
…e pairs and sort remaining entries to match existing MDC handling behavior
Hey @cmtjk, this is looking good already! I see we still have some todos open (https://sonarcloud.io/project/issues?issueStatuses=OPEN%2CCONFIRMED&sinceLeakPeriod=true&pullRequest=2377&id=aws-powertools_powertools-lambda-java). And we would need to update the docs with a code example for fluent API and a small section that we officially support this now. Is there anything else on your end that we need to take care of?
Hi @phipag, |
Awesome, thanks @cmtjk. I will keep an eye on the PR state and will do some extensive testing once you give me the ready sign.
Hi @phipag, I refocused on the actual core topic of this pull request and postponed some of the broader considerations for now. It turns out that AWS Powertools logging already supports the SLF4J Fluent Logging API to a very large extent. The only issue is that the Logback implementation currently ignores key-value pairs. Therefore, this PR primarily addresses that specific problem.

Regarding key collisions and deduplication: Log4J and Logback behave differently here. While Log4J stores key-value pairs in the same structure as MDCs, Logback keeps them separate. As a result, Log4J deduplicates automatically, whereas Logback leaves this responsibility to the encoder implementation. This is a separate concern and not exclusive to any specific implementation or to the SLF4J Fluent Logging API itself. Key collisions will always result in unexpected behavior, regardless of the logging backend.

In the documentation, I decided not to create a separate section or additional code examples specifically for the SLF4J Fluent Logging API. Users can reasonably assume that it is supported. According to its documentation, the SLF4J Fluent Logging API is backward compatible, so there is no reason to assume it would not work, except for the fact that the Logback encoders currently "forgot" to encode the key-value pairs.

At the moment, the AWS Powertools logging implementations for Log4J and Logback behave differently. I've aligned the behavior by updating the Logback encoder to match Log4J's approach: storing key-value pairs in the same structure as MDCs and processing them in the same way.
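The alignment described above can be sketched in plain Java. The types below are simplified stand-ins (Logback's real event exposes `getMDCPropertyMap()` and `getKeyValuePairs()`); the `KeyValueMergeSketch` class and its merge shape are illustrative only:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative merge, assuming the encoder folds the event's key/value
// pairs into the MDC map before serialization: a fluent-API key that
// collides with an MDC key overwrites it for that log statement only,
// matching how Log4j's shared ThreadContext already behaves.
public class KeyValueMergeSketch {
    static Map<String, Object> merge(Map<String, String> mdc,
                                     List<Map.Entry<String, Object>> keyValuePairs) {
        Map<String, Object> merged = new LinkedHashMap<>(mdc);
        for (Map.Entry<String, Object> kv : keyValuePairs) {
            merged.put(kv.getKey(), kv.getValue()); // last write wins
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> mdc = Map.of("requestId", "r-1");
        List<Map.Entry<String, Object>> pairs = List.of(
                new SimpleEntry<>("requestId", (Object) "r-2"), // collides with MDC
                new SimpleEntry<>("eventId", (Object) 54321));
        System.out.println(merge(mdc, pairs)); // requestId=r-2, eventId=54321
    }
}
```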
Force-pushed from 9c45f3f to e6e3786.




Summary
Key-value pairs added via SLF4J's fluent API, i.e. `org.slf4j.spi.LoggingEventBuilder::addKeyValue(String var1, Object var2)`, are ignored by `software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder`. This PR aims to properly encode these key-value pairs.
Changes
Serialize `org.slf4j.event.LoggingEvent::getKeyValuePairs()` like MDC entries, overriding duplicate keys (see below).

Discussion number: 2375
WIP
Key-value handling needs clarification. The current API stores key-value pairs in separate collections and allows duplicate keys, so it's unclear whether the encoder should preserve duplicates or merge them. For reference, `ch.qos.logback.classic.encoder.JsonEncoder` encodes key-value pairs as an array of `{"key": "value"}` entries to preserve duplicates.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Disclaimer: We value your time and bandwidth. As such, any pull requests created on non-triaged issues might not be successful.