Layers -14 Additional list and create layers tests #1831
bhanvimenghani wants to merge 9 commits into kruize:mvp_demo from
Conversation
Reviewer's Guide
Extends the layer REST API test coverage by adding edge/negative cases for listing and creating layers, introduces dedicated layer detection tests, and expands shared validation message constants to cover new error scenarios.
File-Level Changes
Hey - I've found 3 issues, and left some high level feedback:
- In `test_create_layer_mandatory_fields_validation`, the `query_invalid_promql_syntax` case uses a `layer_presence` string with malformed JSON (`"up{job="`), which will cause `json.loads()` to fail before the API is exercised; consider making the JSON syntactically valid while keeping the PromQL semantically invalid so the server-side validation is actually tested.
- The performance-oriented tests (e.g. `test_list_layers_performance_with_many_layers` and `test_layer_detection_performance_multiple_layers`) currently only measure and print timings while asserting on counts; if these are meant to guard performance, consider asserting against a loose upper bound or clearly documenting that they are informational to avoid future confusion about their purpose.
- There is quite a bit of duplicated code for creating temporary JSON files and layers across the new tests; consider extracting a small helper function (e.g. `create_temp_layer_file(json_obj)` or `create_test_layer(layer_name, config_overrides)`) to reduce repetition and make the tests easier to maintain.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `test_create_layer_mandatory_fields_validation`, the `query_invalid_promql_syntax` case uses a `layer_presence` string with malformed JSON (`"up{job="`), which will cause `json.loads()` to fail before the API is exercised; consider making the JSON syntactically valid while keeping the PromQL semantically invalid so the server-side validation is actually tested.
- The performance-oriented tests (e.g. `test_list_layers_performance_with_many_layers` and `test_layer_detection_performance_multiple_layers`) currently only measure and print timings while asserting on counts; if these are meant to guard performance, consider asserting against a loose upper bound or clearly documenting that they are informational to avoid future confusion about their purpose.
- There is quite a bit of duplicated code for creating temporary JSON files and layers across the new tests; consider extracting a small helper function (e.g. `create_temp_layer_file(json_obj)` or `create_test_layer(layer_name, config_overrides)`) to reduce repetition and make the tests easier to maintain.
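The last two points can be addressed together. A minimal sketch of the suggested helper, with a payload that keeps the JSON syntactically valid while leaving the PromQL broken; `create_temp_layer_file` and the payload shape are hypothetical and should be aligned with the suite's existing fixtures:

```python
import json
import tempfile

def create_temp_layer_file(json_obj):
    """Write a layer definition to a temporary JSON file and return its path."""
    tmp = tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False)
    json.dump(json_obj, tmp)
    tmp.close()
    return tmp.name

# Syntactically valid JSON whose embedded PromQL is still broken
# ("up{job=" is a truncated selector), so json.loads() succeeds and the
# request actually reaches the server-side PromQL validation.
invalid_promql_layer = {
    "layer_name": "test-invalid-promql",
    "layer_presence": {"query": "up{job="},
}

path = create_temp_layer_file(invalid_promql_layer)
```

Centralizing the temp-file creation this way also makes it easy to add cleanup (e.g. `os.unlink`) in one place.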
## Individual Comments
### Comment 1
<location path="tests/scripts/local_monitoring_tests/rest_apis/test_list_layers.py" line_range="511" />
<code_context>
+
+@pytest.mark.layers
+@pytest.mark.sanity
+def test_list_layers_sorting_order(cluster_type):
+ """
+ Test Description: This test validates the sorting order of layers returned by listLayers API.
</code_context>
<issue_to_address>
**issue (testing):** The `test_list_layers_sorting_order` test does not assert any specific ordering, so it doesn't really validate sorting behavior.
The test only prints the order and computes `is_alphabetically_sorted` without asserting on it, so it will pass regardless of the API’s behavior. Please either add an assertion on the expected order (alphabetical vs. creation time, per the spec), or, if ordering is intentionally undefined, rename the test and remove the sorting logic to avoid a misleading signal.
</issue_to_address>
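If alphabetical ordering is the specified behavior, the fix is a one-line assertion rather than a print; a sketch, where `layer_names` stands in for the names extracted from the listLayers response:

```python
# Placeholder for the layer names parsed from the listLayers response.
layer_names = ["layer-a", "layer-b", "layer-c"]

# Assert the expected order explicitly instead of only printing it,
# so the test fails when the API returns layers out of order.
assert layer_names == sorted(layer_names), (
    f"listLayers returned layers out of alphabetical order: {layer_names}"
)
```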
### Comment 2
<location path="tests/scripts/local_monitoring_tests/rest_apis/test_layer_detection.py" line_range="54" />
<code_context>
+ pytest.param("hotspot-micrometer-config.json", id="hotspot_multiple_queries"),
+ pytest.param("quarkus-micrometer-config.json", id="quarkus_single_query"),
+])
+def test_layer_detection_with_valid_query(cluster_type, layer_file):
+ """
+ Test Description: Validates layer creation with query-based detection
</code_context>
<issue_to_address>
**suggestion (testing):** The layer detection tests only assert successful layer creation, not actual detection behavior.
These tests (e.g. `test_layer_detection_with_valid_query`, `test_layer_detection_always_present`, and the integration tests) only assert that `create_layer` succeeds, but never validate that detection rules are actually applied (e.g. layers being associated with the right containers/experiments). If this suite is intended to cover detection behavior, please add at least one end-to-end scenario that: (1) defines a layer with specific detection rules, (2) runs experiments that should and should not match, and (3) asserts that the correct layer is detected/applied. If that’s out of scope, consider renaming the tests to reflect that they only validate layer creation, not detection.
Suggested implementation:
```python
def test_layer_creation_with_valid_query(cluster_type, layer_file):
"""
Test Description: Validates layer *creation* using query-based configurations
Uses existing layer configs with real Prometheus queries
Expected: Layer should be created successfully; detection behavior is not validated here
"""
```
To fully align the test suite with the clarified intent (“only validate layer creation, not detection”), similar renaming and docstring updates should be applied to:
1. `test_layer_detection_always_present` in this file: rename to something like `test_layer_creation_always_present` and update its docstring to clearly state it only verifies successful layer creation.
2. Any integration tests whose names/descriptions imply they validate detection behavior but in practice only assert successful `create_layer` responses; those should be renamed and their docstrings adjusted accordingly (e.g. “layer creation via REST API”) unless you add true end-to-end detection assertions as described in your review comment.
</issue_to_address>
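A self-contained sketch of the match/no-match assertion shape such an end-to-end scenario would need; `detect_layers` and the response shape are stand-ins for however the real detection API reports which layers apply to a container:

```python
def detect_layers(detection_result, container):
    """Return the layer names detected for a container.

    Stand-in for parsing the real listLayers/detection API response.
    """
    return detection_result.get(container, [])

# Simulated detection result after running one experiment that should
# match the layer's detection rules and one that should not.
detection_result = {
    "matching-app": ["hotspot"],  # experiment expected to match
    "other-app": [],              # experiment expected not to match
}

assert "hotspot" in detect_layers(detection_result, "matching-app")
assert "hotspot" not in detect_layers(detection_result, "other-app")
```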
### Comment 3
<location path="tests/scripts/local_monitoring_tests/rest_apis/test_list_layers.py" line_range="384" />
<code_context>
+
+@pytest.mark.layers
+@pytest.mark.sanity
+def test_list_layers_performance_with_many_layers(cluster_type):
+ """
+ Test Description: This test validates listLayers API performance when listing 100+ layers.
</code_context>
<issue_to_address>
**suggestion (testing):** The performance test measures response time but does not assert any threshold, which may make it misleading or noisy.
This test only asserts functional behavior while labeling itself as a performance test. If you want to catch regressions, add a reasonable upper bound on `response_time` and/or mark it as `@pytest.mark.slow`. Otherwise, consider renaming/rewording the test to emphasize handling many layers rather than performance.
Suggested implementation:
```python
@pytest.mark.layers
@pytest.mark.sanity
@pytest.mark.slow
def test_list_layers_performance_with_many_layers(cluster_type):
"""
Test Description: This test validates listLayers API performance when listing 100+ layers.
It creates multiple layers, measures response time, and asserts it stays under a reasonable threshold
to catch performance regressions.
"""
```
```python
num_layers = 10
created_layers = []
max_response_time_seconds = 5.0
print(f"Creating {num_layers} layers for performance testing...")
```
```python
response_time = response.elapsed.total_seconds()
assert response_time <= max_response_time_seconds, (
f"listLayers response time {response_time:.3f}s exceeded "
f"threshold of {max_response_time_seconds:.3f}s when listing {num_layers} layers"
)
```
If the variable holding the HTTP response is not named `response`, update the SEARCH/REPLACE block accordingly to match the actual variable name.
You may also want to tune `max_response_time_seconds` (e.g., 2.0s, 10.0s) based on realistic expectations for the environment where this test runs (local vs CI vs production-like).
</issue_to_address>
Force-pushed from 2c6a350 to c36c290
@bhanvimenghani Can you please trigger the Jenkins tests and post the results?
Description
Please describe the issue or feature and the summary of changes made to fix this.
Fixes # (issue)
Type of change
How has this been tested?
Please describe the tests that were run to verify your changes and steps to reproduce. Please specify any test configuration required.
Test Configuration
Checklist 🎯
Additional information
Include any additional information such as links, test results, screenshots here
Summary by Sourcery
Expand layer API validation and add comprehensive tests for layer creation, listing, and detection behavior.