Layyers -14 Additional List and create layers tests#1831

Open
bhanvimenghani wants to merge 9 commits into kruize:mvp_demo from bhanvimenghani:extra_layers_tests

Conversation

@bhanvimenghani
Contributor

@bhanvimenghani bhanvimenghani commented Mar 3, 2026

Description

Please describe the issue or feature and the summary of changes made to fix this.

Fixes # (issue)

Type of change

  • Bug fix
  • New feature
  • Docs update
  • Breaking change (What changes might users need to make in their application due to this PR?)
  • Requires DB changes

How has this been tested?

Please describe the tests that were run to verify your changes and steps to reproduce. Please specify any test configuration required.

  • New Test X
  • Functional testsuite

Test Configuration

  • Kubernetes clusters tested on:

Checklist 🎯

  • Followed coding guidelines
  • Comments added
  • Dependent changes merged
  • Documentation updated
  • Tests added or updated

Additional information

Include any additional information such as links, test results, screenshots here

Summary by Sourcery

Expand layer API validation and add comprehensive tests for layer creation, listing, and detection behavior.

New Features:

  • Add layer detection test suite covering query-based, presence-based, and experiment integration scenarios.

Enhancements:

  • Introduce detailed validation message constants for layer queries, labels, tunables, and API structure to standardize error handling.

Tests:

  • Extend listLayers tests with additional negative, edge case, performance, case-sensitivity, and sorting scenarios.
  • Add optional-field acceptance tests for createLayer and broaden mandatory field validation coverage for queries, labels, tunables, and API shape.
  • Introduce cleanup fixtures and multi-layer scenarios to ensure reliable, isolated layer detection tests.

@sourcery-ai
Contributor

sourcery-ai Bot commented Mar 3, 2026

Reviewer's Guide

Extends the layer REST API test coverage by adding edge/negative cases for listing and creating layers, introduces dedicated layer detection tests, and expands shared validation message constants to cover new error scenarios.

File-Level Changes

Change Details Files
Expand listLayers negative and edge-case coverage and add performance/sorting/case-sensitivity tests.
  • Add additional invalid query parameter scenarios including SQL injection-like input, extremely long names, and multiple invalid parameters.
  • Extend special-character name tests with Unicode and complex symbol inputs.
  • Introduce a performance test that bulk-creates layers, measures listLayers response time, validates count, and cleans up created layers.
  • Add a case-sensitivity test that validates successful exact-case lookup and failures for upper/lower-case mismatches.
  • Add a sorting-order test that creates multiple layers, asserts they are all returned, inspects returned order, and reports whether it’s alphabetically sorted or consistently ordered.
tests/scripts/local_monitoring_tests/rest_apis/test_list_layers.py
Broaden createLayer tests to cover optional fields and comprehensive validation of queries, labels, tunables, and API structure.
  • Add a parametrized positive test that verifies createLayer succeeds when optional fields like details are missing, null, or configured with queries lacking keys.
  • Extend the mandatory-fields validation parametrization to include invalid query objects, label validation cases, tunable name/value-type validation, numeric bounds/step combinations, and API structure (missing/invalid apiVersion/kind/metadata).
  • Modify test_create_layer_mandatory_fields_validation to build JSON manually for API-structure tests with intentionally missing fields while keeping the template path for normal cases.
  • Ensure temporary JSON files are written with either a rendered template or dynamically built JSON before invoking create_layer.
tests/scripts/local_monitoring_tests/rest_apis/test_create_layer.py
Add helper constants for new layer validation error messages used in tests.
  • Introduce message constants for query field validation (missing/empty/null datasource or query and invalid PromQL syntax).
  • Add label validation message constants for missing, null, and empty name/value fields.
  • Add tunable validation message constants for null/empty/missing name, invalid/null/missing value_type, partial bounds, and missing step.
  • Add API structure validation message constants for missing metadata/apiVersion/kind and invalid kind/apiVersion format.
tests/scripts/helpers/utils.py
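Such shared message constants typically look like the following sketch. The constant names and message strings below are illustrative assumptions, not the PR's actual values in `tests/scripts/helpers/utils.py`:

```python
# Illustrative sketch of validation-message constants shared across layer tests.
# The real constant names and wording in the PR may differ.

# Query field validation
LAYER_QUERY_DATASOURCE_MISSING_MSG = "Validation failed: datasource is mandatory for a layer query"

# Label validation
LAYER_LABEL_NAME_EMPTY_MSG = "Validation failed: label name cannot be empty"

# Tunable validation
TUNABLE_VALUE_TYPE_INVALID_MSG = "Validation failed: tunable value_type is invalid"

# API structure validation
API_VERSION_MISSING_MSG = "Validation failed: apiVersion is missing"
```

Centralizing the strings this way lets multiple negative tests assert against one definition, so a server-side wording change requires a single update.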
Introduce a new suite of layer detection tests covering query-based, presence-based, integration, and performance scenarios.
  • Create a new test_layer_detection module that imports shared helpers and determines the layer configuration directory via get_layer_dir().
  • Define an autouse fixture that cleans up a fixed set of layer names before and after each test to keep the DB clean.
  • Add query-based detection tests that load real layer config JSONs, verify presence/queries, and assert successful layer creation with multiple or single queries.
  • Add presence='always' tests using container-config.json to validate always-present behavior and applicability to all containers.
  • Add integration tests that create multiple layers for experiments, verify layers with Prometheus queries and tunables, and measure performance when creating several layers in sequence.
tests/scripts/local_monitoring_tests/rest_apis/test_layer_detection.py

Possibly linked issues

  • #: The PR implements the detailed create/list validation and many detection tests requested in the layer testing issue.


Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 3 issues, and left some high level feedback:

  • In test_create_layer_mandatory_fields_validation, the query_invalid_promql_syntax case uses a layer_presence string with malformed JSON ("up{job="), which will cause json.loads() to fail before the API is exercised; consider making the JSON syntactically valid while keeping the PromQL semantically invalid so the server-side validation is actually tested.
  • The performance-oriented tests (e.g. test_list_layers_performance_with_many_layers and test_layer_detection_performance_multiple_layers) currently only measure and print timings while asserting on counts; if these are meant to guard performance, consider asserting against a loose upper bound or clearly documenting that they are informational to avoid future confusion about their purpose.
  • There is quite a bit of duplicated code for creating temporary JSON files and layers across the new tests; consider extracting a small helper function (e.g. create_temp_layer_file(json_obj) or create_test_layer(layer_name, config_overrides)) to reduce repetition and make the tests easier to maintain.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `test_create_layer_mandatory_fields_validation`, the `query_invalid_promql_syntax` case uses a `layer_presence` string with malformed JSON (`"up{job="`), which will cause `json.loads()` to fail before the API is exercised; consider making the JSON syntactically valid while keeping the PromQL semantically invalid so the server-side validation is actually tested.
- The performance-oriented tests (e.g. `test_list_layers_performance_with_many_layers` and `test_layer_detection_performance_multiple_layers`) currently only measure and print timings while asserting on counts; if these are meant to guard performance, consider asserting against a loose upper bound or clearly documenting that they are informational to avoid future confusion about their purpose.
- There is quite a bit of duplicated code for creating temporary JSON files and layers across the new tests; consider extracting a small helper function (e.g. `create_temp_layer_file(json_obj)` or `create_test_layer(layer_name, config_overrides)`) to reduce repetition and make the tests easier to maintain.

## Individual Comments

### Comment 1
<location path="tests/scripts/local_monitoring_tests/rest_apis/test_list_layers.py" line_range="511" />
<code_context>
+
+@pytest.mark.layers
+@pytest.mark.sanity
+def test_list_layers_sorting_order(cluster_type):
+    """
+    Test Description: This test validates the sorting order of layers returned by listLayers API.
</code_context>
<issue_to_address>
**issue (testing):** The `test_list_layers_sorting_order` test does not assert any specific ordering, so it doesn't really validate sorting behavior.

The test only prints the order and computes `is_alphabetically_sorted` without asserting on it, so it will pass regardless of the API’s behavior. Please either add an assertion on the expected order (alphabetical vs. creation time, per the spec), or, if ordering is intentionally undefined, rename the test and remove the sorting logic to avoid a misleading signal.
</issue_to_address>
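If alphabetical ordering is in fact the documented behavior, the missing assertion could be as small as the sketch below. The response shape and the `layer_name` field are assumptions about the listLayers payload:

```python
# Example listLayers response shape (field name 'layer_name' is assumed):
list_layers_json = [
    {"layer_name": "container"},
    {"layer_name": "hotspot"},
    {"layer_name": "quarkus"},
]

returned_names = [layer["layer_name"] for layer in list_layers_json]

# Assert the expected order instead of only printing it, so the test
# fails when the API stops returning layers alphabetically sorted.
assert returned_names == sorted(returned_names), (
    f"listLayers returned layers out of alphabetical order: {returned_names}"
)
```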

### Comment 2
<location path="tests/scripts/local_monitoring_tests/rest_apis/test_layer_detection.py" line_range="54" />
<code_context>
+    pytest.param("hotspot-micrometer-config.json", id="hotspot_multiple_queries"),
+    pytest.param("quarkus-micrometer-config.json", id="quarkus_single_query"),
+])
+def test_layer_detection_with_valid_query(cluster_type, layer_file):
+    """
+    Test Description: Validates layer creation with query-based detection
</code_context>
<issue_to_address>
**suggestion (testing):** The layer detection tests only assert successful layer creation, not actual detection behavior.

These tests (e.g. `test_layer_detection_with_valid_query`, `test_layer_detection_always_present`, and the integration tests) only assert that `create_layer` succeeds, but never validate that detection rules are actually applied (e.g. layers being associated with the right containers/experiments). If this suite is intended to cover detection behavior, please add at least one end-to-end scenario that: (1) defines a layer with specific detection rules, (2) runs experiments that should and should not match, and (3) asserts that the correct layer is detected/applied. If that’s out of scope, consider renaming the tests to reflect that they only validate layer creation, not detection.

Suggested implementation:

```python
def test_layer_creation_with_valid_query(cluster_type, layer_file):
    """
    Test Description: Validates layer *creation* using query-based configurations
    Uses existing layer configs with real Prometheus queries
    Expected: Layer should be created successfully; detection behavior is not validated here
    """

```

To fully align the test suite with the clarified intent (“only validate layer creation, not detection”), similar renaming and docstring updates should be applied to:
1. `test_layer_detection_always_present` in this file: rename to something like `test_layer_creation_always_present` and update its docstring to clearly state it only verifies successful layer creation.
2. Any integration tests whose names/descriptions imply they validate detection behavior but in practice only assert successful `create_layer` responses; those should be renamed and their docstrings adjusted accordingly (e.g. “layer creation via REST API”) unless you add true end-to-end detection assertions as described in your review comment.
</issue_to_address>

### Comment 3
<location path="tests/scripts/local_monitoring_tests/rest_apis/test_list_layers.py" line_range="384" />
<code_context>
+
+@pytest.mark.layers
+@pytest.mark.sanity
+def test_list_layers_performance_with_many_layers(cluster_type):
+    """
+    Test Description: This test validates listLayers API performance when listing 100+ layers.
</code_context>
<issue_to_address>
**suggestion (testing):** The performance test measures response time but does not assert any threshold, which may make it misleading or noisy.

This test only asserts functional behavior while labeling itself as a performance test. If you want to catch regressions, add a reasonable upper bound on `response_time` and/or mark it as `@pytest.mark.slow`. Otherwise, consider renaming/rewording the test to emphasize handling many layers rather than performance.

Suggested implementation:

```python
@pytest.mark.layers
@pytest.mark.sanity
@pytest.mark.slow
def test_list_layers_performance_with_many_layers(cluster_type):
    """
    Test Description: This test validates listLayers API performance when listing 100+ layers.
    It creates multiple layers, measures response time, and asserts it stays under a reasonable threshold
    to catch performance regressions.
    """

```

```python
    num_layers = 10
    created_layers = []
    max_response_time_seconds = 5.0

    print(f"Creating {num_layers} layers for performance testing...")

```

```python
    response_time = response.elapsed.total_seconds()
    assert response_time <= max_response_time_seconds, (
        f"listLayers response time {response_time:.3f}s exceeded "
        f"threshold of {max_response_time_seconds:.3f}s when listing {num_layers} layers"
    )

```

If the variable holding the HTTP response is not named `response`, update the SEARCH/REPLACE block accordingly to match the actual variable name.  
You may also want to tune `max_response_time_seconds` (e.g., 2.0s, 10.0s) based on realistic expectations for the environment where this test runs (local vs CI vs production-like).
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.


@bhanvimenghani bhanvimenghani changed the title from "Layer Detection Tests + Additional List and create layers tests" to "Additional List and create layers tests" on Mar 5, 2026
@bhanvimenghani bhanvimenghani self-assigned this Mar 10, 2026
@bhanvimenghani bhanvimenghani changed the title from "Additional List and create layers tests" to "Layyers -14 Additional List and create layers tests" on Mar 12, 2026
@bhanvimenghani bhanvimenghani added this to the Kruize 0.11.0 Release milestone Apr 8, 2026
@bhanvimenghani bhanvimenghani moved this to Under Review in Monitoring Apr 8, 2026
@kusumachalasani
Contributor

@bhanvimenghani Can you please trigger the Jenkins test and post the results?
