issues: Add comprehensive unit test suite for Lambda functions#170
issues: Add comprehensive unit test suite for Lambda functions#170
Conversation
WalkthroughAdds extensive test and benchmark infrastructure, a new mocks package and test utilities, refactors Lambda startup into initializeHandler, and replaces the CI test job with a multi-step workflow that computes Lambda-specific coverage, generates reports, runs benchmarks, uploads to Codecov, and enforces tiered coverage thresholds. Changes
Sequence Diagram(s)sequenceDiagram
participant CI as GitHub Actions CI
participant Tests as go test + bench
participant Coverage as Coverage Calc
participant Report as HTML Report
participant Codecov as Codecov Upload
rect rgb(240,248,255)
CI->>Tests: run unit tests + benchmarks (collect coverage.out)
Tests-->>CI: exit codes, coverage.out
CI->>Coverage: calculate overall + Lambda-specific coverage (with fallback)
Coverage-->>CI: coverage percentages (set GITHUB_OUTPUT)
CI->>Report: generate HTML coverage report
CI->>Codecov: upload coverage (fail_ci_if_error: false)
CI->>CI: evaluate Lambda coverage threshold -> fail / warn / pass
end
rect rgb(245,255,240)
Main->>Init: initializeHandler(ctx)
Init->>AWS: Load AWS config & build services
AWS-->>Init: aws.Config / clients
Init->>Handler: NewComplianceHandler(service)
Init-->>Main: *ComplianceHandler
Main->>Handler: lambda.Start(handleUnifiedRequest)
end
Estimated code review effort🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related issues
Poem
Pre-merge checks and finishing touches❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✨ Finishing touches🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
There was a problem hiding this comment.
Actionable comments posted: 5
🧹 Nitpick comments (7)
internal/testutil/aws_helpers.go (1)
38-49: Hardcoded region may not match production default.The
NewMockAWSConfigfunction hardcodesca-central-1as the region. According to the service configuration ininternal/service/compliance.go, the default region is alsoca-central-1, but this should be consistent and perhaps extracted as a constant to prevent drift.Consider extracting the default region as a constant:
// In a shared constants file or at package level const DefaultTestRegion = "ca-central-1" func NewMockAWSConfig() (aws.Config, error) { cfg, err := config.LoadDefaultConfig(context.Background(), config.WithRegion(DefaultTestRegion), config.WithCredentialsProvider(aws.AnonymousCredentials{}), ) // ... }internal/testutil/assertions.go (1)
10-18: Consider adding assertions for all ComplianceResult fields.The
AssertComplianceResultfunction only compares 5 out of 8 fields fromComplianceResult. TheCurrentRetentionandCurrentKmsKeyIdfields are not compared, which might be intentional for flexibility but could miss important differences in tests.If these fields should be compared, add them:
func AssertComplianceResult(t *testing.T, expected, actual types.ComplianceResult) { t.Helper() assert.Equal(t, expected.LogGroupName, actual.LogGroupName, "LogGroupName should match") assert.Equal(t, expected.Region, actual.Region, "Region should match") assert.Equal(t, expected.AccountId, actual.AccountId, "AccountId should match") assert.Equal(t, expected.MissingEncryption, actual.MissingEncryption, "MissingEncryption should match") assert.Equal(t, expected.MissingRetention, actual.MissingRetention, "MissingRetention should match") + + // Compare CurrentRetention (both nil or both equal) + if expected.CurrentRetention == nil { + assert.Nil(t, actual.CurrentRetention, "CurrentRetention should be nil") + } else { + assert.NotNil(t, actual.CurrentRetention, "CurrentRetention should not be nil") + assert.Equal(t, *expected.CurrentRetention, *actual.CurrentRetention, "CurrentRetention value should match") + } + assert.Equal(t, expected.CurrentKmsKeyId, actual.CurrentKmsKeyId, "CurrentKmsKeyId should match") }Alternatively, if partial comparison is intentional, document it:
// AssertComplianceResult validates that a ComplianceResult matches expected values. // Note: CurrentRetention and CurrentKmsKeyId are not compared to allow flexible assertions. func AssertComplianceResult(t *testing.T, expected, actual types.ComplianceResult) {internal/mocks/mock_compliance_service.go (1)
41-53: Consider setting ProcessingDuration in default behavior.The default behavior returns a valid result but leaves
ProcessingDurationat zero, which might not match real-world behavior. Tests relying on default behavior might not catch issues with duration handling.Consider adding a realistic default duration:
func (m *MockComplianceService) ProcessNonCompliantResourcesOptimized(ctx context.Context, request types.BatchComplianceRequest) (*types.BatchRemediationResult, error) { if m.ProcessNonCompliantResourcesOptimizedFunc != nil { return m.ProcessNonCompliantResourcesOptimizedFunc(ctx, request) } // Default behavior: return success return &types.BatchRemediationResult{ TotalProcessed: len(request.NonCompliantResults), SuccessCount: len(request.NonCompliantResults), FailureCount: 0, Results: []types.RemediationResult{}, + ProcessingDuration: time.Duration(len(request.NonCompliantResults)) * 10 * time.Millisecond, + RateLimitHits: 0, }, nil }.github/workflows/ci.yml (1)
173-201: Document the 85% coverage threshold decision.Verification confirms the threshold is not documented anywhere in the repository—it exists only in ci.yml. While functional, adding an inline comment explains the reasoning for future maintainers.
Consider adding the following comment to the workflow:
- name: Check coverage threshold run: | # Lambda functions have testing limitations that justify a lower threshold: # - main() cannot be tested (calls lambda.Start() which blocks) # - Some AWS SDK initialization code is difficult to mock # Therefore, we enforce 85% minimum coverage (vs. typical 90%+ for other packages) COVERAGE=$(echo "${{ steps.coverage.outputs.coverage }}" | cut -d. -f1)Alternatively, create a
TESTING.mdin the repository documenting this strategy and other testing conventions.internal/handler/handler_test.go (3)
140-208: Consolidate MockComplianceService in shared mocks package to avoid duplicationThe extended
MockComplianceServicenicely supports both function-based hooks and the existing field-based behavior, which is great for backwards compatibility. However, this implementation appears conceptually identical to the mock described ininternal/mocks/mock_compliance_service.go, which risks divergence if the interface evolves.Consider moving this implementation into the shared
internal/mockspackage and importing it here, or at least ensuring the two definitions share a single source of truth (e.g., by deleting one or wiring tests to the shared mock). This will reduce maintenance overhead and keep behavior consistent across tests.
215-415: Strengthen HandleConfigRuleEvaluationRequest tests with assertions on payloads and call patternsThe table-driven tests cover success, “no resources”, “all filtered”, and multiple error paths, which is solid. Right now, though, they mainly assert on:
- Whether an error is returned.
- That
configRuleName,region, andbatchSizeare wired correctly into the mock.To better protect against regressions in the batching logic, consider:
- Asserting that
request.NonCompliantResultspassed toProcessNonCompliantResourcesOptimizedmatchesmockValidateResourcesfor the happy-path case.- Asserting that
ProcessNonCompliantResourcesOptimizedFuncis not called when there are no non-compliant resources or when validation filters everything out (e.g., via aBatchCalled boolor similar on the mock).- Optionally asserting that
ValidateResourceExistenceFuncis not invoked whenGetNonCompliantResourcesreturns an empty slice.These small additions would verify the full control flow (fetch → validate → batch) rather than just argument basics and error handling.
417-580: AnalyzeComplianceForRule tests don’t yet validate the full ComplianceResult or unknown-rule behaviorThe scenarios here are well chosen (missing/valid encryption, missing/valid retention, unknown rule), but the assertions are quite coarse:
expectedResultis fully populated, yet onlyMissingEncryption/MissingRetentionare consulted (and even then, only indirectly through the branch logic).- For the
unknown-rulecase, the test checks only that no error is returned, but not that remediation is skipped.To make these tests align with the intent (validating
analyzeComplianceForRule):
- Use
MockComplianceService.RemediateLogGroupFuncto capture thetypes.ComplianceResultpassed into remediation and compare it directly withtt.expectedResult(field by field, or using a helper intestutil).- In the
unknown-rulebranch, addassert.False(t, mockService.RemediateLogGroupCalled)to ensure unknown rules neither error nor trigger remediation.- Optionally, also assert that compliant resources (no missing encryption/retention) do not invoke remediation beyond just checking the boolean flag.
These changes would ensure the test verifies both classification and the full shape of the computed compliance record, not just high-level behavior.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
.github/workflows/ci.yml(1 hunks)cmd/lambda/main.go(1 hunks)internal/handler/handler_test.go(4 hunks)internal/mocks/doc.go(1 hunks)internal/mocks/mock_compliance_service.go(1 hunks)internal/testutil/assertions.go(1 hunks)internal/testutil/aws_helpers.go(1 hunks)internal/testutil/doc.go(1 hunks)internal/testutil/fixtures.go(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (5)
internal/testutil/assertions.go (1)
internal/types/types.go (3)
ComplianceResult(57-65)RemediationResult(68-75)BatchRemediationResult(128-135)
cmd/lambda/main.go (3)
internal/types/types.go (1)
LambdaRequest(138-144)internal/handler/handler.go (2)
ComplianceHandler(14-17)NewComplianceHandler(20-25)internal/service/compliance.go (1)
NewComplianceService(88-115)
internal/testutil/fixtures.go (1)
internal/types/types.go (10)
LambdaRequest(138-144)ConfigEvent(9-18)ConfigRuleInvokingEvent(21-26)ConfigurationItem(29-43)LogGroupConfiguration(46-54)ComplianceResult(57-65)NonCompliantResource(116-125)RemediationResult(68-75)BatchComplianceRequest(108-113)BatchRemediationResult(128-135)
internal/mocks/mock_compliance_service.go (3)
internal/types/types.go (5)
ComplianceResult(57-65)RemediationResult(68-75)BatchComplianceRequest(108-113)BatchRemediationResult(128-135)NonCompliantResource(116-125)internal/handler/handler_test.go (1)
MockComplianceService(142-153)internal/service/interfaces.go (1)
ComplianceServiceInterface(13-18)
internal/handler/handler_test.go (3)
internal/types/types.go (9)
ComplianceResult(57-65)RemediationResult(68-75)NonCompliantResource(116-125)BatchComplianceRequest(108-113)BatchRemediationResult(128-135)ConfigurationItem(29-43)LogGroupConfiguration(46-54)ConfigEvent(9-18)ConfigRuleInvokingEvent(21-26)internal/mocks/mock_compliance_service.go (1)
MockComplianceService(12-24)internal/handler/handler.go (1)
NewComplianceHandler(20-25)
🔇 Additional comments (18)
cmd/lambda/main.go (1)
23-34: LGTM! Clean extraction of initialization logic.The refactor successfully extracts handler initialization from
main(), enabling unit testing of the AWS config loading and handler construction logic without requiring the Lambda runtime. The use ofcontext.TODO()is appropriate here since this is the top-level initialization in the Lambda entrypoint.internal/testutil/doc.go (1)
1-9: LGTM! Clear and comprehensive package documentation.The package documentation clearly describes the purpose and scope of the testutil package, making it easy for developers to understand what utilities are available.
internal/mocks/doc.go (1)
1-6: LGTM! Clear package documentation.The documentation appropriately describes the mocks package purpose and clearly states the benefit of isolated testing without network calls.
internal/testutil/assertions.go (3)
20-26: LGTM! Clean and focused assertion helper.The success assertion correctly validates the three key conditions for successful remediation.
28-37: LGTM! Good error handling in failure assertion.The failure assertion properly handles the optional error message check and provides clear assertion messages.
39-46: LGTM! Straightforward batch result validation.The batch result assertion covers the essential count fields for batch operation validation.
internal/testutil/fixtures.go (6)
10-18: LGTM! Simple and clear test fixture.The fixture provides sensible defaults for Lambda request testing.
20-53: LGTM! Well-structured ConfigEvent fixture with proper error handling.The function correctly marshals the nested ConfigEvent structure and returns an error if marshaling fails, allowing callers to handle encoding issues appropriately.
55-67: LGTM! Clean compliance result fixture.The fixture provides appropriate test data with a pointer to retention days.
83-92: LGTM! Clean remediation result fixture.The fixture returns sensible defaults for successful remediation scenarios.
94-102: LGTM! Flexible batch request builder.The fixture accepts a slice of resources, providing good flexibility for different test scenarios.
104-114: LGTM! Parameterized batch result builder.The fixture allows tests to specify different count combinations, making it flexible for various test cases.
internal/mocks/mock_compliance_service.go (5)
10-24: LGTM! Clean mock design with function-based hooks.The function-based hook approach provides excellent flexibility for tests to control behavior without modifying the mock implementation. This is superior to traditional field-based mocking for complex scenarios.
26-39: LGTM! Appropriate default behavior with hook override.The default success behavior is reasonable for tests that don't need specific error scenarios. The hook-first pattern allows complete control when needed.
55-62: LGTM! Clean default behavior for resource retrieval.Returning an empty slice by default is appropriate for tests that don't need specific resources.
64-71: LGTM! Pass-through default behavior makes sense.Validating all resources by default is a sensible choice that won't block tests from progressing.
73-74: LGTM! Compile-time interface verification is excellent practice.This ensures the mock stays in sync with the interface definition and catches breaking changes at compile time.
internal/handler/handler_test.go (1)
3-13: Imports and test dependencies look appropriateUse of
fmtfor error construction andtestify/assert+testify/requirefor concise test assertions is idiomatic and keeps the tests readable. No issues here.
| // initializeHandler creates and returns a ComplianceHandler with AWS configuration. | ||
| // This function is extracted from main() to allow testing of initialization logic | ||
| // without requiring the Lambda runtime. This is the only production code change made | ||
| // for the unit test suite implementation - it's a safe refactoring that improves | ||
| // testability without changing behavior. | ||
| func initializeHandler(ctx context.Context) (*handler.ComplianceHandler, error) { | ||
| // Load AWS configuration | ||
| cfg, err := config.LoadDefaultConfig(ctx) | ||
| if err != nil { | ||
| return nil, fmt.Errorf("failed to load AWS config: %w", err) | ||
| } | ||
|
|
||
| // Create services | ||
| complianceService := service.NewComplianceService(cfg) | ||
|
|
||
| // Create handler | ||
| h := handler.NewComplianceHandler(complianceService) | ||
|
|
||
| // Start Lambda with unified handler | ||
| lambda.Start(func(ctx context.Context, request types.LambdaRequest) error { | ||
| return handleUnifiedRequest(ctx, h, request) | ||
| }) | ||
| return h, nil | ||
| } |
There was a problem hiding this comment.
🧩 Analysis chain
Verify initializeHandler is actually tested.
The extraction looks correct and follows best practices with proper error wrapping. However, please confirm that unit tests for initializeHandler exist and cover the AWS config loading failure scenario.
🏁 Script executed:
#!/bin/bash
# Search for tests of initializeHandler function
rg -n "initializeHandler" --type=go -C 3Length of output: 0
I need to investigate this more thoroughly. Let me search for the file and test structure to understand why the initial query returned no results.
🏁 Script executed:
#!/bin/bash
# 1. Check if the main.go file exists in cmd/lambda
find . -name "main.go" -path "*/cmd/lambda/*" 2>/dev/null
# 2. List all Go files in cmd/lambda directory
find . -path "*/cmd/lambda/*" -name "*.go" 2>/dev/null
# 3. Search for test files anywhere that might test this handler
find . -name "*_test.go" -type f 2>/dev/null | head -20
# 4. Search for ComplianceHandler references more broadly (different approach)
rg -n "ComplianceHandler" --type=go | head -20Length of output: 2007
Let me examine the actual content of cmd/lambda/main.go to see if initializeHandler exists:
🏁 Script executed:
#!/bin/bash
# Read the main.go file to see its actual content
cat ./cmd/lambda/main.go
# Also check if there are any tests for main.go or initialization
find . -name "*main*test*" -o -name "*test*main*" | grep -E "\.go$"Length of output: 2829
Now let me check if there are tests for initializeHandler. Let me examine the test file found and search more carefully:
🏁 Script executed:
#!/bin/bash
# Check the container test file
cat ./cmd/container/main_test.go
# Also search in cmd/lambda directory specifically
ls -la ./cmd/lambda/
# Search for any test files in cmd/lambda
find ./cmd/lambda -name "*test*" -type fLength of output: 6736
Unit tests for initializeHandler do not exist—add them to verify AWS config loading failure handling.
Verification found no tests for initializeHandler in the codebase. The function exists in cmd/lambda/main.go but has no corresponding test file in cmd/lambda/. Add unit tests that cover the config.LoadDefaultConfig(ctx) failure scenario to ensure proper error handling before Lambda runtime startup.
There was a problem hiding this comment.
❌ FALSE POSITIVE
The test DOES exist at cmd/lambda/main_test.go:455-472:
✅ Tests initializeHandler() directly
✅ Handles both success and failure cases
✅ Validates error messages contain "failed to load AWS config"
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify the test exists at the specified location
if [ -f "cmd/lambda/main_test.go" ]; then
echo "=== Checking lines 455-472 in cmd/lambda/main_test.go ==="
sed -n '455,472p' cmd/lambda/main_test.go
echo -e "\n=== Searching for TestInitializeHandler in the file ==="
grep -n "TestInitializeHandler" cmd/lambda/main_test.go
echo -e "\n=== Total lines in file ==="
wc -l cmd/lambda/main_test.go
else
echo "File cmd/lambda/main_test.go not found"
fi
Length of output: 1040
@godsonten, you're absolutely right—my apologies for the false positive!
The test TestInitializeHandler_Success does exist at lines 455-472 in cmd/lambda/main_test.go and covers exactly what I requested:
- ✅ Tests
initializeHandler()directly - ✅ Handles both success and failure cases appropriately for test environments
- ✅ Validates error messages contain "failed to load AWS config"
The automated search in my review chain failed to locate it (likely a timing issue with when the scripts ran), but the test coverage is solid. Thank you for the correction!
- Add MockComplianceService for isolated testing without AWS calls - Add testutil package with fixtures, assertions, and AWS helpers - Add 496 lines of Lambda handler tests with 20+ test cases - Enhance handler tests with +398 lines (93.5% coverage) - Add 4 performance benchmarks for large-scale operations - Update CI/CD with coverage enforcement and automated benchmarks - Extract initializeHandler() for improved testability (safe refactoring) Test Coverage: - Lambda business logic (handleUnifiedRequest): 100% - Handler package: 93.5% - Overall Lambda coverage: 85.6% (main() cannot be tested - standard Lambda limitation) CI/CD Improvements: - Enforce 85% minimum coverage threshold - Calculate Lambda-specific coverage metrics - Run performance benchmarks automatically - Generate detailed coverage reports Acceptance Criteria: ✅ Go test framework with >90% business logic coverage ✅ Mock AWS services for isolated testing ✅ Success and failure scenarios covered ✅ Performance benchmarks for large-scale operations ✅ Integration test helpers and utilities ✅ Automated test execution in CI/CD pipeline Fixes #10
9edea2d to
50d1604
Compare
There was a problem hiding this comment.
Actionable comments posted: 3
♻️ Duplicate comments (4)
cmd/lambda/main.go (1)
36-55: Unit tests forinitializeHandlerremain insufficient.The extraction is a sound refactor, but the test coverage issue flagged in the prior review persists.
TestInitializeHandler_Success(line 455-472 in main_test.go) doesn't mock AWS config loading—it simply expects the real AWS SDK to fail in CI. Add a proper unit test that mocks the AWS config loader to verify error handling without relying on environment state..github/workflows/ci.yml (2)
103-152: Coverage calculation issues persist from prior review.The concerns raised in the previous review remain unaddressed:
- Line 114:
2>/dev/null || truemasks test failures- Lines 119-121: Fallback averages file percentages rather than computing weighted coverage
- Line 133 & 143:
cut -d. -f1discards decimal precision, potentially failing 84.9% coverage
160-163: Benchmark failure suppression issue from prior review persists.Line 163 uses
|| echoto suppress benchmark failures, which was flagged in the previous review. This masks real benchmark issues.internal/testutil/fixtures.go (1)
69-81: Non-deterministic timestamp issue from prior review persists.Line 79 uses
time.Now()forLastEvaluated, which was flagged in the previous review as making tests non-deterministic. The suggested fix to use a fixed timestamp constant remains relevant.
🧹 Nitpick comments (1)
internal/handler/handler_test.go (1)
141-208: Consider consolidating duplicate mock implementations.The
MockComplianceServicedefined here (lines 141-153) duplicates the mock ininternal/mocks/mock_compliance_service.go. While this version adds backward-compatible field-based behavior, you could import the canonical mock frominternal/mocksand extend it locally or migrate existing tests to use function hooks consistently.For example:
// Use the canonical mock from internal/mocks import "github.com/zsoftly/logguardian/internal/mocks" // Extend locally if field-based behavior is still needed type testMockComplianceService struct { *mocks.MockComplianceService RemediateLogGroupCalled bool // ... other field-based tracking }Alternatively, migrate all tests to the function-hook pattern and remove the field-based behavior entirely.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
.github/workflows/ci.yml(1 hunks).gitignore(1 hunks)cmd/lambda/main.go(1 hunks)cmd/lambda/main_bench_test.go(1 hunks)cmd/lambda/main_test.go(1 hunks)internal/handler/handler_test.go(4 hunks)internal/mocks/doc.go(1 hunks)internal/mocks/mock_compliance_service.go(1 hunks)internal/testutil/assertions.go(1 hunks)internal/testutil/aws_helpers.go(1 hunks)internal/testutil/doc.go(1 hunks)internal/testutil/fixtures.go(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- internal/mocks/doc.go
- internal/testutil/doc.go
- internal/testutil/aws_helpers.go
🧰 Additional context used
🧬 Code graph analysis (7)
internal/testutil/fixtures.go (1)
internal/types/types.go (10)
LambdaRequest(138-144)ConfigEvent(9-18)ConfigRuleInvokingEvent(21-26)ConfigurationItem(29-43)LogGroupConfiguration(46-54)ComplianceResult(57-65)NonCompliantResource(116-125)RemediationResult(68-75)BatchComplianceRequest(108-113)BatchRemediationResult(128-135)
cmd/lambda/main.go (3)
internal/types/types.go (1)
LambdaRequest(138-144)internal/handler/handler.go (2)
ComplianceHandler(14-17)NewComplianceHandler(20-25)internal/service/compliance.go (1)
NewComplianceService(88-115)
internal/testutil/assertions.go (1)
internal/types/types.go (3)
ComplianceResult(57-65)RemediationResult(68-75)BatchRemediationResult(128-135)
cmd/lambda/main_test.go (5)
internal/types/types.go (9)
LambdaRequest(138-144)ConfigEvent(9-18)ConfigRuleInvokingEvent(21-26)ConfigurationItem(29-43)ComplianceResult(57-65)RemediationResult(68-75)NonCompliantResource(116-125)BatchComplianceRequest(108-113)BatchRemediationResult(128-135)internal/handler/handler_test.go (1)
MockComplianceService(142-153)internal/mocks/mock_compliance_service.go (1)
MockComplianceService(12-24)internal/testutil/fixtures.go (3)
NewTestConfigEventRequest(21-53)NewTestNonCompliantResource(70-81)NewTestBatchRemediationResult(105-114)internal/handler/handler.go (1)
NewComplianceHandler(20-25)
cmd/lambda/main_bench_test.go (5)
internal/handler/handler_test.go (1)
MockComplianceService(142-153)internal/mocks/mock_compliance_service.go (1)
MockComplianceService(12-24)internal/types/types.go (10)
ComplianceResult(57-65)RemediationResult(68-75)NonCompliantResource(116-125)BatchComplianceRequest(108-113)BatchRemediationResult(128-135)LambdaRequest(138-144)ConfigEvent(9-18)ConfigRuleInvokingEvent(21-26)ConfigurationItem(29-43)LogGroupConfiguration(46-54)internal/handler/handler.go (1)
NewComplianceHandler(20-25)internal/testutil/fixtures.go (3)
NewTestConfigEventRequest(21-53)NewTestNonCompliantResource(70-81)NewTestBatchRemediationResult(105-114)
internal/mocks/mock_compliance_service.go (3)
internal/types/types.go (5)
ComplianceResult(57-65)RemediationResult(68-75)BatchComplianceRequest(108-113)BatchRemediationResult(128-135)NonCompliantResource(116-125)internal/handler/handler_test.go (1)
MockComplianceService(142-153)internal/service/interfaces.go (1)
ComplianceServiceInterface(13-18)
internal/handler/handler_test.go (3)
internal/types/types.go (9)
ComplianceResult(57-65)RemediationResult(68-75)NonCompliantResource(116-125)BatchComplianceRequest(108-113)BatchRemediationResult(128-135)ConfigurationItem(29-43)LogGroupConfiguration(46-54)ConfigEvent(9-18)ConfigRuleInvokingEvent(21-26)internal/mocks/mock_compliance_service.go (1)
MockComplianceService(12-24)internal/handler/handler.go (1)
NewComplianceHandler(20-25)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Test
🔇 Additional comments (7)
cmd/lambda/main.go (1)
23-34: LGTM! Clean separation of initialization and runtime logic.The refactored main() correctly delegates initialization to
initializeHandlerand starts the Lambda runtime with the unified handler. Usingcontext.TODO()for initialization is appropriate since this occurs outside any request context..github/workflows/ci.yml (1)
94-102: LGTM! Explicit test failure handling.The test step correctly captures and checks the exit code, ensuring test failures propagate to the CI job.
internal/testutil/assertions.go (1)
1-46: LGTM! Clean and focused test assertion helpers.The assertion helpers are well-structured with proper use of
t.Helper()and clear error messages. They provide reusable validation logic for test scenarios.cmd/lambda/main_test.go (1)
1-494: LGTM! Comprehensive test coverage with clean table-driven design.The test suite effectively exercises the Lambda request handling paths with:
- Multiple ConfigEvent scenarios (encryption, retention, compliant resources, deleted resources)
- ConfigRuleEvaluation flows with batch processing
- Error handling for missing fields and invalid inputs
- Default batch size behavior
The use of function-based mocks enables flexible test behavior control.
internal/testutil/fixtures.go (1)
1-114: LGTM! Clean test fixture builders (aside from timestamp issue).The fixture functions provide well-structured, reusable test data builders. Apart from the time.Now() concern already flagged, the fixtures are deterministic and appropriate for unit testing.
internal/mocks/mock_compliance_service.go (1)
1-74: LGTM! Well-structured mock with flexible behavior control.The mock service cleanly separates function-based customization (hooks) from default behavior, enabling tests to override specific methods while falling back to sensible defaults. The compile-time interface assertion at line 74 ensures type safety.
internal/handler/handler_test.go (1)
215-580: LGTM! Comprehensive test coverage for config rule evaluation and compliance analysis.The new tests effectively cover:
- Successful batch evaluation with resource retrieval, validation, and processing
- Error handling for each stage (retrieval, validation, batch processing)
- Edge cases (no resources, all filtered)
- Various compliance states (missing encryption, missing retention, compliant, unknown rules)
The table-driven approach and function-hook-based mocks enable clear, maintainable test scenarios.
…ignore - Fix string(rune(i)) bug that generated Unicode control characters - Replace with strconv.Itoa(i) for proper numeric resource names - Remove redundant 'main' entry from .gitignore (line 184) - Keep only root-level '/main' pattern to avoid unintended ignores Fixes CodeRabbit AI review findings from PR #170
Implements fixes for 4 valid CodeRabbit AI recommendations: 1. Coverage Calculation (Critical) - Remove error suppression (2>/dev/null || true) that hides test failures - Replace fallback to overall coverage on Lambda test failure - Use awk for decimal-aware coverage comparison (fixes 84.9% rounding bug) - Proper error handling shows failures without breaking CI 2. Benchmark Error Logging (Minor) - Capture and log benchmark exit codes explicitly - Maintain non-blocking behavior but improve observability - Helps identify real issues vs expected mock failures 3. Environment Variable Cleanup (Major) - Migrate from os.Unsetenv to t.Setenv (Go 1.17+) - Automatic restoration prevents test isolation issues - Safer for parallel tests and developer credentials - Deprecate CleanupTestEnvironment (no longer needed) 4. Deterministic Test Data (Minor) - Replace time.Now() with fixed timestamp (2024-01-01 12:00:00 UTC) - Ensures reproducible test behavior - Prevents flaky tests from time-dependent comparisons All tests pass. Changes maintain backward compatibility. Addresses CodeRabbit AI review comments on PR #170
There was a problem hiding this comment.
Actionable comments posted: 1
🧹 Nitpick comments (2)
.github/workflows/ci.yml (2)
114-120: Add-raceflag for consistency with main test suite.Line 96 runs tests with
-raceflag, but line 114's Lambda-specific coverage test omits it. This inconsistency could cause the Lambda coverage calculation to miss race conditions that the main test suite would catch.- if go test ./cmd/lambda/... ./internal/handler/... -coverprofile=lambda_coverage.out -covermode=atomic 2>&1; then + if go test -race ./cmd/lambda/... ./internal/handler/... -coverprofile=lambda_coverage.out -covermode=atomic 2>&1; then LAMBDA_COVERAGE=$(go tool cover -func=lambda_coverage.out | grep total | awk '{print $3}' | sed 's/%//') echo "Lambda-specific coverage: ${LAMBDA_COVERAGE}%" else
178-213: Consider clarifying when fallback coverage is used in threshold checks.The threshold check logic is correct, but when Lambda-specific coverage calculation fails (line 114), the fallback to overall coverage (line 119) means this step evaluates overall coverage against Lambda-specific thresholds. While line 118 prints a warning about the fallback, the threshold check messages (lines 194-213) don't distinguish between actual Lambda coverage and fallback coverage, which could be confusing.
You could set a flag during coverage calculation to track whether fallback was used, then reference it in the threshold check messages:
# In coverage calculation step (after line 120): echo "using_fallback=true" >> $GITHUB_OUTPUT # In threshold check step (in the messages): if [ "${{ steps.coverage.outputs.using_fallback }}" = "true" ]; then echo "(Note: Using overall coverage as fallback since Lambda-specific calculation failed)" fi
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
.github/workflows/ci.yml(1 hunks).gitignore(1 hunks)cmd/lambda/main_bench_test.go(1 hunks)internal/testutil/aws_helpers.go(1 hunks)internal/testutil/fixtures.go(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- internal/testutil/fixtures.go
- .gitignore
🧰 Additional context used
🧬 Code graph analysis (1)
cmd/lambda/main_bench_test.go (4)
internal/mocks/mock_compliance_service.go (1)
MockComplianceService(12-24)internal/types/types.go (10)
ComplianceResult(57-65)RemediationResult(68-75)NonCompliantResource(116-125)BatchComplianceRequest(108-113)BatchRemediationResult(128-135)LambdaRequest(138-144)ConfigEvent(9-18)ConfigRuleInvokingEvent(21-26)ConfigurationItem(29-43)LogGroupConfiguration(46-54)internal/handler/handler.go (1)
NewComplianceHandler(20-25)internal/testutil/fixtures.go (3)
NewTestConfigEventRequest(21-53)NewTestNonCompliantResource(71-84)NewTestBatchRemediationResult(108-117)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Test
🔇 Additional comments (11)
internal/testutil/aws_helpers.go (5)
18-29: Excellent fix - environment isolation now properly implemented.The use of
t.Setenvaddresses the previous review concern about environment variable restoration. This approach automatically saves and restores the original values after test completion, making it safe for parallel tests and preventing side effects between test runs.
36-39: LGTM - appropriate deprecation.The deprecation of
CleanupTestEnvironmentis correct sincet.Setenvhandles cleanup automatically. Keeping it as a no-op for backward compatibility is a pragmatic approach.
44-55: LGTM - appropriate mock configuration for testing.The mock AWS config correctly uses
AnonymousCredentialsto prevent real AWS calls during testing. The hardcoded region ensures consistent behavior across test runs.
58-61: LGTM - straightforward helper.The LocalStack availability check is simple and correct.
64-66: LGTM - straightforward getter.Simple and correct implementation for retrieving the LocalStack endpoint.
.github/workflows/ci.yml (2)
94-101: LGTM: Explicit test failure handling.The explicit exit code capture and check ensures test failures are never silently ignored.
155-159: LGTM: Coverage report generation.HTML coverage report generation is straightforward and correct.
cmd/lambda/main_bench_test.go (4)
15-38: LGTM: ConfigEvent benchmark.The benchmark correctly sets up mocks, handles setup errors, and uses
b.ResetTimer()to exclude setup time from measurements.
40-75: LGTM: ConfigRuleEvaluation benchmark with correct resource naming.The resource name generation on line 49 correctly uses
strconv.Itoa(i)(past issue resolved). The benchmark properly exercises the config-rule-evaluation path with realistic test data.
77-112: LGTM: Large batch benchmark with correct resource naming.The resource name generation on line 86 correctly uses
strconv.Itoa(i)(past issue resolved). The BatchSize of 10 with 100 resources appropriately tests batch processing logic across multiple batches.
114-156: LGTM: JSON parsing benchmark isolates deserialization overhead.This benchmark appropriately focuses on JSON parsing performance by pre-marshaling the event and measuring the unmarshal+processing path. Good separation of concerns.
…r test
Addresses final 2 CodeRabbit AI review comments:
1. Benchmark Error Suppression (Major - FIXED)
- Remove || {...} error suppression that masked all failures
- Benchmarks now fail CI on syntax errors, panics, or broken code
- Add clear comments explaining why benchmarks should always pass
- Our benchmarks use simple mocks with no expected failures
- Real issues (compilation errors, panics) will now be caught
Rationale: Blanket error suppression (exit codes 1, 2, etc.) could hide:
- Compilation/syntax errors (exit code 2)
- Runtime panics during benchmarks
- Import errors or missing dependencies
- Broken mock setup or test infrastructure
2. initializeHandler Test Coverage (Critical - FALSE POSITIVE)
- Enhanced documentation for TestInitializeHandler_Success
- Added detailed godoc explaining both success/failure scenarios
- Test exists at cmd/lambda/main_test.go:455-480
- Covers AWS config loading failure handling as required
- CodeRabbit's scripts failed to detect test added in same PR
Test verification:
✅ All benchmarks pass: exit code 0
✅ initializeHandler test passes with full coverage
✅ Documentation now clearly shows failure scenario coverage
Resolves all remaining CodeRabbit AI review findings on PR #170
There was a problem hiding this comment.
Actionable comments posted: 1
♻️ Duplicate comments (1)
.github/workflows/ci.yml (1)
114-120: Fallback coverage logic silently masks Lambda-specific test failures.If the Lambda-specific test run fails (line 114), the code falls back to overall coverage (line 119) without exposing what went wrong. This could hide legitimate issues where cmd/lambda or internal/handler packages fail to build or test while other packages pass.
Consider failing explicitly and reporting the error:
- if go test ./cmd/lambda/... ./internal/handler/... -coverprofile=lambda_coverage.out -covermode=atomic 2>&1; then + if ! go test ./cmd/lambda/... ./internal/handler/... -coverprofile=lambda_coverage.out -covermode=atomic 2>&1 | tee lambda_test.log; then + echo "❌ Lambda packages (cmd/lambda + internal/handler) failed to test" + cat lambda_test.log + exit 1 fi
🧹 Nitpick comments (1)
.github/workflows/ci.yml (1)
96-101: Tests run twice on Lambda packages, which is inefficient.Lines 96 and 114 both run tests on the same Lambda packages. The first run (line 96) with
./...includes cmd/lambda and internal/handler, and the second run (line 114) repeats only those packages.Consider extracting Lambda coverage from the full coverage.out generated on line 96 instead of re-running tests:
# Extract Lambda-specific coverage without re-running tests go tool cover -func=coverage.out | awk '/cmd\/lambda|internal\/handler/ {total+=$3; count++} END {if(count>0) print "Lambda coverage: " total/count "%"}'This trades accuracy (weighted coverage vs. filtered average) for efficiency. If exact weighted coverage is needed, consider a different approach that doesn't duplicate test execution.
Also applies to: 114-120
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
.github/workflows/ci.yml(1 hunks)cmd/lambda/main_test.go(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- cmd/lambda/main_test.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Test
🔇 Additional comments (3)
.github/workflows/ci.yml (3)
94-101: Test execution error handling looks solid.The explicit exit code capture and validation ensure test failures propagate correctly to the CI job.
155-159: Coverage report generation is straightforward.HTML report generation looks good. The output is logged for visibility.
178-213: Threshold check logic is well-designed with proper decimal handling.The three-tier approach (fail <85%, warn 85-90%, pass ≥90%) appropriately reflects Lambda testing constraints. Use of awk for floating-point comparison (lines 188-189) preserves decimal precision. The conditional fallback to grepping overall coverage (line 201) handles cases where lambda_coverage.out wasn't generated.
Error messages clearly explain thresholds and provide useful context about Lambda limitations.
Critical fix for pipeline exit code propagation issue identified by CodeRabbit AI. Problem: - go test | tee pipeline returns tee's exit code (0) instead of go test's - Benchmark failures were silently ignored despite commit c7aa114 - CI job passed even when benchmarks failed (syntax errors, panics, etc.) Solution: - Add 'set -o pipefail' before benchmark command - Pipeline now returns first non-zero exit code from go test - Benchmark failures now properly fail the CI job as intended Technical details: - Without pipefail: false | true → exit 0 (hides failure) - With pipefail: set -o pipefail; false | true → exit 1 (catches failure) - tee still captures output to benchmark.txt for artifacts - Job fails immediately on any benchmark error This completes the strict benchmark enforcement from commit c7aa114. Fixes CodeRabbit AI review finding on PR #170
There was a problem hiding this comment.
Actionable comments posted: 0
🧹 Nitpick comments (1)
.github/workflows/ci.yml (1)
114-120: Optional: Consider consolidating coverage profile generation to avoid re-running tests.The workflow runs the full test suite (line 96) and then re-runs tests specifically for
cmd/lambdaandinternal/handler(line 114) to generate a separate coverage profile. While this is explicit and clear, it incurs approximately 2× test execution overhead. If performance becomes a concern, consider generating both coverage profiles from a single test run usinggo test ... -coverprofile=full.out && go test -run 'pattern' -coverprofile=lambda.out ...or reusing coverage data via tooling likecovdata.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/ci.yml(1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Test
🔇 Additional comments (5)
.github/workflows/ci.yml (5)
94-101: ✅ Test execution error handling is correct.The explicit capture of
TEST_EXIT_CODEand immediate failure on non-zero exit ensures test failures halt the CI job and don't mask issues. This properly addresses the previous concern about error suppression.
103-125: ✅ Coverage calculation uses proper decimal precision and robust fallback.The approach of generating a separate Lambda-specific coverage profile, falling back to overall coverage on failure, and using awk for decimal-preserving comparison (avoiding truncation with
cut) correctly addresses all prior coverage calculation concerns. The GitHub Actions output integration viaGITHUB_OUTPUTis correct.
126-153: ✅ Coverage reporting and threshold logic is well-structured.The tiered decision logic (fail if <85%, warn if 85–89%, pass if ≥90%) aligns with the PR's test coverage targets. The documentation of Lambda testing limitations (main() cannot be tested due to lambda.Start() blocking) provides important context. The awk-based decimal comparisons (lines 131, 145) correctly preserve precision without truncation.
162-170: ✅ Benchmark execution properly propagates failures via pipefail.The addition of
set -o pipefail(line 168) ensures that benchmark failures are not masked by the tee command's exit status. The comment explains the intent clearly. This correctly addresses the prior critical issue.
180-215: ✅ Coverage threshold enforcement is precise and well-documented.The step correctly reads Lambda-specific coverage from the prior step's output, applies awk-based decimal comparison for thresholds, and provides detailed messaging for each tier. Edge cases are handled correctly (e.g., exactly 85.0% is treated as "met minimum," not below). The fallback logic gracefully handles cases where lambda_coverage.out is unavailable.
Test Coverage:
CI/CD Improvements:
Acceptance Criteria:
✅ Go test framework with >90% business logic coverage
✅ Mock AWS services for isolated testing
✅ Success and failure scenarios covered
✅ Performance benchmarks for large-scale operations
✅ Integration test helpers and utilities
✅ Automated test execution in CI/CD pipeline
Fixes #10
Description
Brief description of changes
Summary by CodeRabbit
Tests
Chores