This document describes the testing infrastructure, practices, and guidelines for the Hera Chrome extension.
- Overview
- Test Infrastructure
- Running Tests
- Test Coverage
- Writing Tests
- CI/CD Integration
- Testing Best Practices
Hera uses a comprehensive testing strategy that includes:
- Unit Tests: Test individual modules and functions in isolation
- Integration Tests: Test how multiple components work together
- Coverage Reporting: Track test coverage and identify untested code
- CI/CD Automation: Automated testing on every commit and PR
- Total Tests: 84 passing
- Unit Tests: 70 tests covering JWT and OIDC validators
- Integration Tests: 14 tests covering evidence collection
- Test Coverage: ~9% overall; the tested modules (JWT and OIDC validators) have high individual coverage
We use Vitest as our test framework because:
- Native ES Module (ESM) support
- Fast execution with parallel testing
- Built-in coverage reporting
- Excellent TypeScript/JavaScript support
- Modern testing API compatible with Jest
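To ground this, a minimal `vitest.config.js` along these lines would support the setup described in this document. It is an illustrative sketch, not the project's actual config — the environment, setup path, and include glob are assumptions:

```js
// vitest.config.js (illustrative sketch — the real config may differ)
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    environment: 'node',              // assumed; jsdom is another option
    setupFiles: ['./tests/setup.js'], // global mocks and polyfills
    include: ['tests/**/*.test.js'],
    coverage: {
      provider: 'v8',
      reporter: ['text', 'html'],
      reportsDirectory: './coverage',
    },
  },
});
```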
```
tests/
├── setup.js                       # Global test setup
├── mocks/
│   └── chrome.js                  # Chrome Extension API mocks
├── utils/
│   └── test-helpers.js            # Testing utility functions
├── unit/
│   ├── jwt-validator.test.js
│   └── oidc-validator.test.js
└── integration/
    └── evidence-collection.test.js
```
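For orientation, the `createMockJWT` helper in `tests/utils/test-helpers.js` could be implemented roughly as follows. This is a sketch, not the project's actual code; the real helper may handle more options.

```js
// Hypothetical sketch of the createMockJWT helper in tests/utils/test-helpers.js.
// The real implementation may differ; the signature segment is a placeholder.
function base64url(obj) {
  return Buffer.from(JSON.stringify(obj))
    .toString('base64')
    .replace(/\+/g, '-')   // base64 -> base64url
    .replace(/\//g, '_')
    .replace(/=+$/, '');   // strip padding
}

function createMockJWT(header = { alg: 'RS256' }, payload = {}) {
  // 'mock-signature' is not a valid signature; these tokens exercise
  // parsing and claim checks only, never signature verification.
  return `${base64url(header)}.${base64url(payload)}.mock-signature`;
}

const jwt = createMockJWT({ alg: 'RS256' }, { sub: 'user-123' });
console.log(jwt.split('.').length); // → 3
```

Because the signature is fake, tokens built this way are only suitable for testing parsing and claim validation, never cryptographic verification.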
Key files:
- `vitest.config.js` - Main Vitest configuration
- `tests/setup.js` - Global test setup (mocks, polyfills)
- `.github/workflows/test.yml` - CI/CD test automation
- `.github/workflows/security.yml` - Security scanning automation
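The `chrome.js` mock referenced in the tree above might be structured roughly like this. This is an assumed sketch — the real mock covers far more of the `chrome.*` API surface, and in the actual module these declarations would be exported.

```js
// Hypothetical sketch of tests/mocks/chrome.js — the real mock is more
// complete. Shown as plain declarations; the real module exports them.
let storageData = {};

const chromeMock = {
  storage: {
    local: {
      // Promise-based, matching Chrome's Manifest V3 API
      get: async (keys) => {
        const result = {};
        for (const key of [].concat(keys)) {
          if (key in storageData) result[key] = storageData[key];
        }
        return result;
      },
      set: async (items) => {
        Object.assign(storageData, items);
      },
    },
  },
};

function setMockStorageData(data) {
  storageData = { ...data };
}

function resetChromeMocks() {
  storageData = {};
}

// tests/setup.js would typically install the mock globally:
globalThis.chrome = chromeMock;
```

Installing the mock on `globalThis` is what lets extension code call `chrome.storage.local.get(...)` unchanged inside tests.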
```bash
npm test                  # Run all tests once
npm run test:watch        # Run tests in watch mode
npm run test:ui           # Open Vitest UI in browser
npm run test:unit         # Run only unit tests
npm run test:integration  # Run only integration tests
npm run test:coverage     # Generate coverage report
npm run test:all          # Run lint, validate, and coverage
```

Coverage reports are generated in:
- `coverage/` - Detailed coverage data
- `coverage/index.html` - Visual HTML coverage report
Tests run automatically on:
- Every push to `main`, `develop`, or `claude/**` branches
- Every pull request to `main` or `develop`
- Multiple Node.js versions (18.x, 20.x)
| Module | Coverage | Tests |
|---|---|---|
| jwt-validator.js | ~95% | 48 tests |
| oidc-validator.js | ~95% | 46 tests |
| Other modules | 0% | Pending |
Our coverage targets (to be achieved as more tests are added):
- Lines: 70%
- Functions: 70%
- Branches: 65%
- Statements: 70%
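These targets can be enforced from the Vitest config so that coverage regressions fail CI. A sketch, assuming the v8 provider and the `coverage.thresholds` shape of Vitest 1.x+ (this is not the project's actual config):

```js
// vitest.config.js (coverage excerpt — illustrative, not the actual config)
export default {
  test: {
    coverage: {
      provider: 'v8',
      thresholds: {
        lines: 70,
        functions: 70,
        branches: 65,
        statements: 70,
      },
    },
  },
};
```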
1. Run tests with coverage:

   ```bash
   npm run test:coverage
   ```

2. Open the HTML report:

   ```bash
   open coverage/index.html
   ```
```js
// tests/unit/my-module.test.js
import { describe, it, expect, beforeEach } from 'vitest';
import { MyModule } from '../../modules/my-module.js';

describe('MyModule', () => {
  let module;

  beforeEach(() => {
    module = new MyModule();
  });

  it('should perform expected behavior', () => {
    const result = module.doSomething('input');
    expect(result).toBe('expected');
  });

  it('should handle edge cases', () => {
    expect(() => module.doSomething(null)).toThrow();
  });
});
```

```js
// tests/integration/workflow.test.js
import { describe, it, expect, beforeEach } from 'vitest';
import { setMockStorageData, resetChromeMocks } from '../mocks/chrome.js';

describe('OAuth2 Flow Integration', () => {
  beforeEach(() => {
    resetChromeMocks();
  });

  it('should track complete OAuth2 flow', async () => {
    // Test multi-component interaction
    const authRequest = { /* ... */ };
    const tokenRequest = { /* ... */ };

    // Verify flow correlation
    expect(authRequest).toBeDefined();
    expect(tokenRequest).toBeDefined();
  });
});
```

Test helpers:

```js
import { createMockJWT, createMockOIDCTokenResponse } from '../utils/test-helpers.js';

// Create a mock JWT with custom claims
const jwt = createMockJWT(
  { alg: 'RS256' },                                // header
  { sub: 'user-123', exp: Math.floor(Date.now() / 1000) + 3600 } // payload (exp is in seconds)
);

// Create a mock OIDC token response
const tokenResponse = createMockOIDCTokenResponse({
  access_token: 'mock-access-token',
  idTokenPayload: { sub: 'user-123' }
});
```

Chrome API mocks:

```js
import { chromeMock, setMockStorageData } from '../mocks/chrome.js';

// Mock storage data
setMockStorageData({
  heraEvidence: { /* test data */ }
});

// Use chrome API in tests
const result = await chrome.storage.local.get(['heraEvidence']);
expect(result.heraEvidence).toBeDefined();
```

Runs on every push and PR:
1. Install dependencies - `npm ci`
2. Run linter - `npm run lint`
3. Validate extension - `npm run validate`
4. Run unit tests - `npm run test:unit`
5. Run integration tests - `npm run test:integration`
6. Generate coverage - `npm run test:coverage`
7. Upload coverage to Codecov (optional)
8. Archive test results
Runs daily and on main branch changes:
- `npm audit` - Check for security vulnerabilities
- Dependency check - Find outdated packages
- CodeQL analysis - Static code analysis
Test runs generate artifacts:
- `test-results-<node-version>` - Test execution results
- `coverage/` - Coverage reports
- `hera-extension` - Validated extension files
Use descriptive test names that explain what is being tested:
```js
// ❌ Bad
it('works', () => { /* ... */ });

// ✅ Good
it('should detect missing nonce in implicit flow', () => { /* ... */ });
```

Structure tests clearly:
```js
it('should validate JWT expiration', () => {
  // Arrange - Set up test data
  const expiredToken = createExpiredJWT();

  // Act - Perform the action
  const result = validator.validateJWT(expiredToken);

  // Assert - Verify the result
  expect(result.issues).toContainEqual(
    expect.objectContaining({ type: 'TOKEN_EXPIRED' })
  );
});
```

Each test should be independent:
```js
beforeEach(() => {
  resetChromeMocks();          // Reset mocks before each test
  validator = new Validator(); // Create fresh instance
});

afterEach(() => {
  // Clean up if needed
});
```

Always test boundary conditions:
```js
describe('JWT parsing', () => {
  it('should parse valid JWT', () => { /* ... */ });
  it('should reject token with <3 parts', () => { /* ... */ });
  it('should reject invalid base64', () => { /* ... */ });
  it('should handle URL-safe encoding', () => { /* ... */ });
  it('should reject null input', () => { /* ... */ });
});
```

Avoid duplication with helper functions:
```js
// Good - reusable helper
function createTestJWT(claims = {}) {
  return createMockJWT(
    { alg: 'RS256' },
    { sub: 'test', ...claims }
  );
}

it('test 1', () => {
  const jwt = createTestJWT({ exp: futureTime });
  // ...
});

it('test 2', () => {
  const jwt = createTestJWT({ iss: 'issuer' });
  // ...
});
```

For a security extension, always test vulnerability detection:
```js
it('should detect alg:none vulnerability', () => {
  const unsafeToken = createMockJWT({ alg: 'none' }, { sub: 'test' });
  const result = validator.validateJWT(unsafeToken);

  expect(result.issues).toContainEqual(
    expect.objectContaining({
      type: 'ALG_NONE_VULNERABILITY',
      severity: 'CRITICAL',
      cvss: 10.0
    })
  );
});
```

Test with realistic data sizes:
```js
it('should handle large token responses', () => {
  const largeResponse = createResponseWithSize(500000); // 500KB
  expect(() => validator.analyze(largeResponse)).not.toThrow();
});

it('should limit cache size', () => {
  // Add 100 items to a cache with max size 25
  for (let i = 0; i < 100; i++) {
    cache.add(createCacheEntry(i));
  }
  expect(cache.size).toBeLessThanOrEqual(25);
});
```

To expand test coverage, prioritize these modules:
- oauth2-analyzer.js - Core OAuth2 flow analysis
- pkce-validator.js - PKCE implementation checking
- refresh-token-tracker.js - Token rotation detection
- session-security-analyzer.js - Session fixation & hijacking
- cookie-utils.js - Cookie security validation
- request-body-capturer.js - POST body capture & redaction
- flow-analyzer.js - Authentication flow correlation
Integration scenarios still needed:
- Complete OAuth2 authorization code flow
- OIDC implicit/hybrid flows
- PKCE end-to-end validation
- Token refresh rotation
- Evidence persistence across restarts
To add tests for a new module:
1. Create test file: `tests/unit/module-name.test.js`
2. Import module and test utilities
3. Write descriptive test cases
4. Run tests: `npm test`
5. Check coverage: `npm run test:coverage`
6. Commit with clear message
To improve coverage of an existing module:
1. Run coverage report: `npm run test:coverage`
2. Open HTML report: `open coverage/index.html`
3. Identify untested code (red/yellow highlights)
4. Add tests for uncovered lines
5. Verify coverage improvement
- Keep tests fast (<100ms per test when possible)
- Use `beforeEach` for setup, not before each assertion
- Mock expensive operations (network, file I/O)
- Run tests in parallel (Vitest default)
If tests fail in CI but pass locally:
- Check Node.js version match
- Verify all dependencies in package.json
- Look for timing issues (use `vi.useFakeTimers()`)
- Check for environment-specific code
If mocks misbehave or tests are flaky:
- Ensure `resetChromeMocks()` runs in `beforeEach`
- Verify mock implementations match real APIs
- Check for null/undefined mock values
If coverage is not collected:
- Ensure source files are in the include paths
- Check that tests actually import the modules
- Verify `vitest.config.js` coverage settings
For questions or issues with tests:
- Check this documentation
- Review existing test files for examples
- Check test output and error messages
- Open an issue with test failure details