diff --git a/_config.yml b/_config.yml
new file mode 100644
index 0000000..0162f6e
--- /dev/null
+++ b/_config.yml
@@ -0,0 +1,15 @@
+title: LinkedIn Automation Testing
+email: your-email@example.com
+description: >- # this means to ignore newlines until the next key
+ A portfolio site for the LinkedIn Automation Testing project.
+baseurl: "" # the subpath of your site, e.g. /blog
+url: "" # the base hostname & protocol for your site
+
+# Build settings
+markdown: kramdown
+theme: jekyll-theme-cayman
+# Remove the line above and choose any supported GitHub Pages theme, such as
+# jekyll-theme-minimal, jekyll-theme-midnight, etc.
+
+# Defaults
+# Add any custom collections or defaults here
diff --git a/index.html b/index.html
new file mode 100644
index 0000000..2e020cd
--- /dev/null
+++ b/index.html
@@ -0,0 +1,95 @@
+
+
+
This project automates outreach activities on LinkedIn using a combination of Playwright and TestNG. The goal is to simplify the process of connecting with recruiters by automating login, searching for recruiters, composing personalised messages and verifying successful delivery. This page summarises the test plans, cycles and strategies used in the project as documented in my Notion workspace.
+
+
+
Test Plan
+
The test plan acts as the anchor for all testing activities. It defines the objectives, scope, schedule and deliverables for each test type:
+
+
Functional & System Test Plan: Verify the end-to-end flow from reading an Excel file to sending a message. Cycles include smoke tests on every commit, core end-to-end tests before Monday runs and weekly edge-case tests. Entry criteria: environment set up, Excel data prepared, framework ready. Exit criteria: 100% smoke pass, ≥95% end-to-end pass, high-severity issues resolved.
+
Regression Test Plan: Ensure new changes don’t break core flows. Nightly light regression and full regression before release. Exit criteria: ≥95% pass rate with no high-severity defects.
+
User Acceptance Test Plan: Validate business fit with a real recruiter list. A dry run on Friday and a live run on Monday. Exit criteria: 100% of targeted messages sent, no LinkedIn warnings.
+
Performance & Reliability Test Plan: Validate timing, throughput and stability under different load levels (e.g., 10/50/100 recruiters). KPIs include total runtime, per‑message latency and retry rates.
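The exit criteria above all reduce to a simple pass-rate check against a threshold. A minimal sketch of that check (the `ExitCriteria` class and its method names are illustrative, not from the project code):

```java
// Illustrative sketch: evaluating exit criteria as pass-rate thresholds.
// The ExitCriteria name and meets(...) signature are hypothetical.
public class ExitCriteria {
    // True when the observed pass rate meets the required threshold.
    public static boolean meets(int passed, int total, double requiredRate) {
        if (total == 0) return false;              // no runs -> criteria not met
        return (double) passed / total >= requiredRate;
    }

    public static void main(String[] args) {
        // Functional & System plan: 100% smoke pass, >=95% end-to-end pass
        System.out.println(meets(20, 20, 1.00));   // smoke: all 20 passed -> true
        System.out.println(meets(96, 100, 0.95));  // end-to-end: 96/100 -> true
        System.out.println(meets(94, 100, 0.95));  // 94/100 misses the bar -> false
    }
}
```

In practice these numbers would come from the TestNG result listeners rather than being passed in by hand.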
+
+
+
+
Test Cycles
+
Test cycles are structured to map back to project scope and ensure thorough coverage:
+
+
Cycle A – Smoke & Happy Path: Quick validation of core flow – login → search → message.
+
Cycle B – Component Regression – Login: Positive and negative authentication cases (valid login, invalid login, blank fields, expired token).
+
Cycle B – Component Regression – Search: Validate recruiter search and filtering (valid search, filters, invalid names, empty queries, pagination).
+
Cycle B – Component Regression – Messaging: Validate messaging with attachments and duplicates (valid message, empty message, duplicate message, attachments).

Test Strategies

Multiple strategies were designed to cover different outreach scenarios on LinkedIn:
+
+
Strategy 1 – Search‑Based Outreach: Read recruiter name and message from Excel, log in, search for the recruiter, open the profile, send a message and log out.
+
Strategy 2 – Direct URL Navigation: Use a recruiter profile URL from Excel, log in, navigate directly to the profile, send a message and log out.
+
Strategy 3 – API‑Based Profile Fetch: (Future scope) Use LinkedIn or third‑party APIs to fetch profiles and then send messages.
+
Strategy 4 – Hybrid Method: Combine search‑based and direct navigation to handle cases where one method fails.
+
Strategy 5 – Connection Filter: Target contacts within the existing network by filtering LinkedIn connections.
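The fallback behaviour behind Strategy 4 can be sketched independently of Playwright by stubbing the browser steps as functional interfaces. All names here (`Recruiter`, `OutreachStep`, `HybridOutreach`) are hypothetical, not taken from the project:

```java
import java.util.Optional;
import java.util.function.Function;

// Illustrative sketch of the hybrid strategy: try search-based outreach first,
// then fall back to direct URL navigation when the search path fails.
// All names are hypothetical; real steps would drive Playwright.
public class HybridOutreach {
    record Recruiter(String name, String profileUrl) {}

    // A step returns the profile URL it reached, or empty on failure.
    interface OutreachStep extends Function<Recruiter, Optional<String>> {}

    public static Optional<String> reach(Recruiter r, OutreachStep search, OutreachStep direct) {
        return search.apply(r).or(() -> direct.apply(r)); // fall back only on failure
    }

    public static void main(String[] args) {
        Recruiter r = new Recruiter("Jane Doe", "https://www.linkedin.com/in/janedoe");
        OutreachStep failingSearch = x -> Optional.empty();      // search path fails
        OutreachStep direct = x -> Optional.of(x.profileUrl());  // direct navigation succeeds
        System.out.println(reach(r, failingSearch, direct).orElse("unreached"));
        // prints: https://www.linkedin.com/in/janedoe
    }
}
```

The same shape extends naturally to a chain of more than two strategies.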
+
+
Separate test suites correspond to each strategy (Suite A: search & send, Suite B: direct profile messaging, Suite C: hybrid, Suite D: API‑based). This modular structure makes it easier to assign tasks, track coverage and map defects to specific areas.
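This suite-per-strategy layout maps directly onto a TestNG suite file. A minimal sketch, assuming the suites live in hypothetical classes under a `suites` package (all names illustrative):

```xml
<!-- Illustrative testng.xml: one <test> per strategy suite; class names are hypothetical -->
<suite name="LinkedInOutreach" parallel="tests" thread-count="2">
  <test name="SuiteA-SearchAndSend">
    <classes><class name="suites.SearchAndSendTests"/></classes>
  </test>
  <test name="SuiteB-DirectProfileMessaging">
    <classes><class name="suites.DirectProfileTests"/></classes>
  </test>
  <test name="SuiteC-Hybrid">
    <classes><class name="suites.HybridTests"/></classes>
  </test>
  <test name="SuiteD-ApiBased">
    <classes><class name="suites.ApiBasedTests"/></classes>
  </test>
</suite>
```

Running a single suite is then just a matter of passing the matching `<test>` name on the command line, which keeps task assignment and defect mapping aligned with the suite boundaries.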
+
+
+
Framework & Tools
+
The automation framework leverages Playwright for browser automation and TestNG for test orchestration. Key features include:
+
+
Structured tests using @Before, @Test and @After hooks.
+
Parallel execution to reduce runtime.
+
Data‑driven tests with Excel integration.
+
Automatic report generation with pass/fail status and logs.
+
Retry logic and annotations for flaky tests.
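The retry behaviour can be sketched independently of TestNG; in the real framework this logic would sit behind a TestNG retry analyzer, but the core loop is the same. The `RetryRunner` name is hypothetical:

```java
import java.util.concurrent.Callable;

// Illustrative retry helper: re-run a flaky step up to maxAttempts times.
// In the real framework this sits behind TestNG's retry mechanism;
// the RetryRunner name is hypothetical.
public class RetryRunner {
    public static <T> T runWithRetry(Callable<T> step, int maxAttempts) {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return step.call();            // success: return immediately
            } catch (Exception e) {
                last = e;                      // remember the failure and retry
            }
        }
        throw new RuntimeException("all " + maxAttempts + " attempts failed", last);
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Fails twice, then succeeds on the third attempt.
        String result = runWithRetry(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("flaky");
            return "sent";
        }, 3);
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints: sent after 3 attempts
    }
}
```

Capping attempts and surfacing the last failure keeps flaky-test retries from silently masking genuine defects.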
+
+
+
+
Insights & Best Practices
+
+
Not every test case should be automated. Focus on repetitive regression tests, stable requirements and high‑value functionality. Leave exploratory or low‑priority negative paths for manual testing.
+
Balance coverage with maintainability and ROI; aim to automate 60–70% of core functionalities.
+
Use smoke and sanity cycles to quickly validate critical flows before moving to deeper regression and UAT cycles.
+
Group test files by feature (e.g., login.spec.ts, search.spec.ts) and use tags (smoke, regression, exploratory) to filter tests.

Automating recruiter outreach on LinkedIn with Playwright & TestNG. Explore test plans, cycles, strategies and learnings from this project.
+
+
+
Project Overview
+
This project aims to streamline recruiter outreach on LinkedIn. It automates login, search, message composition and verification using Playwright and TestNG. By combining data-driven execution with robust reporting, the framework accelerates testing and increases coverage.
+
+
+
Test Plans
+
+
+
Functional & System
+
End-to-end flows from Excel import to message delivery. Entry criteria include environment set-up and data readiness. Exit criteria require 100% smoke pass and ≥95% end-to-end pass.
+
+
+
Regression
+
Validate that new changes don’t break existing functionality. Incorporates nightly light regression and pre-release full regression with a ≥95% pass rate.
+
+
+
User Acceptance
+
Business-focused runs with real recruiter lists. Dry run on Friday and live pilot on Monday. Success is measured by all targeted messages being sent and the absence of LinkedIn warnings.
+
+
+
Performance & Reliability
+
Benchmark throughput and stability at various batch sizes (e.g., 10/50/100 recruiters). Exit when KPIs for runtime, per-message latency and retry rates are achieved.
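These KPIs are straightforward to derive from per-message timings. A minimal sketch, assuming latencies are collected in milliseconds per sent message (the `PerfKpis` name and inputs are illustrative):

```java
import java.util.List;

// Illustrative KPI computation from per-message latencies (milliseconds).
// The PerfKpis name and its inputs are hypothetical, not from the project.
public class PerfKpis {
    // Total runtime: sum of per-message latencies for a sequential run.
    public static long totalRuntimeMs(List<Long> latencies) {
        return latencies.stream().mapToLong(Long::longValue).sum();
    }

    // Average per-message latency across the batch.
    public static double avgLatencyMs(List<Long> latencies) {
        return latencies.stream().mapToLong(Long::longValue).average().orElse(0);
    }

    // Retry rate: retries per message attempted.
    public static double retryRate(int retries, int messages) {
        return messages == 0 ? 0 : (double) retries / messages;
    }

    public static void main(String[] args) {
        List<Long> latencies = List.of(1200L, 900L, 1500L); // batch of 3 messages
        System.out.println(totalRuntimeMs(latencies));      // 3600
        System.out.println(avgLatencyMs(latencies));        // 1200.0
        System.out.println(retryRate(1, 3));                // ≈0.33
    }
}
```

Comparing these figures across the 10/50/100-recruiter batches makes it easy to spot non-linear slowdowns or rising retry rates before a live Monday run.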