
Commit 9e01238

embed full SKILL.md content in agent skills page for copy/paste
1 parent a73d5f3 commit 9e01238

2 files changed

Lines changed: 140 additions & 43 deletions

File tree

ai-analyst/agent-skills/index.mdx

Lines changed: 6 additions & 10 deletions
@@ -1,7 +1,7 @@
 ---
 title: "Agent Skills"
 sidebarTitle: "Agent Skills"
-description: "Installable skills for coding agents that work with SourceMedium data"
+description: "Copy/paste skills for coding agents that work with SourceMedium data"
 icon: "wand-magic-sparkles"
 ---
 
@@ -21,13 +21,9 @@ Agent Skills package repeatable workflows that help coding agents assist with So
 </Card>
 </CardGroup>
 
-## Install via skills.sh
+## How to use
 
-```bash
-# Install this skill
-npx skills add sourcemedium/skills --skill sm-bigquery-analyst
-```
-
-<Info>
-Canonical skill files live in [github.com/sourcemedium/skills](https://github.com/sourcemedium/skills). This docs section is human-facing guidance.
-</Info>
+1. Click on a skill above
+2. Copy the `SKILL.md` content from the page
+3. Paste it into your coding agent's skills folder (e.g., `.claude/skills/<skill-name>/SKILL.md` for Claude Code)
+4. Ask your coding agent questions about your SourceMedium data
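The copy/paste steps in the diff above can be sketched as a couple of shell commands. This is a minimal sketch assuming Claude Code's `.claude/skills/` convention; the abbreviated heredoc stands in for the real `SKILL.md` content you copy from the docs page, and other agents use different skill directories:

```shell
# Create the skill folder (path follows Claude Code's convention; adjust for your agent)
mkdir -p .claude/skills/sm-bigquery-analyst

# Save the copied SKILL.md content into it
# (this heredoc is a placeholder for the full content copied from the page)
cat > .claude/skills/sm-bigquery-analyst/SKILL.md <<'EOF'
---
name: sm-bigquery-analyst
description: Guide end users through SourceMedium BigQuery setup and analysis.
---
EOF
```

After this, the agent discovers the skill from its skills directory on the next session.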
Lines changed: 134 additions & 33 deletions
@@ -1,51 +1,155 @@
 ---
 title: "SM BigQuery Analyst"
 sidebarTitle: "SM BigQuery Analyst"
-description: "User-facing skill for SourceMedium BigQuery setup, access checks, and SQL-receipt analysis"
+description: "Copy/paste skill for coding agents to work with SourceMedium BigQuery data"
 icon: "database"
 ---
 
-## What this skill does
+Copy the skill below into your coding agent's skills folder to help it query SourceMedium BigQuery data correctly.
 
-This skill helps end users:
+## How to use
 
-1. Set up `gcloud` + `bq`.
-2. Verify project and dataset/table access.
-3. Answer analysis questions with auditable SQL receipts.
-4. Route to canonical SourceMedium docs when definitions/setup guidance is needed.
+1. Create a skill folder in your coding agent's skills directory (e.g., `.claude/skills/sm-bigquery-analyst/` for Claude Code)
+2. Save the content below as `SKILL.md` in that folder
+3. Ask your coding agent questions about your SourceMedium data
 
-## Install
+## SKILL.md
 
-```bash
-npx skills add sourcemedium/skills --skill sm-bigquery-analyst
-```
-
-See [github.com/sourcemedium/skills](https://github.com/sourcemedium/skills) for the canonical skill source.
-
-## No access yet?
+Copy everything below the horizontal rule into `SKILL.md`:
 
-If the user cannot run queries because of permissions, use:
+---
 
-- `/ai-analyst/agent-skills/bigquery-access-request-template`
+```markdown
+---
+name: sm-bigquery-analyst
+description: Guide end users through SourceMedium BigQuery setup, access verification, schema-aware SQL analysis, and auditable SQL receipts. Use when users need help from first-run gcloud/bq configuration through answering analytical questions against SourceMedium datasets.
+compatibility: Requires gcloud CLI, bq CLI, and network access to BigQuery.
+---
 
-That page includes minimum IAM roles, a copy/paste admin request, and official Google links.
+# SourceMedium BigQuery Analyst
+
+Use this skill to help end users work with SourceMedium BigQuery data from setup to analysis.
+
+## Workflow
+
+1. Validate local tooling (`gcloud`, `bq`) and authentication state.
+2. Confirm active project and dataset/table visibility before writing analysis SQL.
+3. Use docs-first guidance for setup, definitions, and table discovery.
+4. Answer analytical questions with reproducible SQL receipts.
+5. Call out assumptions and caveats explicitly.
+
+## Required Output Format
+
+For analytical questions, always return:
+
+1. `Answer`: concise plain-English conclusion.
+2. `SQL (copy/paste)`: BigQuery Standard SQL used for the result.
+3. `Notes`: timeframe, metric definitions, grain, scope, timezone, attribution lens.
+4. `Verify`: `bq query --use_legacy_sql=false '<SQL>'` command.
+
+If access/setup fails, do not fabricate results. Return:
+
+1. Exact failing step.
+2. Exact project/dataset that failed.
+3. Direct user to request access from their internal admin.
+
+## Query Guardrails
+
+1. Fully qualify tables as `` `project.dataset.table` ``.
+2. For order analyses, default to `WHERE is_order_sm_valid = TRUE`.
+3. Use `sm_store_id` (not `smcid` — that name does not exist in customer tables).
+4. Use `SAFE_DIVIDE` for ratio math.
+5. Handle DATE/TIMESTAMP typing explicitly (`DATE(ts_col)` when comparing to dates).
+6. Use `order_net_revenue` for revenue metrics (not `order_gross_revenue` unless explicitly asked).
+7. Use `*_local_datetime` columns for date-based reporting (not UTC `*_at` columns).
+8. Avoid `LIKE`/`REGEXP` on low-cardinality fields (`sm_channel`, `utm_source`, `utm_medium`, `source_system`); discover values first with a `SELECT DISTINCT` query, then use exact match.
+9. `LIKE` is fine for free-text fields (`utm_campaign`, `product_title`, `page_path`).
+10. Keep exploration bounded (`LIMIT`, date filters, partition filters). Max 100 rows returned.
+11. **LTV tables (`rpt_cohort_ltv_*`)**: always filter `sm_order_line_type` to exactly ONE value (`'all_orders'`, `'subscription_orders_only'`, or `'one_time_orders_only'`). Without this, metrics inflate 3x.
+12. For multi-touch attribution questions, use `sm_experimental` dataset. For standard analysis, use `sm_transformed_v2`.
+
+## Datasets
+
+| Dataset | What's in it |
+|---------|-------------|
+| `sm_transformed_v2` | Core tables — orders, customers, order lines, dimensions, reports |
+| `sm_metadata` | Data dictionary, metric catalog, data quality results |
+| `sm_experimental` | MTA / attribution models |
+
+## Core Tables
+
+| Table | Grain | Use case |
+|-------|-------|----------|
+| `obt_orders` | 1 row per order | Revenue, profitability, channel analysis |
+| `obt_order_lines` | 1 row per line item | Product performance, margins, COGS |
+| `obt_customers` | 1 row per customer | Acquisition, retention, subscription status |
+| `rpt_ad_performance_daily` | 1 row per channel/date | Ad spend, impressions, clicks, conversions |
+| `rpt_cohort_ltv_*` | 1 row per cohort x month | LTV analysis (see LTV query rules above) |
+
+## Key Column Conventions
+
+| Column | Notes |
+|--------|-------|
+| `sm_store_id` | Store identifier. One value per project. |
+| `sm_channel` | Sales channel: `online_dtc`, `amazon`, `tiktok_shop`, etc. |
+| `is_order_sm_valid` | Always filter to `TRUE` for order analyses |
+| `order_processed_at_local_datetime` | Use for date-based reporting (localized to store timezone) |
+| `order_net_revenue` | Net revenue (gross - discounts - refunds). Most common revenue metric. |
+
+## Example Queries
+
+### Daily revenue by channel
+
+```sql
+SELECT
+  DATE(order_processed_at_local_datetime) AS order_date,
+  sm_channel,
+  COUNT(sm_order_key) AS order_count,
+  SUM(order_net_revenue) AS revenue
+FROM `your_project.sm_transformed_v2.obt_orders`
+WHERE is_order_sm_valid = TRUE
+  AND DATE(order_processed_at_local_datetime) >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
+GROUP BY 1, 2
+ORDER BY 1 DESC
+```
 
-## Required response format
+### New customer acquisition
+
+```sql
+SELECT
+  DATE(order_processed_at_local_datetime) AS order_date,
+  sm_utm_source_medium,
+  COUNT(DISTINCT sm_customer_key) AS new_customers,
+  SUM(order_net_revenue) AS revenue
+FROM `your_project.sm_transformed_v2.obt_orders`
+WHERE is_order_sm_valid = TRUE
+  AND is_first_purchase_order = TRUE
+GROUP BY 1, 2
+ORDER BY 1 DESC
+```
 
-For analytical questions, this skill returns:
+### Product performance
+
+```sql
+SELECT
+  product_title,
+  sku,
+  SUM(order_line_quantity) AS units_sold,
+  SUM(order_line_net_revenue) AS revenue,
+  SAFE_DIVIDE(SUM(order_line_gross_profit), SUM(order_line_net_revenue)) AS profit_margin
+FROM `your_project.sm_transformed_v2.obt_order_lines`
+WHERE is_order_sm_valid = TRUE
+GROUP BY 1, 2
+ORDER BY revenue DESC
+LIMIT 20
+```
+```
 
-1. Answer
-2. SQL (copy/paste)
-3. Notes/assumptions
-4. Verify command
+---
 
-## Guardrails used
+## No access yet?
 
-1. Fully-qualified table references
-2. `is_order_sm_valid = TRUE` default for order analyses
-3. `sm_store_id` naming convention
-4. `SAFE_DIVIDE` for ratios
-5. Discovery-first handling for categorical values
+If you cannot run queries due to permissions, see [BigQuery Access Request Template](/ai-analyst/agent-skills/bigquery-access-request-template) for a copy/paste request you can send to your internal admin.
 
 ## Related docs
 
@@ -68,7 +172,4 @@ For analytical questions, this skill returns:
 <Card title="Access Management" icon="lock" href="/onboarding/getting-started/how-to-manage-user-access">
 User access roles and permissions guidance.
 </Card>
-<Card title="Access Request Template" icon="key" href="/ai-analyst/agent-skills/bigquery-access-request-template">
-Copy/paste request users can send to internal admins.
-</Card>
 </CardGroup>
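Query Guardrail 4 in the new SKILL.md content relies on BigQuery's `SAFE_DIVIDE`, which returns NULL instead of raising an error when the denominator is zero. A minimal shell sketch of that semantics, with awk standing in for the warehouse (the function here is an illustration, not BigQuery itself):

```shell
# Mimic BigQuery SAFE_DIVIDE: print "NULL" on a zero denominator
# instead of letting the division error out.
safe_divide() {
  awk -v n="$1" -v d="$2" 'BEGIN { if (d == 0) print "NULL"; else print n / d }'
}

safe_divide 50 200   # prints 0.25
safe_divide 50 0     # prints NULL, not a divide-by-zero error
```

This is why the product-performance example computes `profit_margin` with `SAFE_DIVIDE` rather than `/`: SKUs with zero net revenue produce a NULL margin instead of failing the whole query.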
