Closed
Changes from all commits
68 commits
c1bcbfd
Add configuration for auto review in coderabbit
lostiv May 8, 2026
919b0f9
feat: add Docker Build & Push GitHub Actions workflow
lostiv May 8, 2026
a73b18b
Merge pull request #1 from lostiv/feat/dockerbuild
lostiv May 8, 2026
2be06ec
fix: upgrade frontend Dockerfile from Node 18 to 22, fixing the Actions build failure
lostiv May 8, 2026
c5e0be0
Merge pull request #2 from lostiv/feat/dockerbuild
lostiv May 8, 2026
96ce87d
Refactor CodeRabbit configuration in .coderabbit.yaml
lostiv May 8, 2026
81054dd
chore: make build-desktop manual-trigger only
lostiv May 8, 2026
bb3f5c3
Merge pull request #3 from lostiv/feat/dockerbuild
lostiv May 8, 2026
43b8fca
fix: route WebDAV through the backend proxy first to resolve browser CORS issues
lostiv May 8, 2026
e6e2c3a
Merge pull request #4 from lostiv/fix/webdv
lostiv May 8, 2026
b0ed119
lostiv May 11, 2026
59ea945
Merge pull request #5 from lostiv/fix/webdv
lostiv May 11, 2026
839af40
lostiv May 11, 2026
a036fad
Merge pull request #6 from lostiv/fix/sync
lostiv May 11, 2026
fb9ec53
lostiv May 11, 2026
faadb32
Merge pull request #7 from lostiv/fix/sync
lostiv May 11, 2026
82a0927
lostiv May 11, 2026
d240aba
feat: move AI analysis to the backend; frontend only initiates and polls
lostiv May 11, 2026
f4eb334
fix: resolve the 4 issues flagged by CodeRabbit
lostiv May 11, 2026
de6617a
fix: address CodeRabbit's second round of review comments
lostiv May 11, 2026
b917299
Merge pull request #8 from lostiv/fix/category
lostiv May 11, 2026
1fc6722
lostiv May 11, 2026
d75a28c
lostiv May 11, 2026
523c54b
lostiv May 11, 2026
15ae305
lostiv May 11, 2026
9bcf2de
lostiv May 11, 2026
8a7b744
lostiv May 11, 2026
8fb14b2
lostiv May 11, 2026
b96c632
Merge pull request #9 from lostiv/fix/docker
lostiv May 11, 2026
adf5bb2
lostiv May 11, 2026
b6f54b4
lostiv May 11, 2026
3e911ae
Merge pull request #10 from lostiv/fix/ai
lostiv May 11, 2026
daa72d5
lostiv May 11, 2026
48b7338
Merge pull request #11 from lostiv/fix/ai
lostiv May 11, 2026
b16a06c
refactor: make the backend the single source of truth; remove bidirectional auto-sync
lostiv May 11, 2026
383ae46
fix: CodeRabbit review — add rollback and compensation logic for failed operations
lostiv May 11, 2026
ff6df11
bump version to 0.6.1
lostiv May 11, 2026
193b718
Merge pull request #12 from lostiv/fix/ai
lostiv May 11, 2026
6320ec2
lostiv May 11, 2026
bc24b4e
Merge pull request #13 from lostiv/fix/ai
lostiv May 11, 2026
a2f11c5
lostiv May 11, 2026
0ec19db
Merge pull request #14 from lostiv/fix/backenai
lostiv May 11, 2026
f63581d
lostiv May 11, 2026
c321100
Merge pull request #15 from lostiv/fix/sync
lostiv May 11, 2026
1044444
lostiv May 11, 2026
871dabf
refactor: migrate Release fetching to the backend proxy, auto-load Forks and Releases, sync subscription publishing through the backend
lostiv May 12, 2026
94e6487
fix: CodeRabbit review fixes — side effects, subscription sync, error handling
lostiv May 12, 2026
b232906
fix: apply the maxPages cap and pagination delay to the incremental sync loop as well
lostiv May 12, 2026
a2c94d6
Merge pull request #16 from lostiv/fix/front
lostiv May 12, 2026
710eb08
feat: WebDAV auto-backup with configurable retention count
lostiv May 12, 2026
d382108
fix: CodeRabbit review fixes — webdavDeleteFile return value, input validation, accessibility attributes
lostiv May 12, 2026
1302a63
fix: CodeRabbit review — add boolean type validation for auto_backup_enabled
lostiv May 12, 2026
0a91e0d
chore: bump version to 0.6.2
lostiv May 12, 2026
43d0ae1
Merge pull request #17 from lostiv/feat/autosync
lostiv May 12, 2026
dd3c415
fix: auto-backup 400 error and scheduling defect when switching configs
lostiv May 12, 2026
5366501
fix: tighten the data.error type guard to avoid degrading non-string error messages
lostiv May 12, 2026
28aa950
Merge pull request #18 from lostiv/fix/autosync
lostiv May 12, 2026
d15fa76
refactor: merge auto-backup settings into the backup/restore panel
lostiv May 12, 2026
6cc793f
style: unify backup panel buttons to the brand color brand-indigo
lostiv May 12, 2026
a1675ba
fix: CodeRabbit review — fix placeholder polluting secrets, button disable logic, and accessibility labels
lostiv May 12, 2026
c5ec1a0
Merge pull request #19 from lostiv/fix/autosync
lostiv May 12, 2026
bf6f321
fix: sync WebDAV before saving auto-backups, make restore compatible with the backend format, add timestamps to filenames
lostiv May 12, 2026
f8f36ca
Merge pull request #20 from lostiv/fix/autosync
lostiv May 12, 2026
5f14de8
fix: isActive field not synced after a WebDAV config is activated, causing auto-backup validation to fail
lostiv May 12, 2026
8b0c69d
fix: during hydration, clean up orphaned activeWebDAVConfig references pointing at deleted configs
lostiv May 12, 2026
2465e7a
Merge pull request #21 from lostiv/fix/sync
lostiv May 12, 2026
fd20a00
lostiv May 13, 2026
5590fca
lostiv May 13, 2026
49 changes: 49 additions & 0 deletions .agents/skills/caveman/SKILL.md
@@ -0,0 +1,49 @@
---
name: caveman
description: >
Ultra-compressed communication mode. Cuts token usage ~75% by dropping
filler, articles, and pleasantries while keeping full technical accuracy.
Use when user says "caveman mode", "talk like caveman", "use caveman",
"less tokens", "be brief", or invokes /caveman.
---

Respond terse like smart caveman. All technical substance stay. Only fluff die.

## Persistence

ACTIVE EVERY RESPONSE once triggered. No revert after many turns. No filler drift. Still active if unsure. Off only when user says "stop caveman" or "normal mode".

## Rules

Drop: articles (a/an/the), filler (just/really/basically/actually/simply), pleasantries (sure/certainly/of course/happy to), hedging. Fragments OK. Short synonyms (big not extensive, fix not "implement a solution for"). Abbreviate common terms (DB/auth/config/req/res/fn/impl). Strip conjunctions. Use arrows for causality (X -> Y). One word when one word enough.

Technical terms stay exact. Code blocks unchanged. Errors quoted exact.

Pattern: `[thing] [action] [reason]. [next step].`

Not: "Sure! I'd be happy to help you with that. The issue you're experiencing is likely caused by..."
Yes: "Bug in auth middleware. Token expiry check use `<` not `<=`. Fix:"

### Examples

**"Why React component re-render?"**

> Inline obj prop -> new ref -> re-render. `useMemo`.

**"Explain database connection pooling."**

> Pool = reuse DB conn. Skip handshake -> fast under load.

## Auto-Clarity Exception

Drop caveman temporarily for: security warnings, irreversible action confirmations, multi-step sequences where fragment order risks misread, user asks to clarify or repeats question. Resume caveman after clear part done.

Example -- destructive op:

> **Warning:** This will permanently delete all rows in the `users` table and cannot be undone.
>
> ```sql
> DROP TABLE users;
> ```
>
> Caveman resume. Verify backup exist first.
117 changes: 117 additions & 0 deletions .agents/skills/diagnose/SKILL.md
@@ -0,0 +1,117 @@
---
name: diagnose
description: Disciplined diagnosis loop for hard bugs and performance regressions. Reproduce → minimise → hypothesise → instrument → fix → regression-test. Use when user says "diagnose this" / "debug this", reports a bug, says something is broken/throwing/failing, or describes a performance regression.
---

# Diagnose

A discipline for hard bugs. Skip phases only when explicitly justified.

When exploring the codebase, use the project's domain glossary to get a clear mental model of the relevant modules, and check ADRs in the area you're touching.

## Phase 1 — Build a feedback loop

**This is the skill.** Everything else is mechanical. If you have a fast, deterministic, agent-runnable pass/fail signal for the bug, you will find the cause — bisection, hypothesis-testing, and instrumentation all just consume that signal. If you don't have one, no amount of staring at code will save you.

Spend disproportionate effort here. **Be aggressive. Be creative. Refuse to give up.**

### Ways to construct one — try them in roughly this order

1. **Failing test** at whatever seam reaches the bug — unit, integration, e2e.
2. **Curl / HTTP script** against a running dev server.
3. **CLI invocation** with a fixture input, diffing stdout against a known-good snapshot.
4. **Headless browser script** (Playwright / Puppeteer) — drives the UI, asserts on DOM/console/network.
5. **Replay a captured trace.** Save a real network request / payload / event log to disk; replay it through the code path in isolation.
6. **Throwaway harness.** Spin up a minimal subset of the system (one service, mocked deps) that exercises the bug code path with a single function call.
7. **Property / fuzz loop.** If the bug is "sometimes wrong output", run 1000 random inputs and look for the failure mode.
8. **Bisection harness.** If the bug appeared between two known states (commit, dataset, version), automate "boot at state X, check, repeat" so you can `git bisect run` it.
9. **Differential loop.** Run the same input through old-version vs new-version (or two configs) and diff outputs.
10. **HITL bash script.** Last resort. If a human must click, drive _them_ with `scripts/hitl-loop.template.sh` so the loop is still structured. Captured output feeds back to you.

Build the right feedback loop, and the bug is 90% fixed.
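Loop 9 (the differential loop) is small enough to sketch. Here `old_impl` and `new_impl` are hypothetical stand-ins for whatever two versions, binaries, or configs you are actually comparing:

```shell
#!/usr/bin/env bash
# Differential feedback loop sketch: same input through two versions,
# diff the outputs. Exit status of `diff` is the pass/fail signal.
set -euo pipefail

old_impl() { sort "$1"; }        # known-good behaviour (stand-in)
new_impl() { sort -r "$1"; }     # suspect behaviour (stand-in; "bug": reversed order)

input=$(mktemp)
printf 'b\na\nc\n' > "$input"

if diff <(old_impl "$input") <(new_impl "$input") > /dev/null; then
  echo "PASS: outputs identical"
else
  echo "FAIL: outputs diverge"   # the deterministic failure signal
fi
rm -f "$input"
```

The value is the exit status, not the diff text: a script like this is cheap to re-run on every hypothesis test in the later phases.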

### Iterate on the loop itself

Treat the loop as a product. Once you have _a_ loop, ask:

- Can I make it faster? (Cache setup, skip unrelated init, narrow the test scope.)
- Can I make the signal sharper? (Assert on the specific symptom, not "didn't crash".)
- Can I make it more deterministic? (Pin time, seed RNG, isolate filesystem, freeze network.)

A 30-second flaky loop is barely better than no loop. A 2-second deterministic loop is a debugging superpower.

### Non-deterministic bugs

The goal is not a clean repro but a **higher reproduction rate**. Loop the trigger 100×, parallelise, add stress, narrow timing windows, inject sleeps. A 50%-flake bug is debuggable; 1% is not — keep raising the rate until it's debuggable.
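A minimal sketch of the "loop the trigger" idea. The `repro` function is a stand-in for your real Phase 1 command; this toy version fails roughly half its runs so the counter has something to count:

```shell
#!/usr/bin/env bash
# Measure the reproduction rate by looping the flaky trigger 100 times.
set -uo pipefail

repro() { (( RANDOM % 2 == 0 )); }   # stand-in: fails ~50% of runs

fails=0
for _ in $(seq 1 100); do
  repro || fails=$((fails + 1))
done
echo "failures: ${fails}/100"
```

Re-run this after each change (added stress, narrowed timing window, injected sleep) and keep the variant with the highest failure count.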

### When you genuinely cannot build a loop

Stop and say so explicitly. List what you tried. Ask the user for: (a) access to whatever environment reproduces it, (b) a captured artifact (HAR file, log dump, core dump, screen recording with timestamps), or (c) permission to add temporary production instrumentation. Do **not** proceed to hypothesise without a loop.

Do not proceed to Phase 2 until you have a loop you believe in.

## Phase 2 — Reproduce

Run the loop. Watch the bug appear.

Confirm:

- [ ] The loop produces the failure mode the **user** described — not a different failure that happens to be nearby. Wrong bug = wrong fix.
- [ ] The failure is reproducible across multiple runs (or, for non-deterministic bugs, reproducible at a high enough rate to debug against).
- [ ] You have captured the exact symptom (error message, wrong output, slow timing) so later phases can verify the fix actually addresses it.

Do not proceed until you reproduce the bug.

## Phase 3 — Hypothesise

Generate **3–5 ranked hypotheses** before testing any of them. Single-hypothesis generation anchors on the first plausible idea.

Each hypothesis must be **falsifiable**: state the prediction it makes.

> Format: "If <X> is the cause, then <changing Y> will make the bug disappear / <changing Z> will make it worse."

If you cannot state the prediction, the hypothesis is a vibe — discard or sharpen it.

**Show the ranked list to the user before testing.** They often have domain knowledge that re-ranks instantly ("we just deployed a change to #3"), or know hypotheses they've already ruled out. Cheap checkpoint, big time saver. Don't block on it — proceed with your ranking if the user is AFK.

## Phase 4 — Instrument

Each probe must map to a specific prediction from Phase 3. **Change one variable at a time.**

Tool preference:

1. **Debugger / REPL inspection** if the env supports it. One breakpoint beats ten logs.
2. **Targeted logs** at the boundaries that distinguish hypotheses.
3. Never "log everything and grep".

**Tag every debug log** with a unique prefix, e.g. `[DEBUG-a4f2]`. Cleanup at the end becomes a single grep. Untagged logs survive; tagged logs die.
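The tagging rule as a sketch. The file names and the probe line are hypothetical; the point is that one unique prefix makes the cleanup gate a single grep:

```shell
#!/usr/bin/env bash
# Tagged-probe cleanup check: if the grep prints anything,
# instrumentation from this session is still live in the tree.
set -euo pipefail

src=$(mktemp -d)
# Pretend these files were instrumented during Phase 4.
printf 'console.error("[DEBUG-a4f2] token.exp =", token.exp);\n' > "$src/auth.js"
printf 'return total;\n' > "$src/cart.js"

grep -rn "DEBUG-a4f2" "$src" || echo "clean"
```

Run the same grep in Phase 6 before declaring done; an empty result is the "all probes removed" signal.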

**Perf branch.** For performance regressions, logs are usually wrong. Instead: establish a baseline measurement (timing harness, `performance.now()`, profiler, query plan), then bisect. Measure first, fix second.
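"Measure first" can be as small as this. `slow_path` is a stand-in for the real suspect command (here it just sleeps 10 ms); the printed number is the baseline you compare every candidate fix against:

```shell
#!/usr/bin/env bash
# Minimal timing baseline: wall-clock time N runs of the suspect path.
set -euo pipefail

slow_path() { sleep 0.01; }   # stand-in for the real slow command

runs=5
start=$(date +%s%N)           # GNU date: nanoseconds since epoch
for _ in $(seq 1 "$runs"); do
  slow_path
done
end=$(date +%s%N)
echo "baseline: $(( (end - start) / runs / 1000000 )) ms/run"
```

A profiler or query plan gives a richer picture, but even this crude harness stops "it feels faster" from passing as evidence.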

## Phase 5 — Fix + regression test

Write the regression test **before the fix** — but only if there is a **correct seam** for it.

A correct seam is one where the test exercises the **real bug pattern** as it occurs at the call site. If the only available seam is too shallow (single-caller test when the bug needs multiple callers, unit test that can't replicate the chain that triggered the bug), a regression test there gives false confidence.

**If no correct seam exists, that itself is the finding.** Note it. The codebase architecture is preventing the bug from being locked down. Flag this for the next phase.

If a correct seam exists:

1. Turn the minimised repro into a failing test at that seam.
2. Watch it fail.
3. Apply the fix.
4. Watch it pass.
5. Re-run the Phase 1 feedback loop against the original (un-minimised) scenario.

## Phase 6 — Cleanup + post-mortem

Required before declaring done:

- [ ] Original repro no longer reproduces (re-run the Phase 1 loop)
- [ ] Regression test passes (or absence of seam is documented)
- [ ] All `[DEBUG-...]` instrumentation removed (`grep` the prefix)
- [ ] Throwaway prototypes deleted (or moved to a clearly-marked debug location)
- [ ] The hypothesis that turned out correct is stated in the commit / PR message — so the next debugger learns

**Then ask: what would have prevented this bug?** If the answer involves architectural change (no good test seam, tangled callers, hidden coupling) hand off to the `/improve-codebase-architecture` skill with the specifics. Make the recommendation **after** the fix is in, not before — you have more information now than when you started.
41 changes: 41 additions & 0 deletions .agents/skills/diagnose/scripts/hitl-loop.template.sh
@@ -0,0 +1,41 @@
#!/usr/bin/env bash
# Human-in-the-loop reproduction loop.
# Copy this file, edit the steps below, and run it.
# The agent runs the script; the user follows prompts in their terminal.
#
# Usage:
# bash hitl-loop.template.sh
#
# Two helpers:
# step "<instruction>" → show instruction, wait for Enter
# capture VAR "<question>" → show question, read response into VAR
#
# At the end, captured values are printed as KEY=VALUE for the agent to parse.

set -euo pipefail

step() {
printf '\n>>> %s\n' "$1"
read -r -p " [Enter when done] " _
}

capture() {
local var="$1" question="$2" answer
printf '\n>>> %s\n' "$question"
read -r -p " > " answer
printf -v "$var" '%s' "$answer"
}

# --- edit below ---------------------------------------------------------

step "Open the app at http://localhost:3000 and sign in."

capture ERRORED "Click the 'Export' button. Did it throw an error? (y/n)"

capture ERROR_MSG "Paste the error message (or 'none'):"

# --- edit above ---------------------------------------------------------

printf '\n--- Captured ---\n'
printf 'ERRORED=%s\n' "$ERRORED"
printf 'ERROR_MSG=%s\n' "$ERROR_MSG"
10 changes: 10 additions & 0 deletions .agents/skills/grill-me/SKILL.md
@@ -0,0 +1,10 @@
---
name: grill-me
description: Interview the user relentlessly about a plan or design until reaching shared understanding, resolving each branch of the decision tree. Use when user wants to stress-test a plan, get grilled on their design, or mentions "grill me".
---

Interview me relentlessly about every aspect of this plan until we reach a shared understanding. Walk down each branch of the design tree, resolving dependencies between decisions one-by-one. For each question, provide your recommended answer.

Ask the questions one at a time.

If a question can be answered by exploring the codebase, explore the codebase instead.
47 changes: 47 additions & 0 deletions .agents/skills/grill-with-docs/ADR-FORMAT.md
@@ -0,0 +1,47 @@
# ADR Format

ADRs live in `docs/adr/` and use sequential numbering: `0001-slug.md`, `0002-slug.md`, etc.

Create the `docs/adr/` directory lazily — only when the first ADR is needed.

## Template

```md
# {Short title of the decision}

{1-3 sentences: what's the context, what did we decide, and why.}
```

That's it. An ADR can be a single paragraph. The value is in recording *that* a decision was made and *why* — not in filling out sections.

## Optional sections

Only include these when they add genuine value. Most ADRs won't need them.

- **Status** frontmatter (`proposed | accepted | deprecated | superseded by ADR-NNNN`) — useful when decisions are revisited
- **Considered Options** — only when the rejected alternatives are worth remembering
- **Consequences** — only when non-obvious downstream effects need to be called out

## Numbering

Scan `docs/adr/` for the highest existing number and increment by one.
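The scan-and-increment rule can be sketched in a few lines of bash (a temp dir stands in for `docs/adr/`; the file names are hypothetical examples of the `NNNN-slug.md` convention):

```shell
#!/usr/bin/env bash
# Next ADR number: highest existing NNNN prefix in docs/adr/, plus one.
set -euo pipefail
adr_dir=$(mktemp -d)   # stand-in for docs/adr/
touch "$adr_dir/0001-use-monorepo.md" "$adr_dir/0007-rest-over-graphql.md"

last=$(ls "$adr_dir" | grep -oE '^[0-9]{4}' | sort -n | tail -n 1)
next=$(printf '%04d' $((10#$last + 1)))   # 10# avoids octal parsing of "0007"
echo "$next"
rm -rf "$adr_dir"
```

With the two files above, the highest prefix is `0007`, so the next ADR gets `0008`.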

## When to offer an ADR

All three of these must be true:

1. **Hard to reverse** — the cost of changing your mind later is meaningful
2. **Surprising without context** — a future reader will look at the code and wonder "why on earth did they do it this way?"
3. **The result of a real trade-off** — there were genuine alternatives and you picked one for specific reasons

If a decision is easy to reverse, skip it — you'll just reverse it. If it's not surprising, nobody will wonder why. If there was no real alternative, there's nothing to record beyond "we did the obvious thing."

### What qualifies

- **Architectural shape.** "We're using a monorepo." "The write model is event-sourced, the read model is projected into Postgres."
- **Integration patterns between contexts.** "Ordering and Billing communicate via domain events, not synchronous HTTP."
- **Technology choices that carry lock-in.** Database, message bus, auth provider, deployment target. Not every library — just the ones that would take a quarter to swap out.
- **Boundary and scope decisions.** "Customer data is owned by the Customer context; other contexts reference it by ID only." The explicit "no" decisions are as valuable as the "yes" ones.
- **Deliberate deviations from the obvious path.** "We're using manual SQL instead of an ORM because X." Anything where a reasonable reader would assume the opposite. These stop the next engineer from "fixing" something that was deliberate.
- **Constraints not visible in the code.** "We can't use AWS because of compliance requirements." "Response times must be under 200ms because of the partner API contract."
- **Rejected alternatives when the rejection is non-obvious.** If you considered GraphQL and picked REST for subtle reasons, record it — otherwise someone will suggest GraphQL again in six months.
77 changes: 77 additions & 0 deletions .agents/skills/grill-with-docs/CONTEXT-FORMAT.md
@@ -0,0 +1,77 @@
# CONTEXT.md Format

## Structure

```md
# {Context Name}

{One or two sentence description of what this context is and why it exists.}

## Language

**Order**:
A customer's request to purchase one or more products.
_Avoid_: Purchase, transaction

**Invoice**:
A request for payment sent to a customer after delivery.
_Avoid_: Bill, payment request

**Customer**:
A person or organization that places orders.
_Avoid_: Client, buyer, account

## Relationships

- An **Order** produces one or more **Invoices**
- An **Invoice** belongs to exactly one **Customer**

## Example dialogue

> **Dev:** "When a **Customer** places an **Order**, do we create the **Invoice** immediately?"
> **Domain expert:** "No — an **Invoice** is only generated once a **Fulfillment** is confirmed."

## Flagged ambiguities

- "account" was used to mean both **Customer** and **User** — resolved: these are distinct concepts.
```

## Rules

- **Be opinionated.** When multiple words exist for the same concept, pick the best one and list the others as aliases to avoid.
- **Flag conflicts explicitly.** If a term is used ambiguously, call it out in "Flagged ambiguities" with a clear resolution.
- **Keep definitions tight.** One sentence max. Define what it IS, not what it does.
- **Show relationships.** Use bold term names and express cardinality where obvious.
- **Only include terms specific to this project's context.** General programming concepts (timeouts, error types, utility patterns) don't belong even if the project uses them extensively. Before adding a term, ask: is this a concept unique to this context, or a general programming concept? Only the former belongs.
- **Group terms under subheadings** when natural clusters emerge. If all terms belong to a single cohesive area, a flat list is fine.
- **Write an example dialogue.** A conversation between a dev and a domain expert that demonstrates how the terms interact naturally and clarifies boundaries between related concepts.

## Single vs multi-context repos

**Single context (most repos):** One `CONTEXT.md` at the repo root.

**Multiple contexts:** A `CONTEXT-MAP.md` at the repo root lists the contexts, where they live, and how they relate to each other:

```md
# Context Map

## Contexts

- [Ordering](./src/ordering/CONTEXT.md) — receives and tracks customer orders
- [Billing](./src/billing/CONTEXT.md) — generates invoices and processes payments
- [Fulfillment](./src/fulfillment/CONTEXT.md) — manages warehouse picking and shipping

## Relationships

- **Ordering → Fulfillment**: Ordering emits `OrderPlaced` events; Fulfillment consumes them to start picking
- **Fulfillment → Billing**: Fulfillment emits `ShipmentDispatched` events; Billing consumes them to generate invoices
- **Ordering ↔ Billing**: Shared types for `CustomerId` and `Money`
```

The skill infers which structure applies:

- If `CONTEXT-MAP.md` exists, read it to find contexts
- If only a root `CONTEXT.md` exists, single context
- If neither exists, create a root `CONTEXT.md` lazily when the first term is resolved

When multiple contexts exist, infer which one the current topic relates to. If unclear, ask.
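The inference order above, as a sketch (`repo` stands in for the repository root; adapt the paths to wherever the skill runs):

```shell
#!/usr/bin/env bash
# Three-way structure inference: map file, then root context, else lazy.
set -euo pipefail
repo=$(mktemp -d)   # stand-in for the repo root (empty here)

if [[ -f "$repo/CONTEXT-MAP.md" ]]; then
  mode="multi"      # read the map to find per-context files
elif [[ -f "$repo/CONTEXT.md" ]]; then
  mode="single"     # one root context
else
  mode="lazy"       # create CONTEXT.md when the first term is resolved
fi
echo "$mode"
```

Since the stand-in directory is empty, this prints the lazy-creation branch; in a real repo the first two checks pick up existing files.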