Releases: d-edgar/ChatHarbor
v2.2.2
v2.2.1
Full Changelog: v2.2.0...v2.2.1
v2.2.0
Full Changelog: v2.1.1...v2.2.0
v2.1.1
Full Changelog: v2.1.0...v2.1.1
v2.1.0
Full Changelog: v2.0.1...v2.1.0
v2.0.1
Full Changelog: v2.0.0...v2.0.1
v2.0.0 Release: AI refocus and transformation
ChatHarbor v2.0.0
A ground-up rebuild of the native macOS AI chat client — now with multi-provider support, structured brainstorming, model comparison, and much more.
What's New
Multi-Provider Support
ChatHarbor now works with four AI providers out of the box. Switch between them freely, mix and match across conversations, or pit them against each other in a head-to-head comparison.
- Ollama — Run local models like Llama 3, Mistral, and CodeLlama entirely on your Mac. Pull, manage, and delete models right from the app.
- OpenAI — GPT-4o, GPT-4, o1, o3-mini, and more.
- Anthropic — Claude Opus, Sonnet, and Haiku (latest versions).
- Apple Intelligence — On-device, private, and free. Requires macOS 26+.
All providers stream responses in real time with a unified interface. No need to juggle multiple apps.
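The "unified interface" across providers can be pictured as a small protocol that every backend adopts. This is only an illustrative sketch, not ChatHarbor's actual code: the protocol, type names, and the synchronous callback (a real client would stream chunks asynchronously over the network) are all assumptions.

```swift
// Hypothetical provider abstraction; names and shapes are assumed,
// not taken from ChatHarbor's source.
protocol ChatProvider {
    var name: String { get }
    // Delivers the response in chunks via a callback, mimicking streaming.
    func streamResponse(to prompt: String, onChunk: (String) -> Void)
}

// A toy provider that "streams" the prompt back word by word.
struct EchoProvider: ChatProvider {
    let name = "Echo"
    func streamResponse(to prompt: String, onChunk: (String) -> Void) {
        for word in prompt.split(separator: " ") {
            onChunk(String(word) + " ")
        }
    }
}
```

Because every backend conforms to the same protocol, the chat UI can swap providers mid-conversation without caring which one is behind the cursor.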
Multi-Method Brainstorming
An entirely new way to use AI — structured brainstorming sessions that walk you through proven creative thinking frameworks:
- Osborn Method — Classic divergent ideation → convergent evaluation → synthesis.
- Six Thinking Hats — Parallel thinking from six perspectives (facts, emotions, caution, optimism, creativity, process).
- Reverse Brainstorm — Ask "how could we make this worse?" then invert the answers into real solutions.
- Round Robin — Structured turns where each response builds on the last.
- Custom — Define your own roles, prompts, and flow.
Each session progresses through phases (Setup → Framing → Ideation → Evaluation → Synthesis) with user checkpoints between steps so you stay in control. Sessions track discrete ideas, token usage, and performance stats. Export a full session report when you're done.
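The phase flow with user checkpoints amounts to a small state machine. The sketch below shows one way it could look; the enum, struct, and method names are illustrative assumptions, not ChatHarbor's real types.

```swift
// Hypothetical model of the Setup → Framing → Ideation → Evaluation →
// Synthesis flow described above; all names are assumed.
enum BrainstormPhase: Int, CaseIterable {
    case setup, framing, ideation, evaluation, synthesis

    // The next phase in order, or nil once synthesis is reached.
    var next: BrainstormPhase? {
        BrainstormPhase(rawValue: rawValue + 1)
    }
}

struct BrainstormSession {
    private(set) var phase: BrainstormPhase = .setup
    private(set) var ideas: [String] = []

    mutating func record(idea: String) { ideas.append(idea) }

    // Advances only when the user confirms a checkpoint;
    // returns false if the session is already at its final phase.
    mutating func advanceAtCheckpoint() -> Bool {
        guard let next = phase.next else { return false }
        phase = next
        return true
    }
}
```

Gating every transition behind an explicit method call is what keeps the user "in control": the session can never skip a phase on its own.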
Model Comparison
Send the same prompt to 2–4 models simultaneously and watch them respond side by side. Compare quality, speed, style, and cost at a glance — with token counts, generation duration, and tokens-per-second for each response.
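The per-response stats are simple derived metrics. As a sketch, tokens-per-second could be computed like this; the struct and field names are assumptions for illustration, not the app's actual API.

```swift
import Foundation

// Hypothetical per-response stats record; names are assumed.
struct ResponseStats {
    let tokenCount: Int
    let duration: TimeInterval  // generation time in seconds

    // Throughput metric shown alongside each comparison column.
    var tokensPerSecond: Double {
        duration > 0 ? Double(tokenCount) / duration : 0
    }
}
```

For example, 120 tokens generated in 4 seconds works out to 30 tokens per second.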
Conversation Forking
Right-click any message and fork the conversation to a different model. The entire history replays through the new provider so you can see how a different model would have handled the same thread. Forks can be nested, and the sidebar groups them hierarchically under the original conversation.
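The replay idea behind forking can be sketched as: walk the original transcript, keep the user turns, and regenerate each assistant turn with the new model. Everything below is an illustrative assumption, not ChatHarbor's implementation; the `respond` closure stands in for a call to the newly selected provider.

```swift
// Hypothetical message type and fork routine; names are assumed.
struct Message {
    enum Role { case user, assistant }
    let role: Role
    let text: String
}

// Replays the user turns of an existing conversation through a new
// responder, producing a forked transcript whose assistant turns
// come from the new model instead of the original one.
func fork(history: [Message], respond: (String) -> String) -> [Message] {
    var forked: [Message] = []
    for message in history where message.role == .user {
        forked.append(message)
        forked.append(Message(role: .assistant, text: respond(message.text)))
    }
    return forked
}
```

Since a fork is just another conversation derived from a parent, nesting falls out naturally, which is what lets the sidebar group forks hierarchically under the original thread.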
Prompt Library
Eight built-in system prompt templates — Default, Code Assistant, Writing Editor, Research Analyst, Socratic Tutor, Concise, Creative Writer, and Devil's Advocate — plus the ability ...
v1.0.20
Full Changelog: v1.0.19...v1.0.20
v1.0.19
Full Changelog: v1.0.18...v1.0.19
v1.0.18
Full Changelog: v1.0.17...v1.0.18