This project was developed to address the challenge of efficiently viewing and managing large ChatGPT data exports. The raw export format, while comprehensive, is often unwieldy due to its massive JSON file size and embedded assets, making it difficult to navigate and use for analysis or archiving. This SPA aims to provide a user-friendly interface that converts these exports into a more accessible, optimized, and searchable format, enhancing the usability of your chat history.
A live, hosted version of this application is available at: https://exoridus.github.io/chatgpt-export-viewer/
This live demo does not persist your imported datasets, but it fully supports importing your ChatGPT data export `.zip` files directly into the browser.
- Import ChatGPT Data Exports: Process official ChatGPT data exports (typically `.zip` archives).
- View Conversations: Browse, search, and interact with your imported conversations.
- Blazing Fast Search: Trigram-powered search palette with instant results and jump-to-hit navigation on click.
- Asset Gallery: A dedicated gallery page displaying all referenced asset files and generated outputs in a grid view, grouped by whether they still appear in a conversation.
- Optimize & Convert Data: Transform large, unwieldy JSON exports into smaller, optimized, and easily manageable conversation files.
- Export Processed Data: Export converted conversations as a `.zip` archive, ready for local extraction.
For users who want the simplest way to view their ChatGPT data locally without any build steps:
- Download the release zip: Get the latest `chatgpt-export-viewer-v*.zip` from the GitHub Releases page. This zip contains the fully pre-built SPA, including the `import-dataset` binary.
- Extract: Unzip it to a directory of your choice; the contents will be inside a `chatgpt-export-viewer/` folder.
- Serve Locally: For the best experience and to avoid potential issues with file loading, serve the extracted directory using a simple static web server:
```shell
# Install if you don't have it:
npm install -g serve

serve chatgpt-export-viewer
```

Then navigate to the local URL provided by `serve` (e.g., `http://localhost:3000`).
Using the SPA:
- Once loaded, you can import your ChatGPT data export `.zip` file directly through the web interface. This import is temporary and will be lost on page reload.
- While in this temporary state, the SPA displays your conversations. You can then use the built-in export function to create a new `.zip` archive containing the processed conversations. This new zip can be extracted into the `chatgpt-export-viewer/conversations/` directory for permanent local access, letting you view your data offline without running any command-line tools.
This setup involves using source files or the import-dataset binary for a more flexible local development or data processing workflow.
Understanding ChatGPT Exports:
ChatGPT data exports (e.g., your-chatgpt-export.zip) typically contain:
- Microphone recordings and uploaded/generated images.
- A single, very large JSON file (often 100-300+ MB) containing all conversations and metadata. This raw JSON is impractical for direct editing or efficient use.
- A static HTML file that embeds the same large JSON content in an inline script, making the HTML itself equally large and difficult to open.
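Put together, an unpacked export looks roughly like this (a sketch; exact file names vary between export versions, and `chat.html`/`user.json` are assumptions based on common exports):

```
your-chatgpt-export/
├── conversations.json   # all conversations + metadata, often 100-300+ MB
├── chat.html            # static page embedding the same JSON inline
├── user.json            # account metadata
└── file-*.png, *.wav    # uploaded/generated images and recordings
```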
The import-dataset Binary:
This cross-platform, zero-dependency executable was developed to address the challenges of raw ChatGPT exports. It:
- Converts the giant `conversations.json` file into individual conversation files, each organized within its own directory.
- Stores associated asset files (recordings, images) alongside their respective conversations.
- Optimizes and converts conversations into smaller, readable, and searchable JSON objects.
You can download this binary as a standalone tool from the GitHub Releases page for direct conversion of your ChatGPT data export zip. When you build the application locally from source, this binary is also compiled and included within the dist/ directory.
Workflow 1: Using the import-dataset Binary
- Download Binary: Get the `import-dataset` binary from the GitHub Releases page.
- Run Conversion: Execute the binary from your terminal, pointing it to your export zip and a target output directory:

```shell
./import-dataset --out ./chatgpt-export-viewer your-export.zip
```

This writes `conversations.json`, `conversations/<id>/`, `assets/`, and `search_index.json` directly into the target directory.
- Serve and View: Serve the output directory with a static web server (e.g., `npx serve chatgpt-export-viewer`). Your converted conversations will be loaded automatically.
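After the conversion, the served directory ends up with roughly this layout (reconstructed from the file list above; comments are interpretive):

```
chatgpt-export-viewer/
├── conversations.json    # index of all converted conversations
├── conversations/<id>/   # one directory per conversation
├── assets/               # recordings and images
└── search_index.json     # trigram index backing the search palette
```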
Workflow 2: Building from Source (for Contributors/Forkers)
If you are contributing to the project or making local modifications, you would typically build the application from source. The resulting dist/ directory will contain the pre-built SPA, and the import-dataset binary will also be available within it. You can then follow the steps in Workflow 1, or utilize the SPA's built-in import/export features for data conversion.
Note: The chatgpt-export-viewer-v*.zip file from releases is the result of the build process and is intended for users who do not need to modify or build the source code.