Upload large files to Seafile with low memory usage. The file is split into parts and streamed; on Seafile you get several part files plus an MD5 check file.
| Script | Purpose | Typical use |
|---|---|---|
| seafile-upload-stream.py | Upload a single large file in chunks (parts). | One big file (e.g. backup, image); avoids server upload-size limits. Result on Seafile: file.part_000, file.part_001, …, file.md5, file.json. Resume supported. |
| seafile-upload-dir.py | Upload a local folder recursively; each file in one piece. | Many files or whole directory trees. Resume: skips already uploaded files (same size). |
| seafile-download-dir.py | Download a remote folder from Seafile recursively. | Mirror or backup a library folder. Resume: skips already downloaded files. Option --merge to join stream-upload parts locally. |
All scripts use the Seafile Repo-Token API (Seafile 13+). Configuration comes from a `.env` file or environment variables (see `.env-example`).

Requirements:

- Python 3
- Seafile 13 or later
- Repo token of the library (in Seafile: Library → Advanced → API Token)
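The scripts read their configuration from a `.env` file, with real environment variables taking precedence. A minimal sketch of that lookup, assuming simple `KEY=VALUE` lines (the actual scripts may support more of the dotenv syntax):

```python
import os

def load_config(env_file=".env"):
    """Read KEY=VALUE pairs from a .env file; environment variables
    with a SEAFILE_ prefix override the file's entries."""
    config = {}
    if os.path.exists(env_file):
        with open(env_file) as fh:
            for line in fh:
                line = line.strip()
                # Skip blanks, comments, and malformed lines.
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                config[key.strip()] = value.strip()
    # Environment variables still override .env.
    config.update({k: v for k, v in os.environ.items()
                   if k.startswith("SEAFILE_")})
    return config
```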
Optionally create and activate a virtual environment so dependencies stay isolated:
```
python3 -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
```

```
# Copy .env-example to .env and set SEAFILE_REPO_TOKEN (and optionally other vars)
cp .env-example .env

# Or export manually:
# export SEAFILE_REPO_TOKEN=your_repo_token
```
```
python seafile-upload-stream.py /path/to/large/file
```

Scripts load variables from a `.env` file in the current directory if present (see `.env-example` for all options). Environment variables still override `.env`.
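Conceptually, the stream upload reads the file one chunk at a time and names the parts `file.part_000`, `file.part_001`, and so on, so only one chunk is ever held in memory. A minimal sketch; the chunk size and helper name are illustrative, not the script's actual values:

```python
import os

CHUNK_SIZE = 100 * 1024 * 1024  # assumed default; the script may use another size

def iter_parts(path, chunk_size=CHUNK_SIZE):
    """Yield (part_name, bytes) pairs: file.part_000, file.part_001, ...
    Streaming one chunk at a time keeps memory usage low."""
    base = os.path.basename(path)
    with open(path, "rb") as fh:
        index = 0
        while chunk := fh.read(chunk_size):
            yield f"{base}.part_{index:03d}", chunk
            index += 1
```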
If the script is interrupted (Ctrl+C, crash, network error), only some parts may be on Seafile; the .md5 file will be missing.
- Progress: a progress file is created next to the file being uploaded (`<file>.seafile-upload.progress`). It records up to which part the upload has progressed.
- Simply run again: on the next run with the same file, the script detects the progress and uploads only the missing parts (no re-upload from part 0). It then creates and uploads the `.md5` file and removes the progress file.
- Same file: resume applies only to the same file (same absolute path and file size). For a different file or changed content, the script starts again at part 0.
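The resume rule (same absolute path and size, otherwise restart at part 0) can be sketched like this; the JSON layout of the progress file is an assumption for illustration, not the script's actual format:

```python
import json
import os

def read_progress(path):
    """Return the index of the next part to upload, resuming only when
    the recorded absolute path and file size still match."""
    progress_file = path + ".seafile-upload.progress"
    if not os.path.exists(progress_file):
        return 0
    with open(progress_file) as fh:
        state = json.load(fh)
    same_file = (state.get("path") == os.path.abspath(path)
                 and state.get("size") == os.path.getsize(path))
    # Different file or changed content: start again at part 0.
    return state.get("next_part", 0) if same_file else 0

def write_progress(path, next_part):
    """Record how far the upload has progressed."""
    state = {"path": os.path.abspath(path),
             "size": os.path.getsize(path),
             "next_part": next_part}
    with open(path + ".seafile-upload.progress", "w") as fh:
        json.dump(state, fh)
```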
On Seafile the file remains as multiple parts (e.g. file.part_000, file.part_001, …) and a check file file.md5. There is no server-side merge; joining is done locally after download:
- Download all part files (`file.part_*`) and the file `file.md5` from the Seafile library into one local folder (web UI, Seafile client, or API).
- In the terminal, in that folder, concatenate the parts in order:

```
cat file.part_* > file
```

(`file.part_*` is sorted alphabetically, so the order is correct.)

- Verify integrity with the included MD5 file:

```
md5sum -c file.md5
```

Expected output:

```
file: OK
```
On macOS (no md5sum): run md5 -r file and compare the shown hash with the contents of file.md5.
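The same merge-and-verify steps can also be done portably in Python (useful on macOS, which lacks md5sum). This is a sketch under the conventions stated above: parts sort alphabetically, and the `.md5` file begins with the hex digest:

```python
import glob
import hashlib
import os

def merge_parts(folder, basename):
    """Concatenate <basename>.part_* in sorted order and verify the
    result against the digest stored in <basename>.md5."""
    parts = sorted(glob.glob(os.path.join(folder, basename + ".part_*")))
    out_path = os.path.join(folder, basename)
    md5 = hashlib.md5()
    with open(out_path, "wb") as out:
        for part in parts:
            with open(part, "rb") as fh:
                # Stream in 1 MiB chunks to keep memory usage low.
                while chunk := fh.read(1 << 20):
                    out.write(chunk)
                    md5.update(chunk)
    with open(out_path + ".md5") as fh:
        expected = fh.read().split()[0]  # "<hash>  <name>" format
    return md5.hexdigest() == expected
```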
The script seafile-upload-dir.py uploads all files from a local folder recursively to Seafile – each file in one piece (no part splitting). For single very large files, use seafile-upload-stream.py instead.
```
export SEAFILE_REPO_TOKEN=your_repo_token

# Upload contents of ./my_folder to /data (or SEAFILE_TARGET_DIR)
python seafile-upload-dir.py ./my_folder

# Specify target folder on Seafile explicitly
python seafile-upload-dir.py ./my_folder /target/path
```

Resume: already uploaded files (same size) are skipped on the next run. Progress file: `.seafile-upload-dir.progress` in the source folder (removed when done).
400 Bad Request: The script retries once with a fresh upload link (tokens expire, often after ~1 h). Other causes: upload link was obtained for a different path than parent_dir, or server-side limits (file size, request size). On 400 the script prints the server response body to help diagnose.
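The retry-once behaviour described above can be sketched as a small wrapper; `get_upload_link` and `do_upload` are hypothetical stand-ins for the script's internals, not functions it actually exposes:

```python
def upload_with_retry(get_upload_link, do_upload):
    """On HTTP 400, fetch a fresh upload link and retry exactly once
    (upload-link tokens expire, often after ~1 h)."""
    link = get_upload_link()
    status, body = do_upload(link)
    if status == 400:
        # Retry once with a fresh link before giving up.
        link = get_upload_link()
        status, body = do_upload(link)
        if status == 400:
            # Print the server response body to help diagnose the failure.
            print("Server response:", body)
    return status
```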
The script seafile-download-dir.py downloads folder contents recursively (also via Repo-Token API). Resume: After an interruption, already fully downloaded files (same size) are skipped on the next run; the target folder contains a progress file .seafile-download.progress, which is removed when the download is complete.
```
export SEAFILE_REPO_TOKEN=your_repo_token

# Default: remote path from SEAFILE_REMOTE_DIR (or /data), output into current directory
python seafile-download-dir.py

# Remote folder and local output via options
python seafile-download-dir.py --seafile-path /data --output-folder ./local_folder
```

Options: `--seafile-path` (remote folder, default from env SEAFILE_REMOTE_DIR or /data), `--output-folder` (local directory, default `.`). To merge part files after download without a token: `--merge <dir> <basename>`.
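The download resume rule (skip files that already exist locally with the same size) boils down to a single check; `should_skip` is an illustrative helper, not a function from the script:

```python
import os

def should_skip(local_path, remote_size):
    """Skip a file that already exists locally with the same size,
    mirroring the download script's resume behaviour."""
    return (os.path.exists(local_path)
            and os.path.getsize(local_path) == remote_size)
```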
Build the image:

```
docker build -t seafile-upload .
```

Config is via environment variables. Pass them with `--env-file` (e.g. a local `.env`; do not commit secrets).

Stream upload (mount the file, pass the path inside the container):

```
docker run --rm --env-file .env -v /host/path/to/file:/data seafile-upload seafile-upload-stream.py /data/yourfile
```

Upload a directory (mount the local folder):

```
docker run --rm --env-file .env -v /host/my_folder:/data seafile-upload seafile-upload-dir.py /data /target
```

Download a directory (mount the target folder):

```
docker run --rm --env-file .env -v /host/output:/out seafile-upload seafile-download-dir.py --seafile-path /data --output-folder /out
```

Merge part files (no env file or token needed; mount the folder that contains the `.part_*` files):

```
docker run --rm -v /host/folder_with_parts:/data seafile-upload seafile-download-dir.py --merge /data myfile
```

This writes the merged file to /data/myfile (i.e. /host/folder_with_parts/myfile on the host) and removes the part files. Verify with `md5sum -c myfile.md5` inside the container or on the host.
If the mounted path is read-only, set SEAFILE_PROGRESS_DIR=/tmp (or a writable volume) so progress files are stored there instead of next to the file or in the folder.
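The fallback might look like the following; the exact precedence of SEAFILE_PROGRESS_DIR over the default location is an assumption based on the description above:

```python
import os

def progress_path(data_path):
    """Place the progress file next to the data file unless
    SEAFILE_PROGRESS_DIR points at a writable directory."""
    pdir = os.environ.get("SEAFILE_PROGRESS_DIR")
    if pdir:
        base = os.path.basename(data_path) + ".seafile-upload.progress"
        return os.path.join(pdir, base)
    # Default: store progress next to the file being uploaded.
    return data_path + ".seafile-upload.progress"
```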