This repository contains the on-site backend component of the MEDIATA platform.
For the complete platform (front end, back end, orchestration, and docs), see tecnomod-um/MEDIATA_project.
This component is deployed on-premise at each clinical site. It hosts and processes sensitive datasets locally and responds only to authenticated users holding valid Kerberos tickets.
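As a sketch of what an authenticated request could look like, a client holding a valid Kerberos ticket can present it via curl's SPNEGO support (`--negotiate` with an empty credential). Note that the host and the `/api/datasets` path below are placeholders, not endpoints documented by this repository; the command is printed rather than executed, since it needs a live node and a real ticket:

```bash
# Hypothetical example: the host and /api/datasets path are placeholders,
# not endpoints documented by this repository.
NODE_URL="${NODE_URL:-https://yournode.mediata.dev}"

# After obtaining a ticket with `kinit user@REALM`, curl can present it
# via SPNEGO: --negotiate with an empty user/password (-u :).
AUTH_CMD="curl --negotiate -u : ${NODE_URL}/api/datasets"

# Printed instead of run, since it requires a live node and a valid ticket.
echo "Example request: ${AUTH_CMD}"
```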
## Features

- Secure Kerberos ticket validation
- Local dataset analysis (univariate + bivariate stats)
- Lightweight preprocessing and integration pipeline
- Dataset and DCAT metadata exposure
- Exportable mapped files and RDF resources
## Prerequisites

- Java 17+
- Maven 3.6+
- Docker
- Git
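The prerequisites above can be checked up front with a small shell helper (a sketch; the tool list simply mirrors the requirements in this section):

```bash
# Sketch: verify that the tools required by this README are on PATH.
require() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

# Check each prerequisite; report all missing tools instead of exiting early.
status=0
for tool in java mvn docker git; do
  require "$tool" || status=1
done
```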
## Quick Start

1. Clone and configure:

   ```bash
   git clone https://github.com/tecnomod-um/MEDIATA_node.git
   cd MEDIATA_node
   cp node-secrets.env.example node-secrets.env
   # Edit node-secrets.env with your configuration
   ```

2. Build:

   ```bash
   mvn clean package
   ```

3. Build the Docker image:

   ```bash
   sudo docker build -t taniwha-backend-node .
   ```

4. Run:

   ```bash
   sudo docker run -d \
     --network host \
     --env-file ./node-secrets.env \
     -v ./taniwha:/taniwha \
     -e PORT=8080 \
     -e NAME="YourNodeName" \
     -e DESC="Your MEDIATA server" \
     -e COLOR=#008000 \
     -e NODE_IP=https://yournode.mediata.dev \
     taniwha-backend-node
   ```

The Docker container creates and mounts the following directories in /taniwha:
```
/taniwha/
├── datasets/          # Source datasets (CSV, TSV, XLSX, TTL)
├── mapped_datasets/   # Processed and harmonized datasets
├── fhir_mappings/     # FHIR mapping configurations
├── dataset_elements/  # Dataset element metadata
└── dataset_metadata/  # DCAT and dataset metadata
```
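Assuming the layout above, the host-side directory can be pre-created before the first run (a sketch; as noted, the container also creates missing subdirectories itself):

```bash
# Pre-create the host directory tree that the container mounts at /taniwha,
# mirroring the layout described above.
mkdir -p taniwha/datasets \
         taniwha/mapped_datasets \
         taniwha/fhir_mappings \
         taniwha/dataset_elements \
         taniwha/dataset_metadata
```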
Local volume mapping:

- The `-v ./taniwha:/taniwha` flag mounts a local `taniwha/` directory
- Create this directory before the first run: `mkdir -p taniwha`
- Data persists across container restarts
## Running Without Docker

```bash
mvn clean package
java -jar target/TANIWHA_Backend_node.jar
```

## Documentation

- Full Platform Documentation: MEDIATA_project
- API Documentation: see DOCUMENTATION.md
- Development Guidelines: see GUIDELINES.md
- Issue Tracking: GitHub Issues
## License

This project is released under the MIT License.