The sections below detail how to deploy the PDP.
The PDP is configured via two mechanisms:
- For server-side configuration and very simple client-side configuration (such as the URL of the ncWMS service), a set of environment variables.
- For more complex client-side app configuration, configuration code in JavaScript files, at most one file per portal.
An example set of environment variables is found in `docker/production/`, specifically in:

- `docker-compose.yaml`: defines environment variables specific to the frontend and backend.
- `common.env`: defines environment variables common to frontend and backend. Used by `docker-compose.yaml`.
Note: Environment variables accessed by JavaScript are injected via the Python templating engine Jinja2, which substitutes variables using exactly the same syntax as JavaScript template strings, namely `${...}`. To pass environment variables whose content contains `${...}` through to JavaScript unmodified, the `$` must be escaped as `$$`, thus `$${...}`. This appears in the env var `BC_BASEMAP_URL`.
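The escaping can be illustrated with Python's `string.Template`, which happens to follow the same `${...}` substitution and `$$` escaping rules as the templating setup described above. This is an illustration only, not the PDP's actual injection code; the template value below is hypothetical:

```python
from string import Template

# Hypothetical template value: ${app_root} should be substituted at injection
# time, while ${z}, ${x}, ${y} must survive for JavaScript to interpolate.
raw = "${app_root}/tiles/$${z}/$${x}/$${y}.png"
rendered = Template(raw).substitute(app_root="/portal")

# ${app_root} is substituted, and each $$ collapses to a literal $,
# leaving the remaining placeholders intact.
print(rendered)  # /portal/tiles/${z}/${x}/${y}.png
```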
These items are available

- in the Python code as the `config` object;
- in the JavaScript code on the `pdp` global object.

They are loaded from the environment variables of the same name, upper cased.
| Name | Description |
|---|---|
| `app_root` | Root location where the data portal will be exposed. This location will need to be proxied to whatever port the server will be running on. |
| `data_root` | Root location of the backend data server. Probably `<app_root>/data`. If you are running in production, this location will need to be proxied to whatever port the data server will be running on. When running a development server, this is redirected internally. |
| `dsn` | Raster metadata database URL of the form `dialect[+driver]://username:password@host:port/database`. Password must either be supplied or available in the user's `~/.pgpass` file. |
| `pcds_dsn` | PCDS database URL of the form `dialect[+driver]://username:password@host:port/database`. Password must either be supplied or available in the user's `~/.pgpass` file. |
| `js_min` | Determines use of JavaScript bundling/minification. Values: `true` or `false`. |
| `geoserver_url` | PCDS Geoserver URL. |
| `ncwms_url` | Raster portal ncWMS 2.x / modelmeta translator URL. |
| `old_ncwms_url` | Raster portal pure ncWMS 1.x URL. Used to fill in services missing from ncWMS 2.x. |
| `na_tiles_url` | MapProxy URL for serving North America base maps. |
| `bc_basemap_url` | Tile server URLs (space-separated list) for BC base maps. |
| `use_analytics` | Enable or disable Google Analytics reporting. |
| `analytics` | Google Analytics ID. |
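For illustration, a fragment of such an env file might look like the following. The values are hypothetical; the variable names are the upper-cased forms from the table above:

```
# Hypothetical env file fragment; values are illustrative only.
APP_ROOT=https://services.example.org/portal
DATA_ROOT=https://services.example.org/portal/data
JS_MIN=true
USE_ANALYTICS=false
```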
Gunicorn is also configured through environment variables. Any environment
variable beginning with GUNICORN_ is interpreted as a configuration value.
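One common way such a prefix convention is implemented, sketched below, is to strip the prefix and lower-case the remainder to obtain the Gunicorn setting name. This is an illustration of the convention, not necessarily the PDP's actual mechanism:

```python
def gunicorn_settings(environ):
    """Map GUNICORN_* environment variables to Gunicorn setting names.

    Sketch only: illustrates the prefix convention described above,
    not necessarily the PDP's actual implementation.
    """
    prefix = "GUNICORN_"
    return {
        key[len(prefix):].lower(): value
        for key, value in environ.items()
        if key.startswith(prefix)
    }

# e.g. GUNICORN_WORKERS=10 corresponds to the Gunicorn setting `workers`
print(gunicorn_settings({"GUNICORN_WORKERS": "10", "PATH": "/usr/bin"}))
```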
Some portals are configured by hard-coded values in the client app JavaScript. Other portals are configured by a separate JS configuration file that exports a configuration object processed by the client app.
A separate configuration file can easily be superseded by mounting a volume to its file path that contains different configuration content. In the Docker container, such files have internal (target) file paths of the form `<HOMEDIR>/pdp/static/js/<portal>_config.js`; for example, `<HOMEDIR>/pdp/static/js/prism_demo_config.js`, where `<HOMEDIR>` is `/opt/dockeragent` unless the image is built with a differently named non-root user (default `dockeragent`).
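For example, a docker-compose override mounting a local configuration file over the default one might look like this. The service name `frontend` and the local file name are assumptions for illustration; consult the actual compose file for the real service name:

```yaml
services:
  frontend:
    volumes:
      # Mount a local config over the default PRISM portal config.
      - ./my_prism_demo_config.js:/opt/dockeragent/pdp/static/js/prism_demo_config.js
```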
Developers are strongly encouraged to keep the JS configuration files in this repo up to date with the most recently deployed configurations. When a configuration is changed for deployment, the repo copy of the configuration file should also be changed accordingly. A new release need not be made right away (that is, of course, the point of separate configuration), but eventually updates will make their way into releases, and we will also have typical or standard configurations that are easily accessible.
At present, the following JS portal configuration files exist:

- PRISM: `pdp/static/js/prism_demo_config.js`
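As a rough sketch, such a file defines a configuration object for the client app to consume. All property names below are hypothetical; see `pdp/static/js/prism_demo_config.js` for the real shape and export mechanism:

```javascript
// Hypothetical sketch of a portal configuration object; property names are
// illustrative only, not the client app's actual configuration schema.
const config = {
  portalName: "prism_demo", // assumed property
  defaultZoom: 5,           // assumed property
};

console.log(config.portalName);
```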
It is difficult if not impossible to install the PDP on a typical development workstation (particularly since the transition to Ubuntu 20.04).
To fill that gap, we've defined Docker infrastructure that allows you to
build and run a development deployment of the PDP on your workstation
that is equivalent to the production deployment. The infrastructure is in
docker/local-run/.
The infrastructure creates 4 Docker containers:

- PDP frontend (`pdp-local-run-fe`)
- PDP backend (`pdp-local-run-be`)
- PgBouncer (`pgbouncer-dev`): Manages connections to the databases.
- Nginx proxy (`pdp-dev-proxy`): Proxies frontend and backend containers to a single domain (`pdp.localhost:<port>`) to avoid CORS issues.
The frontend and backend containers mount your local codebase and install it, so that changes you make dynamically to the code are reflected inside the container as you work.
The running app is available at `http://pdp.localhost:5000/portal/<portal>/map/`; for example, `http://pdp.localhost:5000/portal/bc_prism/map/`.
- Advance prep

  Do each of the following things once per workstation.

  - Configure Docker user namespace mapping.

    - Clone `pdp-docker`.
    - Follow the instructions in the `pdp-docker` documentation: Setting up Docker namespace remapping (with recommended parameters).

  - Create `docker/local-run/common-with-passwords.env` from `docker/local-run/common.env` by adding passwords for the `pcic_meta` and `crmp` databases.

  - Create `docker/local-run/pgbounce_users-with-passwords.txt` from `docker/local-run/pgbounce_users.txt` by inserting correct md5 sums.

  - Edit your `/etc/hosts`: Add `pdp.localhost` to the line starting with `127.0.0.1`. The result will look like

    ```
    127.0.0.1 localhost pdp.localhost
    ```

    This allows the Nginx reverse proxy set up by the docker-compose to refer to the domain `pdp.localhost`. Note that the frontend container is configured with `APP_ROOT` and `DATA_ROOT` using this domain.
- Build the image

  The image need only be (re)built when:

  - the project is first cloned, or
  - any of the `*requirements.txt` files change, or
  - the local-run Dockerfile changes.

  The built image contains all dependencies specified in those files (but not the PDP codebase). It forms the basis for installing and running your local codebase.

  To build the image:

  ```bash
  docker pull pcic/pdp-base-minimal
  docker-compose -f docker/local-run/docker-compose.yaml build
  ```

  Image build can take several minutes.
-
Mount the gluster
/storagevolumeMount locally to
/storageso that those data files are accessible on your workstation.sudo mount -t cifs -o username=XXXX@uvic.ca //pcic-storage.pcic.uvic.ca/storage/ /storage -
- Start the containers

  ```bash
  docker-compose -f docker/local-run/docker-compose.yaml up -d
  ```

  This starts containers for the backend, frontend, pgbouncer, and a local reverse proxy that maps the HTTP address `pdp.localhost:5000` onto the appropriate containers (mainly to avoid CORS problems). The frontend and backend containers automatically start up the application.
- Point your browser at `pdp.localhost:5000/portal/<name>/map/`

  This should load the named PDP portal.
- Change your code

  Since your local codebase is mounted into the containers and installed in editable/development mode (`pip install -e .`), any code changes you make externally (in your local filesystem) are reflected "live" inside the containers.
- Restart server (Python code changes)

  If you change only JavaScript code (or other items under `pdp/static`), you can skip this step. If you change Python code, you will have to stop and restart the appropriate server (frontend or backend; you have to decide which, depending on the code you changed). If you're not sure or can't be bothered to determine it, you can stop and restart both:

  ```bash
  docker exec -it pdp-local-run-fe ./restart-gunicorn.sh
  docker exec -it pdp-local-run-be ./restart-gunicorn.sh
  ```

  To restart just one or the other, run only the corresponding command.
- Refresh browser

  You may need to clear caches to ensure you get a fresh copy of changed code or data.
- Stop the containers when you're done

  When you have completed a cycle of development and testing, you may wish to stop the Docker containers:

  ```bash
  docker-compose -f docker/local-run/docker-compose.yaml down
  ```
- Extra: Run an interactive bash shell inside a container

  When the containers are running, you can poke around inside them and/or execute tests inside them by connecting to them interactively:

  ```bash
  docker exec -it <container> bash
  ```

  This starts a bash shell inside the container and connects you to it. You may issue any command from the prompt.
- Data files need only be mounted to the backend service.
- JS configuration files need only be mounted to the frontend service. An example one is included in this directory, and mounted. It overrides the default one in the project.
- If you are getting a `client_login_timeout()` error message connecting to the database, or error messages while building the local Docker image, your VPN may be interfering with Docker's networking. Try OpenConnect VPN instead of AnyConnect, if applicable.
A production instance should be run in a production-ready WSGI container with proper process monitoring. We use Gunicorn as the WSGI container, Supervisord for process monitoring, and Apache as a reverse proxy.
In production, the frontend and backend run in separate WSGI containers. This is because the front end serves short, non-blocking requests, whereas the back end serves fewer long, process-blocking requests.
We deploy in Docker containers. All the Docker infrastructure necessary to construct a production deployment is found in `docker/production`. The alert reader will note that this is very similar to the `docker/local-run` infrastructure.
In your deployment directory, you will need to:

- Copy the contents of `docker/production/`.
- Create `docker/production/common-with-passwords.env` from `docker/local-run/common.env` by adding passwords for the `pcic_meta` and `crmp` databases.
- Create `docker/production/pgbounce_users-with-passwords.txt` from `docker/local-run/pgbounce_users.txt` by inserting correct md5 sums.
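The md5 sums in a PgBouncer userlist conventionally follow the PostgreSQL md5 auth format, `"md5" + md5(password + username)`. A small helper can compute them; the credentials below are hypothetical, and you should confirm your PgBouncer `auth_type` before relying on this format:

```python
import hashlib

def pgbouncer_md5(username, password):
    """Compute a PostgreSQL/PgBouncer md5 auth entry: 'md5' + md5(password + username).

    Sketch based on the standard PostgreSQL md5 password format; verify
    against your PgBouncer auth_type configuration.
    """
    digest = hashlib.md5((password + username).encode("utf-8")).hexdigest()
    return "md5" + digest

# Hypothetical credentials, for illustration only.
print(pgbouncer_md5("pdp_user", "s3cret"))
```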
Starting and stopping the containers is done in the usual fashion with `docker-compose`.
When making changes to the production Docker infrastructure, it can be helpful to deploy the production containers locally.
To support this, we have also defined the file `docker/production/docker-compose-local.yaml`. Like `docker/local-run/docker-compose.yaml`, it starts a (different) set of containers that enable the PDP to be run locally, in this case the production instance. Unfortunately, for reasons not entirely understood just yet, they too must be deployed to port 5000, i.e., `pdp.localhost:5000`, so production and local-run cannot be running simultaneously.
For more information on local deployment, see section Deploying locally for development.
- Build the image

  ```bash
  docker-compose -f docker/production/docker-compose-local.yaml build
  ```

- Start the containers

  ```bash
  docker-compose -f docker/production/docker-compose-local.yaml up -d
  ```

- Point your browser at `pdp.localhost:5000/portal/<name>/map/`

  This should load the named PDP portal.

- Stop the containers

  ```bash
  docker-compose -f docker/production/docker-compose-local.yaml down
  ```
Running in Gunicorn can be tested with commands similar to the following:

```bash
pyenv/bin/gunicorn -b 0.0.0.0:<port1> pdp.wsgi:frontend
pyenv/bin/gunicorn -b 0.0.0.0:<port2> pdp.wsgi:backend
```

Note: This is only an example process monitoring setup. Details can and will be different depending on your particular deployment strategy.
Set up the Supervisord config file using

```bash
pyenv/bin/echo_supervisord_conf > /install/location/supervisord.conf
```

In order to run Supervisord, the config file must have a `[supervisord]` section. Here's a sample section:

```ini
[supervisord]
logfile=/install/location/etc/<supervisord_logfile> ; (main log file; default $CWD/supervisord.log)
loglevel=info                                       ; (log level; default info; others: debug, warn, trace)
nodaemon=true                                       ; (start in foreground if true; useful for debugging)
```

Supervisorctl is a command-line utility that lets you see the status and output of processes and start, stop, and restart them. The following will set up supervisorctl using a unix socket file, but it is also possible to monitor processes using a web interface if you wish to do so.
```ini
[unix_http_server]
file = /tmp/supervisord.sock

[supervisorctl]
serverurl = unix:///tmp/supervisord.sock

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
```

Front end config:
```ini
[program:pdp_frontend-v.v.v]
command=/install/location/pyenv/bin/gunicorn -b 0.0.0.0:<port> --access-logfile=<access_logfile> --error-logfile=<error_logfile> pdp.wsgi:frontend
directory=/install/location/
user=www-data
environment=OPTION0="",OPTION2=""...
autostart=true
autorestart=true
redirect_stderr=True
killasgroup=True
```

Back end config:
```ini
[program:pdp_backend-v.v.v]
command=/install/location/pyenv/bin/gunicorn -b 0.0.0.0:<port> --workers 10 --worker-class gevent -t 3600 --access-logfile=<access_logfile> --error-logfile=<error_logfile> pdp.wsgi:backend
directory=/install/location/
user=www-data
environment=OPTION0="",OPTION2=""...
autostart=true
autorestart=true
redirect_stderr=True
killasgroup=True
```

To make starting/stopping easier, add a group to `supervisord.conf`:

```ini
[group:v.v.v]
programs=pdp_frontend-v.v.v,pdp_backend-v.v.v
```

Once the config file has been set up, start the processes with the following command:
```bash
pyenv/bin/supervisord -c path/to/supervisord.conf
```

After invoking Supervisord, use supervisorctl to monitor and update the running processes:

```bash
pyenv/bin/supervisorctl
```

When upgrading, it's easiest to simply copy the existing config and update the paths/version number.
IMPORTANT: When adding a new version, make sure to set the old version's `autostart` and `autorestart` to `false`.
Using supervisorctl, you should then be able to reread the new config, update the old version's config (so it stops and picks up the new `autostart`/`autorestart=false`), and update the new version.
If there are any errors, they can be found in the `supervisord_logfile`. Errors starting Gunicorn can be found in the `error_logfile`.
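The upgrade sequence above might look like the following supervisorctl session. The version numbers are placeholders for illustration; substitute your actual group names:

```bash
pyenv/bin/supervisorctl reread          # pick up the edited supervisord.conf
pyenv/bin/supervisorctl update 1.2.3    # old group stops, picks up autostart=false
pyenv/bin/supervisorctl update 1.2.4    # new group starts under the new config
```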