- Platform Access
- User Accounts
- Apps, Users and Deployments
- Version Control & Images
- Deploying New Versions
- Emergency Rollback
- Non Persistent Filesystem
- Development, Staging and Production Environments
- Add-ons
- Logging
- Provided Subdomains and Custom Domains
- Scaling
- Routing Tier
- Performance & Caching
- Scheduled Jobs and Background Workers
- Secure Shell (SSH)
- Stacks
TL;DR:
- The command line client cctrl is the primary interface.
- We also offer a web console.
- For full control and integration it's possible to talk directly to the RESTful API.
To control the platform we offer different interfaces. The primary way of controlling your apps and deployments is the command-line interface (CLI) called cctrl. Additionally we offer a web console. Both the CLI and the web console, however, are merely frontends to our RESTful API. For deep integration into your apps you can optionally use one of our available API libraries.
Throughout this documentation we will use the CLI as the primary way of controlling the cloudControl platform. The CLI consists of two parts: cctrlapp and cctrluser. To get help for the command line client, just append --help or -h to any of the commands.
Installing cctrl is easy and works on Mac/Linux as well as on Windows.
For Windows we offer an installer. Please download the latest version of the installer from S3. The file is named cctrl-x.x-setup.exe.
On Linux and Mac OS we recommend installing and updating cctrl via pip. cctrl requires Python 2.6+.
$ sudo pip install -U cctrl
If you don't have pip you can install pip via easy_install (on Linux usually part of the python-setuptools package) and then install cctrl.
$ sudo easy_install pip
$ sudo pip install -U cctrl
TL;DR:
- Every developer has their own user account
- User accounts can be created via the web console or via cctrluser create
- User accounts can be deleted via cctrluser delete
To work on and manage your applications on the platform, a user account is needed. User accounts can be created via the Console or using the following CLI command:
$ cctrluser create
After this, an activation email is sent to the given email address. Click the link in the email or use the following CLI command to activate the account:
$ cctrluser activate USER_NAME ACTIVATION_CODE
If you want to delete your user account, please use the following CLI command:
$ cctrluser delete
If you forget your password, you can reset it.
TL;DR:
- Applications (apps) have a repository, deployments and users.
- The repository is where your code lives, organized in branches.
- A deployment is a running version of your application, based on the branch with the same name. Exception: the default deployment is based on the master branch.
- Users can be added to apps to gain access to the repository, branches and deployments.
cloudControl PaaS uses a distinct set of naming conventions. To understand how to work with the platform effectively, it's important to understand the following few basic concepts.
An app consists of a repository (with branches), deployments and users. Creating an app allows you to add or remove users to that app, giving them access to the source code as well as allowing them to manage the deployments.
Creating an app is easy. Simply specify a name and the desired type to determine which buildpack to use.
$ cctrlapp APP_NAME create php
You can always list your existing apps using the command line client too.
$ cctrlapp -l
Apps
Nr Name Type
1 myfirstapp php
2 nextbigthing php
[...]
By adding users to an app you can grant fellow developers access to the source code in the repository, allow them to deploy new versions and modify the deployments including their Add-ons. Permissions are based on the user's roles.
You can list, add and remove app users using the command line client.
$ cctrlapp APP_NAME user
Users
Name Email
user1 user1@example.com
user2 user2@example.com
user3 user3@example.com
To add a user please use their email address. If the user is already registered with that address, they will be added to the app. If not, they will first receive an email invitation and will be added after activating their account.
$ cctrlapp APP_NAME user.add user4@example.com
To remove a user, please use their username.
$ cctrlapp APP_NAME user.remove user3
- Owner: Creating an app makes you the owner and gives you full access. The owner can not be removed from the app and gets charged for all their apps' consumption. If you plan on having multiple developers working on the same app, it's recommended to have a separate admin-like account as the owner of all your apps and add the additional developers (including yourself) separately.
- Developer: The default role for users added to an app is the developer role. Developers have full access to the repository and to all deployments. Developers can add more developers or even remove existing ones. They can even delete deployments and also the app itself. Developers however can not change the associated billing account or remove the owner.
For secure access to the app's repository, each developer needs to authenticate via public/private key authentication. Please refer to GitHub's article on generating SSH keys for details on how to create a key. You can simply add your default key to your user account using the command line client. If the default key can not be found, cctrlapp will offer to create one.
$ cctrluser key add
You can also list the available key ids and remove existing keys using those key ids.
$ cctrluser key
Keys
Dohyoonuf7
$ cctrluser key Dohyoonuf7
ssh-rsa AAA[...]
$ cctrluser key.remove Dohyoonuf7
A deployment is the running version of one of your branches made accessible via a provided subdomain. It is based on the branch of the same name, with the exception of the master branch which is used by the default deployment.
Deployments run independently from each other, including separate runtime environments, file system storage and Add-ons (e.g. databases and caches). This allows you to run different versions of your app at the same time without them interfering with each other. Please refer to the section about development, staging and production environments to understand why this is a good idea.
You can list all the deployments with the details command.
$ cctrlapp APP_NAME details
App
Name: APP_NAME Type: php Owner: user1
Repository: ssh://APP_NAME@cloudcontrolled.com/repository.git
[...]
Deployments
APP_NAME/default
APP_NAME/dev
APP_NAME/stage
TL;DR:
- Git and Bazaar are supported.
- When you push an updated branch, an image of your code gets built, ready to be deployed.
- Image sizes are limited to 200MB (compressed). Use a .cctrlignore file to exclude development assets.
The platform supports Git (quick Git tutorial) and Bazaar (Bazaar in five minutes). When you create an app, we try to determine if the current working directory has a .git or .bzr directory. If it does, we create the app with the detected version control system. If we can't determine this based on the current working directory, Git is used as the default. You can always override this with the --repo command line switch.
$ cctrlapp APP_NAME create php [--repo [git,bzr]]
It's easy to tell what version control system an existing app uses based on the repository URL provided as part of the app details.
$ cctrlapp APP_NAME details
App
Name: APP_NAME Type: php Owner: user1
Repository: ssh://APP_NAME@cloudcontrolled.com/repository.git
[...]
If yours starts with ssh:// and ends with .git then Git is being used. If it starts with bzr+ssh://, Bazaar is being used.
Whenever you push an updated branch, a deployment image is built automatically. This image can then be deployed with the deploy command to the deployment matching the branch name. The contents of the image are generated by the buildpack and usually include your application code in a runnable form plus any dependencies that were installed by the buildpack.
You can use the cctrlapp push command or the normal git/bzr push command.
# with cctrlapp:
$ cctrlapp APP_NAME/dev push
# get the REPO_URL from the output of cctrlapp APP_NAME details
# with git:
$ git remote add cctrl REPO_URL
$ git push cctrl dev
# with bzr:
$ bzr push --remember REPO_URL
The repositories support all other remote operations like pulling and cloning as well.
The compressed image size is limited to 200MB. Smaller images can be deployed faster, so we recommend keeping the image size below 50MB. The image size is printed at the end of the build process; if the image exceeds the limit, the push gets rejected.
You can decrease your image size by making sure that no unneeded files (e.g. caches, logs, backup files) are tracked
in your repository. Files that need to be tracked but are not required in the image (e.g. development assets or
source code files in compiled languages), can be added to a .cctrlignore file in the project root directory.
The format is similar to .gitignore's, but without the negation operator !. Here's an example .cctrlignore:
*.psd
*.pdf
test
spec
During the push a hook is fired that runs the buildpack. A buildpack is a set of scripts that determine how an app in a specific language or framework has to be prepared for deployment on the cloudControl platform. With custom buildpacks, support for new programming languages can be added or custom runtime environments can be built. To support many PaaS providers with one buildpack, we recommend following the Heroku buildpack API, which is compatible with cloudControl and other platforms.
Part of the buildpack scripts is also to pull in library dependencies. The concrete method of doing this varies between different languages and frameworks. E.g. pip and a requirements.txt are used for Python, Maven for Java, npm for node.js, Composer for PHP etc. This allows you to fully control the libraries and versions available to your app in the final runtime environment.
Which buildpack is going to be used is determined by the application type set when creating the app.
A required part of the image is a file called Procfile in the root directory of the repository. It is used to determine how to start the actual application in the container. For a container to be able to receive requests from the routing tier it needs at least the following content:
web: COMMAND_TO_START_THE_APP_AND_LISTEN_ON_A_PORT --port $PORT
For more specific examples of a Procfile please refer to the language and framework guides.
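As an illustrative sketch (not platform-provided code), a minimal Python web process honoring the $PORT convention could look like the following. The file name server.py, the handler and the main/parse_port helpers are made up for this example; the Procfile line would then be something like `web: python server.py --port $PORT`.

```python
import argparse
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from a container\n")

def parse_port(argv=None):
    # The routing tier expects the app to listen on the port passed
    # via --port / the $PORT environment variable.
    parser = argparse.ArgumentParser()
    parser.add_argument("--port", type=int,
                        default=int(os.environ.get("PORT", 8080)))
    return parser.parse_args(argv).port

def main():
    # Bind to all interfaces so the routing tier can reach the container.
    HTTPServer(("0.0.0.0", parse_port()), Handler).serve_forever()
```

The important part is only the port handling; any language's HTTP server works the same way.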
At the end of the buildpack process, the image is ready to be deployed.
The cloudControl platform supports zero downtime deploys for all deployments. To deploy a new version use the cctrlapp deploy command.
$ cctrlapp APP_NAME/DEP_NAME deploy
To deploy a specific version, append your version control system's identifier (the full commit SHA1 for Git or an integer for Bazaar). If not specified, the version to be deployed defaults to the latest image available (the one built during the last successful push).
For every deploy, the image is downloaded to as many of the platform’s nodes as required by the --containers setting and started according to the buildpack’s default or the Procfile. After the new containers are up and running the loadbalancing tier stops sending requests to the old containers and instead sends them to the new version. A log message in the deploy log appears when this process has finished.
If for some reason a new version does not work as expected, you can rollback any deployment to a previous version in a matter of seconds. To do so you can check the deploy log for the previously deployed version (or get it from the version control system directly) and then simply use the Git or Bazaar version identifier that's part of the log output to redeploy this version using the deploy command.
$ cctrlapp APP_NAME/DEP_NAME deploy THE_LAST_WORKING_VERSION
TL;DR:
- Each container has its own filesystem.
- The filesystem is not persistent.
- Don't store uploads on the filesystem.
Deployments on the cloudControl platform have access to a writable filesystem. This filesystem however is not persistent. Data written may or may not be accessible again in future requests, depending on how the routing tier routes requests across available containers, and is deleted after each deploy. This includes not only deploys you trigger manually but also redeploys done by the platform itself during normal operation.
For customer uploads (e.g. user profile pictures) we recommend object stores like Amazon S3 or the GridFS feature available as part of the MongoLab Add-on.
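For genuinely transient data (e.g. a file processed within a single request), writing to the tmp directory is fine. A minimal Python sketch, assuming the TMPDIR environment variable described in the environment variables section; the write_scratch helper name is ours:

```python
import os
import tempfile

def write_scratch(data):
    """Write transient bytes to the container's tmp directory, return the path."""
    # TMPDIR is set by the platform; fall back to the system default locally.
    # Anything written here can disappear on the next deploy or failover.
    tmp_dir = os.environ.get("TMPDIR", tempfile.gettempdir())
    fd, path = tempfile.mkstemp(dir=tmp_dir)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    return path
```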
TL;DR:
- Leverage multiple deployments to support the complete application lifecycle.
- Each deployment has a set of environment variables to help you configure your app.
- Various configuration files are available to adjust runtime settings.
Most apps share a common application lifecycle consisting of development, staging and production phases. The cloudControl platform is designed from the ground up to support this. As we explained earlier, each app can have multiple deployments, and those deployments match the branches in the version control system. The reason for this is very simple: to work on a new feature, it is advisable to create a new branch. This new version can then be deployed as its own deployment, making sure the new feature development does not interfere with the existing deployments. Even more importantly, these development/feature or staging deployments also help ensure that the new code will work in production, because each deployment using the same stack has the same runtime environment.
Sometimes it's useful for the app to check which deployment it is currently running in, e.g. to enable debugging output in development deployments but disable it in production deployments. This can be done by inspecting the environment variables that each deployment makes available to the app. The following environment variables are available:
- TMPDIR: The path to the tmp directory.
- CRED_FILE: The path of the creds.json file containing the Add-on credentials.
- DEP_VERSION: The Git or Bazaar version the image was built from.
- DEP_NAME: The deployment name in the same format as used by the command line client. E.g. myapp/default. This one stays the same even when undeploying and creating a new deployment with the same name.
- DEP_ID: The internal deployment ID. This one stays the same for the deployment's lifetime but changes when undeploying and creating a new deployment with the same name.
- WRK_ID: The internal worker ID. Only set for worker containers.
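As a small Python sketch of inspecting these variables to toggle debug output; the is_production helper and the convention it checks are illustrative, not part of the platform API:

```python
import os

def is_production(env=os.environ):
    """True when running in the default (master-based) deployment."""
    # DEP_NAME has the form "appname/depname". Treating the default
    # deployment as production is our convention, not a platform rule.
    return env.get("DEP_NAME", "").endswith("/default")

# Enable debug output everywhere except in the production deployment.
DEBUG = not is_production()
```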
TL;DR:
- Add-ons give you access to additional services like databases.
- Each deployment needs its own set of Add-ons.
- Add-on credentials are available to your app via the JSON formatted $CRED_FILE (and via environment variables, depending on the app's language).
Add-ons add additional services to your deployment. The Add-on marketplace offers a wide variety of different Add-ons. Think of it as an app store dedicated to developers. Add-ons can be different database offerings, caching, performance monitoring or logging services or even complete backend APIs or billing solutions.
Each deployment needs its own set of Add-ons. If your app needs a MySQL database and you have a production, a development and a staging environment, all three need their own MySQL Add-ons. Each Add-on comes in a few different plans allowing you to choose a more powerful database for your high traffic production deployment and a smaller one for the development or staging environments.
You can see the available Add-on plans on the Add-on marketplace website or with the cctrlapp addon.list command.
$ cctrlapp APP_NAME/DEP_NAME addon.list
[...]
Adding an Add-on is just as easy.
$ cctrlapp APP_NAME/DEP_NAME addon.add ADDON_NAME.ADDON_OPTION
As always replace the placeholders written in uppercase with their respective values.
To get the list of current Add-ons for a deployment use the addon command.
$ cctrlapp APP_NAME/DEP_NAME addon
Addon : alias.free
Addon : newrelic.standard
[...]
Addon : blitz.250
[...]
Addon : memcachier.dev
[...]
To upgrade or downgrade an Add-on use the respective command followed by the Add-on name you upgrade from and the Add-on name you upgrade to.
# upgrade
$ cctrlapp APP_NAME/DEP_NAME addon.upgrade FROM_SMALL_ADDON TO_BIG_ADDON
# downgrade
$ cctrlapp APP_NAME/DEP_NAME addon.downgrade FROM_BIG_ADDON TO_SMALL_ADDON
Remember: As in all examples in this documentation, replace all the uppercase placeholders with their respective values.
For many Add-ons you require credentials to connect to their service. The credentials are exported to the deployment in a JSON formatted config file. The path to the file can be found in the CRED_FILE environment variable. Never hardcode these credentials in your application, because they differ between deployments and can change after any redeploy without notice.
A quick example to get MySQL credentials in PHP:
# read the credentials file
$string = file_get_contents($_ENV['CRED_FILE']);
if ($string === false) {
die('FATAL: Could not read credentials file');
}
# the file content is in JSON format, decode it and return an associative array
$creds = json_decode($string, true);
# now use the $creds array to configure your app e.g.:
$MYSQL_HOSTNAME = $creds['MYSQLS']['MYSQLS_HOSTNAME'];

We recommend using the credentials file for security reasons, but credentials can also be accessed through environment variables. This is disabled by default for PHP and Python apps. Accessing the environment is more convenient in most languages, but some reporting tools or wrong security settings in your app might expose environment variables to external services or even your users. This also applies to any child processes of your app if they inherit the environment (which is the default). When in doubt, disable this feature and use the credentials file.
Set the variable SET_ENV_VARS using the Custom Config Add-on to either False or True to explicitly enable or disable
this feature.
The guides section has detailed examples about how to get the credentials in different languages (Ruby, Python, Java).
To see the format and contents of the credentials file locally, use the addon.creds command.
$ cctrlapp APP_NAME/DEP_NAME addon.creds
{
"BLITZ": {
"BLITZ_API_KEY": "SOME_SECRET_API_KEY",
"BLITZ_API_USER": "SOME_USER_ID"
},
"MEMCACHIER": {
"MEMCACHIER_PASSWORD": "SOME_SECRET_PASSWORD",
"MEMCACHIER_SERVERS": "SOME_HOST.eu.ec2.memcachier.com",
"MEMCACHIER_USERNAME": "SOME_USERNAME"
},
"MYSQLS": {
"MYSQLS_DATABASE": "SOME_DB_NAME",
"MYSQLS_HOSTNAME": "SOME_HOST.eu-west-1.rds.amazonaws.com",
"MYSQLS_PASSWORD": "SOME_SECRET_PASSWORD",
"MYSQLS_PORT": "3306",
"MYSQLS_USERNAME": "SOME_USERNAME"
}
}
TL;DR:
- There are four different log types (access, error, worker and deploy) available.
To see the log output in a tail -f-like fashion use the cctrlapp log command. The log command initially shows the last 500 log messages and then appends new messages as they arrive.
$ cctrlapp APP_NAME/DEP_NAME log [access,error,worker,deploy]
[...]
The access log shows each access to your app in an Apache compatible log format.
The error log shows all output your app prints to stdout, stderr and syslog. It also shows when a new version has been deployed to make it easy to determine if a problem existed already before or only after the last deploy. More detailed information on deploys can be found in the deploy log.
Workers are long running background processes. As such, they are not accessible via HTTP from the outside. To make worker output accessible to you, a worker's stdout, stderr and syslog output is redirected to this log. The worker log shows the timestamp of when the message was written, the wrk_id of the worker the message came from, as well as the actual log line.
The deploy log gives detailed information on the deploy process. It shows on how many nodes your deployment is deployed and lists the nodes themselves, how long it took for each of the nodes to start the container and get the deployment running and also when the loadbalancers started sending traffic to the new version.
Some Add-ons in the Deployment category as well as the Custom Config Add-on can be used to forward error and worker logs to the external logging services.
The Custom Config Add-on can be used to specify an additional endpoint where error and worker logs will be sent. This is done by setting the config variable "RSYSLOG_REMOTE". The content should contain valid rsyslog configuration and can span multiple lines.
E.g. to forward the logs to custom syslog remote over a TLS connection, create a temporary file with the following content:
$DefaultNetstreamDriverCAFile /app/CUSTOM_CERTIFICATE_PATH
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$template CustomFormat, "%syslogtag%%msg%\n"
*.* @@SERVER_ADDRESS:PORT;CustomFormat
where "SERVER_ADDRESS" and "PORT" should be replaced with the concrete values and "CUSTOM_CERTIFICATE_PATH" should be the path to a certificate file for the custom syslog remote in your repository.
Use that file's name (let's say it's named custom_remote.cfg) as a value for the "RSYSLOG_REMOTE" config variable:
$ cctrlapp APP_NAME/DEP_NAME addon.add config.free --RSYSLOG_REMOTE=custom_remote.cfg
From now on all the new logs should be visible in your custom syslog remote.
TL;DR:
- Each deployment is provided a .cloudcontrolled.com subdomain.
- Custom domains are supported via the Alias Add-on.
Each deployment gets a .cloudcontrolled.com subdomain. The default deployment always answers at APP_NAME.cloudcontrolled.com while any additional deployments get a DEP_NAME-APP_NAME.cloudcontrolled.com subdomain.
You can use custom domains to access your deployments. To add a domain like www.example.com, app.example.com or secure.example.com to one of your deployments simply add each one as an alias and add a CNAME for each pointing to your deployment's subdomain. So to point www.example.com to the default deployment of the app called awesomeapp add a CNAME for www.example.com pointing to awesomeapp.cloudcontrolled.com. The Alias Add-on also supports mapping wildcard domains like *.example.com to one of your deployments.
All custom domains need to be verified before they start working. To verify a domain, it is required to also add the cloudControl verification code as a TXT record.
Changes to DNS can take up to 24 hours to take effect. Please refer to the Alias Add-on documentation for detailed instructions on how to set up CNAME and TXT records.
TL;DR:
- You can scale up or down anytime by adding more containers (horizontal scaling) or changing the container size (vertical scaling).
- Use performance monitoring and load testing to determine the optimal scaling settings for your app.
When scaling your apps you have two options. You can either scale horizontally by adding more containers, or scale vertically by changing the container size. When you scale horizontally, the cloudControl loadbalancing and routing tier ensures efficient distribution of incoming requests across all available containers.
Horizontal scaling is controlled by the --containers parameter. It specifies the number of containers you have running. Raising --containers also increases the availability in case of node failures. Deployments with --containers 1 (the default) are unavailable for a few minutes in the event of a node failure until the failover process has finished. Set the --containers value to at least 2 if you want to avoid downtime in such situations.
In addition to controlling the number of containers you can also specify the memory size of a container. Container sizes are specified using the --memory parameter and range from 128MB to 1024MB. To determine the optimal --memory value for your deployment you can use the New Relic Add-on to analyze the memory consumption of your app.
You can use the Blitz.io and New Relic Add-ons to run synthetic load tests against your deployments and analyze how well they perform with the current --containers and --memory settings under expected load to determine the optimal scaling settings and adjust accordingly. We have a tutorial that explains this in more detail.
TL;DR:
- All HTTP requests are routed via the routing tier.
- *.cloudcontrolled.com resolves round robin across available routing tier nodes.
- Requests are routed based on the Host header.
- Use the X-Forwarded-For header to get the client IP.
All HTTP requests made to apps on the platform are routed via the routing tier. It takes care of routing each request to one of the app's containers based on matching the Host header against the list of the deployment's aliases.
The routing tier is designed to be robust against single node and even complete datacenter failures while still keeping the added latency as low as possible.
The *.cloudcontrolled.com subdomains resolve in a round robin fashion to the current list of routing tier node IP addresses. All nodes are equally distributed to the three different availability zones but can route requests to any container in any other availability zone. To keep latency low, the routing tier tries to route requests to containers in the same availability zone unless none are available. Deployments running on --containers 1 (see the scaling section for details) only run in one container and therefore only in one availability zone.
Because of the elastic nature of the routing tier the list of routing tier addresses can change at any time. It is therefore highly discouraged to point custom domains directly to any of the routing tier IP addresses. Please use a CNAME instead. Refer to the custom domain section for more details on the correct DNS configuration.
If a container is not available due to an underlying node failure or a problem with the code in the container itself, the routing tier automatically routes requests to the other available containers of the deployment. Deployments running on --containers 1 will be unavailable for a couple of minutes until a replacement container has been started. To avoid even short downtimes in the event of a single node or container failure set the --containers option to at least 2.
Because client requests don't hit your app directly, but are forwarded via the routing tier, you can't access the client's IP by reading the remote address. The remote address will always be the internal IP of one of the routing nodes. To make the original remote address available, the routing tier sets the X-Forwarded-For header to the original client's IP.
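A short Python sketch of recovering the client IP from that header; the client_ip helper operates on a plain dict of parsed headers and is illustrative only. Note that when additional proxies sit in front of the routing tier, X-Forwarded-For can contain a comma-separated chain of addresses, in which case the original client comes first:

```python
def client_ip(headers, default=""):
    """Return the original client IP as forwarded by the routing tier."""
    forwarded = headers.get("X-Forwarded-For", "")
    if forwarded:
        # With additional proxies the header is a comma-separated chain;
        # the original client address comes first.
        return forwarded.split(",")[0].strip()
    return default
```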
TL;DR:
- Reduce the total number of requests that make up a page view.
- Cache as far away from your database as possible.
- Try to rely on cache breakers instead of flushing.
Perceived web application performance is mostly influenced by the frontend. Very often the highest optimization potential lies in reducing the overall number of requests per page view. Common techniques are combining and minifying JavaScript and CSS files into one file each and using sprites for images.
After you have reduced the total number of requests, it's recommended to cache as far away from your database as possible. Use far-future expire headers to prevent browsers from requesting resources at all. The next best way of reducing the number of requests that hit your backends is to cache complete responses in the loadbalancer. For this we offer caching directly in the loadbalancing and routing tier.
The loadbalancing and routing tier in front of all deployments includes a Varnish caching proxy. To have your responses cached directly in Varnish and speed up the response time, ensure you have set correct cache control headers for the request. Also ensure that the request does not include a cookie. Cookies are often used to keep state across requests (e.g. whether a user is logged in). To avoid caching responses for logged-in users and returning them to other users, Varnish is configured to never cache requests with cookies. To be able to cache requests in Varnish for apps that rely on cookies, we recommend using a cookieless domain.
You can check if a request was cached in Varnish by checking the response's X-varnish-cache header. The value HIT means the response was answered directly from the cache, and MISS means it was not.
To make requests that can't use a cookieless domain faster, you can use in-memory caching to store anything from database query results to complete HTTP responses. Since the cloudControl routing tier distributes requests across all available containers, it is recommended to cache data in a way that also makes it available to requests routed to different containers. A battle-tested solution for this is Memcached, which is available via the MemCachier Add-on. Refer to the managing Add-ons section on how to add it. The MemCachier documentation also has detailed instructions on how to use it with your language and framework of choice.
When caching requests on the client side or in a caching proxy, the URL is usually used as the cache identifier. As long as the URL stays the same and the cached response has not expired, the request is answered from the cache. As part of every deploy, all containers are started from a clean image. This ensures that all containers have the latest app code including templates, CSS, image and JavaScript files. But when using far-future expire headers as recommended above, this doesn't change anything if the response was cached at the client or loadbalancer level. To ensure clients get the latest and greatest version, it is recommended to include a changing parameter in the URL. This is commonly referred to as a cache breaker.
As part of the set of environment variables in the deployment runtime environment the DEP_VERSION is made available to the app. If you want to force a refresh of the cache when a new version is deployed you can use the DEP_VERSION to accomplish this.
This technique works for URLs as well as for the keys in in-memory caches like Memcached. Imagine you have cached values in Memcached that you want to keep between deploys and have values in Memcached that you want refreshed for each new version. Since Memcached only allows flushing the complete cache you would lose all cached values. Including the DEP_VERSION as part of the key of the cached values you want refreshed is an easy way to ensure that the cache gets refreshed.
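A small Python sketch of this key-versioning idea; the helper names are ours:

```python
import os

def versioned_key(key, env=os.environ):
    """Cache key that changes with every deploy (cache breaker)."""
    # DEP_VERSION is set by the platform; "dev" is only a local fallback.
    return "%s:%s" % (key, env.get("DEP_VERSION", "dev"))

def stable_key(key):
    """Cache key that survives deploys unchanged."""
    return key
```

Values cached under versioned_key are effectively refreshed on every deploy without flushing Memcached, while values under stable_key persist across deploys.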
TL;DR:
- Web requests are subject to a time limit of 120s.
- Scheduled jobs are supported through different Add-ons.
- Background workers are the recommended way of handling long running or asynchronous tasks.
Since a web request taking longer than 120s is killed by the routing tier, longer running tasks have to be handled asynchronously.
For tasks that are guaranteed to finish within the time limit, the Cron add-on is a simple solution: it calls a predefined URL periodically, daily or hourly. For more details please refer to the Cron add-on documentation.
Tasks that take longer than 120s to execute, or that are triggered by a user request and should be handled asynchronously so as not to keep the user waiting, are best handled by the Worker add-on. Workers are long running processes started in containers just like the web processes, but they do not listen on a port and do not receive HTTP requests. You can use workers to e.g. poll a queue and execute tasks in the background, or handle long running periodic calculations. More details on usage scenarios and available queuing add-ons are available as part of the Worker add-on documentation.
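A minimal Python sketch of such a worker loop. The in-memory queue is a stand-in for whatever queuing Add-on you actually poll, and the function names are ours:

```python
import queue

def run_worker(tasks, handle, max_idle=3):
    """Poll the queue and run tasks until it stays empty; return tasks handled."""
    handled = 0
    idle = 0
    while idle < max_idle:
        try:
            task = tasks.get(timeout=0.1)
        except queue.Empty:
            idle += 1
            continue
        # Long running work happens here, outside the 120s request limit.
        handle(task)
        handled += 1
        idle = 0
    return handled
```

A real worker would loop forever instead of stopping after max_idle empty polls; the exit condition here only keeps the sketch testable.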
The distributed nature of the cloudControl platform means it's not possible to SSH into an actual server. Instead, we offer the run command, which launches a new container and connects you to it via SSH.
The container is identical to the web or worker containers but starts an SSH daemon instead of one of the Procfile commands. It's based on the same stack image and deployment image and also provides the Add-on credentials.
To start a shell (e.g. bash) use the run command.
$ cctrlapp APP_NAME/DEP_NAME run bash
Connecting...
Warning: Permanently added '[10.62.45.100]:25832' (RSA) to the list of known hosts.
u25832@DEP_ID-25832:~/www$ echo "interactive commands work as well"
interactive commands work as well
u25832@DEP_ID-25832:~/www$ exit
exit
Connection to 10.62.45.100 closed.
Connection to ssh.cloudcontrolled.net closed.
It's also possible to execute a command directly and have the container exit after the command finished. This is very useful for database migrations and other one-time tasks.
Listing the environment variables using "env | sort" works. Note that the quotes are required for a command that includes spaces.
$ cctrlapp APP_NAME/DEP_NAME run "env | sort"
Connecting...
Warning: Permanently added '[10.250.134.126]:10346' (RSA) to the list of known hosts.
CRED_FILE=/srv/creds/creds.json
DEP_ID=DEP_ID
DEP_NAME=APP_NAME/DEP_NAME
DEP_VERSION=9d5ada800eff9fc57849b3102a2f27ff43ec141f
DOMAIN=cloudcontrolled.com
GEM_PATH=vendor/bundle/ruby/1.9.1
HOME=/srv
HOSTNAME=DEP_ID-10346
LANG=en_US.UTF-8
LOGNAME=u10346
MAIL=/var/mail/u10346
OLDPWD=/srv
PAAS_VENDOR=cloudControl
PATH=bin:vendor/bundle/ruby/1.9.1/bin:/usr/local/bin:/usr/bin:/bin
PORT=10346
PWD=/srv/www
RACK_ENV=production
RAILS_ENV=production
SHELL=/bin/sh
SSH_CLIENT=10.32.47.197 59378 10346
SSH_CONNECTION=10.32.47.197 59378 10.250.134.126 10346
SSH_TTY=/dev/pts/0
TERM=xterm
TMP_DIR=/srv/tmp
TMPDIR=/srv/tmp
USER=u10346
WRK_ID=WRK_ID
Connection to 10.250.134.126 closed.
Connection to ssh.cloudcontrolled.net closed.
TL;DR:
- Stacks define the common runtime environment.
- They are based on Ubuntu and stack names match the Ubuntu release's first letter.
- Luigi supports only PHP. Pinky supports multiple languages according to the available buildpacks.
A stack defines the common runtime environment for all deployments using it. By choosing the same stack for all your deployments, it's guaranteed that all your deployments find the same version of all OS components as well as all preinstalled libraries.
Stacks are based on Ubuntu releases and have the same first letter as the release they are based on. Each stack is named after a super hero sidekick. We try to keep them as close to the Ubuntu release as possible, but do make changes when necessary for security or performance reasons to optimize the stack for its specific purpose on our platform.
- Luigi based on Ubuntu 10.04 LTS Lucid Lynx
- Pinky based on Ubuntu 12.04 LTS Precise Pangolin
You can change the stack per deployment. This is handy for testing new stacks before migrating the production deployment. To see the stack a deployment is using, refer to the deployment details.
$ cctrlapp APP_NAME/DEP_NAME details
name: APP_NAME/DEP_NAME
stack: luigi
[...]
To change the stack of a deployment simply append the --stack command line option to the deploy command.
$ cctrlapp APP_NAME/DEP_NAME deploy --stack [luigi,pinky]