This project provides material and documentation to install a complete CI/CD stack on Minikube or other Kubernetes clusters for evaluation and sample scenarios of IBM Automation Decision Services.
The stack is composed of:
- A Git server (Gitea)
- A Maven artifact repository (Nexus)
- A continuous integration server (Jenkins)
The installation procedure is followed by post-installation steps to connect your Automation Decision Services instance to this CI/CD stack.
Your CI/CD stack must be reachable from the Automation Decision Services instance. Typically, if you installed Automation Decision Services on a cloud cluster or an enterprise cluster, you probably won't be able to install the CI/CD stack on your personal computer. If you installed Automation Decision Services on Minikube or Minishift on your personal computer, you can install the CI/CD stack in the same Minikube or Minishift instance.
Limitations:

- Security level is low:
  - No TLS
  - Default user accounts with trivial passwords
  - Some container images run as root
- No automated backups of user data
Prerequisites:
- A Kubernetes cluster; see below for installation instructions on Minikube, Minishift, or OpenShift 3.11.
- Docker command line
- Helm 2.14.3 or above
- Python 3.6 or above
Install on Minikube:

Prerequisites:
- Minikube 1.1.1 or above, with addons `ingress` and `storage-provisioner` enabled (see `minikube addons list`). If you use version 1.4 or above, use the start option `--kubernetes-version=1.15.0`.
- Docker command line
- Helm 2.14.3 or above
Procedure:

- Start Minikube and enable the required addons.

  ```shell
  minikube start --memory=4g --cpus=4
  # or add any other relevant minikube option for your platform, for instance
  # on macOS with minikube 1.4:
  # minikube start --memory=4g --cpus=4 --vm-driver=hyperkit --kubernetes-version=1.15.0
  minikube addons enable ingress
  minikube addons enable storage-provisioner
  ```

- Install the CI/CD stack (it may take a few minutes to complete).

  ```shell
  ./scripts/minikube_install.sh
  ```

- Confirm that all pods have a `Running` status and are `Ready`.

  ```
  $ kubectl get pods
  NAME                                      READY   STATUS    RESTARTS   AGE
  ci-cd-demo-devops-gitea-d756ff49c-m944s   3/3     Running   0          101s
  ci-cd-demo-devops-nexus-89cf4bfd6-4wfr6   2/2     Running   0          101s
  ci-cd-demo-jenkins-6dd459bdfd-jcgcw       1/1     Running   0          101s
  ```
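Instead of polling `kubectl get pods` by eye, you can script the readiness check. A minimal sketch, assuming the stack is deployed in the current namespace (the 300-second timeout is an arbitrary choice):

```shell
#!/bin/sh
# Block until every pod in the current namespace is Ready, or time out.
# The 300s timeout is an arbitrary choice; raise it on slow machines.
TIMEOUT="300s"
if command -v kubectl >/dev/null 2>&1; then
  kubectl wait --for=condition=Ready pod --all --timeout="$TIMEOUT"
else
  echo "kubectl not found; skipping readiness wait"
fi
```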
- Gitea server is now available at `http://git.<minikube_ip>.nip.io` with user `demo` and password `demo` (adapt to the IP address of your Minikube as reported by `minikube ip`).
- Nexus is now available at `http://nexus.<minikube_ip>.nip.io` with user `nexusdemo` and password `nexusdemo`.
- Jenkins is now available at `http://jenkins.<minikube_ip>.nip.io` with user `admin` and password `admin`. In Jenkins, the new Maven jobs have a `settings.xml` preconfiguration that points to the Nexus repositories.
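The three URLs can be smoke-tested from the command line. A sketch, assuming you pass the IP reported by `minikube ip` as the first argument; it only checks that each host answers HTTP at all:

```shell
#!/bin/sh
# Probe the Gitea, Nexus and Jenkins ingress hosts built from the Minikube IP.
IP="${1:-192.168.99.100}"   # replace the fallback with the output of `minikube ip`
for host in git nexus jenkins; do
  url="http://$host.$IP.nip.io"
  # -f: fail on HTTP errors, -s: silent, --max-time: don't hang on a dead host
  if curl -fs --max-time 10 -o /dev/null "$url" 2>/dev/null; then
    echo "$url OK"
  else
    echo "$url unreachable"
  fi
done
```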
You can now continue with the post-installation steps to connect your Automation Decision Services instance to this CI/CD stack.
Uninstall:

- Uninstall the Helm release.

  ```shell
  helm delete ci-cd-demo --purge
  ```

- Delete the persistent volume claims that are left by Helm (their names are given by the previous `helm delete` output).

  ```shell
  kubectl delete pvc ci-cd-demo-devops-gitea ci-cd-demo-postgres
  ```
Install on Minishift:

Prerequisites:
- Minishift 1.34 or above
- `oc` OpenShift command line (delivered with Minishift)
- Docker command line
- Helm 2.14.3 or above
The following procedure creates a new project in OpenShift (default name is `ci-cd`) and installs the three components of the CI/CD stack. Helm is used to generate the Kubernetes resource files from the Helm templates, but the Tiller server is not required.
Procedure:

- Start Minishift.

  ```shell
  minishift start --memory=4g --cpus=4
  # or add any other relevant minishift option for your platform, for instance
  # on macOS:
  # minishift start --memory=4g --cpus=4 --vm-driver=hyperkit
  ```

- Install the CI/CD stack (it may take a few minutes to complete). The script waits for all components to be available.

  ```shell
  ./scripts/minishift_install.sh -n local-ci-cd   # default is `ci-cd`
  ```
- Gitea server is now available at `http://git.<minishift_ip>.nip.io` with user `demo` and password `demo` (adapt to the IP address of your Minishift as reported by `minishift ip`).
- Nexus is now available at `http://nexus.<minishift_ip>.nip.io` with user `nexusdemo` and password `nexusdemo`.
- Jenkins is now available at `http://jenkins.<minishift_ip>.nip.io` with user `admin` and password `admin`. In Jenkins, the new Maven jobs have a `settings.xml` preconfiguration that points to the Nexus repositories.
You can now continue with the post-installation steps to connect your Automation Decision Services instance to this CI/CD stack.
Uninstall:

You can either delete the whole project:

```shell
oc delete project local-ci-cd
```

or delete the elements that are deployed in the project but keep the project:

```shell
oc delete -n local-ci-cd -f tmp/minishift-rendered.yaml
```

The `tmp/minishift-rendered.yaml` file is created at installation time.
Install on OpenShift:

Prerequisites:
- OpenShift cluster 3.11 or 4.2
- `oc` OpenShift command line
- Docker command line
- Helm 2.14.3 or above
The following procedure creates a new project in OpenShift and installs the three components of the CI/CD stack. Helm is used to generate the Kubernetes resource files from the Helm templates but the Tiller server is not required.
Procedure:

- Install the CI/CD stack (it may take a few minutes to complete).

  - Connect to the OpenShift server.

    ```shell
    oc login $SERVER -u $USER -p $PASSWORD
    ```

  - Create a new project.

    Choose the project (`PROJECT`) where to deploy the sample CI/CD stack and add the security context constraint `anyuid` to this namespace.

    ```shell
    oc new-project "$PROJECT" --description="Sample CICD stack tools - Gitea, Nexus and Jenkins" --display-name="DevOps Stack"
    oc --as=system:admin adm policy add-scc-to-user anyuid system:serviceaccount:$PROJECT:default
    ```

  - Build and push Gitea and Nexus images to the OpenShift registry.

    Build the Gitea and Nexus images locally and then push them to the OpenShift registry with the public image registry name (`PUBLIC_IMAGE_REGISTRY`).

    ```shell
    docker build -t "$PUBLIC_IMAGE_REGISTRY/$PROJECT/gitea:0.1.0" ./images/gitea
    docker build -t "$PUBLIC_IMAGE_REGISTRY/$PROJECT/nexus:0.2.1" ./images/nexus
    docker login -u $(oc whoami) -p $(oc whoami -t) $PUBLIC_IMAGE_REGISTRY
    docker push "$PUBLIC_IMAGE_REGISTRY/$PROJECT/gitea:0.1.0"
    docker push "$PUBLIC_IMAGE_REGISTRY/$PROJECT/nexus:0.2.1"
    ```
  - Customize the Helm values files.

    Customize the `configs/openshift-values.yaml` file with the required values. The hosts for Gitea (`GIT_HOST`), Nexus (`NEXUS_HOST`), and Jenkins (`JENKINS_HOST`) need to be set. The image names need to be set with a registry name that is visible inside the OpenShift cluster (`IMAGES_REGISTRY`). You can use the script `scripts/customize_openshift_values.sh` to perform this action.

    ```shell
    ./scripts/customize_openshift_values.sh $IMAGES_REGISTRY $PROJECT $GIT_HOST $NEXUS_HOST $JENKINS_HOST > /tmp/customized_openshift_values.yaml
    ```
  - Deploy the sample CI/CD stack.

    Use Helm to generate the Kubernetes resource files from the Helm templates and to deploy the sample CI/CD stack.

    ```shell
    helm template helm-charts --name devops-stack --namespace $PROJECT --values /tmp/customized_openshift_values.yaml > /tmp/openshift-rendered.yaml
    oc create -f /tmp/openshift-rendered.yaml
    ```

    Wait for all components to be available.
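The wait can be scripted instead of polled. A sketch, assuming the `oc` CLI is logged in to the cluster and `PROJECT` holds the project name you created above:

```shell
#!/bin/sh
# Wait for every deployment in the project to finish rolling out.
CICD_PROJECT="${PROJECT:-ci-cd}"   # default project name used in this document
if command -v oc >/dev/null 2>&1; then
  for d in $(oc get deploy -n "$CICD_PROJECT" -o name); do
    oc rollout status -n "$CICD_PROJECT" "$d" --timeout=300s
  done
else
  echo "oc not found; skipping rollout wait"
fi
```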
- Gitea server is now available with user `demo` and password `demo`.
- Nexus is now available with user `nexusdemo` and password `nexusdemo`.
- Jenkins is now available with user `admin` and password `admin`. In Jenkins, the new Maven jobs have a `settings.xml` preconfiguration that points to the Nexus repositories.
You can now continue with the post-installation steps to connect your Automation Decision Services instance to this CI/CD stack.
Uninstall:

You can either delete the whole project:

```shell
oc delete project ci-cd
```

or delete the elements that are deployed in the project but keep the project:

```shell
oc delete -n ci-cd -f /tmp/openshift-rendered.yaml
```

The `/tmp/openshift-rendered.yaml` file is created at installation time.
Post-installation:

- Run the script `install-ads-maven-plugin.sh` to download the Automation Decision Services Maven plug-in and other artifacts from your Automation Decision Services installation and to upload them to your Nexus server. You must be authenticated with UMS to access these resources.

  You need to get an access token using the OIDC `client_credentials` authentication flow with UMS. The steps are the same as the `password` flow described in the UMS password credential flow documentation, replacing the grant type with `client_credentials` and removing the `username` and `password` parameters from the access token request.

  ```shell
  ./scripts/install-maven-plugin.sh <ADS_DESIGNER_URL> <NEXUS_URL> <ACCESS_TOKEN>
  ```

  For example, if you installed Automation Decision Services and the CI/CD stack locally on Minishift, the IP address is `192.168.64.10`, and you obtained the access token `sCBa88JWCXu9tK6Zcjn4sOt3Ow5NYgNai3AF5wiO`, run the script:

  ```shell
  ./scripts/install-maven-plugin.sh 'https://ads.192.168.64.10.nip.io/' 'http://nexus.192.168.64.10.nip.io/' 'sCBa88JWCXu9tK6Zcjn4sOt3Ow5NYgNai3AF5wiO'
  ```
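Obtaining the access token can be sketched with curl. The token endpoint path below is an assumption — check your UMS installation for the exact URL — and the client ID and secret are placeholders for a client registered in UMS:

```shell
#!/bin/sh
# Build a client_credentials token request for UMS (all values are placeholders).
UMS_URL="https://ums.example.com"        # assumption: your UMS base URL
CLIENT_ID="my-client"                    # placeholder client registered in UMS
CLIENT_SECRET="my-secret"                # placeholder client secret
BODY="grant_type=client_credentials&client_id=$CLIENT_ID&client_secret=$CLIENT_SECRET"
echo "POST $UMS_URL/oidc/endpoint/ums/token"
# Uncomment to send the request (-k skips TLS verification for self-signed certs):
# curl -ks -X POST "$UMS_URL/oidc/endpoint/ums/token" \
#      -H 'Content-Type: application/x-www-form-urlencoded' -d "$BODY"
# The token is returned in the "access_token" field of the JSON response.
```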
- Check that the artifacts have been uploaded to Nexus:

  - Sign in to `<NEXUS_URL>` with user `nexusdemo` and password `nexusdemo`, and complete the configuration wizard if needed, making sure that anonymous access is enabled. Then sign out.

  - Search anonymously for the Maven `com.ibm.decision` groupId to verify that you can access the artifacts.
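The anonymous check can also be scripted, assuming your Nexus server is version 3 (which exposes a search REST API). A sketch reusing the Minishift example URL from above:

```shell
#!/bin/sh
# Query the Nexus 3 search API anonymously for the com.ibm.decision group.
NEXUS_URL="${NEXUS_URL:-http://nexus.192.168.64.10.nip.io}"   # adapt to your Nexus URL
QUERY="$NEXUS_URL/service/rest/v1/search?group=com.ibm.decision"
echo "GET $QUERY"
# Uncomment to run the query; a non-empty "items" array means the upload worked:
# curl -s "$QUERY"
```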
- Configure the Decision Designer instance to register the Gitea server of the CI/CD stack:

  - Edit the `gitCredentials` attribute of the admin secret referenced in the Decision Designer configuration property `decisionDesigner.adminSecretName` and add an entry for the Gitea server.

    For example, if you installed the CI/CD stack on Minishift and the command `minishift ip` returns `192.168.64.10`, the entry to add in the `gitCredentials` JSON value is:

    ```
    "git.192.168.64.10.nip.io": { "user": "demo", "password": "demo" }
    ```

    Because this CI/CD stack does not use TLS, you don't need to register any new TLS certificate in the list of trusted certificates of Automation Decision Services.

  - After you edit the admin secret, restart all the `gitservice` pods of the ADS instance:

    ```shell
    kubectl patch "$(kubectl get deploy -l "app.kubernetes.io/component=gitservice,app.kubernetes.io/name=ibm-automation-decision-services" -o name)" -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"force_restart_date\":\"`date +'%s'`\"}}}}}"
    ```
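The secret edit above can be scripted: decode the current `gitCredentials` value, add the Gitea entry, then re-encode and patch. A sketch, where the secret name `ads-admin-secret` is a placeholder for the value of `decisionDesigner.adminSecretName`:

```shell
#!/bin/sh
# Decode, edit and re-encode the gitCredentials entry of the admin secret.
SECRET="ads-admin-secret"   # placeholder: use your decisionDesigner.adminSecretName
ENTRY='"git.192.168.64.10.nip.io": { "user": "demo", "password": "demo" }'
echo "entry to add: $ENTRY"
# 1. Dump the current JSON value:
# kubectl get secret "$SECRET" -o jsonpath='{.data.gitCredentials}' | base64 -d > gitCredentials.json
# 2. Add the entry above to gitCredentials.json with a text editor, then:
# kubectl patch secret "$SECRET" \
#   -p "{\"data\":{\"gitCredentials\":\"$(base64 < gitCredentials.json | tr -d '\n')\"}}"
```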
- Configure the decision runtime to fetch the decision service archives from the Nexus repository:

  - Edit the `values.yaml` file of your Automation Decision Services installation to inject the URL of your Nexus repository into the key `decisionRuntimeService.archiveRepository.urlPrefix`:

    ```
    decisionRuntime:
      archiveRepository:
        urlPrefix: http://nexus.192.168.64.10.nip.io/maven-snapshots
    ```

  - Edit the secret of the decision runtime referenced in the runtime configuration (key `decisionRuntime.adminSecretName` in the `values.yaml` file) and set `archiveRepositoryUser` and `archiveRepositoryPassword` with the credentials of the Nexus server: `nexusdemo` and `nexusdemo`.

  - Force a restart of all the `runtime` pods of the ADS instance:

    ```shell
    kubectl patch "$(kubectl get deploy -l "app.kubernetes.io/component=decisionRuntimeService,app.kubernetes.io/name=ibm-automation-decision-services" -o name)" -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"force_restart_date\":\"`date +'%s'`\"}}}}}"
    ```
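The credential update in the second step above can be scripted the same way. A sketch, where `ads-runtime-secret` is a placeholder for the value of `decisionRuntime.adminSecretName`:

```shell
#!/bin/sh
# Base64-encode the Nexus credentials and patch the decision runtime secret.
RUNTIME_SECRET="ads-runtime-secret"   # placeholder: use your decisionRuntime.adminSecretName
USER_B64=$(printf '%s' "nexusdemo" | base64)
PASS_B64=$(printf '%s' "nexusdemo" | base64)
echo "archiveRepositoryUser=$USER_B64 archiveRepositoryPassword=$PASS_B64"
# Uncomment to apply:
# kubectl patch secret "$RUNTIME_SECRET" \
#   -p "{\"data\":{\"archiveRepositoryUser\":\"$USER_B64\",\"archiveRepositoryPassword\":\"$PASS_B64\"}}"
```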
You can now go to the section "Building and deploying decision services" of the Automation Decision Services documentation (installguide.pdf) for instructions on how to create a build plan in Jenkins to build a decision service project published in your Git server.
Copyright 2020 IBM Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.