The Maximo Core automation is broken into different layers of automation, or bundles:
| BOM ID | Name | Description | Run Time |
|--------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|
| 200 | [200 - OpenShift Gitops](./200-openshift-gitops) | Set up OpenShift GitOps in ROSA, ARO or ROKS. This is required to install the software using gitops. Only use this if you are bringing your own OpenShift cluster which has not been provisioned from the reference architectures listed above. | 10 Mins |
| 210 | Storage <br/> [210-aws-portworx-storage](./210-aws-portworx-storage) <br/> [210-azure-portworx-storage](./210-azure-portworx-storage) <br/> [210-ibm-odf-storage](./210-ibm-odf-storage) <br/> [210-ibm-portworx-storage](./210-ibm-portworx-storage) | If you are installing into your own ROKS cluster on IBM Cloud you will need to use this automation bundle to configure storage in the cluster before installing Maximo Core. Only one of these layers needs to be applied. Note: this may be optional if you have already provisioned storage on your cluster. | 30 Mins |
| 400 | [400 - Maximo Core - Multi Cloud](./400-mas-core-multicloud) | Provision Maximo Core into a multi-cloud environment; AWS, Azure, and IBM Cloud are supported. | 30 Mins |

## Installation Steps
Ensure the following before continuing:
- You are able to login to the OpenShift cluster and obtain an OpenShift login token
- You have a Cloud Pak entitlement key, which can be obtained from the [IBM Container Library](https://myibm.ibm.com/products-services/containerlibrary)

### Installing Maximo App Suite Core

The installation process uses a standard GitOps repository that has been built using the Modules to support the Maximo Core installation. The automation is consistent across three cloud environments: AWS, Azure, and IBM Cloud.

The following are variables that you will be prompted for, along with some suggested values:

| Variable | Description | Suggested Value |
|----------|-------------|-----------------|
| rwx_storage_class | The storage class to use for Read-Write-Many volumes. Use a portworx or odf storage class (e.g. portworx: portworx-rwx-gp3-sc or odf: ocs-storagecluster-cephfs) | portworx-rwx-gp3-sc |
| rwo_storage_class | The storage class to use for Read-Write-One volumes. on aws: `gp2`, on azure: `managed-premium`, on ibm: `ibmc-vpc-block-10iops-tier` | ibmc-vpc-block-10iops-tier |
| file_storage_class | The storage class to use for file storage volumes. on aws: `gp2`, on azure: `managed-premium`, on ibm: `ibmc-vpc-block-10iops-tier` | ibmc-vpc-block-10iops-tier |
| block_storage_class | The storage class to use for block storage volumes. on aws: `gp2`, on azure: `managed-premium`, on ibm: `ibmc-vpc-block-10iops-tier` | ibmc-vpc-block-10iops-tier |
| gitops_repo_host | The host for the git repository. | github.com |
| gitops_repo_type | The type of the hosted git repository (github or gitlab). | github |
| gitops_repo_org | The org/group where the git repository will be created. If the value is left blank then it will default to the username | github userid or org |
| gitops_repo_repo | The short name of the repository to create | gitops-mas-ibmcloud |
| gitops_repo_username | The username of the user with access to the repository | github userid |
| gitops_repo_token | The git personal access token | BFe4k0MFK9s5RGIt... |
| entitlement_key | CloudPak Entitlement Key | eyJhbGciOiJIUzI1NiJ9.eyJpc3... |
| cluster_ingress | Ingress of the Cluster | masdemo.us-east-container.appdomain.cloud |
| gitops-cp-maximo_instanceid | Instance name for MAS - for example: masdemo or mas8 | mas8 |
| sls-namespace_name | Namespace for IBM SLS | ibm-sls |
| mongo-namespace_name | Namespace for Mongo | mongo |
| bas-namespace_name | Namespace for BAS | masbas |
| server_url | URL of the OpenShift cluster | https://c100-e.us-east.containers.cloud.ibm.com:32346 |
| cluster_login_token | OpenShift cluster login token | sha256~nlXiXCYO_kEydz36B88y0reQ... |
| gitops-cluster-config_banner_text | Banner text for the cluster console | Maximo Application Suite |
| portworx_spec_file | The name of the file containing the portworx configuration spec yaml | portworx_essentials.yaml |

1. Update the desired values in `terraform.tfvars`. In many cases the default values are good.
2. Save the `terraform.tfvars` file.
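
For reference, a populated `terraform.tfvars` might look like the fragment below. All values are illustrative, drawn from the suggested values in the table above; substitute your own cluster, repository, and credential details.

```hcl
# Illustrative terraform.tfvars fragment -- substitute your own values
gitops_repo_host     = "github.com"
gitops_repo_type     = "github"
gitops_repo_org      = "my-org"                 # hypothetical org/group
gitops_repo_repo     = "gitops-mas-ibmcloud"
gitops_repo_username = "my-github-user"         # hypothetical user
rwx_storage_class    = "portworx-rwx-gp3-sc"
rwo_storage_class    = "ibmc-vpc-block-10iops-tier"
cluster_ingress      = "masdemo.us-east-container.appdomain.cloud"
server_url           = "https://c100-e.us-east.containers.cloud.ibm.com:32346"
```

Sensitive values such as `gitops_repo_token` and `entitlement_key` are prompted for separately and should not be committed to source control.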

#### Apply the automation

1. We are now ready to start installing Maximo Core. Run the `launch.sh` command if you are not inside the running container. Make sure you are in the root of the automation-maximo-app-suite repository.

```shell
./launch.sh
```

2. Navigate into the `/workspaces/current` folder.

> ❗️ Do not skip this step. You must execute from the `/workspaces/current` folder.

3. (Optional, if installing GitOps) Navigate into the `200-openshift-gitops` folder and run the following commands:

```
cd 200-openshift-gitops
terraform init
terraform apply --auto-approve
```


When that has completed, you can move on.
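
To confirm that the GitOps layer is healthy before moving on, you can check its pods from the CLI. This is a sketch: it assumes `oc` is on your PATH and that the operator uses the default `openshift-gitops` namespace.

```shell
# Default namespace used by the OpenShift GitOps operator (assumption)
GITOPS_NS="openshift-gitops"

if command -v oc >/dev/null 2>&1; then
  # The argocd application-controller, repo-server, and server pods
  # should all reach the Running state
  oc get pods -n "$GITOPS_NS"
else
  echo "oc CLI not found; log in to your cluster and re-run this check"
fi
echo "Checked GitOps namespace: $GITOPS_NS"
```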

4. (Optional, if installing storage) Change directories to the appropriate `210-*` folder and run the following commands to deploy storage into your cluster:

```
cd ../210-ibm-portworx-storage
terraform init
terraform apply --auto-approve
```

> This folder will vary based on the platform and storage options that you selected in earlier steps.

When that has completed, you can move on. Be sure to give it adequate time: the terraform automation will finish before the storage is fully provisioned, and provisioning can take around 25 minutes depending on your cluster.
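
Because the terraform run returns before the storage finishes provisioning, it can help to poll the cluster. A sketch, assuming `oc` is available; the namespace shown is the usual Portworx location (`kube-system`) and will differ for ODF (`openshift-storage`).

```shell
# Namespace where the storage pods run -- kube-system for Portworx,
# openshift-storage for ODF (assumption based on common defaults)
STORAGE_NS="kube-system"

if command -v oc >/dev/null 2>&1; then
  # The automation should have created one or more storage classes
  oc get storageclass
  # Re-run until all storage pods are Running before installing Maximo Core
  oc get pods -n "$STORAGE_NS"
else
  echo "oc CLI not found; log in to your cluster and re-run this check"
fi
echo "Checked storage namespace: $STORAGE_NS"
```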

5. Change directories to the `400-*` folder and run the following commands to deploy Maximo Core into your cluster:

```
cd ../400-mas-core-multicloud
terraform init
terraform apply --auto-approve
```

Once the installation has finished you will see a message from terraform that shows the state of the resources that were provisioned.
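
While MAS Core continues to come up, you can watch progress from the CLI. A sketch assuming an instance ID of `mas8` (substitute your `gitops-cp-maximo_instanceid` value) and that `oc` is logged in to the cluster.

```shell
MAS_INSTANCE_ID="mas8"                # assumption: your gitops-cp-maximo_instanceid
CORE_NS="mas-${MAS_INSTANCE_ID}-core"

if command -v oc >/dev/null 2>&1; then
  # Pods appear gradually as the operators reconcile; re-run until all are Running
  oc get pods -n "$CORE_NS"
  # If the MAS Suite CRD is installed, it reports overall readiness
  oc get suite -n "$CORE_NS" 2>/dev/null || true
else
  echo "oc CLI not found; log in to your cluster and re-run this check"
fi
echo "Watching MAS Core namespace: $CORE_NS"
```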

The Maximo Application Suite will continue for approximately another 30 minutes while it sets up MAS and all the components for MAS-Core.

### Validating the Maximo Core installation

The initial setup for MAS is done through the web console, which can be found at the following URL:

`https://admin.${YourDomainURL}/initialsetup`

The admin workspace URL has been added to the OpenShift application link menu and can be found there. Note that you will need to append `initialsetup` to the end of that URL for the initial setup. Afterwards, you will be able to access the MAS Admin workspace directly from this menu link without appending anything to the URL.
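
If you are unsure of `${YourDomainURL}`, the cluster ingress domain can be read directly from OpenShift. A sketch, assuming `oc` is available; depending on how MAS was configured, the suite domain may also include the instance ID as a subdomain.

```shell
if command -v oc >/dev/null 2>&1; then
  # Read the cluster ingress domain from the cluster configuration
  DOMAIN=$(oc get ingresses.config/cluster -o jsonpath='{.spec.domain}')
else
  # Fallback example value (matches the cluster_ingress example above)
  DOMAIN="masdemo.us-east-container.appdomain.cloud"
fi
echo "Initial setup URL: https://admin.${DOMAIN}/initialsetup"
```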

Note: Depending on your browser, you may have to import the self-signed certificate into your keystore (for example, on macOS).

Log in as the superuser with the credentials found in the secret named `{masInstanceID}-credentials-superuser` in the OpenShift project named `mas-{masInstanceID}-core`.
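
The credentials can also be pulled from the CLI. A sketch, assuming `oc` is logged in, an instance ID of `mas8`, and that the secret stores the values under `username` and `password` keys.

```shell
MAS_INSTANCE_ID="mas8"  # assumption: replace with your MAS instance ID
SECRET_NAME="${MAS_INSTANCE_ID}-credentials-superuser"
CORE_NS="mas-${MAS_INSTANCE_ID}-core"

if command -v oc >/dev/null 2>&1; then
  # Decode the superuser username and password (key names are assumptions)
  oc get secret "$SECRET_NAME" -n "$CORE_NS" -o jsonpath='{.data.username}' | base64 -d && echo
  oc get secret "$SECRET_NAME" -n "$CORE_NS" -o jsonpath='{.data.password}' | base64 -d && echo
else
  echo "oc CLI not found; log in to your cluster and re-run this check"
fi
echo "Superuser secret: $SECRET_NAME (project: $CORE_NS)"
```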

## Installing Maximo App Suite - Manage

The following instructions can be used to install the Maximo Application Suite - Manage application. Manage is a necessary prerequisite for most of the MAS applications. If your environment does not require Manage, then you can skip this step.

1. Before installing, ensure that your `terraform.tfvars` file has the configuration you want for MAS Manage. These values should have been set based on your settings from running `setup-workspace.sh` above, but can be modified here.

2. Supported Manage Industry Add-on solutions (optional)
This module currently supports installing the following add-ons with MAS Manage:
- Health
- Civil Infrastructure

Set the `addons` variable in `terraform.tfvars` so that it looks like the following to deploy Health with Manage:

`addons = ["health"]`

To deploy Civil Infrastructure use:

`addons = ["civil"]`

The other industry solutions follow the same format and are comma-separated if more than one is needed. Note: be sure to check MAS requirements, as not all industry solutions can be deployed together.

3. Demo Data (optional)
Demo data can be deployed automatically with Manage. This significantly increases the installation time for Manage and requires space in your database, so by default demo data is not deployed. Demo data can be useful because it also installs default users; to actually log in to Manage you will need the `maxadmin` user defined and configured, which the demo data install does automatically.

If this instance is purely for demonstration or PoC purposes, you may want to deploy demo data. In this case you can set the following:

`demodata = true`
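
Putting the Manage options together, a `terraform.tfvars` fragment for a demo-style install might look like the following (illustrative only; check add-on compatibility before combining industry solutions):

```hcl
# Illustrative terraform.tfvars fragment for a demo Manage install
addons   = ["health"]   # supported values include "health" and "civil"
demodata = true         # installs sample data and the maxadmin user
```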

4. Change directories to the `405-*` folder and run the following commands to deploy Manage into your cluster:

```
cd ../405-mas-manage
terraform init
terraform apply --auto-approve
```

Once the automation has finished you will see a message from terraform that shows the state of the resources that were provisioned.

The Manage installation will continue for approximately another 2.5 to 5 hours, depending on whether demo data was installed.

### Validating the Manage installation

During the installation process, you can log in to the Maximo admin workspace using the link on the OpenShift Application Menu, navigate to the application Catalog, and select the Manage icon. This shows the individual installation steps and their current state. Note that some steps may show a failure at first; this is expected, as those steps may not have completed yet. When all installation steps show a green checkmark, the Manage installation is complete and ready to use.