fix: wpb-22590 add pages for ansible operations in production and intro #106
mohitrajain wants to merge 2 commits into wpb-22590-2-wiab-dev
Conversation
| Cassandra, Minio, PostgresSQL, RabbitMQ and Elasticsearch are running outside Kubernets cluster, make sure those machines have necessary ports open - | ||
| On each of the machines running Cassandra, Minio, PostgresSQL, RabbitMQ and Elasticsearch, run the following commands to open the necessary ports, if needed: | ||
| ```bash |
Maybe we can consider automating this via Ansible.
Yes, we can automate it, but by default we have noticed that the VM firewalls are open. Here we are trying to say that these ports should be open. Let me rephrase it.
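If this were automated via Ansible as suggested above, it could look something like the sketch below. This is only an illustration, assuming `ufw` is the firewall in use; the group name, module choice, and port are examples (9042 is Cassandra's CQL port), not variables from the repo:

```yaml
# Hypothetical sketch of automating the firewall opening per service group.
# Group name, module, and port are illustrative, not taken from wire-server-deploy.
- hosts: cassandra
  become: true
  tasks:
    - name: Allow the Cassandra CQL port
      community.general.ufw:
        rule: allow
        port: "9042"
        proto: tcp
```

Analogous tasks would be needed for the Minio, PostgreSQL, RabbitMQ, and Elasticsearch groups with their respective ports.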
| - `./ansible/inventory/group_vars/all/secrets.yaml` - This file will be used by ansible playbooks to configure service secrets. | ||
| - `values/wire-server/secrets.yaml` - This contains the secrets for Wire services and share some secrets from coturn and database services. | ||
| - `values/coturn/secrets.yaml` - This contains a secret for the coturn service. | ||
Both for wire-server and coturn, the secrets are created as `prod-secrets.example.yaml`, last time I checked.
| [postgresql:vars] | ||
| postgresql_network_interface=enp7s0 | ||
| wire_dbname=wire-server | ||
| repmgr_node_config={"postgresql1":{"node_id":1,"priority":150,"role":"primary"},"postgresql2":{"node_id":2,"priority":100,"role":"standby"},"postgresql3":{"node_id":3,"priority":50,"role":"standby"}} |
Could use some serialization format for better readability.
The INI inventory format doesn't support multiline values. We could pick the YAML format instead, but indentation can become tricky for the end user. I will create an internal ticket to convert the default inventory format to YAML.
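For comparison, a sketch of how the same variable could read in a YAML inventory (illustrative only; this is not the format the repo currently ships):

```yaml
# Hypothetical YAML-inventory rendering of repmgr_node_config (illustrative only)
postgresql:
  vars:
    repmgr_node_config:
      postgresql1: { node_id: 1, priority: 150, role: primary }
      postgresql2: { node_id: 2, priority: 100, role: standby }
      postgresql3: { node_id: 3, priority: 50, role: standby }
```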
| ### Postgresql cluster | ||
| [kube-node] | ||
| call_kubenode1 ansible_host=10.1.1.33 etcd_member_name=call_kubenode1 ip=10.1.1.33 node_labels="{'wire.com/role': 'sftd'}" node_annotations="{'wire.com/external-ip': 'a.b.c.d'}" | ||
| call_kubenode2 ansible_host=10.1.1.34 etcd_member_name=call_kubenode2 ip=10.1.1.34 node_labels="{'wire.com/role': 'coturn'}"" |
Is this config for a different calling cluster? If so, please mention it. The template we provide does not have this reference.
Yes, this is a new sample inventory I am adding, to provide for a calling k8s cluster.
| [k8s-cluster:vars] | ||
| calico_mtu=1450 | ||
| calico_veth_mtu=1430 | ||
| ``` |
Why does the user need to set the MTU based on Hetzner's requirements here? I would recommend reading the doc: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/CNI/calico.md#configuring-interface-mtu before suggesting something this advanced. If there is a use case for setting those configs, it's up to the user.
Yes, these were carried over from our CD template. Taking them out.
Co-authored-by: Sukanta <amisukanta02@gmail.com>
| ```bash | ||
| $ cd ... # you pick a good location! | ||
| ``` | ||
| Obtain the latest airgap artifact for wire-server-deploy. Please contact us to get it. |
How does 'contact us' work?
| $ wget https://s3-eu-west-1.amazonaws.com/public.wire.com/artifacts/wire-server-deploy-static-<HASH>.tgz | ||
| $ tar xvzf wire-server-deploy-static-<HASH>.tgz | ||
| ``` | ||
| Where `<HASH>` above is the hash of your deployment artifact, given to you by Wire, or acquired by looking at the above build job. |
| cd wire-server-deploy/ansible | ||
| ```bash | ||
| cp ansible/inventory/offline/99-static ansible/inventory/offline/hosts.ini | ||
| mv ansible/inventory/offline/99-static ansible/inventory/offline/orig.99-static |
The `mv` doesn't do anything productive here.
| ``` | ||
| There are more settings in this file that we will set in later steps. | ||
| > Note: Make sure that `assethost` is present in the inventory file with the correct `ansible_host` (and `ip` values if required) |
Shouldn't this note hold for all entries in this file?
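A sketch of what such an inventory entry might look like (hostname layout and IP are hypothetical); the same check applies to every other host entry in the file:

```ini
; Hypothetical assethost entry; ansible_host (and ip, if used) must be correct
[all]
assethost ansible_host=10.1.1.10
```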
| e.g. attempt to run Cassandra and k8s on the same 3 machines, the | ||
| hostnames will be overwritten by the second installation playbook, | ||
| breaking the first. | ||
| These sections can be divided into individual host groups, reflecting the architecture of the target infrastructure. Examples with individual nodes for Elastic, MinIO, PostgreSQL, RabbitMQ and Cassandra are commented out below. |
| These sections can be divided into individual host groups, reflecting the architecture of the target infrastructure. Examples with individual nodes for Elastic, MinIO, PostgreSQL, RabbitMQ and Cassandra are commented out below. | |
| These sections are divided into individual host groups, reflecting the architecture of the target infrastructure. Examples with individual nodes for Elastic, MinIO, PostgreSQL, RabbitMQ and Cassandra are commented out below. |
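As a sketch, splitting the backing services into individual host groups in an INI inventory might look like the fragment below. The group names follow common wire-server-deploy conventions, but the hostnames and IPs are hypothetical:

```ini
; Hypothetical example of individual host groups per backing service
[elasticsearch]
elasticsearch1 ansible_host=10.1.1.20

[minio]
minio1 ansible_host=10.1.1.21

[rmq-cluster]
rabbitmq1 ansible_host=10.1.1.22

[cassandra]
cassandra1 ansible_host=10.1.1.23
```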
| repmgr_node_config = {"postgresql1": {"node_id": 1, "priority": 150, "role": "primary"}, "postgresql2": {"node_id": 2, "priority": 100, "role": "standby"}, "postgresql3": {"node_id": 3, "priority": 50, "role": "standby"}} | ||
| ``` | ||
| - In an INI inventory, the `repmgr_node_config` keys must match the PostgreSQL inventory hostnames. | ||
| - To read more about specific PostgreSQL configuration, see [PostgreSQL High Availability Cluster - Quick Setup](../administrate/postgresql-cluster.md). |
Why is this here? We do not do this for the other backing services.
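To illustrate the key-to-hostname constraint mentioned above: the keys in `repmgr_node_config` must be exactly the hostnames declared in the `[postgresql]` group. A sketch (IPs hypothetical):

```ini
; The keys in repmgr_node_config must match these inventory hostnames exactly
[postgresql]
postgresql1 ansible_host=10.1.1.40
postgresql2 ansible_host=10.1.1.41
postgresql3 ansible_host=10.1.1.42

[postgresql:vars]
repmgr_node_config={"postgresql1":{"node_id":1,"priority":150,"role":"primary"},"postgresql2":{"node_id":2,"priority":100,"role":"standby"},"postgresql3":{"node_id":3,"priority":50,"role":"standby"}}
```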
| ``` | ||
| - Use ansible and deploy ElasticSearch: | ||
| ## Generating random secrets for the services |
| ## Generating random secrets for the services | |
| ## Generating secrets for the services |
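As an aside on the heading above: one hypothetical way to produce a single random secret is shown below. The repo's playbooks may ship their own generator, so `openssl` here is only an assumption for illustration:

```shell
# Hypothetical example: generate one random 32-byte secret, base64-encoded.
# wire-server-deploy's own tooling may generate service secrets for you.
secret=$(openssl rand -base64 32)
echo "$secret"
```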
| ### Troubleshooting external services | ||
| Cassandra, Minio, PostgresSQL, RabbitMQ and Elasticsearch are running outside Kubernets cluster, make sure those machines have necessary ports open - | ||
| On each of the machines running Cassandra, Minio, PostgresSQL, RabbitMQ and Elasticsearch, run the following commands to open the necessary ports, if needed: |
| On each of the machines running Cassandra, Minio, PostgresSQL, RabbitMQ and Elasticsearch, run the following commands to open the necessary ports, if needed: | |
| On each of the machines running Cassandra, Minio, PostgresSQL, RabbitMQ and Elasticsearch, run the following commands to open the necessary ports, if necessary: |
| ```bash | ||
| ansible-playbook -i hosts.ini minio.yml -vv | ||
| ``` | ||
| The SFT & Coturn Calling server should be running on a kubernetes nodes that are connected to the public internet. If not all kubernetes nodes match these criteria, you should specifically label the nodes that do match these criteria, so that you're sure SFT is deployed correctly. |
| ``` | ||
| For instructions on how to install Restund, see [this page](restund.md#install-restund). | ||
| If the node does not know its onw public IP (e.g. becuase it's behind NAT) then you should also set the `wire.com/external-ip` annotation to the public IP of the node. |
| If the node does not know its onw public IP (e.g. becuase it's behind NAT) then you should also set the `wire.com/external-ip` annotation to the public IP of the node. | |
| If the node is not bound to the public IP the users will see (e.g. because it's behind NAT), then you should also set the `wire.com/external-ip` annotation to the public IP of the node. |