fix: wpb-22590 add pages for ansible operations in production and intro#106

Open
mohitrajain wants to merge 2 commits into wpb-22590-2-wiab-dev from wpb-22590-3-prod
Conversation

@mohitrajain mohitrajain commented Mar 19, 2026

Change type

  • Documentation change
  • Build pipeline change
  • Submodule update
  • Deployment change

Basic information

  • THIS CHANGE REQUIRES A WIRE-DOCS RELEASE NOW

Testing

  • I ran/applied the changes myself, in a test environment.

Tracking

  • I mentioned this PR in Jira, OR I mentioned the Jira ticket in this PR.
  • I mentioned this PR in one of the issues attached to one of our repositories.

@mohitrajain mohitrajain requested review from a team as code owners March 19, 2026 17:31
@mohitrajain mohitrajain changed the title fix: wpb-22590 add pages for ansible operations in production and int… fix: wpb-22590 add pages for ansible operations in production and intro Mar 19, 2026
Cassandra, MinIO, PostgreSQL, RabbitMQ and Elasticsearch run outside the Kubernetes cluster; make sure those machines have the necessary ports open.

On each of the machines running Cassandra, MinIO, PostgreSQL, RabbitMQ and Elasticsearch, run the following commands to open the necessary ports, if needed:
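The exact commands depend on your firewall tooling and are not specified here; a minimal `ufw`-style sketch, where the default service ports and the cluster subnet are assumptions (it prints the rules instead of applying them, so they can be reviewed first):

```bash
# Assumed default ports (verify against your actual configuration):
# Cassandra 9042/7000, MinIO 9000, PostgreSQL 5432,
# RabbitMQ 5672/4369/25672, Elasticsearch 9200/9300.
K8S_SUBNET="10.1.1.0/24"   # hypothetical subnet of the Kubernetes nodes
for port in 9042 7000 9000 5432 5672 4369 25672 9200 9300; do
  # Print each rule; drop the 'echo' to apply it for real (requires root).
  echo "ufw allow from ${K8S_SUBNET} to any port ${port} proto tcp"
done
```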
Contributor:
Maybe we can consider automating this via Ansible.

Contributor Author:
Yes, we can automate it, but by default we have noticed that VM firewalls are open. Here we are trying to say that these ports should be open; let me rephrase it better.

- `./ansible/inventory/group_vars/all/secrets.yaml` - This file will be used by ansible playbooks to configure service secrets.
- `values/wire-server/secrets.yaml` - This contains the secrets for Wire services and shares some secrets with the coturn and database services.
- `values/coturn/secrets.yaml` - This contains a secret for the coturn service.
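The values in these files are typically just random strings; a minimal sketch of generating one with `openssl` (assumed to be installed; this command is not part of the guide itself):

```bash
# Generate a 32-byte random secret, base64-encoded.
SECRET=$(openssl rand -base64 32)
# 32 bytes of entropy encode to 44 base64 characters; prints 44.
echo "${#SECRET}"
```

Paste the generated value under the appropriate key in the relevant secrets file.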

Contributor:

Both for wire-server and coturn, the secrets are created as `prod-secrets.example.yaml`, last time I checked.

```ini
[postgresql:vars]
postgresql_network_interface=enp7s0
wire_dbname=wire-server
repmgr_node_config={"postgresql1":{"node_id":1,"priority":150,"role":"primary"},"postgresql2":{"node_id":2,"priority":100,"role":"standby"},"postgresql3":{"node_id":3,"priority":50,"role":"standby"}}
```
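For readability, the same group vars could be expressed in a YAML inventory; a hypothetical sketch (the repository's default inventory format is INI):

```yaml
# Hypothetical YAML-inventory equivalent of the INI group vars above.
postgresql:
  vars:
    postgresql_network_interface: enp7s0
    wire_dbname: wire-server
    repmgr_node_config:
      postgresql1: { node_id: 1, priority: 150, role: primary }
      postgresql2: { node_id: 2, priority: 100, role: standby }
      postgresql3: { node_id: 3, priority: 50, role: standby }
```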
@sghosh23 sghosh23 Mar 20, 2026


We could use some serialization here for better readability.

Contributor Author:

An INI inventory doesn't support multiline values. We could pick the YAML format for it, but indentation can become tricky for the end user. I will create an internal ticket to convert the default inventory format to YAML.

### Postgresql cluster
```ini
[kube-node]
call_kubenode1 ansible_host=10.1.1.33 etcd_member_name=call_kubenode1 ip=10.1.1.33 node_labels="{'wire.com/role': 'sftd'}" node_annotations="{'wire.com/external-ip': 'a.b.c.d'}"
call_kubenode2 ansible_host=10.1.1.34 etcd_member_name=call_kubenode2 ip=10.1.1.34 node_labels="{'wire.com/role': 'coturn'}""
```
Contributor:
The quotation seems off

Contributor:

Is this config for a different calling cluster? If so, please mention it. The template we provide does not have this reference.

Contributor Author:

Yes, this is a new sample inventory I am trying to add, to provide for the calling k8s cluster.

```ini
[k8s-cluster:vars]
calico_mtu=1450
calico_veth_mtu=1430
```
@sghosh23 sghosh23 Mar 20, 2026


Why does the user need to set the MTU based on Hetzner's requirements here? I would recommend reading https://github.com/kubernetes-sigs/kubespray/blob/master/docs/CNI/calico.md#configuring-interface-mtu before suggesting something this advanced. If there is a use case for setting those configs, it's up to the user.

Contributor Author:

Yes, these were carried over from our CD template. Taking them out.

Co-authored-by: Sukanta <amisukanta02@gmail.com>
```bash
$ cd ... # you pick a good location!
```
Obtain the latest airgap artifact for wire-server-deploy. Please contact us to get it.
Member:
how does 'contact us' work?

```bash
$ wget https://s3-eu-west-1.amazonaws.com/public.wire.com/artifacts/wire-server-deploy-static-<HASH>.tgz
$ tar xvzf wire-server-deploy-static-<HASH>.tgz
```
Where `<HASH>` above is the hash of your deployment artifact, given to you by Wire, or acquired by looking at the above build job.
Member:

what build job?

```bash
cd wire-server-deploy/ansible
cp ansible/inventory/offline/99-static ansible/inventory/offline/hosts.ini
mv ansible/inventory/offline/99-static ansible/inventory/offline/orig.99-static
```

Member:

The move doesn't do anything productive.

There are more settings in this file that we will set in later steps.
> Note: Make sure that `assethost` is present in the inventory file with the correct `ansible_host` (and `ip`, if required) values.
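A minimal sketch of such an entry (host name and addresses are placeholders, not values from this guide):

```ini
# hypothetical entry; use your asset host's real address
assethost ansible_host=10.1.1.8 ip=10.1.1.8
```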
Member:
shouldn't this note hold for all entries in this file?

E.g. if you attempt to run Cassandra and k8s on the same 3 machines, the
hostnames will be overwritten by the second installation playbook,
breaking the first.
These sections can be divided into individual host groups, reflecting the architecture of the target infrastructure. Examples with individual nodes for Elastic, MinIO, PostgreSQL, RabbitMQ and Cassandra are commented out below.
Member:
Suggested change
These sections can be divided into individual host groups, reflecting the architecture of the target infrastructure. Examples with individual nodes for Elastic, MinIO, PostgreSQL, RabbitMQ and Cassandra are commented out below.
These sections are divided into individual host groups, reflecting the architecture of the target infrastructure. Examples with individual nodes for Elastic, MinIO, PostgreSQL, RabbitMQ and Cassandra are commented out below.

```ini
repmgr_node_config = {"postgresql1": {"node_id": 1, "priority": 150, "role": "primary"}, "postgresql2": {"node_id": 2, "priority": 100, "role": "standby"}, "postgresql3": {"node_id": 3, "priority": 50, "role": "standby"}}
```
- In an INI inventory, the `repmgr_node_config` keys must match the PostgreSQL inventory hostnames.
- To read more about specific PostgreSQL configuration, see [PostgreSQL High Availability Cluster - Quick Setup](../administrate/postgresql-cluster.md).
Member:
Why is this here? we do not do this for the other backing services.


- Use Ansible to deploy Elasticsearch:
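Presumably mirroring the MinIO invocation used elsewhere in this guide, the command would look roughly like this (the playbook name `elasticsearch.yml` is an assumption; check the `ansible/` directory of your artifact):

```bash
# Hypothetical playbook name; confirm it exists in your wire-server-deploy checkout.
ansible-playbook -i hosts.ini elasticsearch.yml -vv
```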
## Generating random secrets for the services
Member:

Suggested change
## Generating random secrets for the services
## Generating secrets for the services

### Troubleshooting external services
Cassandra, MinIO, PostgreSQL, RabbitMQ and Elasticsearch run outside the Kubernetes cluster; make sure those machines have the necessary ports open.

On each of the machines running Cassandra, Minio, PostgresSQL, RabbitMQ and Elasticsearch, run the following commands to open the necessary ports, if needed:
Member:

Suggested change
On each of the machines running Cassandra, Minio, PostgresSQL, RabbitMQ and Elasticsearch, run the following commands to open the necessary ports, if needed:
On each of the machines running Cassandra, Minio, PostgresSQL, RabbitMQ and Elasticsearch, run the following commands to open the necessary ports, if necessary:

```bash
ansible-playbook -i hosts.ini minio.yml -vv
```
The SFT & Coturn calling servers should be running on Kubernetes nodes that are connected to the public internet. If not all Kubernetes nodes match these criteria, you should explicitly label the nodes that do match, so that you can be sure SFT is deployed correctly.
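The labeling described above can be applied either through the inventory's `node_labels`/`node_annotations` vars or directly with `kubectl`; a hedged sketch (node name and IP are placeholders):

```bash
# Hypothetical node name; the label/annotation keys mirror the sample inventory.
kubectl label node kubenode1 wire.com/role=sftd
kubectl annotate node kubenode1 wire.com/external-ip=a.b.c.d
```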
Member:

This is not true.


For instructions on how to install Restund, see [this page](restund.md#install-restund).
If the node does not know its onw public IP (e.g. becuase it's behind NAT) then you should also set the `wire.com/external-ip` annotation to the public IP of the node.
Member:
Suggested change
If the node does not know its onw public IP (e.g. becuase it's behind NAT) then you should also set the `wire.com/external-ip` annotation to the public IP of the node.
If the node is not bound to the public IP the users will see(e.g. becuase it's behind NAT) then you should also set the `wire.com/external-ip` annotation to the public IP of the node.
