chore: documentation #24

Merged
merged 4 commits into from
Mar 27, 2024
61 changes: 51 additions & 10 deletions README.md
@@ -1,20 +1,47 @@
# forms-queue
This is a monorepo for forms-queue/worker.

A monorepo has been set up so it is easy to add new related services or code in future.

## Prerequisites
1. A node version manager, like [nvm](https://formulae.brew.sh/formula/nvm), or [n](https://github.com/tj/n)
2. node 20.x.x
3. yarn >= v1.22. This project uses yarn 4. Yarn v1.22 will load the correct version of yarn by looking at [.yarnrc](./.yarnrc.yml) and [.yarn](./yarn)
4. Docker, with Compose file format 3.9 support - [Install docker engine](https://docs.docker.com/engine/install/)

Install the dependencies by running `yarn install` from the root of the project.

## @forms-queue/worker

Worker is a process which listens to a database-backed queue, and sends it to the configured webhook.

To start the project, you will need a running postgres database. You can start the database with docker by running the following command:

```sh
docker compose up postgres
```

Or in detached mode (i.e. without tailing the logs)

```sh
docker compose up postgres -d
```

This will start the postgres container on port 5432, with username `user`, password `root`,
and a database named `queue` (`postgres://user:root@localhost:5432/queue`).
[`init.sql`](init.sql) is also loaded as a volume into the container, and will create a database named `notarial`
(`postgres://user:root@localhost:5432/notarial`). This is useful if you are going to be running [notarial-api](https://github.com/UKForeignOffice/notarial-api)
at the same time. If you are running postgres from this repo, you do not need to run it in `notarial-api`.

Then to start the worker in dev mode,

```sh
yarn worker start:local
```


### Running with docker
You may use docker and docker compose to build and start the project with the right components (e.g. database, microservices),
but will not be able to run the application(s) in dev mode (i.e. with watch and recompile).

To do this, run the following command from the root of the project:

```sh
docker compose up -d
```

This will run the server in a docker container in detached mode, so you can keep using your terminal while the container continues running.

If you have already started a postgres container, you can comment out the postgres service in the docker-compose file.

To rebuild the server after making some changes, run the following commands:

```sh
docker compose down
docker compose up -d --build
```



### Formatting
This project uses ESLint and Prettier to ensure consistent formatting. It is recommended that you add and turn on the prettier plugin for your IDE, and reformat on save.


## CI/CD
There is a CI/CD pipeline currently set up for deploying new versions of the project to test environments. For more information, please refer to the [CI/CD docs](https://github.com/UKForeignOffice/notarial-api/blob/main/docs/ci.md)

## Testing
Currently, there is unit testing and integration testing set up for the api workspace. For more information, refer to the [testing docs](./docs/testing.md).
## Troubleshooting
See [TROUBLESHOOTING.md](./TROUBLESHOOTING.md) on how to troubleshoot the worker and errors when running the service.

## Environment variables

| Environment variable       | Type       | Default                                   | Description                                                                                                                                                   |
|----------------------------|------------|-------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
| QUEUE_URL                  | string     | postgres://user:root@localhost:5432/queue | The connection string to the database, including username and password                                                                                        |
| ARCHIVE_FAILED_AFTER_DAYS  | string/int | 30                                        | How long to keep failed jobs in `pgboss.job` before moving them to `pgboss.archive`                                                                           |
| DELETE_ARCHIVED_IN_DAYS    | string/int | 7                                         | How long to keep jobs in `pgboss.archive` before deleting them                                                                                                |
| SUBMISSION_REQUEST_TIMEOUT | string/int | 2000                                      | How long to keep the POST request alive for, in milliseconds. This should be higher (20-30s) if integrating with CASEBOOK/Orbit, which has long response times |
| NEW_JOB_CHECK_INTERVAL     | string/int | 2000                                      | The frequency to check for new jobs, in milliseconds                                                                                                          |

Types are described as string/int since Kubernetes only accepts strings; string values are parsed into integers.
181 changes: 181 additions & 0 deletions TROUBLESHOOTING.md
@@ -0,0 +1,181 @@
# Troubleshooting

Use this guide to troubleshoot issues and resolve errors that may occur when forms-worker is deployed.

## Error codes

| Error code              | Reason                                                                                                 |
|-------------------------|--------------------------------------------------------------------------------------------------------|
| `SUBMIT_REQUEST_ERROR`  | Something went wrong creating or sending the request                                                   |
| `SUBMIT_RESPONSE_ERROR` | The webhook_url responded with a non-200 status                                                        |
| `SUBMIT_UNKNOWN_ERROR`  | An error was thrown, possibly unrelated to the request. Inspect the logs and database for more detail  |


## pgboss

Connect to the database:
```sh
kubectl run -it --rm --env PGPASSWORD='<PASSWORD>' --env PAGER= --image=postgres:16 --restart=Never postgres-client -- psql -h <ENDPOINT_URL> -U master -d queue
```
Replace `<PASSWORD>` with the database password, and `<ENDPOINT_URL>` with the endpoint URL for the database.



[pgboss](https://github.com/timgit/pg-boss) is used to manage queueing jobs. On application start, pgboss will automatically create necessary tables in the database.

### Jobs table
The jobs table `pgboss.job` is where all the current jobs are stored. Jobs remain here until they are completed or failed, after which they are moved to `pgboss.archive`.

The jobs table has the following columns:

```
Column | Type | Collation | Nullable | Default
--------------+-----------------------------+-----------+----------+-----------------------------
id | uuid | | not null | gen_random_uuid()
name | text | | not null |
priority | integer | | not null | 0
data | jsonb | | |
state | pgboss.job_state | | not null | 'created'::pgboss.job_state
retrylimit | integer | | not null | 0
retrycount | integer | | not null | 0
retrydelay | integer | | not null | 0
retrybackoff | boolean | | not null | false
startafter | timestamp with time zone | | not null | now()
startedon | timestamp with time zone | | |
singletonkey | text | | |
singletonon | timestamp without time zone | | |
expirein | interval | | not null | '00:15:00'::interval
createdon | timestamp with time zone | | not null | now()
completedon | timestamp with time zone | | |
keepuntil | timestamp with time zone | | not null | now() + '14 days'::interval
on_complete | boolean | | not null | false
output | jsonb | | |
```

Columns/values to note are
- `name`: the name of the job. This will be "submission" for forms-worker
- `state`: the state of the job. Read more about them in [pgboss documentation](https://github.com/timgit/pg-boss/blob/master/docs/readme.md#job-states)
- `created`: the job has been created
- `failed`: the job has failed
- `completed`: the job has been completed (successfully)
- `active`: the job is currently being processed
- `data`: the data associated with the job. This will contain the payload of the job. For this worker, it will follow this format:
```json5
{
"data": {}, // the user's answers
"webhook_url": "webhook_url", // the webhook URL to send the data to
}
```
- `output`: the output of the job. This will contain the reference number, or the error message if the job has failed
- `keepuntil`: the time until the job will be kept in the table. After this time, the job will be moved to `pgboss.archive`. If you need more time to resolve the issue, you can update this value to a later time.
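
For example, to give yourself another week before a job is archived, `keepuntil` can be pushed back. This is a sketch using a placeholder `<id>`, following the transaction pattern recommended below:

```postgresql
begin;
-- extend the retention window for one job by 7 days
update pgboss.job
set keepuntil = now() + interval '7 days'
where id = '<id>';
-- check the new value before committing
select id, keepuntil from pgboss.job where id = '<id>';
-- commit; -- or rollback;
```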



## Finding jobs
To find jobs that have failed, run the following query:

```postgresql
select id, output from pgboss.job where state = 'failed' and name = 'submission';
```
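
For a broader picture of the queue's health, jobs can also be counted by name and state:

```postgresql
-- overview of how many jobs are in each state, per queue name
select name, state, count(*)
from pgboss.job
group by name, state
order by name, state;
```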

## Fixing data
If the retrylimit has not been hit (retrylimit > retrycount) and the retrylimit is not 0, the job will be automatically retried.
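
To check whether a particular job still has retries left, compare `retrycount` against `retrylimit` (columns from the `pgboss.job` schema above; `<id>` is a placeholder):

```postgresql
-- read-only check: will this job retry on its own?
select id, state, retrycount, retrylimit, retrydelay, startafter
from pgboss.job
where id = '<id>';
```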

It is recommended you run every query in a transaction, so that you can abort the changes if they are incorrect.

```postgresql
begin;
-- First run a query to print the current state of the job you are trying to change
select data from pgboss.job where state = 'failed' and id = '<id>';

update pgboss.job
set state = 'created',
    completedon = null,
    retrycount = 0
where id = '<id>';

-- Run a select again (without the state filter, since the state has now changed) to see if you've made the correct changes
select data, state from pgboss.job where id = '<id>';

-- Run the following query to commit the changes
-- commit;
-- Run the following to abort the changes
-- rollback;
```

The following queries will assume that you are running them in a transaction.

### Incorrect URLs
If the webhook URL is incorrect, you can update the URL in the database.

```postgresql
update pgboss.job
set data = jsonb_set(
data,
'{webhook_url}',
'"<NEW_URL>"'
)
where id = '<id>';
```
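
Before committing, the new value can be checked by reading the key back out of the `jsonb` column:

```postgresql
-- confirm the webhook_url was updated as intended
select data->>'webhook_url' as webhook_url
from pgboss.job
where id = '<id>';
```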

### Incorrect data
If the data is incorrect, you can update the data in the database.

```postgresql
update pgboss.job
set data = jsonb_set(
data,
'{data, questions, 0, answer}',
'"<NEW_ANSWER>"'
)
where id = '<id>';
```
where `0` is the index of the question, and `answer` is the key of the answer. However, you may find it easier to copy the data to a text editor, make the changes, and then update the data in the database.

```postgresql
update pgboss.job
set data = '<NEW_DATA>'
where id = '<id>';
```
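
To get a readable copy of the payload to paste into a text editor, `jsonb_pretty` can be used:

```postgresql
-- pretty-print the payload for editing outside the database
select jsonb_pretty(data)
from pgboss.job
where id = '<id>';
```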



### Retry a job
If a job has failed, and you want to retry it, you can update the job to `created` state, and reset the `retrycount` to 0.

```postgresql
update pgboss.job
set state = 'created',
    completedon = null,
    retrycount = 0
    -- , output = null
where id = '<id>';
```
You may also want to update output to null, to clear the error message.

## Creating a new job
If the job does not seem to be retrying, or it is easier to just create a new job, you can do so by running the following query:

```postgresql
insert into pgboss.job (name, data)
values ('submission', '{"data": {"questions": []}, "webhook_url": "https://example.com"}');
```
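
Note that jobs inserted this way use the column defaults from the schema above, including `retrylimit = 0` (no retries). If the new job should retry on failure, the retry columns can be set explicitly; the values below are illustrative, not required:

```postgresql
insert into pgboss.job (name, data, retrylimit, retrydelay)
values (
  'submission',
  '{"data": {"questions": []}, "webhook_url": "https://example.com"}',
  3,  -- retry up to 3 times
  60  -- wait 60 seconds between retries
);
```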

Alternatively, you can copy the data from the failed job, and create a new job with the same data.

```postgresql
insert into pgboss.job (name, data)
select name, data
from pgboss.job where id = '<id>';
```

### Moving a job from archive to job
If a job has been moved to the archive, and you want to retry it, you can move it back to the jobs table.

```postgresql
insert into pgboss.job (name, data)
select name, data
from pgboss.archive where id = '<id>';
```
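
Before copying, it can be worth confirming which archived row you want. Assuming `pgboss.archive` carries the same columns as `pgboss.job`, recent failures can be listed with:

```postgresql
-- list the 20 most recently completed failed submissions in the archive
select id, name, state, output, completedon
from pgboss.archive
where name = 'submission' and state = 'failed'
order by completedon desc
limit 20;
```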
12 changes: 6 additions & 6 deletions docker-compose.yml
@@ -1,18 +1,18 @@
version: "3.9"
services:
  api:
    build:
      context: .
      dockerfile: api/Dockerfile
    ports:
      - "9000:9000"
  worker:
    depends_on:
      - postgres
    build:
      context: .
      dockerfile: worker/Dockerfile
    environment:
      QUEUE_URL: postgres://user:root@postgres:5432/queue
  postgres:
    container_name: postgres
    image: "postgres:16"
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
    environment:
1 change: 1 addition & 0 deletions init.sql
@@ -0,0 +1 @@
create database notarial;
1 change: 1 addition & 0 deletions worker/config/default.js
@@ -8,4 +8,5 @@ module.exports = {
    requestTimeout: 2000,
  },
  newJobCheckInterval: "2000",
  SUPPRESS_NO_CONFIG_WARNING: true,
};
7 changes: 7 additions & 0 deletions worker/src/Consumer/getConsumer.ts
@@ -16,6 +16,9 @@ const deleteAfterDays = parseInt(config.get<string>("Queue.deleteArchivedAfterDa

logger.info(`archiveFailedAfterDays: ${archiveFailedAfterDays}, deleteAfterDays: ${deleteAfterDays}`);

/**
* Sets up database connection via PgBoss and creates an instance of a "consumer" (consumes the queue).
*/
export async function create() {
  const boss = new PgBoss({
    connectionString: URL,
@@ -40,6 +43,10 @@ export async function create() {
  return boss;
}

/**
* `getConsumer` should be used whenever an instance of a consumer is needed.
* This is to prevent too many database connections from being opened unnecessarily.
*/
export async function getConsumer() {
  try {
    if (!consumer) {
13 changes: 13 additions & 0 deletions worker/src/README.md
@@ -0,0 +1,13 @@
# forms-queue

## Queue: `submission`

Workers
- `submit`

## `submit`
[submit](./queues/submission/workers/submit.ts)

When a "submission" event is detected, this worker POSTs the data to `job.data.webhook_url`.

The source of this event is the runner, after a user has submitted a form.
49 changes: 0 additions & 49 deletions worker/src/queues/submission/README.md

This file was deleted.
