Improve development environment #214

Merged: 6 commits, Nov 7, 2023
Changes from 2 commits
16 changes: 14 additions & 2 deletions README.md
@@ -1,5 +1,8 @@
# scicat-ci
CI related information to deploy SciCat
CI related information to deploy SciCat at PSI.

For a simpler configuration for running and developing the SciCat core components, see
the [SciCat Live](https://github.com/SciCatProject/scicatlive) project.

# Local environment

@@ -8,9 +11,18 @@ To spin up the container(s) of interest, follow the documentation in the [README
# Deployment behaviour
The CI is responsible for deploying the SciCat components in the k8s clusters (one for `development`, another for `qa` and `production`), based on some rules.

The whole pipeline relies on the existence of three deployment environments: `development` (where changes are developed), `qa` (beta testing environment) and `production` (stable environment). The components are deployed on one of two clusters depending on the environment. The `development` environment is deployed on the `development` cluster, while the `qa` and `production` environments are deployed on the `qaprod` cluster; each environment is installed under the `scicat-{env}` namespace on the corresponding cluster. For the three GitHub CI triggers, `pull_request` to `main`, `push` to `main` and `release`, the CI extracts the environment based on the trigger (see below), builds, tags and pushes the docker image and deploys the helm chart to the corresponding k8s cluster, having applied the configuration specific to the environment.

| Environment | K8s Cluster | Namespace |
| ----------- | ----------- | ------------------ |
| development | development | scicat-development |
| qa | qaprod | scicat-qa |
| production | qaprod | scicat-production |

The configuration files are in the [helm/configs](helm/configs) folder and are organised in folders with the same name as the component (a convention to be maintained). For each component, the files in `helm/configs/{component}` are shared by all the environments, while the ones specific to one environment are in `helm/configs/{component}/{environment}`. For example, the `backend` configuration files are in the [helm/configs/backend](helm/configs/backend) folder, and the development-specific files are in [helm/configs/backend/development](helm/configs/backend/development).
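
For illustration, the convention results in a layout along these lines (file names here are hypothetical; only the folder structure is prescribed):

```
helm/configs/
└── backend/                       # shared by all environments
    ├── some-shared-config.json
    ├── development/               # development-only files
    ├── qa/                        # qa-only files
    └── production/                # production-only files
```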



There is no need to explicitly specify the environment, nor the location of the configuration files, at CI time: both are derived from the GitHub CI trigger.

Some conventions are to be maintained:
92 changes: 88 additions & 4 deletions dev/README.md
@@ -1,4 +1,8 @@
# How to get started:
# Local Development

## Overview

Development can be done by running local Docker containers. First, `docker-compose` is used to launch a container for each service. Most containers do not start the service directly, allowing this to be done manually from inside the container (e.g. using VS Code Dev Containers). Commands for each service are given below.

## Clone the repo and the submodules

@@ -7,9 +11,9 @@ git clone git@github.com:paulscherrerinstitute/scicat-ci.git
git submodule update --init --recursive --remote
```

## Run docker-compose
## Starting containers

### :warning: IMPORTANT
The docker-compose builds the containers from the Dockerfile of each submodule, thus using the submodule checked out at a particular commit.
When setting up the environment, one often wants the components checked out automatically to the latest commit on main. The command above (`git submodule update --init --recursive --remote`) does that, but it might break any component where a non-backwards-compatible change was applied.
Whenever the submodule commit differs from the one referenced in the scicat-ci repo, we record in each component's config the latest submodule commit (`.git-commit-sha`) at which the docker-compose setup last ran and worked.
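
If a component breaks after pulling the latest submodules, one way back is to pin that submodule to the recorded commit. A rough sketch, assuming the backend's pinned commit is stored in `config/backend/.git-commit-sha` inside this `dev` folder (hypothetical path — adjust to the actual component and config location):

```bash
# Reset the backend submodule to the recorded commit and rebuild only its container.
git -C ../backend checkout "$(cat config/backend/.git-commit-sha)"
docker-compose -f docker-compose.yaml up -d --force-recreate --build --no-deps be
```
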
@@ -25,7 +29,7 @@ docker-compose -f docker-compose.yaml up -d --force-recreate --build --no-deps

All the application containers (excluding the db, `mongo`, and the db seeding container, `mongo_seed`) are meant to be used for development, so rather than starting the applications themselves, docker-compose starts environments in which the development setup of each application is prepared. This means that, to run an application, one has to attach to its container and start it.

### Example
### Examples

Here are the two most common use cases: spinning up the backend and the frontend, and spinning up the new backend and the frontend.

@@ -56,3 +60,83 @@ docker-compose -f docker-compose.yaml up --force-recreate --build --no-deps -d
As before, this will start four containers: the be_next container, the fe one, the mongo database and a short-lived one, called mongodb_seed_be_next, that puts some example data into the be_next db of mongo.

Since the configuration of the frontend with the new backend has slightly changed, remember to set the `accessTokenPrefix` value to "Bearer " in the [config.json](./config/frontend/config.json#L3) file of the fe, before starting the frontend application.
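
One way to apply that change from the command line — a sketch assuming `jq` is installed and the repository root is the working directory:

```bash
# Set the token prefix expected by the new backend (note the trailing space).
jq '.accessTokenPrefix = "Bearer "' dev/config/frontend/config.json > /tmp/config.json \
  && mv /tmp/config.json dev/config/frontend/config.json
```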


## Starting services

The `docker-compose.yaml` file is constructed to prepare containers with all dependencies but not to start the services. This is generally done by overriding the command with an infinite loop.
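
In every case, first open a shell inside the already-running container, then run the command listed for that service below — for example, for the `be` service (swap in `sh` if the image does not ship `bash`):

```bash
# Attach to the backend container started by docker-compose.
docker-compose -f docker-compose.yaml exec be bash
```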

### Backend `be`

```bash
cd /home/node/app
node .
```

### Backend-next `be_next`

```bash
cd /home/node/app
node dist/main
```

### Frontend `fe`

The frontend uses a custom development Dockerfile (see the note below). Start the dev server with:

```bash
cd /frontend
npm run serve -- --host 0.0.0.0 --disable-host-check
```

A custom Dockerfile is used because the production image builds the static site and then serves it via nginx. The development image serves the site using `ng serve` (webpack-dev-server) so it reflects the latest code and updates when files change.
### Search `search`

```bash
cd /home/node/app
node .
```

### Landing Page `lp`

```bash
cd /home/node/app
npm run serve -- --host 0.0.0.0 --disable-host-check
```

### OAI-PMH `oi`

```bash
cd /home/node/app
node .
```

### Proposals `pr`

```bash
cd /usr/src/proposals

node .
```

### Backend `be` and `be_next`

```bash
cd /usr/src/app
python copy_public_ds.py
```
## Reclaiming space

This compose file creates a new docker volume with test data. Removing this requires adding `--volumes` when shutting down the containers:

```bash
docker-compose -f docker-compose.yaml down --volumes
```

If this is omitted, it may eventually lead to your docker virtual disk filling up. If this happens, remove old volumes:

```bash
# remove dangling volumes not used by any container
docker volume prune
# remove stopped containers, unused networks, dangling images and build cache;
# --volumes also prunes unused volumes
docker system prune --volumes
```
20 changes: 8 additions & 12 deletions dev/config/frontend/Dockerfile
@@ -1,18 +1,14 @@
FROM node:16 AS installer

WORKDIR /frontend
COPY package*.json /frontend/
RUN npm ci
COPY . /frontend/
# A browser is needed for automated tests
RUN apt-get update && apt-get install -y chromium
ENV CHROME_BIN=/usr/bin/chromium
ENV CHROME_PATH=/usr/lib/chromium/

FROM node:16 AS builder
WORKDIR /frontend
COPY --from=installer /frontend /frontend
COPY package*.json /frontend/

RUN npx ng build
RUN npm ci

FROM nginx:1.12-alpine
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /frontend/dist/ /usr/share/nginx/html/
COPY scripts/nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
EXPOSE 4200
CMD ["node serve"]
10 changes: 5 additions & 5 deletions dev/docker-compose.yaml
@@ -62,19 +62,19 @@ services:
fe:
build:
context: ../frontend/.
target: installer
dockerfile: ../dev/config/frontend/Dockerfile
environment:
CHROME_BIN: /usr/bin/chromium
CHROME_PATH: /usr/lib/chromium/
volumes:
- ../frontend:/frontend
- ./config/frontend/config.json:/frontend/src/assets/config.json
- /frontend/node_modules
- /frontend/dist
command: /bin/sh -c "apt-get update && apt-get install -y chromium && while true; do sleep 600; done"
# browser required for test suite; move to installer layer
command: /bin/sh -c "while true; do sleep 600; done"
ports:
# Dev image with `node start` uses port 4200
# For the upstream dockerfile with nginx, use 4200:80
- 4200:4200

profiles:
- fe
