Try a different approach to parallel tests #3978

Draft
wants to merge 21 commits into base: JDBetteridge/faster_tests
86 changes: 52 additions & 34 deletions .github/workflows/build.yml
@@ -45,24 +45,18 @@ jobs:
OPENBLAS_NUM_THREADS: 1
COMPLEX: ${{ matrix.complex }}
RDMAV_FORK_SAFE: 1
PYTEST_ARGS: |
--splitting-algorithm least_duration \
--splits \$MPISPAWN_NUM_TASKS \
--group \$MPISPAWN_TASK_ID1 \
--timeout=500 \
--timeout-method=thread \
-o faulthandler_timeout=1860
outputs:
scalar-type: ${{ matrix.scalar-type }}
steps:
- uses: actions/checkout@v4

- name: Cleanup
if: ${{ always() }}
run: |
cd ..
rm -rf firedrake_venv

- name: Build Firedrake
id: build
run: |
cd ..
# Linting should ignore unquoted shell variable $COMPLEX
@@ -93,66 +87,88 @@ jobs:
--install gadopt \
--install asQ \
|| (cat firedrake-install.log && /bin/false)

- name: Install test dependencies
id: build
run: |
sudo apt update
sudo apt -y install parallel
. ../firedrake_venv/bin/activate
python "$(which firedrake-clean)"
python -m pip install \
pytest-xdist \
pytest-timeout \
ipympl \
pytest-split \
git+https://github.com/JDBetteridge/mpispawn
python -m pip install pytest-timeout ipympl pytest-split
python -m pip list

- name: Run tests (nprocs = 1)
run: |
. ../firedrake_venv/bin/activate
mpispawn -nU 12 -nW 1 --propagate-errcodes \
pytest "$PYTEST_ARGS" --junit-xml=firedrake1.xml -m "parallel[1] or not parallel" -v tests/firedrake
: # use --quote to stop parallel from parsing the pytest arguments
parallel --line-buffer --tag --quote \
pytest -v --splits 12 --group {#} --splitting-algorithm least_duration \
--timeout=1800 --timeout-method=thread -o faulthandler_timeout=1860 \
--junit-xml=firedrake1_{#}.xml \
-m "parallel[1] or not parallel" tests/firedrake ::: $(seq 12)

- name: Run tests (nprocs = 2)
# Run even if earlier tests failed
if: ${{ success() || steps.build.conclusion == 'success' }}
run: |
. ../firedrake_venv/bin/activate
mpispawn -nU 12 -nW 2 --propagate-errcodes \
pytest "$PYTEST_ARGS" --junit-xml=firedrake2_\$MPISPAWN_TASK_ID1.xml -m parallel[2] -v tests/firedrake
parallel --line-buffer --tag --quote \
mpiexec -n 2 pytest -v --splits 6 --group {#} --splitting-algorithm least_duration \
--timeout=1800 --timeout-method=thread -o faulthandler_timeout=1860 \
--junit-xml=firedrake2_{#}.xml \
-m parallel[2] tests/firedrake ::: $(seq 6)

- name: Run tests (nprocs = 3)
if: ${{ success() || steps.build.conclusion == 'success' }}
run: |
. ../firedrake_venv/bin/activate
mpispawn -nU 12 -nW 3 --propagate-errcodes \
pytest "$PYTEST_ARGS" --junit-xml=firedrake3_\$MPISPAWN_TASK_ID1.xml -m parallel[3] -v tests/firedrake
parallel --line-buffer --tag --quote \
mpiexec -n 3 pytest -v --splits 4 --group {#} --splitting-algorithm least_duration \
--timeout=1800 --timeout-method=thread -o faulthandler_timeout=1860 \
--junit-xml=firedrake3_{#}.xml \
-m parallel[3] tests/firedrake ::: $(seq 4)

- name: Run tests (nprocs = 4)
if: ${{ success() || steps.build.conclusion == 'success' }}
run: |
. ../firedrake_venv/bin/activate
mpispawn -nU 12 -nW 4 --propagate-errcodes \
pytest "$PYTEST_ARGS" --junit-xml=firedrake4_\$MPISPAWN_TASK_ID1.xml -m parallel[4] -v tests/firedrake
- name: Run tests (nprocs = 5)
if: ${{ success() || steps.build.conclusion == 'success' }}
run: |
. ../firedrake_venv/bin/activate
mpispawn -nU 12 -nW 5 --propagate-errcodes \
pytest "$PYTEST_ARGS" --junit-xml=firedrake5_\$MPISPAWN_TASK_ID1.xml -m parallel[5] -v tests/firedrake
parallel --line-buffer --tag --quote \
mpiexec -n 4 pytest -v --splits 3 --group {#} --splitting-algorithm least_duration \
--timeout=1800 --timeout-method=thread -o faulthandler_timeout=1860 \
--junit-xml=firedrake4_{#}.xml \
-m parallel[4] tests/firedrake ::: $(seq 3)

# NOTE: We do not have any tests that run with 5 processes

- name: Run tests (nprocs = 6)
if: ${{ success() || steps.build.conclusion == 'success' }}
run: |
. ../firedrake_venv/bin/activate
mpispawn -nU 12 -nW 6 --propagate-errcodes \
pytest "$PYTEST_ARGS" --junit-xml=firedrake6_\$MPISPAWN_TASK_ID1.xml -m parallel[6] -v tests/firedrake
parallel --line-buffer --tag --quote \
mpiexec -n 6 pytest -v --splits 2 --group {#} --splitting-algorithm least_duration \
--timeout=1800 --timeout-method=thread -o faulthandler_timeout=1860 \
--junit-xml=firedrake6_{#}.xml \
-m parallel[6] tests/firedrake ::: $(seq 2)

- name: Run tests (nprocs = 7)
if: ${{ success() || steps.build.conclusion == 'success' }}
run: |
. ../firedrake_venv/bin/activate
mpispawn -nU 12 -nW 7 --propagate-errcodes \
pytest "$PYTEST_ARGS" --junit-xml=firedrake7_\$MPISPAWN_TASK_ID1.xml -m parallel[7] -v tests/firedrake
mpiexec -n 7 pytest -v \
--timeout=1800 --timeout-method=thread -o faulthandler_timeout=1860 \
--junit-xml=firedrake7.xml \
-m parallel[7] tests/firedrake

- name: Run tests (nprocs = 8)
if: ${{ success() || steps.build.conclusion == 'success' }}
run: |
. ../firedrake_venv/bin/activate
mpispawn -nU 12 -nW 8 --propagate-errcodes \
pytest "$PYTEST_ARGS" --junit-xml=firedrake8_\$MPISPAWN_TASK_ID1.xml -m parallel[8] -v tests/firedrake
mpiexec -n 8 pytest -v \
--timeout=1800 --timeout-method=thread -o faulthandler_timeout=1860 \
--junit-xml=firedrake8.xml \
-m parallel[8] tests/firedrake

- name: Publish Test Report
uses: mikepenz/[email protected]
if: ${{ always() && ( github.ref != 'refs/heads/master') }}
@@ -162,6 +178,7 @@ jobs:
check_name: "Firedrake ${{ matrix.scalar-type }}"
updateComment: true
flaky_summary: true

- name: Test pyadjoint
if: ${{ matrix.scalar-type == 'real' }}
run: |
@@ -176,6 +193,7 @@ jobs:
-n 12 --dist worksteal \
-sv tests/firedrake_adjoint
timeout-minutes: 30

- name: Cleanup
# Belt and braces: clean up before and after the run.
if: ${{ always() }}
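For reference, the pattern used by the test steps above: pytest-split divides the suite into duration-balanced groups (--splits/--group with --splitting-algorithm least_duration) and GNU parallel launches one pytest (or mpiexec ... pytest) invocation per group, with {#} being parallel's 1-based job number and --quote stopping parallel from re-parsing the pytest arguments. A minimal stand-alone sketch, not taken verbatim from the workflow (the quoted marker and the split count of 6 are illustrative choices here):

    # Activate the virtualenv built earlier in the job.
    . ../firedrake_venv/bin/activate
    # Run the 2-process parallel tests as 6 duration-balanced groups.
    parallel --line-buffer --tag --quote \
        mpiexec -n 2 pytest -v \
            --splits 6 --group {#} --splitting-algorithm least_duration \
            --timeout=1800 --timeout-method=thread -o faulthandler_timeout=1860 \
            --junit-xml=firedrake2_{#}.xml \
            -m "parallel[2]" tests/firedrake ::: $(seq 6)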
1 change: 1 addition & 0 deletions .github/workflows/pip-mac.yml
@@ -53,6 +53,7 @@ jobs:
--with-shared-libraries=1 \
--with-mpi-dir=/opt/homebrew \
--with-zlib \
--with-strict-petscerrorcode \
--download-bison \
--download-hdf5 \
--download-hwloc \
1 change: 1 addition & 0 deletions .github/workflows/pyop2.yml
@@ -51,6 +51,7 @@ jobs:
--with-debugging=1 \
--with-shared-libraries=1 \
--with-c2html=0 \
--with-strict-petscerrorcode \
--with-fortran-bindings=0
make

3 changes: 1 addition & 2 deletions demos/netgen/netgen_mesh.py.rst
@@ -380,8 +380,7 @@ We will now show how to solve the Poisson problem on a high-order mesh, of order

bc = DirichletBC(V, 0.0, [1])
A = assemble(a, bcs=bc)
b = assemble(l)
bc.apply(b)
b = assemble(l, bcs=bc)
solve(A, sol, b, solver_parameters={"ksp_type": "cg", "pc_type": "lu"})

VTKFile("output/Sphere.pvd").write(sol)
3 changes: 3 additions & 0 deletions docker/Dockerfile.env
@@ -66,6 +66,7 @@ RUN bash -c 'cd petsc; \
--download-scalapack \
--download-suitesparse \
--download-superlu_dist \
--with-strict-petscerrorcode \
PETSC_ARCH=packages; \
mv packages/include/petscconf.h packages/include/old_petscconf.nope; \
rm -rf /home/firedrake/petsc/**/externalpackages; \
@@ -105,6 +106,7 @@ RUN bash -c 'export PACKAGES=/home/firedrake/petsc/packages; \
--with-scalapack-dir=$PACKAGES \
--with-suitesparse-dir=$PACKAGES \
--with-superlu_dist-dir=$PACKAGES \
--with-strict-petscerrorcode \
PETSC_ARCH=default; \
make PETSC_DIR=/home/firedrake/petsc PETSC_ARCH=default all;'

@@ -144,6 +146,7 @@ RUN bash -c 'export PACKAGES=/home/firedrake/petsc/packages; \
--with-scalapack-dir=$PACKAGES \
--with-suitesparse-dir=$PACKAGES \
--with-superlu_dist-dir=$PACKAGES \
--with-strict-petscerrorcode \
PETSC_ARCH=complex; \
make PETSC_DIR=/home/firedrake/petsc PETSC_ARCH=complex all;'

4 changes: 3 additions & 1 deletion docs/source/firedrake_usa_25.rst
@@ -46,7 +46,7 @@ Registration
------------

Register for the conference at
`this link <https://pay.baylor.edu/C20024_ustores/web/product_detail.jsp?PRODUCTID=1097/>`__.
`this link <https://pay.baylor.edu/C20024_ustores/web/product_detail.jsp?PRODUCTID=1097>`__.

The registration fees are as follows:

@@ -60,6 +60,8 @@ The registration fees are as follows:
- $200

The `SIAM Texas-Louisiana Section <https://www.siam.org/get-involved/connect-with-a-community/sections/siam-texas-louisiana-section/>`__ is providing some support for students currently attending universities in Texas or Louisiana to attend.
Students may submit an application to be considered for funding `here <https://docs.google.com/forms/d/e/1FAIpQLSdXKsGE3D18BhvRpfcGD_gSdDmmXGRQ4l47k4Aj4SKJ2c6DZg/viewform?usp=sharing>`__.



Abstract submission
6 changes: 3 additions & 3 deletions firedrake/adjoint_utils/blocks/dirichlet_bc.py
@@ -51,7 +51,7 @@ def evaluate_adj_component(self, inputs, adj_inputs, block_variable, idx,
adj_output = None
for adj_input in adj_inputs:
if isconstant(c):
adj_value = firedrake.Function(self.parent_space.dual())
adj_value = firedrake.Function(self.parent_space)
adj_input.apply(adj_value)
if self.function_space != self.parent_space:
vec = extract_bc_subvector(
@@ -88,11 +88,11 @@ def evaluate_adj_component(self, inputs, adj_inputs, block_variable, idx,
# you can even use the Function outside its domain.
# For now we will just assume the FunctionSpace is the same for
# the BC and the Function.
adj_value = firedrake.Function(self.parent_space.dual())
adj_value = firedrake.Function(self.parent_space)
adj_input.apply(adj_value)
r = extract_bc_subvector(
adj_value, c.function_space(), bc
)
).riesz_representation("l2")
if adj_output is None:
adj_output = r
else:
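A note on the riesz_representation calls introduced in this file: assembling a 1-form yields a cofunction in the dual space, and the "l2" Riesz map identifies it with a primal Function by reusing the coefficient vector directly (no mass-matrix solve), which is what the surrounding adjoint code expects. A rough self-contained sketch under that reading; the mesh, space and form are illustrative, not from the PR:

    from firedrake import *

    mesh = UnitSquareMesh(4, 4)
    V = FunctionSpace(mesh, "CG", 1)
    v = TestFunction(V)

    # Assembling a 1-form yields a cofunction in the dual space V*.
    residual = assemble(inner(Constant(1.0), v) * dx)

    # Map it back to a primal Function with the coefficient-wise l2 Riesz map,
    # mirroring the .riesz_representation("l2") calls above.
    g = residual.riesz_representation(riesz_map="l2")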
1 change: 1 addition & 0 deletions firedrake/adjoint_utils/blocks/function.py
@@ -79,6 +79,7 @@ def evaluate_adj_component(self, inputs, adj_inputs, block_variable, idx,
)
diff_expr_assembled = firedrake.Function(adj_input_func.function_space())
diff_expr_assembled.interpolate(ufl.conj(diff_expr))
diff_expr_assembled = diff_expr_assembled.riesz_representation(riesz_map="l2")
adj_output = firedrake.Function(
R, val=firedrake.assemble(ufl.Action(diff_expr_assembled, adj_input_func))
)
26 changes: 12 additions & 14 deletions firedrake/adjoint_utils/blocks/solving.py
@@ -197,14 +197,12 @@ def _assemble_dFdu_adj(self, dFdu_adj_form, **kwargs):

def _assemble_and_solve_adj_eq(self, dFdu_adj_form, dJdu, compute_bdy):
dJdu_copy = dJdu.copy()
kwargs = self.assemble_kwargs.copy()
# Homogenize and apply boundary conditions on adj_dFdu and dJdu.
bcs = self._homogenize_bcs()
kwargs["bcs"] = bcs
dFdu = self._assemble_dFdu_adj(dFdu_adj_form, **kwargs)
dFdu = firedrake.assemble(dFdu_adj_form, bcs=bcs, **self.assemble_kwargs)

for bc in bcs:
bc.apply(dJdu)
bc.zero(dJdu)

adj_sol = firedrake.Function(self.function_space)
firedrake.solve(
@@ -219,10 +217,8 @@ def _assemble_and_solve_adj_eq(self, dFdu_adj_form, dJdu, compute_bdy):
return adj_sol, adj_sol_bdy

def _compute_adj_bdy(self, adj_sol, adj_sol_bdy, dFdu_adj_form, dJdu):
adj_sol_bdy = firedrake.Function(
self.function_space.dual(), dJdu.dat - firedrake.assemble(
firedrake.action(dFdu_adj_form, adj_sol)).dat)
return adj_sol_bdy
adj_sol_bdy = firedrake.assemble(dJdu - firedrake.action(dFdu_adj_form, adj_sol))
return adj_sol_bdy.riesz_representation("l2")

def evaluate_adj_component(self, inputs, adj_inputs, block_variable, idx,
prepared=None):
@@ -264,8 +260,11 @@ def evaluate_adj_component(self, inputs, adj_inputs, block_variable, idx,
return dFdm

dFdm = -firedrake.derivative(F_form, c_rep, trial_function)
dFdm = firedrake.adjoint(dFdm)
dFdm = dFdm * adj_sol
if isinstance(dFdm, ufl.Form):
dFdm = firedrake.adjoint(dFdm)
dFdm = firedrake.action(dFdm, adj_sol)
else:
dFdm = dFdm(adj_sol)
dFdm = firedrake.assemble(dFdm, **self.assemble_kwargs)
return dFdm

@@ -654,9 +653,8 @@ def _forward_solve(self, lhs, rhs, func, bcs, **kwargs):
def _adjoint_solve(self, dJdu, compute_bdy):
dJdu_copy = dJdu.copy()
# Homogenize and apply boundary conditions on adj_dFdu and dJdu.
bcs = self._homogenize_bcs()
for bc in bcs:
bc.apply(dJdu)
for bc in self.bcs:
bc.zero(dJdu)

if (
self._ad_solvers["forward_nlvs"]._problem._constant_jacobian
@@ -876,7 +874,7 @@ def __init__(self, source, target_space, target, bcs=[], **kwargs):
self.add_dependency(bc, no_duplicates=True)

def apply_mixedmass(self, a):
b = firedrake.Function(self.target_space)
b = firedrake.Function(self.target_space.dual())
with a.dat.vec_ro as vsrc, b.dat.vec_wo as vrhs:
self.mixed_mass.mult(vsrc, vrhs)
return b
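The bc.apply → bc.zero switches above reflect that the homogenised adjoint right-hand side only needs its boundary entries zeroed, not boundary values inserted. A rough sketch of the distinction; the mesh, space and boundary value are illustrative, not from the PR:

    from firedrake import *

    mesh = UnitSquareMesh(4, 4)
    V = FunctionSpace(mesh, "CG", 1)
    v = TestFunction(V)
    bc = DirichletBC(V, 1.0, "on_boundary")

    # The adjoint right-hand side is an assembled 1-form (a cofunction).
    dJdu = assemble(inner(Constant(1.0), v) * dx)

    # bc.zero only zeroes the entries attached to boundary nodes; bc.apply would
    # also insert the boundary value, which the adjoint system must not see.
    bc.zero(dJdu)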
13 changes: 8 additions & 5 deletions firedrake/adjoint_utils/variational_solver.py
@@ -2,6 +2,7 @@
from functools import wraps
from pyadjoint.tape import get_working_tape, stop_annotating, annotate_tape, no_annotations
from firedrake.adjoint_utils.blocks import NonlinearVariationalSolveBlock
from firedrake.ufl_expr import derivative, adjoint
from ufl import replace


@@ -11,7 +12,6 @@ def _ad_annotate_init(init):
@no_annotations
@wraps(init)
def wrapper(self, *args, **kwargs):
from firedrake import derivative, adjoint, TrialFunction
init(self, *args, **kwargs)
self._ad_F = self.F
self._ad_u = self.u_restrict
@@ -20,10 +20,13 @@ def wrapper(self, *args, **kwargs):
try:
# Some forms (e.g. SLATE tensors) are not currently
# differentiable.
dFdu = derivative(self.F,
self.u_restrict,
TrialFunction(self.u_restrict.function_space()))
self._ad_adj_F = adjoint(dFdu)
dFdu = derivative(self.F, self.u_restrict)
try:
self._ad_adj_F = adjoint(dFdu)
except ValueError:
# Try again without expanding derivatives,
# as dFdu might have been simplified to an empty Form
self._ad_adj_F = adjoint(dFdu, derivatives_expanded=True)
except (TypeError, NotImplementedError):
self._ad_adj_F = None
self._ad_kwargs = {'Jp': self.Jp, 'form_compiler_parameters': self.form_compiler_parameters, 'is_linear': self.is_linear}
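A condensed stand-alone sketch of the fallback added above: derivative is called without an explicit TrialFunction (one is supplied internally), and if taking the adjoint fails because the expanded derivative simplifies to an empty Form, it is retried with derivatives_expanded=True. The residual form below is a placeholder, not from the PR:

    from firedrake import *
    from firedrake.ufl_expr import derivative, adjoint

    mesh = UnitSquareMesh(4, 4)
    V = FunctionSpace(mesh, "CG", 1)
    u = Function(V)
    v = TestFunction(V)
    F = inner(grad(u), grad(v)) * dx - inner(Constant(1.0), v) * dx

    # derivative() supplies the TrialFunction itself, so none is passed explicitly.
    dFdu = derivative(F, u)
    try:
        adj_F = adjoint(dFdu)
    except ValueError:
        # Mirror the wrapper above: retry without re-expanding derivatives in case
        # expansion simplified dFdu to an empty Form.
        adj_F = adjoint(dFdu, derivatives_expanded=True)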