FEniCS interface does not work in parallel #99

Open

bonh opened this issue Aug 18, 2017 · 18 comments
Comments

@bonh

bonh commented Aug 18, 2017

docker: quay.io/fenicsproject/stable:1.6.0
dolfin-version: 1.6.0
git rev-parse HEAD for pragmatic: 303f2aa298c545368b0a88bc127e665f16577a0a
patch: fenics-1.6.0.txt

1 core:
mpirun -np 1 python2 mesh_metric2_example.py &> log_np1.txt
log_np1.txt

2 cores:
mpirun -np 2 python2 mesh_metric2_example.py &> log_np2.txt
log_np2.txt

Any pointers? :-)

@croci

croci commented Aug 18, 2017

These test scripts have not been updated for the most recent FEniCS versions.

However, some of the main routines in the python interface (I think: adapt, mesh_metric, detect_colinearity, patchwise_projection) currently do not work in parallel. Help in this regard would be much appreciated :)

@croci

croci commented Aug 18, 2017

Making mesh_metric and patchwise_projection work in parallel should not be hard; the colinearity detection routine, however, has to be completely rewritten (although if you are doing refinement, or if you already know the boundary regions for coarsening, it is not needed).
Beyond that, I do not know what would need to be done to make the adapt routine work in parallel. @KristianE86, any thoughts on this?

@taupalosaurus
Contributor

FWIW, I don't know what is needed in the python script, but the parallel interface of pragmatic should be used (the functions with mpi in the name). I can explain how it works if it is not clear. We briefly mentioned working on that with @pefarrell a couple of months ago.

@croci

croci commented Aug 18, 2017

@taupalosaurus I do remember; let us meet up in the second half of September or in October.

@bonh
Author

bonh commented Aug 18, 2017

If you provide some pointers on what to do, I would try to help, or review code.

@croci

croci commented Aug 18, 2017

Thanks!

FYI: it is likely that we will be working on the upgrade in a month or two, so I doubt that the parallel issue will be fixed before then. It is on the to-do list, though.

@bonh
Author

bonh commented Sep 2, 2017

Nice, looking forward to it!

@ghost

ghost commented Nov 15, 2017

What's the current status of FEniCS + Pragmatic? Does it work in parallel? Does it work with the latest version?

@croci

croci commented Nov 15, 2017 via email

@KristianE86
Contributor

It is also somewhat broken in 3D, because of the current refinement algorithm.

@croci

croci commented Nov 15, 2017

Let us make a distinction here. The python interface only works in serial, but it does work in 3D. The main pragmatic code does not work in parallel in 3D anyway; is this what you meant, @KristianE86?

@KristianE86
Contributor

No, I mean that the current 3D refinement algorithm produces bad meshes. This makes the use of the library for 3D applications questionable.

@croci croci changed the title Parallel run fails using the python interface and FEniCS 1.6.0 Python interface does not work in parallel Nov 15, 2017
@croci croci changed the title Python interface does not work in parallel FEniCS interface does not work in parallel Nov 15, 2017
@taupalosaurus
Contributor

@balborian the version of pragmatic in master is parallel, but the "halos" are partially frozen. I have a better parallel version in a branch, which is expected to be merged soonish. If there is demand, I can:

  • work with @croci to make the interface with FEniCS parallel
  • speed up development of a better parallelism in pragmatic

@KristianE86 Sorry for being blunt, but please avoid comments that are not relevant to the question. Refinement is not broken; it can be improved, like other routines, and I am working on it. However, the current refinement works well enough in many cases, and it has little to do with the status of parallelism with FEniCS.

@KristianE86
Contributor

The question was whether the library works with FEniCS. The current 3D refinement algorithm breaks most iterative solvers because the elements are so bad; depending on the application, this may or may not be a major issue.
Everyone has an interest in honesty. Better no users than disappointed users.

@knepley
Collaborator

knepley commented Nov 15, 2017 via email

@ghost

ghost commented Nov 15, 2017

Any chance that the mesh remains serial, but FEniCS still does the assembly and solving in parallel?
Also, what about its integration with Firedrake? What's the current status?

@croci

croci commented Nov 15, 2017

@balborian Yes, you can. You just need to use pragmatic on one core and distribute the mesh afterwards. The MPI routines and mpi_comm_self() / mpi_comm_world() might be useful. For the moment, a simple way of doing this (which might be inefficient, or not what you want) is: load the mesh you want to use with pragmatic with mpi_comm_self(), so that it is not distributed; use pragmatic on it; save it with HDF5File; then load it in parallel using HDF5File and mpi_comm_world(). An alternative is to just use pragmatic on the side, save the meshes you need, then load them in parallel (see the sketch below).

The optimal option is to use pragmatic with mpi_comm_self() and then distribute the mesh without saving it to file, but I do not know whether this is possible in FEniCS; you could ask on Allanswered (https://www.allanswered.com/follow/community/s/fenics-project/). In any case, this is a FEniCS question.
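
A minimal sketch of the save-and-reload workaround described above, assuming the legacy DOLFIN Python API of that era (mpi_comm_self() / mpi_comm_world(), HDF5File). The mesh construction, the file name adapted_mesh.h5, and the adaptation step are placeholders for illustration, not the actual calls from mesh_metric2_example.py:

```python
from dolfin import (UnitSquareMesh, Mesh, HDF5File, MPI,
                    mpi_comm_self, mpi_comm_world)

comm_world = mpi_comm_world()

if MPI.rank(comm_world) == 0:
    # Build (or load) the mesh on the self communicator so it is NOT distributed.
    serial_mesh = UnitSquareMesh(mpi_comm_self(), 32, 32)

    # ... run the pragmatic-based adaptation on serial_mesh here (placeholder) ...

    # Write the adapted mesh to HDF5 from this one rank only.
    out = HDF5File(mpi_comm_self(), "adapted_mesh.h5", "w")
    out.write(serial_mesh, "mesh")
    out.close()

MPI.barrier(comm_world)  # make sure the file exists before all ranks read it

# Read the mesh back on the world communicator; DOLFIN distributes it across ranks.
parallel_mesh = Mesh(comm_world)
h5 = HDF5File(comm_world, "adapted_mesh.h5", "r")
h5.read(parallel_mesh, "mesh", False)  # False: repartition, do not reuse stored partition
h5.close()

# parallel_mesh can now be used for parallel assembly and solves as usual.
```

Reading the file back on the world communicator lets DOLFIN repartition the mesh across ranks, so assembly and solving run in parallel even though the adaptation itself was serial.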

I do not know about Firedrake, sorry. However, the problem with pragmatic in parallel is not FEniCS-related; it lies in the main Pragmatic code, so the same parallel issue would remain with Firedrake (see @taupalosaurus's reply above).

@taupalosaurus
Contributor

Concerning Firedrake, there is an a priori working parallel interface in a branch, which is not merged into master yet because they want some parallel function interpolation first. Note that in this branch the halos are also partially frozen, so refinement works better than coarsening, but this is about to change.
