FEniCS interface does not work in parallel #99
Comments
These test scripts were not updated to the most recent FEniCS versions. However, some of the main routines in the python interface (I think: adapt, mesh_metric, detect_colinearity, patchwise_projection) currently do not work in parallel. I guess help in this regard would be much appreciated :) |
Making mesh_metric and patchwise_projection work in parallel should not be hard; the colinearity detection routine would have to be completely rewritten, though (but if you are only doing refinement, or if you already know the boundary regions for coarsening, it is not needed). |
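As a rough illustration of why the recovery step can be made parallel-safe with standard dolfin operations, here is a minimal sketch that recovers a Hessian with two plain global L2 projections. This is only a stand-in for pragmatic's patchwise projection, not its actual API; the example field, mesh, and spaces are made up for the demo and target the legacy dolfin Python interface.

```python
from dolfin import (UnitSquareMesh, FunctionSpace, VectorFunctionSpace,
                    TensorFunctionSpace, SpatialCoordinate, sin, grad, project)

# Toy setup: a scalar P1 field on a (possibly distributed) mesh.
# This script runs unchanged under mpirun.
mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)
x = SpatialCoordinate(mesh)
u = project(sin(10.0 * x[0]) * x[1] ** 2, V)

# Recover the gradient and then the Hessian with two global L2 projections.
# project() only needs globally assembled mass matrices and linear solves,
# so it works in parallel out of the box, unlike a patch-local projection.
g = project(grad(u), VectorFunctionSpace(mesh, "CG", 1))
H = project(grad(g), TensorFunctionSpace(mesh, "CG", 1))

# Building a metric from H (taking |H|, scaling to a target complexity, etc.)
# is a per-vertex operation on locally owned dofs, so it is also parallel-safe.
```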
FWIW, I don't know what is needed in the python script, but the parallel interface of pragmatic should be used (the functions with mpi in the name). I can explain how it works if it is not clear. We briefly mentioned working on that with @pefarrell a couple of months ago. |
@taupalosaurus I do remember; let us meet up in the second half of September or in October. |
If you provide some pointers on what to do, I would try to help, or review code. |
Thanks! FYI: it is likely that we will be working on the upgrade in a month or two, so I doubt that the parallel issue will be fixed before then. It is on the to-do list, though. |
Nice, looking forward to it! |
What's the current status of FEniCS+Pragmatic? Does it work in parallel? Does it work with the latest version? |
It works with the latest version; see the tests in pragmatic/python/tests.
It does not work in parallel (and it will probably not work in parallel for some time).
|
It is also somewhat broken in 3D, because of the current refinement algorithm. |
Let us make a distinction here. The python interface only works in serial, but it does work in 3D. The main pragmatic code does not work in parallel in 3D anyway; is this what you meant, @KristianE86? |
No, I mean that the current 3D refinement algorithm produces bad meshes. This makes the use of the library for 3D applications questionable. |
@balborian the version of pragmatic in master is parallel, but the "halos" are partially frozen. I have a better parallel version in a branch, which is expected to be merged soonish. If there is demand, I can:
@KristianE86 Sorry for being blunt, but please avoid comments that are not relevant to the question. Refinement is not broken; it can be improved, like other routines, and I am working on it. However, the current refinement works well enough in many cases, and it has little to do with the status of parallelism with FEniCS. |
The question was whether the library works with FEniCS. The current 3D refinement algorithm breaks most iterative solvers, because the elements are so bad. Depending on the application this may or may not be a major issue. |
On Wed, Nov 15, 2017 at 7:09 AM, Kristian Ejlebjærg Jensen wrote:
> The question was whether the library works with FEniCS. The current 3D refinement algorithm breaks most iterative solvers, because the elements are so bad.
This is a strange generalization to make. This obviously depends on:
- the indicator
- the problem
- the discretization
- the iterative method
For example, I have run examples which work, but it would be a stretch to conclude that it works all the time.
Matt
> … Depending on the application this may or may not be a major issue. Everyone has an interest in honesty. Better no users than disappointed users.
|
Any chance that the mesh adaptation remains serial but FEniCS still does the assembly and solving in parallel? |
@balborian Yes, you can: just run pragmatic on one core and distribute the mesh afterwards. The routines in MPI, together with mpi_comm_self() and mpi_comm_world(), might be useful. For the moment, a simple (possibly inefficient) way of doing this is to load the mesh you want to adapt with mpi_comm_self(), so that it is not distributed, run pragmatic on it, save the result with HDF5File, and then load it in parallel using HDF5File and mpi_comm_world(). An alternative is to use pragmatic on the side, save the meshes you need, and load them in parallel. The optimal option would be to use pragmatic with mpi_comm_self() and then distribute the mesh without saving it to file, but I do not know whether that is possible in FEniCS; you could ask on Allanswered (https://www.allanswered.com/follow/community/s/fenics-project/). This part is a FEniCS question, and I do not know about Firedrake, sorry. However, the problem with pragmatic in parallel is not FEniCS-related: it lies in the pragmatic main code, so the same issue with parallel computation would remain with Firedrake (see @taupalosaurus's reply above). |
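For illustration, a minimal sketch of the serial-adapt / parallel-reload workflow described above, written against the legacy dolfin Python API (mpi_comm_self, mpi_comm_world, MPI, HDF5File). The file names are placeholders, and the adapt()/metric call is only indicated as a comment, since the pragmatic python interface is serial-only; this is not pragmatic's own API.

```python
from dolfin import Mesh, HDF5File, MPI, mpi_comm_self, mpi_comm_world

comm = mpi_comm_world()

# Stage 1 (serial): rank 0 loads the mesh on a self communicator, so it is not
# distributed, adapts it, and writes the result to an HDF5 file.
if MPI.rank(comm) == 0:
    serial_mesh = Mesh(mpi_comm_self(), "input_mesh.xml")   # placeholder file name
    # adapted = adapt(serial_mesh, metric)   # pragmatic python interface (serial only)
    adapted = serial_mesh                    # stand-in so the sketch is runnable
    out = HDF5File(mpi_comm_self(), "adapted_mesh.h5", "w")
    out.write(adapted, "/mesh")
    out.close()

MPI.barrier(comm)  # make sure the file is on disk before all ranks read it

# Stage 2 (parallel): every rank reads the mesh on the world communicator;
# dolfin repartitions it across the ranks during the read.
parallel_mesh = Mesh()
infile = HDF5File(comm, "adapted_mesh.h5", "r")
infile.read(parallel_mesh, "/mesh", False)  # False: ignore the stored (serial) partition
infile.close()

# Assembly and solves on parallel_mesh now proceed in parallel as usual.
```

Run under mpirun, the serial adaptation happens once on rank 0 and the adapted mesh is then distributed across all ranks on read, which is the "save with HDF5File, reload with mpi_comm_world()" route mentioned above.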
Concerning Firedrake, there is a parallel interface in a branch that should work a priori; it is not merged into master yet because they want some parallel function interpolation first. Note that in this branch the halos are also partially frozen, so refinement works better than coarsening, but this is about to change. |
docker:
quay.io/fenicsproject/stable:1.6.0
dolfin-version:
1.6.0
git rev-parse HEAD for pragmatic:
303f2aa298c545368b0a88bc127e665f16577a0a
patch:
fenics-1.6.0.txt
1 core:
mpirun -np 1 python2 mesh_metric2_example.py &> log_np1.txt
log_np1.txt
2 cores:
mpirun -np 2 python2 mesh_metric2_example.py &> log_np2.txt
log_np2.txt
Any pointers? :-)