Abbas development #244

Merged: 5 commits merged on Nov 26, 2024

Changes from all commits
File filter

Filter by extension

Filter by extension


Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
5 changes: 3 additions & 2 deletions .github/workflows/CI_parallel.yml

@@ -33,7 +33,7 @@ jobs:
     env:
       METIS_HOME: /home/runner/work/horses3d/horses3d/metis-5.1.0/build/Linux-x86_64
-      INTEL_COMPILER_DIR : /opt/intel/oneapi/compiler/2024.1
+      INTEL_COMPILER_DIR : /opt/intel/oneapi/compiler/2023.2.0
     # Steps represent a sequence of tasks that will be executed as part of the job

     steps:

@@ -68,7 +68,8 @@ jobs:
     - name: Install Intel oneAPI
       # UNCOMMENT TO USE CACHED IFORT
       # if: (steps.cache-intel-compilers.outputs.cache-hit != 'true')
-      run: sudo apt-get install intel-oneapi-compiler-fortran intel-oneapi-compiler-dpcpp-cpp intel-oneapi-mpi intel-oneapi-mpi-devel intel-oneapi-mkl-devel ninja-build
+      # run: sudo apt-get install intel-oneapi-compiler-fortran intel-oneapi-compiler-dpcpp-cpp intel-oneapi-mpi intel-oneapi-mpi-devel intel-oneapi-mkl-devel ninja-build
+      run: sudo apt-get install intel-oneapi-compiler-fortran-2023.2.0 intel-oneapi-compiler-dpcpp-cpp-and-cpp-classic-2023.2.0 intel-oneapi-mpi intel-oneapi-mpi-devel intel-oneapi-mkl-devel ninja-build

     # - name: cache-metis
     #   id: cache-metis
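The same two-part pin is applied to all five workflow files in this PR: INTEL_COMPILER_DIR is pointed at /opt/intel/oneapi/compiler/2023.2.0, and the apt package names are suffixed with that exact release so the toolchain no longer drifts when Intel publishes a newer oneAPI (the unpinned packages had moved on to 2024.1, which is presumably what broke these ifort-based builds). A minimal sketch of the pinned step, assuming Intel's oneAPI apt repository is already configured on the runner; the step name and the apt-get update line are illustrative, not part of this PR:

    - name: Install Intel oneAPI (pinned)
      run: |
        # Refresh package lists, then install version-suffixed packages.
        # Intel's apt repository publishes one package per release, so
        # 2023.2.0 stays installed even after newer oneAPI versions ship.
        sudo apt-get update
        sudo apt-get install -y \
          intel-oneapi-compiler-fortran-2023.2.0 \
          intel-oneapi-compiler-dpcpp-cpp-and-cpp-classic-2023.2.0 \
          intel-oneapi-mpi intel-oneapi-mpi-devel \
          intel-oneapi-mkl-devel ninja-build

Keeping INTEL_COMPILER_DIR in sync with the pinned version matters because oneAPI installs each release under its own directory; a mismatch leaves the workflow pointing at a compiler that was never installed.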
8 changes: 5 additions & 3 deletions .github/workflows/CI_sequential 1.yml

@@ -38,7 +38,7 @@ jobs:
           - compiler: ifort
             mkl: 'NO'
     env:
-      INTEL_COMPILER_DIR : /opt/intel/oneapi/compiler/2024.1
+      INTEL_COMPILER_DIR : /opt/intel/oneapi/compiler/2023.2.0

     # Steps represent a sequence of tasks that will be executed as part of the job

@@ -76,8 +76,10 @@ jobs:
     - name: Install Intel oneAPI
       # UNCOMMENT TO USE CACHED IFORT
       # if: (steps.cache-intel-compilers.outputs.cache-hit != 'true')
-      run: sudo apt-get install intel-oneapi-compiler-fortran intel-oneapi-compiler-dpcpp-cpp intel-oneapi-mpi intel-oneapi-mpi-devel intel-oneapi-mkl-devel ninja-build
-      # Runs a single command using the runners shell
+      # run: sudo apt-get install intel-oneapi-compiler-fortran intel-oneapi-compiler-dpcpp-cpp intel-oneapi-mpi intel-oneapi-mpi-devel intel-oneapi-mkl-devel ninja-build
+      run: sudo apt-get install intel-oneapi-compiler-fortran-2023.2.0 intel-oneapi-compiler-dpcpp-cpp-and-cpp-classic-2023.2.0 intel-oneapi-mpi intel-oneapi-mpi-devel intel-oneapi-mkl-devel ninja-build
+
+      # Runs a single command using the runners shell
     ##- name: Install gfortran
     ##  run: |
     ##    sudo add-apt-repository ppa:ubuntu-toolchain-r/test
5 changes: 3 additions & 2 deletions .github/workflows/CI_sequential 2.yml

@@ -43,7 +43,7 @@ jobs:
           - compiler: ifort
             hdf5: 'YES'
     env:
-      INTEL_COMPILER_DIR : /opt/intel/oneapi/compiler/2024.1
+      INTEL_COMPILER_DIR : /opt/intel/oneapi/compiler/2023.2.0
     # Steps represent a sequence of tasks that will be executed as part of the job

     steps:

@@ -82,7 +82,8 @@ jobs:
     - name: Install Intel oneAPI
       # UNCOMMENT TO USE CACHED IFORT
       # if: (steps.cache-intel-compilers.outputs.cache-hit != 'true')
-      run: sudo apt-get install intel-oneapi-compiler-fortran intel-oneapi-compiler-dpcpp-cpp intel-oneapi-mpi intel-oneapi-mpi-devel intel-oneapi-mkl-devel ninja-build
+      # run: sudo apt-get install intel-oneapi-compiler-fortran intel-oneapi-compiler-dpcpp-cpp intel-oneapi-mpi intel-oneapi-mpi-devel intel-oneapi-mkl-devel ninja-build
+      run: sudo apt-get install intel-oneapi-compiler-fortran-2023.2.0 intel-oneapi-compiler-dpcpp-cpp-and-cpp-classic-2023.2.0 intel-oneapi-mpi intel-oneapi-mpi-devel intel-oneapi-mkl-devel ninja-build
       # Runs a single command using the runners shell
     ##- name: Install gfortran
     ##  run: |
5 changes: 3 additions & 2 deletions .github/workflows/CI_sequential 3.yml

@@ -38,7 +38,7 @@ jobs:
           - compiler: ifort
             mkl: 'NO'
     env:
-      INTEL_COMPILER_DIR : /opt/intel/oneapi/compiler/2024.1
+      INTEL_COMPILER_DIR : /opt/intel/oneapi/compiler/2023.2.0
     # Steps represent a sequence of tasks that will be executed as part of the job

     steps:

@@ -75,7 +75,8 @@ jobs:
     - name: Install Intel oneAPI
       # UNCOMMENT TO USE CACHED IFORT
       # if: (steps.cache-intel-compilers.outputs.cache-hit != 'true')
-      run: sudo apt-get install intel-oneapi-compiler-fortran intel-oneapi-compiler-dpcpp-cpp intel-oneapi-mpi intel-oneapi-mpi-devel intel-oneapi-mkl-devel ninja-build
+      # run: sudo apt-get install intel-oneapi-compiler-fortran intel-oneapi-compiler-dpcpp-cpp intel-oneapi-mpi intel-oneapi-mpi-devel intel-oneapi-mkl-devel ninja-build
+      run: sudo apt-get install intel-oneapi-compiler-fortran-2023.2.0 intel-oneapi-compiler-dpcpp-cpp-and-cpp-classic-2023.2.0 intel-oneapi-mpi intel-oneapi-mpi-devel intel-oneapi-mkl-devel ninja-build
      # Runs a single command using the runners shell
     ##- name: Install gfortran
     ##  run: |
5 changes: 3 additions & 2 deletions .github/workflows/CI_sequential 4.yml

@@ -38,7 +38,7 @@ jobs:
           - compiler: ifort
             mkl: 'NO'
     env:
-      INTEL_COMPILER_DIR : /opt/intel/oneapi/compiler/2024.1
+      INTEL_COMPILER_DIR : /opt/intel/oneapi/compiler/2023.2.0
     # Steps represent a sequence of tasks that will be executed as part of the job

     steps:

@@ -75,7 +75,8 @@ jobs:
     - name: Install Intel oneAPI
       # UNCOMMENT TO USE CACHED IFORT
       # if: (steps.cache-intel-compilers.outputs.cache-hit != 'true')
-      run: sudo apt-get install intel-oneapi-compiler-fortran intel-oneapi-compiler-dpcpp-cpp intel-oneapi-mpi intel-oneapi-mpi-devel intel-oneapi-mkl-devel ninja-build
+      # run: sudo apt-get install intel-oneapi-compiler-fortran intel-oneapi-compiler-dpcpp-cpp intel-oneapi-mpi intel-oneapi-mpi-devel intel-oneapi-mkl-devel ninja-build
+      run: sudo apt-get install intel-oneapi-compiler-fortran-2023.2.0 intel-oneapi-compiler-dpcpp-cpp-and-cpp-classic-2023.2.0 intel-oneapi-mpi intel-oneapi-mpi-devel intel-oneapi-mkl-devel ninja-build
      # Runs a single command using the runners shell
     ##- name: Install gfortran
     ##  run: |
23 changes: 17 additions & 6 deletions Solver/src/MultiphaseSolver/SpatialDiscretization.f90

@@ -222,7 +222,6 @@ SUBROUTINE ComputeTimeDerivative( mesh, particles, time, mode, HO_Elements)
         real(kind=RP) :: sqrtRho, invMa2
         class(Element), pointer :: e

-
!$omp parallel shared(mesh, time)
!
!///////////////////////////////////////////////////

@@ -347,6 +346,7 @@ SUBROUTINE ComputeTimeDerivative( mesh, particles, time, mode, HO_Elements)
#ifdef _HAS_MPI_
!$omp single
         call mesh % UpdateMPIFacesSolution(NCOMP)
+        call mesh % GatherMPIFacesSolution(NCOMP)
!$omp end single
#endif
         end select

@@ -371,6 +371,7 @@ SUBROUTINE ComputeTimeDerivative( mesh, particles, time, mode, HO_Elements)
#ifdef _HAS_MPI_
!$omp single
         call mesh % UpdateMPIFacesGradients(NCOMP)
+        call mesh % GatherMPIFacesGradients(NCOMP)
!$omp end single
#endif
!

@@ -416,6 +417,7 @@ SUBROUTINE ComputeTimeDerivative( mesh, particles, time, mode, HO_Elements)
#ifdef _HAS_MPI_
!$omp single
         call mesh % UpdateMPIFacesSolution(NCONS)
+        call mesh % GatherMPIFacesSolution(NCONS)
!$omp end single
#endif
!

@@ -494,6 +496,14 @@ SUBROUTINE ComputeTimeDerivative( mesh, particles, time, mode, HO_Elements)
!$omp end do

         call ViscousDiscretization % LiftGradients( NCONS, NCONS, mesh , time , mGradientVariables)
+
+#ifdef _HAS_MPI_
+!$omp single
+         ! Not sure about the position of this w.r.t the MPI directly above
+         call mesh % UpdateMPIFacesGradients(NCONS)
+         call mesh % GatherMPIFacesGradients(NCONS)
+!$omp end single
+#endif
!
!        -----------------------
!        Compute time derivative
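The recurring edit in this subroutine pairs every UpdateMPIFacesSolution / UpdateMPIFacesGradients call with a new Gather counterpart of the same name and equation count. Judging from the names and their placement inside !$omp single regions, Update starts the exchange of face data with neighboring MPI ranks and Gather completes it, so neighbor values are guaranteed to sit in the face storage before the face integrals that follow consume them; the comment left in the last hunk ("Not sure about the position...") flags that the right place for the NCONS gradient exchange is still an open question. A schematic of the intended pattern, written as a sketch rather than repository code (NEQ stands in for NCOMP or NCONS):

    #ifdef _HAS_MPI_
    !$omp single
          ! start the exchange: post sends/receives of this rank's face values
          call mesh % UpdateMPIFacesSolution(NEQ)
          ! ... element-local work that needs no neighbor data could overlap here ...
          ! complete the exchange: wait and unpack neighbor values into face storage
          call mesh % GatherMPIFacesSolution(NEQ)
    !$omp end single
    #endif
          ! only now is it safe to evaluate numerical fluxes on MPI faces

Splitting the exchange into a start and a finish call is the usual non-blocking MPI idiom; without the Gather, a rank could compute fluxes against stale or uninitialized neighbor data, which fits the multiphase MPI fixes this PR bundles together.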
@@ -574,6 +584,7 @@ SUBROUTINE ComputeTimeDerivative( mesh, particles, time, mode, HO_Elements)
!$omp end do
         end select
!$omp end parallel
+
!
   END SUBROUTINE ComputeTimeDerivative
!

@@ -736,9 +747,9 @@ subroutine ComputeNSTimeDerivative( mesh , t )
            end do                ; end do                ; end do

-            do k = 0, e % Nxyz(3) ; do j = 0, e % Nxyz(2) ; do i = 0, e % Nxyz(1)
-               e % storage % QDot(:,i,j,k) = e % storage % QDot(:,i,j,k) / e % geom % jacobian(i,j,k)
-            end do                ; end do                ; end do
+            ! do k = 0, e % Nxyz(3) ; do j = 0, e % Nxyz(2) ; do i = 0, e % Nxyz(1)
+            !    e % storage % QDot(:,i,j,k) = e % storage % QDot(:,i,j,k) / e % geom % jacobian(i,j,k)
+            ! end do                ; end do                ; end do
         end associate
      end do
!$omp end do
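For context on the loop commented out above: the element residual is accumulated in reference coordinates, and dividing by the metric Jacobian at each quadrature node is what turns it into the physical-space time derivative (QDot = R / J, node by node). Commenting the loop out here only makes sense if the same scaling happens later in the call chain, for example once the MPI face contributions have been added; that reading is an inference from this diff, not something it states. Note that ComputeLaplacian below still performs the division in both its local and shared-face paths. The operation itself, as a stripped-down sketch:

    ! Map the accumulated residual from reference to physical space.
    ! Skipping this leaves QDot scaled by the cell Jacobian J(i,j,k).
    do k = 0, Nz ; do j = 0, Ny ; do i = 0, Nx
       QDot(:,i,j,k) = QDot(:,i,j,k) / jacobian(i,j,k)
    end do       ; end do       ; end do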
@@ -1176,7 +1187,7 @@ subroutine ComputeLaplacian( mesh , t)
         associate(e => mesh % elements(eID))
            if ( e % hasSharedFaces ) cycle
            call Laplacian_FacesContribution(e, t, mesh)
-
+
            do k = 0, e % Nxyz(3) ; do j = 0, e % Nxyz(2) ; do i = 0, e % Nxyz(1)
               e % storage % QDot(:,i,j,k) = e % storage % QDot(:,i,j,k) / e % geom % jacobian(i,j,k)
            end do                ; end do                ; end do

@@ -1217,7 +1228,7 @@ subroutine ComputeLaplacian( mesh , t)
      do eID = 1, size(mesh % elements)
         associate(e => mesh % elements(eID))
            if ( .not. e % hasSharedFaces ) cycle
-           call TimeDerivative_FacesContribution(e, t, mesh)
+           call Laplacian_FacesContribution(e, t, mesh)

            do k = 0, e % Nxyz(3) ; do j = 0, e % Nxyz(2) ; do i = 0, e % Nxyz(1)
               e % storage % QDot(:,i,j,k) = e % storage % QDot(:,i,j,k) / e % geom % jacobian(i,j,k)
6 changes: 3 additions & 3 deletions Solver/src/libs/discretization/EllipticBR2.f90

@@ -430,9 +430,9 @@ subroutine BR2_ComputeGradientFaceIntegrals( self, nGradEqn, e, mesh)
                  unStar => mesh % faces(e % faceIDs(EBOTTOM)) % storage(e % faceSide(EBOTTOM)) % unStar )

         do k = 0, e%Nxyz(3) ; do j = 0, e%Nxyz(2) ; do i = 0, e%Nxyz(1)
-           U_x(:,i,j) = U_x(:,i,j) - self % eta * unStar(:,1,i,j) * bv_z(k,LEFT) * invjac(i,i,j)
-           U_y(:,i,j) = U_y(:,i,j) - self % eta * unStar(:,2,i,j) * bv_z(k,LEFT) * invjac(i,i,j)
-           U_z(:,i,j) = U_z(:,i,j) - self % eta * unStar(:,3,i,j) * bv_z(k,LEFT) * invjac(i,i,j)
+           U_x(:,i,j) = U_x(:,i,j) - self % eta * unStar(:,1,i,j) * bv_z(k,LEFT) * invjac(i,j,k)
+           U_y(:,i,j) = U_y(:,i,j) - self % eta * unStar(:,2,i,j) * bv_z(k,LEFT) * invjac(i,j,k)
+           U_z(:,i,j) = U_z(:,i,j) - self % eta * unStar(:,3,i,j) * bv_z(k,LEFT) * invjac(i,j,k)
         end do ; end do ; end do
      end associate
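The three changed lines are an indexing fix, not a new scheme: the BR2 lifting contribution from the bottom face must be scaled by the inverse Jacobian at the volume node being updated, but the old code sampled invjac(i,i,j), duplicating the first index and dropping k entirely, so every z-level reused metric data from the wrong nodes. A stripped-down sketch of the corrected accumulation; array shapes are simplified, and the volume arrays are written here with a full (i,j,k) index, which the captured diff appears to truncate:

    ! BR2 lifting from the bottom (z-min) face: each volume node (i,j,k)
    ! receives the lifted face flux at (i,j), weighted by the boundary
    ! value of the basis at level k and by the inverse Jacobian at that
    ! same volume node, never at a permuted index.
    do k = 0, Nz ; do j = 0, Ny ; do i = 0, Nx
       U_x(:,i,j,k) = U_x(:,i,j,k) - eta * unStar(:,1,i,j) * bv_z(k,LEFT) * invjac(i,j,k)
    end do ; end do ; end do

The updated CylinderBR2 reference values later in this PR are consistent with this correction slightly changing the computed gradients.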
4 changes: 3 additions & 1 deletion Solver/src/libs/mesh/HexMesh.f90

@@ -2246,8 +2246,10 @@ subroutine HexMesh_SetConnectivitiesAndLinkFaces(self,nodes,facesList)
         call ConstructMPIFacesStorage(self % MPIfaces, NCONS, NGRAD, MPI_NDOFS)
#elif defined(INCNS)
         call ConstructMPIFacesStorage(self % MPIfaces, NCONS, NCONS, MPI_NDOFS)
-#elif defined(CAHNHILLIARD)
+#elif defined(CAHNHILLIARD) && !defined(MULTIPHASE)
         call ConstructMPIFacesStorage(self % MPIfaces, NCOMP, NCOMP, MPI_NDOFS)
+#elif defined(MULTIPHASE)
+        call ConstructMPIFacesStorage(self % MPIfaces, NCONS, NCONS, MPI_NDOFS)
#elif defined(ACOUSTIC)
         call ConstructMPIFacesStorage(self % MPIfaces, NCONS, NCONS, MPI_NDOFS)
#endif
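The guard ordering is the substance of this change: an #elif chain takes the first matching branch, and a multiphase build presumably defines both MULTIPHASE and CAHNHILLIARD (the multiphase solver transports a Cahn-Hilliard concentration field), so the old chain matched the CAHNHILLIARD branch first and sized the MPI face storage for NCOMP concentration unknowns instead of the NCONS unknowns the multiphase solver actually exchanges. Schematically, with the interpretation added as comments (the comments are not repository text):

    #elif defined(CAHNHILLIARD) && !defined(MULTIPHASE)
          ! pure Cahn-Hilliard build: faces exchange NCOMP concentration DOFs
          call ConstructMPIFacesStorage(self % MPIfaces, NCOMP, NCOMP, MPI_NDOFS)
    #elif defined(MULTIPHASE)
          ! multiphase build: faces must hold the NCONS flow unknowns
          call ConstructMPIFacesStorage(self % MPIfaces, NCONS, NCONS, MPI_NDOFS)

Undersized face buffers under the old ordering would truncate the MPI exchange for multiphase runs, which fits the GatherMPIFaces fixes made in SpatialDiscretization.f90 above.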
14 changes: 7 additions & 7 deletions Solver/test/NavierStokes/CylinderBR2/SETUP/ProblemFile.f90

@@ -552,16 +552,16 @@ SUBROUTINE UserDefinedFinalize(mesh, time, iter, maxResidual &
!
#if defined(NAVIERSTOKES)
            INTEGER :: iterations(3:7) = [100, 0, 0, 0, 0]
-           real(kind=RP), parameter :: residuals(5) = [ 8.94947477740774_RP, &
-                                                        18.0524814828053_RP, &
-                                                        0.188804475468846_RP, &
-                                                        24.2331142737927_RP, &
-                                                        244.034603817743_RP ]
+           real(kind=RP), parameter :: residuals(5) = [ 8.9494751074667516_RP, &
+                                                        18.052481444063439_RP, &
+                                                        0.1887988263729878_RP, &
+                                                        24.233109718227368_RP, &
+                                                        244.03459342403502_RP ]


            real(kind=RP), parameter :: wake_u = 8.381270411983929E-009_RP
-           real(kind=RP), parameter :: cd = 34.3031214698872_RP
-           real(kind=RP), parameter :: cl = -5.536320494302416E-003_RP
+           real(kind=RP), parameter :: cd = 34.303121634815788_RP
+           real(kind=RP), parameter :: cl = -5.536315782160184E-003_RP

            N = mesh % elements(1) % Nxyz(1) ! This works here because all the elements have the same order in all directions
            CALL initializeSharedAssertionsManager
@@ -524,7 +524,7 @@ SUBROUTINE UserDefinedFinalize(mesh, time, iter, maxResidual &
!        Local variables
!        ---------------
!
-           CHARACTER(LEN=29)             :: testName = "Re 200 Cylinder with Ducros Skewsymmetric and BR2"
+           CHARACTER(LEN=29)             :: testName = "Re 200 Cylinder with Ducros Skewsymmetric and BR1"
            REAL(KIND=RP)                 :: maxError
            REAL(KIND=RP), ALLOCATABLE    :: QExpected(:,:,:,:)
            INTEGER                       :: eID
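One detail worth flagging in this last hunk (whose file path is not captured in the diff above): the declaration is CHARACTER(LEN=29), while both the old and new literals are 49 characters long, so intrinsic assignment truncates the stored name to its first 29 characters and the trailing "...and BR2" / "...and BR1" never reaches testName. If the suffix matters for test reporting, the declared length would need to grow as well. A standalone illustration of the truncation (not repository code):

    program truncation_demo
       implicit none
       ! LEN=29 truncates the 49-character literal on initialization,
       ! so this prints "Re 200 Cylinder with Ducros S".
       character(len=29) :: testName = "Re 200 Cylinder with Ducros Skewsymmetric and BR1"
       print *, testName
    end program truncation_demo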