building examples with CUDA on RTX 4070 #280
Hi,
I'm trying to compile the lib examples with CUDA 12 on an RTX 4070 GPU, but I'm getting this error:
The GPU's supported architectures are:
If I change the CMakeLists.txt to choose the architecture from the GPU itself by modifying
to
it will select a supported architecture and compile the solver_cuda module (with some warnings), but the overall build will fail due to a series of compilation errors like:
How can I fix this so that the amgcl examples compile with CUDA 12?
My system settings:
Comments
There is a configuration option for the target GPU architecture: https://github.com/ddemidov/amgcl/blob/master/CMakeLists.txt#L161 It has some outdated archs there; try to set it to just the one you need.
Thanks for the reply. Changing the following line makes it possible to circumvent the architecture problem, but it generates the other errors I've pointed out. I've changed CUDA_TARGET_ARCH to the architectures supported by the GPU as follows
and again, it circumvents the architecture problem, but the compilation errors show up again, though in another module:
Those errors were generated with the Pascal architecture, which is the oldest one that the RTX 4070 supports.
Sorry, I've just noticed that you tried with "Auto". I guess I'll need to do some digging, as I don't have a device to test this.
Thank you. I've also used a recompiled version of GCC as part of a CFD software environment, hence my interest in AMGCL. I'm updating my environment and the NVIDIA CUDA toolkit as well, to verify whether the problem is related to GCC, as this is a possibility (avx512fp16intrin.h is a header from GCC, not NVCC). For now, I've disabled the CUDA part in CMakeLists to test the CPU part of the library.
I've managed to solve this problem here. I was using the CUDA from the Ubuntu repo, and that was the source of the problem regarding the headers. Using the CUDA install from the NVIDIA HPC repo solved the problem, and I was finally able to compile and run amgcl with CUDA 12.x on the RTX 4070. It's necessary to make this change in CMakeLists.txt to ensure a compatible architecture is used:
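A sketch of such a change, assuming the RTX 4070's Ada architecture (compute capability 8.9) and a CMake/FindCUDA version recent enough to recognize it, would be to point CUDA_TARGET_ARCH at just that architecture, e.g.:

cmake -DCUDA_TARGET_ARCH="8.9" ..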
I've tested the Poisson problem from the tutorials (https://amgcl.readthedocs.io/en/latest/tutorial/poisson3Db.html) on CUDA/GPU and on the CPU, and found the results interesting. But the problem is rather small, and the CUDA version ends up being slower thanks to the overhead of CPU-GPU communication. I've changed the Stokes tutorial (https://amgcl.readthedocs.io/en/latest/tutorial/Stokes.html) to use the GPU, using the Poisson CUDA code as an example, but I'm having problems converting the binary matrices used in Stokes to the MatrixMarket format used in Poisson. Is there any tool to make this conversion, or a tutorial on how to use the binary matrices/vectors with amgcl on CUDA?
That's great, thanks for letting me know! There are ./examples/mm2bin and ./examples/bin2mm utilities:
./bin2mm --help
Options:
-h [ --help ] Show this help.
-d [ --dense ] Matrix is dense (use it with the RHS file).
-i [ --input ] arg Input binary file.
-o [ --output ] arg Output matrix in the MatrixMarket format.
Also, if you just want to test a 3D Poisson problem, you could run
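For example, to convert the ucube system mentioned later in this thread (the file names are taken from the Stokes tutorial data; the RHS is dense, hence -d):

./examples/bin2mm -i ucube_3_A.bin -o ucube_3_A.mtx
./examples/bin2mm -d -i ucube_3_b.bin -o ucube_3_b.mtx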
Thanks for the reply again. I've used bin2mm to convert the matrices and vector of ucube, and I use this code to run a CUDA version of ucube:
#include <vector>
#include <iostream>
#include <tuple> // for std::tie
#include <amgcl/backend/cuda.hpp>
#include <amgcl/adapter/crs_tuple.hpp>
#include <amgcl/make_solver.hpp>
#include <amgcl/amg.hpp>
#include <amgcl/coarsening/smoothed_aggregation.hpp>
#include <amgcl/relaxation/spai0.hpp>
#include <amgcl/solver/bicgstab.hpp>
#include <amgcl/io/mm.hpp>
#include <amgcl/profiler.hpp>
int main(int argc, char *argv[]) {
// The matrix and the RHS file names should be in the command line options:
if (argc < 3) {
std::cerr << "Usage: " << argv[0] << " <matrix.mtx> <rhs.mtx>" << std::endl;
return 1;
}
// Show the name of the GPU we are using:
int device;
cudaDeviceProp prop;
cudaGetDevice(&device);
cudaGetDeviceProperties(&prop, device);
std::cout << prop.name << std::endl;
// The profiler:
amgcl::profiler<> prof("UCube4");
// Read the system matrix and the RHS:
ptrdiff_t rows, cols;
std::vector<ptrdiff_t> ptr, col;
std::vector<double> val, rhs;
prof.tic("read");
std::tie(rows, cols) = amgcl::io::mm_reader(argv[1])(ptr, col, val);
std::cout << "Matrix " << argv[1] << ": " << rows << "x" << cols << std::endl;
std::tie(rows, cols) = amgcl::io::mm_reader(argv[2])(rhs);
std::cout << "RHS " << argv[2] << ": " << rows << "x" << cols << std::endl;
prof.toc("read");
// We use the tuple of CRS arrays to represent the system matrix.
// Note that std::tie creates a tuple of references, so no data is actually
// copied here:
auto A = std::tie(rows, ptr, col, val);
// Compose the solver type
typedef amgcl::backend::cuda<double> Backend;
typedef amgcl::make_solver<
amgcl::amg<
Backend,
amgcl::coarsening::smoothed_aggregation,
amgcl::relaxation::spai0
>,
amgcl::solver::bicgstab<Backend>
> Solver;
// We need to initialize the CUSPARSE library and pass the handle to AMGCL
// in backend parameters:
Backend::params bprm;
cusparseCreate(&bprm.cusparse_handle);
// There is no way to pass the backend parameters without passing the
// solver parameters, so we also need to create those. But we can leave
// them with the default values:
Solver::params prm;
// Initialize the solver with the system matrix:
prof.tic("setup");
Solver solve(A, prm, bprm);
prof.toc("setup");
// Show the mini-report on the constructed solver:
std::cout << solve << std::endl;
// Solve the system with the zero initial approximation.
// The RHS and the solution vectors should reside in the GPU memory:
int iters;
double error;
thrust::device_vector<double> f(rhs);
thrust::device_vector<double> x(rows, 0.0);
prof.tic("solve");
std::tie(iters, error) = solve(f, x);
prof.toc("solve");
// Output the number of iterations, the relative error,
// and the profiling data:
std::cout << "Iters: " << iters << std::endl
<< "Error: " << error << std::endl
<< prof << std::endl;
}
With this code I was able to run ucube with CUDA, and except for two of the matrix/vector sets, the GPU version was faster than the CPU one. But I found the solution error to be very different between the CPU version and the GPU version. For example, using the pair ucube_3_A.bin, ucube_3_b.bin with stokes_ucube produces this error: 3.40046e-09 What am I doing wrong?
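For reference, the error above is the one reported by the solver. One way to double-check it is to copy the solution back to the host and compute the true relative residual directly from the CRS arrays (a sketch reusing the variables from the code above; thrust::copy comes with the thrust headers already pulled in by the CUDA backend):

std::vector<double> x_host(rows);
thrust::copy(x.begin(), x.end(), x_host.begin());
double num = 0, den = 0;
for (ptrdiff_t i = 0; i < rows; ++i) {
    // r_i = f_i - (A x)_i, computed from the raw CRS arrays:
    double r = rhs[i];
    for (ptrdiff_t j = ptr[i]; j < ptr[i + 1]; ++j)
        r -= val[j] * x_host[col[j]];
    num += r * r;
    den += rhs[i] * rhs[i];
}
std::cout << "True residual: " << std::sqrt(num / den) << std::endl; // needs <cmath>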
Also, there is another question. When running both versions, I kind of expected an overhead when using the GPU because of the additional communication over the PCIe bridge, which tends to be slower than the CPU-RAM bridge, but what I've found is that this communication overhead can severely impact the usefulness of the GPU. Backing my claims with data, I've run both cases. Except for the smaller mesh, the GPU solver performs faster than the CPU solver. But when I consider the whole time to run the simulation, which includes the time spent communicating with the GPU, the time gained by the GPU solver is greatly reduced by the time spent on CPU-GPU communication. Since I'm new to this GPU solver thing, is this behavior expected, or am I doing something wrong?
I'd say this is expected, especially if the solver is expensive to set up (e.g. when an ILU-type relaxation is used). If you cannot reuse the preconditioner across many time steps, then you may want to look at one that is simpler to set up, even if the solve takes more iterations.
This sometimes may be explained by numerical instability. For example, a parallel reduction (the vector sum in the inner product implementation) may give slightly different results depending on the order of operations, which may lead to different solutions, especially if the system matrix is badly conditioned.
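A minimal illustration of the effect (plain C++, not amgcl code): the same three numbers summed with a different grouping give different floating-point results.

#include <cstdio>
int main() {
    // Floating-point addition is not associative; a reduction that
    // changes the grouping of terms changes the rounding:
    float a = 1e8f, b = -1e8f, c = 1e-3f;
    std::printf("%g\n", (a + b) + c); // prints 0.001
    std::printf("%g\n", a + (b + c)); // prints 0 (c is absorbed by b)
}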
Thanks for the reply again. But I have new questions. Is the code I've used to port ucube to the GPU wrong? It is the snippet of code I've shown in the thread; I've replaced parts of stokes_ucube with parts of solver_cuda. About the error: in the benchmark case, the solution errors from the CPU and the GPU are pretty much the same, so in that case the matrix was well conditioned and there wasn't any numerical instability. But for ucube the matrix is badly conditioned, and this generates this kind of problem. Is that right? Assuming that is the case, how could I deal with this? I'm asking because I'm integrating amgcl into a CFD code which, more often than not, runs cases with very badly conditioned matrices. I know this because we use PETSc and, very commonly, PETSc has a very hard time solving the linear systems.
Thanks again for the reply. Is there any preconditioner I could use in amgcl to test this? I'm also going to test AMGX. I was planning to test that library in the future because, as far as I've checked, it only runs on the GPU, and I need an AMG library that works on both CPU and GPU, hence my interest in amgcl. But I'll check AMGX too. Thanks again.
The code looks fine to me.
Usually you need a preconditioner that is well-suited or even tailored to the problem. In the case of ucube you could use the Schur pressure correction composite preconditioner from the same tutorial the problem comes from.
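A rough, untested sketch of how that composition could look with the CUDA backend (the subsolver choices and the pmask pattern marking every 4th unknown as pressure are assumptions; they have to match the ucube discretization from the Stokes tutorial):

#include <amgcl/backend/cuda.hpp>
#include <amgcl/make_solver.hpp>
#include <amgcl/amg.hpp>
#include <amgcl/preconditioner/schur_pressure_correction.hpp>
#include <amgcl/coarsening/smoothed_aggregation.hpp>
#include <amgcl/relaxation/spai0.hpp>
#include <amgcl/relaxation/damped_jacobi.hpp>
#include <amgcl/relaxation/as_preconditioner.hpp>
#include <amgcl/solver/bicgstab.hpp>
#include <amgcl/solver/fgmres.hpp>

typedef amgcl::backend::cuda<double> Backend;
typedef amgcl::make_solver<
    amgcl::preconditioner::schur_pressure_correction<
        // Approximate solver for the flow (velocity) block:
        amgcl::make_solver<
            amgcl::relaxation::as_preconditioner<Backend, amgcl::relaxation::damped_jacobi>,
            amgcl::solver::bicgstab<Backend>
        >,
        // Approximate solver for the pressure (Schur complement) block:
        amgcl::make_solver<
            amgcl::amg<Backend, amgcl::coarsening::smoothed_aggregation, amgcl::relaxation::spai0>,
            amgcl::solver::bicgstab<Backend>
        >
    >,
    amgcl::solver::fgmres<Backend>
> Solver;

// Setup then looks like in the code earlier in the thread, plus the pressure mask:
// Backend::params bprm; cusparseCreate(&bprm.cusparse_handle);
// Solver::params prm;
// prm.precond.pmask.resize(rows);
// for (ptrdiff_t i = 0; i < rows; ++i) prm.precond.pmask[i] = (i % 4 == 3); // assumed pattern
// Solver solve(A, prm, bprm);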
Thanks again for the replies. I've tried to implement the Schur pressure preconditioner for CUDA, using the CPU version as a starting point, but the code doesn't even compile. Then, reading the docs for the ucube problem, there are instructions for using the VexCL library for a hybrid CPU-GPU solution, and there is also a ported version of ucube for VexCL. So I've downloaded the VexCL library and recompiled amgcl with it, but when I try to run the code, it crashes with a CUDA driver API error: stokes_ucube_cuda ldd:
How could I fix this error? Also, I've tested AMGX and it indeed has a lower setup time. I'll consider integrating AMGX into the CFD code as well.