Support for Linear Algebra ops #173
base: master
Conversation
-- Added matrix determinant op (eager & lazy)
-- Forgot to commit the modification to
-- Added logMatrixDeterminant op
-- Added matrix inversion op.
-- Added matrix solve function
-- Added least squares matrix solve op.
-- Added triangular matrix solve op.
-- Added Cholesky and GradCholesky op.
-- Added SelfAdjointEig (v2), QR, SVD ops.
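As a rough illustration of the eager (Tensor-based) versions of a few of these ops (the op names and the `tfi` namespace used below are assumptions about this PR's API, not confirmed names):

```scala
import org.platanios.tensorflow.api._

// Hypothetical eager usage of some of the ops added in this PR.
// `tfi.matrixDeterminant`, `tfi.matrixSolve`, and `tfi.cholesky` are assumed names.
object LinalgEagerSketch {
  def main(args: Array[String]): Unit = {
    // A symmetric positive-definite 2x2 matrix and a right-hand side.
    val a = Tensor(Tensor(4.0, 2.0), Tensor(2.0, 3.0))
    val b = Tensor(Tensor(1.0), Tensor(2.0))

    val det  = tfi.matrixDeterminant(a)  // 4 * 3 - 2 * 2 = 8
    val x    = tfi.matrixSolve(a, b)     // solves a * x = b
    val chol = tfi.cholesky(a)           // lower-triangular L with L * L^T = a

    println(det)
    println(x)
    println(chol)
  }
}
```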
@mandar2812 Thanks for this PR and sorry for the slow review, but I have been very busy with multiple projects lately. I'll go through it in detail, but a quick question: why did you have to implement these ops in C++ yourself? I believe there already exist TF ops for several of these operations, with kernels implemented for both CPUs and GPUs.
@eaplatanios I didn't implement them in C++, I only added them in the Scala API using the following procedure you (probably) described in another issue (#32).
After compiling and building, I saw that the jni export task defined in the sbt build generated/updated some C++ headers, which JNI uses to interface between the Scala and C++ APIs (if I have understood this Scala-JNI-native stack correctly :D).
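For reference, the wrapping on the Scala side looks roughly like the sketch below; the builder signature is an approximation from memory (and `IsRealOrComplex` is the evidence type added in this PR), so treat the details as assumptions rather than the actual code.

```scala
// Approximate sketch of exposing an existing TF kernel ("MatrixDeterminant")
// through the Scala API. The Op.Builder signature shown here is an
// approximation and may differ from the library's actual API.
def matrixDeterminant[T: TF : IsRealOrComplex](
    input: Output[T],
    name: String = "MatrixDeterminant"
): Output[T] = {
  Op.Builder[Output[T], Output[T]](
    opType = "MatrixDeterminant",  // name of the already-registered C++ op
    name = name,
    input = input
  ).build().output
}
```

The point is that no new C++ kernels are involved: the Scala wrapper only names the existing registered op, and the JNI layer (via the generated headers) dispatches to the kernels that ship with TensorFlow.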
@eaplatanios Let's revisit this PR sometime!
@mandar2812 I'm happy to take a look at this now. Sorry for the super massive delay. I just merged changes that make TF Scala work with TF 2.x and also add support for Scala 2.13 (I had to drop support for Scala 2.11 as some of the dependencies have also dropped support -- e.g., circe). Would you like to update this PR so it works with the current master and then I can take a look?
@eaplatanios Yes, I will update this with the latest changes in master. A quick question: which version of TensorFlow 2.x did you use to build master? A specific tag or commit hash would be helpful.
Great, thanks Mandar! The published images use 2.1.1, but I tested with all of the 2.x releases and all compiled and passed tests.
@eaplatanios So I pulled your changes and I can merge the source tree just fine :) The only issue now is in building: I see a bunch of native compilation errors around Eigen. I have TF 2.2.0 installed in my local Python environment.
Could these errors be due to some C++ flags or GCC version issues? Can you tell me what environment setup I would need to get the repo to compile? I can try to replicate your setup. EDIT: I see these lines in
Is this causing the errors above? The Eigen source code which produces all of them seems to be in the directory
Linear Algebra Support
Created a starting point for linear algebra support (solving #84). Added eager and lazy versions of ops.
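A hedged sketch of what the lazy (graph `Output`) form looks like next to the eager one; the op name, placeholder call, and session details below are assumptions and may not match the PR exactly:

```scala
import org.platanios.tensorflow.api._

// Lazy (graph-mode) usage sketch: the op builds an Output node instead of
// computing a Tensor immediately; a Session evaluates it later.
// `tf.matrixDeterminant` is an assumed name for the op added in this PR.
object LinalgLazySketch {
  def main(args: Array[String]): Unit = {
    val input = tf.placeholder[Double](Shape(2, 2))
    val det   = tf.matrixDeterminant(input)  // no computation happens yet

    val session = Session()
    val result  = session.run(
      feeds = Map(input -> Tensor(Tensor(4.0, 2.0), Tensor(2.0, 3.0))),
      fetches = det)
    println(result)  // expected: 8.0
    session.close()
  }
}
```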
Also added a new type check, IsRealOrComplex, to check whether the underlying type of a tensor or output is float, double, or complex (a short sketch of the idea follows below).
Ops added: matrix determinant and log matrix determinant, matrix inversion, matrix solve, least squares solve, triangular solve, Cholesky (and its gradient), self-adjoint eigendecomposition (v2), QR, and SVD.
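A minimal, self-contained sketch of the idea behind IsRealOrComplex, written as a plain type class so it compiles on its own; the names and stand-in complex types are illustrative, and the actual definition in this PR presumably plugs into the library's existing data-type machinery:

```scala
// Self-contained sketch of an IsRealOrComplex evidence type. Everything here
// (names, stand-in complex types) is illustrative only.
object IsRealOrComplexSketch {

  // Stand-ins for complex element types.
  final case class ComplexFloat(real: Float, imaginary: Float)
  final case class ComplexDouble(real: Double, imaginary: Double)

  // Evidence that T is a real or complex floating-point type.
  trait IsRealOrComplex[T]

  object IsRealOrComplex {
    private def ev[T]: IsRealOrComplex[T] = new IsRealOrComplex[T] {}
    implicit val floatEv: IsRealOrComplex[Float]                 = ev
    implicit val doubleEv: IsRealOrComplex[Double]               = ev
    implicit val complexFloatEv: IsRealOrComplex[ComplexFloat]   = ev
    implicit val complexDoubleEv: IsRealOrComplex[ComplexDouble] = ev
  }

  // A linear-algebra op can then require the evidence at compile time.
  def matrixDeterminant[T: IsRealOrComplex](matrix: Seq[Seq[T]]): Unit =
    println(s"would compute the determinant of a ${matrix.length}x${matrix.head.length} matrix")

  def main(args: Array[String]): Unit = {
    matrixDeterminant(Seq(Seq(4.0, 2.0), Seq(2.0, 3.0)))  // OK: Double has evidence
    // matrixDeterminant(Seq(Seq(1, 2), Seq(3, 4)))       // would not compile: Int lacks evidence
  }
}
```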
@eaplatanios @sbrunk @lucataglia @DirkToewe: What do you guys think?