-
I'm aware it's possible to map a camera's color to a reference ColorChecker. I'm curious whether something similar could be done for camera-to-camera color response matching. Correct me if I'm wrong, but from my very limited understanding of color science: if the spectral sensitivity functions of both cameras are known, then instead of using a chart-based system, it should be possible to use the spectral data as the observer and generate virtual test patches to map out the color differences between them. Could this be done with the API?
-
Hi @qiuboujun,

Yes, this is entirely possible and I do it from time to time. Colour ships with a training dataset from KODAK that contains samples that were picked to optimise metamerism. You can get those with the `colour.characterisation.aces_it.read_training_data_rawtoaces_v1` definition, or alternatively, they are in colour-datasets: `colour_datasets.load('3372171')`.

With those samples, you can generate two sets (reference and test values) for your reference and test cameras and a chosen illuminant using the `colour.msds_to_XYZ` definition, then with those two sets, you can use `colour.matrix_colour_correction` to produce a transform mapping the two.

Note that in your case, you cannot really use a perceptually uniform colourspace or any Delta-E metrics, because they only really make sense for a Standard Observer.

Let us know how it goes!
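The workflow described above can be sketched end-to-end in NumPy alone, without the colour library. Everything in this example is synthetic: the Gaussian camera sensitivities, the equal-energy illuminant, and the random smooth reflectances standing in for the KODAK training samples are all made up for illustration. The structure, however, mirrors the described steps: integrate sensitivity × illuminant × reflectance per virtual patch for each camera, then solve a least-squares 3x3 matrix mapping one camera's responses to the other's.

```python
import numpy as np

wl = np.arange(400, 701, 10.0)  # wavelengths in nm, 31 samples

def gaussian(mu, sigma):
    # Smooth spectral curve used to fabricate sensitivities and reflectances.
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Hypothetical spectral sensitivities for two cameras (rows: R, G, B).
camera_ref = np.stack([gaussian(600, 30), gaussian(540, 30), gaussian(460, 30)])
camera_test = np.stack([gaussian(610, 35), gaussian(550, 28), gaussian(470, 25)])

illuminant = np.ones_like(wl)  # equal-energy stand-in for a real illuminant SPD

# Virtual "chart": 24 random smooth reflectances in place of the KODAK set.
rng = np.random.default_rng(0)
reflectances = np.clip(
    sum(rng.uniform(0, 1, (24, 1)) * gaussian(mu, 60) for mu in (450, 550, 650)),
    0, 1)

def expose(sensitivities, reflectances):
    # Integrate sensitivity * illuminant * reflectance over wavelength
    # to get one RGB triplet per virtual patch.
    return (reflectances * illuminant) @ sensitivities.T

rgb_ref = expose(camera_ref, reflectances)    # (24, 3) reference camera responses
rgb_test = expose(camera_test, reflectances)  # (24, 3) test camera responses

# Least-squares 3x3 matrix M such that rgb_test @ M approximates rgb_ref.
M, *_ = np.linalg.lstsq(rgb_test, rgb_ref, rcond=None)
```

In the real workflow, `rgb_ref` and `rgb_test` would come from measured sensitivities via `colour.msds_to_XYZ`, and the final fit from `colour.matrix_colour_correction`.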
-
Thanks @KelSolaar, that’s exactly what I’m looking for! One more question: is it possible to use a 3D LUT instead of the 3x3 matrix to map between the two cameras? Or do I have to use a chart-based system in conjunction with the 3x3 matrix to account for the nonlinear part of the camera response? The only reason I ask is that I’m trying to map from a digital camera to film, and film isn’t exactly linear like most digital cameras.
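Between a plain 3x3 matrix and a full 3D LUT there is a middle ground worth noting: polynomial colour correction, which colour exposes through `colour.colour_correction` (methods such as "Cheung 2004", "Finlayson 2015", and "Vandermonde"). The sketch below shows the underlying idea in NumPy only, with entirely synthetic data: a made-up "film" response built from a gamma curve plus channel crosstalk, fitted with a second-order polynomial expansion. A real digital-to-film mapping would use measured samples and may still want a LUT for the parts a polynomial cannot capture.

```python
import numpy as np

def expand(rgb):
    # Second-order polynomial expansion of RGB triplets:
    # R, G, B, RG, GB, RB, R^2, G^2, B^2, 1.
    r, g, b = rgb.T
    return np.stack(
        [r, g, b, r * g, g * b, r * b, r**2, g**2, b**2, np.ones_like(r)],
        axis=-1)

rng = np.random.default_rng(0)
rgb_camera = rng.uniform(0, 1, (50, 3))  # synthetic digital-camera responses

# Hypothetical nonlinear "film" response: gamma curve plus channel crosstalk.
crosstalk = np.array([[0.90, 0.05, 0.05],
                      [0.10, 0.80, 0.10],
                      [0.05, 0.05, 0.90]])
rgb_film = (rgb_camera ** 0.6) @ crosstalk

# Fit the polynomial correction by linear least squares on expanded terms.
M_poly, *_ = np.linalg.lstsq(expand(rgb_camera), rgb_film, rcond=None)

# For comparison: a plain 3x3 matrix fit on the raw RGB values.
M_lin, *_ = np.linalg.lstsq(rgb_camera, rgb_film, rcond=None)
```

Because the polynomial basis includes the linear terms, the expanded fit can only match or beat the 3x3 matrix on the training samples; the usual caveat is that higher-order fits extrapolate poorly outside the sampled gamut.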