How does the picture compare tolerance work, especially with 16-bit-per-channel images?
We are trying to compare two 16-bit-per-channel TIFF files. The tolerance can be set from 0 to 255. Is the comparison done in 8 bits, so that a tolerance of 1 actually means a difference of 256 in the original image? If so, a true 16-bit comparison mode would be nice.
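A minimal sketch of the behaviour we suspect, assuming the tool reduces each channel to 8 bits before comparing (the function `within_tolerance_8bit` is hypothetical, not part of the tool's API):

```python
import numpy as np

def within_tolerance_8bit(a16: np.ndarray, b16: np.ndarray, tol_8bit: int) -> bool:
    """Compare two 16-bit images after reducing each channel to 8 bits."""
    a8 = (a16 >> 8).astype(np.uint8)  # drop the low byte: 0..65535 -> 0..255
    b8 = (b16 >> 8).astype(np.uint8)
    diff = np.abs(a8.astype(np.int16) - b8.astype(np.int16))
    return bool(diff.max() <= tol_8bit)

# Two pixels that differ by 255 in 16-bit terms map to the same 8-bit value,
# so even a tolerance of 0 reports them as equal.
a = np.array([[0]], dtype=np.uint16)
b = np.array([[255]], dtype=np.uint16)
print(within_tolerance_8bit(a, b, tol_8bit=0))  # True
```

If this is what happens internally, a tolerance of 1 would allow 16-bit differences of roughly 256, which is why a native 16-bit tolerance mode would help us.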