Re: SANE & exposure times

Ewald R. de Wit (ewald@pobox.com)
Thu, 29 Jul 1999 15:54:09 +0200

Andreas Rick (rickand@gemse.fr) wrote:
> When you say "make sense out of the RGB values" do you mean:
> "get some calibrated interpretation of the RGB values"?

What I mean is simply that if you scan, say, a negative, and the R:G:B
exposure times are say 1:2:3 (blue gets exposed 3x longer than red, etc.),
then I would like to know this exact ratio of R:G:B so that you can know
how red the original red was, etc.

> > That is, the frontend should
> > check these options and if they are present then it should divide the
> > RGB values by their corresponding exposure time.
>
> This will require transmitting the data in a format that is linear
> with exposure, which requires 16-bit data to be transferred to the
> frontend and also that the LUT is applied in the frontend.

Yep. I don't believe in doing the gamma LUT in the scanner anyway... the
frontend can do a much better, realtime, WYSIWYG job of it.
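
For example (just a sketch; the exposure times would have to come from
whatever options the backend exports, and the function and names here are
made up), the frontend could rescale the linear 16-bit data like this:

    #include <stddef.h>
    #include <stdint.h>

    /* Scale each channel back to the shortest exposure so that equal
     * values again mean equal light, keeping the data linear. */
    void normalize_exposure(uint16_t *rgb, size_t n_pixels,
                            double t_r, double t_g, double t_b)
    {
        double t_min = t_r < t_g ? (t_r < t_b ? t_r : t_b)
                                 : (t_g < t_b ? t_g : t_b);
        double scale[3] = { t_min / t_r, t_min / t_g, t_min / t_b };

        for (size_t i = 0; i < n_pixels; i++)
            for (int c = 0; c < 3; c++)
                rgb[3 * i + c] =
                    (uint16_t)(rgb[3 * i + c] * scale[c] + 0.5);
    }

Since this keeps the data linear, the gamma LUT can still be applied
afterwards in the frontend.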

> If they were absolute we could do scanning with an absolute reference
> like optical density (= log(attenuation)). That could be nice too.
> This will make professional quality scans possible where the result
> is really as scanner-independent as possible within the limits
> of the scanner's capabilities.

Yes, that is another good point. I would like to see a
SANE_NAME_OPTICAL_DENSITY_RANGE option so that the frontend can
convert the scan data back to film density. I think a single OD range
option is sufficient for doing that (no need for separate RGB ranges,
is there?)
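
For example, assuming the sample values are linear in transmitted light
and the backend reports the maximum density it can resolve (the names
here are invented), the frontend could do something like:

    #include <math.h>
    #include <stdint.h>

    /* Convert one linear 16-bit sample back to optical density.
     * od_max would come from the proposed OD range option. */
    double sample_to_density(uint16_t value, double od_max)
    {
        double transmission, od;

        if (value == 0)
            return od_max;               /* clip to the scanner's limit */
        transmission = (double)value / 65535.0;
        od = -log10(transmission);       /* OD = log10(1/transmission) */
        return od > od_max ? od_max : od;
    }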

> > So where should we put it, in the frontend or up the backend?

> I will probably do a test to implement this in the Coolscan backend,
> but this doesn't prevent the frontend developers from including
> a better version (the two approaches are not exclusive).

Yes, we can explore both ways.

[next article]
> I don't have the LS-2000 so I don't know whether the
> multiple scanning is done by moving the scanner head
> multiple times over the image or whether the head does
> only one cycle but each line is scanned multiple times.

The LS2000 does multisampling without moving the scan head.

> While I am trying to implement this functionality
> into the SANE-backend I had some doubts if this feature
> should not rather be implemented in the frontends.
> They have everything in their power to do so.

If it's done in the frontend we're gonna need to document the protocol
to do this in SANE.

> If I want to do it in the backend I have to store the
> whole image (up to 70MB on the LS-30).

As a quick solution you can write it to a file, keep adding new
scans to it, and export only the final result.
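
Something along these lines could do it (a rough sketch only, all names
made up): each pass's 16-bit lines get added into a 32-bit accumulator
file, so the full image never has to sit in memory.

    #include <stdint.h>
    #include <stdio.h>

    /* Add one scan line into a 32-bit accumulator file at the right
     * offset; on the first pass the accumulator starts from zero. */
    int accumulate_line(FILE *acc, long line_no, const uint16_t *line,
                        size_t n_samples, int first_pass)
    {
        uint32_t buf[4096];
        long offset = line_no * (long)(n_samples * sizeof(uint32_t));
        size_t i;

        if (n_samples > 4096)
            return -1;

        if (first_pass) {
            for (i = 0; i < n_samples; i++)
                buf[i] = 0;
        } else if (fseek(acc, offset, SEEK_SET) != 0 ||
                   fread(buf, sizeof(uint32_t), n_samples, acc) != n_samples) {
            return -1;
        }

        for (i = 0; i < n_samples; i++)
            buf[i] += line[i];

        if (fseek(acc, offset, SEEK_SET) != 0 ||
            fwrite(buf, sizeof(uint32_t), n_samples, acc) != n_samples)
            return -1;
        return 0;
    }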

> The exposure-related idea is: multi-exposure scanning.
> Instead of scanning all the images of a multi-scanning
> process with the same exposure level, we might want to
> scan one image with the exposure calculated by the scanner,
> one with double and one with 4 times the exposure.
> We then fuse the images by choosing the 3rd scan for
> all pixel values where the detector was not saturated during the
> third scan (usually 25% of the dynamic range), the second
> image for the remaining pixels where the second image was not
> saturated, and the first for the rest.
> Of course we have to scale the values to the same exposure
> level before the fusion (-> divide by exposure).

I think that is a brilliant idea!
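
Roughly, the fusion could work per pixel like this (only a sketch under
the assumptions above; the saturation threshold and all names are made
up):

    #include <stdint.h>

    #define SATURATION_LIMIT 65000   /* assumed near-full-scale threshold */

    /* samples[i] was scanned with relative exposure exposure[i]
     * (e.g. 1, 2, 4), sorted from shortest to longest exposure.
     * Pick the longest unsaturated exposure and scale it back to the
     * base exposure. */
    uint16_t fuse_pixel(const uint16_t *samples, const double *exposure,
                        int n_exposures)
    {
        int i;

        for (i = n_exposures - 1; i >= 0; i--) {
            if (samples[i] < SATURATION_LIMIT || i == 0) {
                double v = samples[i] / exposure[i];
                return v > 65535.0 ? 65535 : (uint16_t)(v + 0.5);
            }
        }
        return samples[0];   /* not reached */
    }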

> The advantage of this method is to add two bits of resolution
> (or reduce the noise by a factor of sqrt(4)=2) with only 2-3 scans.

The scanner noise stays the same, but your signal increases 4x, so
your SNR improves by a hefty 4x. Or put differently, you add log10(4) =
0.6 to your OD range. An OD of 3.6!!! oh baby

I wish I had this Nikon, but I only have an HP Photosmart, which has
horrible positioning accuracy and only goes to 2x exposure for
slides...

-- 
  --  Ewald

--
Source code, list archive, and docs: http://www.mostang.com/sane/
To unsubscribe: echo unsubscribe sane-devel | mail majordomo@mostang.com