Like a QR code (which can still be decoded reliably even when some of its pixels are damaged), we would like to investigate creating a perceptual hash (or universal ID) for our books.
How can we create a reliable hash for a book that is resilient to OCR errors, to differences in image color, position, and orientation, and to the absence of identifiers like an ISBN?
See https://www.youtube.com/watch?v=DfWLBzArzKE for technical context and ideas.
One possible path is creating a weighted fingerprint vector of numerical scores based on attributes of the text in the book, such as the title, word frequencies, and/or the first few (e.g. 10) words on each page.
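As a rough sketch of that idea, the fingerprint could be a sparse term-to-weight map built from those text attributes. The specific weights below (title words x3, page-opening words x2, frequency x1) and the top-k truncation are arbitrary placeholders, not tuned values:

```python
import re
from collections import Counter


def tokenize(text: str) -> list[str]:
    """Lowercase and split into word tokens (very crude normalization)."""
    return re.findall(r"[a-z']+", text.lower())


def fingerprint(title: str, pages: list[str],
                top_k: int = 50, first_n: int = 10) -> dict[str, float]:
    """Build a weighted bag-of-words fingerprint from a book's text.

    All weights here are illustrative placeholders.
    """
    scores: Counter = Counter()
    for word in tokenize(title):
        scores[word] += 3.0          # title words weighted heavily
    for page in pages:
        words = tokenize(page)
        for word in words[:first_n]:
            scores[word] += 2.0      # first N words of each page
        for word in words:
            scores[word] += 1.0      # overall word frequency
    # keep only the top-k terms so the vector stays compact
    return dict(scores.most_common(top_k))
```

Truncating to the top-k terms keeps the fingerprint small enough to store and compare cheaply, at the cost of some discriminative power.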
Why?
This allows us to check whether two books are the same (or, more broadly, how similar they are) even when no ISBN is present.
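Given two such fingerprint vectors (represented here as sparse term-to-weight dicts, an assumed representation), the "how similar" question could be answered with cosine similarity, one of several reasonable choices:

```python
import math


def similarity(fp_a: dict[str, float], fp_b: dict[str, float]) -> float:
    """Cosine similarity between two sparse fingerprint vectors.

    Returns a value in [0, 1]: 1.0 for identical direction, 0.0 for
    no shared terms (or an empty fingerprint).
    """
    dot = sum(w * fp_b.get(term, 0.0) for term, w in fp_a.items())
    norm_a = math.sqrt(sum(w * w for w in fp_a.values()))
    norm_b = math.sqrt(sum(w * w for w in fp_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)
```

A "same book" decision would then be a threshold on this score; where that threshold sits would need to be determined empirically against real scans.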
Considerations
The potential value of such a hash lies in determining whether one wants to acquire a book (and/or whether one already has it), so one consideration is whether the hash can be computed without digitizing the whole book or requiring a significant amount of manual human effort. It could also drive an iterative process that alerts the operator during digitization when a duplicate is detected.
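One candidate technique for that partial-scan check is SimHash (named here as an alternative framing, not something the note itself prescribes): it maps text to a fixed-width hash such that similar texts produce hashes with a small Hamming distance, so the first few scanned pages could be hashed and compared against a library index before the whole book is digitized:

```python
import hashlib
import re


def simhash(text: str, bits: int = 64) -> int:
    """64-bit SimHash of a text.

    Each word votes +1/-1 per bit position based on its own hash;
    the sign of each position's total gives the output bit.
    """
    v = [0] * bits
    for word in re.findall(r"[a-z']+", text.lower()):
        h = int.from_bytes(hashlib.md5(word.encode()).digest()[:8], "big")
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")
```

An operator-facing workflow might hash each book's opening pages at scan time and raise an alert when the Hamming distance to an existing entry falls below some empirically chosen cutoff.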
Questions
How do we handle similar editions?