Document Storage Project
- Part 1: What we’re doing
- Part 2: Setting up a base system
- Part 3: Configuring Apache
- Part 4: Indexing the Storage
- Part 5: Uploading Scanned Images
This is Part 6: Tying it all together.
All that’s left to do now is write a script that will:
- Detect when a new file’s been uploaded.
- Turn it into a searchable PDF with OCR.
- Put the finished PDF in a suitable directory so we can easily browse for it later.
This is actually pretty easy: inotifywait(1) will tell us whenever a file has been closed, and we can use that as our trigger to OCR the document.
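With the options used in the script below, each completed upload shows up as a single line on stdout, roughly like this (the filename is just an example, and the exact event text can vary slightly between inotify-tools versions; the "Setting up watches" chatter goes to stderr, which is why the script discards it):

```bash
# Watch the incoming directory by hand; each upload prints one line of the
# form "<events> <filename>" (the filename below is hypothetical).
inotifywait -m --format '%:e %f' -e CLOSE_WRITE /home/incoming
# CLOSE_WRITE:CLOSE invoices-gas-bill.pdf
```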
Our script is therefore in two parts:
- Part 1 will watch the /home/incoming directory for any files that are closed after writing.
- Part 2 will be called by the script in Part 1 every time that happens.
Part 1
This script lives in /home/scripts and is called watch-dir.
```bash
#!/bin/bash

INCOMING="/home/incoming"
# Directory this script lives in, so we can find process-image.
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

# Fire once for every file closed after writing in the incoming directory;
# everything after the first space in the output line is the filename.
inotifywait -m --format '%:e %f' -e CLOSE_WRITE "${INCOMING}" 2>/dev/null | while read LINE
do
    FILE="${INCOMING}"/`echo ${LINE} | cut -d" " -f2-`
    # Background the OCR so a slow job doesn't hold up the next event.
    "${DIR}"/process-image "${FILE}" &
done
```
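Before wiring this into the boot process it's worth a quick manual test; something along these lines should work (the scan filename is just an example of the classification-first naming used below):

```bash
# Terminal 1: run the watcher in the foreground.
/home/scripts/watch-dir

# Terminal 2: simulate an upload from the scanner (hypothetical filename).
cp ~/scans/invoices-gas-bill.pdf /home/incoming/
```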
Part 2
This script lives in /home/scripts and is called process-image.
```bash
#!/bin/bash
# Dead easy - at least in theory!
# Take a single argument - filename of the file to process.
# Do all the necessary processing to make it a
# searchable PDF.

OUTFILE="`basename "${1}"`"
TEMPFILE="`mktemp`"

if [ -s "${1}" ]
then
    # We use the first part of the filename as a classification.
    CLASSIFICATION=`echo ${OUTFILE} | cut -f1 -d"-"`
    OUTDIR="/home/http/documents/${CLASSIFICATION}/`date +%Y`/`date +%Y-%m`/`date +%Y-%m-%d`"

    if [ ! -d "${OUTDIR}" ]
    then
        mkdir -p "${OUTDIR}" || exit 1
    fi

    # We have to move our file to a temporary location right away because
    # otherwise pdfsandwich uses the file's own location for
    # temporary storage. Well and good - but the file's location is
    # subject to an inotify that will call this script!
    mv "${1}" "${TEMPFILE}" || exit 1

    # Have we a colour or a mono image? Probably quicker to find out
    # and process accordingly rather than treat everything as RGB.
    # We assume the first page is representative of everything.
    COLOURDEPTH=`convert "${TEMPFILE}[0]" -verbose -identify /dev/null 2>/dev/null | grep "Depth:" | awk -F'[/-]' '{print $2}'`
    if [ "${COLOURDEPTH}" -gt 1 ]
    then
        SANDWICHOPTS="-rgb"
    fi

    pdfsandwich ${SANDWICHOPTS} -o "${OUTDIR}/${OUTFILE}" "${TEMPFILE}" > /dev/null 2>&1
    rm "${TEMPFILE}"
fi
```
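To make the naming convention concrete, here is a hypothetical example (the filename and date are invented, not from the scripts themselves):

```bash
# A scan uploaded on 2025-06-12 as
#
#   /home/incoming/invoices-gas-bill.pdf
#
# is classified as "invoices" (everything before the first "-") and the
# searchable PDF lands at
#
#   /home/http/documents/invoices/2025/2025-06/2025-06-12/invoices-gas-bill.pdf
```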
There’s just one thing missing: pdfsandwich. This is actually something I found elsewhere on the web. It hasn’t made it into any of the major distro repositories as far as I can tell, but it’s easy enough to compile and install yourself. Find it here.
Run /home/scripts/watch-dir every time we boot – the easiest way to do this is to include a line in /etc/rc.local that calls it:
```bash
/home/scripts/watch-dir &
```
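On Debian-style systems /etc/rc.local normally ends with an `exit 0`, so the call has to go before that line; a minimal sketch, assuming your init system still runs rc.local at boot:

```bash
#!/bin/sh -e
#
# /etc/rc.local - executed at the end of boot (assuming it is executable
# and still honoured by your init system). Start the directory watcher.

/home/scripts/watch-dir &

exit 0
```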
Get it started now (unless you were planning on rebooting):
```bash
nohup /home/scripts/watch-dir &
```
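If you want to confirm it's still running later, something like this will show the process (assuming a procps-style pgrep that supports -a):

```bash
# List any running watch-dir process with its full command line.
pgrep -af watch-dir
```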
Now you should be able to scan in documents; they'll be automatically OCR'd and made available on the internal website you set up in Part 3.
Further enhancements are left to the reader; suggestions include:
- Automatically notifying sphider-plus to reindex when a document is added. (You’ll need a newer version of sphider-plus to do this. Unfortunately there is a cost associated with this, but it’s pretty cheap. Get it from here).
- There is a bug in pdfsandwich (actually, I think the bug is probably in tesseract or hocr2pdf, both of which are called by pdfsandwich): under circumstances I haven't been able to pin down, one page of a multi-page document will sometimes show only the OCR'd text layer in the finished PDF, not the original scanned image. Track down this bug, fix it, and notify the maintainer of the appropriate package so that the fix can go upstream.
- This isn’t terribly good for bulk scanning – if you want to scan in 50 one-page documents, you have to scan them individually, otherwise they’ll be treated as a single 50-page document. Edit the scripts so there's a way to tell them that certain documents should be split into their constituent pages, with each page stored as its own PDF (one possible approach is sketched after this list).
- Like all OCR-based solutions, this won’t give you a perfect representation of the source text in the finished PDF. But I’m quite sure the accuracy can be improved, very likely without having to make significant changes to how this operates. Carry out some experiments to figure out optimum settings for accuracy and edit the scripts accordingly.
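For the bulk-scanning item, one possible approach (not part of the original scripts) is to reserve a filename prefix such as "bulk-" and split those PDFs into single pages before they're OCR'd. A rough sketch using pdfseparate from poppler-utils – the prefix, the helper name, and where you call it from are all assumptions of mine:

```bash
#!/bin/bash
# split-bulk (hypothetical helper): split a multi-page PDF into one PDF per
# page and feed the pages back through the normal incoming/process-image flow.
# Usage: split-bulk /home/incoming/bulk-invoices.pdf

IN="${1}"
BASE="`basename "${IN}" .pdf`"
TMPDIR="`mktemp -d`"

# pdfseparate writes one file per page: ...-page-1.pdf, ...-page-2.pdf, ...
pdfseparate "${IN}" "${TMPDIR}/${BASE}-page-%d.pdf" || exit 1

for PAGE in "${TMPDIR}"/*.pdf
do
    # cp rather than mv so each page generates a CLOSE_WRITE event for
    # watch-dir; strip the "bulk-" prefix so the real classification is used.
    cp "${PAGE}" "/home/incoming/`basename "${PAGE}" | sed 's/^bulk-//'`"
done

rm -rf "${TMPDIR}"
rm -f "${IN}"
```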