
Cuneiform-Sign-Detection-Code

This repository contains the code for the article:

Dencker, T., Klinkisch, P., Maul, S. M., and Ommer, B. (2020): Deep Learning of Cuneiform Sign Detection with Weak Supervision using Transliteration Alignment. PLOS ONE 15(12), pp. 1–21. https://doi.org/10.1371/journal.pone.0243039

This repository contains the code to run the proposed iterative training procedure as well as code to evaluate and visualize the detection results. We also provide pre-trained models of the cuneiform sign detector for Neo-Assyrian script, obtained after completed iterative training on the Cuneiform Sign Detection Dataset. Finally, we make available a web application for the analysis of images of cuneiform clay tablets with the help of a pre-trained cuneiform sign detector.

Repository description

Use cases

Pre-processing

As a pre-processing step, line detections are obtained for all tablet images of the training data before iterative training begins.
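
For orientation, here is a minimal sketch of what this step amounts to, assuming a standard PyTorch segmentation interface; the actual network class, checkpoint handling, and post-processing live in this repository, and the names below are illustrative:

```python
# Minimal sketch of the pre-processing step: run a line segmentation network
# over a tablet image. Names and interfaces are illustrative, not the repo API.
import torch
from torchvision import transforms
from PIL import Image

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Assumes the checkpoint stores the full model object (an assumption).
line_net = torch.load('lineNet_basic_vpub.pth', map_location=device)
line_net.eval()

image = Image.open('tablet.jpg').convert('L')          # grayscale tablet photo
tensor = transforms.ToTensor()(image).unsqueeze(0).to(device)

with torch.no_grad():
    line_map = line_net(tensor)  # per-pixel line probabilities (assumed output)
```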

Training

Iterative training alternates between generating aligned and placed detections and training a new sign detector (an outline is sketched after this list):

  1. use the command-line scripts (scripts/generate/) to run the alignment and placement steps of iterative training
  2. use the Jupyter notebooks (experiments/sign_detector/) for the sign detector training step of iterative training
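
The overall loop can be summarized as follows; this is an illustrative outline only, with the step functions passed in as callables because the real implementations live in the scripts and notebooks above:

```python
# Illustrative outline of one round of iterative training. The callables
# detect/align/place/train stand in for the repository's scripts and notebooks.
def training_iteration(detector, collections, detect, align, place, train):
    raw = detect(detector, collections)   # raw sign detections on training tablets
    aligned = align(raw, collections)     # alignment with transliterations
    placed = place(aligned, collections)  # placement step for remaining signs
    return train(aligned + placed)        # train the next sign detector version
```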

To keep track of the sign detector and the generated sign annotations of each iteration of iterative training (stored in results/), we follow the convention of labeling the sign detector with a model version (e.g. v002), which is also used to label the raw, aligned and placed detections based on this detector. Besides providing a model version, the user also selects which subsets of the training data to use for generating new annotations. In particular, subsets of the SAAo collections (e.g. saa01, saa05, saa08) are selected when running the scripts under scripts/generate/. To enable evaluation on the test set, it is necessary to include the collections test and saa06.
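
For example, a generation run might be parameterized as follows (hypothetical names, for illustration only):

```python
# Hypothetical parameters of an annotation-generation run (illustrative only).
model_version = "v002"                      # labels the detector and its
                                            # raw/aligned/placed detections
collections = ["saa01", "saa05", "saa08",   # SAAo training subsets
               "test", "saa06"]             # required for test-set evaluation
```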

Evaluation

Use the test sign detector notebook to test the performance (mAP) of the trained sign detector on the test set or other subsets of the dataset. In experiments/alignment_evaluation/ you will find further notebooks for evaluating and visualizing line-level and sign-level alignments as well as TP/FP statistics for raw, aligned and placed detections (at full-tablet and crop level).
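
As a reminder of what the reported mAP measures, here is a generic sketch of per-class average precision at an IoU threshold of 0.5; this is the standard computation, not the repository's evaluation code:

```python
# Generic sketch of average precision (AP) at IoU 0.5 for one sign class;
# mAP is the mean of this quantity over all sign classes.
import numpy as np

def iou(box_a, box_b):
    # boxes as (x1, y1, x2, y2)
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def average_precision(detections, gt_boxes, iou_thresh=0.5):
    # detections: list of (score, box); gt_boxes: list of boxes
    detections = sorted(detections, key=lambda d: -d[0])
    matched = set()
    tp = np.zeros(len(detections))
    for i, (_, box) in enumerate(detections):
        best_j, best_iou = -1, iou_thresh
        for j, gt in enumerate(gt_boxes):
            if j not in matched and iou(box, gt) >= best_iou:
                best_j, best_iou = j, iou(box, gt)
        if best_j >= 0:           # greedy match: detection counts as true positive
            matched.add(best_j)
            tp[i] = 1
    cum_tp = np.cumsum(tp)
    recall = cum_tp / max(len(gt_boxes), 1)
    precision = cum_tp / (np.arange(len(detections)) + 1)
    # area under the precision-recall curve (101-point interpolation)
    ap = 0.0
    for t in np.linspace(0, 1, 101):
        prec_at = precision[recall >= t].max() if np.any(recall >= t) else 0.0
        ap += prec_at / 101
    return ap
```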

Pre-trained models

We provide pre-trained models in the form of PyTorch model files for the line segmentation network as well as the sign detector.

| Model name | Model type | Train annotations |
| --- | --- | --- |
| lineNet_basic_vpub.pth | line segmentation | 410 lines |

For the sign detector, we provide the best weakly supervised model (fpn_net_vA) and the best semi-supervised model (fpn_net_vF).

| Model name | Model type | Weak supervision in training | Annotations in training | mAP on test_full |
| --- | --- | --- | --- | --- |
| fpn_net_vA.pth | sign detector | saa01, saa05, saa08, saa10, saa13, saa16 | None | 45.3 |
| fpn_net_vF.pth | sign detector | saa01, saa05, saa08, saa10, saa13, saa16 | train_full (4663 bboxes) | 65.6 |
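
Assuming a standard PyTorch checkpoint, a pre-trained detector could be loaded as sketched below; the detector class itself is part of this repository, so the loading call is illustrative:

```python
# Illustrative loading of a pre-trained sign detector checkpoint.
# Assumes the .pth file stores the full model object (an assumption).
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
detector = torch.load('fpn_net_vF.pth', map_location=device)
detector.eval()  # switch to inference mode before running detection
```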

Web application

We also provide a demo web application that enables a user to apply a trained cuneiform sign detector to a large collection of tablet images. The code of the web front-end is available in the webapp repo. The back-end code is part of this repository and is located in lib/webapp/. Below you will find a short animation of how the sign detector is used with this web interface.
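
To make the front-end/back-end split concrete, a minimal back-end endpoint might look like the following sketch (using Flask as an assumed framework; the actual back-end in lib/webapp/ may differ):

```python
# Minimal sketch of a detection endpoint for the web application back-end.
# Flask is an assumed framework here; run_sign_detector is a stub placeholder.
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_sign_detector(image_bytes):
    # Placeholder for model inference; would return a list of detections,
    # each with bounding box, sign class and confidence score.
    return []

@app.route('/detect', methods=['POST'])
def detect():
    image_bytes = request.files['image'].read()
    detections = run_sign_detector(image_bytes)
    return jsonify(detections)
```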

Cuneiform font

For visualization of the cuneiform characters, we recommend installing the Unicode Cuneiform Fonts by Sylvie Vanseveren.

Installation

Software

Install the general dependencies, in particular PyTorch and Jupyter.

Clone this repository and place the cuneiform-sign-detection-dataset in the ./data sub-folder.
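
A quick sanity check that the dataset landed in the expected location (the exact sub-folder name is an assumption based on the dataset's repository name):

```python
# Illustrative check that the dataset was placed under ./data as instructed.
from pathlib import Path

dataset_dir = Path('./data/cuneiform-sign-detection-dataset')  # assumed layout
assert dataset_dir.exists(), \
    "place the cuneiform-sign-detection-dataset in the ./data sub-folder first"
```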

Hardware

Training and evaluation can be performed on a machine with a single GPU (we used a GeForce GTX 1080). The demo web application can run on a web server without GPU support, since detection inference with a lightweight MobileNetV2 backbone is fast even in CPU-only mode (less than 1 s for an image at HD resolution, less than 10 s at 4K resolution).
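
To illustrate why CPU-only inference is feasible, the following snippet times a stock torchvision MobileNetV2 backbone on an HD-sized input; this is a generic illustration, not the repository's detector:

```python
# Generic CPU timing of a MobileNetV2 backbone (torchvision) on an HD image,
# illustrating why the web back-end does not need a GPU. Not the repo's model.
import time
import torch
from torchvision.models import mobilenet_v2

backbone = mobilenet_v2().features.eval()  # feature extractor only
image = torch.randn(1, 3, 1080, 1920)      # roughly HD resolution

with torch.no_grad():
    start = time.time()
    features = backbone(image)
    print(f"backbone forward pass: {time.time() - start:.2f}s on CPU")
```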

References

This repository also includes external code. In particular, we want to mention: