
Winners of the HDC2021

1st place: Theophil Trippe, Jan Macdonald & Maximilian März from TU Berlin and Martin Genzel from Utrecht University. GitHub_A

2nd place: Ji Li, Department of Mathematics, National University of Singapore. GitHub_B

3rd place: Daniël M. Pelt, LIACS, Leiden University, Leiden, The Netherlands. GitHub

All Registered Teams

This is the list of teams that registered for the challenge, together with links to the GitHub repositories they submitted.

  • 01 - Leiden University, LIACS - The Netherlands. GitHub
  • 02 - Indian Institute of Technology Madras, Chennai; Department of Engineering Design - India
  • 03 - DAMTP, University of Cambridge - United Kingdom
  • 04 - Dipartimento di Scienze Fisiche, Informatiche e Matematiche, Università degli Studi di Modena e Reggio Emilia - Italy. GitHub
  • 05 - University of Helsinki, The Department of Mathematics and Statistics - Finland
  • 06 - Heinrich Heine University Düsseldorf, Department of Computer Science - Germany. GitHub
  • 07 - Lomonosov Moscow State University, Faculty of Computational Mathematics and Cybernetics - Russia
  • 08 - Xiangtan University - China
  • 09 - University of Campinas (UNICAMP), School of Electrical and Computer Engineering - Brazil. GitHub_A GitHub_B
  • 10 - Mediterranean Institute of Technology, Software and Computer Systems Engineering - Tunisia
  • 11 - Center for Industrial Mathematics (ZeTeM) University of Bremen (et al.) - Germany/United Kingdom. GitHub_A GitHub_B GitHub_C
  • 12 - National University of Singapore, Department of Mathematics - Singapore. GitHub_A GitHub_B
  • 13 - Federal University of ABC; Center for Engineering, Modeling and Applied Social Sciences - Brazil. GitHub
  • 14 - University of Eastern Finland, Applied Physics - Finland
  • 15 - Technische Universität Berlin, Institute of Mathematics. Utrecht University, Mathematical Institute - Germany/The Netherlands. GitHub_A GitHub_B
  • 16 - Technical University of Denmark, DTU Compute - Denmark. GitHub_A GitHub_B
  • 17 - Argonne National Laboratory, X-ray Science Division - United States of America

Results

Figure 1: An example result of deconvolving the most difficult level.

You can download some of the deconvolution results here.

The table below shows the OCR scores obtained by the submitted algorithms on the test dataset, averaged within each blur category. Team ID numbers follow the list above; teams that submitted more than one algorithm are marked with the suffixes _A, _B, _C.

The stop category of an algorithm is the last blur category before the first one in which its average OCR score fell below the minimum threshold of 70%. For teams that submitted more than one algorithm, the team's stop category is the best among their submissions.

The last column shows the stop category for each team.
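
For readers who prefer the rule in executable form, here is a minimal sketch (Python, written for this page rather than taken from the official evaluation code) of how the stop category follows from a row of average scores:

```python
# Minimal sketch of the stop-category rule (illustrative only, not the
# official HDC2021 evaluation code). Assumes the average OCR scores of
# one submission are given as a plain list indexed by blur step 0..19.

THRESHOLD = 70.0  # minimum average OCR score required to pass a blur step

def stop_category(scores):
    """Last blur step before the first one whose score falls below the
    threshold; None if the submission already fails at step 0."""
    for step, score in enumerate(scores):
        if score < THRESHOLD:
            return step - 1 if step > 0 else None
    return len(scores) - 1  # never dropped below the threshold

# Team 01's row: the score dips below 70% at step 15, so the stop
# category is 14, matching the table.
scores_01 = [96.28, 95.22, 95.40, 96.28, 94.92, 96.12, 91.75, 93.28,
             91.65, 91.75, 88.67, 88.83, 87.12, 87.55, 84.88, 18.02,
             82.47, 70.30, 71.00, 67.17]
print(stop_category(scores_01))  # -> 14
```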

Team   step0  step1  step2  step3  step4  step5  step6  step7  step8  step9 step10 step11 step12 step13 step14 step15 step16 step17 step18 step19   stop
15_A   95.62  95.30  94.83  94.75  94.53  97.03  94.03  96.25  93.12  95.80  93.75  91.55  91.42  92.00  87.15  77.50  89.00  81.97  80.85  71.65     19
15_B   95.97  94.40  94.45  95.53  94.53  97.78  95.55  96.40  94.33  95.70  92.97  92.80  91.65  93.22  86.28  73.60  89.50  81.35  79.97  69.88
12_A   95.95  94.80  94.90  96.03  94.78  95.95  92.85  92.83  91.15  91.72  90.20  86.72  87.58  83.03  81.17  78.50  62.73  53.60  53.12  28.65
12_B   95.65  95.08  95.40  95.62  94.75  96.50  92.62  93.80  92.62  91.97  85.80  83.50  85.95  85.72  82.97  84.15  77.80  72.33  71.80  61.38     18
01     96.28  95.22  95.40  96.28  94.92  96.12  91.75  93.28  91.65  91.75  88.67  88.83  87.12  87.55  84.88  18.02  82.47  70.30  71.00  67.17     14
11_A   94.90  95.25  90.70  93.03  37.33  88.40  77.28  82.00  73.62  65.30  42.90  52.75   3.50   0.00  43.88   6.42   0.00   1.93   2.23   0.68
11_B   88.67  81.38  80.80  88.20  91.03  90.25  84.30  80.08  77.55  76.35  72.62  67.25  68.60  61.95  62.35  46.92  41.40  36.00  32.23  25.95
11_C   93.15  90.28  92.60  93.67  92.22  94.65  87.78  85.67  81.25  79.85  79.15  67.08  62.80  60.83  54.23  32.60  36.35  23.05  21.88  21.75    10*
06     96.28  95.28  95.50  96.30  96.40  97.03  94.33  91.97  85.92  73.80  70.17   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00    10*
13     95.42  93.80  93.92  89.67  91.45  92.12  71.12  80.90  67.12  63.85  54.38  55.00  64.83  65.47  52.35  20.15  25.68  14.22   6.58   4.67     07
16_A   94.53  93.78  92.05  92.35  76.90  84.65  46.02  69.12  64.45  58.27  35.83  29.45  19.20  17.45   9.05   5.55   1.50   0.00   0.00   0.00
16_B   95.10  95.65  96.15  93.60  90.20  88.38  76.45  65.53  68.35  19.27   4.03  13.50   7.42   3.80   2.48   0.00   0.00   0.00   0.00   0.00     06
04     95.53  94.97  94.03  91.40  82.45  76.83  68.00  66.30  62.85  39.55  24.38  13.82  10.70   5.95   3.30   1.43   0.00   0.00   0.00   0.00     05
09_A   95.53  94.22  94.22  69.03  16.40   9.65   6.83   1.55   6.05   3.70   1.70   3.05   1.62   1.55   0.33   2.90   2.85   3.73   7.62   3.62
09_B   95.50  94.60  93.65  63.42  16.15  12.38   6.33   2.55   2.27   3.42   2.62   1.52   4.03   5.55   1.27   2.92   2.20   1.95   5.83   2.42     02
* Following the challenge rules, in case of a tie in stop category, the submission with the highest percentage in the stop category wins.
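
The tie-break can likewise be sketched in code. The following illustration (again an informal sketch, not the official ranking script) orders teams by stop category and breaks ties by the score achieved in that category:

```python
# Sketch of the ranking with tie-break (illustrative only): order by stop
# category, then by the average OCR score achieved in the stop category.

def rank_teams(results):
    """results maps a team id to (stop_category, scores), where scores is
    the list of average OCR scores indexed by blur step."""
    return sorted(
        results,
        key=lambda team: (results[team][0], results[team][1][results[team][0]]),
        reverse=True,  # higher stop category and higher score rank first
    )

# Teams 11 and 06 both stop at category 10; team 11 scored 79.15 there
# against 70.17 for team 06, so team 11 places higher (scores truncated
# to steps 0..10 for brevity).
results = {
    "11_C": (10, [93.15, 90.28, 92.60, 93.67, 92.22, 94.65, 87.78, 85.67,
                  81.25, 79.85, 79.15]),
    "06":   (10, [96.28, 95.28, 95.50, 96.30, 96.40, 97.03, 94.33, 91.97,
                  85.92, 73.80, 70.17]),
}
print(rank_teams(results))  # -> ['11_C', '06']
```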

Test dataset

The test dataset differs from the training set in order to assess robustness: it was collected in the same way, but the text strings now also include numeric characters. Each blur category contains 40 images of the e-ink display showing text (20 for each font type), together with images of the e-ink display showing pictures, which serve as a sanity check.

The test dataset is now available at https://doi.org/10.5281/zenodo.5713637
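
For convenience, the record's file list can be retrieved programmatically. The snippet below is a hedged sketch that assumes Zenodo's public REST API (GET /api/records/<id> returning a JSON document with a files list); check the record page if the response shape differs:

```python
# Hedged sketch: list the files of the Zenodo record behind the DOI
# 10.5281/zenodo.5713637. The API shape (top-level "files" entries with
# "key" and "links.self") is assumed from Zenodo's public REST interface.
import requests

RECORD_ID = "5713637"  # numeric id from https://doi.org/10.5281/zenodo.5713637

response = requests.get(f"https://zenodo.org/api/records/{RECORD_ID}", timeout=30)
response.raise_for_status()

for entry in response.json().get("files", []):
    # File name and its direct download URL.
    print(entry["key"], entry["links"]["self"])
```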