The Finnish Inverse Problems Society (FIPS) proudly presents the Helsinki Tomography Challenge 2022 (HTC 2022). We invite all scientists and research groups to test their reconstruction algorithms on our real-world data.
Results of the top teams have been published in a Special Issue of the journal Applied Mathematics for Modern Challenges.
The Grand Prize
The top participants of the challenge gave talks at a minisymposium at the Inverse Days conference, organized by the Finnish Inverse Problems Society (FIPS) and held in Kuopio, Finland, in December 2022.
On top of the unlimited glory, the winner also receives the Ultimate Limited Angle Device: a vintage-looking tool for everyday situations where determining an angle (limited or not) is necessary.
About
What is limited angle tomography?
Computed tomography means reconstructing the internal structure of a physical body using X-ray images of the body taken from different directions. Mathematically, the problem is to recover a non-negative function from a collection of line integrals. Reconstruction of the original, full object requires that measurements are obtained continuously at least 180° around the object.
In limited-angle tomography, the object is imaged using X-ray projections from only a limited angular interval. Consequently, the reconstruction must be computed from an incomplete set of line integrals, a highly ill-posed and challenging task.
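As a toy illustration of the difficulty, the following sketch uses scikit-image's parallel-beam Radon transform (not the challenge's fan-beam setup) to simulate a full and a limited-angle sinogram of a test phantom; a naive filtered back-projection from the limited data exhibits the characteristic limited-angle artefacts:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()                 # 400 x 400 test image

full_angles = np.arange(0.0, 180.0)             # measurements over the full 180 degrees
limited_angles = np.arange(0.0, 60.0)           # only a 60-degree view: incomplete data

sino_full = radon(phantom, theta=full_angles)        # complete sinogram
sino_limited = radon(phantom, theta=limited_angles)  # limited-angle sinogram

# Naive FBP from the limited data: severe streaking and blurred edges
# along the missing directions.
rec_limited = iradon(sino_limited, theta=limited_angles)
```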
Challenge description
The purpose of the challenge is to recover the shapes of 2D targets imaged with limited-angle tomography, collected in the Industrial Mathematics Computed Tomography Laboratory at the University of Helsinki, Finland. The experimental setup, targets, and measurement protocol are described in the following sections.
The outcome of the challenge should be an algorithm that takes in the X-ray data, i.e., the sinogram and its associated metadata about the measurement geometry, and produces a reconstruction that has been segmented into two components: air and plastic.
Organising Committee
Registration
How to register
To enter the HTC2022 competition, register before 23:59 EET (Eastern European Time) on September 30, 2022, using this electronic form.
Rules
The rules and information about the HTC2022 can also be found in this pdf. (Updated 28.10.2022)
How to enter the competition
To enter the HTC2022 competition:
- Register before 23:59 EET (Eastern European Time) on September 30, 2022, using this electronic form.
- Send your submission to htc2022(“at”)fips.fi before 23:59 EET on November 4, 2022. What needs to be submitted? See below for instructions.
Only submissions that fulfill the requirements listed below will be accepted.
Requirements of the competition
What needs to be submitted? Briefly, the code must be in Matlab or Python 3.X, and the algorithms must be shared with us as private GitHub repositories by the deadline at the latest. Check the following subsections for detailed instructions.
Teams can submit more than one reconstruction algorithm to the challenge; however, each algorithm must be in a separate repository. The maximum number of algorithms is the number of members of the team. Your team does not need to register multiple times if you decide to submit more than one algorithm: a single email with the links to all the repositories is enough.
After the deadline, there is a brief period during which we can troubleshoot the codes together with the competing teams, to ensure that we are able to run them. The troubleshooting communication is done mainly via the ‘Issues’ section of the submitted repository, so pay attention to any activity in the repository after the deadline.
Special situations: The spirit of the competition is that the algorithm should be a general-purpose algorithm, capable of reconstructing limited-angle tomography images of the targets. The organizing committee has the right to disqualify any algorithm that tries to violate that spirit.
Conflict of interest: researchers affiliated with the Department of Mathematics and Statistics of the University of Helsinki will not be added to the leaderboard and cannot win the competition.
Deadline
Deadline: November 4, 2022 23:59 EET (Eastern European Time)
The algorithms must be shared with us as private GitHub repositories by the deadline at the latest. The code should be in Matlab or Python 3.
Github repository
Competitors can update the contents of the shared repository as many times as needed before the deadline. We will consider only the latest release of your repository on GitHub.
Attention: Simple commits to the main branch will not be considered; you MUST also create a release. You can find GitHub’s documentation on how to create releases here. If the latest release does not work, we will not accept older versions.
Your repository must contain a README.md file with at least the following sections:
- Authors, institution, location.
- Brief description of your algorithm and a mention of the competition.
- Installation instructions, including any requirements.
- Matlab users: Please specify any toolboxes used.
- Python users: Please specify any modules used. If you use Anaconda, please add to the repository an environment.yml file capable of creating an environment that can run your code (instructions). Otherwise, please add a requirements.txt file generated with pip freeze (instructions), e.g., by running `pip freeze > requirements.txt` in your environment.
- Usage instructions.
- A few examples.
If your algorithm requires uploading large files to GitHub (e.g., the trained weights of a neural network), you can use Git Large File Storage (the preferred way) or store them on another server and add the link to the installation instructions on GitHub.
Your code on Github
The repository must contain a main routine that we can run to apply your algorithm automatically to every image in a given directory. This is the file we will run to evaluate your code. Give it an easy-to-identify name, like main.m or main.py.
Important: The input directory contains only the test dataset. No training dataset is provided to your code during the assessment. Therefore, any training procedures must be performed by your team before the submission.
Your main routine must require three input arguments:
- (string) Folder where the input image files are located
- (string) Folder where the output images must be stored
- (int) Difficulty category number, with values between 1 and 7
Below are the expected formats of the main routines in Matlab and Python:
Matlab: The main function must be a callable function:
```matlab
function main(inputFolder, outputFolder, categoryNbr)
    % your code comes here
end
```
Example calling the function:
>> main('path/to/input/files', 'path/to/output/files', 3)
Python: The main function must be callable from the command line. To achieve this you can use sys.argv or the argparse module.
Example calling the function:
$ python3 main.py path/to/input/files path/to/output/files 3
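For instance, a minimal skeleton satisfying this calling convention might look as follows (the reconstruction itself is left as a stub; the .mat file extension is an assumption based on the data format described below):

```python
import argparse
import os

def main(input_folder: str, output_folder: str, category_nbr: int) -> None:
    """Apply the reconstruction algorithm to every measurement file in input_folder."""
    os.makedirs(output_folder, exist_ok=True)
    for name in sorted(os.listdir(input_folder)):
        if not name.endswith('.mat'):
            continue
        # 1. Load the sinogram and geometry from os.path.join(input_folder, name).
        # 2. Reconstruct and segment (algorithm-specific; category_nbr fixes the angular range).
        # 3. Save a 512 x 512 PNG with the same base name into output_folder.
        pass

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='HTC2022 entry point')
    parser.add_argument('input_folder', help='folder where the input files are located')
    parser.add_argument('output_folder', help='folder where the output images are stored')
    parser.add_argument('category_nbr', type=int, choices=range(1, 8),
                        help='difficulty category number, 1-7')
    args = parser.parse_args()
    main(args.input_folder, args.output_folder, args.category_nbr)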
The main routine must produce a reconstructed PNG file in the output folder for each image in the input folder. The output PNG images must have dimensions 512 x 512 pixels and the same filename apart from the extension. All images in the input directory belong to the same difficulty category, specified by the input argument. (Updated 28.10.2022)
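For the output step, here is a sketch assuming Pillow and a binary air/plastic segmentation array (storing plastic as white, 255, on black is one reasonable choice, not a requirement stated above):

```python
import os
import numpy as np
from PIL import Image

def save_segmentation(seg: np.ndarray, input_name: str, output_folder: str) -> None:
    """Save a 512 x 512 air/plastic segmentation as a PNG with the input's base name."""
    assert seg.shape == (512, 512)
    out_name = os.path.splitext(os.path.basename(input_name))[0] + '.png'
    # air -> 0 (black), plastic -> 255 (white)
    Image.fromarray((seg > 0).astype(np.uint8) * 255).save(os.path.join(output_folder, out_name))
```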
The teams are allowed to use freely available Python modules or Matlab toolboxes. Toolboxes, libraries, and modules with paid licenses can also be used if the organizing committee also has the license. For example, the most common Matlab toolboxes for image processing and deconvolution can be used (Image Processing Toolbox, Wavelet Toolbox, PDE Toolbox, Computer Vision Toolbox, Deep Learning Toolbox, Optimization Toolbox). The teams can contact us to check if other toolboxes are available.
Scores and leaderboard
The scores and leaderboard are constructed step-wise as follows:
- All teams start with difficulty level 1. The reconstructions of the three samples (A, B, and C) of this level will be assessed quantitatively following the criteria described below, and their scores will be summed, forming the total score S_1 of the first level.
- The highest total score S_1 will be used as the reference for the cut-off score of this level: any team whose score S_1 is at least 25% of the highest score will pass to the next level.
- The same procedure is repeated for all difficulty levels, up to level 7, but considering only the teams that passed the cut-off score of the previous level.
- The cut-off stays fixed at 25% of the maximum score at each level.
- Denote by Nmax the hardest level that at least one team could enter. If there is only one such team, it wins. If there are several teams competing at level Nmax, they are ordered in the leaderboard according to their scores at that level.
- In case of a tie at level Nmax, the previous levels, starting from Nmax-1, will be compared until one of the competitors wins. If the tie persists, the organizing committee will make the final decision on the winner.
Note: If one team submits more than one algorithm to the competition, each submission will be temporarily treated as belonging to a different ‘virtual’ team when computing the scores and cut-offs. However, the team cannot occupy more than one position in the leaderboard: the organizing committee will consider only its highest-performing algorithm when ranking the winners.
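A minimal sketch of the level-by-level cut-off rule (assuming, for illustration only, that every entry has a score at every level; in the real assessment, eliminated entries are simply not scored further):

```python
def final_ranking(scores, cutoff_fraction=0.25, levels=7):
    """scores: dict mapping entry name -> list of total level scores [S_1, ..., S_7].
    Returns the entries still competing at the last level, ranked by their score there."""
    competing = set(scores)
    for level in range(levels - 1):
        best = max(scores[e][level] for e in competing)
        # Only entries scoring at least 25% of the level's best pass to the next level.
        competing = {e for e in competing if scores[e][level] >= cutoff_fraction * best}
    return sorted(competing, key=lambda e: scores[e][levels - 1], reverse=True)
```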
Open science spirit
Finally, the competitors must make their GitHub repositories public by November 30, 2022 at the latest. In the spirit of open science, only a public code can win HTC2022.
Data
Limited angle tomography data for the challenge
The actual challenge data consists of 21 phantoms, arranged into seven groups of gradually increasing difficulty, with each group containing three different phantoms, labeled A, B, and C. As the difficulty level increases, the number of holes increases and their shapes become increasingly complex. Furthermore, the view-angle is reduced as the difficulty level increases, starting with a 90-degree field-of-view at level 1 and shrinking by 10 degrees at each subsequent level. Each target is assigned to a single group; therefore, each target is used only once.
The limited data is then passed as input to the submitted algorithms for assessment of the reconstructions. See code examples for more details.
The targets have been scanned using full-angle tomography, and have been appropriately subsampled to create the challenge data. This enables comparison of the limited-angle reconstruction to the ground truth obtainable from the full-angle data. The ground truth is obtained using the segmentation procedure described in a later section.
Each group is specified in the table below. In this table, angular range specifies the view-angle in the limited-angle data. The view-angles in the challenge data will not all begin from 0 degrees.
The test dataset will be made public after the end of the competition.
Get the data
The training dataset is available here:
Note: The publicly available data will not be used by the committee for measuring the quality of the algorithms submitted to the challenge; it is reserved for developing the algorithms. We measured extra data for the evaluation. The measurement setup and difficulty categories are the same as in the public dataset, but the targets are slightly different, in a way that will be made public only after the deadline.
Phantoms
The targets are homogeneous acrylic disc phantoms, 70 mm in diameter, with holes of varying shapes made with a laser cutter. Each disc has a different number of irregular holes in random locations. Figure 1 below shows a few examples.
Figure 1: Target examples. Note that these examples are provided to the competitors as the training set and therefore do not belong to the test set used to evaluate the submissions.
The dataset collected for the HTC2022 challenge consists of two separate sets, with identical experimental setup and settings. One set is provided to the competitors as training set for algorithm development, and the other will be used by the organizers to test the reconstruction algorithms. The test set will be made public after the end of the competition.
Training dataset
The training set consists of five phantoms with full angular data. These are designed to facilitate algorithm development and benchmarking for the challenge itself. Four of the training phantoms contain holes. A fifth training phantom is a solid disc with no holes.
We encourage subsampling these datasets to create limited-data sinograms and comparing the reconstruction results to the ground truth obtainable from the full-data sinograms. Note that the phantoms are not all identically centered. Training data for each difficulty group can be created by matching the angular range of the subsampled sinograms to that of the corresponding group in the actual challenge data (see Table 1 below); a minimal sketch follows.
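The sketch below assumes the sinogram is stored with one row per projection angle and that the acquisition angles are available in degrees; check the orientation and field names in the actual files:

```python
import numpy as np

def subsample_sinogram(sinogram, angles, start_deg, arc_deg):
    """Keep only projections whose angle lies in [start_deg, start_deg + arc_deg).

    sinogram : 2D array with one row per projection angle (assumed layout)
    angles   : 1D array of acquisition angles in degrees, one per sinogram row
    """
    mask = (angles >= start_deg) & (angles < start_deg + arc_deg)
    return sinogram[mask, :], angles[mask]

# Example: level-3 style training data (70-degree arc, arbitrary starting angle).
# full_sino, full_angles = ...  # loaded from a training .mat file
# lim_sino, lim_angles = subsample_sinogram(full_sino, full_angles, start_deg=30, arc_deg=70)
```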
Note: As the orientation of CT reconstructions can depend on the tools used, we have included example reconstructions for each of the training phantoms to demonstrate how the reconstructions obtained from the sinograms and the specified geometry should be oriented. These reconstructions have been computed using the filtered back-projection algorithm provided by the ASTRA Toolbox.
We have also included segmentation examples of the reconstructions to demonstrate the desired format for the final competition entries. The segmentation images were obtained by the following steps (using this Python code):
- Set all negative pixel values in the reconstruction to zero
- Determine a threshold level using Otsu’s method
- Globally threshold the image using the threshold level
- Perform a morphological closing on the image using a disc with a radius of 3 pixels.
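A sketch of these steps using scikit-image (the Python code linked above is the organizers' authoritative version):

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, disk

def segment_reconstruction(rec: np.ndarray) -> np.ndarray:
    """Segment a reconstruction into air (0) and plastic (1) following the steps above."""
    rec = np.clip(rec, 0, None)              # 1. set negative pixel values to zero
    level = threshold_otsu(rec)              # 2. threshold level from Otsu's method
    seg = rec > level                        # 3. global thresholding
    return binary_closing(seg, disk(3))      # 4. morphological closing, disc of radius 3
```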
The competitors do not need to follow the above segmentation procedure, and are encouraged to explore various segmentation techniques for the limited-angle reconstructions.
The competitors are encouraged to generate extra training data using simulations. The organizing committee will not provide the code to generate new targets before the end of the competition.
Testing dataset
The test set will be made public after the end of the competition.
Table 1: Limited-angle tomography difficulty groups

Group | Angular range of the limited-angle data
---|---
1 | 90°
2 | 80°
3 | 70°
4 | 60°
5 | 50°
6 | 40°
7 | 30°
Data format
The dataset is shared as MATLAB .mat files (version 7.3). Each individual measurement file contains a data structure holding the sinogram and its associated metadata, including the measurement geometry. In other words, each file contains the measurements for one tomographic image.
Python users can load this type of file into their code using the mat73 module. Please refer to this link for how to install and use the module.
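For example (the filename is illustrative; inspect the loaded dictionary to find the top-level struct and its fields):

```python
import mat73

# v7.3 .mat files are HDF5-based; mat73 returns them as nested Python dictionaries.
data = mat73.loadmat('htc2022_ta_full.mat')  # illustrative filename
print(data.keys())  # drill down from here to the sinogram and the geometry metadata
```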
Data collection
The challenge data was measured at the Industrial Mathematics Computed Tomography Laboratory at the University of Helsinki. The measurement device is a cone-beam computed tomography scanner designed and constructed in-house. The scanner consists of an X-ray source, a rotating sample holder, and an X-ray detector (Figure 2).
The data has already been pre-processed with background and flat-field corrections, and compensated for a slightly misaligned center of rotation in the cone-beam computed tomography scanner. The log-transforms from intensity measurements to attenuation data have also already been computed.
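For reference, this last step corresponds to the standard flat/dark-field correction followed by the Beer-Lambert negative log-transform; a generic sketch (not the organizers' exact code; variable names are illustrative):

```python
import numpy as np

def intensity_to_attenuation(intensity, flat, dark):
    """Flat/dark-field correction and negative log-transform (already applied to the data)."""
    corrected = (intensity - dark) / (flat - dark)   # normalize to the open-beam intensity
    return -np.log(np.clip(corrected, 1e-6, None))   # Beer-Lambert: attenuation line integrals
```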
We highlight the following geometric definitions, needed to properly specify the X-ray projection operator for the measurement setup:
- Dsd: Distance from source to detector
- Dso: Distance from source to origin
- Dod: Distance from origin to detector (so Dod = Dsd - Dso)
The X-ray detector data was binned by a factor of four after the measurements, giving a pixel size of 0.2 millimeters.
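To make the role of these quantities concrete, here is a hedged sketch of setting up the corresponding fan-beam geometry and an FBP reconstruction in the ASTRA Toolbox (the tool used for the example reconstructions above). All numeric values are placeholders; the real ones must be read from the metadata in each .mat file, and all lengths must be expressed in consistent units (millimeters here):

```python
import numpy as np
import astra

# Placeholder values: read the real ones from the .mat metadata.
dsd, dso = 550.0, 400.0                  # source-detector and source-origin distances (mm)
det_count, det_width = 560, 0.2          # detector pixel count and pixel size (mm, after 4x binning)
angles = np.deg2rad(np.arange(0.0, 360.0, 0.5))                  # projection angles (rad)
sinogram = np.zeros((len(angles), det_count), dtype=np.float32)  # replace with measured data

# Fan-beam ('fanflat') geometry: ASTRA takes source-origin and origin-detector distances.
proj_geom = astra.create_proj_geom('fanflat', det_width, det_count, angles, dso, dsd - dso)

# 512 x 512 reconstruction grid, with the physical window scaled to the detector field of view.
fov = 512 * det_width * dso / dsd
vol_geom = astra.create_vol_geom(512, 512, -fov / 2, fov / 2, -fov / 2, fov / 2)

sino_id = astra.data2d.create('-sino', proj_geom, sinogram)
rec_id = astra.data2d.create('-vol', vol_geom)

cfg = astra.astra_dict('FBP_CUDA')  # CUDA FBP supports fan-beam; the CPU 'FBP' is parallel-beam only
cfg['ProjectionDataId'] = sino_id
cfg['ReconstructionDataId'] = rec_id
alg_id = astra.algorithm.create(cfg)
astra.algorithm.run(alg_id)
reconstruction = astra.data2d.get(rec_id)

astra.algorithm.delete(alg_id)
astra.data2d.delete([sino_id, rec_id])
```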
Results
Scientific articles by top teams
Results of the top teams have been published in a Special Issue of the journal Applied Mathematics for Modern Challenges.
Winners
1st place: Thomas Germer, Jan Robine, Sebastian Konietzny, Stefan Harmeling, and Tobias Uelwer from Technical University Dortmund and Heinrich Heine University Düsseldorf. GitHub_A
2nd place: Alexander Denker, Clemens Arndt, Judith Nickel, Johannes Leuschner, Janek Gödeke, and Sören Dittmer from the University of Bremen. GitHub_B
3rd place: Gemma Fardell, Jakob Sauer Jørgensen, Laura Murgatroyd, Evangelos Papoutsellis, and Edoardo Pasca from the Technical University of Denmark. GitHub_B
All Registered Teams
This is the list of teams that registered for the challenge, together with the links to the GitHub repositories they submitted.
- 02 – Instituto Balseiro – Argentina.
- 03 – Dartmouth College, Mathematics Department – USA.
- 04 – UT Southwestern Medical Center, Radiation Oncology – USA.
- 05 – Loginov MCSC MHD – Russia.
- 06 – Indian Institute of Science Education and Research Bhopal, Mathematics Department – India.
- 07 – Ludwig Maximilian University of Munich, Mathematics Department – Germany. GitHub
- 08 – Applied Mathematics, Innsbruck – Austria. GitHub_A, GitHub_B
- 09 – Federal University of ABC, Center for Engineering, Modeling and Applied Social Sciences – Brazil. GitHub
- 10 – School of Mathematics and Statistics, Henan University – China.
- 11 – Zhejiang Normal University, Henan University – China.
- 13 – Indian Institute of Science, Department of Computational and Data Sciences (CDS) – India. GitHub
- 14 – National University of Singapore, Mathematics Department – Singapore. GitHub
- 15 – Technical University Dortmund, Department of Computer Science. Heinrich Heine University Düsseldorf, Department of Computer Science – Germany. GitHub_A, GitHub_B, GitHub_C
- 16 – University of Bremen, Center for Industrial Mathematics (ZeTeM) – Germany. GitHub_A, GitHub_B, GitHub_C, GitHub_D
- 17 – Tsinghua University, Yau Mathematical Sciences Center – China. GitHub_A, GitHub_B, GitHub_C, GitHub_D
- 18 – Zhejiang Normal University – China.
- 19 – Leiden University, LIACS – The Netherlands.
- 20 – Zhejiang Normal University – China.
- 21 – Zhejiang Normal University, College of Mathematical Medicine – China.
- 22 – University of Bologna, Department of Computer Science (DISI) – Italy.
- 23 – Technical University of Denmark, DTU Computer Science – Denmark.
- 24 – Technical University of Denmark, Department of Applied Mathematics and Computer Science – Denmark. GitHub_A, GitHub_B, GitHub_C, GitHub_D, GitHub_E
- 25 – University of Modena and Reggio Emilia – Italy.
- 27 – Argonne National Laboratory, X-ray Science Division – USA.
Results
The table below shows the scores S_n of the submitted algorithms on the test dataset. Team ID numbers follow the list above; teams that submitted more than one algorithm are marked with indices A, B, C, etc. The last column of the table presents the final rank of each team. You can check the reconstructions and detailed scores in this pdf.
Team | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 | Level 6 | Level 7 | Position |
---|---|---|---|---|---|---|---|---|
07 | 1.99157 | 1.30523 | 1.29962 | 1.26261 | 1.43642 | 1.18552 | 1.43021 | 9th |
08_A | 2.88354 | 2.82064 | 2.89009 | 2.74683 | 2.84895 | 2.40931 | 2.08002 | |
08_B | 2.91197 | 2.82176 | 2.88927 | 2.7432 | 2.86534 | 2.41197 | 2.08805 | 5th |
09 | 2.96583 | 2.95357 | 2.91334 | 2.80206 | 2.80498 | 2.29492 | 2.10937 | 4th |
13 | 2.6967 | 2.7199 | 2.25841 | 2.25673 | 2.26749 | 1.90006 | 1.90981 | 8th |
14 | 2.95607 | 2.41305 | 2.06875 | 2.03463 | 2.57182 | 2.09047 | 1.95438 | 7th |
15_A | 2.95905 | 2.96895 | 2.93179 | 2.9164 | 2.92612 | 2.81467 | 2.41018 | 1st |
15_B | 2.95941 | 2.962 | 2.92603 | 2.8934 | 2.90916 | 2.72889 | 2.38991 | |
15_C | 2.93267 | 2.9425 | 2.87712 | 2.81466 | 2.8321 | 2.61577 | 2.33738 | |
16_A | 2.92226 | 2.90749 | 2.68381 | 1.84681 | 1.69401 | 1.85821 | 1.75843 | |
16_B | 2.98727 | 2.98445 | 2.96335 | 2.94883 | 2.94267 | 2.68901 | 2.40549 | 2nd |
16_C | 2.92315 | 2.89535 | 2.80559 | 2.81679 | 2.90539 | 2.64358 | 2.27191 | |
16_D | 2.97861 | 2.95946 | 2.89116 | 2.92211 | 2.88932 | 2.64124 | 2.29831 | |
17_A | 2.84654 | 2.82335 | 2.74922 | 2.79669 | 2.7859 | 2.4763 | 2.05921 | 6th |
17_B | 2.89845 | 2.82632 | 2.83256 | 2.67381 | 2.73241 | 2.44723 | 2.0098 | |
17_C | 2.90189 | 2.81031 | 2.82227 | 2.66029 | 2.74065 | 2.4371 | 2.00498 | |
17_D | 2.89569 | 2.84482 | 2.85139 | 2.77558 | 2.74823 | 2.4728 | 2.02783 | |
24_A | 2.94146 | 2.90467 | 2.9036 | 2.82783 | 2.82966 | 2.47147 | 2.09127 | |
24_B | 2.93393 | 2.90992 | 2.92264 | 2.83569 | 2.84102 | 2.48042 | 2.17836 | 3rd |
24_C | 2.92923 | 2.89387 | 2.89817 | 2.81174 | 2.78908 | 2.4892 | 2.11445 | |
24_D | 2.93131 | 2.90559 | 2.91029 | 2.81522 | 2.80525 | 2.48613 | 2.09943 | |
24_E | 2.88967 | 2.86317 | 2.8368 | 2.72591 | 2.74638 | 2.33607 | 1.96893 |
Contact
To contact the HTC2022 organizers, send an e-mail to htc2022 (“at”) fips.fi.