Kuopio Tomography Challenge 2023

The Finnish Inverse Problems Society (FIPS) proudly presents the Kuopio Tomography Challenge 2023 (KTC2023). We warmly welcome scientists and research groups worldwide to participate in this exciting academic event. Take the opportunity to showcase your expertise by testing reconstruction algorithms with real-world electrical impedance tomography data.

Ready to aim for the top spot on the leaderboard? Make sure to register by the September 30th, 2023 deadline. Teams have until November 5th, 2023 to finalize and submit their entries.

Honors Await: Symposium Invitation & Grand Prize

The leading participants of the competition will receive a cordial invitation to a minisymposium held within the Inverse Days Conference. This symposium, organized by the Finnish Inverse Problems Society (FIPS), is scheduled to take place in the enchanting city of Lahti, Finland, in December 2023.

In addition to the honor of becoming the Electric Imaging Expert, the winner will also receive the Grand Prize: The Ultimate Electric Measurement Device – a vintage multimeter for measurements of everything electric – EIT setups or otherwise.

About

What is electrical impedance tomography (EIT)?

In the realm of electrical impedance tomography, the task involves reconstructing the internal electric conductivity distribution of a physical object. This is achieved through the utilization of measurements taken at the object’s boundary, which encompass electric current and voltage. From a mathematical perspective, the core challenge revolves around deducing a non-negative coefficient for a diffusion equation based on the boundary data consisting of electric current density and potential.

In real measurement setups, the object is imaged using measurement electrodes, which have a finite size and which cover only parts of the object boundary. The significance of this is that the reconstruction must be computed from an incomplete set of boundary data, a highly ill-posed and challenging task.

Challenge description

The purpose of the challenge is to recover the shapes of 2D targets imaged with electrical impedance tomography, collected in the Electrical Tomography Laboratory at the University of Eastern Finland, Finland. Detailed descriptions of the experimental setup, targets, and measurement protocol can be found in the Data section.

The outcome of the challenge should be an algorithm which takes in the EIT data and its associated metadata about the measurement geometry, and produces a reconstruction that has been segmented into three components: water, resistive inclusions, and conductive inclusions.

Organising Committee

Mikko Räsänen | University of Eastern Finland, Finland
Petri Kuusela | University of Eastern Finland, Finland
Jyrki Jauhiainen | University of Helsinki, Finland
Muhammad Ziaul Arif | University of Eastern Finland, Finland
Kenneth Scheel | University of Eastern Finland, Finland
Tuomo Savolainen | University of Eastern Finland, Finland
Aku Seppänen | University of Eastern Finland, Finland

Registration

How to register

To enter the exhilarating KTC2023 competition:

Secure your spot by registering before 23:59 EET (Eastern European Time) on September 30th, 2023, via our electronic registration form.

After registration, please make sure to read the Rules of the competition.

We will promptly reach out to you with any updates we make regarding the competition.

Rules

The rules and information about KTC2023 can also be conveniently accessed in this pdf document.

Updated Sep 20th: fixed the number of injections in Table 1.
Updated Oct 12th: Added a note on the measurement units in Section 4, and fixed a typo in Equation (5).
Updated Nov 6th: Submission deadline extended to Nov 8th, and link added to the organizer’s Github account.

How to enter the competition

To take part in the KTC2023 competition:

  • Ensure your enrollment by registering before 23:59 EET (Eastern European Time) on September 30th, 2023. Utilize this electronic form.
  • Submit your entry to ktc2023@fips.fi before 23:59 EET (Eastern European Time) on November 8th, 2023. Add the organizer’s account https://github.com/ktc2023 to your repository. What’s required for submission? Refer to the instructions below.

Only submissions meeting the stipulated criteria outlined below will be accepted.

Requirements of the competition

What needs to be submitted? Briefly, the codes must be in Matlab or Python 3.X, and the algorithms must be shared with us as a private GitHub repository by the deadline at the latest. Check the following subsections for detailed instructions. Only submissions that fulfill the requirements listed below will be accepted.

The teams can submit more than one reconstruction algorithm to the challenge; however, each algorithm must be in a separate repository. The maximum number of algorithms is the number of members of the team. Your team does not need to register multiple times if you decide to submit more than one algorithm to the challenge. The team can send a single email with the links to all the repositories.

After the deadline, there is a brief period during which we can troubleshoot the codes together with the competing teams. This is to ensure that we are able to run the codes. The troubleshooting communication is done mainly via the ‘Issues’ section of the submitted repository, so pay attention to any activity in the repository after the deadline.

Special situations: The spirit of the competition is that the algorithm is a general-purpose algorithm, capable of reconstructing electrical impedance tomography images of the targets. The organizing committee has the right to disqualify an algorithm trying to violate that spirit.

Conflict of interest: researchers affiliated with the Department of Technical Physics of the University of Eastern Finland will not be added to the leaderboard and cannot win the competition.

Deadline

Deadline: November 8th, 2023 23:59 EET (Eastern European Time)

The algorithms must be shared with us as a private GitHub repository by the deadline at the latest. The codes should be in Matlab or Python 3.

After the deadline there is a brief period during which we can troubleshoot the codes together with the competing teams. This is to ensure that we are able to run the codes.

Github repository

Competitors can update the contents of the shared repository as many times as needed before the deadline. We will consider only the latest release of your repository on Github.

Attention: Simple commits to the main branch will not be considered. You MUST also create a release. You can find Github’s documentation on how to create releases here. If the latest release does not work we will not accept older versions.

Your repository must contain a README.md file with at least the following sections:

  • Authors, institution, location.
  • Brief description of your algorithm and a mention of the competition.
  • Installation instructions, including any requirements.
    • Matlab users: Please specify any toolboxes used.
    • Python users: Please specify any modules used. If you use Anaconda, please add to the repository an environment.yml file capable of creating an environment that can run your code (instructions). Otherwise, please add a requirements.txt file generated with pip freeze (instructions); a minimal example follows this list.
  • Usage instructions.
  • A few examples.
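
For example, a minimal requirements.txt could look like the following; the package names and version numbers are only placeholders for whatever your code actually imports:

numpy==1.24.3
scipy==1.10.1
scikit-image==0.21.0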

If your algorithm requires uploading large files to Github (e.g., the trained weights of a neural network), you can use Git Large File Storage (the preferable way) or store them on another server and add the download link to the installation instructions.

The teams can submit more than one algorithm to the challenge; each algorithm must be in a separate repository. The maximum number of algorithms is the number of members of the team. The teams don’t need to register multiple times if they decide to submit more than one algorithm to the challenge.

Your code on Github

The repository must contain a main routine that we can run to apply your algorithm automatically to every dataset in a given directory. This is the file we will run to evaluate your code. Give it an easy-to-identify name like main.m or main.py.

Important: The input directory contains only the test dataset. No training dataset is provided to your code during the assessment. Therefore, any training procedures must be performed by your team before the submission.

Your main routine must require three input arguments:

  1. (string) Folder where the input data files are located
  2. (string) Folder where the output images must be stored
  3. (int) Difficulty category number. Values between 1 and X

Below are the expected formats of the main routines in python and Matlab:

Matlab: The main routine must be a callable function

function main(inputFolder, outputFolder, categoryNbr)
    % your code comes here
end

Example calling the function:

>> main('path/to/input/files', 'path/to/output/files', 3)

Python: The main routine must be callable from the command line. To achieve this you can use the sys.argv list or the argparse module.

Example calling the function:

$ python3 main.py path/to/input/files path/to/output/files 3
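
Below is a minimal sketch of such a command-line entry point using argparse; the helper reconstruct_and_segment is a hypothetical placeholder for your own reconstruction code:

import argparse
import os

def reconstruct_and_segment(dataset_path, output_folder, category):
    # Placeholder for the team's own reconstruction and segmentation code.
    raise NotImplementedError

def main():
    # Parse the three required positional arguments described above.
    parser = argparse.ArgumentParser(description="KTC2023 entry point")
    parser.add_argument("inputFolder", help="folder where the input data files are located")
    parser.add_argument("outputFolder", help="folder where the output images must be stored")
    parser.add_argument("categoryNbr", type=int, help="difficulty category number")
    args = parser.parse_args()

    # Apply the algorithm to every dataset found in the input folder.
    for name in sorted(os.listdir(args.inputFolder)):
        if name.endswith(".mat"):
            reconstruct_and_segment(os.path.join(args.inputFolder, name),
                                    args.outputFolder, args.categoryNbr)

if __name__ == "__main__":
    main()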

The main routine must produce a reconstructed PNG file in the output folder for each dataset in the input folder. The output PNG images must have dimensions 256 x 256 pixels and the same filename as the dataset apart from the extension. All datasets in the input directory belong to the same difficulty category, specified by the input argument.
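
As an illustration of the file naming and size requirement only (the class labels and grayscale encoding used here are assumptions, not part of the specification), a 256 x 256 segmentation array could be written out roughly as follows:

import os
import numpy as np
from PIL import Image

def save_segmentation(segmentation, dataset_path, output_folder):
    # segmentation: 256 x 256 array of class labels (assumed encoding:
    # 0 = water, 1 = resistive inclusion, 2 = conductive inclusion).
    assert segmentation.shape == (256, 256)
    # Same filename as the dataset, apart from the extension.
    name = os.path.splitext(os.path.basename(dataset_path))[0] + ".png"
    Image.fromarray(segmentation.astype(np.uint8)).save(os.path.join(output_folder, name))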

The teams are allowed to use freely available Python modules or Matlab toolboxes. Toolboxes, libraries, and modules with paid licenses can also be used if the organizing committee also has the license. For example, the most commonly used Matlab toolboxes for image processing and deconvolution can be used (Image Processing Toolbox, Wavelet Toolbox, PDE Toolbox, Computer Vision Toolbox, Deep Learning Toolbox, Optimization Toolbox). The teams can contact us to check whether other toolboxes are available.

Scores and leaderboard

We refer to the instructions PDF linked at the top of this page.

Open science spirit

Finally, the competitors must make their GitHub repositories public by November 30th, 2023 at the latest. In the spirit of open science, only a public code can win KTC2023. Note: the date has been corrected – it previously read November 5th by mistake on this website.

Data

Electrical impedance tomography data for the challenge

Two datasets were collected for the challenge: one training dataset published with the challenge, and one evaluation dataset which will be made public after the submission deadline of the challenge. The evaluation data consists of 21 phantoms, arranged into seven groups of gradually increasing difficulty, with each level containing three different phantoms, labeled A, B, and C. As the difficulty level increases, the number of inclusions increases and their shapes become increasingly complex. Furthermore, the data from some electrodes is discarded as the difficulty level increases, starting with full 32-electrode data at level 1 and reducing by 2 electrodes at each increasing level of difficulty. Each target is assigned to a single difficulty group; therefore, each target is used only once.
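
For concreteness, the number of active electrodes at each difficulty level follows directly from this description:

# Level 1 uses all 32 electrodes; each subsequent level drops 2 more.
for level in range(1, 8):
    print(f"Level {level}: {32 - 2 * (level - 1)} electrodes")  # level 7 -> 20 electrodes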

The electric current and voltage data is then fed as input to the submitted algorithms for assessment of the reconstructions. See code examples for more details.

The targets have been scanned using all of the measurement electrodes, and the data has been appropriately subsampled to create the evaluation data. The ground truth images were obtained by segmenting digital photographs of each phantom.

The test dataset will be made public by the end of the competition.

Get the data

The training dataset is available here. This now contains the test dataset as well.

Note: The publicly available data will not be used by the committee for assessing the quality of the algorithms submitted to the challenge. This data is reserved for algorithm development. We have collected a separate test dataset to evaluate algorithm quality. The geometry is the same as in the categories of the public dataset. However, the targets are slightly different in a way that will be made public only after the deadline.

Phantoms

The targets consist of a circular water tank, 23 cm in diameter, containing inclusions of varying shapes that are either 3D-printed from resistive or conductive plastic or made of metal. Each target has a different number of irregular inclusions at varying positions.

The dataset collected for the KTC2023 challenge consists of two separate sets, with identical experimental setup and settings. One set is provided to the competitors as training set for algorithm development, and the other will be used by the organizers to test the reconstruction algorithms. The test set will be made public after the end of the competition.

Training dataset

The training set consists of five phantoms with full electrode data. These are designed to facilitate algorithm development and benchmarking for the challenge itself. Four of the training phantoms contain inclusions; the fifth is made up of water only and contains no inclusions.

We encourage subsampling these datasets into limited datasets and comparing the resulting reconstructions to the ground truth obtainable from the full data. Training data for each difficulty group can be created by subsampling the full datasets to match the electrodes used in that difficulty level (see the instructions PDF file), as sketched below.
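
The exact layout of the measurement arrays is described in the instructions PDF. Purely as an illustration, with made-up variable names and a made-up layout (one row per current injection, one column per electrode voltage), dropping electrodes could look roughly like this:

import numpy as np

def subsample_measurements(voltages, injections, kept_electrodes):
    # voltages: hypothetical (n_injections, n_electrodes) array of measured voltages.
    # injections: hypothetical list of (source, sink) electrode indices per current injection.
    # Keep only the injections whose current-carrying electrodes remain available,
    # and only the voltage columns of the remaining electrodes.
    kept = set(kept_electrodes)
    rows = [i for i, (src, snk) in enumerate(injections) if src in kept and snk in kept]
    cols = sorted(kept)
    return voltages[np.ix_(rows, cols)]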

Note: As the orientation of EIT reconstructions can depend on the tools used, we have included example reconstructions for each of the training phantoms to demonstrate how the reconstructions obtained from the data and the specified geometry should be oriented. These reconstructions have been computed using a simple example algorithm provided with the training datasets.

We have also included segmentation examples of the reconstructions to demonstrate the desired format for the final competition entries. The segmentation images were obtained by the following steps (a code sketch illustrating steps 2–4 follows the list):

  1. Compute an EIT image reconstruction using the provided simple example algorithm.
  2. Determine two threshold levels by applying three-class Otsu’s method to the EIT reconstruction, and divide the image pixels into three classes according to these threshold levels.
  3. Assign the pixels in the largest resulting class the value 0 (water). Assign the pixels in the other two classes the value 1 (resistive inclusions) or 2 (conductive inclusions), depending on whether the class had lower or higher values in the EIT reconstruction.
  4. If the background (water) class is the lowest or the highest of the three classes, combine the other two classes into a single inclusion class and assign its pixels the value 2 (conductive inclusions) or 1 (resistive inclusions), respectively.
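
As an illustration only, here is a minimal sketch of steps 2–4, assuming the reconstruction is available as a 2D NumPy array of conductivity values and using scikit-image’s multi-Otsu thresholding; the function and variable names are not part of the provided example codes:

import numpy as np
from skimage.filters import threshold_multiotsu

def segment_reconstruction(recon):
    # Step 2: two thresholds from three-class Otsu, splitting pixels into classes 0, 1, 2.
    thresholds = threshold_multiotsu(recon, classes=3)
    classes = np.digitize(recon, bins=thresholds)

    # Step 3: take the largest class as water (label 0).
    counts = np.bincount(classes.ravel(), minlength=3)
    water = int(np.argmax(counts))

    segmentation = np.zeros_like(classes)
    if water == 1:
        # Water is the middle class: lower values -> resistive (1), higher values -> conductive (2).
        segmentation[classes == 0] = 1
        segmentation[classes == 2] = 2
    else:
        # Step 4: water is the lowest or highest class, so the remaining classes are
        # combined into conductive (2) or resistive (1) inclusions, respectively.
        segmentation[classes != water] = 2 if water == 0 else 1
    return segmentation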

The competitors do not need to follow the above segmentation procedure, and are encouraged to explore various segmentation techniques for the limited-angle reconstructions.

The competitors are encouraged to generate extra training data using simulations. The organizing committee will provide a Matlab and Python finite element code package to simulate EIT data for targets generated by the competitors themselves.

Testing dataset

The test set will be made public after the end of the competition.

Data format

The dataset is shared as MATLAB .mat files (version 7). Each file contains the measurements for one imaging target.

Python users can load this type of file into their code using the scipy.io.loadmat function from the scipy module. Please see the provided example codes.
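
For instance (the file name below is a placeholder, and the actual variable keys are documented with the data):

from scipy.io import loadmat

# Load one measurement file; the filename here is only a placeholder.
data = loadmat("data_level1_A.mat")

# List the variables stored in the file, skipping the .mat header entries.
print([key for key in data if not key.startswith("__")])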

Data collection

The challenge data was measured at the Electrical Tomography Laboratory at the University of Eastern Finland. The measurement device is a current-injection and voltage measurement device designed and constructed in-house. The measurement setup consists of a water tank with 32 equally spaced measurement electrodes, the EIT device, and a digital camera.

News

Update 21.12.2023 | Test dataset is published. See the Data section on this website.
Update 15.12.2023 | Results of the challenge are published.
Update 11.8.2023 | Web page is officially published.

Results

Winners

We proudly announce the winners of KTC2023.

1st place: Alexander Denker, Zeljko Kereta, Imraj Singh, Tom Freudenberg, Tobias Kluth, Simon Arridge and Peter Maass – from University of Bremen and University College London. GitHub_B

2nd place: André Kazuo Takahata, Fernando Silva de Moura, Leonardo Alves Ferreira, Ricardo Suyama and Roberto Gutierrez Beraldo – from The Federal University of ABC, Brazil. GitHub_A

3rd place: Martin Sæbye Carøe, Jakob Tore Kammeyer Nielsen, Rasmus Kleist Hørlyck Sørensen, Jasper Marijn Everink, Amal Mohammed Alghamdi, Chao Zhang, Kim Knudsen, Jakob Sauer Jørgensen and Aksel Kaastrup Rasmussen – from the Technical University of Denmark. GitHub_G

All Registered Teams

This is the list of teams that submitted solutions for the challenge.

Together with the names, the links to the Github repositories submitted to the challenge. The teams are listed here in the order of registration.

01 – The Federal University of ABC, Brazil. GitHub_A GitHub_B
02 – The Technical University of Denmark, Denmark. GitHub_A GitHub_B GitHub_C GitHub_D GitHub_E GitHub_F GitHub_G GitHub_H GitHub_I
03 – University of Helsinki, Finland and Gonzaga University, The United States. GitHub_A
04 – University of Naples Federico II, Italy. GitHub_A
05 – Helmholtz Munich and Universität Hamburg, Germany. GitLab_A GitLab_B GitLab_C
06 – University of Bremen, Germany and University College London, The United Kingdom. GitHub_A GitHub_B GitHub_C
07 – University of Bath, The United Kingdom and CNRS, France and University of Bologna, Italy and University of Genoa, Italy. GitHub_A GitHub_B GitHub_C

A list of members in each team can be found in this document.

Results

You can check the detailed results in this pdf (original) or this pdf (ground truths corrected; see the note below).

(Update 26.4.2024) The ground truth images used in KTC2023 had a slight geometric distortion, caused by improperly cropping the photographs to the outer boundary of the imaging chamber instead of the inner boundary. The ground truth images have now been corrected in the shared dataset on Zenodo. It is recommended to download the new version (v3) of the dataset for research purposes. This correction slightly affected the scores and the top leaderboard positions. The organizers have decided to grant teams 01 and 02 a shared 2nd place on the updated leaderboard, and team 04 the 3rd place. The updated leaderboard table is found below.

FAQ

Contact

To contact the KTC2023 organizers, please send an email to ktc2023@fips.fi.
