# CT Generation Evaluation Docker

This Docker container evaluates CT volume generation predictions by running multiple metrics and merging their outputs into a single JSON file.
## Metrics

- FVD_CTNet – Fréchet Video Distance computed on 3D CT volumes using the CT-Net backbone.
- CLIPScore / CLIP_I2I – CLIP-based image-text similarity, reporting both I2T and I2I scores and their mean.
- FID_2p5D – 2.5D Fréchet Inception Distance computed on orthogonal slice planes (XY, XZ, YZ).
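The 2.5D metric operates on 2D slices taken along the three orthogonal planes of each volume. A minimal sketch of slice extraction, assuming NumPy arrays in (Z, Y, X) order (the container's actual slicing strategy, e.g. central slice versus all slices, is not specified here):

```python
import numpy as np

def orthogonal_slices(volume: np.ndarray) -> dict[str, np.ndarray]:
    """Extract the central XY, XZ, and YZ slices from a 3D volume in (Z, Y, X) order."""
    z, y, x = volume.shape
    return {
        "XY": volume[z // 2, :, :],  # axial plane
        "XZ": volume[:, y // 2, :],  # coronal plane
        "YZ": volume[:, :, x // 2],  # sagittal plane
    }

vol = np.zeros((8, 16, 32), dtype=np.float32)
slices = orthogonal_slices(vol)
print({k: v.shape for k, v in slices.items()})
# {'XY': (16, 32), 'XZ': (8, 32), 'YZ': (8, 16)}
```

Per-plane FID scores are then computed on these slice sets and averaged into `FID_2p5D_Avg`.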
## Input Specification

Mount the predictions directory (or a ZIP archive) to `/input`:

```shell
docker run --rm \
  -v "$(pwd)/input":/input \
  -v "$(pwd)/output":/output \
  forithmus/ctgen-eval:latest
```
Inside `/input`, provide either:

- A set of `.mha` files, placed flat under `/input` or in any nested subdirectories.
- A single `.zip` archive anywhere under `/input` containing `.mha` files.

Notes:

- The first matching `.mha` files or the first `.zip` found will be used for evaluation.
## Ground-Truth Data

Ground-truth `.mha` volumes are baked into the container at `/opt/app/ground-truth`. Each file in this directory should be named by accession or unique identifier and have a `.mha` extension.
## Output Specification

After evaluation, the container writes the merged metrics JSON to `/output/metrics.json`. The file has the following structure:

```jsonc
{
  "FVD_CTNet": <float>,      // FVD score
  "CLIPScore": <float>,      // CLIP I2T score
  "CLIPScore_I2I": <float>,  // CLIP I2I score
  "CLIPScore_mean": <float>, // mean CLIP score
  "FID_2p5D_Avg": <float>,   // average 2.5D FID
  "FID_2p5D_XY": <float>,    // XY-plane FID
  "FID_2p5D_XZ": <float>,    // XZ-plane FID
  "FID_2p5D_YZ": <float>     // YZ-plane FID
}
```
All values are floats rounded to four decimal places.
## Testing

To verify functionality, run:

```shell
./test.sh
```

Ensure the script has execute permissions:

```shell
chmod +x test.sh
```
## Exporting

Use the `export.sh` script to set the required environment variables and package the container:

```shell
source ./export.sh
```

This generates a `.tar.gz` file for submission to the challenge platform.
For questions or issues, please contact the challenge organizers.