Description
Dear Team,
I used 128x128x128 resolution .nii.gz files as the training set, but when running nnUNetv2_plan_and_preprocess -d 6 --verify_dataset_integrity, I encountered the error "ValueError: blocks cannot be greater than chunks."
To investigate, I added print(block_size_data, chunk_size_data) and print(block_size_seg, chunk_size_seg) in run_case_save within default_preprocessor.py, as sketched below.
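For context, this is roughly where the prints sit (a sketch of my local edit, not the verbatim nnU-Net source; the save call's remaining arguments are elided):

    # nnunetv2/preprocessing/preprocessors/default_preprocessor.py, inside run_case_save,
    # just before the failing save (remaining arguments elided here):
    print(block_size_data, chunk_size_data)  # debug print I added
    print(block_size_seg, chunk_size_seg)    # debug print I added
    nnUNetDatasetBlosc2.save_case(data, seg, properties, output_filename_truncated, ...)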
The results were:
(np.int64(1), np.int64(1), np.int64(8), np.int64(8)) (1, np.int64(1), np.int64(16), np.int64(7))
(np.int64(1), np.int64(1), np.int64(8), np.int64(8)) (1, np.int64(1), np.int64(16), np.int64(7))
This indicates that in the last dimension, chunk_size_data < block_size_data. However, after I added the line block_size = tuple(min(block_dim, chunk_dim) for block_dim, chunk_dim in zip(block_size, chunk_size)) at the end of comp_blosc2_params in nnunet_dataset.py, the error no longer occurred. Still, I don't fully understand what caused the error. Could you explain it?
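For reference, the same ValueError reproduces standalone with the chunk/block values from my last printout (a minimal sketch assuming only numpy and blosc2 with the chunks/blocks keywords seen in the traceback; the array shape here is a stand-in, not the actual nnU-Net data):

    import numpy as np
    import blosc2

    arr = np.zeros((1, 1, 8, 8), dtype=np.float32)
    # blocks exceed chunks in the last axis (8 > 7), which blosc2 rejects
    try:
        blosc2.asarray(arr, chunks=(1, 1, 16, 7), blocks=(1, 1, 8, 8))
    except ValueError as e:
        print(e)  # blocks cannot be greater than chunks

The complete error message is as follows: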
Fingerprint extraction...
Dataset006_CT
Using <class 'nnunetv2.imageio.simpleitk_reader_writer.SimpleITKIO'> as reader/writer
####################
verify_dataset_integrity Done.
If you didn't see any error messages then your dataset is most likely OK!
####################
Experiment planning...
############################
INFO: You are using the old nnU-Net default planner. We have updated our recommendations. Please consider using those instead! Read more here: https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/resenc_presets.md
############################
Dropping 3d_lowres config because the image size difference to 3d_fullres is too small. 3d_fullres: [7. 7. 7.], 3d_lowres: [7, 7, 7]
2D U-Net configuration:
{'data_identifier': 'nnUNetPlans_2d', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 70, 'patch_size': (np.int64(7), np.int64(7)), 'median_image_size_in_voxels': array([7., 7.]), 'spacing': array([1., 1.]), 'normalization_schemes': ['CTNormalization'], 'use_mask_for_norm': [False], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.PlainConvUNet', 'arch_kwargs': {'n_stages': 1, 'features_per_stage': (32,), 'conv_op': 'torch.nn.modules.conv.Conv2d', 'kernel_sizes': ((3, 3),), 'strides': ((1, 1),), 'n_conv_per_stage': (2,), 'n_conv_per_stage_decoder': (), 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm2d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}}, '_kw_requires_import': ('conv_op', 'norm_op', 'dropout_op', 'nonlin')}, 'batch_dice': True}
Using <class 'nnunetv2.imageio.simpleitk_reader_writer.SimpleITKIO'> as reader/writer
3D fullres U-Net configuration:
{'data_identifier': 'nnUNetPlans_3d_fullres', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 10, 'patch_size': (np.int64(7), np.int64(7), np.int64(7)), 'median_image_size_in_voxels': array([7., 7., 7.]), 'spacing': array([1., 1., 1.]), 'normalization_schemes': ['CTNormalization'], 'use_mask_for_norm': [False], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.PlainConvUNet', 'arch_kwargs': {'n_stages': 1, 'features_per_stage': (32,), 'conv_op': 'torch.nn.modules.conv.Conv3d', 'kernel_sizes': ((3, 3, 3),), 'strides': ((1, 1, 1),), 'n_conv_per_stage': (2,), 'n_conv_per_stage_decoder': (), 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm3d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}}, '_kw_requires_import': ('conv_op', 'norm_op', 'dropout_op', 'nonlin')}, 'batch_dice': False}
Plans were saved to D:\nnUNet-master\dataset\nnUNet_preprocessed\Dataset006_CT\nnUNetPlans.json
Preprocessing...
Preprocessing dataset Dataset006_CT
Configuration: 2d...
0%| | 0/200 [00:00<?, ?it/s]
(np.int64(1), np.int64(1), np.int64(6), np.int64(5)) (1, np.int64(1), 6, 5)
(np.int64(1), np.int64(1), np.int64(6), np.int64(5)) (1, np.int64(1), 6, 5)
(np.int64(1), np.int64(1), np.int64(3), np.int64(3)) (1, np.int64(1), 3, 3)
(np.int64(1), np.int64(1), np.int64(3), np.int64(3)) (1, np.int64(1), 3, 3)
(np.int64(1), np.int64(1), np.int64(7), np.int64(8)) (1, np.int64(1), 7, 8)
(np.int64(1), np.int64(1), np.int64(7), np.int64(8)) (1, np.int64(1), 7, 8)
(np.int64(1), np.int64(1), np.int64(6), np.int64(5)) (1, np.int64(1), 6, 5)
(np.int64(1), np.int64(1), np.int64(6), np.int64(5)) (1, np.int64(1), 6, 5)
(np.int64(1), np.int64(1), np.int64(6), np.int64(6)) (1, np.int64(1), 6, 6)
(np.int64(1), np.int64(1), np.int64(6), np.int64(6)) (1, np.int64(1), 6, 6)
(np.int64(1), np.int64(1), np.int64(8), np.int64(7)) (1, np.int64(1), 8, 7)
(np.int64(1), np.int64(1), np.int64(8), np.int64(7)) (1, np.int64(1), 8, 7)
(np.int64(1), np.int64(1), np.int64(8), np.int64(8)) (1, np.int64(1), 12, 11)
(np.int64(1), np.int64(1), np.int64(8), np.int64(8)) (1, np.int64(1), 12, 11)
(np.int64(1), np.int64(1), np.int64(7), np.int64(7)) (1, np.int64(1), 7, 7)
(np.int64(1), np.int64(1), np.int64(7), np.int64(7)) (1, np.int64(1), 7, 7)
(np.int64(1), np.int64(1), np.int64(4), np.int64(4)) (1, np.int64(1), 4, 4)
(np.int64(1), np.int64(1), np.int64(4), np.int64(4)) (1, np.int64(1), 4, 4)
0%|▍ | 1/200 [00:02<07:59, 2.41s/it]
(np.int64(1), np.int64(1), np.int64(5), np.int64(6)) (1, np.int64(1), 5, 6)
(np.int64(1), np.int64(1), np.int64(5), np.int64(6)) (1, np.int64(1), 5, 6)
(np.int64(1), np.int64(1), np.int64(8), np.int64(8)) (1, np.int64(1), 11, 9)
(np.int64(1), np.int64(1), np.int64(8), np.int64(8)) (1, np.int64(1), 11, 9)
(np.int64(1), np.int64(1), np.int64(8), np.int64(8)) (1, np.int64(1), 9, 9)
(np.int64(1), np.int64(1), np.int64(8), np.int64(8)) (1, np.int64(1), 9, 9)
(np.int64(1), np.int64(1), np.int64(5), np.int64(5)) (1, np.int64(1), 5, 5)
(np.int64(1), np.int64(1), np.int64(5), np.int64(5)) (1, np.int64(1), 5, 5)
(np.int64(1), np.int64(1), np.int64(8), np.int64(8)) (1, np.int64(1), np.int64(16), np.int64(5))
(np.int64(1), np.int64(1), np.int64(8), np.int64(8)) (1, np.int64(1), np.int64(16), np.int64(5))
2%|██ | 5/200 [00:02<01:38, 1.98it/s]
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Users\Administrator.DESKTOP-MQ823FM.conda\envs\nnunet\lib\multiprocessing\pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "C:\Users\Administrator.DESKTOP-MQ823FM.conda\envs\nnunet\lib\multiprocessing\pool.py", line 51, in starmapstar
return list(itertools.starmap(args[0], args[1]))
File "D:\nnUNet-master\nnunetv2\preprocessing\preprocessors\default_preprocessor.py", line 168, in run_case_save
nnUNetDatasetBlosc2.save_case(data, seg, properties, output_filename_truncated,
File "D:\nnUNet-master\nnunetv2\training\dataloading\nnunet_dataset.py", line 179, in save_case
blosc2.asarray(np.ascontiguousarray(data), urlpath=output_filename_truncated + '.b2nd', chunks=chunks,
File "C:\Users\Administrator.DESKTOP-MQ823FM.conda\envs\nnunet\lib\site-packages\blosc2\ndarray.py", line 3730, in asarray
chunks, blocks = compute_chunks_blocks(array.shape, chunks, blocks, array.dtype, **kwargs)
File "C:\Users\Administrator.DESKTOP-MQ823FM.conda\envs\nnunet\lib\site-packages\blosc2\ndarray.py", line …, in compute_chunks_blocks
raise ValueError("blocks cannot be greater than chunks")
ValueError: blocks cannot be greater than chunks
"""The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\Administrator.DESKTOP-MQ823FM.conda\envs\nnunet\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\Administrator.DESKTOP-MQ823FM.conda\envs\nnunet\lib\runpy.py", line 86, in run_code
exec(code, run_globals)
File "C:\Users\Administrator.DESKTOP-MQ823FM.conda\envs\nnunet\Scripts\nnUNetv2_plan_and_preprocess.exe_main.py", line 7, in
File "D:\nnUNet-master\nnunetv2\experiment_planning\plan_and_preprocess_entrypoints.py", line 196, in plan_and_preprocess_entry
preprocess(args.d, plans_identifier, args.c, np, args.verbose)
File "D:\nnUNet-master\nnunetv2\experiment_planning\plan_and_preprocess_api.py", line 150, in preprocess
preprocess_dataset(d, plans_identifier, configurations, num_processes, verbose)
File "D:\nnUNet-master\nnunetv2\experiment_planning\plan_and_preprocess_api.py", line 129, in preprocess_dataset
preprocessor.run(dataset_id, c, plans_identifier, num_processes=n)
File "D:\nnUNet-master\nnunetv2\preprocessing\preprocessors\default_preprocessor.py", line 306, in run
_ = [r[i].get() for i in done]
File "D:\nnUNet-master\nnunetv2\preprocessing\preprocessors\default_preprocessor.py", line 306, in
_ = [r[i].get() for i in done]
File "C:\Users\Administrator.DESKTOP-MQ823FM.conda\envs\nnunet\lib\multiprocessing\pool.py", line 774, in get
raise self._value
ValueError: blocks cannot be greater than chunks