Downsampling large 3D images in numpy

I need to downsample large 3D images (30GB+), which consist of a series of 2D tiff slices, by arbitrary factors without aliasing. scipy.ndimage.zoom works well for input images that fit into RAM.

I was thinking of reading in parts of the stack and using scipy.ndimage.map_coordinates to interpolate the pixel values at the downsampled coordinates. Another idea was to create a memory-mapped array with numpy.memmap and run scipy.ndimage.zoom on it.
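
For what it's worth, a minimal sketch of the memmap idea, assuming the tiff slices have already been dumped into a single raw file (the filename, shape, dtype, and factor below are placeholders):

import numpy as np
from scipy import ndimage

# Placeholder geometry; mode='r' keeps the file read-only and on disk.
vol = np.memmap('stack.raw', dtype=np.uint8, mode='r', shape=(4000, 3000, 3000))

# order=1 skips the global spline prefilter, so the input is only paged in
# as the interpolator touches it, and the much smaller output fits in RAM.
# Plain linear interpolation can still alias under heavy downsampling, though.
small = ndimage.zoom(vol, 0.25, order=1)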

Does anyone have a better approach before I go ahead with one of these?

1 answer

I ended up doing it in two passes, similar to how ImageJ scales stacks: first resample each z slice in x and y, then resample along z. Something like this:

import SimpleITK as sitk
import cv2
import numpy as np

def downsample_large_volume(img_path_list, input_voxel_size, output_voxel_size):

    scale = input_voxel_size / output_voxel_size
    resampled_zs = []

    # Resample each z slice in x and y
    for img_path in img_path_list:
        z_slice_arr = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)  # CV_LOAD_IMAGE_GRAYSCALE was removed in OpenCV 3
        z_slice_resized = cv2.resize(z_slice_arr, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
        resampled_zs.append(z_slice_resized)  # Or save to disk to save RAM and use np.memmap for the xz scaling


    temp_arr = np.dstack(resampled_zs)  # dstack of (y, x) slices gives a (y, x, z) array
    final_scaled_slices = []

    # Resample xz plane at each y
    for y in range(temp_arr.shape[0]):
        xz_plane = temp_arr[y, :, :]
        # fx scales the columns (the z axis); x and y were already scaled in the first pass
        scaled_xz = cv2.resize(xz_plane, (0, 0), fx=scale, fy=1, interpolation=cv2.INTER_AREA)
        final_scaled_slices.append(scaled_xz)

    final_array = np.dstack(final_scaled_slices)


    # final_array is in (x, z, y) order; SimpleITK expects (z, y, x)
    img = sitk.GetImageFromArray(np.swapaxes(np.swapaxes(final_array, 0, 1), 1, 2))
    sitk.WriteImage(img, 'scaled_by_pixel.nrrd')
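
A hypothetical call, assuming the slice filenames sort in z order (the directory pattern and voxel sizes here are made up for illustration):

import glob

slice_paths = sorted(glob.glob('stack/*.tif'))
# e.g. 1 um input voxels resampled to 4 um output voxels, i.e. scale = 0.25
downsample_large_volume(slice_paths, input_voxel_size=1.0, output_voxel_size=4.0)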
