Linux: how to simulate a sequence of physically contiguous areas in user space?

In my driver I have a number of physically contiguous DMA buffers (e.g. 4 MB each) for receiving data from the device. The hardware fills them through an SG list. Since the received data will undergo intensive processing, I do not want to disable caching; instead I call dma_sync_single_for_cpu after each buffer has been filled by DMA.

To simplify data processing, I want these buffers to appear as one huge contiguous circular buffer in user space. For a single buffer I would simply use remap_pfn_range or dma_mmap_coherent, but I cannot call those functions several times to map the buffers back-to-back.

Of course, I can implement the fault operation in vm_operations to look up the pfn of the corresponding page in the right buffer and insert it into the vma with vm_insert_pfn.

Acquisition will be very fast, so I cannot afford to populate the mapping while real data is arriving. But that is easily solved: to have the entire mapping ready before acquisition starts, I can simply read through the whole mmapped buffer in my application beforehand, so that all pages are already inserted when the first data arrives.

The fault-based approach should work, but maybe there is something more elegant? Perhaps a single function that can be called several times to build up the entire mapping incrementally?

An additional difficulty is that the solution must be applicable (with minimal adjustments) to kernels from 2.6.32 up to the latest.

PS. A side question: since only the device writes into the mmapped buffer while the application merely reads it, do I need to worry about COW on those pages?


The trick is to allocate the buffers with dmam_alloc_noncoherent.

Here is the relevant code:

[...]
for (i = 0; i < DMA_NOFBUFS; i++) {
    my_dev->buf_addr[i] = dmam_alloc_noncoherent(&my_dev->dev, DMA_BUFLEN,
                                                 &my_dev->buf_dma_t[i], GFP_USER);
    if (my_dev->buf_addr[i] == NULL) {
        res = -ENOMEM;
        goto err1;
    }
    /* Make the buffer ready for filling by the device */
    dma_sync_single_range_for_device(&my_dev->dev, my_dev->buf_dma_t[i],
                                     0, DMA_BUFLEN, DMA_FROM_DEVICE);
}
[...]
[...]

void swz_mmap_open(struct vm_area_struct *vma)
{
    /* nothing to do */
}

void swz_mmap_close(struct vm_area_struct *vma)
{
    /* nothing to do */
}

static int swz_mmap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
    unsigned long offset;
    char *buffer = NULL;
    int buf_num = 0;
    int ret;

    /* Calculate the offset (according to
     * https://lxr.missinglinkelectronics.com/linux+v2.6.32/drivers/gpu/drm/i915/i915_gem.c#L1195
     * it is better not to use vmf->pgoff) */
    offset = (unsigned long)vmf->virtual_address - vma->vm_start;
    buf_num = offset / DMA_BUFLEN;
    if (buf_num >= DMA_NOFBUFS) { /* >=, since the last valid index is DMA_NOFBUFS - 1 */
        printk(KERN_ERR "Access outside the buffer\n");
        return VM_FAULT_SIGBUS;
    }
    offset -= buf_num * DMA_BUFLEN;
    buffer = my_dev->buf_addr[buf_num];
    ret = vm_insert_pfn(vma, (unsigned long)vmf->virtual_address,
                        virt_to_phys(&buffer[offset]) >> PAGE_SHIFT);
    if (ret && ret != -EBUSY)
        return VM_FAULT_SIGBUS;
    return VM_FAULT_NOPAGE;
}

struct vm_operations_struct swz_mmap_vm_ops =
{
    .open =     swz_mmap_open,
    .close =    swz_mmap_close,
    .fault =    swz_mmap_fault,    
};

static int char_sgdma_wz_mmap(struct file *file, struct vm_area_struct *vma)
{
    vma->vm_ops = &swz_mmap_vm_ops;
    vma->vm_flags |= VM_IO | VM_RESERVED | VM_CAN_NONLINEAR | VM_PFNMAP;
    swz_mmap_open(vma);
    return 0;
}

Source: https://habr.com/ru/post/1670215/

