Linux 4.4 PCIe DMA does not work on user space pages - can't highmem be used for DMA?

I am updating an old Linux driver that transfers data via DMA into user-space pages, which are passed down from an application through get_user_pages().
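
A minimal sketch of that pinning step, assuming a 4.4-era kernel (the helper name vme_pin_user_buffer is mine; note that on 4.4 get_user_pages() still takes the explicit task and mm arguments, which were dropped in later kernels):

    #include <linux/mm.h>
    #include <linux/sched.h>

    /* Hypothetical helper: pin nr_pages of the application's buffer at
     * uaddr so the device can DMA into them. */
    static long vme_pin_user_buffer(unsigned long uaddr, unsigned long nr_pages,
                                    struct page **pages)
    {
            long pinned;

            down_read(&current->mm->mmap_sem);
            pinned = get_user_pages(current, current->mm,
                                    uaddr & PAGE_MASK, nr_pages,
                                    1 /* write: the device fills the pages */,
                                    0 /* no force */,
                                    pages, NULL);
            up_read(&current->mm->mmap_sem);

            return pinned; /* pages actually pinned, or -errno */
    }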

My hardware is a new x86 Xeon-based board with 12 GB of RAM.

The driver receives data from VME through a PCIe FPGA and must write it into main memory. I call dma_map_page() for each page, check the result with dma_mapping_error(), and write the returned DMA bus address into the buffer descriptors of the DMA controller. Then I start the DMA. (We can also see on an FPGA indicator that the transfer starts.)
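
The per-page mapping step then looks roughly like this (a sketch; struct vme_dma_desc and vme_map_pages are made-up names, the real descriptor layout is FPGA-specific):

    #include <linux/kernel.h>
    #include <linux/dma-mapping.h>

    /* Hypothetical buffer descriptor of the FPGA's DMA controller. */
    struct vme_dma_desc {
            u32 dma_addr_lo; /* bus address the device writes to */
            u32 length;
    };

    static int vme_map_pages(struct device *dev, struct page **pages,
                             int nr_pages, struct vme_dma_desc *desc)
    {
            int i;

            for (i = 0; i < nr_pages; i++) {
                    dma_addr_t dma = dma_map_page(dev, pages[i], 0, PAGE_SIZE,
                                                  DMA_FROM_DEVICE);
                    if (dma_mapping_error(dev, dma))
                            return -EIO;

                    /* The FPGA takes only 32-bit addresses, so only the
                     * low half of the dma_addr_t is programmed. */
                    desc[i].dma_addr_lo = lower_32_bits(dma);
                    desc[i].length = PAGE_SIZE;
            }
            return 0;
    }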

However, when the completion IRQ arrives, I do not see any data in the pages. For control purposes I have the same VME address space accessible via PIO mode, and that works. I also tried writing values at page_address() of the user pages, and the application can see those. All fine.
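
That CPU-side sanity check amounts to something like the following sketch (on x86-64, page_address() is valid for every page, since all RAM is in the kernel's linear map):

    #include <linux/mm.h>

    /* Write a test pattern through the kernel's linear mapping; the
     * application should see it at the matching offset of its buffer. */
    static void vme_poke_test_pattern(struct page *pg)
    {
            u32 *va = page_address(pg);

            va[0] = 0x12345678;
    }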

Digging deeper into the problem, I checked the usual documentation, such as DMA-API.txt, but I could not find any other approach there, nor in other drivers.

My kernel is a self-compiled 4.4.59, 64-bit, with all kinds of debug options (DMA-API debug etc.) set to yes.
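
Concretely, the relevant switches in my .config include (a selection; CONFIG_DMA_API_DEBUG is what produces the "DMA-API:" lines in the dmesg further down, and the DMAR/iommu lines show the Intel IOMMU is built in and active):

    CONFIG_DMA_API_DEBUG=y
    CONFIG_INTEL_IOMMU=y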

I also tried digging through drivers/iommu/ to find debugging possibilities there, but there are only a few pr_debug()s.

Interesting fact: I have another driver at hand, an Ethernet driver for a network adapter attached via PCI. It works without any problems!

An interesting difference shows up in the dma_addr_t values the two drivers use for DMA:

The NIC driver sets up its buffers with dma_alloc_coherent(), and the addresses it gets back are all "below 4 GB":

 [ 3127.800567] dma_alloc_coherent: memVirtDma = ffff88006eeab000, memPhysDma = 000000006eeab000
 [ 3127.801041] dma_alloc_coherent: memVirtDma = ffff880035d9b000, memPhysDma = 0000000035d9b000
 [ 3127.801373] dma_alloc_coherent: memVirtDma = ffff88006ecd4000, memPhysDma = 000000006ecd4000
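
Those lines presumably come from a printk right after the allocation; reconstructed, the NIC side boils down to this (a sketch, the function name is mine, the print matches the log format above):

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    /* Allocate one coherent DMA buffer and log both the kernel virtual
     * address and the bus handle, as in the output above. */
    static void *nic_alloc_dma_buf(struct pci_dev *pdev, dma_addr_t *handle)
    {
            void *va = dma_alloc_coherent(&pdev->dev, PAGE_SIZE, handle,
                                          GFP_KERNEL);

            if (va)
                    printk(KERN_INFO "dma_alloc_coherent: memVirtDma = %p, memPhysDma = %pad\n",
                           va, handle);
            return va;
    }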

The VME driver instead uses dma_map_page() on user pages that lie above 4 GB, and the DMA address it gets back is 0xffffe010 (see below):

pageAddr=ffff88026b4b1000 off=10 dmaAddr=00000000ffffe010 length=100

We set DMA_BIT_MASK(32) for the device, because the FPGA is a 32-bit core.
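
Setting that mask happens once at probe time; with the helper available on 4.4 (assuming the FPGA is probed as a PCI function) that is:

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    /* The FPGA drives only 32 address bits, so confine both streaming
     * and coherent mappings to bus addresses below 4 GB. */
    static int vme_set_dma_mask(struct pci_dev *pdev)
    {
            return dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
    }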

So the question is: can high memory not be used for DMA here? Or what else is special about DMA to user pages that I am missing?

dmesg:

[    0.539839] debug: unmapping init [mem 0xffff880037576000-0xffff880037ab2fff]
[    0.549502] DMA-API: preallocated 65536 debug entries
[    0.549509] DMA-API: debugging enabled by kernel config
[    0.549545] DMAR: Host address width 46
[    0.549550] DMAR: DRHD base: 0x000000fbffc000 flags: 0x1
[    0.549573] DMAR: dmar0: reg_base_addr fbffc000 ver 1:0 cap     8d2078c106f0466 ecap f020df
[    0.549580] DMAR: RMRR base: 0x0000007bc14000 end: 0x0000007bc23fff
[    0.549585] DMAR: ATSR flags: 0x0
[    0.549590] DMAR: RHSA base: 0x000000fbffc000 proximity domain: 0x0
[    0.549779] DMAR: dmar0: Using Queued invalidation
[    0.549784] DMAR: dmar0: Number of Domains supported <65536>
[    0.549796] DMAR: Setting RMRR:
[    0.549809] DMAR: Set context mapping for 00:14.0
[    0.549812] DMAR: Setting identity map for device 0000:00:14.0     [0x7bc14000 - 0x7bc23fff]
[    0.549820] DMAR: Mapping reserved region 7bc14000-7bc23fff
[    0.549829] DMAR: Set context mapping for 00:1d.0
[    0.549831] DMAR: Setting identity map for device 0000:00:1d.0     [0x7bc14000 - 0x7bc23fff]
[    0.549838] DMAR: Mapping reserved region 7bc14000-7bc23fff
[    0.549845] DMAR: Prepare 0-16MiB unity mapping for LPC
[    0.549853] DMAR: Set context mapping for 00:1f.0
[    0.549855] DMAR: Setting identity map for device 0000:00:1f.0 [0x0 -     0xffffff]
[    0.549861] DMAR: Mapping reserved region 0-ffffff
[    0.549892] DMAR: Intel(R) Virtualization Technology for Directed I/O
...
[    0.551725] iommu: Adding device 0000:00:00.0 to group 10
[    0.551753] iommu: Adding device 0000:00:01.0 to group 11
[    0.551780] iommu: Adding device 0000:00:01.1 to group 12
[    0.551806] iommu: Adding device 0000:00:02.0 to group 13
[    0.551833] iommu: Adding device 0000:00:02.2 to group 14
[    0.551860] iommu: Adding device 0000:00:03.0 to group 15
[    0.551886] iommu: Adding device 0000:00:03.2 to group 16
[    0.551962] iommu: Adding device 0000:00:05.0 to group 17
[    0.551995] iommu: Adding device 0000:00:05.1 to group 17
[    0.552027] iommu: Adding device 0000:00:05.2 to group 17
[    0.552059] iommu: Adding device 0000:00:05.4 to group 17
[    0.552083] iommu: Adding device 0000:00:14.0 to group 18
[    0.552134] iommu: Adding device 0000:00:16.0 to group 19
[    0.552166] iommu: Adding device 0000:00:16.1 to group 19
[    0.552191] iommu: Adding device 0000:00:19.0 to group 20
[    0.552216] iommu: Adding device 0000:00:1d.0 to group 21
[    0.552272] iommu: Adding device 0000:00:1f.0 to group 22
[    0.552305] iommu: Adding device 0000:00:1f.3 to group 22
[    0.552332] iommu: Adding device 0000:01:00.0 to group 23
[    0.552360] iommu: Adding device 0000:03:00.0 to group 24
[    0.552437] iommu: Adding device 0000:04:00.0 to group 25
[    0.552473] iommu: Adding device 0000:04:00.1 to group 25
[    0.552510] iommu: Adding device 0000:04:00.2 to group 25
[    0.552546] iommu: Adding device 0000:04:00.3 to group 25
[    0.552575] iommu: Adding device 0000:05:00.0 to group 26
[    0.552605] iommu: Adding device 0000:05:00.1 to group 27

One remark concerns the PCIe vs. PCI difference between the FPGA and the NIC...


Source: https://habr.com/ru/post/1676935/

