Discussion:
Using ion memory for direct-io
Zengtao (B)
2017-04-14 09:18:25 UTC
Hi

Currently, ion memory mapped to userspace is forced to carry the VM_IO and VM_PFNMAP flags.
When I use ion memory for direct-io, it fails as soon as it reaches get_user_pages().

Back to the VM_IO and VM_PFNMAP flags: these two flags are introduced by the remap_pfn_range() call in ion_heap_map_user().

From my point of view, all ion memory (cma/vmalloc/system heap) is managed by the Linux VM, so it is not reasonable to have the VM_IO and VM_PFNMAP flags, but I don't see any suitable function to replace remap_pfn_range(). Any suggestions?
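
To make it concrete, here is a simplified sketch of how I read the common mapping path (function and field names are approximate, offset handling is omitted); every heap that reuses the common helper ends up in remap_pfn_range(), which is where the two flags come from:

#include <linux/mm.h>
#include <linux/scatterlist.h>

/* Simplified sketch only; struct ion_buffer comes from the ion headers,
 * names are approximate and offset handling is omitted. */
static int ion_map_user_sketch(struct ion_buffer *buffer,
                               struct vm_area_struct *vma)
{
        struct sg_table *table = buffer->sg_table;
        unsigned long addr = vma->vm_start;
        struct scatterlist *sg;
        int i, ret;

        for_each_sg(table->sgl, sg, table->nents, i) {
                struct page *page = sg_page(sg);
                unsigned long len = min_t(unsigned long, sg->length,
                                          vma->vm_end - addr);

                /* remap_pfn_range() is what marks the vma VM_IO | VM_PFNMAP,
                 * and get_user_pages() later refuses such vmas */
                ret = remap_pfn_range(vma, addr, page_to_pfn(page), len,
                                      vma->vm_page_prot);
                if (ret)
                        return ret;

                addr += len;
                if (addr >= vma->vm_end)
                        break;
        }
        return 0;
}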

Thanks && Regards

Zengtao
Laura Abbott
2017-04-17 16:13:48 UTC
Post by Zengtao (B)
Hi
Currently, ion memory mapped to userspace is forced to carry the VM_IO and VM_PFNMAP flags.
When I use ion memory for direct-io, it fails as soon as it reaches get_user_pages().
Back to the VM_IO and VM_PFNMAP flags: these two flags are introduced by the remap_pfn_range() call in ion_heap_map_user().
From my point of view, all ion memory (cma/vmalloc/system heap) is managed by the Linux VM, so it is not reasonable to have the VM_IO and VM_PFNMAP flags, but I don't see any suitable function to replace remap_pfn_range(). Any suggestions?
Thanks && Regards
Zengtao
The carveout heap is omitted from your list of 'all ion memory'. At one
time, carveout memory was not backed by struct pages so I suspect
this is a holdover from then. This would probably be better served
by using vm_insert_page and handling higher order pages properly.
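
Roughly something like this, completely untested and with approximate names; a real version has to make sure the higher-order pages backing an sg entry are safe to take references on:

/* Untested sketch of a vm_insert_page() based map_user. Unlike
 * remap_pfn_range(), vm_insert_page() does not set VM_IO/VM_PFNMAP,
 * so get_user_pages() can take references on the mapped pages. Each
 * sg entry is walked one PAGE_SIZE at a time. */
static int ion_map_user_insert(struct ion_buffer *buffer,
                               struct vm_area_struct *vma)
{
        struct sg_table *table = buffer->sg_table;
        unsigned long addr = vma->vm_start;
        struct scatterlist *sg;
        int i, ret;

        for_each_sg(table->sgl, sg, table->nents, i) {
                struct page *page = sg_page(sg);
                unsigned int npages = PAGE_ALIGN(sg->length) >> PAGE_SHIFT;
                unsigned int j;

                for (j = 0; j < npages; j++) {
                        if (addr >= vma->vm_end)
                                return 0;
                        ret = vm_insert_page(vma, addr, page + j);
                        if (ret)
                                return ret;
                        addr += PAGE_SIZE;
                }
        }
        return 0;
}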

Thanks,
Laura
Zengtao (B)
2017-04-18 02:05:18 UTC
-----Original Message-----
Sent: 18 April 2017, 0:14
Subject: Re: Using ion memory for direct-io
Post by Zengtao (B)
Hi
Currently, ion memory mapped to userspace is forced to carry the VM_IO and VM_PFNMAP flags.
Post by Zengtao (B)
When I use ion memory for direct-io, it fails as soon as it reaches get_user_pages().
Back to the VM_IO and VM_PFNMAP flags: these two flags are introduced by the remap_pfn_range() call in ion_heap_map_user().
From my point of view, all ion memory (cma/vmalloc/system heap) is managed by the Linux VM, so it is not reasonable to have the VM_IO and VM_PFNMAP flags, but I don't see any suitable function to replace remap_pfn_range(). Any suggestions?
Post by Zengtao (B)
Thanks && Regards
Zengtao
The carveout heap is omitted from your list of 'all ion memory'. At one
time, carveout memory was not backed by struct pages so I suspect
this is a holdover from then. This would probably be better served
Yes, you are right, I missed the carveout heap, which needs VM_IO and VM_PFNMAP. I think the carveout heap can implement its own map_user rather than using the common ion_heap_map_user.
by using vm_insert_page and handling higher order pages properly.
Your latest patch has removed the page faulting support; I didn't dig into the reason, but I think this conflicts with vm_insert_page.
Thanks,
Laura
I tried a similar approach to what the dma framework does (split the pages and map_vm_area), but the split breaks the ion sg design. Maybe we need a new low-level map function instead of directly using remap_pfn_range.

Thanks
Zengtao
Laura Abbott
2017-04-18 15:56:09 UTC
Post by Zengtao (B)
-----Original Message-----
Sent: 18 April 2017, 0:14
Subject: Re: Using ion memory for direct-io
Post by Zengtao (B)
Hi
Currently, ion memory mapped to userspace is forced to carry the VM_IO and VM_PFNMAP flags.
Post by Zengtao (B)
When I use ion memory for direct-io, it fails as soon as it reaches get_user_pages().
Back to the VM_IO and VM_PFNMAP flags: these two flags are introduced by the remap_pfn_range() call in ion_heap_map_user().
From my point of view, all ion memory (cma/vmalloc/system heap) is managed by the Linux VM, so it is not reasonable to have the VM_IO and VM_PFNMAP flags, but I don't see any suitable function to replace remap_pfn_range(). Any suggestions?
Post by Zengtao (B)
Thanks && Regards
Zengtao
The carveout heap is omitted from your list of 'all ion memory'. At one
time, carveout memory was not backed by struct pages so I suspect
this is a holdover from then. This would probably be better served
Yes, you are right, I missed the carveout heap, which needs VM_IO and VM_PFNMAP. I think the carveout heap can implement its own map_user rather than using the common ion_heap_map_user.
The carveout heap only uses memory with struct pages these
days. My point was that VM_IO and VM_PFNMAP shouldn't
need to be used at all anymore. Sorry for the confusion.
Post by Zengtao (B)
by using vm_insert_page and handling higher order pages properly.
Your latest patch has removed the page faulting support; I didn't dig into the reason, but I think this conflicts with vm_insert_page.
vm_insert_page should be able to be used outside the fault
handler.
Post by Zengtao (B)
Thanks,
Laura
I tried a similar approach to what the dma framework does (split the pages and map_vm_area), but the split breaks the ion sg design. Maybe we need a new low-level map function instead of directly using remap_pfn_range.
Yes, I don't think we can directly copy what the
dma framework does. I think we should be okay
with allocating pages with __GFP_COMP and then
just using vm_insert_page but I'm not 100% sure
this would work with CMA.
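
On the allocation side I'm picturing something like this untested sketch (the gfp combination and helper name are just assumptions for illustration, not code from any ion heap):

#include <linux/gfp.h>

/* Illustrative sketch only. */
static struct page *ion_alloc_chunk_sketch(unsigned int order)
{
        gfp_t gfp = GFP_HIGHUSER | __GFP_ZERO | __GFP_COMP;

        if (order)
                gfp |= __GFP_NOWARN | __GFP_NORETRY;

        /* __GFP_COMP gives a compound page, so vm_insert_page() and a
         * later get_user_pages() can safely reference the tail pages */
        return alloc_pages(gfp, order);
}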

For completeness' sake, I know the request is for direct-io
but can you share a few more details about the
driver/use case where you want to use direct-io with Ion
buffers?
Post by Zengtao (B)
Thanks
Zengtao
Thanks,
Laura
Zengtao (B)
2017-04-19 07:21:41 UTC
-----Original Message-----
Sent: 18 April 2017, 23:56
Subject: Re: Re: Using ion memory for direct-io
Post by Zengtao (B)
-----Original Message-----
Sent: 18 April 2017, 0:14
Subject: Re: Using ion memory for direct-io
Post by Zengtao (B)
Hi
Currently, ion memory mapped to userspace is forced to carry the VM_IO and VM_PFNMAP flags.
Post by Zengtao (B)
When I use ion memory for direct-io, it fails as soon as it reaches get_user_pages().
Back to the VM_IO and VM_PFNMAP flags: these two flags are introduced by the remap_pfn_range() call in ion_heap_map_user().
From my point of view, all ion memory (cma/vmalloc/system heap) is managed by the Linux VM, so it is not reasonable to have the VM_IO and VM_PFNMAP flags, but I don't see any suitable function to replace remap_pfn_range(). Any suggestions?
Post by Zengtao (B)
Thanks && Regards
Zengtao
The carveout heap is omitted from your list of 'all ion memory'. At
one time, carveout memory was not backed by struct pages so I suspect
this is a holdover from then. This would probably be better served
Yes, you are right, I missed the carveout heap, which needs VM_IO and VM_PFNMAP. I think the carveout heap can implement its own map_user rather than using the common ion_heap_map_user.
The carveout heap only uses memory with struct pages these days. My point
was that VM_IO and VM_PFNMAP shouldn't need to be used at all
anymore. Sorry for the confusion.
Post by Zengtao (B)
by using vm_insert_page and handling higher order pages properly.
Your latest patch has removed the page faulting support; I didn't dig into the reason, but I think this conflicts with vm_insert_page.
vm_insert_page should be able to be used outside the fault handler.
Post by Zengtao (B)
Thanks,
Laura
I tried a similar approach to what the dma framework does (split the pages and map_vm_area), but the split breaks the ion sg design. Maybe we need a new low-level map function instead of directly using remap_pfn_range.
Yes, I don't think we can directly copy what the dma framework does. I think we
should be okay with allocating pages with __GFP_COMP and then just using
vm_insert_page but I'm not 100% sure this would work with CMA.
For completeness' sake, I know the request is for direct-io but can you share a
few more details about the driver/use case where you want to use direct-io with
Ion buffers?
For example, in a video recorder application the hardware video encoder writes the encoded video data into ion memory, and the result is then stored on disk. If we can use direct-io, an extra memcpy is saved, so we get better performance and lower CPU usage.
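
For illustration, the userspace flow we are after looks roughly like this (hypothetical sketch: error handling is trimmed, and the buffer fd and length are assumed to satisfy the O_DIRECT alignment rules):

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical sketch; ion_buf_fd is assumed to be a buffer fd exported
 * by ion that supports mmap, and len a block-aligned buffer size. */
static int save_encoded_frame(int ion_buf_fd, size_t len, const char *path)
{
        /* the encoder has already written the bitstream into this buffer */
        void *buf = mmap(NULL, len, PROT_READ, MAP_SHARED, ion_buf_fd, 0);
        int out = open(path, O_WRONLY | O_CREAT | O_DIRECT, 0644);

        /* write() with O_DIRECT pins the user pages via get_user_pages(),
         * which is exactly the step that currently fails because the ion
         * mapping is marked VM_IO | VM_PFNMAP */
        ssize_t written = write(out, buf, len);

        close(out);
        munmap(buf, len);
        return written == (ssize_t)len ? 0 : -1;
}
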
Post by Zengtao (B)
Thanks
Zengtao
Thanks,
Laura
Thanks
