I have managed to implement a solution similar to what is described in steps 3 and 4 of "Map physical memory to userspace as normal, struct page backed mapping".
My fault handler looks like this:
vm_fault_t vm_fault(struct vm_fault* vmf)
{
    unsigned long pos = vmf->vma->vm_pgoff / NPAGES;
    unsigned long offset = vmf->address - vmf->vma->vm_start;
    // Have to do the address-to-page translation manually; for unknown
    // reasons virt_to_page() causes a bus error when userspace writes to
    // the memory.
    unsigned long physaddr = __pa(memory_list[pos].kmalloc_area) + offset;
    unsigned long pfn = physaddr >> PAGE_SHIFT;
    struct page* page = pfn_to_page(pfn);
    // Increment the refcount on the page.
    get_page(page);
    return vmf_insert_page(vmf->vma, vmf->address, page);
}
and my mmap() implementation:
static int mmap_mmap(struct file* filp, struct vm_area_struct* vma)
{
    // Do not map to userspace using remap_pfn_range(), since those mappings
    // are incompatible with zero-copy for the sendmsg() syscall (due to the
    // VM_IO and VM_PFNMAP flags). Instead set up the mapping via the fault
    // handler.
    vma->vm_ops = &imageaccess_vm_operations;
    vma->vm_flags |= VM_MIXEDMAP; // Must be set, or else vmf_insert_page() will fail.
    return 0;
}
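For completeness, here is a sketch of how the two snippets above could be wired together. The names `imageaccess_vm_operations`, `vm_fault`, and `mmap_mmap` come from the code above; the `file_operations` name is my assumption:

```c
/* Sketch, not a drop-in module: connects the fault handler shown above
 * to a vm_operations_struct, and exposes mmap_mmap via file_operations. */
static const struct vm_operations_struct imageaccess_vm_operations = {
    .fault = vm_fault,          /* the handler shown above */
};

static const struct file_operations imageaccess_fops = { /* name assumed */
    .owner = THIS_MODULE,
    .mmap  = mmap_mmap,
};
```

One caveat: on recent kernels (6.3 and later) `vma->vm_flags` can no longer be assigned directly; the equivalent there is `vm_flags_set(vma, VM_MIXEDMAP)`.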
I have not figured out why virt_to_page() does not work here; if anyone has an idea, feel free to comment. The implementation above works in any case.