Lines Matching +full:memory +full:- +full:mapped
8 of the API (and actual examples), see Documentation/core-api/dma-api-howto.rst.
11 Part II describes extensions for supporting non-consistent memory
13 non-consistent platforms (this is usually only legacy platforms) you
16 Part I - dma_API
17 ----------------
19 To get the dma_API, you must #include <linux/dma-mapping.h>. This
27 Part Ia - Using large DMA-coherent buffers
28 ------------------------------------------
36 Consistent memory is memory for which a write by either the device or
40 devices to read that memory.)
42 This routine allocates a region of <size> bytes of consistent memory.
51 Note: consistent memory can be expensive on some platforms, and the
53 consolidate your requests for consistent memory as much as possible.
59 the returned memory, like GFP_DMA).
67 Free a region of consistent memory you previously allocated. dev,
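A rough sketch of how this alloc/free pair is typically used (the RING_BYTES
size, the variable names and the probe-style error handling below are
illustrative, not taken from the lines above)::

	#include <linux/dma-mapping.h>

	/* Illustrative only: a driver-private descriptor ring of 4 KiB. */
	#define RING_BYTES 4096

	static void *ring_cpu;		/* CPU virtual address */
	static dma_addr_t ring_dma;	/* DMA address handed to the device */

	static int example_alloc_ring(struct device *dev)
	{
		/* Consistent (coherent) memory is visible to both CPU and
		 * device without explicit sync calls, but may be expensive,
		 * so allocate it sparingly and consolidate requests. */
		ring_cpu = dma_alloc_coherent(dev, RING_BYTES, &ring_dma,
					      GFP_KERNEL);
		if (!ring_cpu)
			return -ENOMEM;
		return 0;
	}

	static void example_free_ring(struct device *dev)
	{
		/* dev, size and both addresses must match the allocation. */
		dma_free_coherent(dev, RING_BYTES, ring_cpu, ring_dma);
	}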
76 Part Ib - Using small DMA-coherent buffers
77 ------------------------------------------
81 Many drivers need lots of small DMA-coherent memory regions for DMA
84 much like a struct kmem_cache, except that they use the DMA-coherent allocator,
86 for alignment, like queue heads needing to be aligned on N-byte boundaries.
95 dma_pool_create() initializes a pool of DMA-coherent buffers
103 crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
112 Wraps dma_pool_alloc() and also zeroes the returned memory if the
122 This allocates memory from the pool; the returned memory will meet the
136 This puts memory back into the pool. The pool is what was passed to
138 were returned when that routine allocated the memory being freed.
147 memory back to the pool before you destroy it.
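A minimal sketch of the pool lifecycle described above, assuming a
hypothetical 64-byte descriptor with 16-byte alignment and no boundary
crossing restriction (all names below are illustrative)::

	#include <linux/dmapool.h>

	static struct dma_pool *desc_pool;

	static int example_pool_setup(struct device *dev)
	{
		/* size = 64, align = 16, boundary = 0 (no restriction) */
		desc_pool = dma_pool_create("example-desc", dev, 64, 16, 0);
		if (!desc_pool)
			return -ENOMEM;
		return 0;
	}

	static void *example_get_desc(dma_addr_t *dma)
	{
		/* dma_pool_zalloc() also zeroes the returned memory. */
		return dma_pool_zalloc(desc_pool, GFP_KERNEL, dma);
	}

	static void example_put_desc(void *vaddr, dma_addr_t dma)
	{
		dma_pool_free(desc_pool, vaddr, dma);
	}

	static void example_pool_teardown(void)
	{
		/* All buffers must have been returned to the pool first. */
		dma_pool_destroy(desc_pool);
	}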
150 Part Ic - DMA addressing limitations
151 ------------------------------------
190 is the minimum required to cover all of memory. Examining the
215 addition, for high-rate short-lived streaming mappings, the upfront time
227 transfer memory ownership. Returns %false if those calls can be skipped.
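As a hedged sketch of declaring a device's addressing limitation (the choice
of a 32-bit mask and the warning message are illustrative only)::

	#include <linux/dma-mapping.h>

	static int example_set_dma_mask(struct device *dev)
	{
		/* Hypothetical device that can only address 32 bits of DMA.
		 * This sets both the streaming and the coherent mask. */
		int ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));

		if (ret)
			dev_warn(dev, "no suitable DMA addressing available\n");
		return ret;
	}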
237 Part Id - Streaming DMA mappings
238 --------------------------------
246 Maps a piece of processor virtual memory so it can be accessed by the
247 device and returns the DMA address of the memory.
255 DMA_TO_DEVICE data is going from the memory to the device
256 DMA_FROM_DEVICE data is coming from the device to the memory
262 Not all memory regions in a machine can be mapped by this API.
264 physical memory. Since this API does not provide any scatter/gather
265 capability, it will fail if the user tries to map a non-physically
266 contiguous piece of memory. For this reason, memory to be mapped by
270 Further, the DMA address of the memory must be within the
273 the memory ANDed with the dma_mask is still equal to the DMA
274 address, then the device can perform DMA to the memory). To
275 ensure that the memory allocated by kmalloc is within the dma_mask,
276 the driver may specify various platform-dependent flags to restrict
283 maps an I/O DMA address to a physical memory address). However, to be
289 Memory coherency operates at a granularity called the cache
290 line width. In order for memory mapped by this API to operate
291 correctly, the mapped region must begin exactly on a cache line
292 boundary and end exactly on one (to prevent two separately mapped
301 of the memory region by the software and before it is handed off to
302 the device. Once this primitive is used, memory covered by this
303 primitive should be treated as read-only by the device. If the device
308 accesses data that may be changed by the device. This memory should
309 be treated as read-only by the driver. If the driver needs to write
313 isn't sure if the memory was modified before being handed off to the
315 you must always sync bidirectional memory twice: once before the
316 memory is handed off to the device (to make sure all memory changes
327 Unmaps the region previously mapped. All the parameters passed in
369 the returned DMA address with dma_mapping_error(). A non-zero return value
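A sketch of a single-buffer streaming mapping with the error check applied
(the transmit-path framing and names are hypothetical; buf is assumed to come
from kmalloc())::

	#include <linux/dma-mapping.h>

	static int example_tx(struct device *dev, void *buf, size_t len)
	{
		dma_addr_t dma;

		dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
		if (dma_mapping_error(dev, dma))
			return -ENOMEM;	/* never hand a failed mapping to hw */

		/* ... program the device with 'dma' and wait for completion ... */

		dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
		return 0;
	}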
379 Returns: the number of DMA address segments mapped (this may be shorter
384 Please note that the sg cannot be mapped again if it has been mapped once.
408 mapped them to. On failure, 0 is returned.
412 accessed sg->address and sg->length as shown above.
420 Unmap the previously mapped scatter/gather list. All the parameters
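A hedged sketch of the scatter/gather variant, assuming a hypothetical
transmit path; note that only the returned count of segments is walked, while
the unmap uses the original nents::

	#include <linux/dma-mapping.h>
	#include <linux/scatterlist.h>

	static int example_tx_sg(struct device *dev, struct scatterlist *sgl,
				 int nents)
	{
		struct scatterlist *sg;
		int i, count;

		count = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
		if (count == 0)
			return -ENOMEM;	/* 0 means the mapping failed */

		for_each_sg(sgl, sg, count, i) {
			/* Use sg_dma_address()/sg_dma_len(), not
			 * sg->address and sg->length. */
			u64 addr = sg_dma_address(sg);
			unsigned int len = sg_dma_len(sg);

			/* ... feed addr/len to the hardware ... */
			(void)addr;
			(void)len;
		}

		/* Unmap with the original nents, not the returned count. */
		dma_unmap_sg(dev, sgl, nents, DMA_TO_DEVICE);
		return 0;
	}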
460 - Before reading values that have been written by DMA from the device
462 - After writing values that will be written to the device using DMA
464 - before *and* after handing memory to the device if the memory is
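Tying the sync rules above to a concrete (hypothetical) receive path, where a
long-lived DMA_FROM_DEVICE buffer stays mapped while the CPU inspects each
completed transfer::

	#include <linux/dma-mapping.h>

	static void example_rx_complete(struct device *dev, dma_addr_t dma,
					void *buf, size_t len)
	{
		/* Give the buffer back to the CPU before reading what the
		 * device wrote into it. */
		dma_sync_single_for_cpu(dev, dma, len, DMA_FROM_DEVICE);

		/* ... inspect buf on the CPU ... */

		/* Hand ownership back to the device before it is reused. */
		dma_sync_single_for_device(dev, dma, len, DMA_FROM_DEVICE);
	}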
495 The interpretation of DMA attributes is architecture-specific, and
497 Documentation/core-api/dma-attributes.rst.
505 you could pass an attribute DMA_ATTR_FOO when mapping memory
508 #include <linux/dma-mapping.h>
509 /* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
510 * documented in Documentation/core-api/dma-attributes.rst */
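A sketch of passing such an attribute through one of the *_attrs variants;
DMA_ATTR_FOO remains purely hypothetical, as in the example above, so the line
using it is left commented out::

	#include <linux/dma-mapping.h>

	static dma_addr_t example_map_with_attr(struct device *dev, void *buf,
						size_t len)
	{
		/* DMA_ATTR_FOO is hypothetical; real attributes are listed in
		 * Documentation/core-api/dma-attributes.rst. */
		unsigned long attrs = 0;

		/* attrs |= DMA_ATTR_FOO; */

		return dma_map_single_attrs(dev, buf, len, DMA_TO_DEVICE,
					    attrs);
	}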
534 Part II - Non-coherent DMA allocations
535 --------------------------------------
538 by the passed-in device, but which need explicit management of memory ownership
550 This routine allocates a region of <size> bytes of non-coherent memory. It
563 kmalloc()) for the allocation, but rejects flags used to specify a memory
566 Before giving the memory to the device, dma_sync_single_for_device() needs
567 to be called, and before reading memory written by the device,
577 Free a region of memory previously allocated using dma_alloc_pages().
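A hedged sketch of the page-based allocator described here, assuming a
hypothetical one-page receive buffer (the direction, size and names are
illustrative)::

	#include <linux/dma-mapping.h>

	static struct page *rx_page;
	static dma_addr_t rx_dma;

	static int example_alloc_rx_page(struct device *dev)
	{
		rx_page = dma_alloc_pages(dev, PAGE_SIZE, &rx_dma,
					  DMA_FROM_DEVICE, GFP_KERNEL);
		if (!rx_page)
			return -ENOMEM;

		/* Ownership starts with the CPU; hand the memory to the
		 * device before it writes into it. */
		dma_sync_single_for_device(dev, rx_dma, PAGE_SIZE,
					   DMA_FROM_DEVICE);
		return 0;
	}

	static void example_free_rx_page(struct device *dev)
	{
		dma_free_pages(dev, PAGE_SIZE, rx_page, rx_dma,
			       DMA_FROM_DEVICE);
	}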
599 kernel virtual address for the allocated memory instead of the page structure.
607 Free a region of memory previously allocated using dma_alloc_noncoherent().
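The same idea with the wrapper that returns a kernel virtual address instead
of a page structure; the bidirectional buffer and its names below are
illustrative assumptions::

	#include <linux/dma-mapping.h>

	static void *cmd_buf;
	static dma_addr_t cmd_dma;

	static int example_alloc_cmd_buf(struct device *dev, size_t size)
	{
		cmd_buf = dma_alloc_noncoherent(dev, size, &cmd_dma,
						DMA_BIDIRECTIONAL, GFP_KERNEL);
		return cmd_buf ? 0 : -ENOMEM;
	}

	static void example_free_cmd_buf(struct device *dev, size_t size)
	{
		dma_free_noncoherent(dev, size, cmd_buf, cmd_dma,
				     DMA_BIDIRECTIONAL);
	}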
619 This routine allocates <size> bytes of non-coherent and possibly non-contiguous
620 memory. It returns a pointer to struct sg_table that describes the allocated
621 and DMA mapped memory, or NULL if the allocation failed. The resulting memory
622 can be used for anything that a struct page mapped into a scatterlist is suitable for.
624 The returned sg_table is guaranteed to have a single DMA-mapped segment as
625 indicated by sgt->nents, but it might have multiple CPU side segments as
626 indicated by sgt->orig_nents.
632 kmalloc()) for the allocation, but rejects flags used to specify a memory
637 Before giving the memory to the device, dma_sync_sgtable_for_device() needs
638 to be called, and before reading memory written by the device,
649 Free memory previously allocated using dma_alloc_noncontiguous(). dev, size,
664 Once a non-contiguous allocation is mapped using this function, the
697 memory or doing partial flushes.
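A hedged sketch of a large, possibly non-contiguous allocation with an
optional contiguous kernel mapping on top of it (the bidirectional use and the
names below are assumptions for illustration)::

	#include <linux/dma-mapping.h>

	static struct sg_table *big_sgt;
	static void *big_vaddr;

	static int example_alloc_big(struct device *dev, size_t size)
	{
		big_sgt = dma_alloc_noncontiguous(dev, size, DMA_BIDIRECTIONAL,
						  GFP_KERNEL, 0);
		if (!big_sgt)
			return -ENOMEM;

		/* Optional: contiguous kernel virtual mapping of the pages. */
		big_vaddr = dma_vmap_noncontiguous(dev, size, big_sgt);
		if (!big_vaddr) {
			dma_free_noncontiguous(dev, size, big_sgt,
					       DMA_BIDIRECTIONAL);
			return -ENOMEM;
		}

		/* Hand ownership to the device before it touches the memory. */
		dma_sync_sgtable_for_device(dev, big_sgt, DMA_BIDIRECTIONAL);
		return 0;
	}

	static void example_free_big(struct device *dev, size_t size)
	{
		dma_vunmap_noncontiguous(dev, big_vaddr);
		dma_free_noncontiguous(dev, size, big_sgt, DMA_BIDIRECTIONAL);
	}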
707 Part III - Debug drivers use of the DMA-API
708 -------------------------------------------
710 The DMA-API as described above has some constraints. DMA addresses must be
716 To debug drivers and find bugs in the usage of the DMA-API, checking code can
719 debugging of DMA-API usage" option in your kernel configuration. Enabling this
723 about what DMA memory was allocated for which device. If this code detects an
727 WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
730 forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
731 function [device address=0x00000000640444be] [size=66 bytes] [mapped as
734 Pid: 0, comm: swapper Tainted: G W 2.6.28-dmatest-09289-g8bb99c0 #1
755 <EOI> <4>---[ end trace f6435a98e2a38c0e ]---
758 of the DMA-API call which caused this warning.
766 The debugfs directory for the DMA-API debugging code is called dma-api/. In
770 dma-api/all_errors This file contains a numeric value. If this
776 dma-api/disabled This read-only file contains the character 'Y'
778 happen when it runs out of memory or if it was
781 dma-api/dump This read-only file contains current DMA
784 dma-api/error_count This file is read-only and shows the total
787 dma-api/num_errors The number in this file shows how many
793 dma-api/min_free_entries This read-only file can be read to get the
799 dma-api/num_free_entries The current number of free dma_debug_entries
802 dma-api/nr_total_entries The total number of dma_debug_entries in the
805 dma-api/driver_filter You can write a name of a driver into this file
814 'dma_debug=off' as a boot parameter. This will disable DMA-API debugging.
824 out of dma_debug_entries and was unable to allocate more on-demand. 65536
825 entries are preallocated at boot - if this is too low for you, boot with
839 dma-debug interface debug_dma_mapping_error() to debug drivers that fail