path: root/include/linux/dma-attrs.h
Commit log (commit message, author, date):
* common: DMA-mapping: add per-buffer coherent mappings attributes (Liam Mark, 2017-01-04)

The DMA_ATTR_FORCE_COHERENT DMA attribute can be used to force a buffer to be mapped as IO coherent. The DMA_ATTR_FORCE_NON_COHERENT DMA attribute can be used to force a buffer to not be mapped as IO coherent.

Change-Id: Id80d77a5ccd797eec36b45b320423fb46c9f5861
Signed-off-by: Liam Mark <lmark@codeaurora.org>
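As a rough sketch (not taken from the patch itself), a driver in a tree carrying this change would typically request the behaviour through the struct dma_attrs helpers declared in this header; dev, size and handle are placeholders, and DMA_ATTR_FORCE_COHERENT only exists in trees with this vendor patch applied:

    #include <linux/dma-attrs.h>
    #include <linux/dma-mapping.h>

    static void *alloc_forced_coherent(struct device *dev, size_t size,
                                       dma_addr_t *handle)
    {
        /* Ask for an IO-coherent mapping regardless of the platform default. */
        DEFINE_DMA_ATTRS(attrs);

        dma_set_attr(DMA_ATTR_FORCE_COHERENT, &attrs);
        return dma_alloc_attrs(dev, size, handle, GFP_KERNEL, &attrs);
    }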
* common: DMA-mapping: Add EXEC_MAPPING attribute (Rohit Vaswani, 2016-03-22)

DMA_ATTR_EXEC_MAPPING specifies that an executable mapping should be created for the requested buffer. By default, the DMA mappings are non-executable.

Change-Id: I135077e14996e92fa9d199bdee043c443db48924
Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org>
* common: DMA-mapping: add NO_DELAYED_UNMAP attribute (Rohit Vaswani, 2016-03-22)

DMA_ATTR_NO_DELAYED_UNMAP specifies to the msm lazy mapping driver that this buffer should be immediately unmapped once it is freed.

Change-Id: I43e6a6058705502cf91bf5f0c530c3099cba06ae
Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org>
* arm: Add option to skip buffer zeroing (Laura Abbott, 2016-03-22)

The DMA framework currently zeros all buffers because it (rightfully so) assumes that drivers will soon need to pass the memory to a device. Some devices/use cases may not require zeroed memory, and there can be an increase in performance if we skip the zeroing. Add a DMA_ATTR to allow skipping of DMA zeroing.

Change-Id: Id9ccab355554b3163d8e7eae1caa82460e171e34
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[mitchelh: dropped changes to arm32]
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
* common: DMA-mapping: Add strongly ordered memory attribute (Laura Abbott, 2016-03-22)

Strongly ordered memory is occasionally needed for some DMA allocations for specialized use cases. Add the corresponding DMA attribute.

Change-Id: Idd9e756c242ef57d6fa6700e51cc38d0863b760d
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
* common: DMA-mapping: add DMA_ATTR_FORCE_CONTIGUOUS attribute (Marek Szyprowski, 2012-11-29)

This patch adds the DMA_ATTR_FORCE_CONTIGUOUS attribute to the DMA-mapping subsystem. By default the DMA-mapping subsystem is allowed to assemble the buffer allocated by the dma_alloc_attrs() function from individual pages if it can be mapped as a contiguous chunk into the device's DMA address space. By specifying this attribute the allocated buffer is forced to be contiguous in physical memory as well.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
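A minimal sketch of how a driver might request a physically contiguous buffer this way; dev and buf_size are placeholders, and passing the same attrs back to dma_free_attrs() mirrors the usual convention:

    DEFINE_DMA_ATTRS(attrs);
    dma_addr_t dma_handle;
    void *cpu_addr;

    dma_set_attr(DMA_ATTR_FORCE_CONTIGUOUS, &attrs);

    /* Contiguous in physical memory, not only in the device's DMA address space. */
    cpu_addr = dma_alloc_attrs(dev, buf_size, &dma_handle, GFP_KERNEL, &attrs);
    if (!cpu_addr)
        return -ENOMEM;

    /* ... use the buffer ... */

    dma_free_attrs(dev, buf_size, cpu_addr, dma_handle, &attrs);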
* common: DMA-mapping: add DMA_ATTR_SKIP_CPU_SYNC attribute (Marek Szyprowski, 2012-07-30)

This patch adds the DMA_ATTR_SKIP_CPU_SYNC attribute to the DMA-mapping subsystem.

By default the dma_map_{single,page,sg} function family transfers a given buffer from the CPU domain to the device domain. Some advanced use cases might require sharing a buffer between more than one device. This requires having a mapping created separately for each device, and is usually performed by calling dma_map_{single,page,sg} more than once for the given buffer, with the device pointer of each device taking part in the buffer sharing. The first call transfers the buffer from the 'CPU' domain to the 'device' domain, which synchronizes the CPU caches for the given region (usually it means that the cache has been flushed or invalidated, depending on the DMA direction). However, subsequent calls to dma_map_{single,page,sg}() for other devices will perform exactly the same synchronization operation on the CPU cache. CPU cache synchronization can be a time-consuming operation, especially if the buffers are large, so it is highly recommended to avoid it if possible.

DMA_ATTR_SKIP_CPU_SYNC allows platform code to skip synchronization of the CPU cache for the given buffer, assuming that it has already been transferred to the 'device' domain. This attribute can also be used for the dma_unmap_{single,page,sg} function family to force the buffer to stay in the device domain after releasing a mapping for it. Use this attribute with care!

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
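A sketch of the buffer-sharing case described above, assuming two hypothetical devices dev_a and dev_b and a kernel buffer buf of buf_size bytes; the CPU cache is maintained only on the first mapping:

    DEFINE_DMA_ATTRS(attrs);
    dma_addr_t addr_a, addr_b;

    /* First mapping: normal CPU cache maintenance is performed here. */
    addr_a = dma_map_single_attrs(dev_a, buf, buf_size, DMA_TO_DEVICE, NULL);

    /* Second mapping of the same buffer: the cache was already handled above,
     * so ask the platform code to skip the redundant synchronization. */
    dma_set_attr(DMA_ATTR_SKIP_CPU_SYNC, &attrs);
    addr_b = dma_map_single_attrs(dev_b, buf, buf_size, DMA_TO_DEVICE, &attrs);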
* common: DMA-mapping: add DMA_ATTR_NO_KERNEL_MAPPING attribute (Marek Szyprowski, 2012-07-30)

This patch adds the DMA_ATTR_NO_KERNEL_MAPPING attribute, which lets the platform avoid creating a kernel virtual mapping for the allocated buffer. On some architectures creating such a mapping is a non-trivial task and consumes very limited resources (like kernel virtual address space or DMA consistent address space). Buffers allocated with this attribute can only be passed to user space by calling dma_mmap_attrs().

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
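A sketch of the intended usage with hypothetical dev, buf_size, vma and cookie variables: the buffer is allocated without a kernel virtual address and is handed to user space from the driver's mmap handler:

    DEFINE_DMA_ATTRS(attrs);
    dma_addr_t dma_handle;
    void *cookie;
    int ret;

    dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);

    /* No kernel virtual mapping is created; the returned cookie is only
     * meaningful to dma_mmap_attrs() and dma_free_attrs(). */
    cookie = dma_alloc_attrs(dev, buf_size, &dma_handle, GFP_KERNEL, &attrs);

    /* Later, in the driver's mmap file operation: */
    ret = dma_mmap_attrs(dev, vma, cookie, dma_handle, buf_size, &attrs);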
* common: DMA-mapping: add NON-CONSISTENT attribute (Marek Szyprowski, 2012-03-28)

DMA_ATTR_NON_CONSISTENT lets the platform choose to return either consistent or non-consistent memory as it sees fit. By using this API, you are guaranteeing to the platform that you have all the correct and necessary sync points for this memory in the driver.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
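A sketch of what providing the sync points yourself might look like, assuming dma_cache_sync() as the synchronization primitive (an assumption on my part, not stated in the commit); dev, data and buf_size are placeholders:

    DEFINE_DMA_ATTRS(attrs);
    dma_addr_t dma_handle;
    void *cpu_addr;

    dma_set_attr(DMA_ATTR_NON_CONSISTENT, &attrs);
    cpu_addr = dma_alloc_attrs(dev, buf_size, &dma_handle, GFP_KERNEL, &attrs);

    /* The CPU fills the buffer, then the driver explicitly hands it to the
     * device; with a non-consistent allocation this sync point is the
     * driver's responsibility. */
    memcpy(cpu_addr, data, buf_size);
    dma_cache_sync(dev, cpu_addr, buf_size, DMA_TO_DEVICE);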
* common: DMA-mapping: add WRITE_COMBINE attribute (Marek Szyprowski, 2012-03-28)

DMA_ATTR_WRITE_COMBINE specifies that writes to the mapping may be buffered to improve performance. It will be used by the replacement for the ARM/AVR32-specific dma_alloc_writecombine() function.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
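A sketch of the replacement pattern: where a driver previously called the ARM-specific dma_alloc_writecombine(), it would instead do something along these lines (fb_virt, fb_size and fb_dma are hypothetical names):

    DEFINE_DMA_ATTRS(attrs);

    dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);

    /* Roughly equivalent to dma_alloc_writecombine(dev, fb_size, &fb_dma, GFP_KERNEL). */
    fb_virt = dma_alloc_attrs(dev, fb_size, &fb_dma, GFP_KERNEL, &attrs);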
* powerpc/cell: Add DMA_ATTR_WEAK_ORDERING dma attribute and use in Cell IOMMU code (Mark Nelson, 2008-07-22)

Introduce a new DMA attribute, DMA_ATTR_WEAK_ORDERING, to use weak ordering on DMA mappings in the Cell processor, and add the code to the Cell IOMMU implementation to use it. Dynamic mappings can be weakly or strongly ordered on an individual basis, but the fixed mapping has to be either completely strong or completely weak. This is currently decided by a kernel boot option (pass iommu_fixed=weak for a weakly ordered fixed linear mapping; strongly ordered is the default).

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
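A sketch of requesting weak ordering for a dynamic mapping through the scatterlist variant of the attrs interfaces; dev, sgl and nents are placeholders:

    struct dma_attrs attrs;
    int mapped;

    init_dma_attrs(&attrs);
    dma_set_attr(DMA_ATTR_WEAK_ORDERING, &attrs);

    mapped = dma_map_sg_attrs(dev, sgl, nents, DMA_BIDIRECTIONAL, &attrs);
    if (!mapped)
        return -EIO;

    /* ... DMA runs with weak ordering on this mapping ... */

    dma_unmap_sg_attrs(dev, sgl, nents, DMA_BIDIRECTIONAL, &attrs);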
* dma: add dma_*map*_attrs() interfaces (Arthur Kepner, 2008-04-29)

Introduce new interfaces, dma_*map*_attrs(), for passing architecture-specific attributes when memory is mapped and unmapped for DMA. Give the interfaces default implementations which ignore attributes. Also introduce the dma_{set,get}_attr() interfaces for setting and retrieving individual attributes. Define one attribute, DMA_ATTR_WRITE_BARRIER, in anticipation of its use by ia64/sn. Select whether architectures implement arch-specific versions of the dma_*map*_attrs() interfaces via HAVE_DMA_ATTRS in Kconfig.

[markn@au1.ibm.com: dma_{set,get}_attr() have to be static inline]
Signed-off-by: Arthur Kepner <akepner@sgi.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Jes Sorensen <jes@sgi.com>
Cc: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Roland Dreier <rdreier@cisco.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: David Miller <davem@davemloft.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Grant Grundler <grundler@parisc-linux.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
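As a rough illustration (not from the patch), a caller collects attributes in a struct dma_attrs and passes it to the new _attrs mapping variants, while an architecture backend that implements them can query individual attributes with dma_get_attr(); dev, buf and buf_size are placeholders:

    struct dma_attrs attrs;
    dma_addr_t mapping;

    init_dma_attrs(&attrs);
    dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs);

    mapping = dma_map_single_attrs(dev, buf, buf_size, DMA_TO_DEVICE, &attrs);

    /* Inside an arch-specific implementation, the attribute can be checked: */
    if (dma_get_attr(DMA_ATTR_WRITE_BARRIER, &attrs)) {
        /* enforce ordering of this DMA write against earlier DMA writes */
    }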