Commit 69235aa80090 ("net: Remove the get current NAPI context API")
removed the definition of get_current_napi_context() as rmnet_data
was no longer using it. However, the rmnet_data change to use its
NAPI in multiple contexts was prone to races in hotplug scenarios.
Add back get_current_napi_context() and current_napi to the
softnet_data struct.
CRs-Fixed: 966095
Change-Id: I7cf1c5e39a5ccbd7a74a096b11efd179a4d0d034
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
[subashab@codeaurora.org: resolve trivial merge conflicts]
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
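
A minimal sketch of how an rmnet-style consumer might use the restored
API; the delivery helper is hypothetical and the signature of
get_current_napi_context() is assumed from the msm-3.10 definition:

    /* Hypothetical consumer; assumes get_current_napi_context()
     * returns the napi_struct tracked in this CPU's softnet_data. */
    static void rmnet_style_deliver(struct sk_buff *skb)
    {
            struct napi_struct *napi = get_current_napi_context();

            if (napi)
                    napi_gro_receive(napi, skb);
            else
                    netif_receive_skb(skb);
    }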
kmap_flush_unused does not flush kmap_atomic mappings, which
are handled separately. Architectures may have use cases that require
these to be flushed. Add an option to let architectures define
kmap_atomic flushing to get rid of extra mappings.
Change-Id: I5a338ed9e215f0f0ad3ab58a3066d2f4c8ce3ba7
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
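
A sketch of one way the option could look: a weak no-op hook that
interested architectures override (the hook name is an assumption, not
taken from this diff):

    /* Default: nothing to flush unless the architecture overrides this. */
    void __weak kmap_atomic_flush_unused(void)
    {
    }

    void kmap_flush_unused(void)
    {
            lock_kmap();
            flush_all_zero_pkmaps();
            unlock_kmap();
            kmap_atomic_flush_unused();     /* assumed arch hook */
    }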
Add new APIs to allow clients to map and unmap dma_buf buffers created by
ION. The call to the unmap API will not actually do the unmapping from the
IOMMU. The unmapping will occur when the ION/dma_buf buffer is actually
freed. This behavior can be disabled with DMA_ATTR_NO_DELAYED_UNMAP.
Change-Id: Ic4dbd3b582eb0388020c650cab5fbb1dad67ae81
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
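
A hedged sketch of a client opting out of the delayed unmap; the msm
mapping helper's name and signature are assumptions based on codeaurora
trees, not confirmed by this log:

    static int map_ion_dmabuf(struct device *dev, struct dma_buf *dmabuf,
                              struct sg_table *sgt)
    {
            DEFINE_DMA_ATTRS(attrs);

            /* Opt out of delayed unmap: tear the IOMMU mapping down at
             * unmap time rather than when the buffer is finally freed. */
            dma_set_attr(DMA_ATTR_NO_DELAYED_UNMAP, &attrs);

            /* helper name/signature assumed */
            return msm_dma_map_sg_attrs(dev, sgt->sgl, sgt->nents,
                                        DMA_BIDIRECTIONAL, dmabuf, &attrs);
    }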
This is a snapshot of the ION support as of msm-3.10 commit
acdce027751d5a7488b283f0ce3111f873a5816d (Merge "defconfig: arm64: Enable
ONESHOT_SYNC for msm8994")
In addition, comment out the shrinker code and skip-zeroing bits as they
aren't yet in the tree.
Change-Id: Id9e1e7fa4c35ce5a9f9348837f05f002258865cf
Signed-off-by: Kumar Gala <galak@codeaurora.org>
[mitchelh: dropped MSM changes to ion_chunk_heap, dropped MSM changes to
ion_carveout_heap]
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
The DMA framework currently zeros all buffers because it (rightfully so)
assumes that drivers will soon need to pass the memory to a device.
Some devices/use cases may not require zeroed memory, and there can
be an increase in performance if we skip the zeroing. Add a DMA_ATTR
to allow skipping of DMA zeroing.
Change-Id: Id9ccab355554b3163d8e7eae1caa82460e171e34
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[mitchelh: dropped changes to arm32]
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
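
A sketch of the intended usage with the 3.18-era dma_attrs API; the
attribute name DMA_ATTR_SKIP_ZEROING is an assumption (the message only
says "a DMA_ATTR"):

    static void *alloc_unzeroed(struct device *dev, size_t size,
                                dma_addr_t *handle)
    {
            DEFINE_DMA_ATTRS(attrs);

            /* Caller promises the device does not need zeroed memory. */
            dma_set_attr(DMA_ATTR_SKIP_ZEROING, &attrs); /* assumed name */
            return dma_alloc_attrs(dev, size, handle, GFP_KERNEL, &attrs);
    }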
is_vmalloc_addr currently assumes that all vmalloc addresses
exist between VMALLOC_START and VMALLOC_END. This may not be
the case when interleaving vmalloc and lowmem. Update
is_vmalloc_addr to properly check for this.
Correspondingly we need to ensure that VMALLOC_TOTAL accounts
for all the vmalloc regions when CONFIG_ENABLE_VMALLOC_SAVING
is enabled.
Change-Id: I5def3d6ae1a4de59ea36f095b8c73649a37b1f36
Signed-off-by: Susheel Khiani <skhiani@codeaurora.org>
Currently on 32-bit systems, virtual space above
PAGE_OFFSET is reserved for direct mapped lowmem,
and part of the virtual address space is reserved for
vmalloc. We want as much direct mapped memory as
possible, since there is a penalty for mapping/unmapping
highmem. Now, we may have an image that is expected to
live for the lifetime of the system and is reserved in a
physical region that would be part of direct mapped
lowmem. The physical memory which is thus reserved is
never used by Linux. This means that even though the
system is not actually accessing the virtual memory
corresponding to the reserved physical memory, we are
still losing that portion of direct mapped lowmem space.
So by allowing lowmem to be non-contiguous, we can give
this unused virtual address space of the reserved
region back for use in vmalloc.
Change-Id: I980b3dfafac71884dcdcb8cd2e4a6363cde5746a
Signed-off-by: Susheel Khiani <skhiani@codeaurora.org>
It was found that a number of tasks were blocked in the reclaim path
(throttle_vm_writeout) for seconds, because of vmstat_diff not being
synced in time. Fix that by adding a new function
global_page_state_snapshot.
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
Change-Id: Iec167635ad724a55c27bdbd49eb8686e7857216c
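
A plausible shape for the new helper, summing the per-zone snapshot
(which folds in the per-cpu diffs) over all populated zones; a sketch,
not the literal diff:

    unsigned long global_page_state_snapshot(enum zone_stat_item item)
    {
            struct zone *zone;
            unsigned long x = 0;

            /* zone_page_state_snapshot() already adds the per-cpu
             * vm_stat_diff deltas that plain counters may lag on. */
            for_each_populated_zone(zone)
                    x += zone_page_state_snapshot(zone, item);

            return x;
    }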
There are a couple of issues with swapcache usage when ZRAM is used
as a swap device.
1) The kernel does swap readahead, which can be around 6 to 8 pages
depending on total RAM; this is not required for zram since
accesses are fast.
2) The kernel delays the freeing of swapcache expecting a later hit,
which again is useless in the case of zram.
3) This is not related to swapcache, but to zram usage itself.
As mentioned in (2), the kernel delays freeing of swapcache, but along
with that it also delays freeing the zram compressed page, i.e. there
can be two copies, though one is compressed.
This patch addresses these issues using two new flags,
QUEUE_FLAG_FAST and SWP_FAST, to indicate that accesses to the device
will be fast and cheap, and instructs the swap layer to free up
swap space aggressively and not to do readahead.
Change-Id: I5d2d5176a5f9420300bb2f843f6ecbdb25ea80e4
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
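
A sketch of how the readahead bypass could look in swapin_readahead();
the lookup and fallback helpers below are shorthand, not confirmed APIs:

    struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
                                  struct vm_area_struct *vma,
                                  unsigned long addr)
    {
            /* entry-to-swap_info lookup written as a helper; sketch only */
            struct swap_info_struct *si = swp_entry_to_info(entry);

            /* Fast devices (e.g. zram) gain nothing from readahead:
             * fetch just the faulting page. */
            if (si->flags & SWP_FAST)
                    return read_swap_cache_async(entry, gfp_mask, vma, addr);

            /* normal readahead path, abbreviated as a helper */
            return swapin_readahead_pages(entry, gfp_mask, vma, addr);
    }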
The existing calculation of vmpressure takes into account only
the ratio of reclaimed to scanned pages, but not the time spent
or the difficulty in reclaiming those pages. For example, when there
are quite a number of file pages in the system, an allocation
request can be satisfied by reclaiming the file pages alone. If
such a reclaim is successful, the vmpressure value will remain low
irrespective of the time spent by the reclaim code to free up the
file pages. With a feature like lowmemorykiller, killing a task
can be faster than reclaiming the file pages alone. So if the
vmpressure values reflect the reclaim difficulty level, clients
can make a decision based on that, for example to kill a task early.
This patch monitors the number of pages scanned in the direct
reclaim path and scales the vmpressure level according to that.
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
Change-Id: I6e643d29a9a1aa0814309253a8b690ad86ec0b13
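
For reference, the stock ratio (as in upstream vmpressure_calc_level())
plus a sketch of the scaling idea; the actual scaling factor used by the
patch is not stated in this log and is an assumption here:

    unsigned long scale = scanned + reclaimed;
    unsigned long pressure;

    /* stock vmpressure: pure reclaimed/scanned ratio */
    pressure = scale - (reclaimed * scale / scanned);
    pressure = pressure * 100 / scale;

    /* patch idea (sketch only): additionally weight by how much had
     * to be scanned, so a hard-won reclaim reports higher pressure */
    pressure = min(100UL, pressure * scanned / vmpressure_win);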
Currently, vmpressure is tied to memcg and its events are
available only to userspace clients. This patch removes
the dependency on CONFIG_MEMCG and adds a mechanism for
in-kernel clients to subscribe to vmpressure events (in
fact, raw vmpressure values are delivered instead of vmpressure
levels, to give clients more flexibility to take actions
at custom pressure levels which are not currently defined
by the vmpressure module).
Change-Id: I38010f166546e8d7f12f5f355b5dbfd6ba04d587
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
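
A sketch of what an in-kernel subscriber might look like; the
registration helper's name is assumed from codeaurora trees, and the
reaction is hypothetical:

    static int lmk_vmpressure_cb(struct notifier_block *nb,
                                 unsigned long pressure, void *data)
    {
            /* raw 0-100 value, not a predefined level */
            if (pressure >= 95)
                    lmk_kill_something();   /* hypothetical reaction */
            return NOTIFY_OK;
    }

    static struct notifier_block lmk_vmpressure_nb = {
            .notifier_call = lmk_vmpressure_cb,
    };

    /* at init time; helper name assumed */
    vmpressure_notifier_register(&lmk_vmpressure_nb);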
For some use cases, it is not known beforehand how much
removed (carve-out) region size must be reserved. Hence
the reserved region size might need to be adjusted to
support varying use cases. In such cases, maintaining
different device tree configurations to support varying
carve-out region sizes is difficult.
Introduce an optional device tree property for
reserved-memory, "no-map-fixup", which works in tandem with
the "removed-dma-pool" compatible and tries to shrink and
adjust the removed area on the very first successful
allocation. At the end of this, it returns the additional
(unused) pages from the region back to the system.
Note that this adjustment is done on the very first
allocation and never after that; thereafter the region is
only big enough to support a maximum of the first
allocation request size. Clients can allocate and
free from this region as from any other dma region.
As the description suggests, this type of region is specific
to certain special needs and is not to be used for common
use cases.
Change-Id: I31f49d6bd957814bc2ef3a94910425b820ccc739
Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
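
An illustrative reserved-memory node (node name, addresses and sizes
are made up for the example):

    reserved-memory {
            #address-cells = <2>;
            #size-cells = <2>;
            ranges;

            /* shrinks to the size of the first allocation */
            example_region: example@90000000 {
                    compatible = "removed-dma-pool";
                    no-map-fixup;
                    reg = <0x0 0x90000000 0x0 0x4000000>;
            };
    };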
The current DMA coherent pool assumes that there is a kernel
mapping for the entire pool at all times. This may not always
be what we want. Add the dma_removed ops to
support this use case.
Change-Id: Ie4f1e9bdf57b79699fa8fa7e7a6087e6d88ebbfa
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
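
A hedged sketch of how a platform layer might attach the new ops to a
device that opted in; the ops symbol, the archdata field path and the
DT property are all tree-specific assumptions:

    /* sketch: device's buffers come from a "removed" (unmapped) pool */
    if (of_property_read_bool(dev->of_node, "qcom,use-removed-pool"))
            dev->archdata.dma_ops = &removed_dma_ops;  /* assumed symbol */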
Bring back the is_cma_pageblock definition for determining if a
page is CMA or not.
Change-Id: I39fd546e22e240b752244832c79514f109c8e84b
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
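
The definition is most likely the obvious one (a sketch consistent with
the earlier msm trees):

    static inline bool is_cma_pageblock(struct page *page)
    {
            return get_pageblock_migratetype(page) == MIGRATE_CMA;
    }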
Allow other functions to dump the list of tasks.
Useful when debugging memory leaks.
Change-Id: I76c33a118a9765b4c2276e8c76de36399c78dbf6
Signed-off-by: Liam Mark <lmark@codeaurora.org>
Strongly ordered memory is occasionally needed for some DMA
allocations for specialized use cases. Add the corresponding
DMA attribute.
Change-Id: Idd9e756c242ef57d6fa6700e51cc38d0863b760d
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
After getting an allocation from dma_alloc_coherent, there
may be cases where it is necessary to remap the handle
into the CPU's address space (e.g. no CPU side mapping was
requested at allocation time but now one is needed). Add
APIs to bring a handle into the CPU address space again.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[imaund@codeaurora.org: resolved context conflicts and added support
for remap 'no_warn' argument]
Signed-off-by: Ian Maund <imaund@codeaurora.org>
There are many drivers in the kernel which can hold on
to lots of memory. It can be useful to dump out all those
drivers at key points in the kernel. Introduce a notifier
framework for dumping this information. When the notifiers
are called, drivers can dump out the state of any memory
they may be using.
Change-Id: Ifb2946964bf5d072552dd56d8d6dfdd794af6d84
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
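
A sketch of a driver-side subscriber; the registration helper's name is
assumed, and the bookkeeping variable is hypothetical:

    static unsigned long mydrv_kb_held;     /* hypothetical bookkeeping */

    static int mydrv_mem_dump(struct notifier_block *nb,
                              unsigned long action, void *data)
    {
            pr_info("mydrv: holding %lu KB\n", mydrv_kb_held);
            return 0;
    }

    static struct notifier_block mydrv_mem_nb = {
            .notifier_call = mydrv_mem_dump,
    };

    /* e.g. at probe time; helper name assumed */
    show_mem_notifier_register(&mydrv_mem_nb);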
Add a new function, memblock_overlaps_memory(), to check if a
region overlaps with a memory bank. This will be used by
peripheral loader code to detect when kernel memory would be
overwritten.
Change-Id: I851f8f416a0f36e85c0e19536b5209f7d4bd431c
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
(cherry picked from commit cc2753448d9f2adf48295f935a7eee36023ba8d3)
Signed-off-by: Josh Cartwright <joshc@codeaurora.org>
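
A sketch of the intended peripheral-loader use; the exact signature is
assumed:

    /* refuse to load firmware over RAM the kernel owns */
    if (memblock_overlaps_memory(fw_base, fw_size)) {
            dev_err(dev, "firmware region overlaps kernel memory\n");
            return -EADDRNOTAVAIL;
    }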
Implement support for allocating an EP based on name.
This allows a function driver to request an EP using an
ep name string. This is useful to request GSI specific
EPs.
CRs-Fixed: 946385
Change-Id: Id38e6e1e3c2d82f440c0507d24f18f808dc3e4dc
Signed-off-by: Devdutt Patnaik <dpatnaik@codeaurora.org>
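
A sketch of a function driver requesting a GSI endpoint by name; the
helper name and the EP name string are assumptions:

    /* assumed helper, shaped like usb_ep_autoconfig() */
    ep = usb_ep_autoconfig_by_name(cdev->gadget, &gsi_in_desc, "gsi-epin");
    if (!ep)
            return -ENODEV;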
Register gsi function suspend callback with composite device to
handle super speed function suspend and resume. Also, move
function suspend option masks to composite device layer.
Change-Id: Ie316973d855612ddc5440934344d18b04d49c567
Signed-off-by: Hemant Kumar <hemantk@codeaurora.org>
Targets supporting the h/w accelerated path over GSI require
GSI-specific configuration of the USB controller.
Add support to enable the h/w accelerated path to IPA.
Include operations to configure USB endpoints as
GSI capable, allocate TRBs that are associated with
GSI capable endpoints, perform operations on these
endpoints, and enable the GSI wrapper in the DWC3 controller.
This allows a function driver to use the GSI based h/w
accelerated data path.
Change-Id: I62688c70a04b1fdab3e522e0af759ebab69d92d7
Signed-off-by: Hemant Kumar <hemantk@codeaurora.org>
This change adds APIs to allocate and instantiate the
multi-instance diag function driver using configfs.
Add an entry in Kconfig to select the diag driver for
configfs.
Change-Id: I428631dc63643eddb075a09d0e46e1a6b1117f0e
Signed-off-by: Hemant Kumar <hemantk@codeaurora.org>
Upon Diag function bind, the DLOAD memory region should be
updated with the USB PID and serial number in order to support
a persistent connection with the PC if the device reboots into
download mode.
This functionality need not be handled in the android.c driver.
The only reason it is there is to be able to locate the IO address
which is specified in device tree. Since this can be done from
the Diag function driver directly, move the handling there.
The address itself can be specified under the "qcom,msm-imem"
parent with its own "qcom,msm-imem-diag-dload" compatible string.
For now, allow falling back to retrieving the address from the
"android_usb" for backwards compatibility until the device tree
files are updated.
Change-Id: I0d6d1dac0f12b7890220d857227ae45c9258c1f2
Signed-off-by: Jack Pham <jackp@codeaurora.org>
Add a function driver to support the Qualcomm diagnostics port over USB.
This snapshot is taken as of msm-3.18 commit
e70ad0cd5e (Promotion of kernel.lnx.3.18-151201).
Change-Id: I51aaa8f6a2e05fc252ea810244ddfc99ca2741cc
Signed-off-by: Hemant Kumar <hemantk@codeaurora.org>
This reverts commit 3a66d7dca186ebdef9b0bf55e216778fa598062c.
Diag function driver calls kref_put_spinlock_irqsave in
diag_write_complete API. Hence revert the change.
This variable keeps track of when the remote_wakeup feature
has been enabled by the host. It is needed for certain
function drivers to perform alternate methods during
suspend/resume.
Signed-off-by: Jack Pham <jackp@codeaurora.org>
Link Power Management (a.k.a. L1) is similar to the existing USB bus
suspend/resume/remote-wakeup, but has transitional latencies of tens
of microseconds between power states (instead of the 3 ms to greater
than 20 ms latencies of USB 2.0 suspend/resume).
Change-Id: I8ae493534702e658c24f384a6b705b08e9ea9d05
Signed-off-by: Shimrit Malichi <smalichi@codeaurora.org>
Signed-off-by: Tarun Gupta <tarung@codeaurora.org>
In super speed mode, if userspace issues a write after USB bus suspend,
usb_func_ep_queue() schedules a wakeup to resume the function. After that
it queues the request, which fails with -ENOTSUPP. As a result, no
notification request is queued to hw, and the write is delayed until
another write request comes along and queues a notification request
after function resume. This causes a mismatch in the MBIM
request/response exchange.
Fix this by queuing the notification request upon function resume if the
notify count is greater than zero. Also, drop control packets after bus
suspend if remote wakeup is not supported or if ep enqueue returns an
error other than -EAGAIN.
CRs-Fixed: 789467
Change-Id: I446de1eb169b4ccb8f4db5f003b622d7b9c0b22b
Signed-off-by: Hemant Kumar <hemantk@codeaurora.org>
Signed-off-by: Azhar Shaikh <azhars@codeaurora.org>
In Super-Speed mode, when the USB core wishes to issue remote wakeup due to
new data arriving on the BAM-to-BAM path, it needs to send a Function Wakeup
notification to the USB host after the USB bus is resumed. However, the
sending of this notification fails when the USB core needs to wake up from
low-power mode, because the low-power mode exit is done asynchronously and
the Function Wakeup notification cannot be sent until the
USB bus is resumed.
This patch fixes this issue by checking whether the USB bus is suspended,
and if so, delaying the sending of the Function Wakeup notification until
the USB bus is resumed.
Change-Id: I293476aaaf920b67fdbdf72a63524edc7a35750b
Signed-off-by: Danny Segal <dsegal@codeaurora.org>
The USB 3.0 specification defines a new 'Function Suspend' feature.
This feature enables the USB host to put inactive composite device
functions in a suspended state even when the device itself is not
suspended. This patch extends the existing framework of USB gadget
to properly support the 'Function Resume' and 'Function Remote Wakeup'
related features.
Change-Id: I51713eac557eabc7b465d161377c09d4b6afa152
Signed-off-by: Danny Segal <dsegal@codeaurora.org>
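
Upstream composite already routes SetFeature(FUNCTION_SUSPEND) to a
per-function func_suspend() callback; a sketch of a handler using the
ch9.h option bits (the per-function helpers are hypothetical):

    static int f_foo_func_suspend(struct usb_function *f, u8 options)
    {
            /* options is the high byte of wIndex:
             * bit 0 = low-power suspend, bit 1 = remote wakeup enabled */
            if (options & (USB_INTRF_FUNC_SUSPEND_LP >> 8))
                    f_foo_enter_low_power(f);       /* hypothetical */
            if (options & (USB_INTRF_FUNC_SUSPEND_RW >> 8))
                    f_foo_arm_remote_wakeup(f);     /* hypothetical */
            return 0;
    }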
This snapshot is taken as of msm-3.18 commit 1513280 (Merge "platform: msm:
msm_bus: Fix memory leak during client unregister").
Change Kconfig options to say QCOM_BUS* instead of MSM_BUS*.
Change-Id: I6dd9aba5b26984a914714ca49ae7253c1f267b4b
Signed-off-by: David Dai <daidavid1@codeaurora.org>
Adding a new element "tsk_dirty" to struct page increases the size
of mem_map/vmemmap; restrict this to debug-only functionality to
save a few MB of memory.
Considering a system with 1G of RAM, there will be nearly 262144
pages and thus that many page structures in mem_map/vmemmap.
With a pointer size of 8 bytes on a 64-bit system, adding this
pointer to "struct page" means an increase of 2MB for mem_map.
CRs-Fixed: 738692
Change-Id: Idf3217dcbe17cf1ab4d462d2aa8d39da1ffd8b13
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
[venkatg@codeaurora.org: Fixed trivial merge conflict]
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
Background writes happen in the context of a background thread.
It is very useful to identify the actual task that generated the
request instead of the background task that submitted the request.
Hence, keep track of the task when a page gets dirtied and dump
this task info while tracing. Not all the pages in the bio are
necessarily dirtied by the same task, but most likely they will be,
since the sectors accessed on the device must be adjacent.
Change-Id: I6afba85a2063dd3350a0141ba87cf8440ce9f777
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
[venkatg@codeaurora.org: Fixed trivial merge conflicts]
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
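
A sketch of the tracking hook, guarded by the debug-only option from
the follow-up patch above (the config symbol is an assumption):

    /* called from the set_page_dirty() paths; sketch only */
    static inline void page_record_dirty_task(struct page *page)
    {
    #ifdef CONFIG_PAGE_TRACK_DIRTY_TASK     /* assumed symbol */
            page->tsk_dirty = current;
    #endif
    }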
Expose "sector_range", which will indicate to the low-level driver
unit-tests the size (in sectors, starting from "start_sector") of the
address space in which they can perform I/O operations. This user-defined
variable can be used to change the address space size from the default
512MiB.
Change-Id: I515a6849eb39b78e653f4018993a2c8e64e2a77f
Signed-off-by: Lee Susman <lsusman@codeaurora.org>
Unit tests submit large requests of 512KB made of 128 bios.
Currently, allocation is done via kmalloc, which may not be able
to allocate such a large buffer that is also physically contiguous.
Using kmalloc to allocate each bio separately is also problematic, as
the buffer might not be page aligned: a bio may end up using more than
a single memory segment, which will fail the mapping of the bios to
the request, which supports up to 128 physical segments.
To avoid such failures, allocate a separate page for each bio
(the bio size is a single page).
Change-Id: Id0da394d458942e093d924bc7aa23aa3231cdca7
Signed-off-by: Gilad Broner <gbroner@codeaurora.org>
[venkatg@codeaurora.org: Drop changes to mmc_block_test.c]
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
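
A sketch of the allocation change: one page per bio, so each bio is
page-aligned and maps to exactly one physical segment (loop variables
and the unwind label are illustrative):

    /* one page per bio instead of one huge kmalloc'd buffer */
    for (i = 0; i < nr_bios; i++) {
            bio_pages[i] = alloc_page(GFP_KERNEL);
            if (!bio_pages[i])
                    goto free_pages;        /* unwind previous pages */
    }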
Current test-iosched design enables running only a single test
for a single block device.
This change modifies the test-iosched framework to allow running
several tests on several block devices.
Change-Id: I051d842733873488b64e89053d9c4e30e1249870
Signed-off-by: Gilad Broner <gbroner@codeaurora.org>
[merez@codeaurora.org: fix conflicts due to removal of BKOPs UT]
Signed-off-by: Maya Erez <merez@codeaurora.org>
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
[venkatg@codeaurora.org: Drop changes to ufs_test.c and
mmc_block_test.c]
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
When running a test, a timer was set to detect test timeout
and to unblock the wait_event() function which is waiting for the
test to finish. This is redundant, as the wait_event timeout variant
gives the same functionality without the overhead of managing a
timer for this purpose, and improves code readability.
Change-Id: Icbd3cb0f3fcb5854673f4506b102b0c80e97d6bb
Signed-off-by: Gilad Broner <gbroner@codeaurora.org>
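
The replacement pattern, roughly (the test-data field names and timeout
constant are assumptions):

    /* timeout folded into the wait itself; no separate timer */
    ret = wait_event_interruptible_timeout(td->wait_q,
                    td->test_state == TEST_COMPLETED,
                    msecs_to_jiffies(TEST_TIMEOUT_MS));
    if (!ret)
            pr_err("%s: test timed out\n", __func__);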
The test will verify correctness of a sequential data pattern
written to the device while new data (with the same pattern) is
written simultaneously.
First, this test will run a long sequential write scenario.
This first stage writes the pattern that will be read
later. Second, sequential read requests will read and
compare the same data. The second-stage reads will be issued in
parallel with write requests to the same LBAs and sizes.
NOTE: The test requires a long timeout.
The purpose of this test is to mix read and write requests on the same
LBA while checking for read data correctness.
Change-Id: I6a437ce689b66233af3055d07a7f62f1e7b40765
Signed-off-by: Dolev Raviv <draviv@codeaurora.org>
[venkatg@codeaurora.org: Changes to ufs_test.c are already
present as part of earlier commit, hence drop them here]
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
Introduce a new callback, 'check_test_completion_fn', to the test-iosched
framework. This callback is necessary to determine whether a test has
completed in situations where the request queue is empty but the
test may still be in progress.
Change-Id: I60bd8cccffacab11a5a7cba78caccf53fea3e1d8
Signed-off-by: Dolev Raviv <draviv@codeaurora.org>
[venkatg@codeaurora.org: Changes to ufs_test.c are already
present as part of earlier commit, hence drop them here]
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
This test adds the ability to test the UFS task management feature
in the driver. It loads the queue with requests in order to allow
task management to operate at full capacity.
Modify the test-iosched infrastructure to support the new tests:
- expose check_test_completion()
Note: we submit 16-bio requests since the current HW is very slow
and we don't want to exceed the timeout duration.
Change-Id: I8ee752cba3c6838d8edc05747fa0288c4b347ef6
Signed-off-by: Dolev Raviv <draviv@codeaurora.org>
Signed-off-by: Lee Susman <lsusman@codeaurora.org>
[merez@codeaurora.org: fix trivial conflicts in ufs_test.c]
Signed-off-by: Maya Erez <merez@codeaurora.org>
[venkatg@codeaurora.org: Changes to ufs_test.c are already
present as part of earlier commit, hence drop them here]
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
Add a define for the test bio size (which is the size of a page),
this is used for allocating the right sized buffer for the bio during
test request creation.
Change-Id: I9505c85c4352009bdee442172eb8ae8f4254cfb0
Signed-off-by: Lee Susman <lsusman@codeaurora.org>
Change time measurements in long_sequential_test from jiffies to ktime,
and make the relevant change in test-iosched infrastructure.
In long_sequential_test we measure throughput, and the jiffies resolution
is not sensitive enough for this calculation.
Change-Id: If7c9a03c687f61996609c014e056bcd7132b9012
Signed-off-by: Lee Susman <lsusman@codeaurora.org>
[venkatg@codeaurora.org: Drop changes to mmc_block_test.c]
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
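
The measurement pattern after the change, roughly (the test body and
total-size variable are hypothetical):

    unsigned long total_kb = TEST_TOTAL_KB; /* hypothetical constant */
    s64 duration_us;
    ktime_t start = ktime_get();

    run_long_sequential_test();             /* hypothetical test body */

    /* microsecond resolution instead of jiffies granularity */
    duration_us = ktime_to_us(ktime_sub(ktime_get(), start));
    pr_info("read %lu KB in %lld us\n", total_kb, duration_us);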
Change the test design so that requests are dynamically created
and freed. This enables running tests with more than 128 requests,
therefore more than 50MiB can be written/read and makes it possible
to measure driver write/read throughput more accurately.
Change-Id: I56c9d6c1afba5c91a0621a16d97feafd4689521d
Signed-off-by: Lee Susman <lsusman@codeaurora.org>
[merez@codeaurora.org: fix conflicts due to BKOPS tests removal]
Signed-off-by: Maya Erez <merez@codeaurora.org>
[venkatg@codeaurora.org: Drop changes to mmc_block_test.c]
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
Some block devices require the rq_disk field to be assigned.
This patch exposes a new API in the block device test utility for
getting rq_disk assigned in the created request.
Change-Id: I61dc4dad50eb7600728156a6cd08bb1ee134df0d
Signed-off-by: Dolev Raviv <draviv@codeaurora.org>
Add functionality to test-iosched so that it can simulate the
ROW scheduler's behaviour. The main additions are:
- 3 distinct request queues with counters
- support for a pending urgent request
- reinsert request implementation (callback + dispatch behavior)
Change-Id: I83b5d9e3d2b8cd9a2353afa6a3e6a4cbc83b0cd4
Signed-off-by: Konstantin Dorfman <kdorfman@codeaurora.org>
Signed-off-by: Lee Susman <lsusman@codeaurora.org>
[merez@codeaurora.org: fixed conflicts due to bkops tests removal]
Signed-off-by: Maya Erez <merez@codeaurora.org>
[venkatg@codeaurora.org: Dropping elevator is_urgent_fn and
reinsert_req_fn ops fn as they are not present in 3.18 kernel]
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
Long sequential read test measures read throughput
at the driver level by reading large requests sequentially.
Change-Id: I3b6d685930e1d0faceabbc7d20489111734cc9d4
Signed-off-by: Lee Susman <lsusman@codeaurora.org>
[merez@codeaurora.org: Fix conflicts as BKOPS tests were removed]
Signed-off-by: Maya Erez <merez@codeaurora.org>
[venkatg@codeaurora.org: Drop changes to mmc_block_test.c]
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
The test scheduler allows testing a block device by dispatching
specific requests according to the test case and declaring PASS/FAIL
according to the requests' completion error codes.
Change-Id: Ief91f9fed6e3c3c75627d27264d5252ea14f10ad
Signed-off-by: Maya Erez <merez@codeaurora.org>
A barrier request is used to control ordering of write requests without
clearing the device's cache. LLD support for barriers is optional. If the
LLD doesn't support barriers, a flush will be issued instead to ensure
logical correctness.
To maintain this fallback, the flush s/w path and flags are appended.
This patch implements the necessary request marking in order to support
the barrier feature in the block layer.
It implements two major changes required for barrier support:
(1) A new flush execution policy is added to support "ordered" requests,
with a fallback in case barriers are not supported by the LLD.
(2) If there is a flush pending in the flush queue, the received barrier
is ignored, in order not to miss a demand for an actual flush.
Change-Id: I6072d759e5c3bd983105852d81732e949da3d448
Signed-off-by: Dolev Raviv <draviv@codeaurora.org>
A barrier request is a type of flush request used to control ordering of
write requests without clearing the device's cache. LLD support for
barriers is optional. To maintain backward compatibility, a barrier request
has to maintain the flush s/w path and flags.
This patch introduces those flags to define the interface between the block
layer and the LLD.
Change-Id: Iefa8e9a5c1b5e8256eaeb0322c435becd4669de9
Signed-off-by: Dolev Raviv <draviv@codeaurora.org>
[imaund@codeaurora.org: Resolved context conflicts]
Signed-off-by: Ian Maund <imaund@codeaurora.org>
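
A sketch of the fallback decision these flags enable; the flag and
helper names below are assumptions, since the log does not name them:

    /* mark an ordered write; fall back to a full flush when the
     * LLD advertises no barrier support (names assumed) */
    if (blk_queue_barrier_capable(q))
            bio->bi_rw |= REQ_BARRIER;
    else
            bio->bi_rw |= REQ_FLUSH;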
|