author     Amit Pundir <amit.pundir@linaro.org>    2016-11-15 18:16:35 +0530
committer  Amit Pundir <amit.pundir@linaro.org>    2016-11-15 18:33:34 +0530
commit     91e63c11a5eb143f5d737cf0380088528e7fa327 (patch)
tree       34f5b35edf45c51b9a4d3aa5b04313d97be97754 /net/ipv4/tcp_output.c
parent     79df8fa79b6a2aced892ad2b2c9832e7d9bdea6b (diff)
parent     29703588215bbf281aed9aa8cdec0641757fb9e1 (diff)
Merge branch 'linux-linaro-lsk-v4.4' into linux-linaro-lsk-v4.4-android
Conflicts:
* arch/arm64/include/asm/assembler.h
Pick changes from AOSP Change-Id: I450594dc311b09b6b832b707a9abb357608cc6e4
("UPSTREAM: arm64: include alternative handling in dcache_by_line_op").
* drivers/android/binder.c
Pick changes from LTS commit 14f09e8e7cd8 ("ANDROID: binder: Add strong ref checks"),
instead of AOSP Change-Id: I66c15b066808f28bd27bfe50fd0e03ff45a09fca
("ANDROID: binder: Add strong ref checks").
* drivers/usb/gadget/function/u_ether.c
Refactor the high-speed IRQ throttling logic in AOSP by adding
a check for the last queued request (sketched after the commit message below),
as intended by LTS commit
660c04e8f174 ("usb: gadget: function: u_ether: don't starve tx request queue").
Fixes AOSP Change-Id: I26515bfd9bbc8f7af38be7835692143f7093118a
("USB: gadget: u_ether: Fix data stall issue in RNDIS tethering mode").
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
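
The u_ether conflict above turns on when a tx request may suppress its completion
interrupt. Below is a minimal C sketch of that rule; the function name and the
parameters (tx_qlen, qmult, more_reqs_queued) are hypothetical simplifications and
not the real drivers/usb/gadget/function/u_ether.c interface. The only point it
illustrates is the check the LTS commit adds: interrupts may be throttled, but never
for the last request left in the queue, so the tx request queue cannot starve.

/*
 * Hypothetical sketch of the throttling rule described above; names and
 * signature are illustrative, not the actual u_ether.c code.
 */
#include <stdbool.h>

static bool tx_req_suppress_interrupt(unsigned int tx_qlen,
				      unsigned int qmult,
				      bool more_reqs_queued)
{
	/*
	 * Throttle completion IRQs on high-speed links: only every
	 * qmult-th request signals a completion interrupt ...
	 */
	bool throttled = (tx_qlen % qmult) != 0;

	/*
	 * ... but the last request left in the queue must always raise
	 * an interrupt, otherwise no completion ever fires and the tx
	 * request queue stalls.
	 */
	return throttled && more_reqs_queued;
}

Without the more_reqs_queued-style condition, a lightly loaded queue can park its
only outstanding request with interrupts suppressed, which matches the data stall
the AOSP change set out to fix.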
Diffstat (limited to 'net/ipv4/tcp_output.c')
-rw-r--r--    net/ipv4/tcp_output.c    15
1 file changed, 9 insertions, 6 deletions
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 9eb81a4b0da2..ca3731721d81 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1950,12 +1950,14 @@ static int tcp_mtu_probe(struct sock *sk)
 		len = 0;
 		tcp_for_write_queue_from_safe(skb, next, sk) {
 			copy = min_t(int, skb->len, probe_size - len);
-			if (nskb->ip_summed)
+			if (nskb->ip_summed) {
 				skb_copy_bits(skb, 0, skb_put(nskb, copy), copy);
-			else
-				nskb->csum = skb_copy_and_csum_bits(skb, 0,
-								    skb_put(nskb, copy),
-								    copy, nskb->csum);
+			} else {
+				__wsum csum = skb_copy_and_csum_bits(skb, 0,
+								     skb_put(nskb, copy),
+								     copy, 0);
+				nskb->csum = csum_block_add(nskb->csum, csum, len);
+			}
 
 			if (skb->len <= copy) {
 				/* We've eaten all the data from this skb.
@@ -2569,7 +2571,8 @@ int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb)
 	 * copying overhead: fragmentation, tunneling, mangling etc.
 	 */
 	if (atomic_read(&sk->sk_wmem_alloc) >
-	    min(sk->sk_wmem_queued + (sk->sk_wmem_queued >> 2), sk->sk_sndbuf))
+	    min_t(u32, sk->sk_wmem_queued + (sk->sk_wmem_queued >> 2),
+		  sk->sk_sndbuf))
 		return -EAGAIN;
 
 	if (skb_still_in_host_queue(sk, skb))
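
The first hunk accumulates each copied segment's checksum with csum_block_add() at the
running offset len instead of seeding skb_copy_and_csum_bits() with nskb->csum. The
second hunk replaces a signed min() with min_t(u32, ...). A minimal userspace sketch of
why the unsigned comparison matters follows; all byte counts here are made-up
illustrative values, not kernel data. Once sk_wmem_queued + (sk_wmem_queued >> 2)
exceeds INT_MAX, a signed comparison sees a wrapped, negative limit and
__tcp_retransmit_skb() keeps returning -EAGAIN; comparing as u32 preserves the
intended limit.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Illustrative values only: a very large write queue and send buffer. */
	uint32_t queued     = 0x78000000u;  /* stands in for sk_wmem_queued */
	uint32_t sndbuf     = 0x7fffffffu;  /* stands in for sk_sndbuf      */
	uint32_t wmem_alloc = 0x7a000000u;  /* stands in for sk_wmem_alloc  */

	uint32_t limit_u32 = queued + (queued >> 2); /* 0x96000000, fine as u32     */
	int32_t  limit_s32 = (int32_t)limit_u32;     /* negative when read as int   */

	/* Old code: signed min() picks the wrapped, negative limit. */
	int32_t old_limit = (int32_t)sndbuf < limit_s32 ? (int32_t)sndbuf : limit_s32;

	/* New code: min_t(u32, ...) compares unsigned and keeps the real limit. */
	uint32_t new_limit = sndbuf < limit_u32 ? sndbuf : limit_u32;

	printf("signed limit %d: retransmit %s\n", old_limit,
	       (int32_t)wmem_alloc > old_limit ? "blocked (-EAGAIN)" : "allowed");
	printf("u32 limit %u: retransmit %s\n", new_limit,
	       wmem_alloc > new_limit ? "blocked (-EAGAIN)" : "allowed");
	return 0;
}

With these sample numbers the signed path blocks every retransmit, while the u32 path
lets the retransmit proceed, which is the stall the patch addresses.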