| author | Blagovest Kolenichev <bkolenichev@codeaurora.org> | 2017-08-28 12:06:08 -0700 |
|---|---|---|
| committer | Blagovest Kolenichev <bkolenichev@codeaurora.org> | 2017-09-01 11:47:49 -0700 |
| commit | 901bf6ddcc78595749b0410b19499bc7de5bb72b (patch) | |
| tree | 544194c9653e71ad3273b7e3eee9ec81877ad48c /kernel | |
| parent | e243bb85026ba9a88de4e860b265594da4f73706 (diff) | |
| parent | 4b8fc9f2bcfec33edecae21924e07a54d1a7a08c (diff) | |
Merge android-4.4@4b8fc9f (v4.4.82) into msm-4.4
* refs/heads/tmp-4b8fc9f
UPSTREAM: locking: avoid passing around 'thread_info' in mutex debugging code
ANDROID: arm64: fix undeclared 'init_thread_info' error
UPSTREAM: kdb: use task_cpu() instead of task_thread_info()->cpu
Linux 4.4.82
net: account for current skb length when deciding about UFO
ipv4: Should use consistent conditional judgement for ip fragment in __ip_append_data and ip_finish_output
mm/mempool: avoid KASAN marking mempool poison checks as use-after-free
KVM: arm/arm64: Handle hva aging while destroying the vm
sparc64: Prevent perf from running during super critical sections
udp: consistently apply ufo or fragmentation
revert "ipv4: Should use consistent conditional judgement for ip fragment in __ip_append_data and ip_finish_output"
revert "net: account for current skb length when deciding about UFO"
packet: fix tp_reserve race in packet_set_ring
net: avoid skb_warn_bad_offload false positives on UFO
tcp: fastopen: tcp_connect() must refresh the route
net: sched: set xt_tgchk_param par.nft_compat as 0 in ipt_init_target
bpf, s390: fix jit branch offset related to ldimm64
net: fix keepalive code vs TCP_FASTOPEN_CONNECT
tcp: avoid setting cwnd to invalid ssthresh after cwnd reduction states
ANDROID: keychord: Fix for a memory leak in keychord.
ANDROID: keychord: Fix races in keychord_write.
Use %zu to print resid (size_t).
ANDROID: keychord: Fix a slab out-of-bounds read.
Linux 4.4.81
workqueue: implicit ordered attribute should be overridable
net: account for current skb length when deciding about UFO
ipv4: Should use consistent conditional judgement for ip fragment in __ip_append_data and ip_finish_output
mm: don't dereference struct page fields of invalid pages
signal: protect SIGNAL_UNKILLABLE from unintentional clearing.
lib/Kconfig.debug: fix frv build failure
mm, slab: make sure that KMALLOC_MAX_SIZE will fit into MAX_ORDER
ARM: 8632/1: ftrace: fix syscall name matching
virtio_blk: fix panic in initialization error path
drm/virtio: fix framebuffer sparse warning
scsi: qla2xxx: Get mutex lock before checking optrom_state
phy state machine: failsafe leave invalid RUNNING state
x86/boot: Add missing declaration of string functions
tg3: Fix race condition in tg3_get_stats64().
net: phy: dp83867: fix irq generation
sh_eth: R8A7740 supports packet shecksumming
wext: handle NULL extra data in iwe_stream_add_point better
sparc64: Measure receiver forward progress to avoid send mondo timeout
xen-netback: correctly schedule rate-limited queues
net: phy: Fix PHY unbind crash
net: phy: Correctly process PHY_HALTED in phy_stop_machine()
net/mlx5: Fix command bad flow on command entry allocation failure
sctp: fix the check for _sctp_walk_params and _sctp_walk_errors
sctp: don't dereference ptr before leaving _sctp_walk_{params, errors}()
dccp: fix a memleak for dccp_feat_init err process
dccp: fix a memleak that dccp_ipv4 doesn't put reqsk properly
dccp: fix a memleak that dccp_ipv6 doesn't put reqsk properly
net: ethernet: nb8800: Handle all 4 RGMII modes identically
ipv6: Don't increase IPSTATS_MIB_FRAGFAILS twice in ip6_fragment()
packet: fix use-after-free in prb_retire_rx_blk_timer_expired()
openvswitch: fix potential out of bound access in parse_ct
mcs7780: Fix initialization when CONFIG_VMAP_STACK is enabled
rtnetlink: allocate more memory for dev_set_mac_address()
ipv4: initialize fib_trie prior to register_netdev_notifier call.
ipv6: avoid overflow of offset in ip6_find_1stfragopt
net: Zero terminate ifr_name in dev_ifname().
ipv4: ipv6: initialize treq->txhash in cookie_v[46]_check()
saa7164: fix double fetch PCIe access condition
drm: rcar-du: fix backport bug
f2fs: sanity check checkpoint segno and blkoff
media: lirc: LIRC_GET_REC_RESOLUTION should return microseconds
mm, mprotect: flush TLB if potentially racing with a parallel reclaim leaving stale TLB entries
iser-target: Avoid isert_conn->cm_id dereference in isert_login_recv_done
iscsi-target: Fix delayed logout processing greater than SECONDS_FOR_LOGOUT_COMP
iscsi-target: Fix initial login PDU asynchronous socket close OOPs
iscsi-target: Fix early sk_data_ready LOGIN_FLAGS_READY race
iscsi-target: Always wait for kthread_should_stop() before kthread exit
target: Avoid mappedlun symlink creation during lun shutdown
media: platform: davinci: return -EINVAL for VPFE_CMD_S_CCDC_RAW_PARAMS ioctl
ARM: dts: armada-38x: Fix irq type for pca955
ext4: fix overflow caused by missing cast in ext4_resize_fs()
ext4: fix SEEK_HOLE/SEEK_DATA for blocksize < pagesize
mm/page_alloc: Remove kernel address exposure in free_reserved_area()
KVM: async_pf: make rcu irq exit if not triggered from idle task
ASoC: do not close shared backend dailink
ALSA: hda - Fix speaker output from VAIO VPCL14M1R
workqueue: restore WQ_UNBOUND/max_active==1 to be ordered
libata: array underflow in ata_find_dev()
ANDROID: binder: don't queue async transactions to thread.
ANDROID: binder: don't enqueue death notifications to thread todo.
ANDROID: binder: call poll_wait() unconditionally.
android: configs: move quota-related configs to recommended
BACKPORT: arm64: split thread_info from task stack
UPSTREAM: arm64: assembler: introduce ldr_this_cpu
UPSTREAM: arm64: make cpu number a percpu variable
UPSTREAM: arm64: smp: prepare for smp_processor_id() rework
BACKPORT: arm64: move sp_el0 and tpidr_el1 into cpu_suspend_ctx
UPSTREAM: arm64: prep stack walkers for THREAD_INFO_IN_TASK
UPSTREAM: arm64: unexport walk_stackframe
UPSTREAM: arm64: traps: simplify die() and __die()
UPSTREAM: arm64: factor out current_stack_pointer
BACKPORT: arm64: asm-offsets: remove unused definitions
UPSTREAM: arm64: thread_info remove stale items
UPSTREAM: thread_info: include <current.h> for THREAD_INFO_IN_TASK
UPSTREAM: thread_info: factor out restart_block
UPSTREAM: kthread: Pin the stack via try_get_task_stack()/put_task_stack() in to_live_kthread() function
UPSTREAM: sched/core: Add try_get_task_stack() and put_task_stack()
UPSTREAM: sched/core: Allow putting thread_info into task_struct
UPSTREAM: printk: when dumping regs, show the stack, not thread_info
UPSTREAM: fix up initial thread stack pointer vs thread_info confusion
UPSTREAM: Clarify naming of thread info/stack allocators
ANDROID: sdcardfs: override credential for ioctl to lower fs
Conflicts:
android/configs/android-base.cfg
arch/arm64/Kconfig
arch/arm64/include/asm/suspend.h
arch/arm64/kernel/head.S
arch/arm64/kernel/smp.c
arch/arm64/kernel/suspend.c
arch/arm64/kernel/traps.c
arch/arm64/mm/proc.S
kernel/fork.c
sound/soc/soc-pcm.c
Change-Id: I273e216c94899a838bbd208391c6cbe20b2bf683
Signed-off-by: Blagovest Kolenichev <bkolenichev@codeaurora.org>
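Two of the entries above ("sched/core: Add try_get_task_stack() and put_task_stack()" and the kthread stack-pinning patch) introduce stack-lifetime helpers that the kernel/kthread.c hunks below rely on. A minimal, hedged sketch of the intended calling pattern follows; `inspect_kthread_stack()` is a hypothetical helper, and in this 4.4 backport the helpers may still compile to no-ops, but the pattern documents the lifetime rule they enforce once stacks are refcounted:

```c
#include <linux/sched.h>	/* try_get_task_stack()/put_task_stack() in 4.4 */

/* Hypothetical helper showing the pin/unpin pattern adopted by kthread.c. */
static void inspect_kthread_stack(struct task_struct *k)
{
	/*
	 * Pin the stack before touching anything that lives on it; the
	 * task may be exiting and its stack freed concurrently.
	 */
	if (!try_get_task_stack(k))
		return;		/* stack already gone, nothing to inspect */

	/* ... safely dereference data on k's stack here ... */

	put_task_stack(k);	/* drop the reference taken above */
}
```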
Diffstat (limited to 'kernel')
| -rw-r--r-- | kernel/fork.c | 54 |
| -rw-r--r-- | kernel/kthread.c | 8 |
| -rw-r--r-- | kernel/locking/mutex-debug.c | 12 |
| -rw-r--r-- | kernel/locking/mutex-debug.h | 4 |
| -rw-r--r-- | kernel/locking/mutex.c | 6 |
| -rw-r--r-- | kernel/locking/mutex.h | 2 |
| -rw-r--r-- | kernel/printk/printk.c | 5 |
| -rw-r--r-- | kernel/sched/sched.h | 4 |
| -rw-r--r-- | kernel/signal.c | 4 |
| -rw-r--r-- | kernel/workqueue.c | 23 |
10 files changed, 73 insertions, 49 deletions
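Most of the kernel/ churn in the diff below comes from the THREAD_INFO_IN_TASK series listed in the merge message: thread_info moves off the base of the kernel stack and into task_struct, so generic code stops passing `struct thread_info *` around (the mutex-debug hunks), `tsk->stack` becomes anonymous memory (the fork.c hunks), and the CPU number gains a home in the task itself (the sched.h hunk). The sketch below is a simplified, hypothetical illustration of the layout change, not the kernel's real definitions, which live in `<linux/sched.h>` and per-arch headers:

```c
/*
 * Simplified sketch of the two layouts; compile with or without
 * -DCONFIG_THREAD_INFO_IN_TASK.  These are NOT the kernel's real
 * definitions.
 */
struct thread_info {
	unsigned long flags;
	int cpu;			/* meaningful only in the on-stack layout */
};

struct task_struct {
#ifdef CONFIG_THREAD_INFO_IN_TASK
	struct thread_info thread_info;	/* embedded in the task itself */
	int cpu;			/* CPU number moves into the task too */
#endif
	void *stack;			/* new world: anonymous stack memory;
					 * old world: thread_info sits at its base */
};

/* Mirrors the __set_task_cpu() hunk in kernel/sched/sched.h below. */
static void sketch_set_task_cpu(struct task_struct *p, int cpu)
{
#ifdef CONFIG_THREAD_INFO_IN_TASK
	p->cpu = cpu;			/* no detour through the stack */
#else
	((struct thread_info *)p->stack)->cpu = cpu;
#endif
}
```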
```diff
diff --git a/kernel/fork.c b/kernel/fork.c
index fef4df444f47..07cd0d68ee02 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -148,18 +148,18 @@ static inline void free_task_struct(struct task_struct *tsk)
 }
 #endif
 
-void __weak arch_release_thread_info(struct thread_info *ti)
+void __weak arch_release_thread_stack(unsigned long *stack)
 {
 }
 
-#ifndef CONFIG_ARCH_THREAD_INFO_ALLOCATOR
+#ifndef CONFIG_ARCH_THREAD_STACK_ALLOCATOR
 
 /*
  * Allocate pages if THREAD_SIZE is >= PAGE_SIZE, otherwise use a
  * kmemcache based allocator.
  */
 # if THREAD_SIZE >= PAGE_SIZE
-static struct thread_info *alloc_thread_info_node(struct task_struct *tsk,
+static unsigned long *alloc_thread_stack_node(struct task_struct *tsk,
 						  int node)
 {
 	struct page *page = alloc_kmem_pages_node(node, THREADINFO_GFP,
@@ -168,30 +168,32 @@ static struct thread_info *alloc_thread_info_node(struct task_struct *tsk,
 	return page ? page_address(page) : NULL;
 }
 
-static inline void free_thread_info(struct thread_info *ti)
+static inline void free_thread_stack(unsigned long *stack)
 {
-	kasan_alloc_pages(virt_to_page(ti), THREAD_SIZE_ORDER);
-	free_kmem_pages((unsigned long)ti, THREAD_SIZE_ORDER);
+	struct page *page = virt_to_page(stack);
+
+	kasan_alloc_pages(page, THREAD_SIZE_ORDER);
+	__free_kmem_pages(page, THREAD_SIZE_ORDER);
 }
 # else
-static struct kmem_cache *thread_info_cache;
+static struct kmem_cache *thread_stack_cache;
 
-static struct thread_info *alloc_thread_info_node(struct task_struct *tsk,
+static struct thread_info *alloc_thread_stack_node(struct task_struct *tsk,
 						  int node)
 {
-	return kmem_cache_alloc_node(thread_info_cache, THREADINFO_GFP, node);
+	return kmem_cache_alloc_node(thread_stack_cache, THREADINFO_GFP, node);
 }
 
-static void free_thread_info(struct thread_info *ti)
+static void free_stack(unsigned long *stack)
 {
-	kmem_cache_free(thread_info_cache, ti);
+	kmem_cache_free(thread_stack_cache, stack);
 }
 
-void thread_info_cache_init(void)
+void thread_stack_cache_init(void)
 {
-	thread_info_cache = kmem_cache_create("thread_info", THREAD_SIZE,
+	thread_stack_cache = kmem_cache_create("thread_stack", THREAD_SIZE,
 					      THREAD_SIZE, 0, NULL);
-	BUG_ON(thread_info_cache == NULL);
+	BUG_ON(thread_stack_cache == NULL);
 }
 # endif
 #endif
@@ -214,9 +216,9 @@ struct kmem_cache *vm_area_cachep;
 /* SLAB cache for mm_struct structures (tsk->mm) */
 static struct kmem_cache *mm_cachep;
 
-static void account_kernel_stack(struct thread_info *ti, int account)
+static void account_kernel_stack(unsigned long *stack, int account)
 {
-	struct zone *zone = page_zone(virt_to_page(ti));
+	struct zone *zone = page_zone(virt_to_page(stack));
 
 	mod_zone_page_state(zone, NR_KERNEL_STACK, account);
 }
@@ -224,8 +226,8 @@ static void account_kernel_stack(struct thread_info *ti, int account)
 void free_task(struct task_struct *tsk)
 {
 	account_kernel_stack(tsk->stack, -1);
-	arch_release_thread_info(tsk->stack);
-	free_thread_info(tsk->stack);
+	arch_release_thread_stack(tsk->stack);
+	free_thread_stack(tsk->stack);
 	rt_mutex_debug_task_free(tsk);
 	ftrace_graph_exit_task(tsk);
 	put_seccomp_filter(tsk);
@@ -336,7 +338,7 @@ void set_task_stack_end_magic(struct task_struct *tsk)
 static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 {
 	struct task_struct *tsk;
-	struct thread_info *ti;
+	unsigned long *stack;
 	int err;
 
 	if (node == NUMA_NO_NODE)
@@ -345,15 +347,15 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 	if (!tsk)
 		return NULL;
 
-	ti = alloc_thread_info_node(tsk, node);
-	if (!ti)
+	stack = alloc_thread_stack_node(tsk, node);
+	if (!stack)
 		goto free_tsk;
 
 	err = arch_dup_task_struct(tsk, orig);
 	if (err)
-		goto free_ti;
+		goto free_stack;
 
-	tsk->stack = ti;
+	tsk->stack = stack;
 #ifdef CONFIG_SECCOMP
 	/*
 	 * We must handle setting up seccomp filters once we're under
@@ -385,12 +387,12 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 	tsk->task_frag.page = NULL;
 	tsk->wake_q.next = NULL;
 
-	account_kernel_stack(ti, 1);
+	account_kernel_stack(stack, 1);
 
 	return tsk;
 
-free_ti:
-	free_thread_info(ti);
+free_stack:
+	free_thread_stack(stack);
 free_tsk:
 	free_task_struct(tsk);
 	return NULL;
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 698b8dec3074..d9b0be5c6a5f 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -65,7 +65,7 @@ static inline struct kthread *to_kthread(struct task_struct *k)
 static struct kthread *to_live_kthread(struct task_struct *k)
 {
 	struct completion *vfork = ACCESS_ONCE(k->vfork_done);
-	if (likely(vfork))
+	if (likely(vfork) && try_get_task_stack(k))
 		return __to_kthread(vfork);
 	return NULL;
 }
@@ -427,8 +427,10 @@ void kthread_unpark(struct task_struct *k)
 {
 	struct kthread *kthread = to_live_kthread(k);
 
-	if (kthread)
+	if (kthread) {
 		__kthread_unpark(k, kthread);
+		put_task_stack(k);
+	}
 }
 EXPORT_SYMBOL_GPL(kthread_unpark);
 
@@ -457,6 +459,7 @@ int kthread_park(struct task_struct *k)
 				wait_for_completion(&kthread->parked);
 			}
 		}
+		put_task_stack(k);
 		ret = 0;
 	}
 	return ret;
@@ -492,6 +495,7 @@ int kthread_stop(struct task_struct *k)
 		__kthread_unpark(k, kthread);
 		wake_up_process(k);
 		wait_for_completion(&kthread->exited);
+		put_task_stack(k);
 	}
 	ret = k->exit_code;
 	put_task_struct(k);
diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c
index 3ef3736002d8..9c951fade415 100644
--- a/kernel/locking/mutex-debug.c
+++ b/kernel/locking/mutex-debug.c
@@ -49,21 +49,21 @@ void debug_mutex_free_waiter(struct mutex_waiter *waiter)
 }
 
 void debug_mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
-			    struct thread_info *ti)
+			    struct task_struct *task)
 {
 	SMP_DEBUG_LOCKS_WARN_ON(!spin_is_locked(&lock->wait_lock));
 
 	/* Mark the current thread as blocked on the lock: */
-	ti->task->blocked_on = waiter;
+	task->blocked_on = waiter;
 }
 
 void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
-			 struct thread_info *ti)
+			 struct task_struct *task)
 {
 	DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list));
-	DEBUG_LOCKS_WARN_ON(waiter->task != ti->task);
-	DEBUG_LOCKS_WARN_ON(ti->task->blocked_on != waiter);
-	ti->task->blocked_on = NULL;
+	DEBUG_LOCKS_WARN_ON(waiter->task != task);
+	DEBUG_LOCKS_WARN_ON(task->blocked_on != waiter);
+	task->blocked_on = NULL;
 
 	list_del_init(&waiter->list);
 	waiter->task = NULL;
diff --git a/kernel/locking/mutex-debug.h b/kernel/locking/mutex-debug.h
index 0799fd3e4cfa..d06ae3bb46c5 100644
--- a/kernel/locking/mutex-debug.h
+++ b/kernel/locking/mutex-debug.h
@@ -20,9 +20,9 @@ extern void debug_mutex_wake_waiter(struct mutex *lock,
 extern void debug_mutex_free_waiter(struct mutex_waiter *waiter);
 extern void debug_mutex_add_waiter(struct mutex *lock,
 				   struct mutex_waiter *waiter,
-				   struct thread_info *ti);
+				   struct task_struct *task);
 extern void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
-				struct thread_info *ti);
+				struct task_struct *task);
 extern void debug_mutex_unlock(struct mutex *lock);
 extern void debug_mutex_init(struct mutex *lock, const char *name,
 			     struct lock_class_key *key);
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 14b9cca36b05..bf5277ee11d3 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -549,7 +549,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		goto skip_wait;
 
 	debug_mutex_lock_common(lock, &waiter);
-	debug_mutex_add_waiter(lock, &waiter, task_thread_info(task));
+	debug_mutex_add_waiter(lock, &waiter, task);
 
 	/* add waiting tasks to the end of the waitqueue (FIFO): */
 	list_add_tail(&waiter.list, &lock->wait_list);
@@ -596,7 +596,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	}
 	__set_task_state(task, TASK_RUNNING);
 
-	mutex_remove_waiter(lock, &waiter, current_thread_info());
+	mutex_remove_waiter(lock, &waiter, task);
 	/* set it to 0 if there are no waiters left: */
 	if (likely(list_empty(&lock->wait_list)))
 		atomic_set(&lock->count, 0);
@@ -617,7 +617,7 @@ skip_wait:
 	return 0;
 
 err:
-	mutex_remove_waiter(lock, &waiter, task_thread_info(task));
+	mutex_remove_waiter(lock, &waiter, task);
 	spin_unlock_mutex(&lock->wait_lock, flags);
 	debug_mutex_free_waiter(&waiter);
 	mutex_release(&lock->dep_map, 1, ip);
diff --git a/kernel/locking/mutex.h b/kernel/locking/mutex.h
index 5cda397607f2..a68bae5e852a 100644
--- a/kernel/locking/mutex.h
+++ b/kernel/locking/mutex.h
@@ -13,7 +13,7 @@
 		do { spin_lock(lock); (void)(flags); } while (0)
 #define spin_unlock_mutex(lock, flags) \
 		do { spin_unlock(lock); (void)(flags); } while (0)
-#define mutex_remove_waiter(lock, waiter, ti) \
+#define mutex_remove_waiter(lock, waiter, task) \
 		__list_del((waiter)->list.prev, (waiter)->list.next)
 
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 9fcb521fab0e..dca87791e9c1 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -3180,9 +3180,8 @@ void show_regs_print_info(const char *log_lvl)
 {
 	dump_stack_print_info(log_lvl);
 
-	printk("%stask: %p ti: %p task.ti: %p\n",
-	       log_lvl, current, current_thread_info(),
-	       task_thread_info(current));
+	printk("%stask: %p task.stack: %p\n",
+	       log_lvl, current, task_stack_page(current));
 }
 
 #endif
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 67b7da81f8a2..33bf0c07e757 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1767,7 +1767,11 @@ static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
 	 * per-task data have been completed by this moment.
 	 */
 	smp_wmb();
+#ifdef CONFIG_THREAD_INFO_IN_TASK
+	p->cpu = cpu;
+#else
 	task_thread_info(p)->cpu = cpu;
+#endif
 	p->wake_cpu = cpu;
 #endif
 }
diff --git a/kernel/signal.c b/kernel/signal.c
index b92a047ddc82..5d50ea899b6d 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -346,7 +346,7 @@ static bool task_participate_group_stop(struct task_struct *task)
 	 * fresh group stop. Read comment in do_signal_stop() for details.
 	 */
 	if (!sig->group_stop_count && !(sig->flags & SIGNAL_STOP_STOPPED)) {
-		sig->flags = SIGNAL_STOP_STOPPED;
+		signal_set_stop_flags(sig, SIGNAL_STOP_STOPPED);
 		return true;
 	}
 	return false;
@@ -845,7 +845,7 @@ static bool prepare_signal(int sig, struct task_struct *p, bool force)
 			 * will take ->siglock, notice SIGNAL_CLD_MASK, and
 			 * notify its parent. See get_signal_to_deliver().
 			 */
-			signal->flags = why | SIGNAL_STOP_CONTINUED;
+			signal_set_stop_flags(signal, why | SIGNAL_STOP_CONTINUED);
 			signal->group_stop_count = 0;
 			signal->group_exit_code = 0;
 		}
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 73c018d7df00..80b5dbfd187d 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3669,8 +3669,12 @@ static int apply_workqueue_attrs_locked(struct workqueue_struct *wq,
 		return -EINVAL;
 
 	/* creating multiple pwqs breaks ordering guarantee */
-	if (WARN_ON((wq->flags & __WQ_ORDERED) && !list_empty(&wq->pwqs)))
-		return -EINVAL;
+	if (!list_empty(&wq->pwqs)) {
+		if (WARN_ON(wq->flags & __WQ_ORDERED_EXPLICIT))
+			return -EINVAL;
+
+		wq->flags &= ~__WQ_ORDERED;
+	}
 
 	ctx = apply_wqattrs_prepare(wq, attrs);
@@ -3856,6 +3860,16 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	struct workqueue_struct *wq;
 	struct pool_workqueue *pwq;
 
+	/*
+	 * Unbound && max_active == 1 used to imply ordered, which is no
+	 * longer the case on NUMA machines due to per-node pools. While
+	 * alloc_ordered_workqueue() is the right way to create an ordered
+	 * workqueue, keep the previous behavior to avoid subtle breakages
+	 * on NUMA.
+	 */
+	if ((flags & WQ_UNBOUND) && max_active == 1)
+		flags |= __WQ_ORDERED;
+
 	/* see the comment above the definition of WQ_POWER_EFFICIENT */
 	if ((flags & WQ_POWER_EFFICIENT) && wq_power_efficient)
 		flags |= WQ_UNBOUND;
@@ -4044,13 +4058,14 @@ void workqueue_set_max_active(struct workqueue_struct *wq, int max_active)
 	struct pool_workqueue *pwq;
 
 	/* disallow meddling with max_active for ordered workqueues */
-	if (WARN_ON(wq->flags & __WQ_ORDERED))
+	if (WARN_ON(wq->flags & __WQ_ORDERED_EXPLICIT))
 		return;
 
 	max_active = wq_clamp_max_active(max_active, wq->flags, wq->name);
 
 	mutex_lock(&wq->mutex);
 
+	wq->flags &= ~__WQ_ORDERED;
 	wq->saved_max_active = max_active;
 
 	for_each_pwq(pwq, wq)
@@ -5178,7 +5193,7 @@ int workqueue_sysfs_register(struct workqueue_struct *wq)
 	 * attributes breaks ordering guarantee. Disallow exposing ordered
 	 * workqueues.
 	 */
-	if (WARN_ON(wq->flags & __WQ_ORDERED))
+	if (WARN_ON(wq->flags & __WQ_ORDERED_EXPLICIT))
 		return -EINVAL;
 
 	wq->wq_dev = wq_dev = kzalloc(sizeof(*wq_dev), GFP_KERNEL);
```
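The workqueue.c hunks above ("workqueue: implicit ordered attribute should be overridable", together with its 4.4.81 predecessor) split ordered workqueues into two classes: implicitly ordered ones (WQ_UNBOUND with max_active == 1, tagged __WQ_ORDERED for NUMA compatibility) and explicitly ordered ones created with alloc_ordered_workqueue(), which also carry __WQ_ORDERED_EXPLICIT. Only the implicit kind may later have its ordering relaxed. A hedged usage sketch from a module's point of view (queue names are made up; error handling trimmed):

```c
#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *implicit_wq, *explicit_wq;

static int sketch_init(void)
{
	/*
	 * Implicitly ordered: __alloc_workqueue_key() now tags
	 * WQ_UNBOUND + max_active == 1 with __WQ_ORDERED.
	 */
	implicit_wq = alloc_workqueue("sketch_implicit", WQ_UNBOUND, 1);

	/*
	 * Explicitly ordered: alloc_ordered_workqueue() additionally sets
	 * __WQ_ORDERED_EXPLICIT, so ordering can never be overridden.
	 */
	explicit_wq = alloc_ordered_workqueue("sketch_explicit", 0);

	if (!implicit_wq || !explicit_wq)
		return -ENOMEM;

	/* Allowed after this merge: clears __WQ_ORDERED, raises concurrency. */
	workqueue_set_max_active(implicit_wq, 4);

	/* Rejected with a WARN_ON(): explicit ordering is sticky. */
	workqueue_set_max_active(explicit_wq, 4);

	return 0;
}
```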
