Commit message | Author | Age | Files | Lines
* powerpc/mm/hash64: Map all the kernel regions in the same 0xc rangeAneesh Kumar K.V2019-04-2118-109/+172
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch maps the vmalloc, IO and vmemmap regions in the 0xc address range instead of the current 0xd and 0xf ranges. This brings the mapping closer to radix translation mode. With the hash 64K page size each of these regions is 512TB, whereas with the 4K config we are limited by the maximum page table range of 64TB and hence these regions are 16TB in size. The kernel mapping is now: On 4K hash: kernel_region_map_size = 16TB, kernel vmalloc start = 0xc000100000000000, kernel IO start = 0xc000200000000000, kernel vmemmap start = 0xc000300000000000. On 64K hash, 64K radix and 4K radix: kernel_region_map_size = 512TB, kernel vmalloc start = 0xc008000000000000, kernel IO start = 0xc00a000000000000, kernel vmemmap start = 0xc00c000000000000. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
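A rough illustration of the 64K (hash and radix) layout described above, written as constants; the names are placeholders rather than the kernel's exact macros, and the values are taken directly from the commit message:

    /* Illustrative sketch of the 0xc-range layout (64K hash / radix). */
    #define KERN_REGION_SIZE    (512ULL << 40)           /* 512TB per region     */
    #define KERN_VMALLOC_START  0xc008000000000000ULL    /* kernel vmalloc start */
    #define KERN_IO_START       0xc00a000000000000ULL    /* kernel IO start      */
    #define KERN_VMEMMAP_START  0xc00c000000000000ULL    /* kernel vmemmap start */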
* powerpc/mm/hash64: Add a variable to track the end of IO mappingAneesh Kumar K.V2019-04-216-4/+12
| | | | | | | This makes it easy to update the region mapping in a later patch. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm/hash: Reduce hash_mm_context sizeAneesh Kumar K.V2019-04-215-14/+40
| | | | | | | | | | | | | | Allocate the subpage-protection related variables only if we use the feature. This helps reduce the hash-related mm context struct by around 4K. Before the patch: sizeof(struct hash_mm_context) = 8288. After the patch: sizeof(struct hash_mm_context) = 4160. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm: Reduce memory usage for mm_context_t for radixAneesh Kumar K.V2019-04-215-40/+68
| | | | | | | | | | | | | | | Currently, our mm_context_t on book3s64 includes all hash-specific context details like the slice mask and subpage protection details. We can skip allocating these with radix translation, which saves 8K per mm_context. With the patch applied we have sizeof(mm_context_t) = 136 and sizeof(struct hash_mm_context) = 8288. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
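A minimal sketch of the resulting split, with field names assumed for illustration: the hash-only state moves behind a pointer that radix contexts simply never allocate.

    /* Hash-specific state, allocated only when hash translation is in use. */
    struct hash_mm_context {
        unsigned long slb_addr_limit;
        /* slice masks, subpage protection table, ... */
    };

    typedef struct {
        unsigned int id;
        /* Left unallocated (NULL) when running with radix translation. */
        struct hash_mm_context *hash_context;
    } mm_context_t;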
* powerpc/mm: Move slb_addr_limit to early_init_mmuAneesh Kumar K.V2019-04-213-11/+8
| | | | | | | | Avoid an #ifdef in generic code. This also enables us to do the setup specific to the MMU translation mode on book3s64. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm: Add helpers for accessing hash translation related variablesAneesh Kumar K.V2019-04-218-44/+154
| | | | | | | | | We want to switch to allocating them at runtime, only when hash translation is enabled. Add helpers so that both book3s and nohash can be adapted to the upcoming change easily. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
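A sketch of the accessor style this introduces; the helper names are assumptions based on the description and build on the hypothetical struct layout sketched above. Callers use the helpers rather than reaching into mm_context_t directly, so the storage can later move behind the runtime-allocated hash context without touching every caller.

    static inline unsigned long mm_ctx_slb_addr_limit(mm_context_t *ctx)
    {
        return ctx->hash_context->slb_addr_limit;
    }

    static inline void mm_ctx_set_slb_addr_limit(mm_context_t *ctx, unsigned long limit)
    {
        ctx->hash_context->slb_addr_limit = limit;
    }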
* powerpc/mm: Remove PPC_MM_SLICES #ifdef for book3s64Aneesh Kumar K.V2019-04-212-17/+0
| | | | | | | Book3s64 always has PPC_MM_SLICES enabled, so remove the unnecessary #ifdef. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm: Fix build error with FLATMEM book3s64 configAneesh Kumar K.V2019-04-213-15/+17
| | | | | | | | | | | | | | | | The current value of MAX_PHYSMEM_BITS cannot work with 32 bit configs. We used to have MAX_PHYSMEM_BITS not defined without SPARSEMEM and 32 bit configs never expected a value to be set for MAX_PHYSMEM_BITS. Dependent code such as zsmalloc derived the right values based on other fields. Instead of finding a value that works with different configs, use new values only for book3s_64. For 64 bit booke, use the definition of MAX_PHYSMEM_BITS as per commit a7df61a0e2b6 ("[PATCH] ppc64: Increase sparsemem defaults") That change was done in 2005 and hopefully will work with book3e 64. Fixes: 8bc086899816 ("powerpc/mm: Only define MAX_PHYSMEM_BITS in SPARSEMEM configurations") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32s: Implement Kernel Userspace Access ProtectionChristophe Leroy2019-04-216-0/+131
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch implements Kernel Userspace Access Protection for book3s/32. Due to limitations of the processor's page protection capabilities, the protection is only against writing; read protection cannot be achieved using page protection. The previous patch modifies the page protection so that RW user pages are RW for Key 0 and RO for Key 1, and it sets Key 0 for both user and kernel. This patch changes the userspace segment registers so they are set to Ku 0 and Ks 1. When the kernel needs to write to RW pages, the associated segment register is then changed to Ks 0 in order to allow write access by the kernel. In order to avoid having to read all segment registers when locking/unlocking the access, some data is kept in the thread_struct and saved on the stack on exceptions. The field identifies both the first unlocked segment and the first segment following the last unlocked one. When no segment is unlocked, it contains the value 0. As the hash_page() function is not able to easily determine whether a protfault is due to a bad kernel access to userspace, protfaults need to be handled by handle_page_fault when KUAP is set. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> [mpe: Drop allow_read/write_to/from_user() as they're now in kup.h, and adapt allow_user_access() to do nothing when to == NULL] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32s: Prepare Kernel Userspace Access ProtectionChristophe Leroy2019-04-213-14/+16
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch prepares Kernel Userspace Access Protection for book3s/32. Due to limitations of the processor's page protection capabilities, the protection is only against writing; read protection cannot be achieved using page protection. book3s/32 provides the following values for the PP bits: PP00 provides RW for Key 0 and NA for Key 1, PP01 provides RW for Key 0 and RO for Key 1, PP10 provides RW for all, PP11 provides RO for all. Today PP10 is used for RW pages and PP11 for RO pages, and the user segment registers' Kp and Ks are set to 1. This patch modifies the page protection to use PP01 for RW pages and sets the user segment registers to Kp 0 and Ks 0. This will allow setting up userspace write access protection by setting Ks to 1 in the following patch. Kernel space segment registers remain unchanged. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32s: Implement Kernel Userspace Execution Prevention.Christophe Leroy2019-04-217-1/+85
| | | | | | | | | | | | To implement Kernel Userspace Execution Prevention, this patch sets the NX bit on all user segments on kernel entry and clears the NX bit on all user segments on kernel exit. Note that the powerpc 601 doesn't have the NX bit, so KUEP will not work on it; a warning is displayed at startup. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/8xx: Add Kernel Userspace Access ProtectionChristophe Leroy2019-04-215-0/+81
| | | | | | | | | | | | | | | | | | | | | This patch adds Kernel Userspace Access Protection on the 8xx. When a page is RO or RW, it is set RO or RW for Key 0 and NA for Key 1. Up to now, the User group is defined with Key 0 for both User and Supervisor. By changing the group to Key 0 for User and Key 1 for Supervisor, this patch prevents the kernel from being able to access user data. At exception entry, the kernel saves SPRN_MD_AP in the regs struct and reapplies the protection. At exception exit, it restores SPRN_MD_AP with the value saved on exception entry. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> [mpe: Drop allow_read/write_to/from_user() as they're now in kup.h] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/8xx: Add Kernel Userspace Execution PreventionChristophe Leroy2019-04-213-0/+20
| | | | | | | | | | | | | | | | This patch adds Kernel Userspace Execution Prevention on the 8xx. When a page is Executable, it is set Executable for Key 0 and NX for Key 1. Up to now, the User group is defined with Key 0 for both User and Supervisor. By changing the group to Key 0 for User and Key 1 for Supervisor, this patch prevents the Kernel from being able to execute user code. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/8xx: Only define APG0 and APG1Christophe Leroy2019-04-211-6/+6
| | | | | | | | | | | Since the 8xx implements hardware page table walk assistance, the PGD entries always point to a 4k aligned page, so the 2 upper bits of the APG are not clobbered anymore and remain 0. Therefore only APG0 and APG1 are used and need a definition. We set the other APG to the lowest permission level. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32: Prepare for Kernel Userspace Access ProtectionChristophe Leroy2019-04-213-6/+27
| | | | | | | | | | | | This patch adds ASM macros for saving, restoring and checking the KUAP state, and modifies setup_32 to call them on exceptions from the kernel. The macros are defined as empty by default, for when CONFIG_PPC_KUAP is not selected and/or for platforms which don't (yet) handle KUAP. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/32: Remove MSR_PR test when returning from syscallChristophe Leroy2019-04-211-5/+0
| | | | | | | | Syscalls come from userspace only, so we can account time without checking whether we are returning to the kernel or to userspace; it will always be userspace. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm: Detect bad KUAP faultsMichael Ellerman2019-04-213-3/+29
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | When KUAP is enabled we have logic to detect page faults that occur outside of a valid user access region and are blocked by the AMR. What we don't have at the moment is logic to detect a fault *within* a valid user access region, that has been incorrectly blocked by AMR. This is not meant to ever happen, but it can if we incorrectly save/restore the AMR, or if the AMR was overwritten for some other reason. Currently if that happens we assume it's just a regular fault that will be corrected by handling the fault normally, so we just return. But there is nothing the fault handling code can do to fix it, so the fault just happens again and we spin forever, leading to soft lockups. So add some logic to detect that case and WARN() if we ever see it. Arguably it should be a BUG(), but it's more polite to fail the access and let the kernel continue, rather than taking down the box. There should be no data integrity issue with failing the fault rather than BUG'ing, as we're just going to disallow an access that should have been allowed. To make the code a little easier to follow, unroll the condition at the end of bad_kernel_fault() and comment each case, before adding the call to bad_kuap_fault(). Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
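A simplified sketch of the check described above (not the exact kernel code; the function name is illustrative): a fault inside what should have been a valid user-access window, yet still blocked by the AMR, is warned about and failed instead of being retried forever.

    static bool kernel_fault_blocked_by_kuap(struct pt_regs *regs, bool is_write)
    {
        /* Nothing the fault handler can do will fix this, so report it and
         * fail the access rather than spinning on the same fault. */
        if (bad_kuap_fault(regs, is_write)) {
            WARN(1, "Bug: %s fault blocked by AMR!",
                 is_write ? "Write" : "Read");
            return true;
        }
        return false;
    }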
* powerpc/64s: Implement KUAP for Radix MMUMichael Ellerman2019-04-2110-3/+176
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Kernel Userspace Access Prevention utilises a feature of the Radix MMU which disallows read and write access to userspace addresses. By utilising this, the kernel is prevented from accessing user data from outside of trusted paths that perform proper safety checks, such as copy_{to/from}_user() and friends. Userspace access is disabled from early boot and is only enabled when performing an operation like copy_{to/from}_user(). The register that controls this (AMR) does not prevent userspace from accessing itself, so there is no need to save and restore when entering and exiting userspace. When entering the kernel from the kernel we save AMR and if it is not blocking user access (because eg. we faulted doing a user access) we reblock user access for the duration of the exception (ie. the page fault) and then restore the AMR when returning back to the kernel. This feature can be tested by using the lkdtm driver (CONFIG_LKDTM=y) and performing the following: # (echo ACCESS_USERSPACE) > [debugfs]/provoke-crash/DIRECT If enabled, this should send SIGSEGV to the thread. We also add paranoid checking of AMR in switch and syscall return under CONFIG_PPC_KUAP_DEBUG. Co-authored-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
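A simplified sketch of how the AMR gates user access on radix; the real helpers take the user address range and are patched out via MMU feature sections, so only the SPR handling is shown here, with illustrative names.

    /* Key 0 read and write bits set: block both directions of user access. */
    #define AMR_KUAP_BLOCKED  0xc000000000000000UL

    static inline void kuap_allow_user_access(void)
    {
        mtspr(SPRN_AMR, 0);                 /* open a user-access window */
        isync();
    }

    static inline void kuap_prevent_user_access(void)
    {
        mtspr(SPRN_AMR, AMR_KUAP_BLOCKED);  /* close it again */
        isync();
    }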
* powerpc/lib: Refactor __patch_instruction() to use __put_user_asm()Russell Currey2019-04-211-2/+2
| | | | | | | | | | | | | | | | | | | | | | | | | __patch_instruction() is called in early boot, and uses __put_user_size(), which includes the allow/prevent calls to enforce KUAP, which could either be called too early, or in the Radix case, forced to use "early_" versions of functions just to safely handle this one case. __put_user_asm() does not do this, and thus is safe to use both in early boot, and later on since in this case it should only ever be touching kernel memory. __patch_instruction() was previously refactored to use __put_user_size() in order to be able to return -EFAULT, which would allow the kernel to patch instructions in userspace, which should never happen. This has the functional change of causing faults on userspace addresses if KUAP is turned on, which should never happen in practice. A future enhancement could be to double check the patch address is definitely allowed to be tampered with by the kernel. Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
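A sketch of the resulting shape, simplified; the argument list and the cache-flush sequence are illustrative rather than the exact kernel code. The point is that the store goes through __put_user_asm(), so no KUAP allow/prevent calls are involved for what is always a kernel address.

    static int __patch_instruction(unsigned int *patch_addr, unsigned int instr)
    {
        int err = 0;

        __put_user_asm(instr, patch_addr, err, "stw");
        if (err)
            return err;

        /* Make the new instruction visible to instruction fetch. */
        asm volatile("dcbst 0, %0; sync; icbi 0, %0; sync; isync"
                     : : "r" (patch_addr) : "memory");
        return 0;
    }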
* powerpc/mm/radix: Use KUEP API for Radix MMURussell Currey2019-04-212-3/+10
| | | | | | | | | | Execution protection already exists on radix; this just refactors the radix init to provide the KUEP setup function instead. Thus, the only functional change is that it can now be disabled. Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/64: Setup KUP on secondary CPUsRussell Currey2019-04-212-1/+4
| | | | | | | | | | | | | Some platforms (e.g. those using the Radix MMU) need per-CPU initialisation for KUP. Any platform that only wants to do KUP initialisation once globally can just check whether it is running on the boot CPU, or check whether the setup it needs has already been performed. Note that this is only for 64-bit. Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
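An illustrative sketch of that convention for a platform that only needs one-time, global initialisation; the hook signature and structure are assumptions, not any specific platform's code.

    void setup_kuap(bool disabled)
    {
        /* setup_kup() now runs on every CPU; a platform that only needs
         * global setup can simply do nothing on the secondaries. */
        if (smp_processor_id() != boot_cpuid)
            return;

        if (disabled)
            pr_warn("KUAP disabled on the command line\n");
        /* ... one-time initialisation would go here ... */
    }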
* powerpc: Add a framework for Kernel Userspace Access ProtectionChristophe Leroy2019-04-2110-15/+121
| | | | | | | | | | | | | | | | | | | | | | This patch implements a framework for Kernel Userspace Access Protection. Subarches can then provide their own implementation by defining setup_kuap() and allow/prevent_user_access(). Some platforms will need to know the area accessed and whether it is accessed for read, write or both; therefore the source, destination and size are handed over to the two functions. mpe: Rename to allow/prevent rather than unlock/lock, and add read/write wrappers. Drop the 32-bit code for now until we have an implementation for it. Add kuap to pt_regs for 64-bit as well as 32-bit. Don't split strings, use pr_crit_ratelimited(). Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
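A sketch of the read/write wrapper shape implied by that description; the exact prototypes are assumptions. The subarch supplies allow_user_access()/prevent_user_access() taking destination, source and size, and generic code uses directional helpers around them.

    static inline void allow_read_from_user(const void __user *from, unsigned long size)
    {
        allow_user_access(NULL, from, size);
    }

    static inline void allow_write_to_user(void __user *to, unsigned long size)
    {
        allow_user_access(to, NULL, size);
    }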
* powerpc: Add skeleton for Kernel Userspace Execution PreventionChristophe Leroy2019-04-215-6/+34
| | | | | | | | | | | This patch adds a skeleton for Kernel Userspace Execution Prevention. Subarches implementing it then have to define CONFIG_PPC_HAVE_KUEP and provide a setup_kuep() function. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> [mpe: Don't split strings, use pr_crit_ratelimited()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: Add framework for Kernel Userspace ProtectionChristophe Leroy2019-04-214-0/+26
| | | | | | | | | | | | | | | | | | This patch adds a skeleton for Kernel Userspace Protection functionalities like Kernel Userspace Access Protection and Kernel Userspace Execution Prevention. The subsequent implementation of KUAP for radix makes use of an MMU feature in order to patch out assembly when KUAP is disabled or unsupported. This won't work unless there's an entry point for KUP support before the feature magic happens, so for PPC64 setup_kup() is called early in setup. On PPC32, feature_fixup() is done too early to allow the same. Suggested-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/powernv/idle: Restore AMR/UAMOR/AMOR after idleMichael Ellerman2019-04-211-4/+23
| | | | | | | | | | In order to implement KUAP (Kernel Userspace Access Protection) on Power9 we will be using the AMR, and therefore indirectly the UAMOR/AMOR. So save/restore these regs in the idle code. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/powernv/idle: Restore IAMR after idleRussell Currey2019-04-211-0/+20
| | | | | | | | | | | | | | | | | | | | | | | | Without restoring the IAMR after idle, execution prevention on POWER9 with Radix MMU is overwritten and the kernel can freely execute userspace without faulting. This is necessary when returning from any stop state that modifies user state, as well as hypervisor state. To test how this fails without this patch, load the lkdtm driver and do the following: $ echo EXEC_USERSPACE > /sys/kernel/debug/provoke-crash/DIRECT which won't fault, then boot the kernel with powersave=off, where it will fault. Applying this patch will fix this. Fixes: 3b10d0095a1e ("powerpc/mm/radix: Prevent kernel execution of user space") Cc: stable@vger.kernel.org # v4.10+ Signed-off-by: Russell Currey <ruscur@russell.cc> Reviewed-by: Akshay Adiga <akshay.adiga@linux.vnet.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/numa: document topology_updates_enabled, disable by defaultNathan Lynch2019-04-201-4/+10
| | | | | | | | | | Changing the NUMA associations for CPUs and memory at runtime is basically unsupported by the core mm, scheduler etc. We see all manner of crashes, warnings and instability when the pseries code tries to do this. Disable this behavior by default, and document the switch a bit. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/numa: improve control of topology updatesNathan Lynch2019-04-201-6/+12
| | | | | | | | | | | | | | | | | When booted with "topology_updates=no", or when "off" is written to /proc/powerpc/topology_updates, NUMA reassignments are inhibited for PRRN and VPHN events. However, migration and suspend unconditionally re-enable reassignments via start_topology_update(). This is incoherent. Check the topology_updates_enabled flag in start/stop_topology_update() so that callers of those APIs need not be aware of whether reassignments are enabled. This allows the administrative decision on reassignments to remain in force across migrations and suspensions. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/powernv: Squash sparse warnings in opal-call.cAndrew Donnellan2019-04-201-0/+2
| | | | | | | | | | | | | | | sparse complains a lot about opal-call.c: arch/powerpc/platforms/powernv/opal-call.c:128:1: warning: symbol 'opal_invalid_call' was not declared. Should it be static? arch/powerpc/platforms/powernv/opal-call.c:129:1: warning: symbol 'opal_console_write' was not declared. Should it be static? arch/powerpc/platforms/powernv/opal-call.c:130:1: warning: symbol 'opal_console_read' was not declared. Should it be static? Those symbols are forward declared in opal.h, but we can't include that because the function signatures in opal.h are different. So instead, just add an extra forward declaration to the OPAL_CALL macro to shut sparse up. Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
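A simplified sketch of the approach, reduced to two arguments (the real macro takes eight, and opal_call() here stands in for the common dispatch helper): the extra declaration emitted right before the definition is enough to satisfy sparse.

    #define OPAL_CALL(name, opcode)                         \
        int64_t name(int64_t a0, int64_t a1);               \
        int64_t name(int64_t a0, int64_t a1)                \
        {                                                   \
            return opal_call(a0, a1, opcode);               \
        }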
* powerpc/crypto: Use cheaper random numbers for crc-vpmsum self-testGeorge Spelvin2019-04-201-7/+3
| | | | | | | | | | | | | | | This code was filling a 64K buffer from /dev/urandom in order to compute a CRC over (on average half of) it by two different methods, comparing the CRCs, and repeating. This is not a remotely security-critical application, so use the far faster and cheaper prandom_u32() generator. And, while we're at it, only fill as much of the buffer as we plan to use. Signed-off-by: George Spelvin <lkml@sdf.org> Acked-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: Remove duplicate headersJagadeesh Pagadala2019-04-203-3/+0
| | | | | | | | Remove duplicate header inclusions. Signed-off-by: Jagadeesh Pagadala <jagdsh.linux@gmail.com> Reviewed-by: Mukesh Ojha <mojha@codeaurora.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/8xx: Fix possible device node reference leakWen Yang2019-04-201-2/+1
| | | | | | | | | | | | | | | | | | | The call to of_find_compatible_node() returns a node pointer with its refcount incremented, so it must be explicitly decremented after the last usage. irq_domain_add_linear() also calls of_node_get() to increase the refcount, so the irq domain will not be affected when that reference is released. Detected by coccinelle. Fixes: a8db8cf0d894 ("irq_domain: Replace irq_alloc_host() with revmap-specific initializers") Signed-off-by: Wen Yang <wen.yang99@zte.com.cn> Suggested-by: Christophe Leroy <christophe.leroy@c-s.fr> Suggested-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Peng Hao <peng.hao2@zte.com.cn> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
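A generic sketch of the corrected pattern; the compatible string, ops structure and function name are placeholders, not the 8xx code itself.

    static int example_pic_init(void)
    {
        struct device_node *np;
        struct irq_domain *host;

        np = of_find_compatible_node(NULL, NULL, "vendor,example-pic");
        if (!np)
            return -ENODEV;

        /* irq_domain_add_linear() takes its own reference on np, so the one
         * from of_find_compatible_node() can be dropped straight away. */
        host = irq_domain_add_linear(np, 64, &example_pic_host_ops, NULL);
        of_node_put(np);
        if (!host)
            return -ENODEV;

        return 0;
    }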
* powerpc/pseries: hwpoison the pages upon hitting UEGanesh Goudar2019-04-203-1/+85
| | | | | | | | | | | | | Add support to hwpoison the pages upon hitting a machine check exception. This patch queues the address where the UE is hit to a percpu array and schedules work to plumb it into the memory poison infrastructure. Reviewed-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Ganesh Goudar <ganeshgr@linux.ibm.com> [mpe: Combine #ifdefs, drop PPC_BIT8(), and empty inline stub] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/83xx: Add missing of_node_put() after of_device_is_available()Julia Lawall2019-04-201-1/+3
| | | | | | | | | Add an of_node_put() when a tested device node is not available. Fixes: c026c98739c7e ("powerpc/83xx: Do not configure or probe disabled FSL DR USB controllers") Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Reviewed-by: Mukesh Ojha <mojha@codeaurora.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* MAINTAINERS: Update remaining @linux.vnet.ibm.com addressesLukas Bulwahn2019-04-201-3/+3
| | | | | | | | | | | | | | | Paul McKenney attempted to update all email addresses @linux.vnet.ibm.com to @linux.ibm.com in commit 1dfddcdb95c4 ("MAINTAINERS: Update from @linux.vnet.ibm.com to @linux.ibm.com"), but some still remained. We update the remaining email addresses in MAINTAINERS, hopefully finally catching all cases for good. Fixes: 1dfddcdb95c4 ("MAINTAINERS: Update from @linux.vnet.ibm.com to @linux.ibm.com") Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com> Acked-by: Paul E. McKenney <paulmck@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* MAINTAINERS: Remove non-existent VAS fileSukadev Bhattiprolu2019-04-201-2/+1
| | | | | | | | | | | The file arch/powerpc/include/uapi/asm/vas.h was considered but never merged and should be removed from the MAINTAINERS file. While here, add missing email address. Reported-by: Joe Perches <joe@perches.com> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/pseries/pmem: Fix a set but not used valueQian Cai2019-04-201-2/+1
| | | | | | | | | | | | | The commit 4c5d87db4978 ("powerpc/pseries: PAPR persistent memory support") set a local variable "count" in dlpar_hp_pmem() but never use it. arch/powerpc/platforms/pseries/pmem.c: In function 'dlpar_hp_pmem': arch/powerpc/platforms/pseries/pmem.c:109:6: warning: variable 'count' set but not used Signed-off-by: Qian Cai <cai@lca.pw> Reviewed-by: Mukesh Ojha <mojha@codeaurora.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/pseries/iommu: Fix set but not used valuesQian Cai2019-04-201-8/+5
| | | | | | | | | | | | | | | | | | | | | | | | | | The commit b7d6bf4fdd47 ("powerpc/pseries/pci: Remove obsolete SW invalidate") left 2 variables unused. arch/powerpc/platforms/pseries/iommu.c:108:17: warning: variable 'tces' set but not used __be64 *tcep, *tces; ^~~~ arch/powerpc/platforms/pseries/iommu.c:132:17: warning: variable 'tces' set but not used __be64 *tcep, *tces; ^~~~ Also, the commit 68c0449ea16d ("powerpc/pseries/iommu: Use memory@ nodes in max RAM address calculation") set "ranges" in ddw_memory_hotplug_max() but never use it. arch/powerpc/platforms/pseries/iommu.c: In function 'ddw_memory_hotplug_max': arch/powerpc/platforms/pseries/iommu.c:948:7: warning: variable 'ranges' set but not used int ranges, n_mem_addr_cells, n_mem_size_cells, len; ^~~~~~ Signed-off-by: Qian Cai <cai@lca.pw> Reviewed-by: Mukesh Ojha <mojha@codeaurora.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm: Silence unused-but-set-variable warningsQian Cai2019-04-202-2/+4
| | | | | | | | | | | | | | | | | | pte_unmap() compiles away on some powerpc platforms, so silence the warnings below by making it a static inline function. mm/memory.c: In function 'copy_pte_range': mm/memory.c:820:24: warning: variable 'orig_dst_pte' set but not used mm/memory.c:820:9: warning: variable 'orig_src_pte' set but not used mm/madvise.c: In function 'madvise_free_pte_range': mm/madvise.c:318:9: warning: variable 'orig_pte' set but not used mm/swap_state.c: In function 'swap_ra_info': mm/swap_state.c:634:15: warning: variable 'orig_pte' set but not used Suggested-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Qian Cai <cai@lca.pw> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
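The fix amounts to the pattern sketched here: a no-op macro makes its argument vanish at preprocessing time, while an empty static inline keeps the argument "used" as far as the compiler is concerned.

    /* Before (sketch): #define pte_unmap(pte) do { } while (0) */

    /* After: the argument is still used, so -Wunused-but-set-variable stays quiet. */
    static inline void pte_unmap(pte_t *pte) { }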
* powerpc/mm: move warning from resize_hpt_for_hotplug()Laurent Vivier2019-04-204-16/+13
| | | | | | | | | | | | | | | resize_hpt_for_hotplug() reports a warning when it cannot resize the hash page table ("Unable to resize hash page table to target order"), but in some cases it's not a problem and the message can make the user think something has not worked properly. This patch moves the warning to arch_remove_memory() to only report the problem when it is needed. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Laurent Vivier <lvivier@redhat.com> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm/radix: Don't do SLB preload when using the radix MMUAneesh Kumar K.V2019-04-201-1/+2
| | | | | | | Add radix_enabled() check to avoid SLB preload with radix translation. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/configs: Enable CONFIG_USB_XHCI_HCD by defaultThomas Huth2019-04-201-0/+1
| | | | | | | | | | | | | | | Recent versions of QEMU provide a XHCI device by default these days instead of an old-fashioned OHCI device: https://git.qemu.org/?p=qemu.git;a=commitdiff;h=57040d451315320b7d27 So to get the keyboard working in the graphical console there again, we should now include XHCI support in the kernel by default, too. Signed-off-by: Thomas Huth <thuth@redhat.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Acked-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/pseries/mce: Improve array initialization.Mahesh Salgaonkar2019-04-201-26/+26
| | | | | | | | | | | This is a follow-up to the patch that fixed a misleading print for TLB multihit due to a wrongly populated mc_err_types[] array. Convert all the static array initialization to the '[x] = val' style for better readability of the array indexing and to avoid any further confusion. Suggested-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
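An illustrative example of the '[x] = val' designated-initializer style adopted; the index names and strings are placeholders rather than the exact pseries table. The mapping from index to string is explicit, so entries cannot silently shift if a value is added or reordered.

    static const char * const mc_err_types[] = {
        [MCE_ERROR_TYPE_UE]   = "UE",
        [MCE_ERROR_TYPE_SLB]  = "SLB",
        [MCE_ERROR_TYPE_ERAT] = "ERAT",
        [MCE_ERROR_TYPE_TLB]  = "TLB",
    };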
* powerpc/64: Fix booting large kernels with STRICT_KERNEL_RWXRussell Currey2019-04-201-1/+3
| | | | | | | | | | | | | | | With STRICT_KERNEL_RWX enabled anything marked __init is placed at a 16M boundary. This is necessary so that it can be repurposed later with different permissions. However, in kernels with text larger than 16M, this pushes early_setup past 32M, beyond the reach of the branch instruction. Fix this by setting the CTR and branching there instead. Fixes: 1e0fc9d1eb2b ("powerpc/Kconfig: Enable STRICT_KERNEL_RWX for some configs") Signed-off-by: Russell Currey <ruscur@russell.cc> [mpe: Fix it to work on BE by using DOTSYM()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/embedded6xx: Remove unused functions holly_power_off and holly_haltMathieu Malaterre2019-04-201-12/+0
| | | | | | | | | | | Silence the following warnings triggered using W=1: arch/powerpc/platforms/embedded6xx/holly.c:236:6: error: no previous prototype for 'holly_power_off' arch/powerpc/platforms/embedded6xx/holly.c:243:6: error: no previous prototype for 'holly_halt' Signed-off-by: Mathieu Malaterre <malat@debian.org> Reviewed-by: Mukesh Ojha <mojha@codeaurora.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/embedded6xx: Make some functions staticMathieu Malaterre2019-04-201-3/+4
| | | | | | | | | | | | | | | | | | | | | | In commit cb9e4d10c448 ("[POWERPC] Add support for 750CL Holly board") new functions were added. Since most of these functions can be made static, make it so. Both holly_power_off and holly_halt functions were not changed since they are unused, making them static would have triggered the following warning (treated as error): arch/powerpc/platforms/embedded6xx/holly.c:244:13: error: 'holly_halt' defined but not used Silence the following warnings triggered using W=1: arch/powerpc/platforms/embedded6xx/holly.c:47:5: error: no previous prototype for 'holly_exclude_device' arch/powerpc/platforms/embedded6xx/holly.c:190:6: error: no previous prototype for 'holly_show_cpuinfo' arch/powerpc/platforms/embedded6xx/holly.c:196:17: error: no previous prototype for 'holly_restart' Signed-off-by: Mathieu Malaterre <malat@debian.org> Reviewed-by: Mukesh Ojha <mojha@codeaurora.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc: vdso: Make vdso32 installation conditional in vdso_installBen Hutchings2019-04-201-0/+2
| | | | | | | | | | | The 32-bit vDSO is not needed and not normally built for 64-bit little-endian configurations. However, the vdso_install target still builds and installs it. Add the same config condition as is normally used for the build. Fixes: e0d005916994 ("powerpc/vdso: Disable building the 32-bit VDSO ...") Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* powerpc/mm/64: Document the sizes of/sizes mapped by Pxx_INDEX_SIZEMichael Ellerman2019-04-204-16/+18
| | | | | | | | | | | | | | | Add comments describing the size in bytes of the various levels of the page table tree, and the size of the virtual address space mapped by each level, to make it clear what the sizes are without having to also look up other definitions. The code that calculates the sizes actually uses sizeof(pgd_t) etc., so in theory these comments could skew vs the code, but the size of pgd_t etc. is unlikely to change very often. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
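An example of the kind of comment added; the values shown are for a radix 64K page size configuration and are given here purely as an illustration of the style.

    #define RADIX_PTE_INDEX_SIZE  5  /* size: 8B << 5  = 256B, maps 2^5  x 64K   =   2MB */
    #define RADIX_PMD_INDEX_SIZE  9  /* size: 8B << 9  = 4KB,  maps 2^9  x 2MB   =   1GB */
    #define RADIX_PUD_INDEX_SIZE  9  /* size: 8B << 9  = 4KB,  maps 2^9  x 1GB   = 512GB */
    #define RADIX_PGD_INDEX_SIZE 13  /* size: 8B << 13 = 64KB, maps 2^13 x 512GB =   4PB */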
* powerpc/highmem: Change BUG_ON() to WARN_ON()Christophe Leroy2019-04-201-10/+4
| | | | | | | | | | | In arch/powerpc/mm/highmem.c, BUG_ON() is called only when CONFIG_DEBUG_HIGHMEM is selected; this means the BUG_ON() is not vital and can be replaced by a WARN_ON(). At the same time, use IS_ENABLED() instead of #ifdef to clean things up a bit. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
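A minimal sketch of the transformation, using typical kmap-path variable names as an assumption:

    /* Before:
     *   #ifdef CONFIG_DEBUG_HIGHMEM
     *           BUG_ON(!pte_none(*(kmap_pte - idx)));
     *   #endif
     * After: the check compiles away unless the debug option is enabled, and
     * warns rather than killing the box when it fires. */
    WARN_ON(IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !pte_none(*(kmap_pte - idx)));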
* powerpc: Fix defconfig choice logic when cross compilingMichael Ellerman2019-04-201-5/+4
| | | | | | | | | | | | | | | | | | | | | Our logic for choosing the defconfig doesn't work well in some situations. For example, if you're on a ppc64le machine but you specify a non-empty CROSS_COMPILE in order to use a non-default toolchain, then defconfig will give you ppc64_defconfig (big endian): $ make CROSS_COMPILE=~/toolchains/gcc-8/bin/powerpc-linux- defconfig *** Default configuration is based on 'ppc64_defconfig' This is because we assume that CROSS_COMPILE being set means we can't be on a ppc machine, and rather than checking we just default to ppc64_defconfig. We should just ignore CROSS_COMPILE and instead check the machine with uname; if it's one of ppc, ppc64 or ppc64le then use that defconfig, and if it's none of those then fall back to ppc64_defconfig. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>