* powerpc/perf: Rearrange setting of ldbar for thread-imc (Anju T Sudhakar, 2019-05-02, 1 file, -11/+17)

  LDBAR holds the memory address allocated for each cpu. For thread-imc,
  the mode bit (i.e. bit 1) of LDBAR is set to accumulation. Currently,
  LDBAR is loaded with the per-cpu memory address, with the mode set to
  accumulation, at boot time. To enable trace-imc, the mode bit of LDBAR
  should be set to 'trace'. So, to accommodate the trace mode of IMC,
  reposition the setting of LDBAR for thread-imc to
  thread_imc_event_add(), and reset LDBAR in thread_imc_event_del().

  Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
  Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

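  A rough sketch of the resulting shape; names such as thread_imc_mem,
  THREAD_IMC_LDBAR_MASK and THREAD_IMC_ENABLE are illustrative, not
  necessarily the exact identifiers in imc-pmu.c:

    /* Sketch: program LDBAR only while the event is scheduled in. */
    static int thread_imc_event_add(struct perf_event *event, int flags)
    {
            u64 ldbar_value;
            /* per-cpu counter buffer allocated at boot */
            u64 *local_mem = per_cpu(thread_imc_mem, smp_processor_id());

            /* buffer address bits plus the accumulation-mode enable bit */
            ldbar_value = ((u64)local_mem & THREAD_IMC_LDBAR_MASK) |
                          THREAD_IMC_ENABLE;
            mtspr(SPRN_LDBAR, ldbar_value);
            return 0;
    }

    static void thread_imc_event_del(struct perf_event *event, int flags)
    {
            /* reset LDBAR so trace mode can later reprogram the mode bit */
            mtspr(SPRN_LDBAR, 0);
    }
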
* powerpc/include: Add data structures and macros for IMC trace modeAnju T Sudhakar (Anju T Sudhakar, 2019-05-02, 2 files, -0/+40)

  Add the macros needed for IMC (In-Memory Collection Counters)
  trace-mode and the data structure to hold the trace-imc record data.
  Also add the new type OPAL_IMC_COUNTERS_TRACE in 'opal-api.h', since
  there is a new switch case added in the OPAL calls for IMC.

  Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
  Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/perf: Fix loop exit condition in nest_imc_event_init (Anju T Sudhakar, 2019-05-02, 2 files, -2/+2)

  The data structure (i.e. struct imc_mem_info) that holds the memory
  address information for nest imc units is allocated based on the
  number of nodes in the system. nest_imc_event_init() traverses this
  struct array to calculate the memory base address for the event-cpu.
  If we fail to find a match for the event cpu's chip-id in the
  imc_mem_info struct array, then the do-while loop will iterate until
  we crash. Fix this by changing the loop exit condition to be based on
  the number of non-zero vbase elements in the array, since the
  allocation is done for nr_chips + 1 entries.

  Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
  Fixes: 885dcd709ba91 ("powerpc/perf: Add nest IMC PMU support")
  Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
  Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

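  The shape of the corrected walk, sketched inside nest_imc_event_init()
  (field names follow struct imc_mem_info as described above; the zeroed
  sentinel entry comes from the nr_chips + 1 allocation):

    /* Sketch: stop at the zeroed sentinel instead of running off the end. */
    struct imc_mem_info *pcni = pmu->mem_info;
    bool found = false;

    do {
            if (pcni->id == chip_id) {
                    found = true;
                    break;
            }
            pcni++;
    } while (pcni->vbase);  /* entry nr_chips has vbase == NULL */

    if (!found)
            return -ENODEV;
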
* powerpc/perf: Return accordingly on invalid chip-id in nest_imc_event_init() (Anju T Sudhakar, 2019-05-02, 1 file, -0/+5)

  Nest hardware counter memory resides in a per-chip reserved memory
  region. During nest_imc_event_init(), the chip-id of the event-cpu is
  used to calculate the base memory address for that cpu. Return a
  proper error condition if the calculated chip_id is invalid.

  Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
  Fixes: 885dcd709ba91 ("powerpc/perf: Add nest IMC PMU support")
  Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
  Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

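  Presumably something along these lines, with the exact errno being
  illustrative (cpu_to_chip_id() returns a negative value when the chip
  id cannot be determined):

    /* Sketch: bail out early rather than indexing with a bogus chip id. */
    chip_id = cpu_to_chip_id(event->cpu);
    if (chip_id < 0)
            return -ENODEV;
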
* powerpc/perf: Remove PM_BR_CMPL_ALT from power9 event list (Madhavan Srinivasan, 2019-05-02, 1 file, -2/+0)

  PM_BR_CMPL_ALT event is not supported, remove it from the power9
  event list.

  Fixes: 24bedcb7c811 ("powerpc/perf: Fix branch event code for power9")
  Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/perf: Add generic compat mode pmu driver (Madhavan Srinivasan, 2019-05-02, 4 files, -2/+238)

  Most of the POWER processor generation performance monitoring unit
  (PMU) driver code is bundled in the kernel, and one of those drivers
  is enabled/registered based on the oprofile_cpu_type check at boot.
  But things get a little tricky in the case of a "compat" mode boot.

  IBM POWER System Server based processors have a compatibility mode
  feature, which, simply put, is: an Nth generation processor (let's
  say POWER8) will act and appear in a mode consistent with an earlier
  generation (N-1) processor (that is, POWER7). In this "compat" mode
  boot, the kernel modifies the "oprofile_cpu_type" to be the Nth
  generation (POWER8). If the Nth generation pmu driver is bundled
  (POWER8), it gets registered. The key dependency here is having
  distro support for the latest processor's performance monitoring
  support.

  This patch adds a generic "compat-mode" performance monitoring driver
  to be registered in the absence of a powernv platform specific pmu
  driver. The driver supports only "cycles" and "instructions" events:
  0x0001e is used as the event code for "cycles" and 0x00002 as the
  event code for "instructions". A new file, "generic-compat-pmu.c",
  contains the driver specific code, and the base raw event code format
  is modeled on PPMU_ARCH_207S.

  Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
  [mpe: Use SPDX tag for license]
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

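  A sketch of the event mapping such a driver would expose; the event
  codes are the ones named above, while the table shape is modeled on
  the other book3s PMU drivers and the identifiers are illustrative:

    /* Only two architected events are supported in compat mode. */
    #define PM_CYC          0x0001e
    #define PM_INST_CMPL    0x00002

    static int compat_generic_events[] = {
            [PERF_COUNT_HW_CPU_CYCLES]      = PM_CYC,
            [PERF_COUNT_HW_INSTRUCTIONS]    = PM_INST_CMPL,
    };
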
* powerpc/perf: init pmu from core-book3s (Madhavan Srinivasan, 2019-05-02, 9 files, -19/+46)

  Currently the pmu driver file for each ppc64 generation processor has
  an __init call in itself. Refactor the code by moving the __init call
  to core-book3s.c. This also cleans up the compat mode pmu driver
  registration.

  Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
  Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
  [mpe: Use SPDX tag for license]
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/powernv/ioda2: Add __printf format/argument verification (Joe Perches, 2019-05-02, 2 files, -15/+18)

  Fix fallout too.

  Signed-off-by: Joe Perches <joe@perches.com>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* Documentation: powerpc: Expand the DAWR acronym (Joel Stanley, 2019-05-02, 1 file, -4/+4)

  Those of us not drowning in POWER might not know what this means.

  Signed-off-by: Joel Stanley <joel@jms.id.au>
  Acked-by: Michael Neuling <mikey@neuling.org>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/configs: Add (back) MLX5 ethernet support to skiroot_defconfig (Joel Stanley, 2019-05-02, 1 file, -0/+2)

  It turns out that some defconfig changes and kernel config option
  changes meant we accidentally dropped Ethernet support for Mellanox
  CX5 cards.

  Fixes: cbc39809a398 ("powerpc/configs: Update skiroot defconfig")
  Reported-by: Carol L Soto <clsoto@us.ibm.com>
  Suggested-by: Carol L Soto <clsoto@us.ibm.com>
  Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
  Signed-off-by: Joel Stanley <joel@jms.id.au>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/hmi: Fix kernel hang when TB is in error state. (Mahesh Salgaonkar, 2019-05-02, 7 files, -1/+49)

  On TOD/TB errors the timebase register stops/freezes until HMI error
  recovery gets TOD/TB back into a running state. On successful
  recovery, the TB starts running again and udelay(), which relies on
  the TB value, continues to function properly.

  But in the case where HMI fails to recover from TOD/TB errors, the TB
  register stays frozen. With the TB not running, the __delay()
  function keeps looping and never returns. If __delay() is called
  while in the panic path, the system hangs and never reboots after
  panic.

  Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

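  The essence of the fix, as a sketch: a flag (here called tb_invalid,
  assumed to be set by the HMI handler when recovery fails) lets
  __delay() avoid spinning on a timebase that will never advance.

    void __delay(unsigned long loops)
    {
            unsigned long start;

            if (tb_invalid) {
                    /* TB is frozen: returning beats hanging the panic path */
                    cpu_relax();
                    return;
            }

            start = mftb();
            while (mftb() - start < loops)
                    cpu_relax();
    }
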
* powerpc/xmon: add read-only mode (Christopher M. Riedl, 2019-05-02, 2 files, -0/+50)

  Operations which write to memory and special purpose registers should
  be restricted on systems with integrity guarantees (such as Secure
  Boot) and, optionally, to avoid self-destructive behaviors.

  Add a config option, XMON_DEFAULT_RO_MODE, to set default xmon
  behavior. The kernel cmdline options xmon=ro and xmon=rw override
  this default. The following xmon operations are affected:

    memops:
      disable memmove
      disable memset
      disable memzcan
    memex:
      no-op'd mwrite
    super_regs:
      no-op'd write_spr
    bpt_cmds:
      disable
    proc_call:
      disable

  Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf>
  Reviewed-by: Oliver O'Halloran <oohall@gmail.com>
  Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

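  A sketch of the mode plumbing; the real cmdline parser also has to
  keep handling the existing xmon=on/off/early values, which is elided
  here:

    /* default comes from the new Kconfig option... */
    static bool xmon_is_ro = IS_ENABLED(CONFIG_XMON_DEFAULT_RO_MODE);

    /* ...and the cmdline can override it either way */
    static int __init early_parse_xmon(char *p)
    {
            if (strncmp(p, "ro", 2) == 0)
                    xmon_is_ro = true;
            else if (strncmp(p, "rw", 2) == 0)
                    xmon_is_ro = false;
            return 0;
    }
    early_param("xmon", early_parse_xmon);
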
* powerpc/boot: Fix missing check of lseek() return value (Bo YU, 2019-05-02, 1 file, -1/+5)

  This is detected by Coverity scan: CID: 1440481

  Signed-off-by: Bo YU <tsu.yubo@gmail.com>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

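  The fix presumably follows the usual pattern for the boot wrapper's
  host tools (the fd and offset variables are illustrative):

    /* Sketch: abort instead of silently writing at the wrong offset. */
    if (lseek(fd, offset, SEEK_SET) < 0) {
            perror("lseek");
            exit(1);
    }
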
* powerpc/entry: Remove unneeded need_resched() loop (Valentin Schneider, 2019-05-02, 2 files, -11/+2)

  Since the enabling and disabling of IRQs within
  preempt_schedule_irq() is contained in a need_resched() loop, we
  don't need the outer arch code loop.

  Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
  [mpe: Rebase since CURRENT_THREAD_INFO() removal]
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/dts/fsl: add crypto node alias for B4 (Horia Geantă, 2019-05-02, 1 file, -0/+1)

  The crypto node alias is needed by U-Boot to identify the node and
  perform fix-ups, like adding the "fsl,sec-era" property.

  Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/prom_init: get rid of PROM_SCRATCH_SIZE (Christophe Leroy, 2019-05-02, 1 file, -11/+9)

  PROM_SCRATCH_SIZE is the same as sizeof(prom_scratch).

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/security: Show powerpc_security_features in debugfs (Michael Ellerman, 2019-05-02, 1 file, -0/+8)

  This can be helpful for debugging problems with the security feature
  flags, especially on guests where the flags come from the hypervisor
  via an hcall and so can't be observed in the device tree.

  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  Reviewed-by: Joel Stanley <joel@jms.id.au>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

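  A minimal sketch of such an export, assuming powerpc_security_features
  is a u64 bitmask and that the file hangs off the powerpc debugfs root:

    static int __init security_feature_debugfs_init(void)
    {
            /* read-only hex file under the powerpc debugfs directory */
            debugfs_create_x64("security_features", 0400,
                               powerpc_debugfs_root,
                               &powerpc_security_features);
            return 0;
    }
    device_initcall(security_feature_debugfs_init);
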
* powerpc/mm: Warn if W+X pages found on boot (Russell Currey, 2019-05-02, 5 files, -1/+73)

  Implement code to walk all pages and warn if any are found to be both
  writable and executable. This depends on STRICT_KERNEL_RWX being
  enabled, and is behind the DEBUG_WX config option. It only runs at
  boot and has no runtime performance implications.

  Very heavily influenced (and in some cases copied verbatim) from the
  ARM64 code written by Laura Abbott (thanks!), since our ptdump
  infrastructure is similar.

  Signed-off-by: Russell Currey <ruscur@russell.cc>
  [mpe: Fixup build error when disabled]
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

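  The core of the check might look roughly like this inside the ptdump
  walker; flag handling differs per MMU family, so this is only a
  sketch, with pg_state/current_flags assumed to follow the existing
  ptdump code:

    static void note_prot_wx(struct pg_state *st, unsigned long addr)
    {
            pte_t pte = __pte(st->current_flags);

            if (!IS_ENABLED(CONFIG_DEBUG_WX) || !st->check_wx)
                    return;

            /* a mapping both writable and executable is the bug */
            if (pte_write(pte) && pte_exec(pte)) {
                    WARN_ONCE(1, "powerpc/mm: Found W+X mapping at 0x%lx\n",
                              addr);
                    st->wx_pages += (addr - st->start_address) / PAGE_SIZE;
            }
    }
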
* powerpc/mm/ptdump: Wrap seq_printf() to handle NULL pointers (Russell Currey, 2019-05-02, 1 file, -10/+22)

  Lovingly borrowed from the arch/arm64 ptdump code. This doesn't seem
  to be an issue in practice, but is necessary for my upcoming commit.

  Signed-off-by: Russell Currey <ruscur@russell.cc>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

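  In essence, following the arm64 approach the commit mentions: when the
  walker runs without an open seq_file (as a boot-time check would), m
  is NULL and output is simply dropped.

    static void pt_dump_seq_printf(struct seq_file *m, const char *fmt, ...)
    {
            va_list args;

            if (!m)
                    return; /* boot-time walk: nowhere to print to */

            va_start(args, fmt);
            seq_vprintf(m, fmt, args);
            va_end(args);
    }
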
* powerpc: remove the __kernel_io_end export (Christoph Hellwig, 2019-05-02, 1 file, -1/+0)

  This export was added in this merge window, but without any actual
  user, or justification for a modular user.

  Fixes: a35a3c6f6065 ("powerpc/mm/hash64: Add a variable to track the end of IO mapping")
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* MAINTAINERS: Update cxl/ocxl email address (Andrew Donnellan, 2019-05-02, 1 file, -2/+2)

  Use my @linux.ibm.com email to avoid a layer of redirection.

  Signed-off-by: Andrew Donnellan <ajd@linux.ibm.com>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/64: Don't trace code that runs with the soft irq mask unreconciled (Nicholas Piggin, 2019-05-02, 4 files, -13/+20)

  "Reconciling", in terms of interrupt handling, is to bring the soft
  irq mask state into sync with the hardware after an interrupt causes
  MSR[EE] to be cleared (while the soft mask may be enabled, and hard
  irqs not marked disabled).

  General kernel code should not be called while unreconciled, because
  local_irq_disable(), etc. manipulations can cause surprising irq
  traces, and it's fragile because the soft irq code does not really
  expect to be called in this situation.

  When exiting from an interrupt, MSR[EE] is cleared to prevent races,
  but soft irq state is enabled for the returned-to context, so this is
  now an unreconciled state. restore_math() is called in this state,
  and it can be ftraced, and the ftrace subsystem disables local irqs.

  Mark restore_math() and its callees as notrace. Restore a sanity
  check in the soft irq code that had to be disabled for this case, by
  commit 4da1f79227ad4 ("powerpc/64: Disable irq restore warning for
  now").

  Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/irq: drop __irq_offset_value (Christophe Leroy, 2019-05-02, 1 file, -3/+0)

  This patch drops __irq_offset_value, which has not been used since
  commit 9c4cb8251513 ("powerpc: Remove use of CONFIG_PPC_MERGE"). This
  removes a sparse warning.

  Fixes: 9c4cb8251513 ("powerpc: Remove use of CONFIG_PPC_MERGE")
  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/setup: replace ifdefs by IS_ENABLED() wherever possible (Christophe Leroy, 2019-05-02, 1 file, -21/+18)

  Compared to #ifdefs, IS_ENABLED() provides cleaner code and allows
  compilation failures to be detected regardless of the selected
  options.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

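  The shape of the change, on a made-up CONFIG_FOO:

    /* before: the body is invisible to the compiler when FOO is off */
    #ifdef CONFIG_FOO
            foo_init();
    #endif

    /* after: always parsed and type-checked, dead code compiled out */
    if (IS_ENABLED(CONFIG_FOO))
            foo_init();
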
* powerpc/setup: cleanup the #ifdef CONFIG_TAU block (Christophe Leroy, 2019-05-02, 1 file, -12/+12)

  Use cpu_has_feature() instead of open-coding it. Use IS_ENABLED()
  instead of #ifdef for CONFIG_TAU_AVERAGE.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/setup: cleanup ifdef mess in check_cache_coherency() (Christophe Leroy, 2019-05-02, 1 file, -7/+3)

  Use IS_ENABLED() instead of #ifdefs.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/setup: Remove unnecessary #ifdef CONFIG_ALTIVEC (Christophe Leroy, 2019-05-02, 1 file, -2/+0)

  CPU_FTR_ALTIVEC is only set when CONFIG_ALTIVEC is selected, so the
  ifdef is unnecessary.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/mm: define an empty mm_iommu_init() (Christophe Leroy, 2019-05-02, 2 files, -2/+1)

  To avoid ifdefs, define an empty static inline mm_iommu_init()
  function when CONFIG_SPAPR_TCE_IOMMU is not selected.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

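  The usual header pattern, as applied here (the prototype is assumed to
  match the existing declaration in asm/mmu_context.h):

    #ifdef CONFIG_SPAPR_TCE_IOMMU
    extern void mm_iommu_init(struct mm_struct *mm);
    #else
    static inline void mm_iommu_init(struct mm_struct *mm) { }
    #endif
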
* powerpc/fadump: define an empty fadump_cleanup() (Christophe Leroy, 2019-05-02, 2 files, -2/+1)

  To avoid #ifdefs, define a static inline fadump_cleanup() function
  when CONFIG_FA_DUMP is not selected.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/32: Don't add dummy frames when calling trace_hardirqs_on/off (Christophe Leroy, 2019-05-02, 1 file, -14/+2)

  No need to add dummy frames when calling trace_hardirqs_on or
  trace_hardirqs_off: GCC properly handles empty stacks. In addition,
  powerpc doesn't set CONFIG_FRAME_POINTER, therefore
  __builtin_return_address(1..) returns NULL at all times. So the dummy
  frames are definitely unneeded here.

  In the meantime, avoid reading memory for loading r1 with a value we
  already know.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/32: don't do syscall stuff in transfer_to_handler (Christophe Leroy, 2019-05-02, 1 file, -19/+0)

  As syscalls are now handled via a fast entry path, syscall related
  actions can be removed from the generic transfer_to_handler path.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/32: implement fast entry for syscalls on BOOKE (Christophe Leroy, 2019-05-02, 4 files, -16/+100)

  This patch implements a fast entry for syscalls. Syscalls don't have
  to preserve non-volatile registers except LR, so this entry lets
  volatile registers get clobbered.

  As this entry is dedicated to syscalls it always sets MSR_EE, and
  warns in case MSR_EE was previously off. It also assumes that the
  call is always from user; system calls are unexpected from the
  kernel.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/32: implement fast entry for syscalls on non-BOOKE (Christophe Leroy, 2019-05-02, 5 files, -10/+116)

  This patch implements a fast entry for syscalls. Syscalls don't have
  to preserve non-volatile registers except LR, so this entry lets
  volatile registers get clobbered.

  As this entry is dedicated to syscalls it always sets MSR_EE, and
  warns in case MSR_EE was previously off. It also assumes that the
  call is always from user; system calls are unexpected from the
  kernel.

  The overall series improves the null_syscall selftest by 12.5% on an
  83xx and by 17% on an 8xx.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc: Fix 32-bit handling of MSR_EE on exceptions (Christophe Leroy, 2019-05-02, 1 file, -49/+67)

  [text mostly copied from benh's RFC/WIP]

  ppc32 is still doing something rather gothic and wrong on 32-bit that
  we stopped doing on 64-bit a while ago. We have that thing where some
  handlers "copy" the EE value from the original stack frame into the
  new MSR before transferring to the handler.

  Thus, for a number of exceptions, we enter the handlers with
  interrupts enabled. This is rather fishy: some of the stuff that
  handlers might do early on, such as irq_enter/exit or user_exit,
  context tracking, etc., should be run with interrupts off afaik.
  Generally our handlers know when to re-enable interrupts if needed.

  The problem we were having is that we assumed these interrupts would
  return with interrupts enabled. However that isn't the case.

  Instead, this patch changes things so that we always enter exception
  handlers with interrupts *off*, with the notable exception of
  syscalls, which are special (and get a fast path).

  Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/32: get rid of COPY_EE in exception entry (Christophe Leroy, 2019-05-02, 3 files, -27/+15)

  EXC_XFER_TEMPLATE() is not called with COPY_EE anymore, so we can get
  rid of the copyee parameters and the related COPY_EE and NOCOPY
  macros.

  Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  [split out from benh's RFC patch]
  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/32: Enter exceptions with MSR_EE unset (Christophe Leroy, 2019-05-02, 7 files, -106/+90)

  All exception handlers know when to re-enable interrupts, so it is
  safer to enter all of them with MSR_EE unset, except for syscalls.

  Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  [split out from benh's RFC patch]
  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/32: enter syscall with MSR_EE unconditionally set (Christophe Leroy, 2019-05-02, 7 files, -5/+13)

  Syscalls are expected to be entered with MSR_EE set. Let's make it
  unconditional by forcing MSR_EE on syscalls. This patch adds
  EXC_XFER_SYS for that.

  Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  [split out from benh's RFC patch]
  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/fsl_booke: ensure SPEFloatingPointException() reenables interrupts (Christophe Leroy, 2019-05-02, 1 file, -0/+8)

  SPEFloatingPointException() is the only exception handler which
  'forgets' to re-enable interrupts. This patch makes sure it does.

  Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/40x: Refactor exception entry macros by using head_32.h (Christophe Leroy, 2019-05-02, 2 files, -86/+6)

  Refactor exception entry macros by using the ones defined in
  head_32.h.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/40x: Split and rename NORMAL_EXCEPTION_PROLOG (Christophe Leroy, 2019-05-02, 1 file, -10/+16)

  This patch splits NORMAL_EXCEPTION_PROLOG in the same way as in
  head_8xx.S and head_32.S, and renames it EXCEPTION_PROLOG() as well,
  to match head_32.h.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/40x: add exception frame marker (Christophe Leroy, 2019-05-02, 1 file, -0/+6)

  This patch adds STACK_FRAME_REGS_MARKER in the stack at exception
  entry in order to see interrupts in call traces as below:

    [ 0.013964] Call Trace:
    [ 0.014014] [c0745db0] [c007a9d4] tick_periodic.constprop.5+0xd8/0x104 (unreliable)
    [ 0.014086] [c0745dc0] [c007aa20] tick_handle_periodic+0x20/0x9c
    [ 0.014181] [c0745de0] [c0009cd0] timer_interrupt+0xa0/0x264
    [ 0.014258] [c0745e10] [c000e484] ret_from_except+0x0/0x14
    [ 0.014390] --- interrupt: 901 at console_unlock.part.7+0x3f4/0x528
    [ 0.014390]     LR = console_unlock.part.7+0x3f0/0x528
    [ 0.014455] [c0745ee0] [c0050334] console_unlock.part.7+0x114/0x528 (unreliable)
    [ 0.014542] [c0745f30] [c00524e0] register_console+0x3d8/0x44c
    [ 0.014625] [c0745f60] [c0675aac] cpm_uart_console_init+0x18/0x2c
    [ 0.014709] [c0745f70] [c06614f4] console_init+0x114/0x1cc
    [ 0.014795] [c0745fb0] [c0658b68] start_kernel+0x300/0x3d8
    [ 0.014864] [c0745ff0] [c00022cc] start_here+0x44/0x98

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/40x: Don't use SPRN_SPRG_SCRATCH2 in EXCEPTION_PROLOG (Christophe Leroy, 2019-05-02, 1 file, -12/+9)

  Unlike what the comment says, r1 is not reused by the critical
  exception handler, as it uses a dedicated critirq_ctx stack.
  Decrementing r1 early is therefore unneeded.

  And even if the comment were valid, the code would be buggy anyway,
  as r1 gets some intermediate values that would jeopardise the whole
  process (for instance after mfspr r1,SPRN_SPRG_THREAD).

  Using SPRN_SPRG_SCRATCH2 to save r1 is then not needed; r11 can be
  used instead. This avoids one mtspr and one mfspr, and makes the
  prolog closer to what's done on 6xx and 8xx.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/32: make the 6xx/8xx EXC_XFER_TEMPLATE() similar to the 40x/booke one (Christophe Leroy, 2019-05-02, 1 file, -7/+6)

  The 6xx/8xx EXC_XFER_TEMPLATE() macro adds an i##n symbol which is
  unused and can be removed. The 40x and booke EXC_XFER_TEMPLATE()
  macros take msr from the caller, while the 6xx/8xx version uses only
  MSR_KERNEL as the msr value. This patch modifies the 6xx/8xx version
  to make it similar to the 40x and booke versions.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/32: move LOAD_MSR_KERNEL() into head_32.h and use it (Christophe Leroy, 2019-05-02, 2 files, -9/+15)

  As preparation for using head_32.h for head_40x.S, move
  LOAD_MSR_KERNEL() there and use it to load r10 with the MSR_KERNEL
  value.

  In the meantime, modify it so that it takes into account the size of
  the passed value to determine whether 'li' can be used or 'lis/ori'
  is needed, instead of using the size of MSR_KERNEL. This is done with
  a GAS macro.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/32: Refactor EXCEPTION entry macros for head_8xx.S and head_32.S (Christophe Leroy, 2019-05-02, 3 files, -193/+122)

  EXCEPTION_PROLOG is similar in head_8xx.S and head_32.S.

  This patch creates head_32.h and moves the EXCEPTION_PROLOG macro
  into it. It also converts it from a GCC macro to a GAS macro, in
  order to ease refactoring with the 40x later, since GAS macros allow
  the use of #ifdef/#else/#endif inside them. They also have the
  advantage of not requiring the ugly "; \" at the end of each line.

  This patch also moves the EXCEPTION() and EXC_XFER_XXXX() macros,
  which are also similar, while splitting START_EXCEPTION() out of
  EXCEPTION().

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/mm: print hash info in a helper (Christophe Leroy, 2019-05-02, 4 files, -23/+26)

  Reduce the #ifdef mess by defining a helper to print hash info at
  startup. In the meantime, remove the display of the hash table
  address, to avoid leaking unnecessary information.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/32s: don't try to print hash table address. (Christophe Leroy, 2019-05-02, 1 file, -2/+2)

  Due to %p, "(ptrval)" is printed in lieu of the hash table address.
  Showing the hash table address isn't an operational need, so just
  don't print it.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/32s: drop Hash_end (Christophe Leroy, 2019-05-02, 2 files, -4/+2)

  Hash_end has never been used, drop it.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/32s: map kasan zero shadow with PAGE_READONLY instead of PAGE_KERNEL_RO (Christophe Leroy, 2019-05-02, 1 file, -2/+8)

  For hash32, the zero shadow page gets mapped with PAGE_READONLY
  instead of PAGE_KERNEL_RO, because the PP bits don't provide a RO
  kernel, so PAGE_KERNEL_RO is equivalent to PAGE_KERNEL.

  By using PAGE_READONLY, the page is RO for both kernel and user, but
  this is not a security issue as it contains only zeroes.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

* powerpc/32s: set up an early static hash table for KASAN. (Christophe Leroy, 2019-05-02, 3 files, -26/+68)

  KASAN requires early activation of the hash table, before memblock()
  functions are available. This patch implements an early hash table
  statically defined in __initdata.

  During early boot, a single page table is used.

  For hash32, when doing the final init, one page table is allocated
  for each PGD entry because of the _PAGE_HASHPTE flag which can't be
  common to several virt pages. This is done after memblock becomes
  available but before switching to the final hash table, otherwise
  there are issues with TLB flushing due to the shared entries.

  Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
  Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
| | | | | | | | | | | | | | | | | | | KASAN requires early activation of hash table, before memblock() functions are available. This patch implements an early hash_table statically defined in __initdata. During early boot, a single page table is used. For hash32, when doing the final init, one page table is allocated for each PGD entry because of the _PAGE_HASHPTE flag which can't be common to several virt pages. This is done after memblock get available but before switching to the final hash table, otherwise there are issues with TLB flushing due to the shared entries. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>