* crypto: s5p-sss - Use common BIT macro (Krzysztof Kozlowski, 2016-04-25, 1 file, -48/+47)
  The BIT() macro is obvious and well known, so prefer it over a hand-crafted
  equivalent.
  Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: mxc-scc - fix unwinding in mxc_scc_crypto_register() (Dan Carpenter, 2016-04-25, 1 file, -2/+2)
  There are two issues here:
  1) We need to decrement "i", otherwise we unregister something that was not
     successfully registered.
  2) The original code did not unregister the first element in the array,
     where i is zero.
  Fixes: d293b640ebd5 ('crypto: mxc-scc - add basic driver for the MXC SCC')
  Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

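  Illustration (hedged sketch): the usual register-and-unwind pattern the fix
  restores. The array and function names below are illustrative placeholders,
  not the driver's actual symbols.

      #include <linux/crypto.h>
      #include <linux/kernel.h>

      static struct crypto_alg scc_algs[2];	/* placeholder array */

      static int register_scc_algs(void)
      {
              int i, err;

              for (i = 0; i < ARRAY_SIZE(scc_algs); i++) {
                      err = crypto_register_alg(&scc_algs[i]);
                      if (err)
                              goto err_unregister;
              }
              return 0;

      err_unregister:
              /* Pre-decrement: skip the entry that just failed to register,
               * but still reach index 0 before stopping. */
              while (--i >= 0)
                      crypto_unregister_alg(&scc_algs[i]);
              return err;
      }
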
* crypto: mxc-scc - signedness bugs in mxc_scc_ablkcipher_req_init() (Dan Carpenter, 2016-04-25, 1 file, -6/+9)
  ->src_nents and ->dst_nents are unsigned, so they can't be less than zero.
  I fixed this by introducing a temporary "nents" variable.
  Fixes: d293b640ebd5 ('crypto: mxc-scc - add basic driver for the MXC SCC')
  Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

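  Illustration (hedged sketch): keeping the error check in a signed temporary
  before storing into the unsigned field. The context struct and helper name
  are illustrative, not the driver's actual code.

      #include <linux/scatterlist.h>

      struct my_req_ctx { unsigned int src_nents; };	/* illustrative */

      static int count_src_nents(struct my_req_ctx *ctx,
                                 struct scatterlist *src, unsigned int len)
      {
              int nents = sg_nents_for_len(src, len);

              if (nents < 0)		/* error stays visible in the signed temporary */
                      return nents;
              ctx->src_nents = nents;	/* ->src_nents itself remains unsigned */
              return 0;
      }
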
* crypto: talitos - fix ahash algorithms registration (Horia Geantă, 2016-04-25, 1 file, -0/+64)
  Provide hardware state import/export functionality, as mandated by commit
  8996eafdcbad ("crypto: ahash - ensure statesize is non-zero").
  Cc: <stable@vger.kernel.org> # 4.3+
  Reported-by: Jonas Eymann <J.Eymann@gmx.net>
  Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

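  Illustration (hedged sketch): the three pieces the ahash core expects so
  that a request's state can be exported and re-imported. The state layout
  and names are illustrative; the talitos patch exports its own
  driver-specific state rather than a raw request-context copy.

      #include <crypto/internal/hash.h>
      #include <linux/string.h>

      struct my_export_state {	/* illustrative state blob */
              u64 count;
              u8 buf[128];
      };

      static int my_ahash_export(struct ahash_request *areq, void *out)
      {
              memcpy(out, ahash_request_ctx(areq), sizeof(struct my_export_state));
              return 0;
      }

      static int my_ahash_import(struct ahash_request *areq, const void *in)
      {
              memcpy(ahash_request_ctx(areq), in, sizeof(struct my_export_state));
              return 0;
      }

      /* hooked up in the driver's struct ahash_alg, e.g.:
       *	.export		= my_ahash_export,
       *	.import		= my_ahash_import,
       *	.halg.statesize	= sizeof(struct my_export_state),
       */
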
* crypto: ccp - Ensure all dependencies are specified (Gary R Hook, 2016-04-25, 1 file, -0/+1)
  A DMA_ENGINE requires DMADEVICES in Kconfig.
  Signed-off-by: Gary R Hook <gary.hook@amd.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: marvell/cesa - Improving code readability (Romain Perier, 2016-04-20, 1 file, -5/+5)
  When looking for available engines, the variable "engine" is assigned to
  "&cesa->engines[i]" at the beginning of the for loop. Replace subsequent
  occurrences of "&cesa->engines[i]" with "engine" in order to improve
  readability.
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: s5p-sss - Remove useless hash interrupt handler (Krzysztof Kozlowski, 2016-04-20, 2 files, -32/+8)
  Besides the regular feed control interrupt, the driver also requires a hash
  interrupt for older SoCs (samsung,s5pv210-secss). However, after requesting
  it, the interrupt handler doesn't do anything with it, not even clear the
  hash interrupt bit. The driver does not provide hash functions, so it is
  safe to remove the hash-interrupt-related code and to not require the
  interrupt in the Device Tree.
  Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: s5p-sss - Fix use after free of copied input buffer in error path (Krzysztof Kozlowski, 2016-04-20, 1 file, -1/+0)
  The driver makes copies of memory (input or output scatterlists) if they
  are not aligned. In the s5p_aes_crypt_start() error path (on unsuccessful
  initialization of the output scatterlist), if the input scatterlist was not
  aligned, the driver first freed the copied input memory and then unmapped
  it from the device, instead of doing it the other way around (unmap, then
  free).

  This was wrong in two ways:
  1. Freed pages were still mapped to the device.
  2. dma_unmap_sg() iterated over a freed scatterlist structure.

  The call to s5p_free_sg_cpy() in this error path is not needed because the
  copied scatterlists will be freed by s5p_aes_complete().
  Fixes: 9e4a1100a445 ("crypto: s5p-sss - Handle unaligned buffers")
  Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

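  Illustration (hedged sketch): the general teardown rule the commit relies
  on, DMA-unmap first, free the backing pages afterwards. In this driver the
  error-path free is simply dropped, since s5p_aes_complete() frees the
  copies later; the helper below is illustrative, not the driver's code.

      #include <linux/dma-mapping.h>
      #include <linux/gfp.h>
      #include <linux/scatterlist.h>
      #include <linux/slab.h>

      static void release_copied_sg(struct device *dev, struct scatterlist *sg,
                                    unsigned int len)
      {
              /* 1: unmap while the pages are still allocated */
              dma_unmap_sg(dev, sg, 1, DMA_TO_DEVICE);
              /* 2: only then free the pages and the scatterlist itself */
              free_pages((unsigned long)sg_virt(sg), get_order(len));
              kfree(sg);
      }
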
* crypto: ccp - Register the CCP as a DMA resource (Gary R Hook, 2016-04-20, 7 files, -4/+893)
  The CCP has the ability to provide DMA services to the kernel using
  pass-through mode of the device. Register these services as general purpose
  DMA channels.

  Changes since v2:
  - Add a Signed-off-by

  Changes since v1:
  - Allocate memory for a string in ccp_dmaengine_register
  - Ensure register/unregister calls are properly ordered
  - Verified all changed files are listed in the diffstat
  - Undo some superfluous changes
  - Added a cc:

  Signed-off-by: Gary R Hook <gary.hook@amd.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto4xx: integrate ppc4xx-rng into crypto4xx (Christian Lamparter, 2016-04-20, 10 files, -163/+184)
  This patch integrates the ppc4xx-rng driver into the existing crypto4xx
  driver, because the true random number generator is controlled by, and part
  of, the security core.
  Signed-off-by: Christian Lamparter <chunkeey@googlemail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* eCryptfs: Do not allocate hash tfm in NORECLAIM context (Herbert Xu, 2016-04-20, 4 files, -21/+26)
  You cannot allocate crypto tfm objects in NORECLAIM or NOFS contexts. The
  ecryptfs code currently does exactly that for the MD5 tfm. This patch fixes
  it by preallocating the MD5 tfm in a safe context.

  The MD5 tfm is also reentrant, so this patch removes the superfluous
  cs_hash_tfm_mutex.
  Reported-by: Nicolas Boichat <drinkcat@chromium.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

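  Illustration (hedged sketch): preallocating the hash tfm once, in a context
  where GFP_KERNEL allocations are safe (e.g. mount time), instead of on a
  NOFS/NORECLAIM path. Function and variable names are illustrative.

      #include <crypto/hash.h>
      #include <linux/err.h>

      static struct crypto_shash *md5_tfm;	/* allocated once, reused later */

      static int my_init_hash(void)
      {
              md5_tfm = crypto_alloc_shash("md5", 0, 0);	/* GFP_KERNEL context */
              return PTR_ERR_OR_ZERO(md5_tfm);
      }
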
* crypto: qat - fix section mismatch warning (Tadeusz Struk, 2016-04-18, 1 file, -1/+1)
  Fix a section mismatch warning in adf_exit_vf_wq().
  Reported-by: kbuild test robot <fengguang.wu@intel.com>
  Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: qat - interrupts need to be enabled when VFs are disabled (Tadeusz Struk, 2016-04-18, 1 file, -1/+2)
  IRQs need to be enabled when the VFs go down, in case any VF-to-PF
  communication happens.
  Tested-by: Suman Bangalore Sathyanarayana <sumanx.bangalore.sathyanarayana@intel.com>
  Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: qat - check if PF is running (Tadeusz Struk, 2016-04-18, 6 files, -4/+14)
  Before a VF sends a signal to the PF, it should check whether the PF is
  still running.
  Tested-by: Suman Bangalore Sathyanarayana <sumanx.bangalore.sathyanarayana@intel.com>
  Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: qat - move vf2pf_init and vf2pf_exit to common (Tadeusz Struk, 2016-04-18, 6 files, -70/+103)
  The vf2pf_init and vf2pf_exit functions are exactly the same for all VFs,
  so move them to common code and reuse them.
  Tested-by: Suman Bangalore Sathyanarayana <sumanx.bangalore.sathyanarayana@intel.com>
  Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: lzo - get rid of superfluous __GFP_REPEAT (Michal Hocko, 2016-04-15, 1 file, -1/+1)
  __GFP_REPEAT has rather weak semantics and, since its introduction around
  2.6.12, it has been ignored for low order allocations. lzo_init() uses
  __GFP_REPEAT to allocate LZO1X_MEM_COMPRESS (16K). This is an order-3
  allocation request, and __GFP_REPEAT is ignored for this size as well as
  for all <= PAGE_ALLOC_COSTLY requests.
  Cc: Herbert Xu <herbert@gondor.apana.org.au>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: linux-crypto@vger.kernel.org
  Signed-off-by: Michal Hocko <mhocko@suse.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* hwrng: hisi - Add support for Hisilicon SoC RNG (Kefeng Wang, 2016-04-15, 3 files, -0/+140)
  This adds support for the Hisilicon Random Number Generator (RNG) found in
  the Hip04 and Hip05 SoCs.
  Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
  Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

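  Illustration (hedged sketch): the usual shape of a hwrng provider, a struct
  hwrng with a read callback, registered from probe(). The register offset
  and all names are illustrative, not the Hisilicon driver's actual layout.

      #include <linux/hw_random.h>
      #include <linux/io.h>
      #include <linux/string.h>

      static void __iomem *rng_base;	/* mapped in probe() */

      static int my_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
      {
              u32 val = readl(rng_base + 0x10);	/* hypothetical data register */

              memcpy(buf, &val, min(max, sizeof(val)));
              return min(max, sizeof(val));	/* bytes produced */
      }

      static struct hwrng my_rng = {
              .name = "my-rng",
              .read = my_rng_read,
      };

      /* in probe(): devm_hwrng_register(&pdev->dev, &my_rng); */
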
* dt/bindings: Add bindings for hisilicon random number generator (Kefeng Wang, 2016-04-15, 1 file, -0/+12)
  Document the devicetree bindings for the random number generator found on
  Hisilicon Hip04 and Hip05 SoCs.
  Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
  Acked-by: Rob Herring <rob@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: mxc-scc - add basic driver for the MXC SCC (Steffen Trumtrar, 2016-04-15, 3 files, -0/+772)
  According to the Freescale GPL driver code, there are two different
  Security Controller (SCC) versions: SCC and SCC2. The SCC is found on older
  i.MX SoCs, e.g. the i.MX25. This is the version implemented and tested
  here.

  As there is no publicly available documentation for this IP core, all
  information about this unit is gathered from the GPL'ed driver from
  Freescale.
  Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* ARM: i.MX25: add scc module to dtsi (Steffen Trumtrar, 2016-04-15, 1 file, -0/+9)
  Add the Security Controller (SCC) module to the dtsi.
  Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* Documentation: devicetree: add Freescale SCC bindings (Steffen Trumtrar, 2016-04-15, 1 file, -0/+21)
  Add documentation for the Freescale Security Controller (SCC) found on
  i.MX25 SoCs.
  Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
  Acked-by: Rob Herring <robh@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: qat - adf_dev_stop should not be called in atomic context (Tadeusz Struk, 2016-04-15, 3 files, -3/+64)
  VFs call adf_dev_stop() from a PF to VF interrupt bottom half. This causes
  an oops "scheduling while atomic", because it tries to acquire a mutex to
  un-register crypto algorithms. This patch fixes the issue by calling
  adf_dev_stop() asynchronously.

  Changes in v2:
  - change kthread to a work queue

  Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

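  Illustration (hedged sketch): the "defer to process context" pattern the
  fix uses, scheduling the mutex-taking teardown on a workqueue instead of
  calling it from the bottom half. Names other than adf_dev_stop() are
  illustrative, not the qat driver's actual symbols.

      #include <linux/kernel.h>
      #include <linux/slab.h>
      #include <linux/workqueue.h>

      struct adf_accel_dev;				/* opaque here */
      void adf_dev_stop(struct adf_accel_dev *dev);	/* may sleep (takes mutexes) */

      struct stop_ctx {
              struct work_struct work;
              struct adf_accel_dev *accel_dev;
      };

      static void stop_work_fn(struct work_struct *work)
      {
              struct stop_ctx *ctx = container_of(work, struct stop_ctx, work);

              adf_dev_stop(ctx->accel_dev);	/* safe: process context */
              kfree(ctx);
      }

      /* called from the PF-to-VF interrupt bottom half instead of
       * invoking adf_dev_stop() directly */
      static void schedule_stop(struct adf_accel_dev *accel_dev)
      {
              struct stop_ctx *ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC);

              if (!ctx)
                      return;
              ctx->accel_dev = accel_dev;
              INIT_WORK(&ctx->work, stop_work_fn);
              schedule_work(&ctx->work);
      }
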
* crypto: ccp - Fix RT breaking #include <linux/rwlock_types.h> (Mike Galbraith, 2016-04-15, 1 file, -1/+1)
  A direct include of rwlock_types.h breaks RT; use spinlock_types.h instead.
  Fixes: 553d2374db0b ("crypto: ccp - Support for multiple CCPs")
  Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: doc - document correct return value for request allocation (Eric Biggers, 2016-04-15, 5 files, -11/+7)
  Signed-off-by: Eric Biggers <ebiggers3@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* hwrng: exynos - Fix misspelled Samsung address (Krzysztof Kozlowski, 2016-04-05, 1 file, -1/+1)
  Correct smasung.com into samsung.com.
  Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: qat - changed adf_dev_stop to void (Tadeusz Struk, 2016-04-05, 10 files, -47/+17)
  It always returns zero anyway.
  Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: qat - explicitly stop all VFs first (Tadeusz Struk, 2016-04-05, 1 file, -1/+20)
  When stopping devices it is not enough to loop backwards. We need to
  explicitly stop all VFs first.
  Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: drbg - set HMAC key only when altered (Stephan Mueller, 2016-04-05, 1 file, -14/+25)
  The HMAC implementation allows setting the HMAC key independently from the
  hashing operation. Therefore, the key only needs to be set when a new key
  is generated.

  This patch increases the speed of the HMAC DRBG by at least 35% depending
  on the use case.

  The patch is fully CAVS tested.
  Signed-off-by: Stephan Mueller <smueller@chronox.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

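  Illustration (hedged sketch): the general idea of re-programming the HMAC
  key only when it actually changed. The cached-key bookkeeping below is
  illustrative; the drbg patch uses its own internal state tracking.

      #include <crypto/hash.h>
      #include <linux/string.h>

      static int set_key_if_changed(struct crypto_shash *tfm, u8 *cached_key,
                                    const u8 *new_key, unsigned int keylen)
      {
              int ret;

              if (!memcmp(cached_key, new_key, keylen))
                      return 0;		/* key unchanged: skip setkey entirely */

              ret = crypto_shash_setkey(tfm, new_key, keylen);
              if (!ret)
                      memcpy(cached_key, new_key, keylen);
              return ret;
      }
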
* crypto: sun4i-ss - Replace spinlock_bh by spin_lock_irq{save|restore} (Corentin LABBE, 2016-04-05, 1 file, -4/+6)
  The current sun4i-ss driver could generate data corruption when
  ciphering/deciphering. It occurs randomly at the end of the handled data.
  No root cause has been found and the only way to remove the corruption is
  to replace all spin_lock_bh calls with their irq counterparts.
  Fixes: 6298e948215f ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator")
  Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
  Cc: stable <stable@vger.kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

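  Illustration (hedged sketch): the lock conversion described above, shown
  schematically; "ss->slock" stands in for the driver's per-device lock.

      unsigned long flags;

      spin_lock_irqsave(&ss->slock, flags);	/* was: spin_lock_bh(&ss->slock) */
      /* ... program the SS registers / feed the FIFO ... */
      spin_unlock_irqrestore(&ss->slock, flags);	/* was: spin_unlock_bh(...) */
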
* crypto: qat - fix address leaking of RSA public exponent (Tudor Ambarus, 2016-04-05, 1 file, -1/+1)
  Signed-off-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: qat - avoid memory corruption or undefined behaviour (Tudor Ambarus, 2016-04-05, 1 file, -1/+1)
  memcpy()ing to (a null pointer + offset) results in memory corruption or
  undefined behaviour.
  Signed-off-by: Tudor Ambarus <tudor-dan.ambarus@nxp.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: qat - Remove redundant nrbg rings (Ahsan Atta, 2016-04-05, 1 file, -2/+0)
  Remove redundant nrbg rings.
  Signed-off-by: Ahsan Atta <ahsan.atta@intel.com>
  Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: qat - make sure const_tab is 1024 bytes aligned (Tadeusz Struk, 2016-04-05, 1 file, -1/+1)
  The FW requires the const_tab to be 1024-byte aligned.
  Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* lib/mpi: mpi_read_raw_from_sgl(): fix out-of-bounds buffer access (Nicolai Stange, 2016-04-05, 1 file, -3/+0)
  Within the copying loop in mpi_read_raw_from_sgl(), the last input SGE's
  byte count gets artificially extended as follows:

    if (sg_is_last(sg) && (len % BYTES_PER_MPI_LIMB))
        len += BYTES_PER_MPI_LIMB - (len % BYTES_PER_MPI_LIMB);

  Within the following byte copying loop, this causes reads beyond that SGE's
  allocated buffer:

    BUG: KASAN: slab-out-of-bounds in mpi_read_raw_from_sgl+0x331/0x650
                at addr ffff8801e168d4d8
    Read of size 1 by task systemd-udevd/721
    [...]
    Call Trace:
     [<ffffffff818c4d35>] dump_stack+0xbc/0x117
     [<ffffffff818c4c79>] ? _atomic_dec_and_lock+0x169/0x169
     [<ffffffff814af5d1>] ? print_section+0x61/0xb0
     [<ffffffff814b1109>] print_trailer+0x179/0x2c0
     [<ffffffff814bc524>] object_err+0x34/0x40
     [<ffffffff814bfdc7>] kasan_report_error+0x307/0x8c0
     [<ffffffff814bf315>] ? kasan_unpoison_shadow+0x35/0x50
     [<ffffffff814bf38e>] ? kasan_kmalloc+0x5e/0x70
     [<ffffffff814c0ad1>] kasan_report+0x71/0xa0
     [<ffffffff81938171>] ? mpi_read_raw_from_sgl+0x331/0x650
     [<ffffffff814bf1a6>] __asan_load1+0x46/0x50
     [<ffffffff81938171>] mpi_read_raw_from_sgl+0x331/0x650
     [<ffffffff817f41b6>] rsa_verify+0x106/0x260
     [<ffffffff817f40b0>] ? rsa_set_pub_key+0xf0/0xf0
     [<ffffffff818edc79>] ? sg_init_table+0x29/0x50
     [<ffffffff817f4d22>] ? pkcs1pad_sg_set_buf+0xb2/0x2e0
     [<ffffffff817f5b74>] pkcs1pad_verify+0x1f4/0x2b0
     [<ffffffff81831057>] public_key_verify_signature+0x3a7/0x5e0
     [<ffffffff81830cb0>] ? public_key_describe+0x80/0x80
     [<ffffffff817830f0>] ? keyring_search_aux+0x150/0x150
     [<ffffffff818334a4>] ? x509_request_asymmetric_key+0x114/0x370
     [<ffffffff814b83f0>] ? kfree+0x220/0x370
     [<ffffffff818312c2>] public_key_verify_signature_2+0x32/0x50
     [<ffffffff81830b5c>] verify_signature+0x7c/0xb0
     [<ffffffff81835d0c>] pkcs7_validate_trust+0x42c/0x5f0
     [<ffffffff813c391a>] system_verify_data+0xca/0x170
     [<ffffffff813c3850>] ? top_trace_array+0x9b/0x9b
     [<ffffffff81510b29>] ? __vfs_read+0x279/0x3d0
     [<ffffffff8129372f>] mod_verify_sig+0x1ff/0x290
     [...]

  The exact purpose of the len extension isn't clear to me, but due to its
  form, I suspect that it's a leftover somehow accounting for leading zero
  bytes within the most significant output limb.

  Note however that without that len adjustment, the total number of bytes
  ever processed by the inner loop equals nbytes and thus, the last output
  limb gets written at this point. Thus the net effect of the len adjustment
  cited above is just to keep the inner loop running for some more
  iterations, namely < BYTES_PER_MPI_LIMB ones, reading some extra bytes from
  beyond the last SGE's buffer and discarding them afterwards.

  Fix this issue by purging the extension of len beyond the last input SGE's
  buffer length.

  Fixes: 2d4d1eea540b ("lib/mpi: Add mpi sgl helpers")
  Signed-off-by: Nicolai Stange <nicstange@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* lib/mpi: mpi_read_raw_from_sgl(): sanitize meaning of indices (Nicolai Stange, 2016-04-05, 1 file, -6/+3)
  Within the byte reading loop in mpi_read_raw_sgl(), there are two
  housekeeping indices used, z and x.

  At all times, the index z represents the number of output bytes covered by
  the input SGEs for which processing has completed so far. This includes any
  leading zero bytes within the most significant limb.

  The index x changes its meaning after the first outer loop's iteration
  though: while processing the first input SGE, it represents

    "number of leading zero bytes in most significant output limb" +
    "current position within current SGE"

  For the remaining SGEs OTOH, x corresponds just to

    "current position within current SGE"

  After all, it is only the sum of z and x that has any meaning for the
  output buffer and thus, the "number of leading zero bytes in most
  significant output limb" part can be moved away from x into z from the
  beginning, opening up the opportunity for cleaner code.

  Before the outer loop iterating over the SGEs, don't initialize z with
  zero, but with the number of leading zero bytes in the most significant
  output limb. For the inner loop iterating over a single SGE's bytes, get
  rid of the buf_shift offset to x's bounds and let x run from zero to
  sg->length - 1.

  Signed-off-by: Nicolai Stange <nicstange@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* lib/mpi: mpi_read_raw_from_sgl(): fix nbits calculation (Nicolai Stange, 2016-04-05, 1 file, -1/+2)
  The number of bits, nbits, is calculated in mpi_read_raw_from_sgl() as
  follows:

    nbits = nbytes * 8;

  Afterwards, the number of leading zero bits of the first byte get
  subtracted:

    nbits -= count_leading_zeros(*(u8 *)(sg_virt(sgl) + lzeros));

  However, count_leading_zeros() takes an unsigned long and thus, the u8 gets
  promoted to an unsigned long. Thus, the above doesn't subtract the number
  of leading zeros in the most significant nonzero input byte from nbits, but
  the number of leading zeros of the most significant nonzero input byte
  promoted to unsigned long, i.e. BITS_PER_LONG - 8 too many.

  Fix this by subtracting count_leading_zeros(...) - (BITS_PER_LONG - 8) from
  nbits only.

  Fixes: 2d4d1eea540b ("lib/mpi: Add mpi sgl helpers")
  Signed-off-by: Nicolai Stange <nicstange@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

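  Illustration: a stand-alone sketch of the promotion pitfall, using the GCC
  builtin here; the kernel's count_leading_zeros() behaves analogously on an
  unsigned long argument. The helper name is illustrative.

      /* msb must be the most significant *nonzero* input byte */
      static int leading_zero_bits_of_byte(unsigned char msb)
      {
              int lz_long = __builtin_clzl(msb);	/* e.g. 63 for 0x01 on 64-bit */

              /* subtract the bits added by promotion to unsigned long */
              return lz_long - (int)(sizeof(long) * 8 - 8);	/* 7 for 0x01 */
      }
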
* lib/mpi: mpi_read_raw_from_sgl(): purge redundant clearing of nbits (Nicolai Stange, 2016-04-05, 1 file, -2/+0)
  In mpi_read_raw_from_sgl(), unsigned nbits is calculated as follows:

    nbits = nbytes * 8;

  and redundantly cleared later on if nbytes == 0:

    if (nbytes > 0)
      ...
    else
      nbits = 0;

  Purge this redundant clearing for the sake of clarity.

  Signed-off-by: Nicolai Stange <nicstange@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* lib/mpi: mpi_read_raw_from_sgl(): don't include leading zero SGEs in nbytes (Nicolai Stange, 2016-04-05, 1 file, -6/+2)
  At the very beginning of mpi_read_raw_from_sgl(), the leading zeros of the
  input scatterlist are counted:

    lzeros = 0;
    for_each_sg(sgl, sg, ents, i) {
        ...
        if (/* sg contains nonzero bytes */)
            break;

        /* sg contains nothing but zeros here */
        ents--;
        lzeros = 0;
    }

  Later on, the total number of trailing nonzero bytes is calculated by
  subtracting the number of leading zero bytes from the total number of input
  bytes:

    nbytes -= lzeros;

  However, since lzeros gets reset to zero for each completely zero leading
  sg in the loop above, it doesn't include those.

  Besides wasting resources by allocating a too large output buffer, this
  mistake propagates into the calculation of x, the number of leading zeros
  within the most significant output limb:

    x = BYTES_PER_MPI_LIMB - nbytes % BYTES_PER_MPI_LIMB;

  What's more, the low order bytes of the output, equal in number to the
  extra bytes in nbytes, are left uninitialized.

  Fix this by adjusting nbytes for each completely zero leading scatterlist
  entry.

  Fixes: 2d4d1eea540b ("lib/mpi: Add mpi sgl helpers")
  Signed-off-by: Nicolai Stange <nicstange@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* lib/mpi: mpi_read_raw_from_sgl(): replace len argument by nbytes (Nicolai Stange, 2016-04-05, 1 file, -4/+4)
  Currently, the nbytes local variable is calculated from the len argument as
  follows:

    ... mpi_read_raw_from_sgl(..., unsigned int len)
    {
        unsigned nbytes;
        ...
        if (!ents)
            nbytes = 0;
        else
            nbytes = len - lzeros;
        ...
    }

  Given that nbytes is derived from len in a trivial way and that the len
  argument is shadowed by a local len variable in several loops, this is just
  confusing.

  Rename the len argument to nbytes and get rid of the nbytes local variable.
  Do the nbytes calculation in place.

  Signed-off-by: Nicolai Stange <nicstange@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* lib/mpi: mpi_read_buffer(): fix buffer overflow (Nicolai Stange, 2016-04-05, 1 file, -10/+3)
  Currently, mpi_read_buffer() writes full limbs to the output buffer and
  moves memory around to purge leading zero limbs afterwards.

  However, with commit 9cbe21d8f89d ("lib/mpi: only require buffers as big as
  needed for the integer") the caller is only required to provide a buffer
  large enough to hold the result without the leading zeros.

  This might result in a buffer overflow for small MP numbers with leading
  zeros.

  Fix this by copying the result to its final destination within the output
  buffer and not copying the leading zeros at all.

  Fixes: 9cbe21d8f89d ("lib/mpi: only require buffers as big as needed for the integer")
  Signed-off-by: Nicolai Stange <nicstange@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* lib/mpi: mpi_read_buffer(): replace open coded endian conversion (Nicolai Stange, 2016-04-05, 1 file, -15/+12)
  Currently, the endian conversion from CPU order to BE is open coded in
  mpi_read_buffer(). Replace this by the centrally provided cpu_to_be*()
  macros. Copy from the temporary storage on stack to the destination buffer
  by means of memcpy().
  Signed-off-by: Nicolai Stange <nicstange@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

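  Illustration (hedged sketch): the shape of the centralized conversion,
  convert one limb with the helper and memcpy() it out, instead of
  open-coded shift-and-mask byte stores. A 64-bit limb (BYTES_PER_MPI_LIMB
  == 8) is assumed; the helper name is illustrative.

      #include <asm/byteorder.h>
      #include <linux/string.h>
      #include <linux/types.h>

      static u8 *store_limb_be(u8 *p, u64 alimb)
      {
              __be64 tmp = cpu_to_be64(alimb);	/* central helper does the swap */

              memcpy(p, &tmp, sizeof(tmp));
              return p + sizeof(tmp);
      }
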
* lib/mpi: mpi_read_buffer(): optimize skipping of leading zero limbs (Nicolai Stange, 2016-04-05, 1 file, -10/+8)
  Currently, if the number of leading zeros is greater than fits into a
  complete limb, mpi_read_buffer() skips them by iterating over them
  limb-wise.

  Instead of skipping the high order zero limbs within the loop as shown
  above, adjust the copying loop's bounds.
  Signed-off-by: Nicolai Stange <nicstange@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* lib/mpi: mpi_write_sgl(): replace open coded endian conversion (Nicolai Stange, 2016-04-05, 1 file, -16/+11)
  Currently, the endian conversion from CPU order to BE is open coded in
  mpi_write_sgl(). Replace this by the centrally provided cpu_to_be*()
  macros.
  Signed-off-by: Nicolai Stange <nicstange@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* lib/mpi: mpi_write_sgl(): fix out-of-bounds stack access (Nicolai Stange, 2016-04-05, 1 file, -5/+1)
  Within the copying loop in mpi_write_sgl(), we have

    if (lzeros) {
        mpi_limb_t *limb1 = (void *)p - sizeof(alimb);
        mpi_limb_t *limb2 = (void *)p - sizeof(alimb) + lzeros;
        *limb1 = *limb2;
        ...
    }

  where p points past the end of alimb2, which lives on the stack and
  contains the current limb in BE order.

  The purpose of the above is to shift the non-zero bytes of alimb2 to its
  beginning in memory, i.e. to skip its leading zero bytes. However, limb2
  points somewhere into the middle of alimb2 and thus, reading *limb2 pulls
  in lzeros bytes from somewhere. Indeed, KASAN splats:

    BUG: KASAN: stack-out-of-bounds in mpi_write_to_sgl+0x4e3/0x6f0
                at addr ffff8800cb04f601
    Read of size 8 by task systemd-udevd/391
    page:ffffea00032c13c0 count:0 mapcount:0 mapping: (null) index:0x0
    flags: 0x3fff8000000000()
    page dumped because: kasan: bad access detected
    CPU: 3 PID: 391 Comm: systemd-udevd Tainted: G B L 4.5.0-next-20160316+ #12
    [...]
    Call Trace:
     [<ffffffff8194889e>] dump_stack+0xdc/0x15e
     [<ffffffff819487c2>] ? _atomic_dec_and_lock+0xa2/0xa2
     [<ffffffff814892b5>] ? __dump_page+0x185/0x330
     [<ffffffff8150ffd6>] kasan_report_error+0x5e6/0x8b0
     [<ffffffff814724cd>] ? kzfree+0x2d/0x40
     [<ffffffff819c5bce>] ? mpi_free_limb_space+0xe/0x20
     [<ffffffff819c469e>] ? mpi_powm+0x37e/0x16f0
     [<ffffffff815109f1>] kasan_report+0x71/0xa0
     [<ffffffff819c0353>] ? mpi_write_to_sgl+0x4e3/0x6f0
     [<ffffffff8150ed34>] __asan_load8+0x64/0x70
     [<ffffffff819c0353>] mpi_write_to_sgl+0x4e3/0x6f0
     [<ffffffff819bfe70>] ? mpi_set_buffer+0x620/0x620
     [<ffffffff819c0e6f>] ? mpi_cmp+0xbf/0x180
     [<ffffffff8186e282>] rsa_verify+0x202/0x260

  What's more, since lzeros can be anything from 1 to sizeof(mpi_limb_t)-1,
  the above will cause unaligned accesses, which is bad on non-x86 archs.

  Fix the issue by preparing the starting point p for the upcoming copy
  operation instead of shifting the source memory, i.e. alimb2.

  Fixes: 2d4d1eea540b ("lib/mpi: Add mpi sgl helpers")
  Signed-off-by: Nicolai Stange <nicstange@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* lib/mpi: mpi_write_sgl(): purge redundant pointer arithmetic (Nicolai Stange, 2016-04-05, 1 file, -2/+1)
  Within the copying loop in mpi_write_sgl(), we have

    if (lzeros) {
        ...
        p -= lzeros;
        y = lzeros;
    }
    p = p - (sizeof(alimb) - y);

  If lzeros == 0, then y == 0, too. Thus, lzeros gets subtracted and added
  back again to p. Purge this redundancy.

  Signed-off-by: Nicolai Stange <nicstange@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* lib/mpi: mpi_write_sgl(): fix style issue with lzero decrement (Nicolai Stange, 2016-04-05, 1 file, -2/+2)
  Within the copying loop in mpi_write_sgl(), we have

    if (lzeros > 0) {
        ...
        lzeros -= sizeof(alimb);
    }

  However, at this point, lzeros < sizeof(alimb) holds. Make this fact
  explicit by rewriting the above to

    if (lzeros) {
        ...
        lzeros = 0;
    }

  Signed-off-by: Nicolai Stange <nicstange@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* lib/mpi: mpi_write_sgl(): fix skipping of leading zero limbs (Nicolai Stange, 2016-04-05, 1 file, -12/+9)
  Currently, if the number of leading zeros is greater than fits into a
  complete limb, mpi_write_sgl() skips them by iterating over them limb-wise.

  However, it fails to adjust its internal leading zeros tracking variable,
  lzeros, accordingly: it does a

    p -= sizeof(alimb);
    continue;

  which should really have been a

    lzeros -= sizeof(alimb);
    continue;

  Since lzeros never decreases if its initial value >= sizeof(alimb), nothing
  gets copied by mpi_write_sgl() in that case.

  Instead of skipping the high order zero limbs within the loop as shown
  above, fix the issue by adjusting the copying loop's bounds.

  Fixes: 2d4d1eea540b ("lib/mpi: Add mpi sgl helpers")
  Signed-off-by: Nicolai Stange <nicstange@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: s5p-sss - Sort the headers to improve readability (Krzysztof Kozlowski, 2016-04-05, 1 file, -10/+10)
  Sort the headers alphabetically to improve readability and to make it
  easier to spot duplicates.
  Suggested-by: Vladimir Zapolskiy <vz@mleia.com>
  Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
  Acked-by: Vladimir Zapolskiy <vz@mleia.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: s5p-sss - Handle unaligned buffers (Krzysztof Kozlowski, 2016-04-05, 1 file, -12/+138)
  During crypto selftests on Odroid XU3 (Exynos5422) some of the algorithms
  failed because of passing AES-block unaligned source and destination
  buffers:

    alg: skcipher: encryption failed on chunk test 1 for ecb-aes-s5p: ret=22

  Handle such a case by copying the buffers to a new aligned and contiguous
  space.
  Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
  Acked-by: Vladimir Zapolskiy <vz@mleia.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

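  Illustration (hedged sketch): the bounce-buffer idea, copy an unaligned
  scatterlist into one contiguous, AES-block-aligned allocation and build a
  single-entry scatterlist over it. The helper and its parameters are
  illustrative, not the driver's actual functions.

      #include <crypto/aes.h>
      #include <linux/kernel.h>
      #include <linux/scatterlist.h>
      #include <linux/slab.h>

      static int copy_to_aligned(struct scatterlist *src, unsigned int len,
                                 struct scatterlist *bounce_sg, void **bounce_buf)
      {
              *bounce_buf = kmalloc(ALIGN(len, AES_BLOCK_SIZE), GFP_ATOMIC);
              if (!*bounce_buf)
                      return -ENOMEM;

              /* gather the fragments into the contiguous bounce buffer */
              sg_copy_to_buffer(src, sg_nents(src), *bounce_buf, len);
              /* hand the hardware a single, aligned scatterlist entry */
              sg_init_one(bounce_sg, *bounce_buf, ALIGN(len, AES_BLOCK_SIZE));
              return 0;
      }
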
* crypto: s5p-sss - Minor coding cleanups (Krzysztof Kozlowski, 2016-04-05, 1 file, -8/+7)
  Remove unneeded inclusion of delay.h and get rid of indentation from
  labels.
  Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
  Acked-by: Vladimir Zapolskiy <vz@mleia.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>