The fix is similar to what was done for PowerPC32.
It fixes the following error when compiling for PowerPC64:
-- snip --
| ../../../../valgrind-3.13.0/none/tests/ppc64/test_isa_2_06_part2.c: In function 'usage':
| ../../../../valgrind-3.13.0/none/tests/ppc64/test_isa_2_06_part2.c:1778:3: warning: implicit declaration of function 'fprintf' [-Wimplicit-function-declaration]
| fprintf(stderr,
| ^~~~~~~
| ../../../../valgrind-3.13.0/none/tests/ppc64/test_isa_2_06_part2.c:1778:3: warning: incompatible implicit declaration of built-in function 'fprintf'
| ../../../../valgrind-3.13.0/none/tests/ppc64/test_isa_2_06_part2.c:1778:3: note: include '<stdio.h>' or provide a declaration of 'fprintf'
| ../../../../valgrind-3.13.0/none/tests/ppc64/test_isa_2_06_part2.c:1778:11: error: 'stderr' undeclared (first use in this function)
| fprintf(stderr,
| ^~~~~~
-- snip --
Zhixiong Chi [Wed, 11 Apr 2018 08:26:18 +0000 (16:26 +0800)]
valgrind: fix the shared object issue while prelink ptest
If valgrind-ptest is installed, we get prelink errors like the one below
during do_image:
.../usr/sbin/prelink: /usr/lib64/valgrind/ptest/memcheck/tests/wrap7:\
Could not find one of the dependencies: \
.../usr/sbin//prelink-rtld: error \
while loading shared libraries: wrap7so.so: cannot open shared \
object file: No such file or directory
wrap7 needs to link against the shared object in
/usr/lib64/valgrind/ptest/memcheck/tests, but fails to find it there,
so we correct the path for ptest.
toolchain-scripts: preserve host path in environment setup script
The environment setup script generated in the build directory sets the PATH
variable by expanding ${PATH}, which has host paths filtered out. Sourcing
this script and then running runqemu does not work, as it complains that the
host stty (/bin/stty) cannot be found.
To resolve this, the script no longer expands ${PATH} during generation time,
instead it will now source oe-init-build-env to initialize the build
environment so that all host paths will be preserved. Also be sure to prepend
STAGING_BINDIR_TOOLCHAIN to the PATH variable so that the toolchain from the
build directory can be found.
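A minimal sketch of the shape the generated script now takes; OEROOT, BUILDDIR
and TOOLCHAIN_BINDIR below are stand-ins for the values baked in at generation
time, not the literal script contents:

  . "${OEROOT}/oe-init-build-env" "${BUILDDIR}" > /dev/null  # re-initialize the build env; host PATH is preserved
  export PATH="${TOOLCHAIN_BINDIR}:${PATH}"                  # prepend the expanded STAGING_BINDIR_TOOLCHAIN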
lsb/lsbtests: Update package lists to use latest version of binary
Currently the package list points to "lsb-setup-4.1.0-1.noarch.rpm", which is
no longer available on
http://ftp.linuxfoundation.org/pub/lsb/base/released-all/binary/, so
BASE_PACKAGES_LIST is updated to point to the latest available version.
Anuj Mittal [Tue, 16 Oct 2018 02:47:12 +0000 (10:47 +0800)]
perl: skip tests that are not useful
Some tests, like the one that compares the hashes for a list of files
against those stored in a .dat file, don't make sense for downstream
distros packaging perl.
Backport a patch from upstream that allows skipping of these tests at
runtime. Also remove the local patch trying to keep hashes up-to-date
for one of those tests.
Kai Kang [Fri, 25 May 2018 02:48:23 +0000 (10:48 +0800)]
shadow: update ownership and permission of /var/spool/mail
Update shadow to change the ownership of /var/spool/mail from root:root to
root:mail and its permissions from 0755 to 0775, as in most popular
distributions such as Fedora and Debian (Debian also sets the setgid bit,
but we don't need it).
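On an installed image the net effect is equivalent to (hedged illustration):

  chown root:mail /var/spool/mail
  chmod 0775 /var/spool/mail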
Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
newgidmap: enforce setgroups=deny if self-mapping a group
This is necessary to match the kernel-side policy of "self-mapping in a
user namespace is fine, but you cannot drop groups" -- a policy that was
created in order to stop user namespaces from allowing trivial privilege
escalation by dropping supplementary groups that were "blacklisted" from
certain paths.
This is the simplest fix for the underlying issue, and it effectively means
that unless a user has a valid mapping set in /etc/subgid (which only
administrators can modify), and is currently trying to use that mapping,
/proc/$pid/setgroups will be set to deny. This
workaround is only partial, because ideally it should be possible to set
an "allow_setgroups" or "deny_setgroups" flag in /etc/subgid to allow
administrators to further restrict newgidmap(1).
We also don't write anything in the "allow" case because "allow" is the
default, and users may have already written "deny" even if they
technically are allowed to use setgroups. And we don't write anything if
the setgroups policy is already "deny".
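For reference, a minimal shell sketch of the kernel rule this mirrors -- an
unprivileged process must have setgroups set to "deny" before its gid_map can
be written (PID and the mapped range are illustrative):

  echo deny > /proc/$PID/setgroups       # must happen before gid_map for a self-mapping
  echo "0 1000 1" > /proc/$PID/gid_map   # e.g. map the caller's own gid 1000 to 0 inside the namespace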
linux-firmware: fix the mess of licenses
* The LICENSE_CREATE_PACKAGE functionality in license.bbclass, when enabled,
adds a new package with the suffix:
LICENSE_PACKAGE_SUFFIX ??= "-lic"
but before adding it, it checks whether ${PN}-${LICENSE_PACKAGE_SUFFIX} is
already included in PACKAGES, and when it finds a match it shows:
WARNING: linux-firmware-1_0.0+gitAUTOINC+4c0bf113a5-r0 do_package: linux-firmware-lic package already existed in linux-firmware.
It then does not add ${PN}-lic to PACKAGES, which causes another warning:
WARNING: linux-firmware-1_0.0+gitAUTOINC+4c0bf113a5-r0 do_package: QA Issue: linux-firmware: Files/directories were installed but not shipped in any package:
/usr
/usr/share
/usr/share/licenses
/usr/share/licenses/linux-firmware
That happens because ${PN}-lic was searched for in PACKAGES as a plain
string, so it matched as a substring of ${PN}-license. Add a split() so the
check looks for it in a list of package names instead.
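A shell analogy of the difference (hedged; the real check is in the
license.bbclass Python code):

  PACKAGES="linux-firmware linux-firmware-license"
  # old behaviour: plain substring test, wrongly matches inside linux-firmware-license
  case "$PACKAGES" in *linux-firmware-lic*) echo "substring match" ;; esac
  # fixed behaviour: split into words and compare whole package names only
  for p in $PACKAGES; do [ "$p" = "linux-firmware-lic" ] && echo "exact match"; done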
license.bbclass: Minor simplification of get_deployed_dependencies()
Since ${SSTATE_ARCHS} now contains ${PACKAGE_EXTRA_ARCHS} there is no
longer any need to add those extra architectures to the list of
architectures handled in get_deployed_dependencies().
Daniel Díaz [Tue, 14 Aug 2018 14:47:03 +0000 (09:47 -0500)]
multilib_header: recognize BPF as a target
When building with `clang -target bpf` using the
multilib_header, a recursion was unavoidable because
bits/wordsize.h would #include itself, still lacking
a definition for __MHWORDSIZE or __WORDSIZE.
Derek Straka [Tue, 30 Jan 2018 03:04:39 +0000 (22:04 -0500)]
python-native: add dependency for gdbm and db native packages
These two packages are required to ensure the manifest files contain all of
the generated packages. Without this, the db and gdbm packages will not
contain the .so files, as they are skipped during the compilation steps.
Ross Burton [Mon, 13 Aug 2018 23:59:39 +0000 (00:59 +0100)]
bzip2: use Yocto Project mirror for SRC_URI
The bzip.org domain expired and is now a holding site for adverts, so we can't
trust a tarball that appears on that site (luckily we have source checksums to
detect this).
For now, point SRC_URI at the tarball in the Yocto Project source mirror, but
set HOMEPAGE and UPSTREAM_CHECK_URI to the sourceware.org/bzip2/ page which
apparently will be resurrected as the new canonical home page.
Ross Burton [Mon, 13 Aug 2018 17:20:54 +0000 (18:20 +0100)]
classes: sanity-check LIC_FILES_CHKSUM
We assume that LIC_FILES_CHKSUM is a file: URI but don't actually verify
this, which can lead to problems: if you have a URI that resolves to a path
of /, BitBake will then dutifully checksum / recursively.
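For context, a well-formed value is a file: URI relative to the source tree,
e.g. (md5 value illustrative):

  LIC_FILES_CHKSUM = "file://COPYING;md5=0123456789abcdef0123456789abcdef"

The added sanity check is about catching values that don't follow this shape
before BitBake starts walking the filesystem.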
module-base.bbclass: fix out-of-tree module builds with custom EXTRA_OEMAKE
Commit d2aa88a6a92985f21414fceea2dc0facbf7f8779 was meant to backport build
dependencies on bc-native and openssl-native, but it also changed execution
of do_make_scripts() from calling make directly to using oe_runmake. That
change was made in master/sumo as part of a separate make-mod-scripts recipe.
Unfortunately, that doesn't work here in rocko in the context of the
module-base class, as it gets executed inside the out-of-tree module
environment. Quite often those out-of-tree modules provide their own
Makefile with a custom EXTRA_OEMAKE variable defined. But do_make_scripts()
gets executed within STAGING_KERNEL_DIR and cannot simply use a custom
EXTRA_OEMAKE set by a module.
Move back to calling make and passing HOSTCC/HOSTCPP directly, without using
EXTRA_OEMAKE.
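A hedged sketch of the shape do_make_scripts() takes after this change
(variable names follow the usual kernel class conventions; not the literal
class code):

  do_make_scripts() {
      unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS
      make CC="${KERNEL_CC}" LD="${KERNEL_LD}" AR="${KERNEL_AR}" \
           HOSTCC="${BUILD_CC}" HOSTCPP="${BUILD_CPP}" \
           -C ${STAGING_KERNEL_DIR} O=${STAGING_KERNEL_BUILDDIR} scripts
  }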
For more details please see:
http://lists.openembedded.org/pipermail/openembedded-core/2018-August/154189.html
CVE-2017-16612: Fix heap overflows when parsing malicious files
It is possible to trigger heap overflows due to an integer overflow
while parsing images and a signedness issue while parsing comments.
The integer overflow occurs because the chosen limit of 0x10000 for image
dimensions is too large for 32-bit systems, since each pixel takes 4 bytes.
Carefully chosen values trigger an overflow, which in turn leads to less
memory being allocated than is needed for subsequent reads.
The signedness bug is triggered by reading the length of a comment
as unsigned int, but casting it to int when calling the function
XcursorCommentCreate. Turning length into a negative value allows the
check against XCURSOR_COMMENT_MAX_LEN to pass, and the following
addition of sizeof (XcursorComment) + 1 makes it possible to allocate
less memory than needed for subsequent reads.
Chen Qi [Mon, 14 May 2018 08:35:22 +0000 (16:35 +0800)]
devtool/sdk.py: error out in case of downloading file failure
It's possible that downloading a file from the update server fails. In
this case, we should error out instead of continuing.
We have users reporting unexpected behavior of 'devtool sdk-update'.
When an invalid URL is supplied, e.g. `devtool sdk-update http://invalid',
the program reports 'Note: Already up-to-date'.
This is obviously not expected. We should error out in such cases.
Whenever perf got rebuilt, I was consistently getting errors such as
| find: '[...]/perf/1.0-r9/perf-1.0/plugin_mac80211.so': No such file or directory
| find: '[...]/perf/1.0-r9/perf-1.0/plugin_mac80211.so': No such file or directory
| find: find: '[...]/perf/1.0-r9/perf-1.0/libtraceevent.a''[...]/perf/1.0-r9/perf-1.0/libtraceevent.a': No such file or directory: No such file or directory
|
[...]
| find: cannot delete '/mnt/xfs/devel/pil/yocto/tmp-glibc/work/wandboard-oe-linux-gnueabi/perf/1.0-r9/perf-1.0/util/.pstack.o.cmd': No such file or directory
breaking the whole build. The root cause seems to be that the implicit
'make clean' done during do_configure ends up running in parallel, and
thus multiple find commands attempt to stat and/or delete the same
file.
A patch disabling parallelism for the clean target has been ack'ed
upstream (lkml.kernel.org/r/20180705134955.GB3686@krava), but it should
be harmless to pass JOBS=1 even with a fixed kernel. This can be removed
if and when all relevant -stable kernels have that patch.
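One hedged way to express this in the recipe, assuming the recipe's make
variables are passed via EXTRA_OEMAKE (perf's top-level Makefile derives its
own -j level from a JOBS variable):

  EXTRA_OEMAKE += "JOBS=1"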
Ross Burton [Fri, 9 Mar 2018 18:56:10 +0000 (20:56 +0200)]
cryptodev: refresh patches
The patch tool will by default apply patches with "fuzz": if the exact hunk
context isn't present but what is there is close enough, it will force the
patch in.
Whilst this is useful when there are just whitespace changes, when applied to
source it is possible for a patch applied with fuzz to produce broken code
which still compiles (see #10450). This is obviously bad.
We'd like to eventually have do_patch() rejecting any fuzz on these grounds. For
that to be realistic the existing patches with fuzz need to be rebased and
reviewed.
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Alexander Kanavin <alexander.kanavin@linux.intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
One of the tarball mirrors is down; the other is blocked by Intel's corporate proxy
for being deemed 'suspicious' (the same problem might pop up in other
companies as well). Let's just take the source from github.
Ross Burton [Wed, 15 Nov 2017 16:47:41 +0000 (16:47 +0000)]
ovmf: refresh patches
The patch tool will by default apply patches with "fuzz": if the exact hunk
context isn't present but what is there is close enough, it will force the
patch in.
Whilst this is useful when there are just whitespace changes, when applied to
source it is possible for a patch applied with fuzz to produce broken code
which still compiles (see #10450). This is obviously bad.
We'd like to eventually have do_patch() rejecting any fuzz on these grounds. For
that to be realistic the existing patches with fuzz need to be rebased and
reviewed.
Martin Jansa [Thu, 24 May 2018 14:56:01 +0000 (14:56 +0000)]
perf: fix build with kernel older than 4.8
* perf is failing to build for me since this oe-core commit:
commit 9b38c824961fc9dce51bda95c25dac91a69fc64f
Author: Hongxu Jia <hongxu.jia@windriver.com>
Date: Tue Apr 24 11:33:47 2018 +0800
perf: make a copy of kernel source to perf workdir
the problem is that the perf sources in kernels older than 4.8 (in my case
4.4) depend on the "global" include headers outside the tools directory,
e.g. swab.h in:
kernel-source/tools$ git grep swab.h
perf/MANIFEST:include/linux/swab.h
perf/MANIFEST:include/uapi/linux/swab.h
perf/util/include/asm/byteorder.h:#include "../../../../include/uapi/linux/swab.h"
this was resolved in 4.8 with:
commit 7e3f36411342a54f1981fa97b43550b8406a3d69
Author: Arnaldo Carvalho de Melo <acme@redhat.com>
Date: Mon Jul 18 17:42:16 2016 -0300
Not used anymore. This also stops include linux/swab.h directly
from the kernel sources, remove that reference from the MANIFEST.
and a few more changes to make tools/include more complete and standalone:
tools/include in 4.15:
asm asm-generic linux tools trace uapi
tools/include in 4.4:
asm asm-generic linux tools
but copying the include headers even for kernels which don't really need
them doesn't add much overhead, so just copy include into the perf sources
for all kernels.
perf: make a copy of kernel source to perf workdir
Since perf contaminates the shared Linux kernel workdir, it probably caused
the kernel-devsrc compile failure seen during a world build.
...
|0 blocks
|cpio: ./tools/perf/arch/arm/util/sedr7ORqk: Cannot stat:
No such file or directory
|0 blocks
...
cpio tried to find a file at ${S}/tools/perf and failed because the input
list was no longer valid.
Making a copy of the shared kernel source directory into a perf workdir
fixes the issue.
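A hedged illustration of the kind of copy described here; the exact task and
paths used by the recipe may differ:

  do_configure_prepend() {
      # build perf from its own copy so generated files never land in the
      # shared kernel source tree
      mkdir -p ${S}
      cp -r ${STAGING_KERNEL_DIR}/. ${S}/
  }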
Bruce Ashfield [Sat, 28 Jul 2018 08:49:50 +0000 (16:49 +0800)]
make-mod-scripts: add build requirements for external modules
Newer kernels (4.14/v4.15+) have dependencies for the build of
modules (and hence external modules). Without these dependencies
explicitly in the build chain, you can end up with build failures like:
work-shared/qemux86/kernel-source/scripts/extract-cert.c:21:25: fatal
error: openssl/bio.h: No such file or directory
| #include <openssl/bio.h>
| ^
| compilation terminated.
| make[2]: *** [scripts/extract-cert] Error 1
| make[1]: *** [scripts] Error 2
To ensure that these headers are in place, and that the scripts use
our build environment flags, we add a dependency on openssl-native
and use oe_make to invoke the build.
Older kernels have no issues with the extra dependency, so there's no
need to make this conditional.
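A hedged sketch of the two pieces (using the standard oe_runmake wrapper; not
the literal class code):

  DEPENDS += "openssl-native"

  do_make_scripts() {
      unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS
      oe_runmake CC="${KERNEL_CC}" LD="${KERNEL_LD}" \
                 -C ${STAGING_KERNEL_DIR} O=${STAGING_KERNEL_BUILDDIR} scripts
  }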
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
[Tweaked to have the changes in module*.bbclass instead, from where
make-mod-scripts was split out in sumo]
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Signed-off-by: Armin Kuster <akuster808@gmail.com>
Kernels which use tools/objtool can now fail when building external modules
because objtool is missing; the generated files can also cause problems
for kernel-devsrc.
Ensure objtool is generated in make-mod-scripts by also calling
"make prepare".
For devsrc, delete the generated binaries since they'd be native
binaries and unsuitable for the target.
The oeqa kernel module tests also need to have the additional "make prepare"
step added.
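A hedged sketch of the extra step and the devsrc cleanup (the invocation and
the objtool path are illustrative):

  oe_runmake CC="${KERNEL_CC}" LD="${KERNEL_LD}" \
             -C ${STAGING_KERNEL_DIR} O=${STAGING_KERNEL_BUILDDIR} prepare

  # in kernel-devsrc, drop generated host binaries such as:
  rm -f ${D}${KERNEL_SRC_PATH}/tools/objtool/objtool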
Liwei Song [Thu, 1 Feb 2018 06:40:49 +0000 (01:40 -0500)]
linux-firmware: package all ibt-17-x-x.sfi/ddc firmware
All ibt-17-x-x.sfi/ddc firmware files are used to support Intel Bluetooth
9560; they are needed by different versions of the Bluetooth driver since
kernel 4.14.
commit b77bb7afe513 ("linux-firmware: package ibt-17-16-1 firmware")
only packaged one of the ibt-17 series firmware files.
As the Bluetooth driver gets updated, to avoid packaging the ibt-17 firmware
files one by one, install them all in a single ibt-17 package.
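A hedged sketch of grouping them with wildcards instead of listing each
version (the exact firmware install paths in the recipe may differ slightly):

  FILES_${PN}-ibt-17 = " \
      ${nonarch_base_libdir}/firmware/intel/ibt-17-*.sfi \
      ${nonarch_base_libdir}/firmware/intel/ibt-17-*.ddc \
  "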