Dan Callaghan [Tue, 15 Oct 2019 22:44:56 +0000 (08:44 +1000)]
elfutils: add PACKAGECONFIG for compression algorithms
Elfutils has optional support for bzip2 and xz (lzma). It uses these
for decompressing embedded ELF sections such as the .gnu_debugdata
section used for "mini debuginfo".
Previously this support was unconditionally disabled, but the reasons
for disabling it no longer seem to apply. Both the target and native
variants of elfutils build successfully against both bzip2 and xz.
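A sketch of what such a PACKAGECONFIG could look like (option names and
configure flags are illustrative; the actual recipe change may differ):
PACKAGECONFIG ??= "bzip2 xz"
PACKAGECONFIG[bzip2] = "--with-bzlib,--without-bzlib,bzip2"
PACKAGECONFIG[xz] = "--with-lzma,--without-lzma,xz"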
Signed-off-by: Dan Callaghan <dan.callaghan@opengear.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Upstream added support for optional docs, so
0001-Do-not-generate-gtkdoc-or-python-bindings.patch is replaced
with an option to disable gtk-doc (as the modulemd feature is not used
in oe-core anyway).
Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Changing the GL options passed to qemu doesn't result in a correctly
rebuilt binary: the GL linkage can persist from a build where it was
enabled to one where it was not.
As well as clearly being incorrect and non-reproducible, this caused
some mystery failures on the autobuilder.
Cleaning ${B} at do_configure time avoids this. Most recipes
(e.g. autotools derived ones) already clean ${B} as appropriate and
avoid this issue.
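One way to express this in the recipe is the cleandirs task flag (a
sketch; the actual change may be written differently):
do_configure[cleandirs] += "${B}"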
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Haiqing Bai [Thu, 24 Oct 2019 02:33:04 +0000 (10:33 +0800)]
unfs3: fixed the issue that unfsd consumes 100% CPU
The 'accept' call on the sockets of the unfsd daemon is always in the
following error state:
accept(4, 0x7ffd5e6dddc0, [128]) = -1 EINVAL (Invalid argument)
accept(6, 0x7ffd5e6dddc0, [128]) = -1 EINVAL (Invalid argument)
This error state occurs inside the daemon's 'for' loop, so it spins and
consumes 100% CPU. The reason is that 'listen' is not called for the TCP
socket before 'accept': the 'svc_tli_create' function from libtirpc does
not call 'listen' on a bound socket.
Signed-off-by: Haiqing Bai <Haiqing.Bai@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Kai Kang [Wed, 23 Oct 2019 07:49:39 +0000 (15:49 +0800)]
bind: fix CVE-2019-6471 and CVE-2018-5743
Backport patches to fix CVE-2019-6471 and CVE-2018-5743 for bind.
CVE-2019-6471 is fixed by 0001-bind-fix-CVE-2019-6471.patch and the
other six patches are for CVE-2018-5743. One more patch is backported
to fix a compile error on arm caused by those six commits.
Signed-off-by: Kai Kang <kai.kang@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Ross Burton [Wed, 23 Oct 2019 15:33:46 +0000 (16:33 +0100)]
buildhistory-analysis: filter out -src changes by default
Like the -dbg package, the -src package is automatically generated and
contains source filenames. We expect it to change on every upgrade, so
don't show the differences unless the user wants to see all changes.
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
André Draszik [Mon, 21 Oct 2019 10:46:59 +0000 (11:46 +0100)]
connman: mark connman-wait-online as SYSTEMD_PACKAGE
The connman-wait-online package currently isn't marked as a
systemd-enabled package. This means it is impossible to auto-enable the
service during image creation or package installation, as no preset
files and no pkg_postinst() snippet are being created.
This change should have been done as part of the upgrade to v1.31.
Note:
connman-wait-online is needed when connman is in use in more complex
network/interface setups for systemd's network-online.target to report
success. systemd-networkd's systemd-networkd-wait-online.service alone
doesn't work in such scenarios and simply times out, as it knows nothing
about the expected network/interface configuration. This means the
target isn't reached successfully (systemctl list-units --failed), and
long delays are seen, caused by waiting for the
systemd-networkd-wait-online.service timeout.
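A sketch of the kind of change this implies in the recipe (package and
unit names as used by connman, shown for illustration only):
SYSTEMD_PACKAGES += "${PN}-wait-online"
SYSTEMD_SERVICE_${PN}-wait-online = "connman-wait-online.service"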
Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Mike Crowe [Mon, 21 Oct 2019 14:38:18 +0000 (15:38 +0100)]
kernel-devicetree: Cope with non-standard kernel deploy subdirectory
kernel.bbclass installs non-standard kernels (where
KERNEL_PACKAGE_NAME is not "kernel") in a subdirectory of ${DEPLOYDIR}.
To achieve this kernel_do_deploy sets the deployDir shell variable to
${DEPLOYDIR} for the standard kernel or
${DEPLOYDIR}/${KERNEL_DEPLOYSUBDIR} for non-standard kernels.
kernel-devicetree.bbclass's do_deploy_append ought to do the same
and can do so by using the same shell variable.
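For reference, kernel_do_deploy computes that shell variable roughly as
follows (a sketch), so the append only needs to use ${deployDir} instead
of ${DEPLOYDIR}:
deployDir="${DEPLOYDIR}"
if [ -n "${KERNEL_DEPLOYSUBDIR}" ]; then
    deployDir="${DEPLOYDIR}/${KERNEL_DEPLOYSUBDIR}"
fi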
Signed-off-by: Mike Crowe <mac@mcrowe.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Mike Crowe [Mon, 21 Oct 2019 14:38:17 +0000 (15:38 +0100)]
kernel-fitimage: Cope with non-standard kernel deploy subdirectory
kernel.bbclass installs non-standard kernels (where
KERNEL_PACKAGE_NAME is not "kernel") in a subdirectory of ${DEPLOYDIR}.
To achieve this kernel_do_deploy sets the deployDir shell variable to
${DEPLOYDIR} for the standard kernel or
${DEPLOYDIR}/${KERNEL_DEPLOYSUBDIR} for non-standard kernels.
kernel-fitimage.bbclass's kernel_do_deploy_append ought to do the same
and can do so by using the same shell variable.
Signed-off-by: Mike Crowe <mac@mcrowe.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Chee Yang Lee [Tue, 22 Oct 2019 05:27:06 +0000 (13:27 +0800)]
wic/engine: use 'linux-swap' for swap file system
[YOCTO #13312]
see https://bugzilla.yoctoproject.org/show_bug.cgi?id=13312
wic/engine.py's Disk._get_part_image was checking the fstypes variable
for the supported fstype 'swap', but the image is built with
'linux-swap'. The supported fstype should be 'linux-swap'.
Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Changqing Li [Tue, 22 Oct 2019 02:47:11 +0000 (10:47 +0800)]
sudo: fix CVE-2019-14287
In Sudo before 1.8.28, an attacker with access to a Runas ALL sudoer
account can bypass certain policy blacklists and session PAM modules,
and can cause incorrect logging, by invoking sudo with a crafted user
ID. For example, this allows bypass of !root configuration, and USER=
logging, for a "sudo -u \#$((0xffffffff))" command.
Signed-off-by: Changqing Li <changqing.li@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
André Draszik [Mon, 21 Oct 2019 10:30:06 +0000 (11:30 +0100)]
oeqa/runtime/context.py: ignore more files when loading controllers
When loading controllers as (external) modules, the code currently
tries to load all files ending in .py. This is a problem when, during
development, an editor creates a lock file in the same directory as the
.py file, as the lock file is typically called '.#xxxx.py'.
Python will try to load the lock file and fail miserably with
an exception:
The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:do_testimage(d)
0003:
File: 'poky/meta/classes/testimage.bbclass', lineno: 114, function: do_testimage
0110: netstat -an
0111:}
0112:
0113:python do_testimage() {
*** 0114: testimage_main(d)
0115:}
0116:
0117:addtask testimage
0118:do_testimage[nostamp] = "1"
File: 'poky/meta/classes/testimage.bbclass', lineno: 294, function: testimage_main
0290:
0291: # the robot dance
0292: target = OERuntimeTestContextExecutor.getTarget(
0293: d.getVar("TEST_TARGET"), logger, d.getVar("TEST_TARGET_IP"),
*** 0294: d.getVar("TEST_SERVER_IP"), **target_kwargs)
0295:
0296: # test context
0297: tc = OERuntimeTestContext(td, logger, target, host_dumper,
0298: image_packages, extract_dir)
File: 'poky/meta/lib/oeqa/runtime/context.py', lineno: 116, function: getTarget
0112: # XXX: Don't base your targets on this code it will be refactored
0113: # in the near future.
0114: # Custom target module loading
0115: target_modules_path = kwargs.get('target_modules_path', '')
*** 0116: controller = OERuntimeTestContextExecutor.getControllerModule(target_type, target_modules_path)
0117: target = controller(logger, target_ip, server_ip, **kwargs)
0118:
0119: return target
0120:
File: 'poky/meta/lib/oeqa/runtime/context.py', lineno: 128, function: getControllerModule
0124: # ImportError raised if a provided module can not be imported.
0125: @staticmethod
0126: def getControllerModule(target, target_modules_path):
0127: controllerslist = OERuntimeTestContextExecutor._getControllerModulenames(target_modules_path)
*** 0128: controller = OERuntimeTestContextExecutor._loadControllerFromName(target, controllerslist)
0129: return controller
0130:
0131: # Return a list of all python modules in lib/oeqa/controllers for each
0132: # layer in bbpath
File: 'poky/meta/lib/oeqa/runtime/context.py', lineno: 163, function: _loadControllerFromName
0159: # Raise ImportError if a provided module can not be imported
0160: @staticmethod
0161: def _loadControllerFromName(target, modulenames):
0162: for name in modulenames:
*** 0163: obj = OERuntimeTestContextExecutor._loadControllerFromModule(target, name)
0164: if obj:
0165: return obj
0166: raise AttributeError("Unable to load {0} from available modules: {1}".format(target, str(modulenames)))
0167:
File: 'poky/meta/lib/oeqa/runtime/context.py', lineno: 173, function: _loadControllerFromModule
0169: @staticmethod
0170: def _loadControllerFromModule(target, modulename):
0171: obj = None
0172: # import module, allowing it to raise import exception
*** 0173: module = __import__(modulename, globals(), locals(), [target])
0174: # look for target class in the module, catching any exceptions as it
0175: # is valid that a module may not have the target class.
0176: try:
0177: obj = getattr(module, target)
Exception: ImportError: No module named 'oeqa.controllers.'
Simply ignore those when collecting the list of files to try
to load.
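A minimal sketch of the idea (function and variable names here are
illustrative, not the exact oeqa code):
import os

def controller_module_names(path):
    # collect python modules, skipping editor lock files such as '.#foo.py'
    return [name for name in os.listdir(path)
            if name.endswith('.py') and not name.startswith('.')]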
Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Tom Benn [Thu, 17 Oct 2019 10:20:04 +0000 (10:20 +0000)]
dbus: update dbus-1.init to reflect new PID file
The PID file referenced in the dbus-1.init script was out of date and
no longer existed. This meant that dbus could not be restarted via
init.d without forcibly removing the old PID file.
Signed-off-by: fridgecow <fridgecow@fb.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Yi Zhao [Tue, 15 Oct 2019 07:42:12 +0000 (15:42 +0800)]
libgcrypt: fix CVE-2019-12904
In Libgcrypt 1.8.4, the C implementation of AES is vulnerable to a
flush-and-reload side-channel attack because physical addresses are
available to other processes. (The C implementation is used on platforms
where an assembly-language implementation is unavailable.)
André Draszik [Tue, 1 Oct 2019 14:29:56 +0000 (15:29 +0100)]
ruby: some ptest fixes
* the (new?) ruby expects some additional compiled libraries
to run, so we need to copy them as part of ptest.
Fixes errors like:
# ruby ./runner.rb ./-ext-/vm/test_at_exit.rb
Run options:
# Running tests:
[1/1] TestVM#test_at_exit = 0.06 s
1) Failure:
TestVM#test_at_exit [/usr/lib/ruby/ptest/test/-ext-/vm/test_at_exit.rb:7]:
1. [1/2] Assertion for "stdout"
| <["begin", "end"]> expected but was
| <[]>.
2. [2/2] Assertion for "stderr"
| <[]> expected but was
| <["-:1:in `require': cannot load such file -- -test-/vm/at_exit (LoadError)",
| "\tfrom -:1:in `<main>'"]>.
* the 'erb' test can't find the erb binary, as we're not
running this from within the build directory
Signed-off-by: André Draszik <andre.draszik@jci.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
icecc: Export ICECC_CC and friends via wrapper-script
By exporting ICECC_CC, ICECC_CXX, and ICECC_VERSION in a wrapper-script,
and putting this wrapper-script in the PATH, the Makefiles generated by CMake or
the autotools are able to function correctly outside of bitbake.
This provides a convenient developer workflow in which the
modify-compile-unittest cycle can happen directly in the ${B} directory.
The `rm -f $ICE_PATH/$compiler` line is transitional,
and can go at some later date (October 2020 or later, perhaps).
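A purely illustrative sketch of such a wrapper script (paths and values
are examples only, not the script actually generated by icecc.bbclass):
#!/bin/sh
# exported here so Makefiles run outside bitbake still go through icecream
export ICECC_CC="/path/to/real/gcc"
export ICECC_CXX="/path/to/real/g++"
export ICECC_VERSION="/path/to/native-toolchain.tar.gz"
exec /usr/bin/icecc "$@"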
Signed-off-by: Douglas Royds <douglas.royds@taitradio.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Also note that 'created.rid' is not being installed anymore since
v2.6.0.
While additional LICENSEs were added to the recipe,
they should always have been mentioned in this recipe,
i.e. the license checksum was updated only because:
* URLs were updated
* new imported components were mentioned (with no new licenses)
* formatting was changed
* dates were updated
Signed-off-by: André Draszik <andre.draszik@jci.com>
Signed-off-by: Ross Burton <ross.burton@intel.com>
Michael Ho [Wed, 14 Aug 2019 15:05:15 +0000 (17:05 +0200)]
cmake.bbclass: add HOSTTOOLS_DIR to CMAKE_FIND_ROOT_PATH
The find_program command will fail if it is used on a tool that is listed in
ASSUME_PROVIDED. This is because these tools are in the hosttools directory
which is not listed in CMAKE_FIND_ROOT_PATH so cmake will not find them.
Adding the directory HOSTTOOLS_DIR to the CMAKE_FIND_ROOT_PATH variable fixes
the initial issue of needing to search for tools in ASSUME_PROVIDED.
Note that this change alone does not fix the issue, because
find_program by default only looks in the bin and usr/bin
subdirectories under the paths in CMAKE_FIND_ROOT_PATH, whereas the
hosttools directory contains the symlinks directly, without these
subdirectories.
Therefore also set CMAKE_PROGRAM_PATH to include the root directory by
default, so find_program can search the hosttools directory without
needing the prefix subdirectories.
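In the generated toolchain file this amounts to roughly the following
(illustrative; the actual template in cmake.bbclass lists more paths):
set( CMAKE_FIND_ROOT_PATH ${STAGING_DIR_HOST} ${STAGING_DIR_NATIVE} ${HOSTTOOLS_DIR} )
set( CMAKE_PROGRAM_PATH "/" )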
Signed-off-by: Ross Burton <ross.burton@intel.com>
devtool: Add --remove-work option for devtool reset command
Enable a --remove-work option for the devtool reset command that allows
the user to clean up the source directory within the workspace.
Currently the devtool reset command only removes the recipe, and the
user is forced to manually remove the sources directory within the
workspace before running devtool modify again.
Using the devtool reset -r or devtool reset --remove-work option, the
user can clean up the sources directory along with the recipe instead
of removing it manually.
syntax: devtool reset -r <recipename>
Ex: devtool reset -r zip
Ross Burton [Thu, 17 Oct 2019 11:29:43 +0000 (12:29 +0100)]
python3: ensure that all forms of python3-config are in python3-dev
In multilib builds python3-config gets renamed to e.g.
python3-config-lib64, but this ends up being packaged in python3-core,
not python3-dev.
The manifest uses an extended glob to package all python* binaries that are not
python-config into python3-core:
"${bindir}/python*[!-config]",
However, this doesn't do what was intended, as [] is a range match.
Replace the globs with more verbose but precise matches, and clear out
FILES_${PN} to ensure that new binaries don't end up in ${PN} (which shouldn't
exist).
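The range-match behaviour can be demonstrated with Python's fnmatch
(this only illustrates the glob semantics, not the packaging code
itself):
import fnmatch
# '[!-config]' is a character class matching one character that is not
# in {-, c, o, n, f, i, g}; it is not a literal "-config" suffix test
print(fnmatch.fnmatch('python3-config', 'python*[!-config]'))        # False, excluded as hoped
print(fnmatch.fnmatch('python3-config-lib64', 'python*[!-config]'))  # True, wrongly matched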
[ YOCTO #13592 ]
Signed-off-by: Ross Burton <ross.burton@intel.com>
Mikko Rapeli [Thu, 17 Oct 2019 07:31:58 +0000 (10:31 +0300)]
systemd.bbclass: enable all services specified in ${SYSTEMD_SERVICE}
This has been the traditional way of enabling systemd services.
It may conflict with the presets feature, but other layers, image
classes and recipes add services to be enabled using the
SYSTEMD_SERVICE variable, also with a read-only rootfs, e.g. when
IMAGE_FEATURES contains stateless-rootfs and the systemd_preset_all
task is not executed.
This fixes startup of custom services from our recipes using custom
image classes with various BSP layers. In the worst case even the
serial console getty service wasn't starting, due to a dependency on
services that were not enabled.
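For reference, the traditional recipe usage being honoured here is
(service name illustrative):
SYSTEMD_SERVICE_${PN} = "my-custom.service"
SYSTEMD_AUTO_ENABLE_${PN} = "enable"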
Signed-off-by: Mikko Rapeli <mikko.rapeli@bmw.de>
Cc: Peter Kjellerstedt <peter.kjellerstedt@axis.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
André Draszik [Thu, 17 Oct 2019 09:28:02 +0000 (10:28 +0100)]
oeqa/runtime/systemd: skip unit enable/disable on read-only-rootfs
This doesn't work on read-only-rootfs:
AssertionError: 1 != 0 : SYSTEMD_BUS_TIMEOUT=240s systemctl disable avahi-daemon.service
Failed to disable unit: File /etc/systemd/system/multi-user.target.wants/avahi-daemon.service: Read-only file system
This patch does two things:
1) Decorate the existing test to be skipped if the rootfs is
read-only
2) add a new test to be executed only if the rootfs is
read-only. This new test remounts the rootfs read-write
before continuing to execute the existing test, making
sure to clean up correctly after itself (remount r/o
again).
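A sketch of the decorator usage on the existing test (class name, test
name and decorator arguments are abbreviated/illustrative):
from oeqa.runtime.case import OERuntimeTestCase
from oeqa.core.decorator.data import skipIfFeature

class SystemdServiceTests(OERuntimeTestCase):
    @skipIfFeature('read-only-rootfs',
                   'Test does not apply to read-only-rootfs images')
    def test_systemd_disable_enable(self):
        # body elided; disables and re-enables a unit via systemctl
        pass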
Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
André Draszik [Wed, 16 Oct 2019 09:18:24 +0000 (10:18 +0100)]
oeqa/runtime/opkg: skip install on read-only-rootfs
Images can have package management enabled, but be generally running as
read-only. In this case, the test currently fails with various errors.
Use the new @skipIfFeature decorator to also skip
this test in that case.
Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
André Draszik [Wed, 16 Oct 2019 09:18:22 +0000 (10:18 +0100)]
oeqa/runtime/df: don't fail on long device names
When device names are long (more than 20 characters), the
df test will fail with an exception:
self.assertTrue(int(output)>5120, msg=msg)
ValueError: invalid literal for int() with base 10: ''
at least when busybox is in use.
The reason is that busybox breaks the line in that case:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/disk/by-partuuid/8e991e5a-cebd-4f88-9494-c9db4f30cb02 1998672 87024 1790408 5% /
and the code tries to extract the fourth field from the
second line, which is empty of course.
df can be told not to break lines, though, using the -P
flag, which turns on the POSIX output format, and is
supported by busybox df and coreutils df:
Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/disk/by-partuuid/8e991e5a-cebd-4f88-9494-c9db4f30cb021998672 87024 1790408 5% /
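Conceptually the extraction then becomes (a sketch, not the exact test
code):
df -P / | awk 'NR==2 {print $4}'    # 'Available' KiB for /, always on one line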
Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
André Draszik [Wed, 16 Oct 2019 09:18:21 +0000 (10:18 +0100)]
testimage.bbclass: enable ssh agent forwarding
Some targets might use ssh to do their power or serial control. In that
case, ssh might need access to the ssh agent, without which it won't
work. So export the relevant variables into the environment.
Note that the (old) oeqa/controllers/masterimage.py
tries to do that as well by exporting all of BB_ORIGENV
into the test environment. Here in testimage.bbclass we
are a bit more strict and only pass the ssh related
environment variables.
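A sketch of the idea as it would appear in the image test setup (exact
variable handling is illustrative):
# copy only the ssh agent variables from the original bitbake environment
origenv = d.getVar("BB_ORIGENV", False)
for var in ("SSH_AGENT_PID", "SSH_AUTH_SOCK"):
    value = origenv.getVar(var)
    if value:
        os.environ[var] = value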
Signed-off-by: André Draszik <git@andred.net>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
André Draszik [Wed, 16 Oct 2019 09:18:20 +0000 (10:18 +0100)]
testimage.bbclass: support hardware-controlled targets
Since the introduction of the new runtime framework for target testing
in commit 2aa5a4954d76 ("testimage.bbclass: Migrate class to use new
runtime framework"), commit 3857e5c91da6 in poky.git, target
controllers no longer have access to the global datastore 'd'.
This makes it impossible for a specific OEQA (hardware)
controller to access documented properties like
TEST_POWERCONTROL_CMD, TEST_SERIALCONTROL_CMD, etc,
meaning it's impossible for those controllers to actually
control the hardware.
To solve this, simply add those documented variables to the
target_kwargs dictionary.
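A sketch of the kind of addition (dictionary key names are
illustrative):
# in testimage_main(), before the target controller is constructed
target_kwargs['powercontrol_cmd'] = d.getVar("TEST_POWERCONTROL_CMD") or ""
target_kwargs['powercontrol_extra_args'] = d.getVar("TEST_POWERCONTROL_EXTRA_ARGS") or ""
target_kwargs['serialcontrol_cmd'] = d.getVar("TEST_SERIALCONTROL_CMD") or ""
target_kwargs['serialcontrol_extra_args'] = d.getVar("TEST_SERIALCONTROL_EXTRA_ARGS") or ""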
Signed-off-by: André Draszik <andre.draszik@jci.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Ross Burton [Thu, 17 Oct 2019 11:29:44 +0000 (12:29 +0100)]
python3: -dev should depend on distutils
python3-config uses distutils:
Traceback (most recent call last):
File "/usr/bin/python3-config", line 9, in <module>
from distutils import sysconfig
ModuleNotFoundError: No module named 'distutils'
Add the dependency so that distutils is always present.
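The effect in recipe terms amounts to the following (the real change is
made in the python3 manifest, so this is only an illustration):
RDEPENDS_${PN}-dev += "${PN}-distutils"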
[ YOCTO #13592 ]
Signed-off-by: Ross Burton <ross.burton@intel.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Fixes:
# decode-dimms
Can't locate Carp.pm in @INC (you may need to install the Carp module) (@INC contains: /usr/lib/perl5/site_perl/5.28.1/x86_64-linux /usr/lib/perl5/site_perl/5.28.1 /usr/lib/perl5/vendor_perl/5.28.1/x86_64-linux /usr/lib/perl5/vendor_perl/5.28.1 /usr/lib/perl5/5.28.1/x86_64-linux /usr/lib/perl5/5.28.1 .) at /usr/lib/perl5/5.28.1/Tie/Hash.pm line 190.
BEGIN failed--compilation aborted at /usr/lib/perl5/5.28.1/Tie/Hash.pm line 190.
Compilation failed in require at /usr/lib/perl5/5.28.1/x86_64-linux/POSIX.pm line 505.
Compilation failed in require at /usr/bin/decode-dimms line 41.
BEGIN failed--compilation aborted at /usr/bin/decode-dimms line 41.
root@qt5222:~# apt-get install perl-module-carp
Signed-off-by: Ricardo Ribalda Delgado <ricardo@ribalda.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>