public inbox for isar-users@googlegroups.com
 help / color / mirror / Atom feed
* [RFC PATCH 0/5] Refactor mount logic
@ 2021-10-12 13:04 Adriaan Schmidt
  2021-10-12 13:04 ` [RFC PATCH 1/5] oe imports in central location Adriaan Schmidt
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Adriaan Schmidt @ 2021-10-12 13:04 UTC (permalink / raw)
  To: isar-users; +Cc: Adriaan Schmidt

After some reports on problems with mounting and unmounting, here is a proposal
to refactor the whole mount logic.

The main idea is to have all actual mounting happen in mounts.bbclass, which
also takes care of things like reference counting.
Any task that needs mounts uses a `do_my_task[mounts] = "..."` varflag annotation.

One thing that may need some thoughts: the original rootfs_do_mounts also
writes /etc/resolv.conf. This proposal introduces another varflag
(`do_my_task[buildchroot] = "1"`) for tasks that need that. Alternatively,
we could simply provide a function that needs to be called explicitly.

Current status: the code works, but there may be some issues with cleanup/unmounting.
The existing ("legacy") cleanup code in isar-events still finds active mounts,
which should actually be covered by cleanup code in mounts.bbclass.


Adriaan Schmidt (5):
  oe imports in central location
  meta: refactor containerization
  meta: add oe.utils
  meta: add mounts class
  meta: refactor to use the new mounting mechanism

 meta/classes/base.bbclass                     |  28 +
 meta/classes/buildchroot.bbclass              |  52 +-
 meta/classes/cpiogz-img.bbclass               |   3 +-
 meta/classes/dpkg-base.bbclass                |  49 +-
 meta/classes/dpkg-gbp.bbclass                 |   2 -
 meta/classes/dpkg.bbclass                     |   4 +-
 meta/classes/ext4-img.bbclass                 |   3 +-
 meta/classes/fit-img.bbclass                  |   4 +-
 .../classes/image-container-extension.bbclass |   3 +-
 meta/classes/image-sdk-extension.bbclass      |   6 +-
 meta/classes/image-tools-extension.bbclass    |   4 +-
 meta/classes/image.bbclass                    |  30 +-
 meta/classes/initramfs.bbclass                |   2 +-
 meta/classes/isar-events.bbclass              |   4 +-
 meta/classes/mounts.bbclass                   | 271 +++++++++
 meta/classes/patch.bbclass                    |   5 -
 meta/classes/rootfs.bbclass                   |  50 +-
 meta/classes/ubi-img.bbclass                  |   3 +-
 meta/classes/ubifs-img.bbclass                |   3 +-
 meta/classes/vm-img.bbclass                   |   7 +-
 meta/classes/wic-img.bbclass                  |  31 +-
 meta/conf/bitbake.conf                        |   2 +-
 meta/lib/oe/utils.py                          | 569 ++++++++++++++++++
 .../isar-bootstrap/isar-bootstrap.inc         |  43 +-
 .../buildchroot/buildchroot.inc               |   8 +-
 25 files changed, 976 insertions(+), 210 deletions(-)
 create mode 100644 meta/classes/mounts.bbclass
 create mode 100644 meta/lib/oe/utils.py

-- 
2.30.2


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [RFC PATCH 1/5] oe imports in central location
  2021-10-12 13:04 [RFC PATCH 0/5] Refactor mount logic Adriaan Schmidt
@ 2021-10-12 13:04 ` Adriaan Schmidt
  2021-10-12 13:04 ` [RFC PATCH 2/5] meta: refactor containerization Adriaan Schmidt
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Adriaan Schmidt @ 2021-10-12 13:04 UTC (permalink / raw)
  To: isar-users; +Cc: Adriaan Schmidt

Code taken from OE. Allows to import OE modules
without first setting python paths manually.

Signed-off-by: Adriaan Schmidt <adriaan.schmidt@siemens.com>
---
 meta/classes/base.bbclass      | 28 ++++++++++++++++++++++++++++
 meta/classes/dpkg-base.bbclass |  5 -----
 meta/classes/patch.bbclass     |  5 -----
 3 files changed, 28 insertions(+), 10 deletions(-)

diff --git a/meta/classes/base.bbclass b/meta/classes/base.bbclass
index 72d4cc0..13134ff 100644
--- a/meta/classes/base.bbclass
+++ b/meta/classes/base.bbclass
@@ -21,6 +21,34 @@
 THISDIR = "${@os.path.dirname(d.getVar('FILE', True))}"
 FILESPATH = "${@base_set_filespath(["${FILE_DIRNAME}/${PF}", "${FILE_DIRNAME}/${P}", "${FILE_DIRNAME}/${PN}", "${FILE_DIRNAME}/files", "${FILE_DIRNAME}"], d)}"
 
+OE_IMPORTS += "os sys time oe.path oe.patch"
+OE_IMPORTS[type] = "list"
+
+def oe_import(d):
+    import sys
+
+    bbpath = d.getVar("BBPATH").split(":")
+    sys.path[0:0] = [os.path.join(dir, "lib") for dir in bbpath]
+
+    def inject(name, value):
+        """Make a python object accessible from the metadata"""
+        if hasattr(bb.utils, "_context"):
+            bb.utils._context[name] = value
+        else:
+            __builtins__[name] = value
+
+    import oe.data
+    for toimport in oe.data.typed_value("OE_IMPORTS", d):
+        try:
+            imported = __import__(toimport)
+            inject(toimport.split(".", 1)[0], imported)
+        except AttributeError as e:
+            bb.error("Error importing OE modules: %s" % str(e))
+    return ""
+
+# We need the oe module name space early (before INHERITs get added)
+OE_IMPORTED := "${@oe_import(d)}"
+
 def get_deb_host_arch():
     import subprocess
     host_arch = subprocess.check_output(
diff --git a/meta/classes/dpkg-base.bbclass b/meta/classes/dpkg-base.bbclass
index 83500da..8a39a6d 100644
--- a/meta/classes/dpkg-base.bbclass
+++ b/meta/classes/dpkg-base.bbclass
@@ -248,11 +248,6 @@ do_deploy_deb[lockfiles] = "${REPO_ISAR_DIR}/isar.lock"
 do_deploy_deb[dirs] = "${S}"
 
 python do_devshell() {
-    import sys
-
-    oe_lib_path = os.path.join(d.getVar('LAYERDIR_core'), 'lib')
-    sys.path.insert(0, oe_lib_path)
-
     bb.build.exec_func('dpkg_do_mounts', d)
 
     isar_export_proxies(d)
diff --git a/meta/classes/patch.bbclass b/meta/classes/patch.bbclass
index 3060755..2337693 100644
--- a/meta/classes/patch.bbclass
+++ b/meta/classes/patch.bbclass
@@ -92,11 +92,6 @@ def should_apply(parm, d):
 should_apply[vardepsexclude] = "DATE SRCDATE"
 
 python patch_do_patch() {
-    import sys
-
-    oe_lib_path = os.path.join(d.getVar('LAYERDIR_core'), 'lib')
-    sys.path.insert(0, oe_lib_path)
-
     import oe.patch
 
     patchsetmap = {
-- 
2.30.2


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [RFC PATCH 2/5] meta: refactor containerization
  2021-10-12 13:04 [RFC PATCH 0/5] Refactor mount logic Adriaan Schmidt
  2021-10-12 13:04 ` [RFC PATCH 1/5] oe imports in central location Adriaan Schmidt
@ 2021-10-12 13:04 ` Adriaan Schmidt
  2021-10-13 10:17   ` Jan Kiszka
  2021-10-12 13:04 ` [RFC PATCH 3/5] meta: add oe.utils Adriaan Schmidt
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Adriaan Schmidt @ 2021-10-12 13:04 UTC (permalink / raw)
  To: isar-users; +Cc: Adriaan Schmidt

Don't unmount, but instead use "stay on one filesystem" options

Signed-off-by: Adriaan Schmidt <adriaan.schmidt@siemens.com>
---
 meta/classes/image-container-extension.bbclass | 3 ++-
 meta/classes/image-sdk-extension.bbclass       | 6 +-----
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/meta/classes/image-container-extension.bbclass b/meta/classes/image-container-extension.bbclass
index e26604a..762be4a 100644
--- a/meta/classes/image-container-extension.bbclass
+++ b/meta/classes/image-container-extension.bbclass
@@ -19,6 +19,7 @@ containerize_rootfs() {
     # prepare OCI container image skeleton
     bbdebug 1 "prepare OCI container image skeleton"
     rm -rf "${oci_img_dir}"
+
     sudo umoci init --layout "${oci_img_dir}"
     sudo umoci new --image "${oci_img_dir}:${empty_tag}"
     sudo umoci config --image "${oci_img_dir}:${empty_tag}" \
@@ -27,7 +28,7 @@ containerize_rootfs() {
         "${oci_img_dir}_unpacked"
 
     # add root filesystem as the flesh of the skeleton
-    sudo cp -a "${rootfs}"/* "${oci_img_dir}_unpacked/rootfs/"
+    sudo sh -c "tar --one-file-system -C \"${rootfs}\" -cpf - . | tar -C \"${oci_img_dir}_unpacked/rootfs\" --strip-components=1 -xpf -"
     # clean-up temporary files
     sudo find "${oci_img_dir}_unpacked/rootfs/tmp" -mindepth 1 -delete
 
diff --git a/meta/classes/image-sdk-extension.bbclass b/meta/classes/image-sdk-extension.bbclass
index 426b925..2cc5933 100644
--- a/meta/classes/image-sdk-extension.bbclass
+++ b/meta/classes/image-sdk-extension.bbclass
@@ -33,15 +33,11 @@ do_populate_sdk() {
         sudo rm -f ${SDKCHROOT_DIR}/etc/apt/sources.list.d/isar-apt.list
     fi
 
-    sudo umount -R ${SDKCHROOT_DIR}/dev || true
-    sudo umount ${SDKCHROOT_DIR}/proc || true
-    sudo umount -R ${SDKCHROOT_DIR}/sys || true
-
     # Remove setup scripts
     sudo rm -f ${SDKCHROOT_DIR}/chroot-setup.sh ${SDKCHROOT_DIR}/configscript.sh
 
     # Make all links relative
-    for link in $(find ${SDKCHROOT_DIR}/ -type l); do
+    for link in $(find ${SDKCHROOT_DIR}/ -type l -xdev); do
         target=$(readlink $link)
 
         if [ "${target#/}" != "${target}" ]; then
-- 
2.30.2


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [RFC PATCH 3/5] meta: add oe.utils
  2021-10-12 13:04 [RFC PATCH 0/5] Refactor mount logic Adriaan Schmidt
  2021-10-12 13:04 ` [RFC PATCH 1/5] oe imports in central location Adriaan Schmidt
  2021-10-12 13:04 ` [RFC PATCH 2/5] meta: refactor containerization Adriaan Schmidt
@ 2021-10-12 13:04 ` Adriaan Schmidt
  2021-10-13 10:17   ` Jan Kiszka
  2021-10-12 13:04 ` [RFC PATCH 4/5] meta: add mounts class Adriaan Schmidt
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Adriaan Schmidt @ 2021-10-12 13:04 UTC (permalink / raw)
  To: isar-users; +Cc: Adriaan Schmidt

Taken unmodified from yocto-3.3.2 (commit 31c639eb8664059eb4ed711be9173c223b4cc940)

Signed-off-by: Adriaan Schmidt <adriaan.schmidt@siemens.com>
---
 meta/classes/base.bbclass |   2 +-
 meta/lib/oe/utils.py      | 569 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 570 insertions(+), 1 deletion(-)
 create mode 100644 meta/lib/oe/utils.py

diff --git a/meta/classes/base.bbclass b/meta/classes/base.bbclass
index 13134ff..f42dcba 100644
--- a/meta/classes/base.bbclass
+++ b/meta/classes/base.bbclass
@@ -21,7 +21,7 @@
 THISDIR = "${@os.path.dirname(d.getVar('FILE', True))}"
 FILESPATH = "${@base_set_filespath(["${FILE_DIRNAME}/${PF}", "${FILE_DIRNAME}/${P}", "${FILE_DIRNAME}/${PN}", "${FILE_DIRNAME}/files", "${FILE_DIRNAME}"], d)}"
 
-OE_IMPORTS += "os sys time oe.path oe.patch"
+OE_IMPORTS += "os sys time oe.path oe.patch oe.utils"
 OE_IMPORTS[type] = "list"
 
 def oe_import(d):
diff --git a/meta/lib/oe/utils.py b/meta/lib/oe/utils.py
new file mode 100644
index 0000000..a84039f
--- /dev/null
+++ b/meta/lib/oe/utils.py
@@ -0,0 +1,569 @@
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+import subprocess
+import multiprocessing
+import traceback
+
+def read_file(filename):
+    try:
+        f = open( filename, "r" )
+    except IOError as reason:
+        return "" # WARNING: can't raise an error now because of the new RDEPENDS handling. This is a bit ugly. :M:
+    else:
+        data = f.read().strip()
+        f.close()
+        return data
+    return None
+
+def ifelse(condition, iftrue = True, iffalse = False):
+    if condition:
+        return iftrue
+    else:
+        return iffalse
+
+def conditional(variable, checkvalue, truevalue, falsevalue, d):
+    if d.getVar(variable) == checkvalue:
+        return truevalue
+    else:
+        return falsevalue
+
+def vartrue(var, iftrue, iffalse, d):
+    import oe.types
+    if oe.types.boolean(d.getVar(var)):
+        return iftrue
+    else:
+        return iffalse
+
+def less_or_equal(variable, checkvalue, truevalue, falsevalue, d):
+    if float(d.getVar(variable)) <= float(checkvalue):
+        return truevalue
+    else:
+        return falsevalue
+
+def version_less_or_equal(variable, checkvalue, truevalue, falsevalue, d):
+    result = bb.utils.vercmp_string(d.getVar(variable), checkvalue)
+    if result <= 0:
+        return truevalue
+    else:
+        return falsevalue
+
+def both_contain(variable1, variable2, checkvalue, d):
+    val1 = d.getVar(variable1)
+    val2 = d.getVar(variable2)
+    val1 = set(val1.split())
+    val2 = set(val2.split())
+    if isinstance(checkvalue, str):
+        checkvalue = set(checkvalue.split())
+    else:
+        checkvalue = set(checkvalue)
+    if checkvalue.issubset(val1) and checkvalue.issubset(val2):
+        return " ".join(checkvalue)
+    else:
+        return ""
+
+def set_intersect(variable1, variable2, d):
+    """
+    Expand both variables, interpret them as lists of strings, and return the
+    intersection as a flattened string.
+
+    For example:
+    s1 = "a b c"
+    s2 = "b c d"
+    s3 = set_intersect(s1, s2)
+    => s3 = "b c"
+    """
+    val1 = set(d.getVar(variable1).split())
+    val2 = set(d.getVar(variable2).split())
+    return " ".join(val1 & val2)
+
+def prune_suffix(var, suffixes, d):
+    # See if var ends with any of the suffixes listed and
+    # remove it if found
+    for suffix in suffixes:
+        if suffix and var.endswith(suffix):
+            var = var[:-len(suffix)]
+
+    prefix = d.getVar("MLPREFIX")
+    if prefix and var.startswith(prefix):
+        var = var[len(prefix):]
+
+    return var
+
+def str_filter(f, str, d):
+    from re import match
+    return " ".join([x for x in str.split() if match(f, x, 0)])
+
+def str_filter_out(f, str, d):
+    from re import match
+    return " ".join([x for x in str.split() if not match(f, x, 0)])
+
+def build_depends_string(depends, task):
+    """Append a taskname to a string of dependencies as used by the [depends] flag"""
+    return " ".join(dep + ":" + task for dep in depends.split())
+
+def inherits(d, *classes):
+    """Return True if the metadata inherits any of the specified classes"""
+    return any(bb.data.inherits_class(cls, d) for cls in classes)
+
+def features_backfill(var,d):
+    # This construct allows the addition of new features to variable specified
+    # as var
+    # Example for var = "DISTRO_FEATURES"
+    # This construct allows the addition of new features to DISTRO_FEATURES
+    # that if not present would disable existing functionality, without
+    # disturbing distributions that have already set DISTRO_FEATURES.
+    # Distributions wanting to elide a value in DISTRO_FEATURES_BACKFILL should
+    # add the feature to DISTRO_FEATURES_BACKFILL_CONSIDERED
+    features = (d.getVar(var) or "").split()
+    backfill = (d.getVar(var+"_BACKFILL") or "").split()
+    considered = (d.getVar(var+"_BACKFILL_CONSIDERED") or "").split()
+
+    addfeatures = []
+    for feature in backfill:
+        if feature not in features and feature not in considered:
+            addfeatures.append(feature)
+
+    if addfeatures:
+        d.appendVar(var, " " + " ".join(addfeatures))
+
+def all_distro_features(d, features, truevalue="1", falsevalue=""):
+    """
+    Returns truevalue if *all* given features are set in DISTRO_FEATURES,
+    else falsevalue. The features can be given as single string or anything
+    that can be turned into a set.
+
+    This is a shorter, more flexible version of
+    bb.utils.contains("DISTRO_FEATURES", features, truevalue, falsevalue, d).
+
+    Without explicit true/false values it can be used directly where
+    Python expects a boolean:
+       if oe.utils.all_distro_features(d, "foo bar"):
+           bb.fatal("foo and bar are mutually exclusive DISTRO_FEATURES")
+
+    With just a truevalue, it can be used to include files that are meant to be
+    used only when requested via DISTRO_FEATURES:
+       require ${@ oe.utils.all_distro_features(d, "foo bar", "foo-and-bar.inc")
+    """
+    return bb.utils.contains("DISTRO_FEATURES", features, truevalue, falsevalue, d)
+
+def any_distro_features(d, features, truevalue="1", falsevalue=""):
+    """
+    Returns truevalue if at least *one* of the given features is set in DISTRO_FEATURES,
+    else falsevalue. The features can be given as single string or anything
+    that can be turned into a set.
+
+    This is a shorter, more flexible version of
+    bb.utils.contains_any("DISTRO_FEATURES", features, truevalue, falsevalue, d).
+
+    Without explicit true/false values it can be used directly where
+    Python expects a boolean:
+       if not oe.utils.any_distro_features(d, "foo bar"):
+           bb.fatal("foo, bar or both must be set in DISTRO_FEATURES")
+
+    With just a truevalue, it can be used to include files that are meant to be
+    used only when requested via DISTRO_FEATURES:
+       require ${@ oe.utils.any_distro_features(d, "foo bar", "foo-or-bar.inc")
+
+    """
+    return bb.utils.contains_any("DISTRO_FEATURES", features, truevalue, falsevalue, d)
+
+def parallel_make(d, makeinst=False):
+    """
+    Return the integer value for the number of parallel threads to use when
+    building, scraped out of PARALLEL_MAKE. If no parallelization option is
+    found, returns None
+
+    e.g. if PARALLEL_MAKE = "-j 10", this will return 10 as an integer.
+    """
+    if makeinst:
+        pm = (d.getVar('PARALLEL_MAKEINST') or '').split()
+    else:
+        pm = (d.getVar('PARALLEL_MAKE') or '').split()
+    # look for '-j' and throw other options (e.g. '-l') away
+    while pm:
+        opt = pm.pop(0)
+        if opt == '-j':
+            v = pm.pop(0)
+        elif opt.startswith('-j'):
+            v = opt[2:].strip()
+        else:
+            continue
+
+        return int(v)
+
+    return ''
+
+def parallel_make_argument(d, fmt, limit=None, makeinst=False):
+    """
+    Helper utility to construct a parallel make argument from the number of
+    parallel threads specified in PARALLEL_MAKE.
+
+    Returns the input format string `fmt` where a single '%d' will be expanded
+    with the number of parallel threads to use. If `limit` is specified, the
+    number of parallel threads will be no larger than it. If no parallelization
+    option is found in PARALLEL_MAKE, returns an empty string
+
+    e.g. if PARALLEL_MAKE = "-j 10", parallel_make_argument(d, "-n %d") will return
+    "-n 10"
+    """
+    v = parallel_make(d, makeinst)
+    if v:
+        if limit:
+            v = min(limit, v)
+        return fmt % v
+    return ''
+
+def packages_filter_out_system(d):
+    """
+    Return a list of packages from PACKAGES with the "system" packages such as
+    PN-dbg PN-doc PN-locale-eb-gb removed.
+    """
+    pn = d.getVar('PN')
+    blacklist = [pn + suffix for suffix in ('', '-dbg', '-dev', '-doc', '-locale', '-staticdev', '-src')]
+    localepkg = pn + "-locale-"
+    pkgs = []
+
+    for pkg in d.getVar('PACKAGES').split():
+        if pkg not in blacklist and localepkg not in pkg:
+            pkgs.append(pkg)
+    return pkgs
+
+def getstatusoutput(cmd):
+    return subprocess.getstatusoutput(cmd)
+
+
+def trim_version(version, num_parts=2):
+    """
+    Return just the first <num_parts> of <version>, split by periods.  For
+    example, trim_version("1.2.3", 2) will return "1.2".
+    """
+    if type(version) is not str:
+        raise TypeError("Version should be a string")
+    if num_parts < 1:
+        raise ValueError("Cannot split to parts < 1")
+
+    parts = version.split(".")
+    trimmed = ".".join(parts[:num_parts])
+    return trimmed
+
+def cpu_count(at_least=1):
+    cpus = len(os.sched_getaffinity(0))
+    return max(cpus, at_least)
+
+def execute_pre_post_process(d, cmds):
+    if cmds is None:
+        return
+
+    for cmd in cmds.strip().split(';'):
+        cmd = cmd.strip()
+        if cmd != '':
+            bb.note("Executing %s ..." % cmd)
+            bb.build.exec_func(cmd, d)
+
+# For each item in items, call the function 'target' with item as the first 
+# argument, extraargs as the other arguments and handle any exceptions in the
+# parent thread
+def multiprocess_launch(target, items, d, extraargs=None):
+
+    class ProcessLaunch(multiprocessing.Process):
+        def __init__(self, *args, **kwargs):
+            multiprocessing.Process.__init__(self, *args, **kwargs)
+            self._pconn, self._cconn = multiprocessing.Pipe()
+            self._exception = None
+            self._result = None
+
+        def run(self):
+            try:
+                ret = self._target(*self._args, **self._kwargs)
+                self._cconn.send((None, ret))
+            except Exception as e:
+                tb = traceback.format_exc()
+                self._cconn.send((e, tb))
+
+        def update(self):
+            if self._pconn.poll():
+                (e, tb) = self._pconn.recv()
+                if e is not None:
+                    self._exception = (e, tb)
+                else:
+                    self._result = tb
+
+        @property
+        def exception(self):
+            self.update()
+            return self._exception
+
+        @property
+        def result(self):
+            self.update()
+            return self._result
+
+    max_process = int(d.getVar("BB_NUMBER_THREADS") or os.cpu_count() or 1)
+    launched = []
+    errors = []
+    results = []
+    items = list(items)
+    while (items and not errors) or launched:
+        if not errors and items and len(launched) < max_process:
+            args = (items.pop(),)
+            if extraargs is not None:
+                args = args + extraargs
+            p = ProcessLaunch(target=target, args=args)
+            p.start()
+            launched.append(p)
+        for q in launched:
+            # Have to manually call update() to avoid deadlocks. The pipe can be full and
+            # transfer stalled until we try and read the results object but the subprocess won't exit
+            # as it still has data to write (https://bugs.python.org/issue8426)
+            q.update()
+            # The finished processes are joined when calling is_alive()
+            if not q.is_alive():
+                if q.exception:
+                    errors.append(q.exception)
+                if q.result:
+                    results.append(q.result)
+                launched.remove(q)
+    # Paranoia doesn't hurt
+    for p in launched:
+        p.join()
+    if errors:
+        msg = ""
+        for (e, tb) in errors:
+            if isinstance(e, subprocess.CalledProcessError) and e.output:
+                msg = msg + str(e) + "\n"
+                msg = msg + "Subprocess output:"
+                msg = msg + e.output.decode("utf-8", errors="ignore")
+            else:
+                msg = msg + str(e) + ": " + str(tb) + "\n"
+        bb.fatal("Fatal errors occurred in subprocesses:\n%s" % msg)
+    return results
+
+def squashspaces(string):
+    import re
+    return re.sub(r"\s+", " ", string).strip()
+
+def format_pkg_list(pkg_dict, ret_format=None):
+    output = []
+
+    if ret_format == "arch":
+        for pkg in sorted(pkg_dict):
+            output.append("%s %s" % (pkg, pkg_dict[pkg]["arch"]))
+    elif ret_format == "file":
+        for pkg in sorted(pkg_dict):
+            output.append("%s %s %s" % (pkg, pkg_dict[pkg]["filename"], pkg_dict[pkg]["arch"]))
+    elif ret_format == "ver":
+        for pkg in sorted(pkg_dict):
+            output.append("%s %s %s" % (pkg, pkg_dict[pkg]["arch"], pkg_dict[pkg]["ver"]))
+    elif ret_format == "deps":
+        for pkg in sorted(pkg_dict):
+            for dep in pkg_dict[pkg]["deps"]:
+                output.append("%s|%s" % (pkg, dep))
+    else:
+        for pkg in sorted(pkg_dict):
+            output.append(pkg)
+
+    output_str = '\n'.join(output)
+
+    if output_str:
+        # make sure last line is newline terminated
+        output_str += '\n'
+
+    return output_str
+
+
+# Helper function to get the host compiler version
+# Do not assume the compiler is gcc
+def get_host_compiler_version(d, taskcontextonly=False):
+    import re, subprocess
+
+    if taskcontextonly and d.getVar('BB_WORKERCONTEXT') != '1':
+        return
+
+    compiler = d.getVar("BUILD_CC")
+    # Get rid of ccache since it is not present when parsing.
+    if compiler.startswith('ccache '):
+        compiler = compiler[7:]
+    try:
+        env = os.environ.copy()
+        # datastore PATH does not contain session PATH as set by environment-setup-...
+        # this breaks the install-buildtools use-case
+        # env["PATH"] = d.getVar("PATH")
+        output = subprocess.check_output("%s --version" % compiler, \
+                    shell=True, env=env, stderr=subprocess.STDOUT).decode("utf-8")
+    except subprocess.CalledProcessError as e:
+        bb.fatal("Error running %s --version: %s" % (compiler, e.output.decode("utf-8")))
+
+    match = re.match(r".* (\d+\.\d+)\.\d+.*", output.split('\n')[0])
+    if not match:
+        bb.fatal("Can't get compiler version from %s --version output" % compiler)
+
+    version = match.group(1)
+    return compiler, version
+
+
+def host_gcc_version(d, taskcontextonly=False):
+    import re, subprocess
+
+    if taskcontextonly and d.getVar('BB_WORKERCONTEXT') != '1':
+        return
+
+    compiler = d.getVar("BUILD_CC")
+    # Get rid of ccache since it is not present when parsing.
+    if compiler.startswith('ccache '):
+        compiler = compiler[7:]
+    try:
+        env = os.environ.copy()
+        env["PATH"] = d.getVar("PATH")
+        output = subprocess.check_output("%s --version" % compiler, \
+                    shell=True, env=env, stderr=subprocess.STDOUT).decode("utf-8")
+    except subprocess.CalledProcessError as e:
+        bb.fatal("Error running %s --version: %s" % (compiler, e.output.decode("utf-8")))
+
+    match = re.match(r".* (\d+\.\d+)\.\d+.*", output.split('\n')[0])
+    if not match:
+        bb.fatal("Can't get compiler version from %s --version output" % compiler)
+
+    version = match.group(1)
+    return "-%s" % version if version in ("4.8", "4.9") else ""
+
+
+def get_multilib_datastore(variant, d):
+    localdata = bb.data.createCopy(d)
+    if variant:
+        overrides = localdata.getVar("OVERRIDES", False) + ":virtclass-multilib-" + variant
+        localdata.setVar("OVERRIDES", overrides)
+        localdata.setVar("MLPREFIX", variant + "-")
+    else:
+        origdefault = localdata.getVar("DEFAULTTUNE_MULTILIB_ORIGINAL")
+        if origdefault:
+            localdata.setVar("DEFAULTTUNE", origdefault)
+        overrides = localdata.getVar("OVERRIDES", False).split(":")
+        overrides = ":".join([x for x in overrides if not x.startswith("virtclass-multilib-")])
+        localdata.setVar("OVERRIDES", overrides)
+        localdata.setVar("MLPREFIX", "")
+    return localdata
+
+#
+# Python 2.7 doesn't have threaded pools (just multiprocessing)
+# so implement a version here
+#
+
+from queue import Queue
+from threading import Thread
+
+class ThreadedWorker(Thread):
+    """Thread executing tasks from a given tasks queue"""
+    def __init__(self, tasks, worker_init, worker_end):
+        Thread.__init__(self)
+        self.tasks = tasks
+        self.daemon = True
+
+        self.worker_init = worker_init
+        self.worker_end = worker_end
+
+    def run(self):
+        from queue import Empty
+
+        if self.worker_init is not None:
+            self.worker_init(self)
+
+        while True:
+            try:
+                func, args, kargs = self.tasks.get(block=False)
+            except Empty:
+                if self.worker_end is not None:
+                    self.worker_end(self)
+                break
+
+            try:
+                func(self, *args, **kargs)
+            except Exception as e:
+                print(e)
+            finally:
+                self.tasks.task_done()
+
+class ThreadedPool:
+    """Pool of threads consuming tasks from a queue"""
+    def __init__(self, num_workers, num_tasks, worker_init=None,
+            worker_end=None):
+        self.tasks = Queue(num_tasks)
+        self.workers = []
+
+        for _ in range(num_workers):
+            worker = ThreadedWorker(self.tasks, worker_init, worker_end)
+            self.workers.append(worker)
+
+    def start(self):
+        for worker in self.workers:
+            worker.start()
+
+    def add_task(self, func, *args, **kargs):
+        """Add a task to the queue"""
+        self.tasks.put((func, args, kargs))
+
+    def wait_completion(self):
+        """Wait for completion of all the tasks in the queue"""
+        self.tasks.join()
+        for worker in self.workers:
+            worker.join()
+
+def write_ld_so_conf(d):
+    # Some utils like prelink may not have the correct target library paths
+    # so write an ld.so.conf to help them
+    ldsoconf = d.expand("${STAGING_DIR_TARGET}${sysconfdir}/ld.so.conf")
+    if os.path.exists(ldsoconf):
+        bb.utils.remove(ldsoconf)
+    bb.utils.mkdirhier(os.path.dirname(ldsoconf))
+    with open(ldsoconf, "w") as f:
+        f.write(d.getVar("base_libdir") + '\n')
+        f.write(d.getVar("libdir") + '\n')
+
+class ImageQAFailed(Exception):
+    def __init__(self, description, name=None, logfile=None):
+        self.description = description
+        self.name = name
+        self.logfile=logfile
+
+    def __str__(self):
+        msg = 'Function failed: %s' % self.name
+        if self.description:
+            msg = msg + ' (%s)' % self.description
+
+        return msg
+
+def sh_quote(string):
+    import shlex
+    return shlex.quote(string)
+
+def directory_size(root, blocksize=4096):
+    """
+    Calculate the size of the directory, taking into account hard links,
+    rounding up every size to multiples of the blocksize.
+    """
+    def roundup(size):
+        """
+        Round the size up to the nearest multiple of the block size.
+        """
+        import math
+        return math.ceil(size / blocksize) * blocksize
+
+    def getsize(filename):
+        """
+        Get the size of the filename, not following symlinks, taking into
+        account hard links.
+        """
+        stat = os.lstat(filename)
+        if stat.st_ino not in inodes:
+            inodes.add(stat.st_ino)
+            return stat.st_size
+        else:
+            return 0
+
+    inodes = set()
+    total = 0
+    for root, dirs, files in os.walk(root):
+        total += sum(roundup(getsize(os.path.join(root, name))) for name in files)
+        total += roundup(getsize(root))
+    return total
-- 
2.30.2


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [RFC PATCH 4/5] meta: add mounts class
  2021-10-12 13:04 [RFC PATCH 0/5] Refactor mount logic Adriaan Schmidt
                   ` (2 preceding siblings ...)
  2021-10-12 13:04 ` [RFC PATCH 3/5] meta: add oe.utils Adriaan Schmidt
@ 2021-10-12 13:04 ` Adriaan Schmidt
  2021-10-13 10:31   ` Jan Kiszka
  2021-10-12 13:04 ` [RFC PATCH 5/5] meta: refactor to use the new mounting mechanism Adriaan Schmidt
  2021-11-22 14:45 ` [RFC PATCH 0/5] Refactor mount logic Anton Mikanovich
  5 siblings, 1 reply; 14+ messages in thread
From: Adriaan Schmidt @ 2021-10-12 13:04 UTC (permalink / raw)
  To: isar-users; +Cc: Adriaan Schmidt

The new mounts.bbclass allows annotation of tasks, to describe which directories
need to be mounted. All mounting and unmounting is then done automatically,
and reference counting is used on a per-mountpoint basis to determine when
umounts need to happen.

Mounts are described as "cmd:src:dest", where cmd is
  * `bind` for a simple bind mount
  * `rbind` for a recursive bind mount
  * `pbind` for a "private" bind mount
  * `proc` for a "-t proc" mount

A task is annotated using the varflag [mounts].

If mounting should not happen automatically before/after the task, you can set
do_task[mounts-noauto] = "1", in which case you can manually call
`mount_task_prefunc` and `mount_task_postfunc` at more convenient times.

Signed-off-by: Adriaan Schmidt <adriaan.schmidt@siemens.com>
---
 meta/classes/mounts.bbclass | 271 ++++++++++++++++++++++++++++++++++++
 meta/conf/bitbake.conf      |   2 +-
 2 files changed, 272 insertions(+), 1 deletion(-)
 create mode 100644 meta/classes/mounts.bbclass

diff --git a/meta/classes/mounts.bbclass b/meta/classes/mounts.bbclass
new file mode 100644
index 0000000..de2375e
--- /dev/null
+++ b/meta/classes/mounts.bbclass
@@ -0,0 +1,271 @@
+
+python () {
+    # find all tasks that request [mounts], and hook up our functions
+    for task in [t for t in d.keys() if d.getVarFlag(t, 'task') and d.getVarFlag(t, 'mounts')]:
+        if d.getVarFlag(task, 'mounts-noauto') == "1":
+            continue
+        d.prependVarFlag(task, 'prefuncs', "mounts_task_prefunc ")
+        d.appendVarFlag(task, 'postfuncs', " mounts_task_postfunc")
+}
+
+MOUNTS_DB = "${TMPDIR}/mounts"
+MOUNTS_CONTEXT ?= "default"
+MOUNTS_LOCK = "${MOUNTS_DB}/${MOUNTS_CONTEXT}.mountlock"
+MOUNTS_TAB = "${MOUNTS_DB}/${MOUNTS_CONTEXT}.mounttab"
+
+def get_requested_mounts(d, task=None):
+    if task is None:
+        task = d.getVar('BB_CURRENTTASK')
+        if not task:
+            bb.fatal("mount code running without task context!?")
+    if task.startswith("do_"):
+        task = task[3:]
+    mounts = (d.getVarFlag("do_" + task, 'mounts') or "").split()
+    mounts_out = []
+    for m in mounts:
+        ms = m.split(':')
+        if len(ms) == 3 and ms[0] in 'bind rbind pbind proc'.split():
+            mounts_out.append(ms)
+        else:
+            bb.error(f"Invalid mount spec: {':'.join(ms)}")
+    return mounts_out
+
+def read_mtab(d, mtab_file=None):
+    from collections import namedtuple
+    Mount = namedtuple('Mount', 'cmd source target count')
+    if mtab_file is None:
+        mtab_file = d.getVar("MOUNTS_TAB", True)
+    # mtab format is "cmd:source:target:count"
+    try:
+        with open(mtab_file, 'r') as f:
+            data = [line.strip().split(':') for line in f.readlines()]
+    except FileNotFoundError:
+        return {}
+    mounts = {}
+    for m in data:
+        if not len(m) == 4:
+            bb.fatal("corrupt mtab!?")
+        mt = Mount._make(m)
+        mounts[mt.target] = mt._replace(count=int(mt.count))
+    return mounts
+
+def write_mtab(d, mtab, mtab_file=None):
+    if mtab_file is None:
+        mtab_file = d.getVar("MOUNTS_TAB", True)
+    with open(mtab_file, 'w') as f:
+        for cmd, source, target, count in mtab.values():
+            f.write(f"{cmd}:{source}:{target}:{count}\n")
+
+def shorten_path(x, n=3):
+    xs = x.split('/')
+    if len(xs) <= n:
+        return '/'.join(xs)
+    return '.../'+'/'.join(xs[-3:])
+
+mount_bind() {
+    sudo -s <<'EOSUDO'
+        SOURCE="${@d.getVar('MOUNT_ARG_SOURCE')}"
+        TARGET="${@d.getVar('MOUNT_ARG_TARGET')}"
+        mkdir -p "$TARGET"
+        mountpoint -q "$TARGET" || mount --bind "$SOURCE" "$TARGET"
+EOSUDO
+}
+
+umount_bind() {
+    sudo -s <<'EOSUDO'
+        TARGET="${@d.getVar('UMOUNT_ARG_TARGET')}"
+        mountpoint -q "$TARGET" && umount "$TARGET"
+EOSUDO
+}
+
+mount_rbind() {
+    sudo -s <<'EOSUDO'
+        SOURCE="${@d.getVar('MOUNT_ARG_SOURCE')}"
+        TARGET="${@d.getVar('MOUNT_ARG_TARGET')}"
+        mkdir -p "$TARGET"
+        mountpoint -q "$TARGET" || mount --rbind "$SOURCE" "$TARGET"
+        mount --make-rslave "$TARGET"
+EOSUDO
+}
+
+umount_rbind() {
+    sudo -s <<'EOSUDO'
+        TARGET="${@d.getVar('UMOUNT_ARG_TARGET')}"
+        mountpoint -q "$TARGET" && umount -R "$TARGET"
+EOSUDO
+}
+
+mount_pbind() {
+    sudo -s <<'EOSUDO'
+        SOURCE="${@d.getVar('MOUNT_ARG_SOURCE')}"
+        TARGET="${@d.getVar('MOUNT_ARG_TARGET')}"
+        mkdir -p "$TARGET"
+        mountpoint -q "$TARGET" || mount --bind --make-private "$SOURCE" "$TARGET"
+EOSUDO
+}
+
+umount_pbind() {
+    sudo -s <<'EOSUDO'
+        TARGET="${@d.getVar('UMOUNT_ARG_TARGET')}"
+        mountpoint -q "$TARGET" && umount "$TARGET"
+EOSUDO
+}
+
+mount_proc() {
+    sudo -s <<'EOSUDO'
+        TARGET="${@d.getVar('MOUNT_ARG_TARGET')}"
+        mkdir -p "$TARGET"
+        mountpoint -q "$TARGET" || mount -t proc none "$TARGET"
+EOSUDO
+}
+
+umount_proc() {
+    sudo -s <<'EOSUDO'
+        TARGET="${@d.getVar('UMOUNT_ARG_TARGET')}"
+        mountpoint -q "$TARGET" && umount "$TARGET"
+EOSUDO
+}
+
+python mounts_task_prefunc () {
+    from collections import namedtuple
+    Mount = namedtuple('Mount', 'cmd source target count')
+    task = d.getVar('PN') + ':' + d.getVar('BB_CURRENTTASK')
+    lock = bb.utils.lockfile(d.getVar("MOUNTS_LOCK"))
+    mounts = get_requested_mounts(d)
+    mtab = read_mtab(d)
+    for cmd, source, target in mounts:
+        mt = mtab.get(target)
+        if mt:
+            count = mt.count + 1
+            bb.debug(1, f"mount({task}): already mounted {shorten_path(mt.source)} at {shorten_path(mt.target)}, cnt={count}")
+            mtab[target] = mt._replace(count=count)
+            continue
+        bb.debug(1, f"mount({task}): mounting {shorten_path(source)} at {shorten_path(target)}, cnt=1")
+        d.setVar('MOUNT_ARG_SOURCE', source)
+        d.setVar('MOUNT_ARG_TARGET', target)
+        bb.build.exec_func('mount_' + cmd, d)
+        mtab[target] = Mount(cmd, source, target, 1)
+    write_mtab(d, mtab)
+    bb.utils.unlockfile(lock)
+}
+
+python mounts_task_postfunc () {
+    task = d.getVar('PN') + ':' + d.getVar('BB_CURRENTTASK')
+    lock = bb.utils.lockfile(d.getVar("MOUNTS_LOCK"))
+    mounts = get_requested_mounts(d)
+    mtab = read_mtab(d)
+
+    # release mounts
+    for cmd, source, target in mounts:
+        mt = mtab.get(target)
+        if mt is None:
+            bb.error(f"{target} not mounted. inconsistent mtab!?")
+            continue
+        count = mt.count - 1
+        bb.debug(1, f"umount({task}): releasing {shorten_path(target)}, cnt={count}")
+        mtab[target] = mt._replace(count=count)
+
+    # collect targets to unmount, in reverse order
+    umounts = []
+    for cmd, source, target in reversed(mounts):
+        mt = mtab.get(target)
+        if mt and mt.count == 0:
+            umounts.append(target)
+    for target, mt in mtab.items():
+        if mt.count < 0:
+            bb.error("count on {target} < 0. BUG!?!")
+        elif mt.count == 0 and not target in umounts:
+            umounts.append(target)
+
+    # now do the unmounting
+    for target in umounts:
+        try:
+            bb.debug(1, f"umount({task}): unmounting {shorten_path(target)}")
+            d.setVar('UMOUNT_ARG_TARGET', target)
+            bb.build.exec_func('umount_' + mt.cmd, d)
+            del mtab[target]
+        except bb.process.ExecutionError as e:
+            if e.exitcode == 32:
+                # target busy
+                bb.debug(1, f"umount({task}): target busy, moving on...")
+            else:
+                bb.warn(f"umount({task}): failed to unmount {target}: {str(e)}")
+
+    write_mtab(d, mtab)
+    bb.utils.unlockfile(lock)
+}
+
+# call postfunc explicitly in case a failing task has [mounts]
+addhandler mounts_taskfail
+python mounts_taskfail() {
+    task = d.getVar('BB_CURRENTTASK')
+    if not task:
+        bb.fatal("mount code running without task context!?")
+    if task.startswith("do_"):
+        task = task[3:]
+    if d.getVarFlag("do_" + task, 'mounts') and not d.getVarFlag("do_" + task, 'mounts-noauto') == "1":
+        bb.build.exec_func('mounts_task_postfunc', d)
+}
+mounts_taskfail[eventmask] = "bb.build.TaskFailed"
+
+# bb.event.Build* handlers don't have a task context.
+# Don't access MOUNTS_CONTEXT from here!
+addhandler mounts_init
+python mounts_init() {
+    bb.utils.remove(d.getVar('MOUNTS_DB') + "/*.mounttab")
+    bb.utils.remove(d.getVar('MOUNTS_DB') + "/*.mountlock")
+}
+mounts_init[eventmask] = "bb.event.BuildStarted"
+
+addhandler mounts_cleanup
+python mounts_cleanup() {
+    # look through MOUNTS_DB for contexts
+    import glob
+    import time
+    base = d.getVar('MOUNTS_DB')
+    locks = glob.glob(base + "/*.mountlock")
+    tabs = glob.glob(base + "/*.mounttab")
+
+    # there should not be any locks?
+    if len(locks) > 0:
+        bb.error(f"mounts_cleanup: someone still holding lock? ({str(locks)})")
+
+    # cleanup any existing contexts
+    for mtab_file in tabs:
+        mtab = read_mtab(d, mtab_file)
+        if len(mtab) > 0:
+            bb.note(f"mounts_cleanup: {mtab_file.split('/')[-1]}")
+
+        done = []
+        for target, mt in mtab.items():
+            if mt.count < 0:
+                bb.error("count on {target} < 0. BUG!?!")
+                continue
+            if mt.count > 0:
+                bb.error(f"cound on {target} > 0. BUG!?!")
+
+            bb.note(f"mounts_cleanup: unmounting {target}")
+            for i in range(10):
+                try:
+                    d.setVar('UMOUNT_ARG_TARGET', target)
+                    bb.build.exec_func('umount_' + mt.cmd, d)
+                    done.append(target)
+                    break
+                except bb.process.ExecutionError as e:
+                    if e.exitcode == 32:
+                        # target busy
+                        time.sleep(1)
+                        continue
+                    else:
+                        bb.error(f"umount({task}): {str(e)}")
+                        done.append(target)
+                        break
+                bb.warn(f"mounts_cleanup: failed to umount {target}")
+                done.append(target)
+
+        for target in done:
+            del mtab[target]
+        write_mtab(d, mtab, mtab_file)
+}
+
+mounts_cleanup[eventmask] = "bb.event.BuildCompleted"
diff --git a/meta/conf/bitbake.conf b/meta/conf/bitbake.conf
index 7f5901d..4726eaf 100644
--- a/meta/conf/bitbake.conf
+++ b/meta/conf/bitbake.conf
@@ -113,7 +113,7 @@ PARALLEL_MAKE ?= "-j ${@bb.utils.cpu_count()}"
 BBINCLUDELOGS ??= "yes"
 
 # Add event handlers for bitbake
-INHERIT += "isar-events"
+INHERIT += "mounts isar-events"
 
 include conf/local.conf
 include conf/multiconfig/${BB_CURRENT_MC}.conf
-- 
2.30.2


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [RFC PATCH 5/5] meta: refactor to use the new mounting mechanism
  2021-10-12 13:04 [RFC PATCH 0/5] Refactor mount logic Adriaan Schmidt
                   ` (3 preceding siblings ...)
  2021-10-12 13:04 ` [RFC PATCH 4/5] meta: add mounts class Adriaan Schmidt
@ 2021-10-12 13:04 ` Adriaan Schmidt
  2021-10-13 10:42   ` Jan Kiszka
  2021-11-22 14:45 ` [RFC PATCH 0/5] Refactor mount logic Anton Mikanovich
  5 siblings, 1 reply; 14+ messages in thread
From: Adriaan Schmidt @ 2021-10-12 13:04 UTC (permalink / raw)
  To: isar-users; +Cc: Adriaan Schmidt

Signed-off-by: Adriaan Schmidt <adriaan.schmidt@siemens.com>
---
 meta/classes/buildchroot.bbclass              | 52 ++++++++-----------
 meta/classes/cpiogz-img.bbclass               |  3 +-
 meta/classes/dpkg-base.bbclass                | 44 +++-------------
 meta/classes/dpkg-gbp.bbclass                 |  2 -
 meta/classes/dpkg.bbclass                     |  4 +-
 meta/classes/ext4-img.bbclass                 |  3 +-
 meta/classes/fit-img.bbclass                  |  4 +-
 meta/classes/image-tools-extension.bbclass    |  4 +-
 meta/classes/image.bbclass                    | 30 +++--------
 meta/classes/initramfs.bbclass                |  2 +-
 meta/classes/isar-events.bbclass              |  4 +-
 meta/classes/rootfs.bbclass                   | 50 ++++++------------
 meta/classes/ubi-img.bbclass                  |  3 +-
 meta/classes/ubifs-img.bbclass                |  3 +-
 meta/classes/vm-img.bbclass                   |  7 +--
 meta/classes/wic-img.bbclass                  | 31 ++++-------
 .../isar-bootstrap/isar-bootstrap.inc         | 43 +++++++++------
 .../buildchroot/buildchroot.inc               |  8 +--
 18 files changed, 104 insertions(+), 193 deletions(-)

diff --git a/meta/classes/buildchroot.bbclass b/meta/classes/buildchroot.bbclass
index e9eb9af..7c34834 100644
--- a/meta/classes/buildchroot.bbclass
+++ b/meta/classes/buildchroot.bbclass
@@ -13,50 +13,42 @@ python __anonymous() {
        (d.getVar('HOST_DISTRO') == "debian-stretch" and distro_arch == "i386"):
         dep = "buildchroot-target:do_build"
         rootfs = d.getVar('BUILDCHROOT_TARGET_DIR', True)
+        mount_ctx = "buildchroot-target"
     else:
         dep = "buildchroot-host:do_build"
         rootfs = d.getVar('BUILDCHROOT_HOST_DIR', True)
+        mount_ctx = "buildchroot-host"
 
     d.setVar('BUILDCHROOT_DEP', dep)
     d.setVar('BUILDCHROOT_DIR', rootfs)
+    d.setVar('MOUNTS_CONTEXT', mount_ctx + "-" + d.getVar('DISTRO') + "-" + d.getVar('DISTRO_ARCH'))
 }
 
-MOUNT_LOCKFILE = "${BUILDCHROOT_DIR}.lock"
+# mount settings
+BUILDCHROOT_MOUNTS = " \
+    bind:${REPO_ISAR_DIR}/${DISTRO}:${BUILDCHROOT_DIR}/isar-apt \
+    bind:${DL_DIR}:${BUILDCHROOT_DIR}/downloads \
+    rbind:/dev:${BUILDCHROOT_DIR}/dev \
+    proc::${BUILDCHROOT_DIR}/proc \
+    rbind:/sys:${BUILDCHROOT_DIR}/sys \
+    ${@oe.utils.vartrue("ISAR_USE_CACHED_BASE_REPO", "bind:${REPO_BASE_DIR}:${BUILDCHROOT_DIR}/base-apt", "", d)} \
+    "
+
+python () {
+    # find all tasks that want to use buildchroot
+    for task in [t for t in d.keys() if d.getVarFlag(t, 'task') and d.getVarFlag(t, 'buildchroot') == '1']:
+        d.prependVarFlag(task, 'prefuncs', "buildchroot_task_prefunc ")
+}
 
-buildchroot_do_mounts() {
+buildchroot_task_prefunc() {
     sudo -s <<'EOSUDO'
-        ( flock 9
         set -e
-
-        mountpoint -q '${BUILDCHROOT_DIR}/isar-apt' ||
-            mount --bind '${REPO_ISAR_DIR}/${DISTRO}' '${BUILDCHROOT_DIR}/isar-apt'
-        mountpoint -q '${BUILDCHROOT_DIR}/downloads' ||
-            mount --bind '${DL_DIR}' '${BUILDCHROOT_DIR}/downloads'
-        mountpoint -q '${BUILDCHROOT_DIR}/dev' ||
-            mount --rbind /dev '${BUILDCHROOT_DIR}/dev'
-        mount --make-rslave '${BUILDCHROOT_DIR}/dev'
-        mountpoint -q '${BUILDCHROOT_DIR}/proc' ||
-            mount -t proc none '${BUILDCHROOT_DIR}/proc'
-        mountpoint -q '${BUILDCHROOT_DIR}/sys' ||
-            mount --rbind /sys '${BUILDCHROOT_DIR}/sys'
-        mount --make-rslave '${BUILDCHROOT_DIR}/sys'
-
-        # Mount base-apt if 'ISAR_USE_CACHED_BASE_REPO' is set
-        if [ "${@repr(bb.utils.to_boolean(d.getVar('ISAR_USE_CACHED_BASE_REPO')))}" = 'True' ]
-        then
-            mkdir -p '${BUILDCHROOT_DIR}/base-apt'
-            mountpoint -q '${BUILDCHROOT_DIR}/base-apt' || \
-                mount --bind '${REPO_BASE_DIR}' '${BUILDCHROOT_DIR}/base-apt'
-        fi
-
-        # Refresh or remove /etc/resolv.conf at this chance
+        # Refresh or remove /etc/resolv.conf
         if [ "${@repr(bb.utils.to_boolean(d.getVar('BB_NO_NETWORK')))}" = 'True' ]
         then
-            rm -rf '${BUILDCHROOT_DIR}/etc/resolv.conf'
-        else
+            rm -f '${BUILDCHROOT_DIR}/etc/resolv.conf'
+        elif [ -d '${BUILDCHROOT_DIR}/etc' ]; then
             cp -L /etc/resolv.conf '${BUILDCHROOT_DIR}/etc'
         fi
-
-        ) 9>'${MOUNT_LOCKFILE}'
 EOSUDO
 }
diff --git a/meta/classes/cpiogz-img.bbclass b/meta/classes/cpiogz-img.bbclass
index 940e2fb..095e133 100644
--- a/meta/classes/cpiogz-img.bbclass
+++ b/meta/classes/cpiogz-img.bbclass
@@ -10,12 +10,11 @@ CPIO_IMAGE_FORMAT ?= "newc"
 
 do_cpiogz_image() {
     sudo rm -f ${CPIOGZ_IMAGE_FILE}
-    image_do_mounts
     sudo chroot ${BUILDCHROOT_DIR} \
                 sh -c "cd ${PP_ROOTFS}; /usr/bin/find . | \
                        /usr/bin/cpio -H ${CPIO_IMAGE_FORMAT} -o | /usr/bin/gzip -9 > \
                        ${PP_DEPLOY}/${CPIOGZ_FNAME}"
     sudo chown $(id -u):$(id -g) ${CPIOGZ_IMAGE_FILE}
 }
-
+do_cpiogz_image[mounts] = "${IMAGE_MOUNTS}"
 addtask cpiogz_image before do_image after do_image_tools
diff --git a/meta/classes/dpkg-base.bbclass b/meta/classes/dpkg-base.bbclass
index 8a39a6d..71c1acd 100644
--- a/meta/classes/dpkg-base.bbclass
+++ b/meta/classes/dpkg-base.bbclass
@@ -96,7 +96,6 @@ python() {
 }
 
 do_apt_fetch() {
-    dpkg_do_mounts
     E="${@ isar_export_proxies(d)}"
     sudo -E chroot ${BUILDCHROOT_DIR} /usr/bin/apt-get update \
         -o Dir::Etc::SourceList="sources.list.d/isar-apt.list" \
@@ -107,10 +106,9 @@ do_apt_fetch() {
         sudo -E chroot --userspec=$( id -u ):$( id -g ) ${BUILDCHROOT_DIR} \
             sh -c 'mkdir -p /downloads/deb-src/"$1"/"$2" && cd /downloads/deb-src/"$1"/"$2" && apt-get -y --download-only --only-source source "$2"' my_script "${DISTRO}" "${uri}"
     done
-
-    dpkg_undo_mounts
 }
-
+do_apt_fetch[mounts] = "${DPKG_MOUNTS}"
+do_apt_fetch[buildchroot] = "1"
 addtask apt_fetch after do_unpack before do_apt_unpack
 do_apt_fetch[lockfiles] += "${REPO_ISAR_DIR}/isar.lock"
 
@@ -119,7 +117,6 @@ do_apt_fetch[depends] = "${BUILDCHROOT_DEP}"
 
 do_apt_unpack() {
     rm -rf ${S}
-    dpkg_do_mounts
     E="${@ isar_export_proxies(d)}"
 
     for uri in "${SRC_APT}"; do
@@ -132,10 +129,9 @@ do_apt_unpack() {
                 dpkg-source -x "${dscfile}" "${PPS}"' \
                     my_script "${DISTRO}" "${uri}"
     done
-
-    dpkg_undo_mounts
 }
-
+do_apt_unpack[mounts] = "${DPKG_MOUNTS}"
+do_apt_unpack[buildchroot] = "1"
 addtask apt_unpack after do_apt_fetch before do_patch
 
 addtask cleanall_apt before do_cleanall
@@ -174,27 +170,7 @@ do_prepare_build[deptask] = "do_deploy_deb"
 
 BUILDROOT = "${BUILDCHROOT_DIR}/${PP}"
 
-dpkg_do_mounts() {
-    mkdir -p ${BUILDROOT}
-    sudo mount --bind ${WORKDIR} ${BUILDROOT}
-
-    buildchroot_do_mounts
-}
-
-dpkg_undo_mounts() {
-    i=0
-    while ! sudo umount ${BUILDROOT}; do
-        sleep 0.1
-        if [ `expr $i % 100` -eq 0 ] ; then
-            bbwarn "${BUILDROOT}: Couldn't unmount ($i), retrying..."
-        fi
-        if [ $i -ge 10000 ]; then
-            bbfatal "${BUILDROOT}: Couldn't unmount after timeout"
-        fi
-        i=`expr $i + 1`
-    done
-    sudo rmdir ${BUILDROOT}
-}
+DPKG_MOUNTS = "bind:${WORKDIR}:${BUILDROOT} ${BUILDCHROOT_MOUNTS}"
 
 # Placeholder for actual dpkg_runbuild() implementation
 dpkg_runbuild() {
@@ -204,14 +180,13 @@ dpkg_runbuild() {
 python do_dpkg_build() {
     lock = bb.utils.lockfile(d.getVar("REPO_ISAR_DIR") + "/isar.lock",
                              shared=True)
-    bb.build.exec_func("dpkg_do_mounts", d)
     try:
         bb.build.exec_func("dpkg_runbuild", d)
     finally:
-        bb.build.exec_func("dpkg_undo_mounts", d)
         bb.utils.unlockfile(lock)
 }
 
+do_dpkg_build[mounts] = "${DPKG_MOUNTS}"
 addtask dpkg_build before do_build
 
 KEEP_INSTALLED_ON_CLEAN ?= "0"
@@ -248,18 +223,15 @@ do_deploy_deb[lockfiles] = "${REPO_ISAR_DIR}/isar.lock"
 do_deploy_deb[dirs] = "${S}"
 
 python do_devshell() {
-    bb.build.exec_func('dpkg_do_mounts', d)
-
     isar_export_proxies(d)
 
     buildchroot = d.getVar('BUILDCHROOT_DIR')
     pp_pps = os.path.join(d.getVar('PP'), d.getVar('PPS'))
     termcmd = "sudo -E chroot {0} sh -c 'cd {1}; $SHELL -i'"
     oe_terminal(termcmd.format(buildchroot, pp_pps), "Isar devshell", d)
-
-    bb.build.exec_func('dpkg_undo_mounts', d)
 }
-
+do_devshell[mounts] = "${DPKG_MOUNTS}"
+do_devshell[buildchroot] = "1"
 addtask devshell after do_prepare_build
 DEVSHELL_STARTDIR ?= "${S}"
 do_devshell[dirs] = "${DEVSHELL_STARTDIR}"
diff --git a/meta/classes/dpkg-gbp.bbclass b/meta/classes/dpkg-gbp.bbclass
index d956e8c..e3bf305 100644
--- a/meta/classes/dpkg-gbp.bbclass
+++ b/meta/classes/dpkg-gbp.bbclass
@@ -13,7 +13,6 @@ GBP_DEPENDS ?= "git-buildpackage pristine-tar"
 GBP_EXTRA_OPTIONS ?= "--git-pristine-tar"
 
 do_install_builddeps_append() {
-    dpkg_do_mounts
     distro="${DISTRO}"
     if [ ${ISAR_CROSS_COMPILE} -eq 1 ]; then
        distro="${HOST_DISTRO}"
@@ -26,7 +25,6 @@ do_install_builddeps_append() {
     sudo -E chroot ${BUILDCHROOT_DIR} \
         apt-get install -y -o Debug::pkgProblemResolver=yes \
                         --no-install-recommends ${GBP_DEPENDS}
-    dpkg_undo_mounts
 }
 
 dpkg_runbuild_prepend() {
diff --git a/meta/classes/dpkg.bbclass b/meta/classes/dpkg.bbclass
index 4e7c2f7..96aefba 100644
--- a/meta/classes/dpkg.bbclass
+++ b/meta/classes/dpkg.bbclass
@@ -7,7 +7,6 @@ PACKAGE_ARCH ?= "${DISTRO_ARCH}"
 
 # Install build dependencies for package
 do_install_builddeps() {
-    dpkg_do_mounts
     E="${@ isar_export_proxies(d)}"
     distro="${DISTRO}"
     if [ ${ISAR_CROSS_COMPILE} -eq 1 ]; then
@@ -19,12 +18,13 @@ do_install_builddeps() {
     deb_dl_dir_export "${BUILDCHROOT_DIR}" "${distro}"
     sudo -E chroot ${BUILDCHROOT_DIR} /isar/deps.sh \
         ${PP}/${PPS} ${PACKAGE_ARCH}
-    dpkg_undo_mounts
 }
 
 addtask install_builddeps after do_prepare_build before do_dpkg_build
 # apt and reprepro may not run in parallel, acquire the Isar lock
 do_install_builddeps[lockfiles] += "${REPO_ISAR_DIR}/isar.lock"
+do_install_builddeps[mounts] = "${DPKG_MOUNTS}"
+do_install_builddeps[buildchroot] = "1"
 
 addtask devshell after do_install_builddeps
 
diff --git a/meta/classes/ext4-img.bbclass b/meta/classes/ext4-img.bbclass
index 334dc64..84dd4f2 100644
--- a/meta/classes/ext4-img.bbclass
+++ b/meta/classes/ext4-img.bbclass
@@ -13,11 +13,10 @@ do_ext4_image() {
 
     truncate -s ${ROOTFS_SIZE}K '${DEPLOY_DIR_IMAGE}/${EXT4_IMAGE_FILE}'
 
-    image_do_mounts
-
     sudo chroot ${BUILDCHROOT_DIR} /sbin/mke2fs ${MKE2FS_ARGS} \
                 -F -d '${PP_ROOTFS}' '${PP_DEPLOY}/${EXT4_IMAGE_FILE}'
 }
 
 addtask ext4_image before do_image after do_image_tools
 do_ext4_image[prefuncs] = 'set_image_size'
+do_ext4_image[mounts] = "${IMAGE_MOUNTS}"
diff --git a/meta/classes/fit-img.bbclass b/meta/classes/fit-img.bbclass
index 82b96d8..a34517a 100644
--- a/meta/classes/fit-img.bbclass
+++ b/meta/classes/fit-img.bbclass
@@ -18,11 +18,11 @@ do_fit_image() {
 
     rm -f '${DEPLOY_DIR_IMAGE}/${FIT_IMAGE_FILE}'
 
-    image_do_mounts
-
     # Create fit image using buildchroot tools
     sudo chroot ${BUILDCHROOT_DIR} /usr/bin/mkimage ${MKIMAGE_ARGS} \
                 -f '${PP_WORK}/${FIT_IMAGE_SOURCE}' '${PP_DEPLOY}/${FIT_IMAGE_FILE}'
     sudo chown $(id -u):$(id -g) '${DEPLOY_DIR_IMAGE}/${FIT_IMAGE_FILE}'
 }
+
 addtask fit_image before do_image after do_image_tools do_transform_template
+do_fit_image[mounts] = "${IMAGE_MOUNTS}"
diff --git a/meta/classes/image-tools-extension.bbclass b/meta/classes/image-tools-extension.bbclass
index 9f28800..265fb0c 100644
--- a/meta/classes/image-tools-extension.bbclass
+++ b/meta/classes/image-tools-extension.bbclass
@@ -17,13 +17,13 @@ DEPENDS += "${IMAGER_BUILD_DEPS}"
 do_install_imager_deps[depends] = "${BUILDCHROOT_DEP}"
 do_install_imager_deps[deptask] = "do_deploy_deb"
 do_install_imager_deps[lockfiles] += "${REPO_ISAR_DIR}/isar.lock"
+do_install_imager_deps[mounts] = "${BUILDCHROOT_MOUNTS}"
+do_install_imager_deps[buildchroot] = "1"
 do_install_imager_deps() {
     if [ -z "${@d.getVar("IMAGER_INSTALL", True).strip()}" ]; then
         exit
     fi
 
-    buildchroot_do_mounts
-
     E="${@ isar_export_proxies(d)}"
     deb_dl_dir_import ${BUILDCHROOT_DIR} ${DISTRO}
     sudo -E chroot ${BUILDCHROOT_DIR} sh -c ' \
diff --git a/meta/classes/image.bbclass b/meta/classes/image.bbclass
index ec93cab..dd934a7 100644
--- a/meta/classes/image.bbclass
+++ b/meta/classes/image.bbclass
@@ -52,15 +52,12 @@ DEPENDS += "${IMAGE_INSTALL}"
 ISAR_RELEASE_CMD_DEFAULT = "git -C ${LAYERDIR_core} describe --tags --dirty --match 'v[0-9].[0-9]*'"
 ISAR_RELEASE_CMD ?= "${ISAR_RELEASE_CMD_DEFAULT}"
 
-image_do_mounts() {
-    sudo flock ${MOUNT_LOCKFILE} -c ' \
-        mkdir -p "${BUILDROOT_DEPLOY}" "${BUILDROOT_ROOTFS}" "${BUILDROOT_WORK}"
-        mount --bind "${DEPLOY_DIR_IMAGE}" "${BUILDROOT_DEPLOY}"
-        mount --bind "${IMAGE_ROOTFS}" "${BUILDROOT_ROOTFS}"
-        mount --bind "${WORKDIR}" "${BUILDROOT_WORK}"
-    '
-    buildchroot_do_mounts
-}
+IMAGE_MOUNTS = " \
+    bind:${DEPLOY_DIR_IMAGE}:${BUILDROOT_DEPLOY} \
+    bind:${IMAGE_ROOTFS}:${BUILDROOT_ROOTFS} \
+    bind:${WORKDIR}:${BUILDROOT_WORK} \
+    ${BUILDCHROOT_MOUNTS} \
+    "
 
 ROOTFSDIR = "${IMAGE_ROOTFS}"
 ROOTFS_FEATURES += "clean-package-cache generate-manifest export-dpkg-status"
@@ -190,21 +187,6 @@ do_rootfs_finalize() {
             find "${ROOTFSDIR}/usr/bin" \
                 -maxdepth 1 -name 'qemu-*-static' -type f -delete
 
-        mountpoint -q '${ROOTFSDIR}/isar-apt' && \
-            umount -l ${ROOTFSDIR}/isar-apt
-        rmdir --ignore-fail-on-non-empty ${ROOTFSDIR}/isar-apt
-
-        mountpoint -q '${ROOTFSDIR}/base-apt' && \
-            umount -l ${ROOTFSDIR}/base-apt
-        rmdir --ignore-fail-on-non-empty ${ROOTFSDIR}/base-apt
-
-        mountpoint -q '${ROOTFSDIR}/dev' && \
-            umount -l ${ROOTFSDIR}/dev
-        mountpoint -q '${ROOTFSDIR}/sys' && \
-            umount -l ${ROOTFSDIR}/proc
-        mountpoint -q '${ROOTFSDIR}/sys' && \
-            umount -l ${ROOTFSDIR}/sys
-
         rm -f "${ROOTFSDIR}/etc/apt/apt.conf.d/55isar-fallback.conf"
 
         rm -f "${ROOTFSDIR}/etc/apt/sources.list.d/isar-apt.list"
diff --git a/meta/classes/initramfs.bbclass b/meta/classes/initramfs.bbclass
index 10a642b..b5aba91 100644
--- a/meta/classes/initramfs.bbclass
+++ b/meta/classes/initramfs.bbclass
@@ -25,8 +25,8 @@ ROOTFS_PACKAGES = "initramfs-tools ${INITRAMFS_PREINSTALL} ${INITRAMFS_INSTALL}"
 inherit rootfs
 
 do_generate_initramfs[dirs] = "${DEPLOY_DIR_IMAGE}"
+do_generate_initramfs[mounts] = "${ROOTFS_MOUNTS}"
 do_generate_initramfs() {
-    rootfs_do_mounts
     rootfs_do_qemu
 
     sudo -E chroot "${INITRAMFS_ROOTFS}" \
diff --git a/meta/classes/isar-events.bbclass b/meta/classes/isar-events.bbclass
index 92aff20..73419b4 100644
--- a/meta/classes/isar-events.bbclass
+++ b/meta/classes/isar-events.bbclass
@@ -8,8 +8,6 @@ addhandler build_started
 
 python build_started() {
     bb.utils.remove(d.getVar('TMPDIR') + "/work/*/*/*/temp/once.*")
-    bb.utils.remove(d.getVar('TMPDIR') + "/work/*/*/*/rootfs.mount")
-    bb.utils.remove(d.getVar('TMPDIR') + "/deploy/buildchroot-*/*.mount")
 }
 build_started[eventmask] = "bb.event.BuildStarted"
 
@@ -54,7 +52,7 @@ python build_completed() {
     with open('/proc/mounts') as f:
         for line in f.readlines():
             if basepath in line:
-                bb.debug(1, '%s left mounted, unmounting...' % line.split()[1])
+                bb.warn('%s left mounted, unmounting...' % line.split()[1])
                 subprocess.call(
                     ["sudo", "umount", "-l", line.split()[1]],
                     stdout=subprocess.DEVNULL,
diff --git a/meta/classes/rootfs.bbclass b/meta/classes/rootfs.bbclass
index f9151c5..7b9fbb1 100644
--- a/meta/classes/rootfs.bbclass
+++ b/meta/classes/rootfs.bbclass
@@ -26,38 +26,17 @@ export LANG = "C"
 export LANGUAGE = "C"
 export LC_ALL = "C"
 
-rootfs_do_mounts[weight] = "3"
-rootfs_do_mounts() {
-    sudo -s <<'EOSUDO'
-        mountpoint -q '${ROOTFSDIR}/dev' || \
-            mount --rbind /dev '${ROOTFSDIR}/dev'
-        mount --make-rslave '${ROOTFSDIR}/dev'
-        mountpoint -q '${ROOTFSDIR}/proc' || \
-            mount -t proc none '${ROOTFSDIR}/proc'
-        mountpoint -q '${ROOTFSDIR}/sys' || \
-            mount --rbind /sys '${ROOTFSDIR}/sys'
-        mount --make-rslave '${ROOTFSDIR}/sys'
-
-        # Mount isar-apt if the directory does not exist or if it is empty
-        # This prevents overwriting something that was copied there
-        if [ ! -e '${ROOTFSDIR}/isar-apt' ] || \
-           [ "$(find '${ROOTFSDIR}/isar-apt' -maxdepth 1 -mindepth 1 | wc -l)" = "0" ]
-        then
-            mkdir -p '${ROOTFSDIR}/isar-apt'
-            mountpoint -q '${ROOTFSDIR}/isar-apt' || \
-                mount --bind '${REPO_ISAR_DIR}/${DISTRO}' '${ROOTFSDIR}/isar-apt'
-        fi
-
-        # Mount base-apt if 'ISAR_USE_CACHED_BASE_REPO' is set
-        if [ "${@repr(bb.utils.to_boolean(d.getVar('ISAR_USE_CACHED_BASE_REPO')))}" = 'True' ]
-        then
-            mkdir -p '${ROOTFSDIR}/base-apt'
-            mountpoint -q '${ROOTFSDIR}/base-apt' || \
-                mount --bind '${REPO_BASE_DIR}' '${ROOTFSDIR}/base-apt'
-        fi
-
-EOSUDO
-}
+MOUNTS_CONTEXT = "${PN}-${DISTRO}-${DISTRO_ARCH}"
+ROOTFS_MOUNTS = " \
+    rbind:/dev:${ROOTFSDIR}/dev \
+    proc::${ROOTFSDIR}/proc \
+    rbind:/sys:${ROOTFSDIR}/sys \
+    bind:${REPO_ISAR_DIR}/${DISTRO}:${ROOTFSDIR}/isar-apt \
+    ${@oe.utils.vartrue("ISAR_USE_CACHED_BASE_REPO", "bind:${REPO_BASE_DIR}:${ROOTFSDIR}/base-apt", "", d)} \
+    "
+
+mounts_task_prefunc[weight] = "3"
+mounts_task_postfunc[weight] = "3"
 
 rootfs_do_qemu() {
     if [ '${@repr(d.getVar('ROOTFS_ARCH') == d.getVar('HOST_ARCH'))}' = 'False' ]
@@ -153,13 +132,15 @@ do_rootfs_install[root_cleandirs] = "${ROOTFSDIR}"
 do_rootfs_install[vardeps] += "${ROOTFS_CONFIGURE_COMMAND} ${ROOTFS_INSTALL_COMMAND}"
 do_rootfs_install[depends] = "isar-bootstrap-${@'target' if d.getVar('ROOTFS_ARCH') == d.getVar('DISTRO_ARCH') else 'host'}:do_build"
 do_rootfs_install[deptask] = "do_deploy_deb"
+do_rootfs_install[mounts] = "${ROOTFS_MOUNTS}"
+do_rootfs_install[mounts-noauto] = "1"
 python do_rootfs_install() {
     configure_cmds = (d.getVar("ROOTFS_CONFIGURE_COMMAND", True) or "").split()
     install_cmds = (d.getVar("ROOTFS_INSTALL_COMMAND", True) or "").split()
 
     # Mount after configure commands, so that they have time to copy
     # 'isar-apt' (sdkchroot):
-    cmds = ['rootfs_prepare'] + configure_cmds + ['rootfs_do_mounts'] + install_cmds
+    cmds = ['rootfs_prepare'] + configure_cmds + ['mounts_task_prefunc'] + install_cmds + ['mounts_task_postfunc']
 
     # NOTE: The weights specify how long each task takes in seconds and are used
     # by the MultiStageProgressReporter to render a progress bar for this task.
@@ -230,9 +211,8 @@ rootfs_export_dpkg_status() {
 }
 
 do_rootfs_postprocess[vardeps] = "${ROOTFS_POSTPROCESS_COMMAND}"
+do_rootfs_postprocess[mounts] = "${ROOTFS_MOUNTS}"
 python do_rootfs_postprocess() {
-    # Take care that its correctly mounted:
-    bb.build.exec_func('rootfs_do_mounts', d)
     # Take care that qemu-*-static is available, since it could have been
     # removed on a previous execution of this task:
     bb.build.exec_func('rootfs_do_qemu', d)
diff --git a/meta/classes/ubi-img.bbclass b/meta/classes/ubi-img.bbclass
index c69ac4d..87f0187 100644
--- a/meta/classes/ubi-img.bbclass
+++ b/meta/classes/ubi-img.bbclass
@@ -21,11 +21,10 @@ do_ubi_image() {
 
     rm -f '${DEPLOY_DIR_IMAGE}/${UBI_IMAGE_FILE}'
 
-    image_do_mounts
-
     # Create ubi image using buildchroot tools
     sudo chroot ${BUILDCHROOT_DIR} /usr/sbin/ubinize ${UBINIZE_ARGS} \
                 -o '${PP_DEPLOY}/${UBI_IMAGE_FILE}' '${PP_WORK}/${UBINIZE_CFG}'
     sudo chown $(id -u):$(id -g) '${DEPLOY_DIR_IMAGE}/${UBI_IMAGE_FILE}'
 }
 addtask ubi_image before do_image after do_image_tools do_transform_template
+do_ubi_image[mounts] = "${IMAGE_MOUNTS}"
diff --git a/meta/classes/ubifs-img.bbclass b/meta/classes/ubifs-img.bbclass
index 5d48c1d..c6775d6 100644
--- a/meta/classes/ubifs-img.bbclass
+++ b/meta/classes/ubifs-img.bbclass
@@ -20,12 +20,11 @@ ISAR_CROSS_COMPILE_armhf = "1"
 do_ubifs_image() {
     rm -f '${DEPLOY_DIR_IMAGE}/${UBIFS_IMAGE_FILE}'
 
-    image_do_mounts
-
     # Create ubifs image using buildchroot tools
     sudo chroot ${BUILDCHROOT_DIR} /usr/sbin/mkfs.ubifs ${MKUBIFS_ARGS} \
                 -r '${PP_ROOTFS}' '${PP_DEPLOY}/${UBIFS_IMAGE_FILE}'
     sudo chown $(id -u):$(id -g) '${DEPLOY_DIR_IMAGE}/${UBIFS_IMAGE_FILE}'
 }
 
+do_ubifs_image[mounts] = "${IMAGE_MOUNTS}"
 addtask ubifs_image before do_image after do_image_tools
diff --git a/meta/classes/vm-img.bbclass b/meta/classes/vm-img.bbclass
index b230af2..c6cfbf9 100644
--- a/meta/classes/vm-img.bbclass
+++ b/meta/classes/vm-img.bbclass
@@ -33,13 +33,12 @@ CONVERSION_OPTIONS = "${@set_convert_options(d)}"
 
 do_convert_wic() {
     rm -f '${DEPLOY_DIR_IMAGE}/${VIRTUAL_MACHINE_IMAGE_FILE}'
-    image_do_mounts
     bbnote "Creating ${VIRTUAL_MACHINE_IMAGE_FILE} from ${WIC_IMAGE_FILE}"
     sudo -E  chroot --userspec=$( id -u ):$( id -g ) ${BUILDCHROOT_DIR} \
     /usr/bin/qemu-img convert -f raw -O ${VIRTUAL_MACHINE_IMAGE_TYPE} ${CONVERSION_OPTIONS} \
         '${PP_DEPLOY}/${SOURCE_IMAGE_FILE}' '${PP_DEPLOY}/${VIRTUAL_MACHINE_IMAGE_FILE}'
 }
-
+do_convert_wic[mounts] = "${IMAGE_MOUNTS}"
 addtask convert_wic before do_build after do_wic_image do_copy_boot_files do_install_imager_deps do_transform_template
 
 # User settings for OVA
@@ -92,8 +91,6 @@ do_create_ova() {
     export DISK_NAME=$(basename -s .vmdk ${VIRTUAL_MACHINE_DISK})
     export LAST_CHANGE=$(date -u "+%Y-%m-%dT%H:%M:%SZ")
 
-    image_do_mounts
-
     sudo -Es chroot --userspec=$( id -u ):$( id -g ) ${BUILDCHROOT_DIR} <<'EOSUDO'
         export DISK_SIZE_BYTES=$(qemu-img info -f vmdk "${VIRTUAL_MACHINE_DISK}" \
                                  | gawk 'match($0, /^virtual size:.*\(([0-9]+) bytes\)/, a) {print a[1]}')
@@ -112,5 +109,5 @@ do_create_ova() {
         tar -uvf ${PP_DEPLOY}/${OVA_NAME}.ova -C ${PP_DEPLOY} ${VIRTUAL_MACHINE_IMAGE_FILE}
 EOSUDO
 }
-
+do_create_ova[mounts] = "${IMAGE_MOUNTS}"
 addtask do_create_ova after do_convert_wic before do_deploy
diff --git a/meta/classes/wic-img.bbclass b/meta/classes/wic-img.bbclass
index d849ad9..f8f9c0d 100644
--- a/meta/classes/wic-img.bbclass
+++ b/meta/classes/wic-img.bbclass
@@ -137,6 +137,17 @@ python check_for_wic_warnings() {
 }
 
 do_wic_image[file-checksums] += "${WKS_FILE_CHECKSUM}"
+do_wic_image[mounts] = "${BUILDCHROOT_MOUNTS}"
+python() {
+    buildchroot = d.getVar('BUILDCHROOT_DIR')
+    dirs = ((d.getVar('BBLAYERS') or '').split() +
+            (d.getVar('STAGING_DIR') or '').split() +
+            (d.getVar('SCRIPTSDIR') or '').split() +
+            (d.getVar('BITBAKEDIR') or '').split())
+    for dir in dirs:
+        d.appendVarFlag('do_wic_image', 'mounts', f" pbind:{dir}:{buildchroot}{dir}")
+}
+
 python do_wic_image() {
     bb.build.exec_func("generate_wic_image", d)
     bb.build.exec_func("check_for_wic_warnings", d)
@@ -144,17 +155,6 @@ python do_wic_image() {
 addtask wic_image before do_image after do_image_tools
 
 generate_wic_image() {
-    buildchroot_do_mounts
-    sudo -s <<'EOSUDO'
-        ( flock 9
-        for dir in ${BBLAYERS} ${STAGING_DIR} ${SCRIPTSDIR} ${BITBAKEDIR}; do
-            mkdir -p ${BUILDCHROOT_DIR}/$dir
-            if ! mountpoint ${BUILDCHROOT_DIR}/$dir >/dev/null 2>&1; then
-                mount --bind --make-private $dir ${BUILDCHROOT_DIR}/$dir
-            fi
-        done
-        ) 9>${MOUNT_LOCKFILE}
-EOSUDO
     export FAKEROOTCMD=${FAKEROOTCMD}
     export BUILDDIR=${BUILDDIR}
     export MTOOLS_SKIP_CHECK=1
@@ -200,13 +200,4 @@ EOSUDO
     done
     rm -rf ${BUILDCHROOT_DIR}/${WICTMP}
     rm -rf ${IMAGE_ROOTFS}/../pseudo
-    sudo -s <<'EOSUDO'
-        ( flock 9
-        for dir in ${BBLAYERS} ${STAGING_DIR} ${SCRIPTSDIR} ${BITBAKEDIR}; do
-            if mountpoint -q ${BUILDCHROOT_DIR}/$dir; then
-                umount ${BUILDCHROOT_DIR}/$dir
-            fi
-        done
-        ) 9>${MOUNT_LOCKFILE}
-EOSUDO
 }
diff --git a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
index b8af676..cc6c073 100644
--- a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
+++ b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
@@ -117,7 +117,6 @@ def get_apt_source_mirror(d, aptsources_entry_list):
     mirror_list = [entry.split()
                   for entry in premirrors.split('\\n')
                   if any(entry)]
-
     for regex, replace in mirror_list:
         match = re.search(regex, aptsources_entry_list[2])
 
@@ -318,9 +317,6 @@ do_bootstrap() {
             fi
             echo "deb ${line}" >  "${ROOTFSDIR}/etc/apt/sources.list.d/base-apt.list"
             echo "deb-src ${line}" >>  "${ROOTFSDIR}/etc/apt/sources.list.d/base-apt.list"
-
-            mkdir -p ${ROOTFSDIR}/base-apt
-            mount --bind ${REPO_BASE_DIR} ${ROOTFSDIR}/base-apt
         else
             install -v -m644 "${APTSRCS}" \
                              "${ROOTFSDIR}/etc/apt/sources.list.d/bootstrap.list"
@@ -364,13 +360,27 @@ do_bootstrap() {
         install -v -m755 "${WORKDIR}/chroot-setup.sh" "${ROOTFSDIR}/chroot-setup.sh"
         "${ROOTFSDIR}/chroot-setup.sh" "setup" "${ROOTFSDIR}"
 
-        # update APT
-        mount --rbind /dev ${ROOTFSDIR}/dev
-        mount --make-rslave ${ROOTFSDIR}/dev
-        mount -t proc none ${ROOTFSDIR}/proc
-        mount --rbind /sys ${ROOTFSDIR}/sys
-        mount --make-rslave ${ROOTFSDIR}/sys
+EOSUDO
+    deb_dl_dir_export "${ROOTFSDIR}" "${BOOTSTRAP_DISTRO}"
+}
 
+addtask bootstrap before do_build after do_generate_keyrings
+do_bootstrap[vardeps] += " \
+    DISTRO_APT_PREMIRRORS \
+    ISAR_ENABLE_COMPAT_ARCH \
+    ${DISTRO_VARS_PREFIX}DISTRO_APT_SOURCES \
+    "
+do_bootstrap[dirs] = "${DEPLOY_DIR_BOOTSTRAP}"
+do_bootstrap[depends] = "base-apt:do_cache isar-apt:do_cache_config"
+
+
+do_bootstrap_finalize() {
+    E="${@ isar_export_proxies(d)}"
+    export BOOTSTRAP_FOR_HOST E
+
+    deb_dl_dir_import "${ROOTFSDIR}" "${BOOTSTRAP_DISTRO}"
+
+    sudo -E -s <<'EOSUDO'
         export DEBIAN_FRONTEND=noninteractive
 
         if [ "${BOOTSTRAP_FOR_HOST}" = "1" ]; then
@@ -386,18 +396,19 @@ do_bootstrap() {
         chroot "${ROOTFSDIR}" /usr/bin/apt-get dist-upgrade -y \
                                 -o Debug::pkgProblemResolver=yes
 
-        umount -l "${ROOTFSDIR}/dev"
-        umount -l "${ROOTFSDIR}/proc"
-        umount -l "${ROOTFSDIR}/sys"
-        umount -l "${ROOTFSDIR}/base-apt" || true
-
         # Finalize debootstrap by setting the link in deploy
         ln -Tfsr "${ROOTFSDIR}" "${DEPLOY_ISAR_BOOTSTRAP}"
 EOSUDO
     deb_dl_dir_export "${ROOTFSDIR}" "${BOOTSTRAP_DISTRO}"
 }
 
-addtask bootstrap before do_build after do_generate_keyrings
+addtask bootstrap_finalize after do_bootstrap before do_build
+do_bootstrap_finalize[mounts] = " \
+    rbind:/dev:${ROOTFSDIR}/dev \
+    proc::${ROOTFSDIR}/proc \
+    rbind:/sys:${ROOTFSDIR}/sys \
+    ${@oe.utils.vartrue("ISAR_USE_CACHED_BASE_REPO", "bind:${REPO_BASE_DIR}:${ROOTFSDIR}/base-apt", "", d)} \
+    "
 
 CLEANFUNCS = "clean_deploy"
 clean_deploy() {
diff --git a/meta/recipes-devtools/buildchroot/buildchroot.inc b/meta/recipes-devtools/buildchroot/buildchroot.inc
index 31524a1..ea4a3ba 100644
--- a/meta/recipes-devtools/buildchroot/buildchroot.inc
+++ b/meta/recipes-devtools/buildchroot/buildchroot.inc
@@ -41,13 +41,7 @@ BUILDCHROOT_PREINSTALL_COMMON = " \
     equivs \
     adduser"
 
-rootfs_do_mounts_append() {
-    sudo -s <<'EOSUDO'
-    mkdir -p '${BUILDCHROOT_DIR}/downloads'
-    mountpoint -q '${BUILDCHROOT_DIR}/downloads' || \
-        mount --bind '${DL_DIR}' '${BUILDCHROOT_DIR}/downloads'
-EOSUDO
-}
+ROOTFS_MOUNTS += "bind:${DL_DIR}:${BUILDCHROOT_DIR}/downloads"
 
 ROOTFS_POSTPROCESS_COMMAND =+ "buildchroot_install_files"
 buildchroot_install_files() {
-- 
2.30.2


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 2/5] meta: refactor containerization
  2021-10-12 13:04 ` [RFC PATCH 2/5] meta: refactor containerization Adriaan Schmidt
@ 2021-10-13 10:17   ` Jan Kiszka
  0 siblings, 0 replies; 14+ messages in thread
From: Jan Kiszka @ 2021-10-13 10:17 UTC (permalink / raw)
  To: Adriaan Schmidt, isar-users

On 12.10.21 15:04, Adriaan Schmidt wrote:
> Don't unmount, but instead use "stay on one filesystem" options
> 

...which works because there are no bind mounts involved.

BTW, providing a "why" (e.g. "simpler") would be good for any commit
message.

Jan

> Signed-off-by: Adriaan Schmidt <adriaan.schmidt@siemens.com>
> ---
>  meta/classes/image-container-extension.bbclass | 3 ++-
>  meta/classes/image-sdk-extension.bbclass       | 6 +-----
>  2 files changed, 3 insertions(+), 6 deletions(-)
> 
> diff --git a/meta/classes/image-container-extension.bbclass b/meta/classes/image-container-extension.bbclass
> index e26604a..762be4a 100644
> --- a/meta/classes/image-container-extension.bbclass
> +++ b/meta/classes/image-container-extension.bbclass
> @@ -19,6 +19,7 @@ containerize_rootfs() {
>      # prepare OCI container image skeleton
>      bbdebug 1 "prepare OCI container image skeleton"
>      rm -rf "${oci_img_dir}"
> +
>      sudo umoci init --layout "${oci_img_dir}"
>      sudo umoci new --image "${oci_img_dir}:${empty_tag}"
>      sudo umoci config --image "${oci_img_dir}:${empty_tag}" \
> @@ -27,7 +28,7 @@ containerize_rootfs() {
>          "${oci_img_dir}_unpacked"
>  
>      # add root filesystem as the flesh of the skeleton
> -    sudo cp -a "${rootfs}"/* "${oci_img_dir}_unpacked/rootfs/"
> +    sudo sh -c "tar --one-file-system -C \"${rootfs}\" -cpf - . | tar -C \"${oci_img_dir}_unpacked/rootfs\" --strip-components=1 -xpf -"
>      # clean-up temporary files
>      sudo find "${oci_img_dir}_unpacked/rootfs/tmp" -mindepth 1 -delete
>  
> diff --git a/meta/classes/image-sdk-extension.bbclass b/meta/classes/image-sdk-extension.bbclass
> index 426b925..2cc5933 100644
> --- a/meta/classes/image-sdk-extension.bbclass
> +++ b/meta/classes/image-sdk-extension.bbclass
> @@ -33,15 +33,11 @@ do_populate_sdk() {
>          sudo rm -f ${SDKCHROOT_DIR}/etc/apt/sources.list.d/isar-apt.list
>      fi
>  
> -    sudo umount -R ${SDKCHROOT_DIR}/dev || true
> -    sudo umount ${SDKCHROOT_DIR}/proc || true
> -    sudo umount -R ${SDKCHROOT_DIR}/sys || true
> -
>      # Remove setup scripts
>      sudo rm -f ${SDKCHROOT_DIR}/chroot-setup.sh ${SDKCHROOT_DIR}/configscript.sh
>  
>      # Make all links relative
> -    for link in $(find ${SDKCHROOT_DIR}/ -type l); do
> +    for link in $(find ${SDKCHROOT_DIR}/ -type l -xdev); do
>          target=$(readlink $link)
>  
>          if [ "${target#/}" != "${target}" ]; then
> 

-- 
Siemens AG, T RDA IOT
Corporate Competence Center Embedded Linux

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 3/5] meta: add oe.utils
  2021-10-12 13:04 ` [RFC PATCH 3/5] meta: add oe.utils Adriaan Schmidt
@ 2021-10-13 10:17   ` Jan Kiszka
  2021-10-20  6:49     ` Schmidt, Adriaan
  0 siblings, 1 reply; 14+ messages in thread
From: Jan Kiszka @ 2021-10-13 10:17 UTC (permalink / raw)
  To: Adriaan Schmidt, isar-users

On 12.10.21 15:04, Adriaan Schmidt wrote:
> Taken unmodified from yocto-3.3.2 (commit 31c639eb8664059eb4ed711be9173c223b4cc940)

Reason missing ("will use X for Y").

Jan

> 
> Signed-off-by: Adriaan Schmidt <adriaan.schmidt@siemens.com>
> ---
>  meta/classes/base.bbclass |   2 +-
>  meta/lib/oe/utils.py      | 569 ++++++++++++++++++++++++++++++++++++++
>  2 files changed, 570 insertions(+), 1 deletion(-)
>  create mode 100644 meta/lib/oe/utils.py
> 
> diff --git a/meta/classes/base.bbclass b/meta/classes/base.bbclass
> index 13134ff..f42dcba 100644
> --- a/meta/classes/base.bbclass
> +++ b/meta/classes/base.bbclass
> @@ -21,7 +21,7 @@
>  THISDIR = "${@os.path.dirname(d.getVar('FILE', True))}"
>  FILESPATH = "${@base_set_filespath(["${FILE_DIRNAME}/${PF}", "${FILE_DIRNAME}/${P}", "${FILE_DIRNAME}/${PN}", "${FILE_DIRNAME}/files", "${FILE_DIRNAME}"], d)}"
>  
> -OE_IMPORTS += "os sys time oe.path oe.patch"
> +OE_IMPORTS += "os sys time oe.path oe.patch oe.utils"
>  OE_IMPORTS[type] = "list"
>  
>  def oe_import(d):
> diff --git a/meta/lib/oe/utils.py b/meta/lib/oe/utils.py
> new file mode 100644
> index 0000000..a84039f
> --- /dev/null
> +++ b/meta/lib/oe/utils.py
> @@ -0,0 +1,569 @@
> +#
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +
> +import subprocess
> +import multiprocessing
> +import traceback
> +
> +def read_file(filename):
> +    try:
> +        f = open( filename, "r" )
> +    except IOError as reason:
> +        return "" # WARNING: can't raise an error now because of the new RDEPENDS handling. This is a bit ugly. :M:
> +    else:
> +        data = f.read().strip()
> +        f.close()
> +        return data
> +    return None
> +
> +def ifelse(condition, iftrue = True, iffalse = False):
> +    if condition:
> +        return iftrue
> +    else:
> +        return iffalse
> +
> +def conditional(variable, checkvalue, truevalue, falsevalue, d):
> +    if d.getVar(variable) == checkvalue:
> +        return truevalue
> +    else:
> +        return falsevalue
> +
> +def vartrue(var, iftrue, iffalse, d):
> +    import oe.types
> +    if oe.types.boolean(d.getVar(var)):
> +        return iftrue
> +    else:
> +        return iffalse
> +
> +def less_or_equal(variable, checkvalue, truevalue, falsevalue, d):
> +    if float(d.getVar(variable)) <= float(checkvalue):
> +        return truevalue
> +    else:
> +        return falsevalue
> +
> +def version_less_or_equal(variable, checkvalue, truevalue, falsevalue, d):
> +    result = bb.utils.vercmp_string(d.getVar(variable), checkvalue)
> +    if result <= 0:
> +        return truevalue
> +    else:
> +        return falsevalue
> +
> +def both_contain(variable1, variable2, checkvalue, d):
> +    val1 = d.getVar(variable1)
> +    val2 = d.getVar(variable2)
> +    val1 = set(val1.split())
> +    val2 = set(val2.split())
> +    if isinstance(checkvalue, str):
> +        checkvalue = set(checkvalue.split())
> +    else:
> +        checkvalue = set(checkvalue)
> +    if checkvalue.issubset(val1) and checkvalue.issubset(val2):
> +        return " ".join(checkvalue)
> +    else:
> +        return ""
> +
> +def set_intersect(variable1, variable2, d):
> +    """
> +    Expand both variables, interpret them as lists of strings, and return the
> +    intersection as a flattened string.
> +
> +    For example:
> +    s1 = "a b c"
> +    s2 = "b c d"
> +    s3 = set_intersect(s1, s2)
> +    => s3 = "b c"
> +    """
> +    val1 = set(d.getVar(variable1).split())
> +    val2 = set(d.getVar(variable2).split())
> +    return " ".join(val1 & val2)
> +
> +def prune_suffix(var, suffixes, d):
> +    # See if var ends with any of the suffixes listed and
> +    # remove it if found
> +    for suffix in suffixes:
> +        if suffix and var.endswith(suffix):
> +            var = var[:-len(suffix)]
> +
> +    prefix = d.getVar("MLPREFIX")
> +    if prefix and var.startswith(prefix):
> +        var = var[len(prefix):]
> +
> +    return var
> +
> +def str_filter(f, str, d):
> +    from re import match
> +    return " ".join([x for x in str.split() if match(f, x, 0)])
> +
> +def str_filter_out(f, str, d):
> +    from re import match
> +    return " ".join([x for x in str.split() if not match(f, x, 0)])
> +
> +def build_depends_string(depends, task):
> +    """Append a taskname to a string of dependencies as used by the [depends] flag"""
> +    return " ".join(dep + ":" + task for dep in depends.split())
> +
> +def inherits(d, *classes):
> +    """Return True if the metadata inherits any of the specified classes"""
> +    return any(bb.data.inherits_class(cls, d) for cls in classes)
> +
> +def features_backfill(var,d):
> +    # This construct allows the addition of new features to variable specified
> +    # as var
> +    # Example for var = "DISTRO_FEATURES"
> +    # This construct allows the addition of new features to DISTRO_FEATURES
> +    # that if not present would disable existing functionality, without
> +    # disturbing distributions that have already set DISTRO_FEATURES.
> +    # Distributions wanting to elide a value in DISTRO_FEATURES_BACKFILL should
> +    # add the feature to DISTRO_FEATURES_BACKFILL_CONSIDERED
> +    features = (d.getVar(var) or "").split()
> +    backfill = (d.getVar(var+"_BACKFILL") or "").split()
> +    considered = (d.getVar(var+"_BACKFILL_CONSIDERED") or "").split()
> +
> +    addfeatures = []
> +    for feature in backfill:
> +        if feature not in features and feature not in considered:
> +            addfeatures.append(feature)
> +
> +    if addfeatures:
> +        d.appendVar(var, " " + " ".join(addfeatures))
> +
> +def all_distro_features(d, features, truevalue="1", falsevalue=""):
> +    """
> +    Returns truevalue if *all* given features are set in DISTRO_FEATURES,
> +    else falsevalue. The features can be given as single string or anything
> +    that can be turned into a set.
> +
> +    This is a shorter, more flexible version of
> +    bb.utils.contains("DISTRO_FEATURES", features, truevalue, falsevalue, d).
> +
> +    Without explicit true/false values it can be used directly where
> +    Python expects a boolean:
> +       if oe.utils.all_distro_features(d, "foo bar"):
> +           bb.fatal("foo and bar are mutually exclusive DISTRO_FEATURES")
> +
> +    With just a truevalue, it can be used to include files that are meant to be
> +    used only when requested via DISTRO_FEATURES:
> +       require ${@ oe.utils.all_distro_features(d, "foo bar", "foo-and-bar.inc")
> +    """
> +    return bb.utils.contains("DISTRO_FEATURES", features, truevalue, falsevalue, d)
> +
> +def any_distro_features(d, features, truevalue="1", falsevalue=""):
> +    """
> +    Returns truevalue if at least *one* of the given features is set in DISTRO_FEATURES,
> +    else falsevalue. The features can be given as single string or anything
> +    that can be turned into a set.
> +
> +    This is a shorter, more flexible version of
> +    bb.utils.contains_any("DISTRO_FEATURES", features, truevalue, falsevalue, d).
> +
> +    Without explicit true/false values it can be used directly where
> +    Python expects a boolean:
> +       if not oe.utils.any_distro_features(d, "foo bar"):
> +           bb.fatal("foo, bar or both must be set in DISTRO_FEATURES")
> +
> +    With just a truevalue, it can be used to include files that are meant to be
> +    used only when requested via DISTRO_FEATURES:
> +       require ${@ oe.utils.any_distro_features(d, "foo bar", "foo-or-bar.inc")
> +
> +    """
> +    return bb.utils.contains_any("DISTRO_FEATURES", features, truevalue, falsevalue, d)
> +
> +def parallel_make(d, makeinst=False):
> +    """
> +    Return the integer value for the number of parallel threads to use when
> +    building, scraped out of PARALLEL_MAKE. If no parallelization option is
> +    found, returns None
> +
> +    e.g. if PARALLEL_MAKE = "-j 10", this will return 10 as an integer.
> +    """
> +    if makeinst:
> +        pm = (d.getVar('PARALLEL_MAKEINST') or '').split()
> +    else:
> +        pm = (d.getVar('PARALLEL_MAKE') or '').split()
> +    # look for '-j' and throw other options (e.g. '-l') away
> +    while pm:
> +        opt = pm.pop(0)
> +        if opt == '-j':
> +            v = pm.pop(0)
> +        elif opt.startswith('-j'):
> +            v = opt[2:].strip()
> +        else:
> +            continue
> +
> +        return int(v)
> +
> +    return ''
> +
> +def parallel_make_argument(d, fmt, limit=None, makeinst=False):
> +    """
> +    Helper utility to construct a parallel make argument from the number of
> +    parallel threads specified in PARALLEL_MAKE.
> +
> +    Returns the input format string `fmt` where a single '%d' will be expanded
> +    with the number of parallel threads to use. If `limit` is specified, the
> +    number of parallel threads will be no larger than it. If no parallelization
> +    option is found in PARALLEL_MAKE, returns an empty string
> +
> +    e.g. if PARALLEL_MAKE = "-j 10", parallel_make_argument(d, "-n %d") will return
> +    "-n 10"
> +    """
> +    v = parallel_make(d, makeinst)
> +    if v:
> +        if limit:
> +            v = min(limit, v)
> +        return fmt % v
> +    return ''
> +
> +def packages_filter_out_system(d):
> +    """
> +    Return a list of packages from PACKAGES with the "system" packages such as
> +    PN-dbg PN-doc PN-locale-eb-gb removed.
> +    """
> +    pn = d.getVar('PN')
> +    blacklist = [pn + suffix for suffix in ('', '-dbg', '-dev', '-doc', '-locale', '-staticdev', '-src')]
> +    localepkg = pn + "-locale-"
> +    pkgs = []
> +
> +    for pkg in d.getVar('PACKAGES').split():
> +        if pkg not in blacklist and localepkg not in pkg:
> +            pkgs.append(pkg)
> +    return pkgs
> +
> +def getstatusoutput(cmd):
> +    return subprocess.getstatusoutput(cmd)
> +
> +
> +def trim_version(version, num_parts=2):
> +    """
> +    Return just the first <num_parts> of <version>, split by periods.  For
> +    example, trim_version("1.2.3", 2) will return "1.2".
> +    """
> +    if type(version) is not str:
> +        raise TypeError("Version should be a string")
> +    if num_parts < 1:
> +        raise ValueError("Cannot split to parts < 1")
> +
> +    parts = version.split(".")
> +    trimmed = ".".join(parts[:num_parts])
> +    return trimmed
> +
> +def cpu_count(at_least=1):
> +    cpus = len(os.sched_getaffinity(0))
> +    return max(cpus, at_least)
> +
> +def execute_pre_post_process(d, cmds):
> +    if cmds is None:
> +        return
> +
> +    for cmd in cmds.strip().split(';'):
> +        cmd = cmd.strip()
> +        if cmd != '':
> +            bb.note("Executing %s ..." % cmd)
> +            bb.build.exec_func(cmd, d)
> +
> +# For each item in items, call the function 'target' with item as the first 
> +# argument, extraargs as the other arguments and handle any exceptions in the
> +# parent thread
> +def multiprocess_launch(target, items, d, extraargs=None):
> +
> +    class ProcessLaunch(multiprocessing.Process):
> +        def __init__(self, *args, **kwargs):
> +            multiprocessing.Process.__init__(self, *args, **kwargs)
> +            self._pconn, self._cconn = multiprocessing.Pipe()
> +            self._exception = None
> +            self._result = None
> +
> +        def run(self):
> +            try:
> +                ret = self._target(*self._args, **self._kwargs)
> +                self._cconn.send((None, ret))
> +            except Exception as e:
> +                tb = traceback.format_exc()
> +                self._cconn.send((e, tb))
> +
> +        def update(self):
> +            if self._pconn.poll():
> +                (e, tb) = self._pconn.recv()
> +                if e is not None:
> +                    self._exception = (e, tb)
> +                else:
> +                    self._result = tb
> +
> +        @property
> +        def exception(self):
> +            self.update()
> +            return self._exception
> +
> +        @property
> +        def result(self):
> +            self.update()
> +            return self._result
> +
> +    max_process = int(d.getVar("BB_NUMBER_THREADS") or os.cpu_count() or 1)
> +    launched = []
> +    errors = []
> +    results = []
> +    items = list(items)
> +    while (items and not errors) or launched:
> +        if not errors and items and len(launched) < max_process:
> +            args = (items.pop(),)
> +            if extraargs is not None:
> +                args = args + extraargs
> +            p = ProcessLaunch(target=target, args=args)
> +            p.start()
> +            launched.append(p)
> +        for q in launched:
> +            # Have to manually call update() to avoid deadlocks. The pipe can be full and
> +            # transfer stalled until we try and read the results object but the subprocess won't exit
> +            # as it still has data to write (https://bugs.python.org/issue8426)
> +            q.update()
> +            # The finished processes are joined when calling is_alive()
> +            if not q.is_alive():
> +                if q.exception:
> +                    errors.append(q.exception)
> +                if q.result:
> +                    results.append(q.result)
> +                launched.remove(q)
> +    # Paranoia doesn't hurt
> +    for p in launched:
> +        p.join()
> +    if errors:
> +        msg = ""
> +        for (e, tb) in errors:
> +            if isinstance(e, subprocess.CalledProcessError) and e.output:
> +                msg = msg + str(e) + "\n"
> +                msg = msg + "Subprocess output:"
> +                msg = msg + e.output.decode("utf-8", errors="ignore")
> +            else:
> +                msg = msg + str(e) + ": " + str(tb) + "\n"
> +        bb.fatal("Fatal errors occurred in subprocesses:\n%s" % msg)
> +    return results
> +
> +def squashspaces(string):
> +    import re
> +    return re.sub(r"\s+", " ", string).strip()
> +
> +def format_pkg_list(pkg_dict, ret_format=None):
> +    output = []
> +
> +    if ret_format == "arch":
> +        for pkg in sorted(pkg_dict):
> +            output.append("%s %s" % (pkg, pkg_dict[pkg]["arch"]))
> +    elif ret_format == "file":
> +        for pkg in sorted(pkg_dict):
> +            output.append("%s %s %s" % (pkg, pkg_dict[pkg]["filename"], pkg_dict[pkg]["arch"]))
> +    elif ret_format == "ver":
> +        for pkg in sorted(pkg_dict):
> +            output.append("%s %s %s" % (pkg, pkg_dict[pkg]["arch"], pkg_dict[pkg]["ver"]))
> +    elif ret_format == "deps":
> +        for pkg in sorted(pkg_dict):
> +            for dep in pkg_dict[pkg]["deps"]:
> +                output.append("%s|%s" % (pkg, dep))
> +    else:
> +        for pkg in sorted(pkg_dict):
> +            output.append(pkg)
> +
> +    output_str = '\n'.join(output)
> +
> +    if output_str:
> +        # make sure last line is newline terminated
> +        output_str += '\n'
> +
> +    return output_str
> +
> +
> +# Helper function to get the host compiler version
> +# Do not assume the compiler is gcc
> +def get_host_compiler_version(d, taskcontextonly=False):
> +    import re, subprocess
> +
> +    if taskcontextonly and d.getVar('BB_WORKERCONTEXT') != '1':
> +        return
> +
> +    compiler = d.getVar("BUILD_CC")
> +    # Get rid of ccache since it is not present when parsing.
> +    if compiler.startswith('ccache '):
> +        compiler = compiler[7:]
> +    try:
> +        env = os.environ.copy()
> +        # datastore PATH does not contain session PATH as set by environment-setup-...
> +        # this breaks the install-buildtools use-case
> +        # env["PATH"] = d.getVar("PATH")
> +        output = subprocess.check_output("%s --version" % compiler, \
> +                    shell=True, env=env, stderr=subprocess.STDOUT).decode("utf-8")
> +    except subprocess.CalledProcessError as e:
> +        bb.fatal("Error running %s --version: %s" % (compiler, e.output.decode("utf-8")))
> +
> +    match = re.match(r".* (\d+\.\d+)\.\d+.*", output.split('\n')[0])
> +    if not match:
> +        bb.fatal("Can't get compiler version from %s --version output" % compiler)
> +
> +    version = match.group(1)
> +    return compiler, version
> +
> +
> +def host_gcc_version(d, taskcontextonly=False):
> +    import re, subprocess
> +
> +    if taskcontextonly and d.getVar('BB_WORKERCONTEXT') != '1':
> +        return
> +
> +    compiler = d.getVar("BUILD_CC")
> +    # Get rid of ccache since it is not present when parsing.
> +    if compiler.startswith('ccache '):
> +        compiler = compiler[7:]
> +    try:
> +        env = os.environ.copy()
> +        env["PATH"] = d.getVar("PATH")
> +        output = subprocess.check_output("%s --version" % compiler, \
> +                    shell=True, env=env, stderr=subprocess.STDOUT).decode("utf-8")
> +    except subprocess.CalledProcessError as e:
> +        bb.fatal("Error running %s --version: %s" % (compiler, e.output.decode("utf-8")))
> +
> +    match = re.match(r".* (\d+\.\d+)\.\d+.*", output.split('\n')[0])
> +    if not match:
> +        bb.fatal("Can't get compiler version from %s --version output" % compiler)
> +
> +    version = match.group(1)
> +    return "-%s" % version if version in ("4.8", "4.9") else ""
> +
> +
> +def get_multilib_datastore(variant, d):
> +    localdata = bb.data.createCopy(d)
> +    if variant:
> +        overrides = localdata.getVar("OVERRIDES", False) + ":virtclass-multilib-" + variant
> +        localdata.setVar("OVERRIDES", overrides)
> +        localdata.setVar("MLPREFIX", variant + "-")
> +    else:
> +        origdefault = localdata.getVar("DEFAULTTUNE_MULTILIB_ORIGINAL")
> +        if origdefault:
> +            localdata.setVar("DEFAULTTUNE", origdefault)
> +        overrides = localdata.getVar("OVERRIDES", False).split(":")
> +        overrides = ":".join([x for x in overrides if not x.startswith("virtclass-multilib-")])
> +        localdata.setVar("OVERRIDES", overrides)
> +        localdata.setVar("MLPREFIX", "")
> +    return localdata
> +
> +#
> +# Python 2.7 doesn't have threaded pools (just multiprocessing)
> +# so implement a version here
> +#
> +
> +from queue import Queue
> +from threading import Thread
> +
> +class ThreadedWorker(Thread):
> +    """Thread executing tasks from a given tasks queue"""
> +    def __init__(self, tasks, worker_init, worker_end):
> +        Thread.__init__(self)
> +        self.tasks = tasks
> +        self.daemon = True
> +
> +        self.worker_init = worker_init
> +        self.worker_end = worker_end
> +
> +    def run(self):
> +        from queue import Empty
> +
> +        if self.worker_init is not None:
> +            self.worker_init(self)
> +
> +        while True:
> +            try:
> +                func, args, kargs = self.tasks.get(block=False)
> +            except Empty:
> +                if self.worker_end is not None:
> +                    self.worker_end(self)
> +                break
> +
> +            try:
> +                func(self, *args, **kargs)
> +            except Exception as e:
> +                print(e)
> +            finally:
> +                self.tasks.task_done()
> +
> +class ThreadedPool:
> +    """Pool of threads consuming tasks from a queue"""
> +    def __init__(self, num_workers, num_tasks, worker_init=None,
> +            worker_end=None):
> +        self.tasks = Queue(num_tasks)
> +        self.workers = []
> +
> +        for _ in range(num_workers):
> +            worker = ThreadedWorker(self.tasks, worker_init, worker_end)
> +            self.workers.append(worker)
> +
> +    def start(self):
> +        for worker in self.workers:
> +            worker.start()
> +
> +    def add_task(self, func, *args, **kargs):
> +        """Add a task to the queue"""
> +        self.tasks.put((func, args, kargs))
> +
> +    def wait_completion(self):
> +        """Wait for completion of all the tasks in the queue"""
> +        self.tasks.join()
> +        for worker in self.workers:
> +            worker.join()
> +
> +def write_ld_so_conf(d):
> +    # Some utils like prelink may not have the correct target library paths
> +    # so write an ld.so.conf to help them
> +    ldsoconf = d.expand("${STAGING_DIR_TARGET}${sysconfdir}/ld.so.conf")
> +    if os.path.exists(ldsoconf):
> +        bb.utils.remove(ldsoconf)
> +    bb.utils.mkdirhier(os.path.dirname(ldsoconf))
> +    with open(ldsoconf, "w") as f:
> +        f.write(d.getVar("base_libdir") + '\n')
> +        f.write(d.getVar("libdir") + '\n')
> +
> +class ImageQAFailed(Exception):
> +    def __init__(self, description, name=None, logfile=None):
> +        self.description = description
> +        self.name = name
> +        self.logfile=logfile
> +
> +    def __str__(self):
> +        msg = 'Function failed: %s' % self.name
> +        if self.description:
> +            msg = msg + ' (%s)' % self.description
> +
> +        return msg
> +
> +def sh_quote(string):
> +    import shlex
> +    return shlex.quote(string)
> +
> +def directory_size(root, blocksize=4096):
> +    """
> +    Calculate the size of the directory, taking into account hard links,
> +    rounding up every size to multiples of the blocksize.
> +    """
> +    def roundup(size):
> +        """
> +        Round the size up to the nearest multiple of the block size.
> +        """
> +        import math
> +        return math.ceil(size / blocksize) * blocksize
> +
> +    def getsize(filename):
> +        """
> +        Get the size of the filename, not following symlinks, taking into
> +        account hard links.
> +        """
> +        stat = os.lstat(filename)
> +        if stat.st_ino not in inodes:
> +            inodes.add(stat.st_ino)
> +            return stat.st_size
> +        else:
> +            return 0
> +
> +    inodes = set()
> +    total = 0
> +    for root, dirs, files in os.walk(root):
> +        total += sum(roundup(getsize(os.path.join(root, name))) for name in files)
> +        total += roundup(getsize(root))
> +    return total
> 


-- 
Siemens AG, T RDA IOT
Corporate Competence Center Embedded Linux

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 4/5] meta: add mounts class
  2021-10-12 13:04 ` [RFC PATCH 4/5] meta: add mounts class Adriaan Schmidt
@ 2021-10-13 10:31   ` Jan Kiszka
  2021-10-20  7:02     ` Schmidt, Adriaan
  0 siblings, 1 reply; 14+ messages in thread
From: Jan Kiszka @ 2021-10-13 10:31 UTC (permalink / raw)
  To: Adriaan Schmidt, isar-users

On 12.10.21 15:04, Adriaan Schmidt wrote:
> The new mounts.bbclass allows annotation of tasks, to describe which directories
> need to be mounted. All mounting and unmounting is then done automatically,
> and reference counting is used on a per-mountpoint basis to determine when
> umounts need to happen.
> 
> Mounts are described as "cmd:src:dest", where cmd is
>   * `bind` for a simple bind mount
>   * `rbind` for a recursive bind mount
>   * `pbind` for a "private" bind mount
>   * `proc` for a "-t proc" mount
> 
> A task is annotated using the varflag [mounts].
> 
> If mounting should not happen automatically before/after the task, you can set
> do_task[mounts-noauto] = "1", in which case you can manually call
> `mount_task_prefunc` and `mount_task_postfunc` at more convenient times.
> 
> Signed-off-by: Adriaan Schmidt <adriaan.schmidt@siemens.com>
> ---
>  meta/classes/mounts.bbclass | 271 ++++++++++++++++++++++++++++++++++++
>  meta/conf/bitbake.conf      |   2 +-
>  2 files changed, 272 insertions(+), 1 deletion(-)
>  create mode 100644 meta/classes/mounts.bbclass
> 
> diff --git a/meta/classes/mounts.bbclass b/meta/classes/mounts.bbclass
> new file mode 100644
> index 0000000..de2375e
> --- /dev/null
> +++ b/meta/classes/mounts.bbclass
> @@ -0,0 +1,271 @@
> +
> +python () {
> +    # find all tasks that request [mounts], and hook up our functions
> +    for task in [t for t in d.keys() if d.getVarFlag(t, 'task') and d.getVarFlag(t, 'mounts')]:
> +        if d.getVarFlag(task, 'mounts-noauto') == "1":
> +            continue
> +        d.prependVarFlag(task, 'prefuncs', "mounts_task_prefunc ")
> +        d.appendVarFlag(task, 'postfuncs', " mounts_task_postfunc")
> +}
> +
> +MOUNTS_DB = "${TMPDIR}/mounts"
> +MOUNTS_CONTEXT ?= "default"
> +MOUNTS_LOCK = "${MOUNTS_DB}/${MOUNTS_CONTEXT}.mountlock"
> +MOUNTS_TAB = "${MOUNTS_DB}/${MOUNTS_CONTEXT}.mounttab"
> +
> +def get_requested_mounts(d, task=None):
> +    if task is None:
> +        task = d.getVar('BB_CURRENTTASK')
> +        if not task:
> +            bb.fatal("mount code running without task context!?")
> +    if task.startswith("do_"):
> +        task = task[3:]
> +    mounts = (d.getVarFlag("do_" + task, 'mounts') or "").split()

You first strip the do_ prefix, only to prepend it again. Is that
indended? If so, maybe flip it around and prepend this if missing.

> +    mounts_out = []
> +    for m in mounts:
> +        ms = m.split(':')
> +        if len(ms) == 3 and ms[0] in 'bind rbind pbind proc'.split():
> +            mounts_out.append(ms)
> +        else:
> +            bb.error(f"Invalid mount spec: {':'.join(ms)}")
> +    return mounts_out
> +
> +def read_mtab(d, mtab_file=None):
> +    from collections import namedtuple
> +    Mount = namedtuple('Mount', 'cmd source target count')
> +    if mtab_file is None:
> +        mtab_file = d.getVar("MOUNTS_TAB", True)
> +    # mtab format is "cmd:source:target:count"
> +    try:
> +        with open(mtab_file, 'r') as f:
> +            data = [line.strip().split(':') for line in f.readlines()]
> +    except FileNotFoundError:
> +        return {}
> +    mounts = {}
> +    for m in data:
> +        if not len(m) == 4:
> +            bb.fatal("corrupt mtab!?")
> +        mt = Mount._make(m)
> +        mounts[mt.target] = mt._replace(count=int(mt.count))
> +    return mounts
> +
> +def write_mtab(d, mtab, mtab_file=None):
> +    if mtab_file is None:
> +        mtab_file = d.getVar("MOUNTS_TAB", True)
> +    with open(mtab_file, 'w') as f:
> +        for cmd, source, target, count in mtab.values():
> +            f.write(f"{cmd}:{source}:{target}:{count}\n")
> +
> +def shorten_path(x, n=3):
> +    xs = x.split('/')
> +    if len(xs) <= n:
> +        return '/'.join(xs)
> +    return '.../'+'/'.join(xs[-3:])

Hope this does not cut off any differentiating information, even when
just targeting logs.

> +
> +mount_bind() {
> +    sudo -s <<'EOSUDO'
> +        SOURCE="${@d.getVar('MOUNT_ARG_SOURCE')}"
> +        TARGET="${@d.getVar('MOUNT_ARG_TARGET')}"
> +        mkdir -p "$TARGET"
> +        mountpoint -q "$TARGET" || mount --bind "$SOURCE" "$TARGET"
> +EOSUDO
> +}
> +
> +umount_bind() {
> +    sudo -s <<'EOSUDO'
> +        TARGET="${@d.getVar('UMOUNT_ARG_TARGET')}"
> +        mountpoint -q "$TARGET" && umount "$TARGET"
> +EOSUDO
> +}
> +
> +mount_rbind() {
> +    sudo -s <<'EOSUDO'
> +        SOURCE="${@d.getVar('MOUNT_ARG_SOURCE')}"
> +        TARGET="${@d.getVar('MOUNT_ARG_TARGET')}"
> +        mkdir -p "$TARGET"
> +        mountpoint -q "$TARGET" || mount --rbind "$SOURCE" "$TARGET"
> +        mount --make-rslave "$TARGET"
> +EOSUDO
> +}
> +
> +umount_rbind() {
> +    sudo -s <<'EOSUDO'
> +        TARGET="${@d.getVar('UMOUNT_ARG_TARGET')}"
> +        mountpoint -q "$TARGET" && umount -R "$TARGET"
> +EOSUDO
> +}
> +
> +mount_pbind() {
> +    sudo -s <<'EOSUDO'
> +        SOURCE="${@d.getVar('MOUNT_ARG_SOURCE')}"
> +        TARGET="${@d.getVar('MOUNT_ARG_TARGET')}"
> +        mkdir -p "$TARGET"
> +        mountpoint -q "$TARGET" || mount --bind --make-private "$SOURCE" "$TARGET"
> +EOSUDO
> +}
> +
> +umount_pbind() {
> +    sudo -s <<'EOSUDO'
> +        TARGET="${@d.getVar('UMOUNT_ARG_TARGET')}"
> +        mountpoint -q "$TARGET" && umount "$TARGET"
> +EOSUDO
> +}
> +
> +mount_proc() {
> +    sudo -s <<'EOSUDO'
> +        TARGET="${@d.getVar('MOUNT_ARG_TARGET')}"
> +        mkdir -p "$TARGET"
> +        mountpoint -q "$TARGET" || mount -t proc none "$TARGET"
> +EOSUDO
> +}
> +
> +umount_proc() {
> +    sudo -s <<'EOSUDO'
> +        TARGET="${@d.getVar('UMOUNT_ARG_TARGET')}"
> +        mountpoint -q "$TARGET" && umount "$TARGET"
> +EOSUDO
> +}
> +
> +python mounts_task_prefunc () {
> +    from collections import namedtuple
> +    Mount = namedtuple('Mount', 'cmd source target count')
> +    task = d.getVar('PN') + ':' + d.getVar('BB_CURRENTTASK')
> +    lock = bb.utils.lockfile(d.getVar("MOUNTS_LOCK"))
> +    mounts = get_requested_mounts(d)
> +    mtab = read_mtab(d)
> +    for cmd, source, target in mounts:
> +        mt = mtab.get(target)
> +        if mt:
> +            count = mt.count + 1
> +            bb.debug(1, f"mount({task}): already mounted {shorten_path(mt.source)} at {shorten_path(mt.target)}, cnt={count}")
> +            mtab[target] = mt._replace(count=count)
> +            continue
> +        bb.debug(1, f"mount({task}): mounting {shorten_path(source)} at {shorten_path(target)}, cnt=1")
> +        d.setVar('MOUNT_ARG_SOURCE', source)
> +        d.setVar('MOUNT_ARG_TARGET', target)
> +        bb.build.exec_func('mount_' + cmd, d)

Could this fail and leave the lock blocked behind?

> +        mtab[target] = Mount(cmd, source, target, 1)
> +    write_mtab(d, mtab)
> +    bb.utils.unlockfile(lock)
> +}
> +
> +python mounts_task_postfunc () {
> +    task = d.getVar('PN') + ':' + d.getVar('BB_CURRENTTASK')
> +    lock = bb.utils.lockfile(d.getVar("MOUNTS_LOCK"))
> +    mounts = get_requested_mounts(d)
> +    mtab = read_mtab(d)
> +
> +    # release mounts
> +    for cmd, source, target in mounts:
> +        mt = mtab.get(target)
> +        if mt is None:
> +            bb.error(f"{target} not mounted. inconsistent mtab!?")
> +            continue
> +        count = mt.count - 1
> +        bb.debug(1, f"umount({task}): releasing {shorten_path(target)}, cnt={count}")
> +        mtab[target] = mt._replace(count=count)
> +
> +    # collect targets to unmount, in reverse order
> +    umounts = []
> +    for cmd, source, target in reversed(mounts):
> +        mt = mtab.get(target)
> +        if mt and mt.count == 0:
> +            umounts.append(target)
> +    for target, mt in mtab.items():
> +        if mt.count < 0:
> +            bb.error("count on {target} < 0. BUG!?!")
> +        elif mt.count == 0 and not target in umounts:
> +            umounts.append(target)
> +
> +    # now do the unmounting
> +    for target in umounts:
> +        try:
> +            bb.debug(1, f"umount({task}): unmounting {shorten_path(target)}")
> +            d.setVar('UMOUNT_ARG_TARGET', target)
> +            bb.build.exec_func('umount_' + mt.cmd, d)
> +            del mtab[target]
> +        except bb.process.ExecutionError as e:
> +            if e.exitcode == 32:
> +                # target busy
> +                bb.debug(1, f"umount({task}): target busy, moving on...")
> +            else:
> +                bb.warn(f"umount({task}): failed to unmount {target}: {str(e)}")
> +
> +    write_mtab(d, mtab)
> +    bb.utils.unlockfile(lock)
> +}
> +
> +# call postfunc explicitly in case a failing task has [mounts]
> +addhandler mounts_taskfail
> +python mounts_taskfail() {
> +    task = d.getVar('BB_CURRENTTASK')
> +    if not task:
> +        bb.fatal("mount code running without task context!?")
> +    if task.startswith("do_"):
> +        task = task[3:]
> +    if d.getVarFlag("do_" + task, 'mounts') and not d.getVarFlag("do_" + task, 'mounts-noauto') == "1":
> +        bb.build.exec_func('mounts_task_postfunc', d)
> +}
> +mounts_taskfail[eventmask] = "bb.build.TaskFailed"
> +
> +# bb.event.Build* handlers don't have a task context.
> +# Don't access MOUNTS_CONTEXT from here!
> +addhandler mounts_init
> +python mounts_init() {
> +    bb.utils.remove(d.getVar('MOUNTS_DB') + "/*.mounttab")
> +    bb.utils.remove(d.getVar('MOUNTS_DB') + "/*.mountlock")
> +}
> +mounts_init[eventmask] = "bb.event.BuildStarted"
> +
> +addhandler mounts_cleanup
> +python mounts_cleanup() {
> +    # look through MOUNTS_DB for contexts
> +    import glob
> +    import time
> +    base = d.getVar('MOUNTS_DB')
> +    locks = glob.glob(base + "/*.mountlock")
> +    tabs = glob.glob(base + "/*.mounttab")
> +
> +    # there should not be any locks?
> +    if len(locks) > 0:
> +        bb.error(f"mounts_cleanup: someone still holding lock? ({str(locks)})")
> +
> +    # cleanup any existing contexts
> +    for mtab_file in tabs:
> +        mtab = read_mtab(d, mtab_file)
> +        if len(mtab) > 0:
> +            bb.note(f"mounts_cleanup: {mtab_file.split('/')[-1]}")
> +
> +        done = []
> +        for target, mt in mtab.items():
> +            if mt.count < 0:
> +                bb.error("count on {target} < 0. BUG!?!")
> +                continue
> +            if mt.count > 0:
> +                bb.error(f"cound on {target} > 0. BUG!?!")
> +
> +            bb.note(f"mounts_cleanup: unmounting {target}")
> +            for i in range(10):
> +                try:
> +                    d.setVar('UMOUNT_ARG_TARGET', target)
> +                    bb.build.exec_func('umount_' + mt.cmd, d)
> +                    done.append(target)
> +                    break
> +                except bb.process.ExecutionError as e:
> +                    if e.exitcode == 32:
> +                        # target busy
> +                        time.sleep(1)
> +                        continue
> +                    else:
> +                        bb.error(f"umount({task}): {str(e)}")
> +                        done.append(target)
> +                        break
> +                bb.warn(f"mounts_cleanup: failed to umount {target}")
> +                done.append(target)
> +
> +        for target in done:
> +            del mtab[target]
> +        write_mtab(d, mtab, mtab_file)
> +}
> +
> +mounts_cleanup[eventmask] = "bb.event.BuildCompleted"
> diff --git a/meta/conf/bitbake.conf b/meta/conf/bitbake.conf
> index 7f5901d..4726eaf 100644
> --- a/meta/conf/bitbake.conf
> +++ b/meta/conf/bitbake.conf
> @@ -113,7 +113,7 @@ PARALLEL_MAKE ?= "-j ${@bb.utils.cpu_count()}"
>  BBINCLUDELOGS ??= "yes"
>  
>  # Add event handlers for bitbake
> -INHERIT += "isar-events"
> +INHERIT += "mounts isar-events"
>  
>  include conf/local.conf
>  include conf/multiconfig/${BB_CURRENT_MC}.conf
> 

With all this in place, did you see warning in build_completed() triggering?

Jan

-- 
Siemens AG, T RDA IOT
Corporate Competence Center Embedded Linux

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 5/5] meta: refactor to use the new mounting mechanism
  2021-10-12 13:04 ` [RFC PATCH 5/5] meta: refactor to use the new mounting mechanism Adriaan Schmidt
@ 2021-10-13 10:42   ` Jan Kiszka
  2021-10-20  9:20     ` Schmidt, Adriaan
  0 siblings, 1 reply; 14+ messages in thread
From: Jan Kiszka @ 2021-10-13 10:42 UTC (permalink / raw)
  To: Adriaan Schmidt, isar-users

-ENOCOMMITMSG

On 12.10.21 15:04, Adriaan Schmidt wrote:
> Signed-off-by: Adriaan Schmidt <adriaan.schmidt@siemens.com>
> ---
>  meta/classes/buildchroot.bbclass              | 52 ++++++++-----------
>  meta/classes/cpiogz-img.bbclass               |  3 +-
>  meta/classes/dpkg-base.bbclass                | 44 +++-------------
>  meta/classes/dpkg-gbp.bbclass                 |  2 -
>  meta/classes/dpkg.bbclass                     |  4 +-
>  meta/classes/ext4-img.bbclass                 |  3 +-
>  meta/classes/fit-img.bbclass                  |  4 +-
>  meta/classes/image-tools-extension.bbclass    |  4 +-
>  meta/classes/image.bbclass                    | 30 +++--------
>  meta/classes/initramfs.bbclass                |  2 +-
>  meta/classes/isar-events.bbclass              |  4 +-
>  meta/classes/rootfs.bbclass                   | 50 ++++++------------
>  meta/classes/ubi-img.bbclass                  |  3 +-
>  meta/classes/ubifs-img.bbclass                |  3 +-
>  meta/classes/vm-img.bbclass                   |  7 +--
>  meta/classes/wic-img.bbclass                  | 31 ++++-------
>  .../isar-bootstrap/isar-bootstrap.inc         | 43 +++++++++------
>  .../buildchroot/buildchroot.inc               |  8 +--
>  18 files changed, 104 insertions(+), 193 deletions(-)
> 
> diff --git a/meta/classes/buildchroot.bbclass b/meta/classes/buildchroot.bbclass
> index e9eb9af..7c34834 100644
> --- a/meta/classes/buildchroot.bbclass
> +++ b/meta/classes/buildchroot.bbclass
> @@ -13,50 +13,42 @@ python __anonymous() {
>         (d.getVar('HOST_DISTRO') == "debian-stretch" and distro_arch == "i386"):
>          dep = "buildchroot-target:do_build"
>          rootfs = d.getVar('BUILDCHROOT_TARGET_DIR', True)
> +        mount_ctx = "buildchroot-target"
>      else:
>          dep = "buildchroot-host:do_build"
>          rootfs = d.getVar('BUILDCHROOT_HOST_DIR', True)
> +        mount_ctx = "buildchroot-host"
>  
>      d.setVar('BUILDCHROOT_DEP', dep)
>      d.setVar('BUILDCHROOT_DIR', rootfs)
> +    d.setVar('MOUNTS_CONTEXT', mount_ctx + "-" + d.getVar('DISTRO') + "-" + d.getVar('DISTRO_ARCH'))
>  }
>  
> -MOUNT_LOCKFILE = "${BUILDCHROOT_DIR}.lock"
> +# mount settings
> +BUILDCHROOT_MOUNTS = " \
> +    bind:${REPO_ISAR_DIR}/${DISTRO}:${BUILDCHROOT_DIR}/isar-apt \
> +    bind:${DL_DIR}:${BUILDCHROOT_DIR}/downloads \
> +    rbind:/dev:${BUILDCHROOT_DIR}/dev \
> +    proc::${BUILDCHROOT_DIR}/proc \
> +    rbind:/sys:${BUILDCHROOT_DIR}/sys \
> +    ${@oe.utils.vartrue("ISAR_USE_CACHED_BASE_REPO", "bind:${REPO_BASE_DIR}:${BUILDCHROOT_DIR}/base-apt", "", d)} \
> +    "
> +
> +python () {
> +    # find all tasks that want to use buildchroot
> +    for task in [t for t in d.keys() if d.getVarFlag(t, 'task') and d.getVarFlag(t, 'buildchroot') == '1']:
> +        d.prependVarFlag(task, 'prefuncs', "buildchroot_task_prefunc ")
> +}
>  
> -buildchroot_do_mounts() {
> +buildchroot_task_prefunc() {
>      sudo -s <<'EOSUDO'
> -        ( flock 9
>          set -e
> -
> -        mountpoint -q '${BUILDCHROOT_DIR}/isar-apt' ||
> -            mount --bind '${REPO_ISAR_DIR}/${DISTRO}' '${BUILDCHROOT_DIR}/isar-apt'
> -        mountpoint -q '${BUILDCHROOT_DIR}/downloads' ||
> -            mount --bind '${DL_DIR}' '${BUILDCHROOT_DIR}/downloads'
> -        mountpoint -q '${BUILDCHROOT_DIR}/dev' ||
> -            mount --rbind /dev '${BUILDCHROOT_DIR}/dev'
> -        mount --make-rslave '${BUILDCHROOT_DIR}/dev'
> -        mountpoint -q '${BUILDCHROOT_DIR}/proc' ||
> -            mount -t proc none '${BUILDCHROOT_DIR}/proc'
> -        mountpoint -q '${BUILDCHROOT_DIR}/sys' ||
> -            mount --rbind /sys '${BUILDCHROOT_DIR}/sys'
> -        mount --make-rslave '${BUILDCHROOT_DIR}/sys'
> -
> -        # Mount base-apt if 'ISAR_USE_CACHED_BASE_REPO' is set
> -        if [ "${@repr(bb.utils.to_boolean(d.getVar('ISAR_USE_CACHED_BASE_REPO')))}" = 'True' ]
> -        then
> -            mkdir -p '${BUILDCHROOT_DIR}/base-apt'
> -            mountpoint -q '${BUILDCHROOT_DIR}/base-apt' || \
> -                mount --bind '${REPO_BASE_DIR}' '${BUILDCHROOT_DIR}/base-apt'
> -        fi
> -
> -        # Refresh or remove /etc/resolv.conf at this chance
> +        # Refresh or remove /etc/resolv.conf
>          if [ "${@repr(bb.utils.to_boolean(d.getVar('BB_NO_NETWORK')))}" = 'True' ]
>          then
> -            rm -rf '${BUILDCHROOT_DIR}/etc/resolv.conf'
> -        else
> +            rm -f '${BUILDCHROOT_DIR}/etc/resolv.conf'
> +        elif [ -d '${BUILDCHROOT_DIR}/etc' ]; then
>              cp -L /etc/resolv.conf '${BUILDCHROOT_DIR}/etc'
>          fi
> -
> -        ) 9>'${MOUNT_LOCKFILE}'

Which lock is protecting the remaining bits? If none, we need a word why
that is fine.

>  EOSUDO
>  }
> diff --git a/meta/classes/cpiogz-img.bbclass b/meta/classes/cpiogz-img.bbclass
> index 940e2fb..095e133 100644
> --- a/meta/classes/cpiogz-img.bbclass
> +++ b/meta/classes/cpiogz-img.bbclass
> @@ -10,12 +10,11 @@ CPIO_IMAGE_FORMAT ?= "newc"
>  
>  do_cpiogz_image() {
>      sudo rm -f ${CPIOGZ_IMAGE_FILE}
> -    image_do_mounts
>      sudo chroot ${BUILDCHROOT_DIR} \
>                  sh -c "cd ${PP_ROOTFS}; /usr/bin/find . | \
>                         /usr/bin/cpio -H ${CPIO_IMAGE_FORMAT} -o | /usr/bin/gzip -9 > \
>                         ${PP_DEPLOY}/${CPIOGZ_FNAME}"
>      sudo chown $(id -u):$(id -g) ${CPIOGZ_IMAGE_FILE}
>  }
> -
> +do_cpiogz_image[mounts] = "${IMAGE_MOUNTS}"
>  addtask cpiogz_image before do_image after do_image_tools
> diff --git a/meta/classes/dpkg-base.bbclass b/meta/classes/dpkg-base.bbclass
> index 8a39a6d..71c1acd 100644
> --- a/meta/classes/dpkg-base.bbclass
> +++ b/meta/classes/dpkg-base.bbclass
> @@ -96,7 +96,6 @@ python() {
>  }
>  
>  do_apt_fetch() {
> -    dpkg_do_mounts
>      E="${@ isar_export_proxies(d)}"
>      sudo -E chroot ${BUILDCHROOT_DIR} /usr/bin/apt-get update \
>          -o Dir::Etc::SourceList="sources.list.d/isar-apt.list" \
> @@ -107,10 +106,9 @@ do_apt_fetch() {
>          sudo -E chroot --userspec=$( id -u ):$( id -g ) ${BUILDCHROOT_DIR} \
>              sh -c 'mkdir -p /downloads/deb-src/"$1"/"$2" && cd /downloads/deb-src/"$1"/"$2" && apt-get -y --download-only --only-source source "$2"' my_script "${DISTRO}" "${uri}"
>      done
> -
> -    dpkg_undo_mounts
>  }
> -
> +do_apt_fetch[mounts] = "${DPKG_MOUNTS}"
> +do_apt_fetch[buildchroot] = "1"
>  addtask apt_fetch after do_unpack before do_apt_unpack
>  do_apt_fetch[lockfiles] += "${REPO_ISAR_DIR}/isar.lock"
>  
> @@ -119,7 +117,6 @@ do_apt_fetch[depends] = "${BUILDCHROOT_DEP}"
>  
>  do_apt_unpack() {
>      rm -rf ${S}
> -    dpkg_do_mounts
>      E="${@ isar_export_proxies(d)}"
>  
>      for uri in "${SRC_APT}"; do
> @@ -132,10 +129,9 @@ do_apt_unpack() {
>                  dpkg-source -x "${dscfile}" "${PPS}"' \
>                      my_script "${DISTRO}" "${uri}"
>      done
> -
> -    dpkg_undo_mounts
>  }
> -
> +do_apt_unpack[mounts] = "${DPKG_MOUNTS}"
> +do_apt_unpack[buildchroot] = "1"
>  addtask apt_unpack after do_apt_fetch before do_patch
>  
>  addtask cleanall_apt before do_cleanall
> @@ -174,27 +170,7 @@ do_prepare_build[deptask] = "do_deploy_deb"
>  
>  BUILDROOT = "${BUILDCHROOT_DIR}/${PP}"
>  
> -dpkg_do_mounts() {
> -    mkdir -p ${BUILDROOT}
> -    sudo mount --bind ${WORKDIR} ${BUILDROOT}
> -
> -    buildchroot_do_mounts
> -}
> -
> -dpkg_undo_mounts() {
> -    i=0
> -    while ! sudo umount ${BUILDROOT}; do
> -        sleep 0.1
> -        if [ `expr $i % 100` -eq 0 ] ; then
> -            bbwarn "${BUILDROOT}: Couldn't unmount ($i), retrying..."
> -        fi
> -        if [ $i -ge 10000 ]; then
> -            bbfatal "${BUILDROOT}: Couldn't unmount after timeout"
> -        fi
> -        i=`expr $i + 1`
> -    done
> -    sudo rmdir ${BUILDROOT}
> -}
> +DPKG_MOUNTS = "bind:${WORKDIR}:${BUILDROOT} ${BUILDCHROOT_MOUNTS}"
>  
>  # Placeholder for actual dpkg_runbuild() implementation
>  dpkg_runbuild() {
> @@ -204,14 +180,13 @@ dpkg_runbuild() {
>  python do_dpkg_build() {
>      lock = bb.utils.lockfile(d.getVar("REPO_ISAR_DIR") + "/isar.lock",
>                               shared=True)
> -    bb.build.exec_func("dpkg_do_mounts", d)
>      try:
>          bb.build.exec_func("dpkg_runbuild", d)
>      finally:
> -        bb.build.exec_func("dpkg_undo_mounts", d)
>          bb.utils.unlockfile(lock)
>  }
>  
> +do_dpkg_build[mounts] = "${DPKG_MOUNTS}"
>  addtask dpkg_build before do_build
>  
>  KEEP_INSTALLED_ON_CLEAN ?= "0"
> @@ -248,18 +223,15 @@ do_deploy_deb[lockfiles] = "${REPO_ISAR_DIR}/isar.lock"
>  do_deploy_deb[dirs] = "${S}"
>  
>  python do_devshell() {
> -    bb.build.exec_func('dpkg_do_mounts', d)
> -
>      isar_export_proxies(d)
>  
>      buildchroot = d.getVar('BUILDCHROOT_DIR')
>      pp_pps = os.path.join(d.getVar('PP'), d.getVar('PPS'))
>      termcmd = "sudo -E chroot {0} sh -c 'cd {1}; $SHELL -i'"
>      oe_terminal(termcmd.format(buildchroot, pp_pps), "Isar devshell", d)
> -
> -    bb.build.exec_func('dpkg_undo_mounts', d)
>  }
> -
> +do_devshell[mounts] = "${DPKG_MOUNTS}"
> +do_devshell[buildchroot] = "1"
>  addtask devshell after do_prepare_build
>  DEVSHELL_STARTDIR ?= "${S}"
>  do_devshell[dirs] = "${DEVSHELL_STARTDIR}"
> diff --git a/meta/classes/dpkg-gbp.bbclass b/meta/classes/dpkg-gbp.bbclass
> index d956e8c..e3bf305 100644
> --- a/meta/classes/dpkg-gbp.bbclass
> +++ b/meta/classes/dpkg-gbp.bbclass
> @@ -13,7 +13,6 @@ GBP_DEPENDS ?= "git-buildpackage pristine-tar"
>  GBP_EXTRA_OPTIONS ?= "--git-pristine-tar"
>  
>  do_install_builddeps_append() {
> -    dpkg_do_mounts
>      distro="${DISTRO}"
>      if [ ${ISAR_CROSS_COMPILE} -eq 1 ]; then
>         distro="${HOST_DISTRO}"
> @@ -26,7 +25,6 @@ do_install_builddeps_append() {
>      sudo -E chroot ${BUILDCHROOT_DIR} \
>          apt-get install -y -o Debug::pkgProblemResolver=yes \
>                          --no-install-recommends ${GBP_DEPENDS}
> -    dpkg_undo_mounts
>  }
>  
>  dpkg_runbuild_prepend() {
> diff --git a/meta/classes/dpkg.bbclass b/meta/classes/dpkg.bbclass
> index 4e7c2f7..96aefba 100644
> --- a/meta/classes/dpkg.bbclass
> +++ b/meta/classes/dpkg.bbclass
> @@ -7,7 +7,6 @@ PACKAGE_ARCH ?= "${DISTRO_ARCH}"
>  
>  # Install build dependencies for package
>  do_install_builddeps() {
> -    dpkg_do_mounts
>      E="${@ isar_export_proxies(d)}"
>      distro="${DISTRO}"
>      if [ ${ISAR_CROSS_COMPILE} -eq 1 ]; then
> @@ -19,12 +18,13 @@ do_install_builddeps() {
>      deb_dl_dir_export "${BUILDCHROOT_DIR}" "${distro}"
>      sudo -E chroot ${BUILDCHROOT_DIR} /isar/deps.sh \
>          ${PP}/${PPS} ${PACKAGE_ARCH}
> -    dpkg_undo_mounts
>  }
>  
>  addtask install_builddeps after do_prepare_build before do_dpkg_build
>  # apt and reprepro may not run in parallel, acquire the Isar lock
>  do_install_builddeps[lockfiles] += "${REPO_ISAR_DIR}/isar.lock"
> +do_install_builddeps[mounts] = "${DPKG_MOUNTS}"
> +do_install_builddeps[buildchroot] = "1"
>  
>  addtask devshell after do_install_builddeps
>  
> diff --git a/meta/classes/ext4-img.bbclass b/meta/classes/ext4-img.bbclass
> index 334dc64..84dd4f2 100644
> --- a/meta/classes/ext4-img.bbclass
> +++ b/meta/classes/ext4-img.bbclass
> @@ -13,11 +13,10 @@ do_ext4_image() {
>  
>      truncate -s ${ROOTFS_SIZE}K '${DEPLOY_DIR_IMAGE}/${EXT4_IMAGE_FILE}'
>  
> -    image_do_mounts
> -
>      sudo chroot ${BUILDCHROOT_DIR} /sbin/mke2fs ${MKE2FS_ARGS} \
>                  -F -d '${PP_ROOTFS}' '${PP_DEPLOY}/${EXT4_IMAGE_FILE}'
>  }
>  
>  addtask ext4_image before do_image after do_image_tools
>  do_ext4_image[prefuncs] = 'set_image_size'
> +do_ext4_image[mounts] = "${IMAGE_MOUNTS}"
> diff --git a/meta/classes/fit-img.bbclass b/meta/classes/fit-img.bbclass
> index 82b96d8..a34517a 100644
> --- a/meta/classes/fit-img.bbclass
> +++ b/meta/classes/fit-img.bbclass
> @@ -18,11 +18,11 @@ do_fit_image() {
>  
>      rm -f '${DEPLOY_DIR_IMAGE}/${FIT_IMAGE_FILE}'
>  
> -    image_do_mounts
> -
>      # Create fit image using buildchroot tools
>      sudo chroot ${BUILDCHROOT_DIR} /usr/bin/mkimage ${MKIMAGE_ARGS} \
>                  -f '${PP_WORK}/${FIT_IMAGE_SOURCE}' '${PP_DEPLOY}/${FIT_IMAGE_FILE}'
>      sudo chown $(id -u):$(id -g) '${DEPLOY_DIR_IMAGE}/${FIT_IMAGE_FILE}'
>  }
> +
>  addtask fit_image before do_image after do_image_tools do_transform_template
> +do_fit_image[mounts] = "${IMAGE_MOUNTS}"
> diff --git a/meta/classes/image-tools-extension.bbclass b/meta/classes/image-tools-extension.bbclass
> index 9f28800..265fb0c 100644
> --- a/meta/classes/image-tools-extension.bbclass
> +++ b/meta/classes/image-tools-extension.bbclass
> @@ -17,13 +17,13 @@ DEPENDS += "${IMAGER_BUILD_DEPS}"
>  do_install_imager_deps[depends] = "${BUILDCHROOT_DEP}"
>  do_install_imager_deps[deptask] = "do_deploy_deb"
>  do_install_imager_deps[lockfiles] += "${REPO_ISAR_DIR}/isar.lock"
> +do_install_imager_deps[mounts] = "${BUILDCHROOT_MOUNTS}"
> +do_install_imager_deps[buildchroot] = "1"
>  do_install_imager_deps() {
>      if [ -z "${@d.getVar("IMAGER_INSTALL", True).strip()}" ]; then
>          exit
>      fi
>  
> -    buildchroot_do_mounts
> -
>      E="${@ isar_export_proxies(d)}"
>      deb_dl_dir_import ${BUILDCHROOT_DIR} ${DISTRO}
>      sudo -E chroot ${BUILDCHROOT_DIR} sh -c ' \
> diff --git a/meta/classes/image.bbclass b/meta/classes/image.bbclass
> index ec93cab..dd934a7 100644
> --- a/meta/classes/image.bbclass
> +++ b/meta/classes/image.bbclass
> @@ -52,15 +52,12 @@ DEPENDS += "${IMAGE_INSTALL}"
>  ISAR_RELEASE_CMD_DEFAULT = "git -C ${LAYERDIR_core} describe --tags --dirty --match 'v[0-9].[0-9]*'"
>  ISAR_RELEASE_CMD ?= "${ISAR_RELEASE_CMD_DEFAULT}"
>  
> -image_do_mounts() {
> -    sudo flock ${MOUNT_LOCKFILE} -c ' \
> -        mkdir -p "${BUILDROOT_DEPLOY}" "${BUILDROOT_ROOTFS}" "${BUILDROOT_WORK}"
> -        mount --bind "${DEPLOY_DIR_IMAGE}" "${BUILDROOT_DEPLOY}"
> -        mount --bind "${IMAGE_ROOTFS}" "${BUILDROOT_ROOTFS}"
> -        mount --bind "${WORKDIR}" "${BUILDROOT_WORK}"
> -    '
> -    buildchroot_do_mounts
> -}
> +IMAGE_MOUNTS = " \
> +    bind:${DEPLOY_DIR_IMAGE}:${BUILDROOT_DEPLOY} \
> +    bind:${IMAGE_ROOTFS}:${BUILDROOT_ROOTFS} \
> +    bind:${WORKDIR}:${BUILDROOT_WORK} \
> +    ${BUILDCHROOT_MOUNTS} \
> +    "
>  
>  ROOTFSDIR = "${IMAGE_ROOTFS}"
>  ROOTFS_FEATURES += "clean-package-cache generate-manifest export-dpkg-status"
> @@ -190,21 +187,6 @@ do_rootfs_finalize() {
>              find "${ROOTFSDIR}/usr/bin" \
>                  -maxdepth 1 -name 'qemu-*-static' -type f -delete
>  
> -        mountpoint -q '${ROOTFSDIR}/isar-apt' && \
> -            umount -l ${ROOTFSDIR}/isar-apt
> -        rmdir --ignore-fail-on-non-empty ${ROOTFSDIR}/isar-apt
> -
> -        mountpoint -q '${ROOTFSDIR}/base-apt' && \
> -            umount -l ${ROOTFSDIR}/base-apt
> -        rmdir --ignore-fail-on-non-empty ${ROOTFSDIR}/base-apt

Who is deleting the empty isar-apt base-apt dirs?

> -
> -        mountpoint -q '${ROOTFSDIR}/dev' && \
> -            umount -l ${ROOTFSDIR}/dev
> -        mountpoint -q '${ROOTFSDIR}/sys' && \
> -            umount -l ${ROOTFSDIR}/proc
> -        mountpoint -q '${ROOTFSDIR}/sys' && \
> -            umount -l ${ROOTFSDIR}/sys
> -
>          rm -f "${ROOTFSDIR}/etc/apt/apt.conf.d/55isar-fallback.conf"
>  
>          rm -f "${ROOTFSDIR}/etc/apt/sources.list.d/isar-apt.list"
> diff --git a/meta/classes/initramfs.bbclass b/meta/classes/initramfs.bbclass
> index 10a642b..b5aba91 100644
> --- a/meta/classes/initramfs.bbclass
> +++ b/meta/classes/initramfs.bbclass
> @@ -25,8 +25,8 @@ ROOTFS_PACKAGES = "initramfs-tools ${INITRAMFS_PREINSTALL} ${INITRAMFS_INSTALL}"
>  inherit rootfs
>  
>  do_generate_initramfs[dirs] = "${DEPLOY_DIR_IMAGE}"
> +do_generate_initramfs[mounts] = "${ROOTFS_MOUNTS}"
>  do_generate_initramfs() {
> -    rootfs_do_mounts
>      rootfs_do_qemu
>  
>      sudo -E chroot "${INITRAMFS_ROOTFS}" \
> diff --git a/meta/classes/isar-events.bbclass b/meta/classes/isar-events.bbclass
> index 92aff20..73419b4 100644
> --- a/meta/classes/isar-events.bbclass
> +++ b/meta/classes/isar-events.bbclass
> @@ -8,8 +8,6 @@ addhandler build_started
>  
>  python build_started() {
>      bb.utils.remove(d.getVar('TMPDIR') + "/work/*/*/*/temp/once.*")
> -    bb.utils.remove(d.getVar('TMPDIR') + "/work/*/*/*/rootfs.mount")
> -    bb.utils.remove(d.getVar('TMPDIR') + "/deploy/buildchroot-*/*.mount")
>  }
>  build_started[eventmask] = "bb.event.BuildStarted"
>  
> @@ -54,7 +52,7 @@ python build_completed() {
>      with open('/proc/mounts') as f:
>          for line in f.readlines():
>              if basepath in line:
> -                bb.debug(1, '%s left mounted, unmounting...' % line.split()[1])
> +                bb.warn('%s left mounted, unmounting...' % line.split()[1])

Ah, you are brave and make this a real warning again. Means you've never
seen it?

>                  subprocess.call(
>                      ["sudo", "umount", "-l", line.split()[1]],
>                      stdout=subprocess.DEVNULL,
> diff --git a/meta/classes/rootfs.bbclass b/meta/classes/rootfs.bbclass
> index f9151c5..7b9fbb1 100644
> --- a/meta/classes/rootfs.bbclass
> +++ b/meta/classes/rootfs.bbclass
> @@ -26,38 +26,17 @@ export LANG = "C"
>  export LANGUAGE = "C"
>  export LC_ALL = "C"
>  
> -rootfs_do_mounts[weight] = "3"
> -rootfs_do_mounts() {
> -    sudo -s <<'EOSUDO'
> -        mountpoint -q '${ROOTFSDIR}/dev' || \
> -            mount --rbind /dev '${ROOTFSDIR}/dev'
> -        mount --make-rslave '${ROOTFSDIR}/dev'
> -        mountpoint -q '${ROOTFSDIR}/proc' || \
> -            mount -t proc none '${ROOTFSDIR}/proc'
> -        mountpoint -q '${ROOTFSDIR}/sys' || \
> -            mount --rbind /sys '${ROOTFSDIR}/sys'
> -        mount --make-rslave '${ROOTFSDIR}/sys'
> -
> -        # Mount isar-apt if the directory does not exist or if it is empty
> -        # This prevents overwriting something that was copied there
> -        if [ ! -e '${ROOTFSDIR}/isar-apt' ] || \
> -           [ "$(find '${ROOTFSDIR}/isar-apt' -maxdepth 1 -mindepth 1 | wc -l)" = "0" ]
> -        then
> -            mkdir -p '${ROOTFSDIR}/isar-apt'
> -            mountpoint -q '${ROOTFSDIR}/isar-apt' || \
> -                mount --bind '${REPO_ISAR_DIR}/${DISTRO}' '${ROOTFSDIR}/isar-apt'
> -        fi
> -
> -        # Mount base-apt if 'ISAR_USE_CACHED_BASE_REPO' is set
> -        if [ "${@repr(bb.utils.to_boolean(d.getVar('ISAR_USE_CACHED_BASE_REPO')))}" = 'True' ]
> -        then
> -            mkdir -p '${ROOTFSDIR}/base-apt'
> -            mountpoint -q '${ROOTFSDIR}/base-apt' || \
> -                mount --bind '${REPO_BASE_DIR}' '${ROOTFSDIR}/base-apt'
> -        fi
> -
> -EOSUDO
> -}
> +MOUNTS_CONTEXT = "${PN}-${DISTRO}-${DISTRO_ARCH}"
> +ROOTFS_MOUNTS = " \
> +    rbind:/dev:${ROOTFSDIR}/dev \
> +    proc::${ROOTFSDIR}/proc \
> +    rbind:/sys:${ROOTFSDIR}/sys \
> +    bind:${REPO_ISAR_DIR}/${DISTRO}:${ROOTFSDIR}/isar-apt \
> +    ${@oe.utils.vartrue("ISAR_USE_CACHED_BASE_REPO", "bind:${REPO_BASE_DIR}:${ROOTFSDIR}/base-apt", "", d)} \
> +    "
> +
> +mounts_task_prefunc[weight] = "3"
> +mounts_task_postfunc[weight] = "3"
>  
>  rootfs_do_qemu() {
>      if [ '${@repr(d.getVar('ROOTFS_ARCH') == d.getVar('HOST_ARCH'))}' = 'False' ]
> @@ -153,13 +132,15 @@ do_rootfs_install[root_cleandirs] = "${ROOTFSDIR}"
>  do_rootfs_install[vardeps] += "${ROOTFS_CONFIGURE_COMMAND} ${ROOTFS_INSTALL_COMMAND}"
>  do_rootfs_install[depends] = "isar-bootstrap-${@'target' if d.getVar('ROOTFS_ARCH') == d.getVar('DISTRO_ARCH') else 'host'}:do_build"
>  do_rootfs_install[deptask] = "do_deploy_deb"
> +do_rootfs_install[mounts] = "${ROOTFS_MOUNTS}"
> +do_rootfs_install[mounts-noauto] = "1"

Why "noauto"? How is it done otherwise?

>  python do_rootfs_install() {
>      configure_cmds = (d.getVar("ROOTFS_CONFIGURE_COMMAND", True) or "").split()
>      install_cmds = (d.getVar("ROOTFS_INSTALL_COMMAND", True) or "").split()
>  
>      # Mount after configure commands, so that they have time to copy
>      # 'isar-apt' (sdkchroot):
> -    cmds = ['rootfs_prepare'] + configure_cmds + ['rootfs_do_mounts'] + install_cmds
> +    cmds = ['rootfs_prepare'] + configure_cmds + ['mounts_task_prefunc'] + install_cmds + ['mounts_task_postfunc']
>  
>      # NOTE: The weights specify how long each task takes in seconds and are used
>      # by the MultiStageProgressReporter to render a progress bar for this task.
> @@ -230,9 +211,8 @@ rootfs_export_dpkg_status() {
>  }
>  
>  do_rootfs_postprocess[vardeps] = "${ROOTFS_POSTPROCESS_COMMAND}"
> +do_rootfs_postprocess[mounts] = "${ROOTFS_MOUNTS}"
>  python do_rootfs_postprocess() {
> -    # Take care that its correctly mounted:
> -    bb.build.exec_func('rootfs_do_mounts', d)
>      # Take care that qemu-*-static is available, since it could have been
>      # removed on a previous execution of this task:
>      bb.build.exec_func('rootfs_do_qemu', d)
> diff --git a/meta/classes/ubi-img.bbclass b/meta/classes/ubi-img.bbclass
> index c69ac4d..87f0187 100644
> --- a/meta/classes/ubi-img.bbclass
> +++ b/meta/classes/ubi-img.bbclass
> @@ -21,11 +21,10 @@ do_ubi_image() {
>  
>      rm -f '${DEPLOY_DIR_IMAGE}/${UBI_IMAGE_FILE}'
>  
> -    image_do_mounts
> -
>      # Create ubi image using buildchroot tools
>      sudo chroot ${BUILDCHROOT_DIR} /usr/sbin/ubinize ${UBINIZE_ARGS} \
>                  -o '${PP_DEPLOY}/${UBI_IMAGE_FILE}' '${PP_WORK}/${UBINIZE_CFG}'
>      sudo chown $(id -u):$(id -g) '${DEPLOY_DIR_IMAGE}/${UBI_IMAGE_FILE}'
>  }
>  addtask ubi_image before do_image after do_image_tools do_transform_template
> +do_ubi_image[mounts] = "${IMAGE_MOUNTS}"
> diff --git a/meta/classes/ubifs-img.bbclass b/meta/classes/ubifs-img.bbclass
> index 5d48c1d..c6775d6 100644
> --- a/meta/classes/ubifs-img.bbclass
> +++ b/meta/classes/ubifs-img.bbclass
> @@ -20,12 +20,11 @@ ISAR_CROSS_COMPILE_armhf = "1"
>  do_ubifs_image() {
>      rm -f '${DEPLOY_DIR_IMAGE}/${UBIFS_IMAGE_FILE}'
>  
> -    image_do_mounts
> -
>      # Create ubifs image using buildchroot tools
>      sudo chroot ${BUILDCHROOT_DIR} /usr/sbin/mkfs.ubifs ${MKUBIFS_ARGS} \
>                  -r '${PP_ROOTFS}' '${PP_DEPLOY}/${UBIFS_IMAGE_FILE}'
>      sudo chown $(id -u):$(id -g) '${DEPLOY_DIR_IMAGE}/${UBIFS_IMAGE_FILE}'
>  }
>  
> +do_ubifs_image[mounts] = "${IMAGE_MOUNTS}"
>  addtask ubifs_image before do_image after do_image_tools
> diff --git a/meta/classes/vm-img.bbclass b/meta/classes/vm-img.bbclass
> index b230af2..c6cfbf9 100644
> --- a/meta/classes/vm-img.bbclass
> +++ b/meta/classes/vm-img.bbclass
> @@ -33,13 +33,12 @@ CONVERSION_OPTIONS = "${@set_convert_options(d)}"
>  
>  do_convert_wic() {
>      rm -f '${DEPLOY_DIR_IMAGE}/${VIRTUAL_MACHINE_IMAGE_FILE}'
> -    image_do_mounts
>      bbnote "Creating ${VIRTUAL_MACHINE_IMAGE_FILE} from ${WIC_IMAGE_FILE}"
>      sudo -E  chroot --userspec=$( id -u ):$( id -g ) ${BUILDCHROOT_DIR} \
>      /usr/bin/qemu-img convert -f raw -O ${VIRTUAL_MACHINE_IMAGE_TYPE} ${CONVERSION_OPTIONS} \
>          '${PP_DEPLOY}/${SOURCE_IMAGE_FILE}' '${PP_DEPLOY}/${VIRTUAL_MACHINE_IMAGE_FILE}'
>  }
> -
> +do_convert_wic[mounts] = "${IMAGE_MOUNTS}"
>  addtask convert_wic before do_build after do_wic_image do_copy_boot_files do_install_imager_deps do_transform_template
>  
>  # User settings for OVA
> @@ -92,8 +91,6 @@ do_create_ova() {
>      export DISK_NAME=$(basename -s .vmdk ${VIRTUAL_MACHINE_DISK})
>      export LAST_CHANGE=$(date -u "+%Y-%m-%dT%H:%M:%SZ")
>  
> -    image_do_mounts
> -
>      sudo -Es chroot --userspec=$( id -u ):$( id -g ) ${BUILDCHROOT_DIR} <<'EOSUDO'
>          export DISK_SIZE_BYTES=$(qemu-img info -f vmdk "${VIRTUAL_MACHINE_DISK}" \
>                                   | gawk 'match($0, /^virtual size:.*\(([0-9]+) bytes\)/, a) {print a[1]}')
> @@ -112,5 +109,5 @@ do_create_ova() {
>          tar -uvf ${PP_DEPLOY}/${OVA_NAME}.ova -C ${PP_DEPLOY} ${VIRTUAL_MACHINE_IMAGE_FILE}
>  EOSUDO
>  }
> -
> +do_create_ova[mounts] = "${IMAGE_MOUNTS}"
>  addtask do_create_ova after do_convert_wic before do_deploy
> diff --git a/meta/classes/wic-img.bbclass b/meta/classes/wic-img.bbclass
> index d849ad9..f8f9c0d 100644
> --- a/meta/classes/wic-img.bbclass
> +++ b/meta/classes/wic-img.bbclass
> @@ -137,6 +137,17 @@ python check_for_wic_warnings() {
>  }
>  
>  do_wic_image[file-checksums] += "${WKS_FILE_CHECKSUM}"
> +do_wic_image[mounts] = "${BUILDCHROOT_MOUNTS}"
> +python() {
> +    buildchroot = d.getVar('BUILDCHROOT_DIR')
> +    dirs = ((d.getVar('BBLAYERS') or '').split() +
> +            (d.getVar('STAGING_DIR') or '').split() +
> +            (d.getVar('SCRIPTSDIR') or '').split() +
> +            (d.getVar('BITBAKEDIR') or '').split())
> +    for dir in dirs:
> +        d.appendVarFlag('do_wic_image', 'mounts', f" pbind:{dir}:{buildchroot}{dir}")
> +}
> +
>  python do_wic_image() {
>      bb.build.exec_func("generate_wic_image", d)
>      bb.build.exec_func("check_for_wic_warnings", d)
> @@ -144,17 +155,6 @@ python do_wic_image() {
>  addtask wic_image before do_image after do_image_tools
>  
>  generate_wic_image() {
> -    buildchroot_do_mounts
> -    sudo -s <<'EOSUDO'
> -        ( flock 9
> -        for dir in ${BBLAYERS} ${STAGING_DIR} ${SCRIPTSDIR} ${BITBAKEDIR}; do
> -            mkdir -p ${BUILDCHROOT_DIR}/$dir
> -            if ! mountpoint ${BUILDCHROOT_DIR}/$dir >/dev/null 2>&1; then
> -                mount --bind --make-private $dir ${BUILDCHROOT_DIR}/$dir
> -            fi
> -        done
> -        ) 9>${MOUNT_LOCKFILE}
> -EOSUDO
>      export FAKEROOTCMD=${FAKEROOTCMD}
>      export BUILDDIR=${BUILDDIR}
>      export MTOOLS_SKIP_CHECK=1
> @@ -200,13 +200,4 @@ EOSUDO
>      done
>      rm -rf ${BUILDCHROOT_DIR}/${WICTMP}
>      rm -rf ${IMAGE_ROOTFS}/../pseudo
> -    sudo -s <<'EOSUDO'
> -        ( flock 9
> -        for dir in ${BBLAYERS} ${STAGING_DIR} ${SCRIPTSDIR} ${BITBAKEDIR}; do
> -            if mountpoint -q ${BUILDCHROOT_DIR}/$dir; then
> -                umount ${BUILDCHROOT_DIR}/$dir
> -            fi
> -        done
> -        ) 9>${MOUNT_LOCKFILE}
> -EOSUDO
>  }
> diff --git a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
> index b8af676..cc6c073 100644
> --- a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
> +++ b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
> @@ -117,7 +117,6 @@ def get_apt_source_mirror(d, aptsources_entry_list):
>      mirror_list = [entry.split()
>                    for entry in premirrors.split('\\n')
>                    if any(entry)]
> -
>      for regex, replace in mirror_list:
>          match = re.search(regex, aptsources_entry_list[2])
>  
> @@ -318,9 +317,6 @@ do_bootstrap() {
>              fi
>              echo "deb ${line}" >  "${ROOTFSDIR}/etc/apt/sources.list.d/base-apt.list"
>              echo "deb-src ${line}" >>  "${ROOTFSDIR}/etc/apt/sources.list.d/base-apt.list"
> -
> -            mkdir -p ${ROOTFSDIR}/base-apt
> -            mount --bind ${REPO_BASE_DIR} ${ROOTFSDIR}/base-apt
>          else
>              install -v -m644 "${APTSRCS}" \
>                               "${ROOTFSDIR}/etc/apt/sources.list.d/bootstrap.list"
> @@ -364,13 +360,27 @@ do_bootstrap() {
>          install -v -m755 "${WORKDIR}/chroot-setup.sh" "${ROOTFSDIR}/chroot-setup.sh"
>          "${ROOTFSDIR}/chroot-setup.sh" "setup" "${ROOTFSDIR}"
>  
> -        # update APT
> -        mount --rbind /dev ${ROOTFSDIR}/dev
> -        mount --make-rslave ${ROOTFSDIR}/dev
> -        mount -t proc none ${ROOTFSDIR}/proc
> -        mount --rbind /sys ${ROOTFSDIR}/sys
> -        mount --make-rslave ${ROOTFSDIR}/sys
> +EOSUDO
> +    deb_dl_dir_export "${ROOTFSDIR}" "${BOOTSTRAP_DISTRO}"
> +}
>  
> +addtask bootstrap before do_build after do_generate_keyrings
> +do_bootstrap[vardeps] += " \
> +    DISTRO_APT_PREMIRRORS \
> +    ISAR_ENABLE_COMPAT_ARCH \
> +    ${DISTRO_VARS_PREFIX}DISTRO_APT_SOURCES \
> +    "
> +do_bootstrap[dirs] = "${DEPLOY_DIR_BOOTSTRAP}"
> +do_bootstrap[depends] = "base-apt:do_cache isar-apt:do_cache_config"
> +
> +
> +do_bootstrap_finalize() {
> +    E="${@ isar_export_proxies(d)}"
> +    export BOOTSTRAP_FOR_HOST E
> +
> +    deb_dl_dir_import "${ROOTFSDIR}" "${BOOTSTRAP_DISTRO}"
> +
> +    sudo -E -s <<'EOSUDO'
>          export DEBIAN_FRONTEND=noninteractive
>  
>          if [ "${BOOTSTRAP_FOR_HOST}" = "1" ]; then
> @@ -386,18 +396,19 @@ do_bootstrap() {
>          chroot "${ROOTFSDIR}" /usr/bin/apt-get dist-upgrade -y \
>                                  -o Debug::pkgProblemResolver=yes
>  
> -        umount -l "${ROOTFSDIR}/dev"
> -        umount -l "${ROOTFSDIR}/proc"
> -        umount -l "${ROOTFSDIR}/sys"
> -        umount -l "${ROOTFSDIR}/base-apt" || true
> -
>          # Finalize debootstrap by setting the link in deploy
>          ln -Tfsr "${ROOTFSDIR}" "${DEPLOY_ISAR_BOOTSTRAP}"
>  EOSUDO
>      deb_dl_dir_export "${ROOTFSDIR}" "${BOOTSTRAP_DISTRO}"
>  }
>  
> -addtask bootstrap before do_build after do_generate_keyrings
> +addtask bootstrap_finalize after do_bootstrap before do_build
> +do_bootstrap_finalize[mounts] = " \
> +    rbind:/dev:${ROOTFSDIR}/dev \
> +    proc::${ROOTFSDIR}/proc \
> +    rbind:/sys:${ROOTFSDIR}/sys \
> +    ${@oe.utils.vartrue("ISAR_USE_CACHED_BASE_REPO", "bind:${REPO_BASE_DIR}:${ROOTFSDIR}/base-apt", "", d)} \
> +    "
>  
>  CLEANFUNCS = "clean_deploy"
>  clean_deploy() {
> diff --git a/meta/recipes-devtools/buildchroot/buildchroot.inc b/meta/recipes-devtools/buildchroot/buildchroot.inc
> index 31524a1..ea4a3ba 100644
> --- a/meta/recipes-devtools/buildchroot/buildchroot.inc
> +++ b/meta/recipes-devtools/buildchroot/buildchroot.inc
> @@ -41,13 +41,7 @@ BUILDCHROOT_PREINSTALL_COMMON = " \
>      equivs \
>      adduser"
>  
> -rootfs_do_mounts_append() {
> -    sudo -s <<'EOSUDO'
> -    mkdir -p '${BUILDCHROOT_DIR}/downloads'
> -    mountpoint -q '${BUILDCHROOT_DIR}/downloads' || \
> -        mount --bind '${DL_DIR}' '${BUILDCHROOT_DIR}/downloads'
> -EOSUDO
> -}
> +ROOTFS_MOUNTS += "bind:${DL_DIR}:${BUILDCHROOT_DIR}/downloads"
>  
>  ROOTFS_POSTPROCESS_COMMAND =+ "buildchroot_install_files"
>  buildchroot_install_files() {
> 

Maybe split this up into multiple conversions of the same type?

Jan

-- 
Siemens AG, T RDA IOT
Corporate Competence Center Embedded Linux

^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: [RFC PATCH 3/5] meta: add oe.utils
  2021-10-13 10:17   ` Jan Kiszka
@ 2021-10-20  6:49     ` Schmidt, Adriaan
  0 siblings, 0 replies; 14+ messages in thread
From: Schmidt, Adriaan @ 2021-10-20  6:49 UTC (permalink / raw)
  To: jan.kiszka, isar-users

On 2021-10-13 12:17, Jan Kiszka wrote:
> On 12.10.21 15:04, Adriaan Schmidt wrote:
> > Taken unmodified from yocto-3.3.2 (commit
> 31c639eb8664059eb4ed711be9173c223b4cc940)
> 
> Reason missing ("will use X for Y").

Turns out we need it only for the function `vartrue`, which is nice to conditionally add strings to lists. But the same thing can be accomplished with an `if/else` and `bb.utils.to_boolean`. And because that second pattern is already used in Isar, I will drop oe.utils from the next version.

Adriaan


^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: [RFC PATCH 4/5] meta: add mounts class
  2021-10-13 10:31   ` Jan Kiszka
@ 2021-10-20  7:02     ` Schmidt, Adriaan
  0 siblings, 0 replies; 14+ messages in thread
From: Schmidt, Adriaan @ 2021-10-20  7:02 UTC (permalink / raw)
  To: jan.kiszka, isar-users

On 2021-10-13 12:31, Jan Kiszka wrote:
> On 12.10.21 15:04, Adriaan Schmidt wrote:
> > +def get_requested_mounts(d, task=None):
> > +    if task is None:
> > +        task = d.getVar('BB_CURRENTTASK')
> > +        if not task:
> > +            bb.fatal("mount code running without task context!?")
> > +    if task.startswith("do_"):
> > +        task = task[3:]
> > +    mounts = (d.getVarFlag("do_" + task, 'mounts') or "").split()
> 
> You first strip the do_ prefix, only to prepend it again. Is that
> indended? If so, maybe flip it around and prepend this if missing.

This is how OE's sstate.bbclass does it. I agree it's ugly, but turning it around does not look much better either.
It's about accepting task names with and without "do_", but I'm not even sure both cases can happen here. I'll have another look an clean this up.

> > +def shorten_path(x, n=3):
> > +    xs = x.split('/')
> > +    if len(xs) <= n:
> > +        return '/'.join(xs)
> > +    return '.../'+'/'.join(xs[-3:])
> 
> Hope this does not cut off any differentiating information, even when
> just targeting logs.

It might. During development it makes reading the prints much easier, but for proper logs we want the complete paths (will change for next version).

> > +python mounts_task_prefunc () {
> > +    from collections import namedtuple
> > +    Mount = namedtuple('Mount', 'cmd source target count')
> > +    task = d.getVar('PN') + ':' + d.getVar('BB_CURRENTTASK')
> > +    lock = bb.utils.lockfile(d.getVar("MOUNTS_LOCK"))
> > +    mounts = get_requested_mounts(d)
> > +    mtab = read_mtab(d)
> > +    for cmd, source, target in mounts:
> > +        mt = mtab.get(target)
> > +        if mt:
> > +            count = mt.count + 1
> > +            bb.debug(1, f"mount({task}): already mounted
> {shorten_path(mt.source)} at {shorten_path(mt.target)}, cnt={count}")
> > +            mtab[target] = mt._replace(count=count)
> > +            continue
> > +        bb.debug(1, f"mount({task}): mounting {shorten_path(source)} at
> {shorten_path(target)}, cnt=1")
> > +        d.setVar('MOUNT_ARG_SOURCE', source)
> > +        d.setVar('MOUNT_ARG_TARGET', target)
> > +        bb.build.exec_func('mount_' + cmd, d)
> 
> Could this fail and leave the lock blocked behind?

Oh yes, good point! This also needs a try/catch, just like the umounts.

> > +addhandler mounts_cleanup
> > +python mounts_cleanup() {
> > +    # look through MOUNTS_DB for contexts
> > +    import glob
> > +    import time
> > +    base = d.getVar('MOUNTS_DB')
> > +    locks = glob.glob(base + "/*.mountlock")
> > +    tabs = glob.glob(base + "/*.mounttab")
> > +
> > +    # there should not be any locks?
> > +    if len(locks) > 0:
> > +        bb.error(f"mounts_cleanup: someone still holding lock?
> ({str(locks)})")
> > +
> > +    # cleanup any existing contexts
> > +    for mtab_file in tabs:
> > +        mtab = read_mtab(d, mtab_file)
> > +        if len(mtab) > 0:
> > +            bb.note(f"mounts_cleanup: {mtab_file.split('/')[-1]}")
> > +
> > +        done = []
> > +        for target, mt in mtab.items():
> > +            if mt.count < 0:
> > +                bb.error("count on {target} < 0. BUG!?!")
> > +                continue
> > +            if mt.count > 0:
> > +                bb.error(f"cound on {target} > 0. BUG!?!")
> > +
> > +            bb.note(f"mounts_cleanup: unmounting {target}")
> > +            for i in range(10):
> > +                try:
> > +                    d.setVar('UMOUNT_ARG_TARGET', target)
> > +                    bb.build.exec_func('umount_' + mt.cmd, d)
> > +                    done.append(target)
> > +                    break
> > +                except bb.process.ExecutionError as e:
> > +                    if e.exitcode == 32:
> > +                        # target busy
> > +                        time.sleep(1)
> > +                        continue
> > +                    else:
> > +                        bb.error(f"umount({task}): {str(e)}")
> > +                        done.append(target)
> > +                        break
> > +                bb.warn(f"mounts_cleanup: failed to umount {target}")
> > +                done.append(target)
> > +
> > +        for target in done:
> > +            del mtab[target]
> > +        write_mtab(d, mtab, mtab_file)
> > +}
> > +
> > +mounts_cleanup[eventmask] = "bb.event.BuildCompleted"
> > diff --git a/meta/conf/bitbake.conf b/meta/conf/bitbake.conf
> > index 7f5901d..4726eaf 100644
> > --- a/meta/conf/bitbake.conf
> > +++ b/meta/conf/bitbake.conf
> > @@ -113,7 +113,7 @@ PARALLEL_MAKE ?= "-j ${@bb.utils.cpu_count()}"
> >  BBINCLUDELOGS ??= "yes"
> >
> >  # Add event handlers for bitbake
> > -INHERIT += "isar-events"
> > +INHERIT += "mounts isar-events"
> >
> >  include conf/local.conf
> >  include conf/multiconfig/${BB_CURRENT_MC}.conf
> >
> 
> With all this in place, did you see warning in build_completed() triggering?

In "normal" cases, the cleanup code of the new mounts class is already doing it's job. I have still seen issues in the ci tests (at least in the pre-avocado variant), which may be due to multiconfig and the concurrency/sharing it brings. This needs some more work on my side.

Adriaan


^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: [RFC PATCH 5/5] meta: refactor to use the new mounting mechanism
  2021-10-13 10:42   ` Jan Kiszka
@ 2021-10-20  9:20     ` Schmidt, Adriaan
  0 siblings, 0 replies; 14+ messages in thread
From: Schmidt, Adriaan @ 2021-10-20  9:20 UTC (permalink / raw)
  To: jan.kiszka, isar-users

On 2021-10-13 12:42, Jan Kiszka wrote:
> > -buildchroot_do_mounts() {
> > +buildchroot_task_prefunc() {
> >      sudo -s <<'EOSUDO'
> > -        ( flock 9
> >          set -e
> > -
> > -        mountpoint -q '${BUILDCHROOT_DIR}/isar-apt' ||
> > -            mount --bind '${REPO_ISAR_DIR}/${DISTRO}'
> '${BUILDCHROOT_DIR}/isar-apt'
> > -        mountpoint -q '${BUILDCHROOT_DIR}/downloads' ||
> > -            mount --bind '${DL_DIR}' '${BUILDCHROOT_DIR}/downloads'
> > -        mountpoint -q '${BUILDCHROOT_DIR}/dev' ||
> > -            mount --rbind /dev '${BUILDCHROOT_DIR}/dev'
> > -        mount --make-rslave '${BUILDCHROOT_DIR}/dev'
> > -        mountpoint -q '${BUILDCHROOT_DIR}/proc' ||
> > -            mount -t proc none '${BUILDCHROOT_DIR}/proc'
> > -        mountpoint -q '${BUILDCHROOT_DIR}/sys' ||
> > -            mount --rbind /sys '${BUILDCHROOT_DIR}/sys'
> > -        mount --make-rslave '${BUILDCHROOT_DIR}/sys'
> > -
> > -        # Mount base-apt if 'ISAR_USE_CACHED_BASE_REPO' is set
> > -        if [
> "${@repr(bb.utils.to_boolean(d.getVar('ISAR_USE_CACHED_BASE_REPO')))}" =
> 'True' ]
> > -        then
> > -            mkdir -p '${BUILDCHROOT_DIR}/base-apt'
> > -            mountpoint -q '${BUILDCHROOT_DIR}/base-apt' || \
> > -                mount --bind '${REPO_BASE_DIR}' '${BUILDCHROOT_DIR}/base-
> apt'
> > -        fi
> > -
> > -        # Refresh or remove /etc/resolv.conf at this chance
> > +        # Refresh or remove /etc/resolv.conf
> >          if [ "${@repr(bb.utils.to_boolean(d.getVar('BB_NO_NETWORK')))}" =
> 'True' ]
> >          then
> > -            rm -rf '${BUILDCHROOT_DIR}/etc/resolv.conf'
> > -        else
> > +            rm -f '${BUILDCHROOT_DIR}/etc/resolv.conf'
> > +        elif [ -d '${BUILDCHROOT_DIR}/etc' ]; then
> >              cp -L /etc/resolv.conf '${BUILDCHROOT_DIR}/etc'
> >          fi
> > -
> > -        ) 9>'${MOUNT_LOCKFILE}'
> 
> Which lock is protecting the remaining bits? If none, we need a word why
> that is fine.

The only remaining bit is writing resolv.conf, which I think is safe.
Will add a comment.


> > @@ -190,21 +187,6 @@ do_rootfs_finalize() {
> >              find "${ROOTFSDIR}/usr/bin" \
> >                  -maxdepth 1 -name 'qemu-*-static' -type f -delete
> >
> > -        mountpoint -q '${ROOTFSDIR}/isar-apt' && \
> > -            umount -l ${ROOTFSDIR}/isar-apt
> > -        rmdir --ignore-fail-on-non-empty ${ROOTFSDIR}/isar-apt
> > -
> > -        mountpoint -q '${ROOTFSDIR}/base-apt' && \
> > -            umount -l ${ROOTFSDIR}/base-apt
> > -        rmdir --ignore-fail-on-non-empty ${ROOTFSDIR}/base-apt
> 
> Who is deleting the empty isar-apt base-apt dirs?

No-one... The next version will add a feature, so we can define a mount as:

"bind,rmdir:${REPO_ISAR_DIR}/${DISTRO}:${BUILDCHROOT_DIR}/isar-apt"

and then the rmdir will run as part of unmounting.


> > @@ -54,7 +52,7 @@ python build_completed() {
> >      with open('/proc/mounts') as f:
> >          for line in f.readlines():
> >              if basepath in line:
> > -                bb.debug(1, '%s left mounted, unmounting...' %
> line.split()[1])
> > +                bb.warn('%s left mounted, unmounting...' %
> line.split()[1])
> 
> Ah, you are brave and make this a real warning again. Means you've never
> seen it?

The new mounts class comes with its own cleanup, so once that is finished, this one should not find anything.


> > @@ -153,13 +132,15 @@ do_rootfs_install[root_cleandirs] = "${ROOTFSDIR}"
> >  do_rootfs_install[vardeps] += "${ROOTFS_CONFIGURE_COMMAND}
> ${ROOTFS_INSTALL_COMMAND}"
> >  do_rootfs_install[depends] = "isar-bootstrap-${@'target' if
> d.getVar('ROOTFS_ARCH') == d.getVar('DISTRO_ARCH') else 'host'}:do_build"
> >  do_rootfs_install[deptask] = "do_deploy_deb"
> > +do_rootfs_install[mounts] = "${ROOTFS_MOUNTS}"
> > +do_rootfs_install[mounts-noauto] = "1"
> 
> Why "noauto"? How is it done otherwise?

"normally" the mount and unmount functions are added to do_task[prefuncs/postfuncs] and run automatically, with some additional code to catch task-failed events and do the unmounting in that case.

The mounts-noauto flag is in case we need mounting/unmounting to happen at different times. Then the functions need to be called explicitly by the task (cf. below: do_rootfs_install adds them to its list of cmds).
If the task fails, there is no way to know if unmounting still needs to happen, but this case should be rare, and the final cleanup handler will catch that.

> >  python do_rootfs_install() {
> >      configure_cmds = (d.getVar("ROOTFS_CONFIGURE_COMMAND", True) or "").split()
> >      install_cmds = (d.getVar("ROOTFS_INSTALL_COMMAND", True) or "").split()
> >
> >      # Mount after configure commands, so that they have time to copy
> >      # 'isar-apt' (sdkchroot):
> > -    cmds = ['rootfs_prepare'] + configure_cmds + ['rootfs_do_mounts'] + install_cmds
> > +    cmds = ['rootfs_prepare'] + configure_cmds + ['mounts_task_prefunc'] + install_cmds + ['mounts_task_postfunc']


> Maybe split this up into multiple conversions of the same type?

Yes, V2 will split this commit to make review easier.

Adriaan

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 0/5] Refactor mount logic
  2021-10-12 13:04 [RFC PATCH 0/5] Refactor mount logic Adriaan Schmidt
                   ` (4 preceding siblings ...)
  2021-10-12 13:04 ` [RFC PATCH 5/5] meta: refactor to use the new mounting mechanism Adriaan Schmidt
@ 2021-11-22 14:45 ` Anton Mikanovich
  5 siblings, 0 replies; 14+ messages in thread
From: Anton Mikanovich @ 2021-11-22 14:45 UTC (permalink / raw)
  To: Adriaan Schmidt, isar-users; +Cc: Baurzhan Ismagulov

12.10.2021 16:04, Adriaan Schmidt wrote:
> After some reports on problems with mounting and unmounting, here is a proposal
> to refactor the whole mount logic.
>
> The main idea is to have all actual mounting happen in mounts.bbclass, which
> also takes care of things like reference counting.
> Any task that needs mounts uses a `do_my_task[mounts] = "..."` varflag annotation.
>
> One thing that may need some thoughts: the original rootfs_do_mounts also
> writes /etc/resolv.conf. This proposal introduces another varflag
> (`do_my_task[buildchroot] = "1"`) for tasks that need that. Alternatively,
> we could simply provide a function that needs to be called explicitly.
>
> Current status: the code works, but there may be some issues with cleanup/unmounting.
> The existing ("legacy") cleanup code in isar-events still finds active mounts,
> which should actually be covered by cleanup code in mounts.bbclass.
>
>
> Adriaan Schmidt (5):
>    oe imports in central location
>    meta: refactor containerization
>    meta: add oe.utils
>    meta: add mounts class
>    meta: refactor to use the new mounting mechanism
>
>   meta/classes/base.bbclass                     |  28 +
>   meta/classes/buildchroot.bbclass              |  52 +-
>   meta/classes/cpiogz-img.bbclass               |   3 +-
>   meta/classes/dpkg-base.bbclass                |  49 +-
>   meta/classes/dpkg-gbp.bbclass                 |   2 -
>   meta/classes/dpkg.bbclass                     |   4 +-
>   meta/classes/ext4-img.bbclass                 |   3 +-
>   meta/classes/fit-img.bbclass                  |   4 +-
>   .../classes/image-container-extension.bbclass |   3 +-
>   meta/classes/image-sdk-extension.bbclass      |   6 +-
>   meta/classes/image-tools-extension.bbclass    |   4 +-
>   meta/classes/image.bbclass                    |  30 +-
>   meta/classes/initramfs.bbclass                |   2 +-
>   meta/classes/isar-events.bbclass              |   4 +-
>   meta/classes/mounts.bbclass                   | 271 +++++++++
>   meta/classes/patch.bbclass                    |   5 -
>   meta/classes/rootfs.bbclass                   |  50 +-
>   meta/classes/ubi-img.bbclass                  |   3 +-
>   meta/classes/ubifs-img.bbclass                |   3 +-
>   meta/classes/vm-img.bbclass                   |   7 +-
>   meta/classes/wic-img.bbclass                  |  31 +-
>   meta/conf/bitbake.conf                        |   2 +-
>   meta/lib/oe/utils.py                          | 569 ++++++++++++++++++
>   .../isar-bootstrap/isar-bootstrap.inc         |  43 +-
>   .../buildchroot/buildchroot.inc               |   8 +-
>   25 files changed, 976 insertions(+), 210 deletions(-)
>   create mode 100644 meta/classes/mounts.bbclass
>   create mode 100644 meta/lib/oe/utils.py
>
There are at least 6 cases needs to be checked after any mount rework:
1) There is no while loop issue during workdir umount
2) Fail on parallel build do not affect other recipes
3) Workdir should be unmounted in case build fail
4) Fail during mounting should not left paths unmounted
5) There is no `Target is busy` issues during high load
6) Host /dev is not get corrupted after `device is busy` issues

All of that issues were raised before and we need to be sure any rebuild 
is safe.
That's why we will check those 6 cases on the next version before going 
forward.
Will provide test case details if needed.

-- 
Anton Mikanovich
Promwad Ltd.
External service provider of ilbers GmbH
Maria-Merian-Str. 8
85521 Ottobrunn, Germany
+49 (89) 122 67 24-0
Commercial register Munich, HRB 214197
General Manager: Baurzhan Ismagulov


^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2021-11-22 14:45 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-10-12 13:04 [RFC PATCH 0/5] Refactor mount logic Adriaan Schmidt
2021-10-12 13:04 ` [RFC PATCH 1/5] oe imports in central location Adriaan Schmidt
2021-10-12 13:04 ` [RFC PATCH 2/5] meta: refactor containerization Adriaan Schmidt
2021-10-13 10:17   ` Jan Kiszka
2021-10-12 13:04 ` [RFC PATCH 3/5] meta: add oe.utils Adriaan Schmidt
2021-10-13 10:17   ` Jan Kiszka
2021-10-20  6:49     ` Schmidt, Adriaan
2021-10-12 13:04 ` [RFC PATCH 4/5] meta: add mounts class Adriaan Schmidt
2021-10-13 10:31   ` Jan Kiszka
2021-10-20  7:02     ` Schmidt, Adriaan
2021-10-12 13:04 ` [RFC PATCH 5/5] meta: refactor to use the new mounting mechanism Adriaan Schmidt
2021-10-13 10:42   ` Jan Kiszka
2021-10-20  9:20     ` Schmidt, Adriaan
2021-11-22 14:45 ` [RFC PATCH 0/5] Refactor mount logic Anton Mikanovich

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox