From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Maxim Yu. Osipov" <mosipov@ilbers.de>
To: isar-users@googlegroups.com
Subject: [PATCH v2 1/2] bitbake: Update to the release 1.40.0
Date: Fri, 9 Nov 2018 09:59:02 +0100
Message-Id: <20181109085903.8299-2-mosipov@ilbers.de>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20181109085903.8299-1-mosipov@ilbers.de>
References: <20181109085903.8299-1-mosipov@ilbers.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Origin: https://github.com/openembedded/bitbake.git
Commit: 2820e7aab2203fc6cf7127e433a80b7d13ba75e0
Author: Richard Purdie
Date: Sat Oct 20 14:26:41 2018 +0100

bitbake: Bump version to 1.40.0
Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
---
 bitbake/bin/bitbake | 2 +-
 bitbake/bin/bitbake-selftest | 7 +-
 bitbake/bin/toaster | 13 +-
 bitbake/contrib/dump_cache.py | 85 +-
 .../bitbake-user-manual-execution.xml | 2 +-
 .../bitbake-user-manual-fetching.xml | 40 +-
 .../bitbake-user-manual-hello.xml | 8 +-
 .../bitbake-user-manual-intro.xml | 165 ++-
 .../bitbake-user-manual-metadata.xml | 115 +-
 .../bitbake-user-manual-ref-variables.xml | 59 +-
 .../bitbake-user-manual/bitbake-user-manual.xml | 2 +-
 .../figures/bb_multiconfig_files.png | 0
 bitbake/lib/bb/COW.py | 2 +-
 bitbake/lib/bb/__init__.py | 18 +-
 bitbake/lib/bb/build.py | 8 +-
 bitbake/lib/bb/cache.py | 7 +-
 bitbake/lib/bb/checksum.py | 2 +
 bitbake/lib/bb/codeparser.py | 4 +-
 bitbake/lib/bb/cooker.py | 57 +-
 bitbake/lib/bb/cookerdata.py | 5 +-
 bitbake/lib/bb/daemonize.py | 25 +-
 bitbake/lib/bb/data.py | 61 +-
 bitbake/lib/bb/data_smart.py | 108 +-
 bitbake/lib/bb/event.py | 5 +-
 bitbake/lib/bb/fetch2/__init__.py | 62 +-
 bitbake/lib/bb/fetch2/bzr.py | 5 +-
 bitbake/lib/bb/fetch2/clearcase.py | 3 +-
 bitbake/lib/bb/fetch2/cvs.py | 5 +-
 bitbake/lib/bb/fetch2/git.py | 66 +-
 bitbake/lib/bb/fetch2/gitsm.py | 264 ++--
 bitbake/lib/bb/fetch2/hg.py | 2 +-
 bitbake/lib/bb/fetch2/npm.py | 9 +-
 bitbake/lib/bb/fetch2/osc.py | 5 +-
 bitbake/lib/bb/fetch2/perforce.py | 8 +-
 bitbake/lib/bb/fetch2/repo.py | 12 +-
 bitbake/lib/bb/fetch2/svn.py | 5 +-
 bitbake/lib/bb/main.py | 15 +-
 bitbake/lib/bb/msg.py | 3 +
 bitbake/lib/bb/parse/__init__.py | 3 +-
 bitbake/lib/bb/parse/ast.py | 46 +-
 bitbake/lib/bb/parse/parse_py/BBHandler.py | 3 -
 bitbake/lib/bb/parse/parse_py/ConfHandler.py | 3 -
 bitbake/lib/bb/runqueue.py | 278 ++--
 bitbake/lib/bb/server/process.py | 27 +-
 bitbake/lib/bb/siggen.py | 54 +-
 bitbake/lib/bb/taskdata.py | 18 +-
 bitbake/lib/bb/tests/cooker.py | 83 ++
 bitbake/lib/bb/tests/data.py | 77 +-
 bitbake/lib/bb/tests/fetch.py | 295 ++++-
 bitbake/lib/bb/tests/parse.py | 4 +
 bitbake/lib/bb/ui/buildinfohelper.py | 9 +-
 bitbake/lib/bb/ui/taskexp.py | 10 +-
 bitbake/lib/bb/utils.py | 60 +-
 bitbake/lib/bblayers/action.py | 2 +-
 bitbake/lib/bblayers/layerindex.py | 323 ++---
 bitbake/lib/layerindexlib/README | 28 +
 bitbake/lib/layerindexlib/__init__.py | 1363 ++++++++++++++++++++
 bitbake/lib/layerindexlib/cooker.py | 344 +++++
 bitbake/lib/layerindexlib/plugin.py | 60 +
 bitbake/lib/layerindexlib/restapi.py | 398 ++++++
 bitbake/lib/layerindexlib/tests/__init__.py | 0
 bitbake/lib/layerindexlib/tests/common.py | 43 +
 bitbake/lib/layerindexlib/tests/cooker.py | 123 ++
 bitbake/lib/layerindexlib/tests/layerindexobj.py | 226 ++++
 bitbake/lib/layerindexlib/tests/restapi.py | 184 +++
 bitbake/lib/layerindexlib/tests/testdata/README | 11 +
 .../tests/testdata/build/conf/bblayers.conf | 15 +
 .../tests/testdata/layer1/conf/layer.conf | 17 +
 .../tests/testdata/layer2/conf/layer.conf | 20 +
 .../tests/testdata/layer3/conf/layer.conf | 19 +
 .../tests/testdata/layer4/conf/layer.conf | 22 +
 .../toaster/bldcontrol/localhostbecontroller.py | 212 ++-
 .../management/commands/checksettings.py | 8 +-
 .../bldcontrol/management/commands/runbuilds.py | 2 +-
 bitbake/lib/toaster/orm/fixtures/oe-core.xml | 28 +-
 bitbake/lib/toaster/orm/fixtures/poky.xml | 76 +-
 .../toaster/orm/management/commands/lsupdates.py | 228 ++--
 .../orm/migrations/0018_project_specific.py | 28 +
 bitbake/lib/toaster/orm/models.py | 74 +-
 bitbake/lib/toaster/toastergui/api.py | 176 ++-
 .../lib/toaster/toastergui/static/js/layerBtn.js | 12 +
 .../toaster/toastergui/static/js/layerdetails.js | 3 +-
 .../lib/toaster/toastergui/static/js/libtoaster.js | 108 +-
 .../lib/toaster/toastergui/static/js/mrbsection.js | 4 +-
 .../toastergui/static/js/newcustomimage_modal.js | 7 +
 .../toaster/toastergui/static/js/projecttopbar.js | 22 +
 bitbake/lib/toaster/toastergui/tables.py | 12 +-
 .../toastergui/templates/base_specific.html | 128 ++
 .../templates/baseprojectspecificpage.html | 48 +
 .../toastergui/templates/customise_btn.html | 6 +-
 .../templates/generic-toastertable-page.html | 2 +-
 .../toaster/toastergui/templates/importlayer.html | 4 +-
 .../toastergui/templates/landing_specific.html | 50 +
 .../toaster/toastergui/templates/layerdetails.html | 3 +-
 .../toaster/toastergui/templates/mrb_section.html | 2 +-
 .../toastergui/templates/newcustomimage.html | 4 +-
 .../toaster/toastergui/templates/newproject.html | 57 +-
 .../toastergui/templates/newproject_specific.html | 95 ++
 .../lib/toaster/toastergui/templates/project.html | 7 +-
 .../toastergui/templates/project_specific.html | 162 +++
 .../templates/project_specific_topbar.html | 80 ++
 .../toaster/toastergui/templates/projectconf.html | 7 +-
 .../lib/toaster/toastergui/templates/recipe.html | 2 +-
 .../toastergui/templates/recipe_add_btn.html | 23 +
 bitbake/lib/toaster/toastergui/urls.py | 13 +
 bitbake/lib/toaster/toastergui/views.py | 165 ++-
 bitbake/lib/toaster/toastergui/widgets.py | 23 +-
 .../toastermain/management/commands/builddelete.py | 6 +-
 .../toastermain/management/commands/buildimport.py | 584 +++++++++
 bitbake/toaster-requirements.txt | 2 +-
 110 files changed, 6935 insertions(+), 970 deletions(-)
 create mode 100644 bitbake/doc/bitbake-user-manual/figures/bb_multiconfig_files.png
 create mode 100644 bitbake/lib/bb/tests/cooker.py
 create mode 100644 bitbake/lib/layerindexlib/README
 create mode 100644 bitbake/lib/layerindexlib/__init__.py
 create mode 100644 bitbake/lib/layerindexlib/cooker.py
 create mode 100644 bitbake/lib/layerindexlib/plugin.py
 create mode 100644 bitbake/lib/layerindexlib/restapi.py
 create mode 100644 bitbake/lib/layerindexlib/tests/__init__.py
 create mode 100644 bitbake/lib/layerindexlib/tests/common.py
 create mode 100644 bitbake/lib/layerindexlib/tests/cooker.py
 create mode 100644 bitbake/lib/layerindexlib/tests/layerindexobj.py
 create mode 100644 bitbake/lib/layerindexlib/tests/restapi.py
 create mode 100644 bitbake/lib/layerindexlib/tests/testdata/README
 create mode 100644 bitbake/lib/layerindexlib/tests/testdata/build/conf/bblayers.conf
 create mode 100644 bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf
 create mode 100644 bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf
 create mode 100644 bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf
 create mode 100644 bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf
 create mode 100644 bitbake/lib/toaster/orm/migrations/0018_project_specific.py
 create mode 100644 bitbake/lib/toaster/toastergui/templates/base_specific.html
 create mode 100644 bitbake/lib/toaster/toastergui/templates/baseprojectspecificpage.html
 create mode 100644 bitbake/lib/toaster/toastergui/templates/landing_specific.html
 create mode 100644 bitbake/lib/toaster/toastergui/templates/newproject_specific.html
 create mode 100644 bitbake/lib/toaster/toastergui/templates/project_specific.html
 create mode 100644 bitbake/lib/toaster/toastergui/templates/project_specific_topbar.html
 create mode 100644 bitbake/lib/toaster/toastergui/templates/recipe_add_btn.html
 mode change 100755 => 100644 bitbake/lib/toaster/toastergui/views.py
 create mode 100644 bitbake/lib/toaster/toastermain/management/commands/buildimport.py

diff --git
a/bitbake/bin/bitbake b/bitbake/bin/bitbake index 95e4109..57dec2a 100755 --- a/bitbake/bin/bitbake +++ b/bitbake/bin/bitbake @@ -38,7 +38,7 @@ from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException if sys.getfilesystemencoding() != "utf-8": sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.") -__version__ = "1.37.0" +__version__ = "1.40.0" if __name__ == "__main__": if __version__ != bb.__version__: diff --git a/bitbake/bin/bitbake-selftest b/bitbake/bin/bitbake-selftest index afe1603..cfa7ac5 100755 --- a/bitbake/bin/bitbake-selftest +++ b/bitbake/bin/bitbake-selftest @@ -22,16 +22,21 @@ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib import unittest try: import bb + import layerindexlib except RuntimeError as exc: sys.exit(str(exc)) tests = ["bb.tests.codeparser", + "bb.tests.cooker", "bb.tests.cow", "bb.tests.data", "bb.tests.event", "bb.tests.fetch", "bb.tests.parse", - "bb.tests.utils"] + "bb.tests.utils", + "layerindexlib.tests.layerindexobj", + "layerindexlib.tests.restapi", + "layerindexlib.tests.cooker"] for t in tests: t = '.'.join(t.split('.')[:3]) diff --git a/bitbake/bin/toaster b/bitbake/bin/toaster index 4036f0a..9fffbc6 100755 --- a/bitbake/bin/toaster +++ b/bitbake/bin/toaster @@ -18,11 +18,12 @@ # along with this program. If not, see http://www.gnu.org/licenses/. HELP=" -Usage: source toaster start|stop [webport=] [noweb] [nobuild] +Usage: source toaster start|stop [webport=] [noweb] [nobuild] [toasterdir] Optional arguments: [nobuild] Setup the environment for capturing builds with toaster but disable managed builds [noweb] Setup the environment for capturing builds with toaster but don't start the web server [webport] Set the development server (default: localhost:8000) + [toasterdir] Set absolute path to be used as TOASTER_DIR (default: BUILDDIR/../) " custom_extention() @@ -68,7 +69,7 @@ webserverKillAll() if [ -f ${pidfile} ]; then pid=`cat ${pidfile}` while kill -0 $pid 2>/dev/null; do - kill -SIGTERM -$pid 2>/dev/null + kill -SIGTERM $pid 2>/dev/null sleep 1 done rm ${pidfile} @@ -91,7 +92,7 @@ webserverStartAll() echo "Starting webserver..." - $MANAGE runserver "$ADDR_PORT" \ + $MANAGE runserver --noreload "$ADDR_PORT" \ >${BUILDDIR}/toaster_web.log 2>&1 \ & echo $! >${BUILDDIR}/.toastermain.pid @@ -186,6 +187,7 @@ unset OE_ROOT WEBSERVER=1 export TOASTER_BUILDSERVER=1 ADDR_PORT="localhost:8000" +TOASTERDIR=`dirname $BUILDDIR` unset CMD for param in $*; do case $param in @@ -211,6 +213,9 @@ for param in $*; do ADDR_PORT="localhost:$PORT" fi ;; + toasterdir=*) + TOASTERDIR="${param#*=}" + ;; --help) echo "$HELP" return 0 @@ -241,7 +246,7 @@ fi # 2) the build dir (in build) # 3) the sqlite db if that is being used. # 4) pid's we need to clean up on exit/shutdown -export TOASTER_DIR=`dirname $BUILDDIR` +export TOASTER_DIR=$TOASTERDIR export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE TOASTER_DIR" # Determine the action. If specified by arguments, fine, if not, toggle it diff --git a/bitbake/contrib/dump_cache.py b/bitbake/contrib/dump_cache.py index f4d4c1b..8963ca4 100755 --- a/bitbake/contrib/dump_cache.py +++ b/bitbake/contrib/dump_cache.py @@ -2,7 +2,7 @@ # ex:ts=4:sw=4:sts=4:et # -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*- # -# Copyright (C) 2012 Wind River Systems, Inc. +# Copyright (C) 2012, 2018 Wind River Systems, Inc. 
# # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License version 2 as @@ -18,51 +18,68 @@ # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. # -# This is used for dumping the bb_cache.dat, the output format is: -# recipe_path PN PV PACKAGES +# Used for dumping the bb_cache.dat # import os import sys -import warnings +import argparse # For importing bb.cache sys.path.insert(0, os.path.join(os.path.abspath(os.path.dirname(sys.argv[0])), '../lib')) from bb.cache import CoreRecipeInfo -import pickle as pickle +import pickle -def main(argv=None): - """ - Get the mapping for the target recipe. - """ - if len(argv) != 1: - print("Error, need one argument!", file=sys.stderr) - return 2 +class DumpCache(object): + def __init__(self): + parser = argparse.ArgumentParser( + description="bb_cache.dat's dumper", + epilog="Use %(prog)s --help to get help") + parser.add_argument("-r", "--recipe", + help="specify the recipe, default: all recipes", action="store") + parser.add_argument("-m", "--members", + help = "specify the member, use comma as separator for multiple ones, default: all members", action="store", default="") + parser.add_argument("-s", "--skip", + help = "skip skipped recipes", action="store_true") + parser.add_argument("cachefile", + help = "specify bb_cache.dat", nargs = 1, action="store", default="") - cachefile = argv[0] + self.args = parser.parse_args() - with open(cachefile, "rb") as cachefile: - pickled = pickle.Unpickler(cachefile) - while cachefile: - try: - key = pickled.load() - val = pickled.load() - except Exception: - break - if isinstance(val, CoreRecipeInfo) and (not val.skipped): - pn = val.pn - # Filter out the native recipes. - if key.startswith('virtual:native:') or pn.endswith("-native"): - continue + def main(self): + with open(self.args.cachefile[0], "rb") as cachefile: + pickled = pickle.Unpickler(cachefile) + while True: + try: + key = pickled.load() + val = pickled.load() + except Exception: + break + if isinstance(val, CoreRecipeInfo): + pn = val.pn - # 1.0 is the default version for a no PV recipe. - if "pv" in val.__dict__: - pv = val.pv - else: - pv = "1.0" + if self.args.recipe and self.args.recipe != pn: + continue - print("%s %s %s %s" % (key, pn, pv, ' '.join(val.packages))) + if self.args.skip and val.skipped: + continue -if __name__ == "__main__": - sys.exit(main(sys.argv[1:])) + if self.args.members: + out = key + for member in self.args.members.split(','): + out += ": %s" % val.__dict__.get(member) + print("%s" % out) + else: + print("%s: %s" % (key, val.__dict__)) + elif not self.args.recipe: + print("%s %s" % (key, val)) +if __name__ == "__main__": + try: + dump = DumpCache() + ret = dump.main() + except Exception as esc: + ret = 1 + import traceback + traceback.print_exc() + sys.exit(ret) diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.xml index e4cc422..f1caaec 100644 --- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.xml +++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.xml @@ -781,7 +781,7 @@ The code in meta/lib/oe/sstatesig.py shows two examples of this and also illustrates how you can insert your own policy into the system if so desired. - This file defines the two basic signature generators OpenEmbedded Core + This file defines the two basic signature generators OpenEmbedded-Core uses: "OEBasic" and "OEBasicHash". 
By default, there is a dummy "noop" signature handler enabled in BitBake. This means that behavior is unchanged from previous versions. diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml index c721e86..29ae486 100644 --- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml +++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml @@ -777,6 +777,43 @@ +
+ Repo Fetcher (<filename>repo://</filename>) + + + This fetcher submodule fetches code from + google-repo source control system. + The fetcher works by initiating and syncing sources of the + repository into + REPODIR, + which is usually + DL_DIR/repo. + + + + This fetcher supports the following parameters: + + + "protocol": + Protocol to fetch the repository manifest (default: git). + + + "branch": + Branch or tag of repository to get (default: master). + + + "manifest": + Name of the manifest file (default: default.xml). + + + Here are some example URLs: + + SRC_URI = "repo://REPOROOT;protocol=git;branch=some_branch;manifest=my_manifest.xml" + SRC_URI = "repo://REPOROOT;protocol=file;branch=some_branch;manifest=my_manifest.xml" + + +
+
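
For orientation, here is how the REPOROOT placeholder in the examples above
might be filled in inside a recipe. The manifest host and branch below are
hypothetical values for illustration, not taken from this patch:

   # Hypothetical repo fetcher usage; sources are initialised and synced
   # under REPODIR (usually DL_DIR/repo).
   SRC_URI = "repo://android.googlesource.com/platform/manifest;protocol=git;branch=stable;manifest=default.xml"
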
Other Fetchers @@ -796,9 +833,6 @@ Secure Shell (ssh://) - Repo (repo://) - - OSC (osc://) diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml index f1060e5..9076f0f 100644 --- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml +++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml @@ -383,10 +383,10 @@ code separate from the general metadata used by BitBake. Thus, this example creates and uses a layer called "mylayer". - You can find additional information on layers at - . - - + You can find additional information on layers in the + "Layers" section. + + Minimally, you need a recipe file and a layer configuration file in your layer. The configuration file needs to be in the conf diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml index eb45809..9e2e6b2 100644 --- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml +++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml @@ -564,8 +564,12 @@ Writes the event log of the build to a bitbake event json file. Use '' (empty string) to assign the name automatically. - --runall=RUNALL Run the specified task for all build targets and their - dependencies. + --runall=RUNALL Run the specified task for any recipe in the taskgraph + of the specified target (even if it wouldn't otherwise + have run). + --runonly=RUNONLY Run only the specified task within the taskgraph of + the specified targets (and any task dependencies those + tasks may have).
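
As a command-line sketch of the difference between these two new options
(the target name "mytarget" is hypothetical):

   $ bitbake --runall=fetch mytarget    # run do_fetch for every recipe in the
                                        # task graph, even where it would not
                                        # otherwise have run
   $ bitbake --runonly=fetch mytarget   # run only the do_fetch tasks (plus any
                                        # task dependencies those tasks have)
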
@@ -719,6 +723,163 @@ + +
+ Executing a Multiple Configuration Build + + + BitBake is able to build multiple images or packages + using a single command where the different targets + require different configurations (multiple configuration + builds). + Each target, in this scenario, is referred to as a + "multiconfig". + + + + To accomplish a multiple configuration build, you must + define each target's configuration separately using + a parallel configuration file in the build directory. + The location for these multiconfig configuration files + is specific. + They must reside in the current build directory in + a sub-directory of conf named + multiconfig. + Following is an example for two separate targets: + + + + + The reason for this required file hierarchy + is because the BBPATH variable + is not constructed until the layers are parsed. + Consequently, using the configuration file as a + pre-configuration file is not possible unless it is + located in the current working directory. + + + + Minimally, each configuration file must define the + machine and the temporary directory BitBake uses + for the build. + Suggested practice dictates that you do not + overlap the temporary directories used during the + builds. + + + + Aside from separate configuration files for each + target, you must also enable BitBake to perform multiple + configuration builds. + Enabling is accomplished by setting the + BBMULTICONFIG + variable in the local.conf + configuration file. + As an example, suppose you had configuration files + for target1 and + target2 defined in the build + directory. + The following statement in the + local.conf file both enables + BitBake to perform multiple configuration builds and + specifies the two multiconfigs: + + BBMULTICONFIG = "target1 target2" + + + + + Once the target configuration files are in place and + BitBake has been enabled to perform multiple configuration + builds, use the following command form to start the + builds: + + $ bitbake [multiconfig:multiconfigname:]target [[[multiconfig:multiconfigname:]target] ... ] + + Here is an example for two multiconfigs: + target1 and + target2: + + $ bitbake multiconfig:target1:target multiconfig:target2:target + + +
+ +
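
To make the required hierarchy concrete, a build directory prepared for the
two example targets might contain the following; the machine names and
temporary directories are hypothetical (in OpenEmbedded terms these are
typically MACHINE and TMPDIR):

   conf/local.conf                 # contains: BBMULTICONFIG = "target1 target2"
   conf/multiconfig/target1.conf   # e.g. MACHINE = "machineA"
                                   #      TMPDIR = "${TOPDIR}/tmp-target1"
   conf/multiconfig/target2.conf   # e.g. MACHINE = "machineB"
                                   #      TMPDIR = "${TOPDIR}/tmp-target2"
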
+ Enabling Multiple Configuration Build Dependencies + + + Sometimes dependencies can exist between targets + (multiconfigs) in a multiple configuration build. + For example, suppose that in order to build an image + for a particular architecture, the root filesystem of + another build for a different architecture needs to + exist. + In other words, the image for the first multiconfig depends + on the root filesystem of the second multiconfig. + This dependency is essentially that the task in the recipe + that builds one multiconfig is dependent on the + completion of the task in the recipe that builds + another multiconfig. + + + + To enable dependencies in a multiple configuration + build, you must declare the dependencies in the recipe + using the following statement form: + + task_or_package[mcdepends] = "multiconfig:from_multiconfig:to_multiconfig:recipe_name:task_on_which_to_depend" + + To better show how to use this statement, consider an + example with two multiconfigs: target1 + and target2: + + image_task[mcdepends] = "multiconfig:target1:target2:image2:rootfs_task" + + In this example, the + from_multiconfig is "target1" and + the to_multiconfig is "target2". + The task on which the image whose recipe contains + image_task depends on the + completion of the rootfs_task + used to build out image2, which + is associated with the "target2" multiconfig. + + + + Once you set up this dependency, you can build the + "target1" multiconfig using a BitBake command as follows: + + $ bitbake multiconfig:target1:image1 + + This command executes all the tasks needed to create + image1 for the "target1" + multiconfig. + Because of the dependency, BitBake also executes through + the rootfs_task for the "target2" + multiconfig build. + + + + Having a recipe depend on the root filesystem of another + build might not seem that useful. + Consider this change to the statement in the + image1 recipe: + + image_task[mcdepends] = "multiconfig:target1:target2:image2:image_task" + + In this case, BitBake must create + image2 for the "target2" + build since the "target1" build depends on it. + + + + Because "target1" and "target2" are enabled for multiple + configuration builds and have separate configuration + files, BitBake places the artifacts for each build in the + respective temporary build directories. + +
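
Condensing the example above into one place: with this line in the
image1 recipe (recipe and task names are the hypothetical ones used in this
section), a single command builds image1 for "target1" and pulls in the
rootfs task of image2 from "target2":

   image_task[mcdepends] = "multiconfig:target1:target2:image2:rootfs_task"

   $ bitbake multiconfig:target1:image1
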
diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml index f0cfffe..fc55ef6 100644 --- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml +++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml @@ -342,7 +342,7 @@ When you use this syntax, BitBake expects one or more strings. - Surrounding spaces are removed as well. + Surrounding spaces and spacing are preserved. Here is an example: FOO = "123 456 789 123456 123 456 123 456" @@ -352,8 +352,8 @@ FOO2_remove = "abc def" The variable FOO becomes - "789 123456" and FOO2 becomes - "ghi abcdef". + " 789 123456 " and FOO2 becomes + " ghi abcdef ". @@ -1929,6 +1929,38 @@ not careful. + [number_threads]: + Limits tasks to a specific number of simultaneous threads + during execution. + This varflag is useful when your build host has a large number + of cores but certain tasks need to be rate-limited due to various + kinds of resource constraints (e.g. to avoid network throttling). + number_threads works similarly to the + BB_NUMBER_THREADS + variable but is task-specific. + + Set the value globally. + For example, the following makes sure the + do_fetch task uses no more than two + simultaneous execution threads: + + do_fetch[number_threads] = "2" + + Warnings + + + Setting the varflag in individual recipes rather + than globally can result in unpredictable behavior. + + + Setting the varflag to a value greater than the + value used in the BB_NUMBER_THREADS + variable causes number_threads + to have no effect. + + + + [postfuncs]: List of functions to call after the completion of the task. @@ -2652,47 +2684,70 @@ - This list is a place holder of content existed from previous work - on the manual. - Some or all of it probably needs integrated into the subsections - that make up this section. - For now, I have just provided a short glossary-like description - for each variable. - Ultimately, this list goes away. + These checksums are stored in + STAMP. + You can examine the checksums using the following BitBake command: + + $ bitbake-dumpsigs + + This command returns the signature data in a readable format + that allows you to examine the inputs used when the + OpenEmbedded build system generates signatures. + For example, using bitbake-dumpsigs + allows you to examine the do_compile + task's “sigdata” for a C application (e.g. + bash). + Running the command also reveals that the “CC” variable is part of + the inputs that are hashed. + Any changes to this variable would invalidate the stamp and + cause the do_compile task to run. + + + + The following list describes related variables: - STAMP: - The base path to create stamp files. - STAMPCLEAN - Again, the base path to create stamp files but can use wildcards - for matching a range of files for clean operations. - - BB_STAMP_WHITELIST - Lists stamp files that are looked at when the stamp policy - is "whitelist". - - BB_STAMP_POLICY - Defines the mode for comparing timestamps of stamp files. - - BB_HASHCHECK_FUNCTION + + BB_HASHCHECK_FUNCTION: Specifies the name of the function to call during the "setscene" part of the task's execution in order to validate the list of task hashes. - BB_SETSCENE_VERIFY_FUNCTION2 + + BB_SETSCENE_DEPVALID: + Specifies a function BitBake calls that determines + whether BitBake requires a setscene dependency to + be met. 
+ + + BB_SETSCENE_VERIFY_FUNCTION2: Specifies a function to call that verifies the list of planned task execution before the main task execution happens. - BB_SETSCENE_DEPVALID - Specifies a function BitBake calls that determines - whether BitBake requires a setscene dependency to - be met. + + BB_STAMP_POLICY: + Defines the mode for comparing timestamps of stamp files. + + + BB_STAMP_WHITELIST: + Lists stamp files that are looked at when the stamp policy + is "whitelist". - BB_TASKHASH + + BB_TASKHASH: Within an executing task, this variable holds the hash of the task as returned by the currently enabled signature generator. + + STAMP: + The base path to create stamp files. + + + STAMPCLEAN: + Again, the base path to create stamp files but can use wildcards + for matching a range of files for clean operations. + diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml index d89e123..c327af5 100644 --- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml +++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml @@ -78,7 +78,7 @@ - In OpenEmbedded Core, ASSUME_PROVIDED + In OpenEmbedded-Core, ASSUME_PROVIDED mostly specifies native tools that should not be built. An example is git-native, which when specified allows for the Git binary from the host to @@ -646,10 +646,10 @@ Contains the name of the currently executing task. - The value does not include the "do_" prefix. + The value includes the "do_" prefix. For example, if the currently executing task is do_config, the value is - "config". + "do_config". @@ -964,7 +964,7 @@ Allows you to extend a recipe so that it builds variants of the software. Some examples of these variants for recipes from the - OpenEmbedded Core metadata are "natives" such as + OpenEmbedded-Core metadata are "natives" such as quilt-native, which is a copy of Quilt built to run on the build system; "crosses" such as gcc-cross, which is a compiler @@ -980,7 +980,7 @@ amount of code, it usually is as simple as adding the variable to your recipe. Here are two examples. - The "native" variants are from the OpenEmbedded Core + The "native" variants are from the OpenEmbedded-Core metadata: BBCLASSEXTEND =+ "native nativesdk" @@ -1205,6 +1205,45 @@ + BBMULTICONFIG + + BBMULTICONFIG[doc] = "Enables BitBake to perform multiple configuration builds and lists each separate configuration (multiconfig)." + + + + + Enables BitBake to perform multiple configuration builds + and lists each separate configuration (multiconfig). + You can use this variable to cause BitBake to build + multiple targets where each target has a separate + configuration. + Define BBMULTICONFIG in your + conf/local.conf configuration file. + + + + As an example, the following line specifies three + multiconfigs, each having a separate configuration file: + + BBMULTIFONFIG = "configA configB configC" + + Each configuration file you use must reside in the + build directory within a directory named + conf/multiconfig (e.g. + build_directory/conf/multiconfig/configA.conf). + + + + For information on how to use + BBMULTICONFIG in an environment that + supports building targets with multiple configurations, + see the + "Executing a Multiple Configuration Build" + section. + + + + BBPATH @@ -2089,6 +2128,16 @@ + REPODIR + + + The directory in which a local copy of a + google-repo directory is stored + when it is synced. 
+ + + + RPROVIDES diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual.xml index d23e3ef..d793265 100644 --- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual.xml +++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual.xml @@ -56,7 +56,7 @@ --> - 2004-2017 + 2004-2018 Richard Purdie Chris Larson and Phil Blundell diff --git a/bitbake/doc/bitbake-user-manual/figures/bb_multiconfig_files.png b/bitbake/doc/bitbake-user-manual/figures/bb_multiconfig_files.png new file mode 100644 index 0000000..e69de29 diff --git a/bitbake/lib/bb/COW.py b/bitbake/lib/bb/COW.py index bec6208..7817473 100644 --- a/bitbake/lib/bb/COW.py +++ b/bitbake/lib/bb/COW.py @@ -150,7 +150,7 @@ class COWDictMeta(COWMeta): yield value if type == "items": yield (key, value) - raise StopIteration() + return def iterkeys(cls): return cls.iter("keys") diff --git a/bitbake/lib/bb/__init__.py b/bitbake/lib/bb/__init__.py index cd2f157..4bc47c8 100644 --- a/bitbake/lib/bb/__init__.py +++ b/bitbake/lib/bb/__init__.py @@ -21,7 +21,7 @@ # with this program; if not, write to the Free Software Foundation, Inc., # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. -__version__ = "1.37.0" +__version__ = "1.40.0" import sys if sys.version_info < (3, 4, 0): @@ -63,6 +63,10 @@ class BBLogger(Logger): def verbose(self, msg, *args, **kwargs): return self.log(logging.INFO - 1, msg, *args, **kwargs) + def verbnote(self, msg, *args, **kwargs): + return self.log(logging.INFO + 2, msg, *args, **kwargs) + + logging.raiseExceptions = False logging.setLoggerClass(BBLogger) @@ -93,6 +97,18 @@ def debug(lvl, *args): def note(*args): mainlogger.info(''.join(args)) +# +# A higher prioity note which will show on the console but isn't a warning +# +# Something is happening the user should be aware of but they probably did +# something to make it happen +# +def verbnote(*args): + mainlogger.verbnote(''.join(args)) + +# +# Warnings - things the user likely needs to pay attention to and fix +# def warn(*args): mainlogger.warning(''.join(args)) diff --git a/bitbake/lib/bb/build.py b/bitbake/lib/bb/build.py index 4631abd..3e2a94e 100644 --- a/bitbake/lib/bb/build.py +++ b/bitbake/lib/bb/build.py @@ -41,8 +41,6 @@ from bb import data, event, utils bblogger = logging.getLogger('BitBake') logger = logging.getLogger('BitBake.Build') -NULL = open(os.devnull, 'r+') - __mtime_cache = {} def cached_mtime_noerror(f): @@ -533,7 +531,6 @@ def _exec_task(fn, task, d, quieterr): self.triggered = True # Handle logfiles - si = open('/dev/null', 'r') try: bb.utils.mkdirhier(os.path.dirname(logfn)) logfile = open(logfn, 'w') @@ -547,7 +544,8 @@ def _exec_task(fn, task, d, quieterr): ose = [os.dup(sys.stderr.fileno()), sys.stderr.fileno()] # Replace those fds with our own - os.dup2(si.fileno(), osi[1]) + with open('/dev/null', 'r') as si: + os.dup2(si.fileno(), osi[1]) os.dup2(logfile.fileno(), oso[1]) os.dup2(logfile.fileno(), ose[1]) @@ -608,7 +606,6 @@ def _exec_task(fn, task, d, quieterr): os.close(osi[0]) os.close(oso[0]) os.close(ose[0]) - si.close() logfile.close() if os.path.exists(logfn) and os.path.getsize(logfn) == 0: @@ -803,6 +800,7 @@ def add_tasks(tasklist, d): if name in flags: deptask = d.expand(flags[name]) task_deps[name][task] = deptask + getTask('mcdepends') getTask('depends') getTask('rdepends') getTask('deptask') diff --git a/bitbake/lib/bb/cache.py b/bitbake/lib/bb/cache.py index 86ce0e7..258d679 100644 --- a/bitbake/lib/bb/cache.py +++ b/bitbake/lib/bb/cache.py @@ -37,7 
+37,7 @@ import bb.utils logger = logging.getLogger("BitBake.Cache") -__cache_version__ = "151" +__cache_version__ = "152" def getCacheFile(path, filename, data_hash): return os.path.join(path, filename + "." + data_hash) @@ -395,7 +395,7 @@ class Cache(NoCache): self.has_cache = True self.cachefile = getCacheFile(self.cachedir, "bb_cache.dat", self.data_hash) - logger.debug(1, "Using cache in '%s'", self.cachedir) + logger.debug(1, "Cache dir: %s", self.cachedir) bb.utils.mkdirhier(self.cachedir) cache_ok = True @@ -408,6 +408,8 @@ class Cache(NoCache): self.load_cachefile() elif os.path.isfile(self.cachefile): logger.info("Out of date cache found, rebuilding...") + else: + logger.debug(1, "Cache file %s not found, building..." % self.cachefile) def load_cachefile(self): cachesize = 0 @@ -424,6 +426,7 @@ class Cache(NoCache): for cache_class in self.caches_array: cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash) + logger.debug(1, 'Loading cache file: %s' % cachefile) with open(cachefile, "rb") as cachefile: pickled = pickle.Unpickler(cachefile) # Check cache version information diff --git a/bitbake/lib/bb/checksum.py b/bitbake/lib/bb/checksum.py index 8428920..4e1598f 100644 --- a/bitbake/lib/bb/checksum.py +++ b/bitbake/lib/bb/checksum.py @@ -97,6 +97,8 @@ class FileChecksumCache(MultiProcessCache): def checksum_dir(pth): # Handle directories recursively + if pth == "/": + bb.fatal("Refusing to checksum /") dirchecksums = [] for root, dirs, files in os.walk(pth): for name in files: diff --git a/bitbake/lib/bb/codeparser.py b/bitbake/lib/bb/codeparser.py index 530f44e..ddd1b97 100644 --- a/bitbake/lib/bb/codeparser.py +++ b/bitbake/lib/bb/codeparser.py @@ -140,7 +140,7 @@ class CodeParserCache(MultiProcessCache): # so that an existing cache gets invalidated. Additionally you'll need # to increment __cache_version__ in cache.py in order to ensure that old # recipe caches don't trigger "Taskhash mismatch" errors. 
- CACHE_VERSION = 9 + CACHE_VERSION = 10 def __init__(self): MultiProcessCache.__init__(self) @@ -214,7 +214,7 @@ class BufferedLogger(Logger): self.buffer = [] class PythonParser(): - getvars = (".getVar", ".appendVar", ".prependVar") + getvars = (".getVar", ".appendVar", ".prependVar", "oe.utils.conditional") getvarflags = (".getVarFlag", ".appendVarFlag", ".prependVarFlag") containsfuncs = ("bb.utils.contains", "base_contains") containsanyfuncs = ("bb.utils.contains_any", "bb.utils.filter") diff --git a/bitbake/lib/bb/cooker.py b/bitbake/lib/bb/cooker.py index cd365f7..71a0eba 100644 --- a/bitbake/lib/bb/cooker.py +++ b/bitbake/lib/bb/cooker.py @@ -516,6 +516,8 @@ class BBCooker: fn = runlist[0][3] else: envdata = self.data + data.expandKeys(envdata) + parse.ast.runAnonFuncs(envdata) if fn: try: @@ -536,7 +538,6 @@ class BBCooker: logger.plain(env.getvalue()) # emit the metadata which isnt valid shell - data.expandKeys(envdata) for e in sorted(envdata.keys()): if envdata.getVarFlag(e, 'func', False) and envdata.getVarFlag(e, 'python', False): logger.plain("\npython %s () {\n%s}\n", e, envdata.getVar(e, False)) @@ -608,7 +609,14 @@ class BBCooker: k2 = k.split(":do_") k = k2[0] ktask = k2[1] - taskdata[mc].add_provider(localdata[mc], self.recipecaches[mc], k) + if mc: + # Provider might be from another mc + for mcavailable in self.multiconfigs: + # The first element is empty + if mcavailable: + taskdata[mcavailable].add_provider(localdata[mcavailable], self.recipecaches[mcavailable], k) + else: + taskdata[mc].add_provider(localdata[mc], self.recipecaches[mc], k) current += 1 if not ktask.startswith("do_"): ktask = "do_%s" % ktask @@ -619,6 +627,27 @@ class BBCooker: runlist.append([mc, k, ktask, fn]) bb.event.fire(bb.event.TreeDataPreparationProgress(current, len(fulltargetlist)), self.data) + mcdeps = taskdata[mc].get_mcdepends() + # No need to do check providers if there are no mcdeps or not an mc build + if mcdeps and mc: + # Make sure we can provide the multiconfig dependency + seen = set() + new = True + while new: + new = False + for mc in self.multiconfigs: + for k in mcdeps: + if k in seen: + continue + l = k.split(':') + depmc = l[2] + if depmc not in self.multiconfigs: + bb.fatal("Multiconfig dependency %s depends on nonexistent mc configuration %s" % (k,depmc)) + else: + logger.debug(1, "Adding providers for multiconfig dependency %s" % l[3]) + taskdata[depmc].add_provider(localdata[depmc], self.recipecaches[depmc], l[3]) + seen.add(k) + new = True for mc in self.multiconfigs: taskdata[mc].add_unresolved(localdata[mc], self.recipecaches[mc]) @@ -705,8 +734,8 @@ class BBCooker: if not dotname in depend_tree["tdepends"]: depend_tree["tdepends"][dotname] = [] for dep in rq.rqdata.runtaskentries[tid].depends: - (depmc, depfn, deptaskname, deptaskfn) = bb.runqueue.split_tid_mcfn(dep) - deppn = self.recipecaches[mc].pkg_fn[deptaskfn] + (depmc, depfn, _, deptaskfn) = bb.runqueue.split_tid_mcfn(dep) + deppn = self.recipecaches[depmc].pkg_fn[deptaskfn] depend_tree["tdepends"][dotname].append("%s.%s" % (deppn, bb.runqueue.taskname_from_tid(dep))) if taskfn not in seen_fns: seen_fns.append(taskfn) @@ -1170,6 +1199,7 @@ class BBCooker: elif regex == "": parselog.debug(1, "BBFILE_PATTERN_%s is empty" % c) errors = False + continue else: try: cre = re.compile(regex) @@ -1564,7 +1594,7 @@ class BBCooker: pkgs_to_build.append(t) if 'universe' in pkgs_to_build: - parselog.warning("The \"universe\" target is only intended for testing and may produce errors.") + parselog.verbnote("The 
\"universe\" target is only intended for testing and may produce errors.") parselog.debug(1, "collating packages for \"universe\"") pkgs_to_build.remove('universe') for mc in self.multiconfigs: @@ -1603,8 +1633,6 @@ class BBCooker: if self.parser: self.parser.shutdown(clean=not force, force=force) - self.notifier.stop() - self.confignotifier.stop() def finishcommand(self): self.state = state.initial @@ -1633,7 +1661,10 @@ class CookerExit(bb.event.Event): class CookerCollectFiles(object): def __init__(self, priorities): self.bbappends = [] - self.bbfile_config_priorities = priorities + # Priorities is a list of tupples, with the second element as the pattern. + # We need to sort the list with the longest pattern first, and so on to + # the shortest. This allows nested layers to be properly evaluated. + self.bbfile_config_priorities = sorted(priorities, key=lambda tup: tup[1], reverse=True) def calc_bbfile_priority( self, filename, matched = None ): for _, _, regex, pri in self.bbfile_config_priorities: @@ -1807,21 +1838,25 @@ class CookerCollectFiles(object): realfn, cls, mc = bb.cache.virtualfn2realfn(p) priorities[p] = self.calc_bbfile_priority(realfn, matched) - # Don't show the warning if the BBFILE_PATTERN did match .bbappend files unmatched = set() for _, _, regex, pri in self.bbfile_config_priorities: if not regex in matched: unmatched.add(regex) - def findmatch(regex): + # Don't show the warning if the BBFILE_PATTERN did match .bbappend files + def find_bbappend_match(regex): for b in self.bbappends: (bbfile, append) = b if regex.match(append): + # If the bbappend is matched by already "matched set", return False + for matched_regex in matched: + if matched_regex.match(append): + return False return True return False for unmatch in unmatched.copy(): - if findmatch(unmatch): + if find_bbappend_match(unmatch): unmatched.remove(unmatch) for collection, pattern, regex, _ in self.bbfile_config_priorities: diff --git a/bitbake/lib/bb/cookerdata.py b/bitbake/lib/bb/cookerdata.py index fab47c7..5df66e6 100644 --- a/bitbake/lib/bb/cookerdata.py +++ b/bitbake/lib/bb/cookerdata.py @@ -143,7 +143,8 @@ class CookerConfiguration(object): self.writeeventlog = False self.server_only = False self.limited_deps = False - self.runall = None + self.runall = [] + self.runonly = [] self.env = {} @@ -395,6 +396,8 @@ class CookerDataBuilder(object): if compat and not (compat & layerseries): bb.fatal("Layer %s is not compatible with the core layer which only supports these series: %s (layer is compatible with %s)" % (c, " ".join(layerseries), " ".join(compat))) + elif not compat and not data.getVar("BB_WORKERCONTEXT"): + bb.warn("Layer %s should set LAYERSERIES_COMPAT_%s in its conf/layer.conf file to list the core layer names it is compatible with." % (c, c)) if not data.getVar("BBPATH"): msg = "The BBPATH variable is not set" diff --git a/bitbake/lib/bb/daemonize.py b/bitbake/lib/bb/daemonize.py index 8300d1d..c937675 100644 --- a/bitbake/lib/bb/daemonize.py +++ b/bitbake/lib/bb/daemonize.py @@ -16,6 +16,10 @@ def createDaemon(function, logfile): background as a daemon, returning control to the caller. """ + # Ensure stdout/stderror are flushed before forking to avoid duplicate output + sys.stdout.flush() + sys.stderr.flush() + try: # Fork a child process so the parent can exit. This returns control to # the command-line or shell. It also guarantees that the child will not @@ -49,8 +53,8 @@ def createDaemon(function, logfile): # exit() or _exit()? 
# _exit is like exit(), but it doesn't call any functions registered # with atexit (and on_exit) or any registered signal handlers. It also - # closes any open file descriptors. Using exit() may cause all stdio - # streams to be flushed twice and any temporary files may be unexpectedly + # closes any open file descriptors, but doesn't flush any buffered output. + # Using exit() may cause all any temporary files to be unexpectedly # removed. It's therefore recommended that child branches of a fork() # and the parent branch(es) of a daemon use _exit(). os._exit(0) @@ -61,17 +65,19 @@ def createDaemon(function, logfile): # The second child. # Replace standard fds with our own - si = open('/dev/null', 'r') - os.dup2(si.fileno(), sys.stdin.fileno()) + with open('/dev/null', 'r') as si: + os.dup2(si.fileno(), sys.stdin.fileno()) try: so = open(logfile, 'a+') - se = so os.dup2(so.fileno(), sys.stdout.fileno()) - os.dup2(se.fileno(), sys.stderr.fileno()) + os.dup2(so.fileno(), sys.stderr.fileno()) except io.UnsupportedOperation: sys.stdout = open(logfile, 'a+') - sys.stderr = sys.stdout + + # Have stdout and stderr be the same so log output matches chronologically + # and there aren't two seperate buffers + sys.stderr = sys.stdout try: function() @@ -79,4 +85,9 @@ def createDaemon(function, logfile): traceback.print_exc() finally: bb.event.print_ui_queue() + # os._exit() doesn't flush open files like os.exit() does. Manually flush + # stdout and stderr so that any logging output will be seen, particularly + # exception tracebacks. + sys.stdout.flush() + sys.stderr.flush() os._exit(0) diff --git a/bitbake/lib/bb/data.py b/bitbake/lib/bb/data.py index 80a7879..d66d98c 100644 --- a/bitbake/lib/bb/data.py +++ b/bitbake/lib/bb/data.py @@ -38,6 +38,7 @@ the speed is more critical here. 
# Based on functions from the base bb module, Copyright 2003 Holger Schurig import sys, os, re +import hashlib if sys.argv[0][-5:] == "pydoc": path = os.path.dirname(os.path.dirname(sys.argv[1])) else: @@ -283,14 +284,12 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d): try: if key[-1] == ']': vf = key[:-1].split('[') - value = d.getVarFlag(vf[0], vf[1], False) - parser = d.expandWithRefs(value, key) + value, parser = d.getVarFlag(vf[0], vf[1], False, retparser=True) deps |= parser.references deps = deps | (keys & parser.execs) return deps, value varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {} vardeps = varflags.get("vardeps") - value = d.getVarFlag(key, "_content", False) def handle_contains(value, contains, d): newvalue = "" @@ -309,10 +308,19 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d): return newvalue return value + newvalue + def handle_remove(value, deps, removes, d): + for r in sorted(removes): + r2 = d.expandWithRefs(r, None) + value += "\n_remove of %s" % r + deps |= r2.references + deps = deps | (keys & r2.execs) + return value + if "vardepvalue" in varflags: - value = varflags.get("vardepvalue") + value = varflags.get("vardepvalue") elif varflags.get("func"): if varflags.get("python"): + value = d.getVarFlag(key, "_content", False) parser = bb.codeparser.PythonParser(key, logger) if value and "\t" in value: logger.warning("Variable %s contains tabs, please remove these (%s)" % (key, d.getVar("FILE"))) @@ -321,13 +329,15 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d): deps = deps | (keys & parser.execs) value = handle_contains(value, parser.contains, d) else: - parsedvar = d.expandWithRefs(value, key) + value, parsedvar = d.getVarFlag(key, "_content", False, retparser=True) parser = bb.codeparser.ShellParser(key, logger) parser.parse_shell(parsedvar.value) deps = deps | shelldeps deps = deps | parsedvar.references deps = deps | (keys & parser.execs) | (keys & parsedvar.execs) value = handle_contains(value, parsedvar.contains, d) + if hasattr(parsedvar, "removes"): + value = handle_remove(value, deps, parsedvar.removes, d) if vardeps is None: parser.log.flush() if "prefuncs" in varflags: @@ -337,10 +347,12 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d): if "exports" in varflags: deps = deps | set(varflags["exports"].split()) else: - parser = d.expandWithRefs(value, key) + value, parser = d.getVarFlag(key, "_content", False, retparser=True) deps |= parser.references deps = deps | (keys & parser.execs) value = handle_contains(value, parser.contains, d) + if hasattr(parser, "removes"): + value = handle_remove(value, deps, parser.removes, d) if "vardepvalueexclude" in varflags: exclude = varflags.get("vardepvalueexclude") @@ -394,6 +406,43 @@ def generate_dependencies(d): #print "For %s: %s" % (task, str(deps[task])) return tasklist, deps, values +def generate_dependency_hash(tasklist, gendeps, lookupcache, whitelist, fn): + taskdeps = {} + basehash = {} + + for task in tasklist: + data = lookupcache[task] + + if data is None: + bb.error("Task %s from %s seems to be empty?!" 
% (task, fn)) + data = '' + + gendeps[task] -= whitelist + newdeps = gendeps[task] + seen = set() + while newdeps: + nextdeps = newdeps + seen |= nextdeps + newdeps = set() + for dep in nextdeps: + if dep in whitelist: + continue + gendeps[dep] -= whitelist + newdeps |= gendeps[dep] + newdeps -= seen + + alldeps = sorted(seen) + for dep in alldeps: + data = data + dep + var = lookupcache[dep] + if var is not None: + data = data + str(var) + k = fn + "." + task + basehash[k] = hashlib.md5(data.encode("utf-8")).hexdigest() + taskdeps[task] = alldeps + + return taskdeps, basehash + def inherits_class(klass, d): val = d.getVar('__inherit_cache', False) or [] needle = os.path.join('classes', '%s.bbclass' % klass) diff --git a/bitbake/lib/bb/data_smart.py b/bitbake/lib/bb/data_smart.py index 7b09af5..6b94fc4 100644 --- a/bitbake/lib/bb/data_smart.py +++ b/bitbake/lib/bb/data_smart.py @@ -42,6 +42,7 @@ __setvar_keyword__ = ["_append", "_prepend", "_remove"] __setvar_regexp__ = re.compile('(?P.*?)(?P_append|_prepend|_remove)(_(?P[^A-Z]*))?$') __expand_var_regexp__ = re.compile(r"\${[^{}@\n\t :]+}") __expand_python_regexp__ = re.compile(r"\${@.+?}") +__whitespace_split__ = re.compile('(\s)') def infer_caller_details(loginfo, parent = False, varval = True): """Save the caller the trouble of specifying everything.""" @@ -104,11 +105,7 @@ class VariableParse: if self.varname and key: if self.varname == key: raise Exception("variable %s references itself!" % self.varname) - if key in self.d.expand_cache: - varparse = self.d.expand_cache[key] - var = varparse.value - else: - var = self.d.getVarFlag(key, "_content") + var = self.d.getVarFlag(key, "_content") self.references.add(key) if var is not None: return var @@ -267,6 +264,16 @@ class VariableHistory(object): return self.variables[var].append(loginfo.copy()) + def rename_variable_hist(self, oldvar, newvar): + if not self.dataroot._tracking: + return + if oldvar not in self.variables: + return + if newvar not in self.variables: + self.variables[newvar] = [] + for i in self.variables[oldvar]: + self.variables[newvar].append(i.copy()) + def variable(self, var): remote_connector = self.dataroot.getVar('_remote_data', False) if remote_connector: @@ -401,9 +408,6 @@ class DataSmart(MutableMapping): if not isinstance(s, str): # sanity check return VariableParse(varname, self, s) - if varname and varname in self.expand_cache: - return self.expand_cache[varname] - varparse = VariableParse(varname, self) while s.find('${') != -1: @@ -427,9 +431,6 @@ class DataSmart(MutableMapping): varparse.value = s - if varname: - self.expand_cache[varname] = varparse - return varparse def expand(self, s, varname = None): @@ -498,6 +499,7 @@ class DataSmart(MutableMapping): def setVar(self, var, value, **loginfo): #print("var=" + str(var) + " val=" + str(value)) + self.expand_cache = {} parsing=False if 'parsing' in loginfo: parsing=True @@ -510,7 +512,7 @@ class DataSmart(MutableMapping): if 'op' not in loginfo: loginfo['op'] = "set" - self.expand_cache = {} + match = __setvar_regexp__.match(var) if match and match.group("keyword") in __setvar_keyword__: base = match.group('base') @@ -619,6 +621,7 @@ class DataSmart(MutableMapping): val = self.getVar(key, 0, parsing=True) if val is not None: + self.varhistory.rename_variable_hist(key, newkey) loginfo['variable'] = newkey loginfo['op'] = 'rename from %s' % key loginfo['detail'] = val @@ -660,6 +663,7 @@ class DataSmart(MutableMapping): self.setVar(var + "_prepend", value, ignore=True, parsing=True) def delVar(self, var, 
**loginfo): + self.expand_cache = {} if '_remote_data' in self.dict: connector = self.dict["_remote_data"]["_content"] res = connector.delVar(var) @@ -669,7 +673,6 @@ class DataSmart(MutableMapping): loginfo['detail'] = "" loginfo['op'] = 'del' self.varhistory.record(**loginfo) - self.expand_cache = {} self.dict[var] = {} if var in self.overridedata: del self.overridedata[var] @@ -692,13 +695,13 @@ class DataSmart(MutableMapping): override = None def setVarFlag(self, var, flag, value, **loginfo): + self.expand_cache = {} if '_remote_data' in self.dict: connector = self.dict["_remote_data"]["_content"] res = connector.setVarFlag(var, flag, value) if not res: return - self.expand_cache = {} if 'op' not in loginfo: loginfo['op'] = "set" loginfo['flag'] = flag @@ -719,9 +722,21 @@ class DataSmart(MutableMapping): self.dict["__exportlist"]["_content"] = set() self.dict["__exportlist"]["_content"].add(var) - def getVarFlag(self, var, flag, expand=True, noweakdefault=False, parsing=False): + def getVarFlag(self, var, flag, expand=True, noweakdefault=False, parsing=False, retparser=False): + if flag == "_content": + cachename = var + else: + if not flag: + bb.warn("Calling getVarFlag with flag unset is invalid") + return None + cachename = var + "[" + flag + "]" + + if expand and cachename in self.expand_cache: + return self.expand_cache[cachename].value + local_var, overridedata = self._findVar(var) value = None + removes = set() if flag == "_content" and overridedata is not None and not parsing: match = False active = {} @@ -748,7 +763,11 @@ class DataSmart(MutableMapping): match = active[a] del active[a] if match: - value = self.getVar(match, False) + value, subparser = self.getVarFlag(match, "_content", False, retparser=True) + if hasattr(subparser, "removes"): + # We have to carry the removes from the overridden variable to apply at the + # end of processing + removes = subparser.removes if local_var is not None and value is None: if flag in local_var: @@ -784,17 +803,13 @@ class DataSmart(MutableMapping): if match: value = r + value - if expand and value: - # Only getvar (flag == _content) hits the expand cache - cachename = None - if flag == "_content": - cachename = var - else: - cachename = var + "[" + flag + "]" - value = self.expand(value, cachename) + parser = None + if expand or retparser: + parser = self.expandWithRefs(value, cachename) + if expand: + value = parser.value - if value and flag == "_content" and local_var is not None and "_remove" in local_var: - removes = [] + if value and flag == "_content" and local_var is not None and "_remove" in local_var and not parsing: self.need_overrides() for (r, o) in local_var["_remove"]: match = True @@ -803,26 +818,45 @@ class DataSmart(MutableMapping): if not o2 in self.overrides: match = False if match: - removes.extend(self.expand(r).split()) - - if removes: - filtered = filter(lambda v: v not in removes, - value.split()) - value = " ".join(filtered) - if expand and var in self.expand_cache: - # We need to ensure the expand cache has the correct value - # flag == "_content" here - self.expand_cache[var].value = value + removes.add(r) + + if value and flag == "_content" and not parsing: + if removes and parser: + expanded_removes = {} + for r in removes: + expanded_removes[r] = self.expand(r).split() + + parser.removes = set() + val = "" + for v in __whitespace_split__.split(parser.value): + skip = False + for r in removes: + if v in expanded_removes[r]: + parser.removes.add(r) + skip = True + if skip: + continue + val = val + v + 
parser.value = val + if expand: + value = parser.value + + if parser: + self.expand_cache[cachename] = parser + + if retparser: + return value, parser + return value def delVarFlag(self, var, flag, **loginfo): + self.expand_cache = {} if '_remote_data' in self.dict: connector = self.dict["_remote_data"]["_content"] res = connector.delVarFlag(var, flag) if not res: return - self.expand_cache = {} local_var, _ = self._findVar(var) if not local_var: return diff --git a/bitbake/lib/bb/event.py b/bitbake/lib/bb/event.py index 5d00496..5b1b094 100644 --- a/bitbake/lib/bb/event.py +++ b/bitbake/lib/bb/event.py @@ -141,6 +141,9 @@ def print_ui_queue(): logger = logging.getLogger("BitBake") if not _uiready: from bb.msg import BBLogFormatter + # Flush any existing buffered content + sys.stdout.flush() + sys.stderr.flush() stdout = logging.StreamHandler(sys.stdout) stderr = logging.StreamHandler(sys.stderr) formatter = BBLogFormatter("%(levelname)s: %(message)s") @@ -395,7 +398,7 @@ class RecipeEvent(Event): Event.__init__(self) class RecipePreFinalise(RecipeEvent): - """ Recipe Parsing Complete but not yet finialised""" + """ Recipe Parsing Complete but not yet finalised""" class RecipeTaskPreProcess(RecipeEvent): """ diff --git a/bitbake/lib/bb/fetch2/__init__.py b/bitbake/lib/bb/fetch2/__init__.py index 6bd0404..2b62b41 100644 --- a/bitbake/lib/bb/fetch2/__init__.py +++ b/bitbake/lib/bb/fetch2/__init__.py @@ -383,7 +383,7 @@ def decodeurl(url): path = location else: host = location - path = "" + path = "/" if user: m = re.compile('(?P[^:]+)(:?(?P.*))').match(user) if m: @@ -452,8 +452,8 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None): # Handle URL parameters if i: # Any specified URL parameters must match - for k in uri_replace_decoded[loc]: - if uri_decoded[loc][k] != uri_replace_decoded[loc][k]: + for k in uri_find_decoded[loc]: + if uri_decoded[loc][k] != uri_find_decoded[loc][k]: return None # Overwrite any specified replacement parameters for k in uri_replace_decoded[loc]: @@ -643,26 +643,25 @@ def verify_donestamp(ud, d, origud=None): if not ud.needdonestamp or (origud and not origud.needdonestamp): return True - if not os.path.exists(ud.donestamp): + if not os.path.exists(ud.localpath): + # local path does not exist + if os.path.exists(ud.donestamp): + # done stamp exists, but the downloaded file does not; the done stamp + # must be incorrect, re-trigger the download + bb.utils.remove(ud.donestamp) return False if (not ud.method.supports_checksum(ud) or (origud and not origud.method.supports_checksum(origud))): - # done stamp exists, checksums not supported; assume the local file is - # current - return True - - if not os.path.exists(ud.localpath): - # done stamp exists, but the downloaded file does not; the done stamp - # must be incorrect, re-trigger the download - bb.utils.remove(ud.donestamp) - return False + # if done stamp exists and checksums not supported; assume the local + # file is current + return os.path.exists(ud.donestamp) precomputed_checksums = {} # Only re-use the precomputed checksums if the donestamp is newer than the # file. Do not rely on the mtime of directories, though. If ud.localpath is # a directory, there will probably not be any checksums anyway. 
- if (os.path.isdir(ud.localpath) or + if os.path.exists(ud.donestamp) and (os.path.isdir(ud.localpath) or os.path.getmtime(ud.localpath) < os.path.getmtime(ud.donestamp)): try: with open(ud.donestamp, "rb") as cachefile: @@ -838,14 +837,16 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None): if not cleanup: cleanup = [] - # If PATH contains WORKDIR which contains PV which contains SRCPV we + # If PATH contains WORKDIR which contains PV-PR which contains SRCPV we # can end up in circular recursion here so give the option of breaking it # in a data store copy. try: d.getVar("PV") + d.getVar("PR") except bb.data_smart.ExpansionError: d = bb.data.createCopy(d) d.setVar("PV", "fetcheravoidrecurse") + d.setVar("PR", "fetcheravoidrecurse") origenv = d.getVar("BB_ORIGENV", False) for var in exportvars: @@ -1017,16 +1018,7 @@ def try_mirror_url(fetch, origud, ud, ld, check = False): origud.method.build_mirror_data(origud, ld) return origud.localpath # Otherwise the result is a local file:// and we symlink to it - if not os.path.exists(origud.localpath): - if os.path.islink(origud.localpath): - # Broken symbolic link - os.unlink(origud.localpath) - - # As per above, in case two tasks end up here simultaneously. - try: - os.symlink(ud.localpath, origud.localpath) - except FileExistsError: - pass + ensure_symlink(ud.localpath, origud.localpath) update_stamp(origud, ld) return ud.localpath @@ -1060,6 +1052,22 @@ def try_mirror_url(fetch, origud, ud, ld, check = False): bb.utils.unlockfile(lf) +def ensure_symlink(target, link_name): + if not os.path.exists(link_name): + if os.path.islink(link_name): + # Broken symbolic link + os.unlink(link_name) + + # In case this is executing without any file locks held (as is + # the case for file:// URLs), two tasks may end up here at the + # same time, in which case we do not want the second task to + # fail when the link has already been created by the first task. + try: + os.symlink(target, link_name) + except FileExistsError: + pass + + def try_mirrors(fetch, d, origud, mirrors, check = False): """ Try to use a mirrored version of the sources. 
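
The ensure_symlink() helper factored out above is a self-contained pattern worth noting: os.path.exists() reports False for a dangling symlink, so the broken link has to be detected with os.path.islink() and removed, and the final creation tolerates losing a race to a parallel task. The same logic, runnable on its own:

    import os

    def ensure_symlink(target, link_name):
        if not os.path.exists(link_name):
            if os.path.islink(link_name):
                # exists() is False but islink() is True: a broken link.
                os.unlink(link_name)
            try:
                os.symlink(target, link_name)
            except FileExistsError:
                # A concurrent task created the link first; that is fine.
                pass
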
@@ -1089,7 +1097,9 @@ def trusted_network(d, url): return True pkgname = d.expand(d.getVar('PN', False)) - trusted_hosts = d.getVarFlag('BB_ALLOWED_NETWORKS', pkgname, False) + trusted_hosts = None + if pkgname: + trusted_hosts = d.getVarFlag('BB_ALLOWED_NETWORKS', pkgname, False) if not trusted_hosts: trusted_hosts = d.getVar('BB_ALLOWED_NETWORKS') diff --git a/bitbake/lib/bb/fetch2/bzr.py b/bitbake/lib/bb/fetch2/bzr.py index 16123f8..658502f 100644 --- a/bitbake/lib/bb/fetch2/bzr.py +++ b/bitbake/lib/bb/fetch2/bzr.py @@ -41,8 +41,9 @@ class Bzr(FetchMethod): init bzr specific variable within url data """ # Create paths to bzr checkouts + bzrdir = d.getVar("BZRDIR") or (d.getVar("DL_DIR") + "/bzr") relpath = self._strip_leading_slashes(ud.path) - ud.pkgdir = os.path.join(d.expand('${BZRDIR}'), ud.host, relpath) + ud.pkgdir = os.path.join(bzrdir, ud.host, relpath) ud.setup_revisions(d) @@ -57,7 +58,7 @@ class Bzr(FetchMethod): command is "fetch", "update", "revno" """ - basecmd = d.expand('${FETCHCMD_bzr}') + basecmd = d.getVar("FETCHCMD_bzr") or "/usr/bin/env bzr" proto = ud.parm.get('protocol', 'http') diff --git a/bitbake/lib/bb/fetch2/clearcase.py b/bitbake/lib/bb/fetch2/clearcase.py index 36beab6..3a6573d 100644 --- a/bitbake/lib/bb/fetch2/clearcase.py +++ b/bitbake/lib/bb/fetch2/clearcase.py @@ -69,7 +69,6 @@ from bb.fetch2 import FetchMethod from bb.fetch2 import FetchError from bb.fetch2 import runfetchcmd from bb.fetch2 import logger -from distutils import spawn class ClearCase(FetchMethod): """Class to fetch urls via 'clearcase'""" @@ -107,7 +106,7 @@ class ClearCase(FetchMethod): else: ud.module = "" - ud.basecmd = d.getVar("FETCHCMD_ccrc") or spawn.find_executable("cleartool") or spawn.find_executable("rcleartool") + ud.basecmd = d.getVar("FETCHCMD_ccrc") or "/usr/bin/env cleartool || rcleartool" if d.getVar("SRCREV") == "INVALID": raise FetchError("Set a valid SRCREV for the clearcase fetcher in your recipe, e.g. 
SRCREV = \"/main/LATEST\" or any other label of your choice.") diff --git a/bitbake/lib/bb/fetch2/cvs.py b/bitbake/lib/bb/fetch2/cvs.py index 490c954..0e0a319 100644 --- a/bitbake/lib/bb/fetch2/cvs.py +++ b/bitbake/lib/bb/fetch2/cvs.py @@ -110,7 +110,7 @@ class Cvs(FetchMethod): if ud.tag: options.append("-r %s" % ud.tag) - cvsbasecmd = d.getVar("FETCHCMD_cvs") + cvsbasecmd = d.getVar("FETCHCMD_cvs") or "/usr/bin/env cvs" cvscmd = cvsbasecmd + " '-d" + cvsroot + "' co " + " ".join(options) + " " + ud.module cvsupdatecmd = cvsbasecmd + " '-d" + cvsroot + "' update -d -P " + " ".join(options) @@ -121,7 +121,8 @@ class Cvs(FetchMethod): # create module directory logger.debug(2, "Fetch: checking for module directory") pkg = d.getVar('PN') - pkgdir = os.path.join(d.getVar('CVSDIR'), pkg) + cvsdir = d.getVar("CVSDIR") or (d.getVar("DL_DIR") + "/cvs") + pkgdir = os.path.join(cvsdir, pkg) moddir = os.path.join(pkgdir, localdir) workdir = None if os.access(os.path.join(moddir, 'CVS'), os.R_OK): diff --git a/bitbake/lib/bb/fetch2/git.py b/bitbake/lib/bb/fetch2/git.py index d34ea1d..15858a6 100644 --- a/bitbake/lib/bb/fetch2/git.py +++ b/bitbake/lib/bb/fetch2/git.py @@ -125,6 +125,9 @@ class GitProgressHandler(bb.progress.LineFilterProgressHandler): class Git(FetchMethod): + bitbake_dir = os.path.abspath(os.path.join(os.path.dirname(os.path.join(os.path.abspath(__file__))), '..', '..', '..')) + make_shallow_path = os.path.join(bitbake_dir, 'bin', 'git-make-shallow') + """Class to fetch a module or modules from git repositories""" def init(self, d): pass @@ -258,7 +261,7 @@ class Git(FetchMethod): gitsrcname = gitsrcname + '_' + ud.revisions[name] dl_dir = d.getVar("DL_DIR") - gitdir = d.getVar("GITDIR") or (dl_dir + "/git2/") + gitdir = d.getVar("GITDIR") or (dl_dir + "/git2") ud.clonedir = os.path.join(gitdir, gitsrcname) ud.localfile = ud.clonedir @@ -296,17 +299,22 @@ class Git(FetchMethod): return ud.clonedir def need_update(self, ud, d): + return self.clonedir_need_update(ud, d) or self.shallow_tarball_need_update(ud) or self.tarball_need_update(ud) + + def clonedir_need_update(self, ud, d): if not os.path.exists(ud.clonedir): return True for name in ud.names: if not self._contains_ref(ud, d, name, ud.clonedir): return True - if ud.shallow and ud.write_shallow_tarballs and not os.path.exists(ud.fullshallow): - return True - if ud.write_tarballs and not os.path.exists(ud.fullmirror): - return True return False + def shallow_tarball_need_update(self, ud): + return ud.shallow and ud.write_shallow_tarballs and not os.path.exists(ud.fullshallow) + + def tarball_need_update(self, ud): + return ud.write_tarballs and not os.path.exists(ud.fullmirror) + def try_premirror(self, ud, d): # If we don't do this, updating an existing checkout with only premirrors # is not possible @@ -319,16 +327,13 @@ class Git(FetchMethod): def download(self, ud, d): """Fetch url""" - no_clone = not os.path.exists(ud.clonedir) - need_update = no_clone or self.need_update(ud, d) - # A current clone is preferred to either tarball, a shallow tarball is # preferred to an out of date clone, and a missing clone will use # either tarball. 
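
Splitting need_update() into clonedir_need_update(), shallow_tarball_need_update() and tarball_need_update(), as the git.py hunks above do, lets the preference order described by the download() comment be stated in one place. A condensed sketch of that decision, with the paths and flags as illustrative placeholders for the real ud fields:

    import os

    def choose_git_source(shallow, fullshallow, fullmirror, clonedir, needs_update):
        # Preference: current clone > shallow tarball > mirror tarball.
        if shallow and os.path.exists(fullshallow) and needs_update:
            return "unpack the shallow tarball"
        if os.path.exists(fullmirror) and not os.path.exists(clonedir):
            return "seed a fresh clone directory from the mirror tarball"
        return "fetch/update the existing clone directory"
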
- if ud.shallow and os.path.exists(ud.fullshallow) and need_update: + if ud.shallow and os.path.exists(ud.fullshallow) and self.need_update(ud, d): ud.localpath = ud.fullshallow return - elif os.path.exists(ud.fullmirror) and no_clone: + elif os.path.exists(ud.fullmirror) and not os.path.exists(ud.clonedir): bb.utils.mkdirhier(ud.clonedir) runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir) @@ -350,11 +355,12 @@ class Git(FetchMethod): for name in ud.names: if not self._contains_ref(ud, d, name, ud.clonedir): needupdate = True + break + if needupdate: - try: - runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir) - except bb.fetch2.FetchError: - logger.debug(1, "No Origin") + output = runfetchcmd("%s remote" % ud.basecmd, d, quiet=True, workdir=ud.clonedir) + if "origin" in output: + runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir) runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d, workdir=ud.clonedir) fetch_cmd = "LANG=C %s fetch -f --prune --progress %s refs/*:refs/*" % (ud.basecmd, repourl) @@ -370,6 +376,7 @@ class Git(FetchMethod): except OSError as exc: if exc.errno != errno.ENOENT: raise + for name in ud.names: if not self._contains_ref(ud, d, name, ud.clonedir): raise bb.fetch2.FetchError("Unable to find revision %s in branch %s even from upstream" % (ud.revisions[name], ud.branches[name])) @@ -446,7 +453,7 @@ class Git(FetchMethod): shallow_branches.append(r) # Make the repository shallow - shallow_cmd = ['git', 'make-shallow', '-s'] + shallow_cmd = [self.make_shallow_path, '-s'] for b in shallow_branches: shallow_cmd.append('-r') shallow_cmd.append(b) @@ -469,11 +476,27 @@ class Git(FetchMethod): if os.path.exists(destdir): bb.utils.prunedir(destdir) - if ud.shallow and (not os.path.exists(ud.clonedir) or self.need_update(ud, d)): - bb.utils.mkdirhier(destdir) - runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=destdir) - else: - runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d) + source_found = False + source_error = [] + + if not source_found: + clonedir_is_up_to_date = not self.clonedir_need_update(ud, d) + if clonedir_is_up_to_date: + runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d) + source_found = True + else: + source_error.append("clone directory not available or not up to date: " + ud.clonedir) + + if not source_found: + if ud.shallow and os.path.exists(ud.fullshallow): + bb.utils.mkdirhier(destdir) + runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=destdir) + source_found = True + else: + source_error.append("shallow clone not enabled or not available: " + ud.fullshallow) + + if not source_found: + raise bb.fetch2.UnpackError("No up to date source found: " + "; ".join(source_error), ud.url) repourl = self._get_repo_url(ud) runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, repourl), d, workdir=destdir) @@ -592,7 +615,8 @@ class Git(FetchMethod): tagregex = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX') or "(?P([0-9][\.|_]?)+)") try: output = self._lsremote(ud, d, "refs/tags/*") - except bb.fetch2.FetchError or bb.fetch2.NetworkAccess: + except (bb.fetch2.FetchError, bb.fetch2.NetworkAccess) as e: + bb.note("Could not list remote: %s" % str(e)) return pupver verstring = "" diff --git a/bitbake/lib/bb/fetch2/gitsm.py b/bitbake/lib/bb/fetch2/gitsm.py index 0aff100..0a982da 100644 --- a/bitbake/lib/bb/fetch2/gitsm.py +++ b/bitbake/lib/bb/fetch2/gitsm.py @@ -31,9 +31,12 @@ NOTE: Switching a 
SRC_URI from "git://" to "gitsm://" requires a clean of your r import os import bb +import copy from bb.fetch2.git import Git from bb.fetch2 import runfetchcmd from bb.fetch2 import logger +from bb.fetch2 import Fetch +from bb.fetch2 import BBFetchException class GitSM(Git): def supports(self, ud, d): @@ -42,94 +45,207 @@ class GitSM(Git): """ return ud.type in ['gitsm'] - def uses_submodules(self, ud, d, wd): - for name in ud.names: - try: - runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revisions[name]), d, quiet=True, workdir=wd) - return True - except bb.fetch.FetchError: - pass - return False + @staticmethod + def parse_gitmodules(gitmodules): + modules = {} + module = "" + for line in gitmodules.splitlines(): + if line.startswith('[submodule'): + module = line.split('"')[1] + modules[module] = {} + elif module and line.strip().startswith('path'): + path = line.split('=')[1].strip() + modules[module]['path'] = path + elif module and line.strip().startswith('url'): + url = line.split('=')[1].strip() + modules[module]['url'] = url + return modules - def _set_relative_paths(self, repopath): - """ - Fix submodule paths to be relative instead of absolute, - so that when we move the repo it doesn't break - (In Git 1.7.10+ this is done automatically) - """ + def update_submodules(self, ud, d): submodules = [] - with open(os.path.join(repopath, '.gitmodules'), 'r') as f: - for line in f.readlines(): - if line.startswith('[submodule'): - submodules.append(line.split('"')[1]) + paths = {} + uris = {} + local_paths = {} - for module in submodules: - repo_conf = os.path.join(repopath, module, '.git') - if os.path.exists(repo_conf): - with open(repo_conf, 'r') as f: - lines = f.readlines() - newpath = '' - for i, line in enumerate(lines): - if line.startswith('gitdir:'): - oldpath = line.split(': ')[-1].rstrip() - if oldpath.startswith('/'): - newpath = '../' * (module.count('/') + 1) + '.git/modules/' + module - lines[i] = 'gitdir: %s\n' % newpath - break - if newpath: - with open(repo_conf, 'w') as f: - for line in lines: - f.write(line) - - repo_conf2 = os.path.join(repopath, '.git', 'modules', module, 'config') - if os.path.exists(repo_conf2): - with open(repo_conf2, 'r') as f: - lines = f.readlines() - newpath = '' - for i, line in enumerate(lines): - if line.lstrip().startswith('worktree = '): - oldpath = line.split(' = ')[-1].rstrip() - if oldpath.startswith('/'): - newpath = '../' * (module.count('/') + 3) + module - lines[i] = '\tworktree = %s\n' % newpath - break - if newpath: - with open(repo_conf2, 'w') as f: - for line in lines: - f.write(line) + for name in ud.names: + try: + gitmodules = runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revisions[name]), d, quiet=True, workdir=ud.clonedir) + except: + # No submodules to update + continue + + for m, md in self.parse_gitmodules(gitmodules).items(): + submodules.append(m) + paths[m] = md['path'] + uris[m] = md['url'] + if uris[m].startswith('..'): + newud = copy.copy(ud) + newud.path = os.path.realpath(os.path.join(newud.path, md['url'])) + uris[m] = Git._get_repo_url(self, newud) - def update_submodules(self, ud, d): - # We have to convert bare -> full repo, do the submodule bit, then convert back - tmpclonedir = ud.clonedir + ".tmp" - gitdir = tmpclonedir + os.sep + ".git" - bb.utils.remove(tmpclonedir, True) - os.mkdir(tmpclonedir) - os.rename(ud.clonedir, gitdir) - runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*true/bare = false/'", d) - runfetchcmd(ud.basecmd + " reset --hard", d, workdir=tmpclonedir) - 
runfetchcmd(ud.basecmd + " checkout -f " + ud.revisions[ud.names[0]], d, workdir=tmpclonedir) - runfetchcmd(ud.basecmd + " submodule update --init --recursive", d, workdir=tmpclonedir) - self._set_relative_paths(tmpclonedir) - runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*false/bare = true/'", d, workdir=tmpclonedir) - os.rename(gitdir, ud.clonedir,) - bb.utils.remove(tmpclonedir, True) + for module in submodules: + module_hash = runfetchcmd("%s ls-tree -z -d %s %s" % (ud.basecmd, ud.revisions[name], paths[module]), d, quiet=True, workdir=ud.clonedir) + module_hash = module_hash.split()[2] + + # Build new SRC_URI + proto = uris[module].split(':', 1)[0] + url = uris[module].replace('%s:' % proto, 'gitsm:', 1) + url += ';protocol=%s' % proto + url += ";name=%s" % module + url += ";bareclone=1;nocheckout=1" + + ld = d.createCopy() + # Not necessary to set SRC_URI, since we're passing the URI to + # Fetch. + #ld.setVar('SRC_URI', url) + ld.setVar('SRCREV_%s' % module, module_hash) + + # Workaround for issues with SRCPV/SRCREV_FORMAT errors + # error refer to 'multiple' repositories. Only the repository + # in the original SRC_URI actually matters... + ld.setVar('SRCPV', d.getVar('SRCPV')) + ld.setVar('SRCREV_FORMAT', module) + + newfetch = Fetch([url], ld, cache=False) + newfetch.download() + local_paths[module] = newfetch.localpath(url) + + # Correct the submodule references to the local download version... + runfetchcmd("%(basecmd)s config submodule.%(module)s.url %(url)s" % {'basecmd': ud.basecmd, 'module': module, 'url' : local_paths[module]}, d, workdir=ud.clonedir) + + symlink_path = os.path.join(ud.clonedir, 'modules', paths[module]) + if not os.path.exists(symlink_path): + try: + os.makedirs(os.path.dirname(symlink_path), exist_ok=True) + except OSError: + pass + os.symlink(local_paths[module], symlink_path) + + return True + + def need_update(self, ud, d): + main_repo_needs_update = Git.need_update(self, ud, d) + + # First check that the main repository has enough history fetched. If it doesn't, then we don't + # even have the .gitmodules and gitlinks for the submodules to attempt asking whether the + # submodules' histories are recent enough. + if main_repo_needs_update: + return True + + # Now check that the submodule histories are new enough. The git-submodule command doesn't have + # any clean interface for doing this aside from just attempting the checkout (with network + # fetched disabled). 
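
The rewritten update_submodules() above hinges on one trick: each submodule URL parsed out of .gitmodules is rewritten into a gitsm:// URI with bareclone=1;nocheckout=1 so it can be driven through a normal Fetch() pass, pinned to the gitlink hash via a per-module SRCREV. A toy version of just the URL rewrite, assuming a plain proto://host/path input (the example host and module name are made up):

    def submodule_fetch_url(url, module):
        # "https://host/sub.git" -> a gitsm URI fetched bare, no checkout.
        proto = url.split(':', 1)[0]
        new = url.replace('%s:' % proto, 'gitsm:', 1)
        new += ';protocol=%s' % proto
        new += ';name=%s' % module
        new += ';bareclone=1;nocheckout=1'
        return new

    print(submodule_fetch_url("https://example.com/sub.git", "libfoo"))
    # gitsm://example.com/sub.git;protocol=https;name=libfoo;bareclone=1;nocheckout=1
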
+ return not self.update_submodules(ud, d) def download(self, ud, d): Git.download(self, ud, d) if not ud.shallow or ud.localpath != ud.fullshallow: - submodules = self.uses_submodules(ud, d, ud.clonedir) - if submodules: - self.update_submodules(ud, d) + self.update_submodules(ud, d) + + def copy_submodules(self, submodules, ud, destdir, d): + if ud.bareclone: + repo_conf = destdir + else: + repo_conf = os.path.join(destdir, '.git') + + if submodules and not os.path.exists(os.path.join(repo_conf, 'modules')): + os.mkdir(os.path.join(repo_conf, 'modules')) + + for module in submodules: + srcpath = os.path.join(ud.clonedir, 'modules', module) + modpath = os.path.join(repo_conf, 'modules', module) + + if os.path.exists(srcpath): + if os.path.exists(os.path.join(srcpath, '.git')): + srcpath = os.path.join(srcpath, '.git') + + target = modpath + if os.path.exists(modpath): + target = os.path.dirname(modpath) + + os.makedirs(os.path.dirname(target), exist_ok=True) + runfetchcmd("cp -fpLR %s %s" % (srcpath, target), d) + elif os.path.exists(modpath): + # Module already exists, likely unpacked from a shallow mirror clone + pass + else: + # This is fatal, as we do NOT want git-submodule to hit the network + raise bb.fetch2.FetchError('Submodule %s does not exist in %s or %s.' % (module, srcpath, modpath)) def clone_shallow_local(self, ud, dest, d): super(GitSM, self).clone_shallow_local(ud, dest, d) - runfetchcmd('cp -fpPRH "%s/modules" "%s/"' % (ud.clonedir, os.path.join(dest, '.git')), d) + # Copy over the submodules' fetched histories too. + repo_conf = os.path.join(dest, '.git') + + submodules = [] + for name in ud.names: + try: + gitmodules = runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revision), d, quiet=True, workdir=dest) + except: + # No submodules to update + continue + + submodules = list(self.parse_gitmodules(gitmodules).keys()) + + self.copy_submodules(submodules, ud, dest, d) def unpack(self, ud, destdir, d): Git.unpack(self, ud, destdir, d) - if self.uses_submodules(ud, d, ud.destdir): - runfetchcmd(ud.basecmd + " checkout " + ud.revisions[ud.names[0]], d, workdir=ud.destdir) - runfetchcmd(ud.basecmd + " submodule update --init --recursive", d, workdir=ud.destdir) + # Copy over the submodules' fetched histories too. 
+ if ud.bareclone: + repo_conf = ud.destdir + else: + repo_conf = os.path.join(ud.destdir, '.git') + + submodules = [] + paths = {} + uris = {} + local_paths = {} + for name in ud.names: + try: + gitmodules = runfetchcmd("%s show HEAD:.gitmodules" % (ud.basecmd), d, quiet=True, workdir=ud.destdir) + except: + # No submodules to update + continue + + for m, md in self.parse_gitmodules(gitmodules).items(): + submodules.append(m) + paths[m] = md['path'] + uris[m] = md['url'] + + self.copy_submodules(submodules, ud, ud.destdir, d) + + submodules_queue = [(module, os.path.join(repo_conf, 'modules', module)) for module in submodules] + while len(submodules_queue) != 0: + module, modpath = submodules_queue.pop() + + # add submodule children recursively + try: + gitmodules = runfetchcmd("%s show HEAD:.gitmodules" % (ud.basecmd), d, quiet=True, workdir=modpath) + for m, md in self.parse_gitmodules(gitmodules).items(): + submodules_queue.append([m, os.path.join(modpath, 'modules', m)]) + except: + # no children + pass + + # Determine (from the submodule) the correct url to reference + try: + output = runfetchcmd("%(basecmd)s config remote.origin.url" % {'basecmd': ud.basecmd}, d, workdir=modpath) + except bb.fetch2.FetchError as e: + # No remote url defined in this submodule + continue + + local_paths[module] = output + + # Setup the local URL properly (like git submodule init or sync would do...) + runfetchcmd("%(basecmd)s config submodule.%(module)s.url %(url)s" % {'basecmd': ud.basecmd, 'module': module, 'url' : local_paths[module]}, d, workdir=ud.destdir) + + # Ensure the submodule repository is NOT set to bare, since we're checking it out... + runfetchcmd("%s config core.bare false" % (ud.basecmd), d, quiet=True, workdir=modpath) + + if submodules: + # Run submodule update, this sets up the directories -- without touching the config + runfetchcmd("%s submodule update --recursive --no-fetch" % (ud.basecmd), d, quiet=True, workdir=ud.destdir) diff --git a/bitbake/lib/bb/fetch2/hg.py b/bitbake/lib/bb/fetch2/hg.py index d0857e6..936d043 100644 --- a/bitbake/lib/bb/fetch2/hg.py +++ b/bitbake/lib/bb/fetch2/hg.py @@ -80,7 +80,7 @@ class Hg(FetchMethod): ud.fullmirror = os.path.join(d.getVar("DL_DIR"), mirrortarball) ud.mirrortarballs = [mirrortarball] - hgdir = d.getVar("HGDIR") or (d.getVar("DL_DIR") + "/hg/") + hgdir = d.getVar("HGDIR") or (d.getVar("DL_DIR") + "/hg") ud.pkgdir = os.path.join(hgdir, hgsrcname) ud.moddir = os.path.join(ud.pkgdir, ud.module) ud.localfile = ud.moddir diff --git a/bitbake/lib/bb/fetch2/npm.py b/bitbake/lib/bb/fetch2/npm.py index b5f148c..408dfc3 100644 --- a/bitbake/lib/bb/fetch2/npm.py +++ b/bitbake/lib/bb/fetch2/npm.py @@ -32,7 +32,6 @@ from bb.fetch2 import runfetchcmd from bb.fetch2 import logger from bb.fetch2 import UnpackError from bb.fetch2 import ParameterError -from distutils import spawn def subprocess_setup(): # Python installs a SIGPIPE handler by default. 
This is usually not what @@ -195,9 +194,11 @@ class Npm(FetchMethod): outputurl = pdata['dist']['tarball'] data[pkg] = {} data[pkg]['tgz'] = os.path.basename(outputurl) - if not outputurl in fetchedlist: - self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False) - fetchedlist.append(outputurl) + if outputurl in fetchedlist: + return + + self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False) + fetchedlist.append(outputurl) dependencies = pdata.get('dependencies', {}) optionalDependencies = pdata.get('optionalDependencies', {}) diff --git a/bitbake/lib/bb/fetch2/osc.py b/bitbake/lib/bb/fetch2/osc.py index 2b4f7d9..6c60456 100644 --- a/bitbake/lib/bb/fetch2/osc.py +++ b/bitbake/lib/bb/fetch2/osc.py @@ -32,8 +32,9 @@ class Osc(FetchMethod): ud.module = ud.parm["module"] # Create paths to osc checkouts + oscdir = d.getVar("OSCDIR") or (d.getVar("DL_DIR") + "/osc") relpath = self._strip_leading_slashes(ud.path) - ud.pkgdir = os.path.join(d.getVar('OSCDIR'), ud.host) + ud.pkgdir = os.path.join(oscdir, ud.host) ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module) if 'rev' in ud.parm: @@ -54,7 +55,7 @@ class Osc(FetchMethod): command is "fetch", "update", "info" """ - basecmd = d.expand('${FETCHCMD_osc}') + basecmd = d.getVar("FETCHCMD_osc") or "/usr/bin/env osc" proto = ud.parm.get('protocol', 'ocs') diff --git a/bitbake/lib/bb/fetch2/perforce.py b/bitbake/lib/bb/fetch2/perforce.py index 3debad5..903a8e6 100644 --- a/bitbake/lib/bb/fetch2/perforce.py +++ b/bitbake/lib/bb/fetch2/perforce.py @@ -43,13 +43,9 @@ class Perforce(FetchMethod): provided by the env, use it. If P4PORT is specified by the recipe, use its values, which may override the settings in P4CONFIG. """ - ud.basecmd = d.getVar('FETCHCMD_p4') - if not ud.basecmd: - ud.basecmd = "/usr/bin/env p4" + ud.basecmd = d.getVar("FETCHCMD_p4") or "/usr/bin/env p4" - ud.dldir = d.getVar('P4DIR') - if not ud.dldir: - ud.dldir = '%s/%s' % (d.getVar('DL_DIR'), 'p4') + ud.dldir = d.getVar("P4DIR") or (d.getVar("DL_DIR") + "/p4") path = ud.url.split('://')[1] path = path.split(';')[0] diff --git a/bitbake/lib/bb/fetch2/repo.py b/bitbake/lib/bb/fetch2/repo.py index c22d9b5..8c7e818 100644 --- a/bitbake/lib/bb/fetch2/repo.py +++ b/bitbake/lib/bb/fetch2/repo.py @@ -45,6 +45,8 @@ class Repo(FetchMethod): "master". """ + ud.basecmd = d.getVar("FETCHCMD_repo") or "/usr/bin/env repo" + ud.proto = ud.parm.get('protocol', 'git') ud.branch = ud.parm.get('branch', 'master') ud.manifest = ud.parm.get('manifest', 'default.xml') @@ -60,8 +62,8 @@ class Repo(FetchMethod): logger.debug(1, "%s already exists (or was stashed). 
Skipping repo init / sync.", ud.localpath) return + repodir = d.getVar("REPODIR") or (d.getVar("DL_DIR") + "/repo") gitsrcname = "%s%s" % (ud.host, ud.path.replace("/", ".")) - repodir = d.getVar("REPODIR") or os.path.join(d.getVar("DL_DIR"), "repo") codir = os.path.join(repodir, gitsrcname, ud.manifest) if ud.user: @@ -72,11 +74,11 @@ class Repo(FetchMethod): repodir = os.path.join(codir, "repo") bb.utils.mkdirhier(repodir) if not os.path.exists(os.path.join(repodir, ".repo")): - bb.fetch2.check_network_access(d, "repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), ud.url) - runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d, workdir=repodir) + bb.fetch2.check_network_access(d, "%s init -m %s -b %s -u %s://%s%s%s" % (ud.basecmd, ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), ud.url) + runfetchcmd("%s init -m %s -b %s -u %s://%s%s%s" % (ud.basecmd, ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d, workdir=repodir) - bb.fetch2.check_network_access(d, "repo sync %s" % ud.url, ud.url) - runfetchcmd("repo sync", d, workdir=repodir) + bb.fetch2.check_network_access(d, "%s sync %s" % (ud.basecmd, ud.url), ud.url) + runfetchcmd("%s sync" % ud.basecmd, d, workdir=repodir) scmdata = ud.parm.get("scmdata", "") if scmdata == "keep": diff --git a/bitbake/lib/bb/fetch2/svn.py b/bitbake/lib/bb/fetch2/svn.py index 3f172ee..ed70bcf 100644 --- a/bitbake/lib/bb/fetch2/svn.py +++ b/bitbake/lib/bb/fetch2/svn.py @@ -49,7 +49,7 @@ class Svn(FetchMethod): if not "module" in ud.parm: raise MissingParameterError('module', ud.url) - ud.basecmd = d.getVar('FETCHCMD_svn') + ud.basecmd = d.getVar("FETCHCMD_svn") or "/usr/bin/env svn --non-interactive --trust-server-cert" ud.module = ud.parm["module"] @@ -59,8 +59,9 @@ class Svn(FetchMethod): ud.path_spec = ud.parm["path_spec"] # Create paths to svn checkouts + svndir = d.getVar("SVNDIR") or (d.getVar("DL_DIR") + "/svn") relpath = self._strip_leading_slashes(ud.path) - ud.pkgdir = os.path.join(d.expand('${SVNDIR}'), ud.host, relpath) + ud.pkgdir = os.path.join(svndir, ud.host, relpath) ud.moddir = os.path.join(ud.pkgdir, ud.module) ud.setup_revisions(d) diff --git a/bitbake/lib/bb/main.py b/bitbake/lib/bb/main.py index 7711b29..732a315 100755 --- a/bitbake/lib/bb/main.py +++ b/bitbake/lib/bb/main.py @@ -292,8 +292,12 @@ class BitBakeConfigParameters(cookerdata.ConfigParameters): help="Writes the event log of the build to a bitbake event json file. 
" "Use '' (empty string) to assign the name automatically.") - parser.add_option("", "--runall", action="store", dest="runall", - help="Run the specified task for all build targets and their dependencies.") + parser.add_option("", "--runall", action="append", dest="runall", + help="Run the specified task for any recipe in the taskgraph of the specified target (even if it wouldn't otherwise have run).") + + parser.add_option("", "--runonly", action="append", dest="runonly", + help="Run only the specified task within the taskgraph of the specified targets (and any task dependencies those tasks may have).") + options, targets = parser.parse_args(argv) @@ -401,9 +405,6 @@ def setup_bitbake(configParams, configuration, extrafeatures=None): # In status only mode there are no logs and no UI logger.addHandler(handler) - # Clear away any spurious environment variables while we stoke up the cooker - cleanedvars = bb.utils.clean_environment() - if configParams.server_only: featureset = [] ui_module = None @@ -419,6 +420,10 @@ def setup_bitbake(configParams, configuration, extrafeatures=None): server_connection = None + # Clear away any spurious environment variables while we stoke up the cooker + # (done after import_extension_module() above since for example import gi triggers env var usage) + cleanedvars = bb.utils.clean_environment() + if configParams.remote_server: # Connect to a remote XMLRPC server server_connection = bb.server.xmlrpcclient.connectXMLRPC(configParams.remote_server, featureset, diff --git a/bitbake/lib/bb/msg.py b/bitbake/lib/bb/msg.py index f1723be..96f077e 100644 --- a/bitbake/lib/bb/msg.py +++ b/bitbake/lib/bb/msg.py @@ -40,6 +40,7 @@ class BBLogFormatter(logging.Formatter): VERBOSE = logging.INFO - 1 NOTE = logging.INFO PLAIN = logging.INFO + 1 + VERBNOTE = logging.INFO + 2 ERROR = logging.ERROR WARNING = logging.WARNING CRITICAL = logging.CRITICAL @@ -51,6 +52,7 @@ class BBLogFormatter(logging.Formatter): VERBOSE: 'NOTE', NOTE : 'NOTE', PLAIN : '', + VERBNOTE: 'NOTE', WARNING : 'WARNING', ERROR : 'ERROR', CRITICAL: 'ERROR', @@ -66,6 +68,7 @@ class BBLogFormatter(logging.Formatter): VERBOSE : BASECOLOR, NOTE : BASECOLOR, PLAIN : BASECOLOR, + VERBNOTE: BASECOLOR, WARNING : YELLOW, ERROR : RED, CRITICAL: RED, diff --git a/bitbake/lib/bb/parse/__init__.py b/bitbake/lib/bb/parse/__init__.py index 2fc4002..5397d57 100644 --- a/bitbake/lib/bb/parse/__init__.py +++ b/bitbake/lib/bb/parse/__init__.py @@ -134,8 +134,9 @@ def resolve_file(fn, d): if not newfn: raise IOError(errno.ENOENT, "file %s not found in %s" % (fn, bbpath)) fn = newfn + else: + mark_dependency(d, fn) - mark_dependency(d, fn) if not os.path.isfile(fn): raise IOError(errno.ENOENT, "file %s not found" % fn) diff --git a/bitbake/lib/bb/parse/ast.py b/bitbake/lib/bb/parse/ast.py index dba4540..9d20c32 100644 --- a/bitbake/lib/bb/parse/ast.py +++ b/bitbake/lib/bb/parse/ast.py @@ -335,35 +335,39 @@ def handleInherit(statements, filename, lineno, m): classes = m.group(1) statements.append(InheritNode(filename, lineno, classes)) -def finalize(fn, d, variant = None): - saved_handlers = bb.event.get_handlers().copy() - - for var in d.getVar('__BBHANDLERS', False) or []: - # try to add the handler - handlerfn = d.getVarFlag(var, "filename", False) - if not handlerfn: - bb.fatal("Undefined event handler function '%s'" % var) - handlerln = int(d.getVarFlag(var, "lineno", False)) - bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln) - - 
bb.event.fire(bb.event.RecipePreFinalise(fn), d) - - bb.data.expandKeys(d) +def runAnonFuncs(d): code = [] for funcname in d.getVar("__BBANONFUNCS", False) or []: code.append("%s(d)" % funcname) bb.utils.better_exec("\n".join(code), {"d": d}) - tasklist = d.getVar('__BBTASKS', False) or [] - bb.event.fire(bb.event.RecipeTaskPreProcess(fn, list(tasklist)), d) - bb.build.add_tasks(tasklist, d) +def finalize(fn, d, variant = None): + saved_handlers = bb.event.get_handlers().copy() + try: + for var in d.getVar('__BBHANDLERS', False) or []: + # try to add the handler + handlerfn = d.getVarFlag(var, "filename", False) + if not handlerfn: + bb.fatal("Undefined event handler function '%s'" % var) + handlerln = int(d.getVarFlag(var, "lineno", False)) + bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln) + + bb.event.fire(bb.event.RecipePreFinalise(fn), d) + + bb.data.expandKeys(d) + runAnonFuncs(d) + + tasklist = d.getVar('__BBTASKS', False) or [] + bb.event.fire(bb.event.RecipeTaskPreProcess(fn, list(tasklist)), d) + bb.build.add_tasks(tasklist, d) - bb.parse.siggen.finalise(fn, d, variant) + bb.parse.siggen.finalise(fn, d, variant) - d.setVar('BBINCLUDED', bb.parse.get_file_depends(d)) + d.setVar('BBINCLUDED', bb.parse.get_file_depends(d)) - bb.event.fire(bb.event.RecipeParsed(fn), d) - bb.event.set_handlers(saved_handlers) + bb.event.fire(bb.event.RecipeParsed(fn), d) + finally: + bb.event.set_handlers(saved_handlers) def _create_variants(datastores, names, function, onlyfinalise): def create_variant(name, orig_d, arg = None): diff --git a/bitbake/lib/bb/parse/parse_py/BBHandler.py b/bitbake/lib/bb/parse/parse_py/BBHandler.py index f89ad24..e5039e3 100644 --- a/bitbake/lib/bb/parse/parse_py/BBHandler.py +++ b/bitbake/lib/bb/parse/parse_py/BBHandler.py @@ -131,9 +131,6 @@ def handle(fn, d, include): abs_fn = resolve_file(fn, d) - if include: - bb.parse.mark_dependency(d, abs_fn) - # actual loading statements = get_statements(fn, abs_fn, base_name) diff --git a/bitbake/lib/bb/parse/parse_py/ConfHandler.py b/bitbake/lib/bb/parse/parse_py/ConfHandler.py index 97aa130..9d3ebe1 100644 --- a/bitbake/lib/bb/parse/parse_py/ConfHandler.py +++ b/bitbake/lib/bb/parse/parse_py/ConfHandler.py @@ -134,9 +134,6 @@ def handle(fn, data, include): abs_fn = resolve_file(fn, data) f = open(abs_fn, 'r') - if include: - bb.parse.mark_dependency(data, abs_fn) - statements = ast.StatementGroup() lineno = 0 while True: diff --git a/bitbake/lib/bb/runqueue.py b/bitbake/lib/bb/runqueue.py index b7be102..9ce06c4 100644 --- a/bitbake/lib/bb/runqueue.py +++ b/bitbake/lib/bb/runqueue.py @@ -94,13 +94,13 @@ class RunQueueStats: self.active = self.active - 1 self.failed = self.failed + 1 - def taskCompleted(self, number = 1): - self.active = self.active - number - self.completed = self.completed + number + def taskCompleted(self): + self.active = self.active - 1 + self.completed = self.completed + 1 - def taskSkipped(self, number = 1): - self.active = self.active + number - self.skipped = self.skipped + number + def taskSkipped(self): + self.active = self.active + 1 + self.skipped = self.skipped + 1 def taskActive(self): self.active = self.active + 1 @@ -134,6 +134,7 @@ class RunQueueScheduler(object): self.prio_map = [self.rqdata.runtaskentries.keys()] self.buildable = [] + self.skip_maxthread = {} self.stamps = {} for tid in self.rqdata.runtaskentries: (mc, fn, taskname, taskfn) = split_tid_mcfn(tid) @@ -150,8 +151,25 @@ class RunQueueScheduler(object): 
self.buildable = [x for x in self.buildable if x not in self.rq.runq_running] if not self.buildable: return None + + # Filter out tasks that have a max number of threads that have been exceeded + skip_buildable = {} + for running in self.rq.runq_running.difference(self.rq.runq_complete): + rtaskname = taskname_from_tid(running) + if rtaskname not in self.skip_maxthread: + self.skip_maxthread[rtaskname] = self.rq.cfgData.getVarFlag(rtaskname, "number_threads") + if not self.skip_maxthread[rtaskname]: + continue + if rtaskname in skip_buildable: + skip_buildable[rtaskname] += 1 + else: + skip_buildable[rtaskname] = 1 + if len(self.buildable) == 1: tid = self.buildable[0] + taskname = taskname_from_tid(tid) + if taskname in skip_buildable and skip_buildable[taskname] >= int(self.skip_maxthread[taskname]): + return None stamp = self.stamps[tid] if stamp not in self.rq.build_stamps.values(): return tid @@ -164,6 +182,9 @@ class RunQueueScheduler(object): best = None bestprio = None for tid in self.buildable: + taskname = taskname_from_tid(tid) + if taskname in skip_buildable and skip_buildable[taskname] >= int(self.skip_maxthread[taskname]): + continue prio = self.rev_prio_map[tid] if bestprio is None or bestprio > prio: stamp = self.stamps[tid] @@ -178,7 +199,7 @@ class RunQueueScheduler(object): """ Return the id of the task we should build next """ - if self.rq.stats.active < self.rq.number_tasks: + if self.rq.can_start_task(): return self.next_buildable_task() def newbuildable(self, task): @@ -581,11 +602,18 @@ class RunQueueData: if t in taskData[mc].taskentries: depends.add(t) - def add_resolved_dependencies(mc, fn, tasknames, depends): - for taskname in tasknames: - tid = build_tid(mc, fn, taskname) - if tid in self.runtaskentries: - depends.add(tid) + def add_mc_dependencies(mc, tid): + mcdeps = taskData[mc].get_mcdepends() + for dep in mcdeps: + mcdependency = dep.split(':') + pn = mcdependency[3] + frommc = mcdependency[1] + mcdep = mcdependency[2] + deptask = mcdependency[4] + if mc == frommc: + fn = taskData[mcdep].build_targets[pn][0] + newdep = '%s:%s' % (fn,deptask) + taskData[mc].taskentries[tid].tdepends.append(newdep) for mc in taskData: for tid in taskData[mc].taskentries: @@ -603,12 +631,16 @@ class RunQueueData: if fn in taskData[mc].failed_fns: continue + # We add multiconfig dependencies before processing internal task deps (tdepends) + if 'mcdepends' in task_deps and taskname in task_deps['mcdepends']: + add_mc_dependencies(mc, tid) + # Resolve task internal dependencies # # e.g. addtask before X after Y for t in taskData[mc].taskentries[tid].tdepends: - (_, depfn, deptaskname, _) = split_tid_mcfn(t) - depends.add(build_tid(mc, depfn, deptaskname)) + (depmc, depfn, deptaskname, _) = split_tid_mcfn(t) + depends.add(build_tid(depmc, depfn, deptaskname)) # Resolve 'deptask' dependencies # @@ -673,57 +705,106 @@ class RunQueueData: recursiveitasks[tid].append(newdep) self.runtaskentries[tid].depends = depends + # Remove all self references + self.runtaskentries[tid].depends.discard(tid) #self.dump_data() + self.init_progress_reporter.next_stage() + # Resolve recursive 'recrdeptask' dependencies (Part B) # # e.g. 
do_sometask[recrdeptask] = "do_someothertask" # (makes sure sometask runs after someothertask of all DEPENDS, RDEPENDS and intertask dependencies, recursively) # We need to do this separately since we need all of runtaskentries[*].depends to be complete before this is processed - self.init_progress_reporter.next_stage(len(recursivetasks)) - extradeps = {} - for taskcounter, tid in enumerate(recursivetasks): - extradeps[tid] = set(self.runtaskentries[tid].depends) - - tasknames = recursivetasks[tid] - seendeps = set() - - def generate_recdeps(t): - newdeps = set() - (mc, fn, taskname, _) = split_tid_mcfn(t) - add_resolved_dependencies(mc, fn, tasknames, newdeps) - extradeps[tid].update(newdeps) - seendeps.add(t) - newdeps.add(t) - for i in newdeps: - if i not in self.runtaskentries: - # Not all recipes might have the recrdeptask task as a task - continue - task = self.runtaskentries[i].task - for n in self.runtaskentries[i].depends: - if n not in seendeps: - generate_recdeps(n) - generate_recdeps(tid) - if tid in recursiveitasks: - for dep in recursiveitasks[tid]: - generate_recdeps(dep) - self.init_progress_reporter.update(taskcounter) + # Generating/interating recursive lists of dependencies is painful and potentially slow + # Precompute recursive task dependencies here by: + # a) create a temp list of reverse dependencies (revdeps) + # b) walk up the ends of the chains (when a given task no longer has dependencies i.e. len(deps) == 0) + # c) combine the total list of dependencies in cumulativedeps + # d) optimise by pre-truncating 'task' off the items in cumulativedeps (keeps items in sets lower) - # Remove circular references so that do_a[recrdeptask] = "do_a do_b" can work - for tid in recursivetasks: - extradeps[tid].difference_update(recursivetasksselfref) + revdeps = {} + deps = {} + cumulativedeps = {} + for tid in self.runtaskentries: + deps[tid] = set(self.runtaskentries[tid].depends) + revdeps[tid] = set() + cumulativedeps[tid] = set() + # Generate a temp list of reverse dependencies for tid in self.runtaskentries: - task = self.runtaskentries[tid].task - # Add in extra dependencies - if tid in extradeps: - self.runtaskentries[tid].depends = extradeps[tid] - # Remove all self references - if tid in self.runtaskentries[tid].depends: - logger.debug(2, "Task %s contains self reference!", tid) - self.runtaskentries[tid].depends.remove(tid) + for dep in self.runtaskentries[tid].depends: + revdeps[dep].add(tid) + # Find the dependency chain endpoints + endpoints = set() + for tid in self.runtaskentries: + if len(deps[tid]) == 0: + endpoints.add(tid) + # Iterate the chains collating dependencies + while endpoints: + next = set() + for tid in endpoints: + for dep in revdeps[tid]: + cumulativedeps[dep].add(fn_from_tid(tid)) + cumulativedeps[dep].update(cumulativedeps[tid]) + if tid in deps[dep]: + deps[dep].remove(tid) + if len(deps[dep]) == 0: + next.add(dep) + endpoints = next + #for tid in deps: + # if len(deps[tid]) != 0: + # bb.warn("Sanity test failure, dependencies left for %s (%s)" % (tid, deps[tid])) + + # Loop here since recrdeptasks can depend upon other recrdeptasks and we have to + # resolve these recursively until we aren't adding any further extra dependencies + extradeps = True + while extradeps: + extradeps = 0 + for tid in recursivetasks: + tasknames = recursivetasks[tid] + + totaldeps = set(self.runtaskentries[tid].depends) + if tid in recursiveitasks: + totaldeps.update(recursiveitasks[tid]) + for dep in recursiveitasks[tid]: + if dep not in self.runtaskentries: + 
continue + totaldeps.update(self.runtaskentries[dep].depends) + + deps = set() + for dep in totaldeps: + if dep in cumulativedeps: + deps.update(cumulativedeps[dep]) + + for t in deps: + for taskname in tasknames: + newtid = t + ":" + taskname + if newtid == tid: + continue + if newtid in self.runtaskentries and newtid not in self.runtaskentries[tid].depends: + extradeps += 1 + self.runtaskentries[tid].depends.add(newtid) + + # Handle recursive tasks which depend upon other recursive tasks + deps = set() + for dep in self.runtaskentries[tid].depends.intersection(recursivetasks): + deps.update(self.runtaskentries[dep].depends.difference(self.runtaskentries[tid].depends)) + for newtid in deps: + for taskname in tasknames: + if not newtid.endswith(":" + taskname): + continue + if newtid in self.runtaskentries: + extradeps += 1 + self.runtaskentries[tid].depends.add(newtid) + + bb.debug(1, "Added %s recursive dependencies in this loop" % extradeps) + + # Remove recrdeptask circular references so that do_a[recrdeptask] = "do_a do_b" can work + for tid in recursivetasksselfref: + self.runtaskentries[tid].depends.difference_update(recursivetasksselfref) self.init_progress_reporter.next_stage() @@ -798,30 +879,57 @@ class RunQueueData: # # Once all active tasks are marked, prune the ones we don't need. - delcount = 0 + delcount = {} for tid in list(self.runtaskentries.keys()): if tid not in runq_build: + delcount[tid] = self.runtaskentries[tid] del self.runtaskentries[tid] - delcount += 1 - self.init_progress_reporter.next_stage() + # Handle --runall + if self.cooker.configuration.runall: + # re-run the mark_active and then drop unused tasks from new list + runq_build = {} + + for task in self.cooker.configuration.runall: + runall_tids = set() + for tid in list(self.runtaskentries): + wanttid = fn_from_tid(tid) + ":do_%s" % task + if wanttid in delcount: + self.runtaskentries[wanttid] = delcount[wanttid] + if wanttid in self.runtaskentries: + runall_tids.add(wanttid) + + for tid in list(runall_tids): + mark_active(tid,1) - if self.cooker.configuration.runall is not None: - runall = "do_%s" % self.cooker.configuration.runall - runall_tids = { k: v for k, v in self.runtaskentries.items() if taskname_from_tid(k) == runall } + for tid in list(self.runtaskentries.keys()): + if tid not in runq_build: + delcount[tid] = self.runtaskentries[tid] + del self.runtaskentries[tid] + if len(self.runtaskentries) == 0: + bb.msg.fatal("RunQueue", "Could not find any tasks with the tasknames %s to run within the recipes of the taskgraphs of the targets %s" % (str(self.cooker.configuration.runall), str(self.targets))) + + self.init_progress_reporter.next_stage() + + # Handle runonly + if self.cooker.configuration.runonly: # re-run the mark_active and then drop unused tasks from new list runq_build = {} - for tid in list(runall_tids): - mark_active(tid,1) + + for task in self.cooker.configuration.runonly: + runonly_tids = { k: v for k, v in self.runtaskentries.items() if taskname_from_tid(k) == "do_%s" % task } + + for tid in list(runonly_tids): + mark_active(tid,1) for tid in list(self.runtaskentries.keys()): if tid not in runq_build: + delcount[tid] = self.runtaskentries[tid] del self.runtaskentries[tid] - delcount += 1 if len(self.runtaskentries) == 0: - bb.msg.fatal("RunQueue", "No remaining tasks to run for build target %s with runall %s" % (target, runall)) + bb.msg.fatal("RunQueue", "Could not find any tasks with the tasknames %s to run within the taskgraphs of the targets %s" % 
(str(self.cooker.configuration.runonly), str(self.targets))) # # Step D - Sanity checks and computation @@ -834,7 +942,7 @@ class RunQueueData: else: bb.msg.fatal("RunQueue", "No active tasks and not in --continue mode?! Please report this bug.") - logger.verbose("Pruned %s inactive tasks, %s left", delcount, len(self.runtaskentries)) + logger.verbose("Pruned %s inactive tasks, %s left", len(delcount), len(self.runtaskentries)) logger.verbose("Assign Weightings") @@ -962,7 +1070,7 @@ class RunQueueData: msg += "\n%s has unique rprovides:\n %s" % (provfn, "\n ".join(rprovide_results[provfn] - commonrprovs)) if self.warn_multi_bb: - logger.warning(msg) + logger.verbnote(msg) else: logger.error(msg) @@ -970,7 +1078,7 @@ class RunQueueData: # Create a whitelist usable by the stamp checks self.stampfnwhitelist = {} - for mc in self.taskData: + for mc in self.taskData: self.stampfnwhitelist[mc] = [] for entry in self.stampwhitelist.split(): if entry not in self.taskData[mc].build_targets: @@ -1002,7 +1110,7 @@ class RunQueueData: bb.debug(1, "Task %s is marked nostamp, cannot invalidate this task" % taskname) else: logger.verbose("Invalidate task %s, %s", taskname, fn) - bb.parse.siggen.invalidate_task(taskname, self.dataCaches[mc], fn) + bb.parse.siggen.invalidate_task(taskname, self.dataCaches[mc], taskfn) self.init_progress_reporter.next_stage() @@ -1646,6 +1754,10 @@ class RunQueueExecute: valid = bb.utils.better_eval(call, locs) return valid + def can_start_task(self): + can_start = self.stats.active < self.number_tasks + return can_start + class RunQueueExecuteDummy(RunQueueExecute): def __init__(self, rq): self.rq = rq @@ -1719,13 +1831,14 @@ class RunQueueExecuteTasks(RunQueueExecute): bb.build.del_stamp(taskname, self.rqdata.dataCaches[mc], taskfn) self.rq.scenequeue_covered.remove(tid) - toremove = covered_remove + toremove = covered_remove | self.rq.scenequeue_notcovered for task in toremove: logger.debug(1, 'Not skipping task %s due to setsceneverify', task) while toremove: covered_remove = [] for task in toremove: - removecoveredtask(task) + if task in self.rq.scenequeue_covered: + removecoveredtask(task) for deptask in self.rqdata.runtaskentries[task].depends: if deptask not in self.rq.scenequeue_covered: continue @@ -1795,14 +1908,13 @@ class RunQueueExecuteTasks(RunQueueExecute): continue if revdep in self.runq_buildable: continue - alldeps = 1 + alldeps = True for dep in self.rqdata.runtaskentries[revdep].depends: if dep not in self.runq_complete: - alldeps = 0 - if alldeps == 1: + alldeps = False + break + if alldeps: self.setbuildable(revdep) - fn = fn_from_tid(revdep) - taskname = taskname_from_tid(revdep) logger.debug(1, "Marking task %s as buildable", revdep) def task_complete(self, task): @@ -1826,8 +1938,8 @@ class RunQueueExecuteTasks(RunQueueExecute): self.setbuildable(task) bb.event.fire(runQueueTaskSkipped(task, self.stats, self.rq, reason), self.cfgData) self.task_completeoutright(task) - self.stats.taskCompleted() self.stats.taskSkipped() + self.stats.taskCompleted() def execute(self): """ @@ -1937,7 +2049,7 @@ class RunQueueExecuteTasks(RunQueueExecute): self.build_stamps2.append(self.build_stamps[task]) self.runq_running.add(task) self.stats.taskActive() - if self.stats.active < self.number_tasks: + if self.can_start_task(): return True if self.stats.active > 0: @@ -1992,6 +2104,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute): # If we don't have any setscene functions, skip this step if len(self.rqdata.runq_setscene_tids) == 0: rq.scenequeue_covered = set() 
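
A few hunks above, recrdeptask resolution was rewritten from per-task recursive walks into a single bottom-up pass: build a reverse-dependency map, start at tasks with no dependencies, and propagate each task's cumulative dependency set toward the roots. A generic sketch of that accumulation over plain dicts and sets (the runqueue additionally pre-truncates task names off the stored items, which is omitted here):

    def cumulative_deps(depends):
        # depends: {task: set of tasks it depends on}
        deps = {t: set(d) for t, d in depends.items()}
        revdeps = {t: set() for t in depends}
        cumulative = {t: set() for t in depends}
        for t, ds in depends.items():
            for d in ds:
                revdeps[d].add(t)
        # Endpoints are tasks whose dependencies are already resolved.
        frontier = {t for t, ds in deps.items() if not ds}
        while frontier:
            nxt = set()
            for t in frontier:
                for parent in revdeps[t]:
                    cumulative[parent].add(t)
                    cumulative[parent].update(cumulative[t])
                    deps[parent].discard(t)
                    if not deps[parent]:
                        nxt.add(parent)
            frontier = nxt
        return cumulative

    print(cumulative_deps({"a": set(), "b": {"a"}, "c": {"a", "b"}}))
    # b accumulates 'a'; c accumulates both 'a' and 'b'

Each task is visited once its dependencies are complete, so the whole graph is covered in one breadth-first sweep instead of one recursive walk per recrdeptask user.
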
+ rq.scenequeue_notcovered = set() rq.state = runQueueRunInit return @@ -2207,10 +2320,15 @@ class RunQueueExecuteScenequeue(RunQueueExecute): sq_hash.append(self.rqdata.runtaskentries[tid].hash) sq_taskname.append(taskname) sq_task.append(tid) + + self.cooker.data.setVar("BB_SETSCENE_STAMPCURRENT_COUNT", len(stamppresent)) + call = self.rq.hashvalidate + "(sq_fn, sq_task, sq_hash, sq_hashfn, d)" locs = { "sq_fn" : sq_fn, "sq_task" : sq_taskname, "sq_hash" : sq_hash, "sq_hashfn" : sq_hashfn, "d" : self.cooker.data } valid = bb.utils.better_eval(call, locs) + self.cooker.data.delVar("BB_SETSCENE_STAMPCURRENT_COUNT") + valid_new = stamppresent for v in valid: valid_new.append(sq_task[v]) @@ -2272,8 +2390,8 @@ class RunQueueExecuteScenequeue(RunQueueExecute): def task_failoutright(self, task): self.runq_running.add(task) self.runq_buildable.add(task) - self.stats.taskCompleted() self.stats.taskSkipped() + self.stats.taskCompleted() self.scenequeue_notcovered.add(task) self.scenequeue_updatecounters(task, True) @@ -2281,8 +2399,8 @@ class RunQueueExecuteScenequeue(RunQueueExecute): self.runq_running.add(task) self.runq_buildable.add(task) self.task_completeoutright(task) - self.stats.taskCompleted() self.stats.taskSkipped() + self.stats.taskCompleted() def execute(self): """ @@ -2292,7 +2410,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute): self.rq.read_workers() task = None - if self.stats.active < self.number_tasks: + if self.can_start_task(): # Find the next setscene to run for nexttask in self.rqdata.runq_setscene_tids: if nexttask in self.runq_buildable and nexttask not in self.runq_running and self.stamps[nexttask] not in self.build_stamps.values(): @@ -2351,7 +2469,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute): self.build_stamps2.append(self.build_stamps[task]) self.runq_running.add(task) self.stats.taskActive() - if self.stats.active < self.number_tasks: + if self.can_start_task(): return True if self.stats.active > 0: diff --git a/bitbake/lib/bb/server/process.py b/bitbake/lib/bb/server/process.py index 3d31355..38b923f 100644 --- a/bitbake/lib/bb/server/process.py +++ b/bitbake/lib/bb/server/process.py @@ -223,6 +223,8 @@ class ProcessServer(multiprocessing.Process): try: self.cooker.shutdown(True) + self.cooker.notifier.stop() + self.cooker.confignotifier.stop() except: pass @@ -375,11 +377,12 @@ class BitBakeServer(object): if os.path.exists(sockname): os.unlink(sockname) + # Place the log in the builddirectory alongside the lock file + logfile = os.path.join(os.path.dirname(self.bitbake_lock.name), "bitbake-cookerdaemon.log") + self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) # AF_UNIX has path length issues so chdir here to workaround cwd = os.getcwd() - logfile = os.path.join(cwd, "bitbake-cookerdaemon.log") - try: os.chdir(os.path.dirname(sockname)) self.sock.bind(os.path.basename(sockname)) @@ -392,11 +395,16 @@ class BitBakeServer(object): bb.daemonize.createDaemon(self._startServer, logfile) self.sock.close() self.bitbake_lock.close() + os.close(self.readypipein) ready = ConnectionReader(self.readypipe) r = ready.poll(30) if r: - r = ready.get() + try: + r = ready.get() + except EOFError: + # Trap the child exitting/closing the pipe and error out + r = None if not r or r != "ready": ready.close() bb.error("Unable to start bitbake server") @@ -422,21 +430,16 @@ class BitBakeServer(object): bb.error("Server log for this session (%s):\n%s" % (logfile, "".join(lines))) raise SystemExit(1) ready.close() - os.close(self.readypipein) def 
_startServer(self): print(self.start_log_format % (os.getpid(), datetime.datetime.now().strftime(self.start_log_datetime_format))) server = ProcessServer(self.bitbake_lock, self.sock, self.sockname) self.configuration.setServerRegIdleCallback(server.register_idle_function) + os.close(self.readypipe) writer = ConnectionWriter(self.readypipein) - try: - self.cooker = bb.cooker.BBCooker(self.configuration, self.featureset) - writer.send("ready") - except: - writer.send("fail") - raise - finally: - os.close(self.readypipein) + self.cooker = bb.cooker.BBCooker(self.configuration, self.featureset) + writer.send("ready") + writer.close() server.cooker = self.cooker server.server_timeout = self.configuration.server_timeout server.xmlrpcinterface = self.configuration.xmlrpcinterface diff --git a/bitbake/lib/bb/siggen.py b/bitbake/lib/bb/siggen.py index 5ef82d7..03c824e 100644 --- a/bitbake/lib/bb/siggen.py +++ b/bitbake/lib/bb/siggen.py @@ -110,42 +110,13 @@ class SignatureGeneratorBasic(SignatureGenerator): ignore_mismatch = ((d.getVar("BB_HASH_IGNORE_MISMATCH") or '') == '1') tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d) - taskdeps = {} - basehash = {} + taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, self.basewhitelist, fn) for task in tasklist: - data = lookupcache[task] - - if data is None: - bb.error("Task %s from %s seems to be empty?!" % (task, fn)) - data = '' - - gendeps[task] -= self.basewhitelist - newdeps = gendeps[task] - seen = set() - while newdeps: - nextdeps = newdeps - seen |= nextdeps - newdeps = set() - for dep in nextdeps: - if dep in self.basewhitelist: - continue - gendeps[dep] -= self.basewhitelist - newdeps |= gendeps[dep] - newdeps -= seen - - alldeps = sorted(seen) - for dep in alldeps: - data = data + dep - var = lookupcache[dep] - if var is not None: - data = data + str(var) - datahash = hashlib.md5(data.encode("utf-8")).hexdigest() k = fn + "." + task - if not ignore_mismatch and k in self.basehash and self.basehash[k] != datahash: - bb.error("When reparsing %s, the basehash value changed from %s to %s. The metadata is not deterministic and this needs to be fixed." % (k, self.basehash[k], datahash)) - self.basehash[k] = datahash - taskdeps[task] = alldeps + if not ignore_mismatch and k in self.basehash and self.basehash[k] != basehash[k]: + bb.error("When reparsing %s, the basehash value changed from %s to %s. The metadata is not deterministic and this needs to be fixed." % (k, self.basehash[k], basehash[k])) + self.basehash[k] = basehash[k] self.taskdeps[fn] = taskdeps self.gendeps[fn] = gendeps @@ -193,15 +164,24 @@ class SignatureGeneratorBasic(SignatureGenerator): return taint def get_taskhash(self, fn, task, deps, dataCache): + + mc = '' + if fn.startswith('multiconfig:'): + mc = fn.split(':')[1] k = fn + "." 
+ task + data = dataCache.basetaskhash[k] self.basehash[k] = data self.runtaskdeps[k] = [] self.file_checksum_values[k] = [] recipename = dataCache.pkg_fn[fn] - for dep in sorted(deps, key=clean_basepath): - depname = dataCache.pkg_fn[self.pkgnameextract.search(dep).group('fn')] + pkgname = self.pkgnameextract.search(dep).group('fn') + if mc: + depmc = pkgname.split(':')[1] + if mc != depmc: + continue + depname = dataCache.pkg_fn[pkgname] if not self.rundep_check(fn, recipename, task, dep, depname, dataCache): continue if dep not in self.taskhash: @@ -347,7 +327,7 @@ class SignatureGeneratorBasicHash(SignatureGeneratorBasic): def stampcleanmask(self, stampbase, fn, taskname, extrainfo): return self.stampfile(stampbase, fn, taskname, extrainfo, clean=True) - + def invalidate_task(self, task, d, fn): bb.note("Tainting hash to force rebuild of task %s, %s" % (fn, task)) bb.build.write_taint(task, d, fn) @@ -636,7 +616,7 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False): if collapsed: output.extend(recout) else: - # If a dependent hash changed, might as well print the line above and then defer to the changes in + # If a dependent hash changed, might as well print the line above and then defer to the changes in # that hash since in all likelyhood, they're the same changes this task also saw. output = [output[-1]] + recout diff --git a/bitbake/lib/bb/taskdata.py b/bitbake/lib/bb/taskdata.py index 0ea6c0b..94e822c 100644 --- a/bitbake/lib/bb/taskdata.py +++ b/bitbake/lib/bb/taskdata.py @@ -70,6 +70,8 @@ class TaskData: self.skiplist = skiplist + self.mcdepends = [] + def add_tasks(self, fn, dataCache): """ Add tasks for a given fn to the database @@ -88,6 +90,13 @@ class TaskData: self.add_extra_deps(fn, dataCache) + def add_mcdepends(task): + for dep in task_deps['mcdepends'][task].split(): + if len(dep.split(':')) != 5: + bb.msg.fatal("TaskData", "Error for %s:%s[%s], multiconfig dependency %s does not contain exactly four ':' characters.\n Task '%s' should be specified in the form 'multiconfig:fromMC:toMC:packagename:task'" % (fn, task, 'mcdepends', dep, 'mcdepends')) + if dep not in self.mcdepends: + self.mcdepends.append(dep) + # Common code for dep_name/depends = 'depends'/idepends and 'rdepends'/irdepends def handle_deps(task, dep_name, depends, seen): if dep_name in task_deps and task in task_deps[dep_name]: @@ -110,16 +119,20 @@ class TaskData: parentids = [] for dep in task_deps['parents'][task]: if dep not in task_deps['tasks']: - bb.debug(2, "Not adding dependeny of %s on %s since %s does not exist" % (task, dep, dep)) + bb.debug(2, "Not adding dependency of %s on %s since %s does not exist" % (task, dep, dep)) continue parentid = "%s:%s" % (fn, dep) parentids.append(parentid) self.taskentries[tid].tdepends.extend(parentids) + # Touch all intertask dependencies handle_deps(task, 'depends', self.taskentries[tid].idepends, self.seen_build_target) handle_deps(task, 'rdepends', self.taskentries[tid].irdepends, self.seen_run_target) + if 'mcdepends' in task_deps and task in task_deps['mcdepends']: + add_mcdepends(task) + # Work out build dependencies if not fn in self.depids: dependids = set() @@ -537,6 +550,9 @@ class TaskData: provmap[name] = provider[0] return provmap + def get_mcdepends(self): + return self.mcdepends + def dump_data(self): """ Dump some debug information on the internal data structures diff --git a/bitbake/lib/bb/tests/cooker.py b/bitbake/lib/bb/tests/cooker.py new file mode 100644 index 0000000..2b44236 --- /dev/null +++ 
diff --git a/bitbake/lib/bb/tests/cooker.py b/bitbake/lib/bb/tests/cooker.py
new file mode 100644
index 0000000..2b44236
--- /dev/null
+++ b/bitbake/lib/bb/tests/cooker.py
@@ -0,0 +1,83 @@
+# ex:ts=4:sw=4:sts=4:et
+# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
+#
+# BitBake Tests for cooker.py
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+
+import unittest
+import tempfile
+import os
+import bb, bb.cooker
+import re
+import logging
+
+# Cooker tests
+class CookerTest(unittest.TestCase):
+    def setUp(self):
+        # At least one variable needs to be set
+        self.d = bb.data.init()
+        topdir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "testdata/cooker")
+        self.d.setVar('TOPDIR', topdir)
+
+    def test_CookerCollectFiles_sublayers(self):
+        '''Test that a sublayer of an existing layer does not trigger
+           No bb files matched ...'''
+
+        def append_collection(topdir, path, d):
+            collection = path.split('/')[-1]
+            pattern = "^" + topdir + "/" + path + "/"
+            regex = re.compile(pattern)
+            priority = 5
+
+            d.setVar('BBFILE_COLLECTIONS', (d.getVar('BBFILE_COLLECTIONS') or "") + " " + collection)
+            d.setVar('BBFILE_PATTERN_%s' % (collection), pattern)
+            d.setVar('BBFILE_PRIORITY_%s' % (collection), priority)
+
+            return (collection, pattern, regex, priority)
+
+        topdir = self.d.getVar("TOPDIR")
+
+        # Priorities: list of (collection, pattern, regex, priority)
+        bbfile_config_priorities = []
+        # Order is important for this test, shortest to longest is typical failure case
+        bbfile_config_priorities.append( append_collection(topdir, 'first', self.d) )
+        bbfile_config_priorities.append( append_collection(topdir, 'second', self.d) )
+        bbfile_config_priorities.append( append_collection(topdir, 'second/third', self.d) )
+
+        pkgfns = [ topdir + '/first/recipes/sample1_1.0.bb',
+                   topdir + '/second/recipes/sample2_1.0.bb',
+                   topdir + '/second/third/recipes/sample3_1.0.bb' ]
+
+        class LogHandler(logging.Handler):
+            def __init__(self):
+                logging.Handler.__init__(self)
+                self.logdata = []
+
+            def emit(self, record):
+                self.logdata.append(record.getMessage())
+
+        # Move cooker to use my special logging
+        logger = bb.cooker.logger
+        log_handler = LogHandler()
+        logger.addHandler(log_handler)
+        collection = bb.cooker.CookerCollectFiles(bbfile_config_priorities)
+        collection.collection_priorities(pkgfns, self.d)
+        logger.removeHandler(log_handler)
+
+        # Should be empty (no generated messages)
+        expected = []
+
+        self.assertEqual(log_handler.logdata, expected)
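The test above pins down the prefix case: with collections ordered shortest to longest, the BBFILE_PATTERN of 'second' also matches everything under 'second/third', which previously made the shorter collection look like it matched no files. A standalone illustration of the overlap (the topdir path is hypothetical):

    import re

    recipe = "/topdir/second/third/recipes/sample3_1.0.bb"
    assert re.match("^/topdir/second/", recipe)        # parent layer pattern matches too
    assert re.match("^/topdir/second/third/", recipe)  # sublayer pattern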
diff --git a/bitbake/lib/bb/tests/data.py b/bitbake/lib/bb/tests/data.py
index a4a9dd3..db3e201 100644
--- a/bitbake/lib/bb/tests/data.py
+++ b/bitbake/lib/bb/tests/data.py
@@ -281,7 +281,7 @@ class TestConcatOverride(unittest.TestCase):
    def test_remove(self):
        self.d.setVar("TEST", "${VAL} ${BAR}")
        self.d.setVar("TEST_remove", "val")
-        self.assertEqual(self.d.getVar("TEST"), "bar")
+        self.assertEqual(self.d.getVar("TEST"), " bar")

    def test_remove_cleared(self):
        self.d.setVar("TEST", "${VAL} ${BAR}")
@@ -300,7 +300,7 @@ class TestConcatOverride(unittest.TestCase):
        self.d.setVar("TEST", "${VAL} ${BAR}")
        self.d.setVar("TEST_remove", "val")
        self.d.setVar("TEST_TEST", "${TEST} ${TEST}")
-        self.assertEqual(self.d.getVar("TEST_TEST"), "bar bar")
+        self.assertEqual(self.d.getVar("TEST_TEST"), " bar bar")

    def test_empty_remove(self):
        self.d.setVar("TEST", "")
@@ -311,13 +311,25 @@ class TestConcatOverride(unittest.TestCase):
        self.d.setVar("BAR", "Z")
        self.d.setVar("TEST", "${BAR}/X Y")
        self.d.setVar("TEST_remove", "${BAR}/X")
-        self.assertEqual(self.d.getVar("TEST"), "Y")
+        self.assertEqual(self.d.getVar("TEST"), " Y")

    def test_remove_expansion_items(self):
        self.d.setVar("TEST", "A B C D")
        self.d.setVar("BAR", "B D")
        self.d.setVar("TEST_remove", "${BAR}")
-        self.assertEqual(self.d.getVar("TEST"), "A C")
+        self.assertEqual(self.d.getVar("TEST"), "A C ")
+
+    def test_remove_preserve_whitespace(self):
+        # When the removal isn't active, the original value should be preserved
+        self.d.setVar("TEST", " A B")
+        self.d.setVar("TEST_remove", "C")
+        self.assertEqual(self.d.getVar("TEST"), " A B")
+
+    def test_remove_preserve_whitespace2(self):
+        # When the removal is active preserve the whitespace
+        self.d.setVar("TEST", " A B")
+        self.d.setVar("TEST_remove", "B")
+        self.assertEqual(self.d.getVar("TEST"), " A ")

class TestOverrides(unittest.TestCase):
    def setUp(self):
@@ -374,6 +386,15 @@ class TestOverrides(unittest.TestCase):
        self.d.setVar("OVERRIDES", "foo:bar:some_val")
        self.assertEqual(self.d.getVar("TEST"), "testvalue3")

+    def test_remove_with_override(self):
+        self.d.setVar("TEST_bar", "testvalue2")
+        self.d.setVar("TEST_some_val", "testvalue3 testvalue5")
+        self.d.setVar("TEST_some_val_remove", "testvalue3")
+        self.d.setVar("TEST_foo", "testvalue4")
+        self.d.setVar("OVERRIDES", "foo:bar:some_val")
+        self.assertEqual(self.d.getVar("TEST"), " testvalue5")
+
+
class TestKeyExpansion(unittest.TestCase):
    def setUp(self):
        self.d = bb.data.init()
@@ -443,6 +464,54 @@ class Contains(unittest.TestCase):

        self.assertFalse(bb.utils.contains_any("SOMEFLAG", "x y z", True, False, self.d))

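The expectation changes above document the new _remove semantics: removed items collapse to their separators instead of being spliced out, so surrounding whitespace survives. A minimal sketch of the observable behaviour, using literal values rather than the test fixtures:

    d = bb.data.init()
    d.setVar("TEST", "val bar")
    d.setVar("TEST_remove", "val")
    d.getVar("TEST")   # now " bar" rather than the old "bar"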
+class TaskHash(unittest.TestCase):
+    def test_taskhashes(self):
+        def gettask_bashhash(taskname, d):
+            tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d)
+            taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, set(), "somefile")
+            bb.warn(str(lookupcache))
+            return basehash["somefile." + taskname]
+
+        d = bb.data.init()
+        d.setVar("__BBTASKS", ["mytask"])
+        d.setVar("__exportlist", [])
+        d.setVar("mytask", "${MYCOMMAND}")
+        d.setVar("MYCOMMAND", "${VAR}; foo; bar; exit 0")
+        d.setVar("VAR", "val")
+        orighash = gettask_bashhash("mytask", d)
+
+        # Changing a variable should change the hash
+        d.setVar("VAR", "val2")
+        nexthash = gettask_bashhash("mytask", d)
+        self.assertNotEqual(orighash, nexthash)
+
+        d.setVar("VAR", "val")
+        # Adding an inactive removal shouldn't change the hash
+        d.setVar("BAR", "notbar")
+        d.setVar("MYCOMMAND_remove", "${BAR}")
+        nexthash = gettask_bashhash("mytask", d)
+        self.assertEqual(orighash, nexthash)
+
+        # Adding an active removal should change the hash
+        d.setVar("BAR", "bar;")
+        nexthash = gettask_bashhash("mytask", d)
+        self.assertNotEqual(orighash, nexthash)
+
+        # Setup an inactive contains()
+        d.setVar("VAR", "${@bb.utils.contains('VAR2', 'A', 'val', '', d)}")
+        orighash = gettask_bashhash("mytask", d)
+
+        # Activate the contains() and the hash should change
+        d.setVar("VAR2", "A")
+        nexthash = gettask_bashhash("mytask", d)
+        self.assertNotEqual(orighash, nexthash)
+
+        # The contains should be inactive but even though VAR2 has a
+        # different value the hash should match the original
+        d.setVar("VAR2", "B")
+        nexthash = gettask_bashhash("mytask", d)
+        self.assertEqual(orighash, nexthash)
+
class Serialize(unittest.TestCase):

    def test_serialize(self):
diff --git a/bitbake/lib/bb/tests/fetch.py b/bitbake/lib/bb/tests/fetch.py
index 11698f2..17909ec 100644
--- a/bitbake/lib/bb/tests/fetch.py
+++ b/bitbake/lib/bb/tests/fetch.py
@@ -20,6 +20,7 @@
#

import unittest
+import hashlib
import tempfile
import subprocess
import collections
@@ -401,6 +402,12 @@ class MirrorUriTest(FetcherTest):
            : "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
        ("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "git://somewhere.org/somedir/MIRRORNAME;protocol=http")
            : "git://somewhere.org/somedir/git.invalid.infradead.org.foo.mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
+        ("http://somewhere.org/somedir1/somedir2/somefile_1.2.3.tar.gz", "http://.*/.*", "http://somewhere2.org")
+            : "http://somewhere2.org/somefile_1.2.3.tar.gz",
+        ("http://somewhere.org/somedir1/somedir2/somefile_1.2.3.tar.gz", "http://.*/.*", "http://somewhere2.org/")
+            : "http://somewhere2.org/somefile_1.2.3.tar.gz",
+        ("git://someserver.org/bitbake;tag=1234567890123456789012345678901234567890;branch=master", "git://someserver.org/bitbake;branch=master", "git://git.openembedded.org/bitbake;protocol=http")
+            : "git://git.openembedded.org/bitbake;tag=1234567890123456789012345678901234567890;branch=master;protocol=http",

        #Renaming files doesn't work
        #("http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere2.org/somedir3/somefile_2.3.4.tar.gz") : "http://somewhere2.org/somedir3/somefile_2.3.4.tar.gz"
@@ -456,6 +463,124 @@ class MirrorUriTest(FetcherTest):
                         'https://BBBB/B/B/B/bitbake/bitbake-1.0.tar.gz',
                         'http://AAAA/A/A/A/B/B/bitbake/bitbake-1.0.tar.gz'])

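The new MirrorUriTest entries pin down the rewrite behaviour when the replacement URI carries no path of its own: only the basename of the original path is appended. In the (uri, match, replacement) form used by the table above:

    uri = "http://somewhere.org/somedir1/somedir2/somefile_1.2.3.tar.gz"
    match, replacement = "http://.*/.*", "http://somewhere2.org"
    # expected rewrite: "http://somewhere2.org/somefile_1.2.3.tar.gz"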
"github.com.openembedded.bitbake.git" + + self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40') + + def setup_mirror_rewrite(self): + self.d.setVar("PREMIRRORS", self.recipe_url + " " + self.mirror_url + " \n") + + @skipIfNoNetwork() + def test_that_directory_is_named_after_recipe_url_when_no_mirroring_is_used(self): + self.setup_mirror_rewrite() + fetcher = bb.fetch.Fetch([self.recipe_url], self.d) + + fetcher.download() + + dir = os.listdir(self.dldir + "/git2") + self.assertIn(self.recipe_dir, dir) + + @skipIfNoNetwork() + def test_that_directory_exists_for_mirrored_url_and_recipe_url_when_mirroring_is_used(self): + self.setup_mirror_rewrite() + fetcher = bb.fetch.Fetch([self.recipe_url], self.d) + + fetcher.download() + + dir = os.listdir(self.dldir + "/git2") + self.assertIn(self.mirror_dir, dir) + self.assertIn(self.recipe_dir, dir) + + @skipIfNoNetwork() + def test_that_recipe_directory_and_mirrored_directory_exists_when_mirroring_is_used_and_the_mirrored_directory_already_exists(self): + self.setup_mirror_rewrite() + fetcher = bb.fetch.Fetch([self.mirror_url], self.d) + fetcher.download() + fetcher = bb.fetch.Fetch([self.recipe_url], self.d) + + fetcher.download() + + dir = os.listdir(self.dldir + "/git2") + self.assertIn(self.mirror_dir, dir) + self.assertIn(self.recipe_dir, dir) + + +class TarballNamingTest(FetcherTest): + def setUp(self): + super(TarballNamingTest, self).setUp() + self.recipe_url = "git://git.openembedded.org/bitbake" + self.recipe_tarball = "git2_git.openembedded.org.bitbake.tar.gz" + self.mirror_url = "git://github.com/openembedded/bitbake.git" + self.mirror_tarball = "git2_github.com.openembedded.bitbake.git.tar.gz" + + self.d.setVar('BB_GENERATE_MIRROR_TARBALLS', '1') + self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40') + + def setup_mirror_rewrite(self): + self.d.setVar("PREMIRRORS", self.recipe_url + " " + self.mirror_url + " \n") + + @skipIfNoNetwork() + def test_that_the_recipe_tarball_is_created_when_no_mirroring_is_used(self): + fetcher = bb.fetch.Fetch([self.recipe_url], self.d) + + fetcher.download() + + dir = os.listdir(self.dldir) + self.assertIn(self.recipe_tarball, dir) + + @skipIfNoNetwork() + def test_that_the_mirror_tarball_is_created_when_mirroring_is_used(self): + self.setup_mirror_rewrite() + fetcher = bb.fetch.Fetch([self.recipe_url], self.d) + + fetcher.download() + + dir = os.listdir(self.dldir) + self.assertIn(self.mirror_tarball, dir) + + +class GitShallowTarballNamingTest(FetcherTest): + def setUp(self): + super(GitShallowTarballNamingTest, self).setUp() + self.recipe_url = "git://git.openembedded.org/bitbake" + self.recipe_tarball = "gitshallow_git.openembedded.org.bitbake_82ea737-1_master.tar.gz" + self.mirror_url = "git://github.com/openembedded/bitbake.git" + self.mirror_tarball = "gitshallow_github.com.openembedded.bitbake.git_82ea737-1_master.tar.gz" + + self.d.setVar('BB_GIT_SHALLOW', '1') + self.d.setVar('BB_GENERATE_SHALLOW_TARBALLS', '1') + self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40') + + def setup_mirror_rewrite(self): + self.d.setVar("PREMIRRORS", self.recipe_url + " " + self.mirror_url + " \n") + + @skipIfNoNetwork() + def test_that_the_tarball_is_named_after_recipe_url_when_no_mirroring_is_used(self): + fetcher = bb.fetch.Fetch([self.recipe_url], self.d) + + fetcher.download() + + dir = os.listdir(self.dldir) + self.assertIn(self.recipe_tarball, dir) + + @skipIfNoNetwork() + def test_that_the_mirror_tarball_is_created_when_mirroring_is_used(self): + 
+        self.setup_mirror_rewrite()
+        fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
+
+        fetcher.download()
+
+        dir = os.listdir(self.dldir)
+        self.assertIn(self.mirror_tarball, dir)
+
+
class FetcherLocalTest(FetcherTest):
    def setUp(self):
        def touch(fn):
@@ -522,6 +647,109 @@ class FetcherLocalTest(FetcherTest):
        with self.assertRaises(bb.fetch2.UnpackError):
            self.fetchUnpack(['file://a;subdir=/bin/sh'])

+class FetcherNoNetworkTest(FetcherTest):
+    def setUp(self):
+        super().setUp()
+        # all test cases are based on not having network
+        self.d.setVar("BB_NO_NETWORK", "1")
+
+    def test_missing(self):
+        string = "this is a test file\n".encode("utf-8")
+        self.d.setVarFlag("SRC_URI", "md5sum", hashlib.md5(string).hexdigest())
+        self.d.setVarFlag("SRC_URI", "sha256sum", hashlib.sha256(string).hexdigest())
+
+        self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+        self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+        fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/test-file.tar.gz"], self.d)
+        with self.assertRaises(bb.fetch2.NetworkAccess):
+            fetcher.download()
+
+    def test_valid_missing_donestamp(self):
+        # create the file in the download directory with correct hash
+        string = "this is a test file\n".encode("utf-8")
+        with open(os.path.join(self.dldir, "test-file.tar.gz"), "wb") as f:
+            f.write(string)
+
+        self.d.setVarFlag("SRC_URI", "md5sum", hashlib.md5(string).hexdigest())
+        self.d.setVarFlag("SRC_URI", "sha256sum", hashlib.sha256(string).hexdigest())
+
+        self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+        self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+        fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/test-file.tar.gz"], self.d)
+        fetcher.download()
+        self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+
+    def test_invalid_missing_donestamp(self):
+        # create an invalid file in the download directory with incorrect hash
+        string = "this is a test file\n".encode("utf-8")
+        with open(os.path.join(self.dldir, "test-file.tar.gz"), "wb"):
+            pass
+
+        self.d.setVarFlag("SRC_URI", "md5sum", hashlib.md5(string).hexdigest())
+        self.d.setVarFlag("SRC_URI", "sha256sum", hashlib.sha256(string).hexdigest())
+
+        self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+        self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+        fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/test-file.tar.gz"], self.d)
+        with self.assertRaises(bb.fetch2.NetworkAccess):
+            fetcher.download()
+        # the existing file should not exist or should have be moved to "bad-checksum"
+        self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
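The donestamp tests above encode the decision the fetcher makes before touching the network. A hedged sketch of that decision, not the actual bb.fetch2 code (which is spread across bb/fetch2/__init__.py and also moves corrupt downloads aside under a bad-checksum name):

    import os

    def needs_network(localpath, checksums_ok):
        donestamp = localpath + '.done'
        if os.path.exists(localpath) and os.path.exists(donestamp):
            return False   # file plus stamp: nothing to do
        if os.path.exists(localpath) and checksums_ok:
            return False   # checksums verify the file; the stamp is re-created
        return True        # download needed; raises NetworkAccess under BB_NO_NETWORK

Fetchers without checksums (such as ssh:// here) can never take the second branch, which is why a bare local file without a stamp still raises NetworkAccess.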
+
+    def test_nochecksums_missing(self):
+        self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+        self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+        # ssh fetch does not support checksums
+        fetcher = bb.fetch.Fetch(["ssh://invalid@invalid.yoctoproject.org/test-file.tar.gz"], self.d)
+        # attempts to download with missing donestamp
+        with self.assertRaises(bb.fetch2.NetworkAccess):
+            fetcher.download()
+
+    def test_nochecksums_missing_donestamp(self):
+        # create a file in the download directory
+        with open(os.path.join(self.dldir, "test-file.tar.gz"), "wb"):
+            pass
+
+        self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+        self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+        # ssh fetch does not support checksums
+        fetcher = bb.fetch.Fetch(["ssh://invalid@invalid.yoctoproject.org/test-file.tar.gz"], self.d)
+        # attempts to download with missing donestamp
+        with self.assertRaises(bb.fetch2.NetworkAccess):
+            fetcher.download()
+
+    def test_nochecksums_has_donestamp(self):
+        # create a file in the download directory with the donestamp
+        with open(os.path.join(self.dldir, "test-file.tar.gz"), "wb"):
+            pass
+        with open(os.path.join(self.dldir, "test-file.tar.gz.done"), "wb"):
+            pass
+
+        self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+        self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+        # ssh fetch does not support checksums
+        fetcher = bb.fetch.Fetch(["ssh://invalid@invalid.yoctoproject.org/test-file.tar.gz"], self.d)
+        # should not fetch
+        fetcher.download()
+        # both files should still exist
+        self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+        self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+
+    def test_nochecksums_missing_has_donestamp(self):
+        # create a file in the download directory with the donestamp
+        with open(os.path.join(self.dldir, "test-file.tar.gz.done"), "wb"):
+            pass
+
+        self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+        self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+        # ssh fetch does not support checksums
+        fetcher = bb.fetch.Fetch(["ssh://invalid@invalid.yoctoproject.org/test-file.tar.gz"], self.d)
+        with self.assertRaises(bb.fetch2.NetworkAccess):
+            fetcher.download()
+        # both files should still exist
+        self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+        self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+
class FetcherNetworkTest(FetcherTest):
    @skipIfNoNetwork()
    def test_fetch(self):
@@ -641,27 +869,27 @@ class FetcherNetworkTest(FetcherTest):
        self.assertRaises(bb.fetch.ParameterError, self.gitfetcher, url, url)

    @skipIfNoNetwork()
-    def test_gitfetch_premirror(self):
-        url1 = "git://git.openembedded.org/bitbake"
-        url2 = "git://someserver.org/bitbake"
+    def test_gitfetch_finds_local_tarball_for_mirrored_url_when_previous_downloaded_by_the_recipe_url(self):
+        recipeurl = "git://git.openembedded.org/bitbake"
+        mirrorurl = "git://someserver.org/bitbake"
        self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
-        self.gitfetcher(url1, url2)
+        self.gitfetcher(recipeurl, mirrorurl)

    @skipIfNoNetwork()
-    def test_gitfetch_premirror2(self):
-        url1 = url2 = "git://someserver.org/bitbake"
+    def test_gitfetch_finds_local_tarball_when_previous_downloaded_from_a_premirror(self):
+        recipeurl = "git://someserver.org/bitbake"
        self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
-        self.gitfetcher(url1, url2)
+        self.gitfetcher(recipeurl, recipeurl)

    @skipIfNoNetwork()
-    def test_gitfetch_premirror3(self):
+    def test_gitfetch_finds_local_repository_when_premirror_rewrites_the_recipe_url(self):
        realurl = "git://git.openembedded.org/bitbake"
-        dummyurl = "git://someserver.org/bitbake"
+        recipeurl = "git://someserver.org/bitbake"
        self.sourcedir = self.unpackdir.replace("unpacked", "sourcemirror.git")
        os.chdir(self.tempdir)
        bb.process.run("git clone %s %s 2> /dev/null" % (realurl, self.sourcedir), shell=True)
-        self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file \n" % (dummyurl, self.sourcedir))
-        self.gitfetcher(dummyurl, dummyurl)
+        self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file \n" % (recipeurl, self.sourcedir))
+        self.gitfetcher(recipeurl, recipeurl)

    @skipIfNoNetwork()
    def test_git_submodule(self):
@@ -728,7 +956,7 @@ class URLHandle(unittest.TestCase):
    # decodeurl and we need to handle them
    decodedata = datatable.copy()
    decodedata.update({
-        "http://somesite.net;someparam=1": ('http', 'somesite.net', '', '', '', {'someparam': '1'}),
+        "http://somesite.net;someparam=1": ('http', 'somesite.net', '/', '', '', {'someparam': '1'}),
    })

    def test_decodeurl(self):
@@ -757,12 +985,12 @@ class FetchLatestVersionTest(FetcherTest):
        ("dtc", "git://git.qemu.org/dtc.git", "65cc4d2748a2c2e6f27f1cf39e07a5dbabd80ebf", "")
            : "1.4.0",
        # combination version pattern
-        ("sysprof", "git://git.gnome.org/sysprof", "cd44ee6644c3641507fb53b8a2a69137f2971219", "")
+        ("sysprof", "git://gitlab.gnome.org/GNOME/sysprof.git;protocol=https", "cd44ee6644c3641507fb53b8a2a69137f2971219", "")
            : "1.2.0",
        ("u-boot-mkimage", "git://git.denx.de/u-boot.git;branch=master;protocol=git", "62c175fbb8a0f9a926c88294ea9f7e88eb898f6c", "")
            : "2014.01",
        # version pattern "yyyymmdd"
-        ("mobile-broadband-provider-info", "git://git.gnome.org/mobile-broadband-provider-info", "4ed19e11c2975105b71b956440acdb25d46a347d", "")
+        ("mobile-broadband-provider-info", "git://gitlab.gnome.org/GNOME/mobile-broadband-provider-info.git;protocol=https", "4ed19e11c2975105b71b956440acdb25d46a347d", "")
            : "20120614",
        # packages with a valid UPSTREAM_CHECK_GITTAGREGEX
        ("xf86-video-omap", "git://anongit.freedesktop.org/xorg/driver/xf86-video-omap", "ae0394e687f1a77e966cf72f895da91840dffb8f", "(?P<pver>(\d+\.(\d\.?)*))")
@@ -809,7 +1037,7 @@ class FetchLatestVersionTest(FetcherTest):
                ud = bb.fetch2.FetchData(k[1], self.d)
                pupver= ud.method.latest_versionstring(ud, self.d)
                verstring = pupver[0]
-                self.assertTrue(verstring, msg="Could not find upstream version")
+                self.assertTrue(verstring, msg="Could not find upstream version for %s" % k[0])
                r = bb.utils.vercmp_string(v, verstring)
                self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))

@@ -822,7 +1050,7 @@ class FetchLatestVersionTest(FetcherTest):
                ud = bb.fetch2.FetchData(k[1], self.d)
                pupver = ud.method.latest_versionstring(ud, self.d)
                verstring = pupver[0]
-                self.assertTrue(verstring, msg="Could not find upstream version")
+                self.assertTrue(verstring, msg="Could not find upstream version for %s" % k[0])
                r = bb.utils.vercmp_string(v, verstring)
                self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))

@@ -874,9 +1102,6 @@ class FetchCheckStatusTest(FetcherTest):


class GitMakeShallowTest(FetcherTest):
-    bitbake_dir = os.path.join(os.path.dirname(os.path.join(os.path.abspath(__file__))), '..', '..', '..')
-    make_shallow_path = os.path.join(bitbake_dir, 'bin', 'git-make-shallow')
-
    def setUp(self):
        FetcherTest.setUp(self)
        self.gitdir = os.path.join(self.tempdir, 'gitshallow')
@@ -905,7 +1130,7 @@ class GitMakeShallowTest(FetcherTest):
    def make_shallow(self, args=None):
        if args is None:
            args = ['HEAD']
-        return bb.process.run([self.make_shallow_path] + args, cwd=self.gitdir)
+        return bb.process.run([bb.fetch2.git.Git.make_shallow_path] + args, cwd=self.gitdir)

    def add_empty_file(self, path, msg=None):
        if msg is None:
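For the renamed premirror tests, a PREMIRRORS value is a space-separated pair per line: the URI prefix to match, then the mirror to try first. Reproducing the local-clone setup outside the test harness (the clone path is hypothetical):

    local_clone = "/build/sourcemirror.git"
    d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://%s;protocol=file \n" % local_clone)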
@@ -1237,6 +1462,9 @@ class GitShallowTest(FetcherTest):
        smdir = os.path.join(self.tempdir, 'gitsubmodule')
        bb.utils.mkdirhier(smdir)
        self.git('init', cwd=smdir)
+        # Make this look like it was cloned from a remote...
+        self.git('config --add remote.origin.url "%s"' % smdir, cwd=smdir)
+        self.git('config --add remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"', cwd=smdir)
        self.add_empty_file('asub', cwd=smdir)

        self.git('submodule init', cwd=self.srcdir)
@@ -1470,3 +1698,30 @@ class GitShallowTest(FetcherTest):
        self.assertNotEqual(orig_revs, revs)
        self.assertRefs(['master', 'origin/master'])
        self.assertRevCount(orig_revs - 1758)
+
+    def test_that_unpack_throws_an_error_when_the_git_clone_nor_shallow_tarball_exist(self):
+        self.add_empty_file('a')
+        fetcher, ud = self.fetch()
+        bb.utils.remove(self.gitdir, recurse=True)
+        bb.utils.remove(self.dldir, recurse=True)
+
+        with self.assertRaises(bb.fetch2.UnpackError) as context:
+            fetcher.unpack(self.d.getVar('WORKDIR'))
+
+        self.assertTrue("No up to date source found" in context.exception.msg)
+        self.assertTrue("clone directory not available or not up to date" in context.exception.msg)
+        self.assertTrue("shallow clone not enabled or not available" in context.exception.msg)
+
+    @skipIfNoNetwork()
+    def test_that_unpack_does_work_when_using_git_shallow_tarball_but_tarball_is_not_available(self):
+        self.d.setVar('SRCREV', 'e5939ff608b95cdd4d0ab0e1935781ab9a276ac0')
+        self.d.setVar('BB_GIT_SHALLOW', '1')
+        self.d.setVar('BB_GENERATE_SHALLOW_TARBALLS', '1')
+        fetcher = bb.fetch.Fetch(["git://git.yoctoproject.org/fstests"], self.d)
+        fetcher.download()
+
+        bb.utils.remove(self.dldir + "/*.tar.gz")
+        fetcher.unpack(self.unpackdir)
+
+        dir = os.listdir(self.unpackdir + "/git/")
+        self.assertIn("fstests.doap", dir)
diff --git a/bitbake/lib/bb/tests/parse.py b/bitbake/lib/bb/tests/parse.py
index 8f16ba4..1bc4740 100644
--- a/bitbake/lib/bb/tests/parse.py
+++ b/bitbake/lib/bb/tests/parse.py
@@ -44,9 +44,13 @@ C = "3"
"""

    def setUp(self):
+        self.origdir = os.getcwd()
        self.d = bb.data.init()
        bb.parse.siggen = bb.siggen.init(self.d)

+    def tearDown(self):
+        os.chdir(self.origdir)
+
    def parsehelper(self, content, suffix = ".bb"):

        f = tempfile.NamedTemporaryFile(suffix = suffix)
diff --git a/bitbake/lib/bb/ui/buildinfohelper.py b/bitbake/lib/bb/ui/buildinfohelper.py
index 524a5b0..31323d2 100644
--- a/bitbake/lib/bb/ui/buildinfohelper.py
+++ b/bitbake/lib/bb/ui/buildinfohelper.py
@@ -1603,14 +1603,14 @@ class BuildInfoHelper(object):
            mockevent.lineno = -1
            self.store_log_event(mockevent)

-    def store_log_event(self, event):
+    def store_log_event(self, event,cli_backlog=True):
        self._ensure_build()

        if event.levelno < formatter.WARNING:
            return

        # early return for CLI builds
-        if self.brbe is None:
+        if cli_backlog and self.brbe is None:
            if not 'backlog' in self.internal_state:
                self.internal_state['backlog'] = []
            self.internal_state['backlog'].append(event)
@@ -1622,7 +1622,7 @@ class BuildInfoHelper(object):
                    tempevent = self.internal_state['backlog'].pop()
                    logger.debug(1, "buildinfohelper: Saving stored event %s " % tempevent)
-                    self.store_log_event(tempevent)
+                    self.store_log_event(tempevent,cli_backlog)
                else:
                    logger.info("buildinfohelper: All events saved")
                    del self.internal_state['backlog']
@@ -1987,7 +1987,8 @@ class BuildInfoHelper(object):
        if 'backlog' in self.internal_state:
            # we save missed events in the database for the current build
            tempevent = self.internal_state['backlog'].pop()
-            self.store_log_event(tempevent)
+            # Do not skip command line build events
+            self.store_log_event(tempevent,False)

        if not connection.features.autocommits_when_autocommit_is_off:
            transaction.set_autocommit(True)
diff --git a/bitbake/lib/bb/ui/taskexp.py b/bitbake/lib/bb/ui/taskexp.py
index 0e8e9d4..8305d70 100644
--- a/bitbake/lib/bb/ui/taskexp.py
+++ b/bitbake/lib/bb/ui/taskexp.py
@@ -103,9 +103,16 @@ class DepExplorer(Gtk.Window):
        self.pkg_treeview.get_selection().connect("changed", self.on_cursor_changed)
        column = Gtk.TreeViewColumn("Package", Gtk.CellRendererText(), text=COL_PKG_NAME)
        self.pkg_treeview.append_column(column)
-        pane.add1(scrolled)
        scrolled.add(self.pkg_treeview)

+        self.search_entry = Gtk.SearchEntry.new()
+        self.pkg_treeview.set_search_entry(self.search_entry)
+
+        left_panel = Gtk.VPaned()
+        left_panel.add(self.search_entry)
+        left_panel.add(scrolled)
+        pane.add1(left_panel)
+
        box = Gtk.VBox(homogeneous=True, spacing=4)

        # Task Depends
@@ -129,6 +136,7 @@ class DepExplorer(Gtk.Window):
        pane.add2(box)

        self.show_all()
+        self.search_entry.grab_focus()

    def on_package_activated(self, treeview, path, column, data_col):
        model = treeview.get_model()
diff --git a/bitbake/lib/bb/utils.py b/bitbake/lib/bb/utils.py
index c540b49..73b6cb4 100644
--- a/bitbake/lib/bb/utils.py
+++ b/bitbake/lib/bb/utils.py
@@ -187,7 +187,7 @@ def explode_deps(s):
            #r[-1] += ' ' + ' '.join(j)
    return r

-def explode_dep_versions2(s):
+def explode_dep_versions2(s, *, sort=True):
    """
    Take an RDEPENDS style string of format:
    "DEPEND1 (optional version) DEPEND2 (optional version) ..."
@@ -250,7 +250,8 @@ def explode_dep_versions2(s):
        if not (i in r and r[i]):
            r[lastdep] = []

-    r = collections.OrderedDict(sorted(r.items(), key=lambda x: x[0]))
+    if sort:
+        r = collections.OrderedDict(sorted(r.items(), key=lambda x: x[0]))
    return r

def explode_dep_versions(s):
@@ -496,7 +497,11 @@ def lockfile(name, shared=False, retry=True, block=False):
                if statinfo.st_ino == statinfo2.st_ino:
                    return lf
            lf.close()
-        except Exception:
+        except OSError as e:
+            if e.errno == errno.EACCES:
+                logger.error("Unable to acquire lock '%s', %s",
+                             name, e.strerror)
+                sys.exit(1)
            try:
                lf.close()
            except Exception:
@@ -523,12 +528,17 @@ def md5_file(filename):
    """
    Return the hex string representation of the MD5 checksum of filename.
    """
-    import hashlib
-    m = hashlib.md5()
+    import hashlib, mmap

    with open(filename, "rb") as f:
-        for line in f:
-            m.update(line)
+        m = hashlib.md5()
+        try:
+            with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
+                for chunk in iter(lambda: mm.read(8192), b''):
+                    m.update(chunk)
+        except ValueError:
+            # You can't mmap() an empty file so silence this exception
+            pass
    return m.hexdigest()

def sha256_file(filename):
@@ -806,8 +816,8 @@ def movefile(src, dest, newmtime = None, sstat = None):
            return None # failure
    try:
        if didcopy:
-            os.lchown(dest, sstat[stat.ST_UID], sstat[stat.ST_GID])
-            os.chmod(dest, stat.S_IMODE(sstat[stat.ST_MODE])) # Sticky is reset on chown
+            os.lchown(destpath, sstat[stat.ST_UID], sstat[stat.ST_GID])
+            os.chmod(destpath, stat.S_IMODE(sstat[stat.ST_MODE])) # Sticky is reset on chown
            os.unlink(src)
    except Exception as e:
        print("movefile: Failed to chown/chmod/unlink", dest, e)
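The new keyword-only sort parameter above keeps the historically sorted result as the default while letting callers preserve input order. Quick usage example:

    import bb.utils

    list(bb.utils.explode_dep_versions2("b a (>= 1.0)"))              # ['a', 'b']
    list(bb.utils.explode_dep_versions2("b a (>= 1.0)", sort=False))  # ['b', 'a']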
@@ -900,6 +910,23 @@ def copyfile(src, dest, newmtime = None, sstat = None):
        newmtime = sstat[stat.ST_MTIME]
    return newmtime

+def break_hardlinks(src, sstat = None):
+    """
+    Ensures src is the only hardlink to this file.  Other hardlinks,
+    if any, are not affected (other than in their st_nlink value, of
+    course).  Returns true on success and false on failure.
+
+    """
+    try:
+        if not sstat:
+            sstat = os.lstat(src)
+    except Exception as e:
+        logger.warning("break_hardlinks: stat of %s failed (%s)" % (src, e))
+        return False
+    if sstat[stat.ST_NLINK] == 1:
+        return True
+    return copyfile(src, src, sstat=sstat)
+
def which(path, item, direction = 0, history = False, executable=False):
    """
    Locate `item` in the list of paths `path` (colon separated string like $PATH).
@@ -1284,7 +1311,7 @@ def edit_metadata_file(meta_file, variables, varfunc):
    return updated


-def edit_bblayers_conf(bblayers_conf, add, remove):
+def edit_bblayers_conf(bblayers_conf, add, remove, edit_cb=None):
    """Edit bblayers.conf, adding and/or removing layers
    Parameters:
        bblayers_conf: path to bblayers.conf file to edit
@@ -1292,6 +1319,8 @@ def edit_bblayers_conf(bblayers_conf, add, remove):
            list to add nothing
        remove: layer path (or list of layer paths) to remove; None or
            empty list to remove nothing
+        edit_cb: optional callback function that will be called after
+            processing adds/removes once per existing entry.
    Returns a tuple:
        notadded: list of layers specified to be added but weren't
            (because they were already in the list)
@@ -1355,6 +1384,17 @@ def edit_bblayers_conf(bblayers_conf, add, remove):
                bblayers.append(addlayer)
            del addlayers[:]

+        if edit_cb:
+            newlist = []
+            for layer in bblayers:
+                res = edit_cb(layer, canonicalise_path(layer))
+                if res != layer:
+                    newlist.append(res)
+                    updated = True
+                else:
+                    newlist.append(layer)
+            bblayers = newlist
+
        if updated:
            if op == '+=' and not bblayers:
                bblayers = None
diff --git a/bitbake/lib/bblayers/action.py b/bitbake/lib/bblayers/action.py
index aa575d1..a3f658f 100644
--- a/bitbake/lib/bblayers/action.py
+++ b/bitbake/lib/bblayers/action.py
@@ -45,7 +45,7 @@ class ActionPlugin(LayerPlugin):
        notadded, _ = bb.utils.edit_bblayers_conf(bblayers_conf, layerdirs, None)
        if not (args.force or notadded):
            try:
-                self.tinfoil.parseRecipes()
+                self.tinfoil.run_command('parseConfiguration')
            except bb.tinfoil.TinfoilUIException:
                # Restore the back up copy of bblayers.conf
                shutil.copy2(backup, bblayers_conf)
diff --git a/bitbake/lib/bblayers/layerindex.py b/bitbake/lib/bblayers/layerindex.py
index 9af385d..9f02a9d 100644
--- a/bitbake/lib/bblayers/layerindex.py
+++ b/bitbake/lib/bblayers/layerindex.py
@@ -1,10 +1,9 @@
+import layerindexlib
+
import argparse
-import http.client
-import json
import logging
import os
import subprocess
-import urllib.parse

from bblayers.action import ActionPlugin

@@ -21,110 +20,6 @@ class LayerIndexPlugin(ActionPlugin):
    This class inherits ActionPlugin to get do_add_layer.
    """

-    def get_json_data(self, apiurl):
-        proxy_settings = os.environ.get("http_proxy", None)
-        conn = None
-        _parsedurl = urllib.parse.urlparse(apiurl)
-        path = _parsedurl.path
-        query = _parsedurl.query
-
-        def parse_url(url):
-            parsedurl = urllib.parse.urlparse(url)
-            if parsedurl.netloc[0] == '[':
-                host, port = parsedurl.netloc[1:].split(']', 1)
-                if ':' in port:
-                    port = port.rsplit(':', 1)[1]
-                else:
-                    port = None
-            else:
-                if parsedurl.netloc.count(':') == 1:
-                    (host, port) = parsedurl.netloc.split(":")
-                else:
-                    host = parsedurl.netloc
-                    port = None
-            return (host, 80 if port is None else int(port))
-
-        if proxy_settings is None:
-            host, port = parse_url(apiurl)
-            conn = http.client.HTTPConnection(host, port)
-            conn.request("GET", path + "?" + query)
-        else:
-            host, port = parse_url(proxy_settings)
-            conn = http.client.HTTPConnection(host, port)
-            conn.request("GET", apiurl)
-
-        r = conn.getresponse()
-        if r.status != 200:
-            raise Exception("Failed to read " + path + ": %d %s" % (r.status, r.reason))
-        return json.loads(r.read().decode())
-
-    def get_layer_deps(self, layername, layeritems, layerbranches, layerdependencies, branchnum, selfname=False):
-        def layeritems_info_id(items_name, layeritems):
-            litems_id = None
-            for li in layeritems:
-                if li['name'] == items_name:
-                    litems_id = li['id']
-                    break
-            return litems_id
-
-        def layerbranches_info(items_id, layerbranches):
-            lbranch = {}
-            for lb in layerbranches:
-                if lb['layer'] == items_id and lb['branch'] == branchnum:
-                    lbranch['id'] = lb['id']
-                    lbranch['vcs_subdir'] = lb['vcs_subdir']
-                    break
-            return lbranch
-
-        def layerdependencies_info(lb_id, layerdependencies):
-            ld_deps = []
-            for ld in layerdependencies:
-                if ld['layerbranch'] == lb_id and not ld['dependency'] in ld_deps:
-                    ld_deps.append(ld['dependency'])
-            if not ld_deps:
-                logger.error("The dependency of layerDependencies is not found.")
-            return ld_deps
-
-        def layeritems_info_name_subdir(items_id, layeritems):
-            litems = {}
-            for li in layeritems:
-                if li['id'] == items_id:
-                    litems['vcs_url'] = li['vcs_url']
-                    litems['name'] = li['name']
-                    break
-            return litems
-
-        if selfname:
-            selfid = layeritems_info_id(layername, layeritems)
-            lbinfo = layerbranches_info(selfid, layerbranches)
-            if lbinfo:
-                selfsubdir = lbinfo['vcs_subdir']
-            else:
-                logger.error("%s is not found in the specified branch" % layername)
-                return
-            selfurl = layeritems_info_name_subdir(selfid, layeritems)['vcs_url']
-            if selfurl:
-                return selfurl, selfsubdir
-            else:
-                logger.error("Cannot get layer %s git repo and subdir" % layername)
-                return
-        ldict = {}
-        itemsid = layeritems_info_id(layername, layeritems)
-        if not itemsid:
-            return layername, None
-        lbid = layerbranches_info(itemsid, layerbranches)
-        if lbid:
-            lbid = layerbranches_info(itemsid, layerbranches)['id']
-        else:
-            logger.error("%s is not found in the specified branch" % layername)
-            return None, None
-        for dependency in layerdependencies_info(lbid, layerdependencies):
-            lname = layeritems_info_name_subdir(dependency, layeritems)['name']
-            lurl = layeritems_info_name_subdir(dependency, layeritems)['vcs_url']
-            lsubdir = layerbranches_info(dependency, layerbranches)['vcs_subdir']
-            ldict[lname] = lurl, lsubdir
-        return None, ldict
-
    def get_fetch_layer(self, fetchdir, url, subdir, fetch_layer):
        layername = self.get_layer_name(url)
        if os.path.splitext(layername)[1] == '.git':
@@ -136,95 +31,124 @@ class LayerIndexPlugin(ActionPlugin):
            result = subprocess.call('git clone %s %s' % (url, repodir), shell = True)
            if result:
                logger.error("Failed to download %s" % url)
-                return None, None
+                return None, None, None
            else:
-                return layername, layerdir
+                return subdir, layername, layerdir
        else:
            logger.plain("Repository %s needs to be fetched" % url)
-            return layername, layerdir
+            return subdir, layername, layerdir
        elif os.path.exists(layerdir):
-            return layername, layerdir
+            return subdir, layername, layerdir
        else:
            logger.error("%s is not in %s" % (url, subdir))
-            return None, None
+            return None, None, None

    def do_layerindex_fetch(self, args):
        """Fetches a layer from a layer index along with its dependent layers, and adds them to conf/bblayers.conf.
""" - apiurl = self.tinfoil.config_data.getVar('BBLAYERS_LAYERINDEX_URL') - if not apiurl: - logger.error("Cannot get BBLAYERS_LAYERINDEX_URL") - return 1 + + def _construct_url(baseurls, branches): + urls = [] + for baseurl in baseurls: + if baseurl[-1] != '/': + baseurl += '/' + + if not baseurl.startswith('cooker'): + baseurl += "api/" + + if branches: + baseurl += ";branch=%s" % ','.join(branches) + + urls.append(baseurl) + + return urls + + + # Set the default... + if args.branch: + branches = [args.branch] else: - if apiurl[-1] != '/': - apiurl += '/' - apiurl += "api/" - apilinks = self.get_json_data(apiurl) - branches = self.get_json_data(apilinks['branches']) - - branchnum = 0 - for branch in branches: - if branch['name'] == args.branch: - branchnum = branch['id'] - break - if branchnum == 0: - validbranches = ', '.join([branch['name'] for branch in branches]) - logger.error('Invalid layer branch name "%s". Valid branches: %s' % (args.branch, validbranches)) - return 1 + branches = (self.tinfoil.config_data.getVar('LAYERSERIES_CORENAMES') or 'master').split() + logger.debug(1, 'Trying branches: %s' % branches) ignore_layers = [] - for collection in self.tinfoil.config_data.getVar('BBFILE_COLLECTIONS').split(): - lname = self.tinfoil.config_data.getVar('BBLAYERS_LAYERINDEX_NAME_%s' % collection) - if lname: - ignore_layers.append(lname) - if args.ignore: ignore_layers.extend(args.ignore.split(',')) - layeritems = self.get_json_data(apilinks['layerItems']) - layerbranches = self.get_json_data(apilinks['layerBranches']) - layerdependencies = self.get_json_data(apilinks['layerDependencies']) - invaluenames = [] - repourls = {} - printlayers = [] - - def query_dependencies(layers, layeritems, layerbranches, layerdependencies, branchnum): - depslayer = [] - for layername in layers: - invaluename, layerdict = self.get_layer_deps(layername, layeritems, layerbranches, layerdependencies, branchnum) - if layerdict: - repourls[layername] = self.get_layer_deps(layername, layeritems, layerbranches, layerdependencies, branchnum, selfname=True) - for layer in layerdict: - if not layer in ignore_layers: - depslayer.append(layer) - printlayers.append((layername, layer, layerdict[layer][0], layerdict[layer][1])) - if not layer in ignore_layers and not layer in repourls: - repourls[layer] = (layerdict[layer][0], layerdict[layer][1]) - if invaluename and not invaluename in invaluenames: - invaluenames.append(invaluename) - return depslayer - - depslayers = query_dependencies(args.layername, layeritems, layerbranches, layerdependencies, branchnum) - while depslayers: - depslayer = query_dependencies(depslayers, layeritems, layerbranches, layerdependencies, branchnum) - depslayers = depslayer - if invaluenames: - for invaluename in invaluenames: - logger.error('Layer "%s" not found in layer index' % invaluename) - return 1 - logger.plain("%s %s %s %s" % ("Layer".ljust(19), "Required by".ljust(19), "Git repository".ljust(54), "Subdirectory")) - logger.plain('=' * 115) - for layername in args.layername: - layerurl = repourls[layername] - logger.plain("%s %s %s %s" % (layername.ljust(20), '-'.ljust(20), layerurl[0].ljust(55), layerurl[1])) - printedlayers = [] - for layer, dependency, gitrepo, subdirectory in printlayers: - if dependency in printedlayers: - continue - logger.plain("%s %s %s %s" % (dependency.ljust(20), layer.ljust(20), gitrepo.ljust(55), subdirectory)) - printedlayers.append(dependency) - - if repourls: + # Load the cooker DB + cookerIndex = 
+        # Load the cooker DB
+        cookerIndex = layerindexlib.LayerIndex(self.tinfoil.config_data)
+        cookerIndex.load_layerindex('cooker://', load='layerDependencies')
+
+        # Fast path, check if we already have what has been requested!
+        (dependencies, invalidnames) = cookerIndex.find_dependencies(names=args.layername, ignores=ignore_layers)
+        if not args.show_only and not invalidnames:
+            logger.plain("You already have the requested layer(s): %s" % args.layername)
+            return 0
+
+        # The information to show is already in the cookerIndex
+        if invalidnames:
+            # General URL to use to access the layer index
+            # While there is ONE right now, we expect users could enter several
+            apiurl = self.tinfoil.config_data.getVar('BBLAYERS_LAYERINDEX_URL').split()
+            if not apiurl:
+                logger.error("Cannot get BBLAYERS_LAYERINDEX_URL")
+                return 1
+
+            remoteIndex = layerindexlib.LayerIndex(self.tinfoil.config_data)
+
+            for remoteurl in _construct_url(apiurl, branches):
+                logger.plain("Loading %s..." % remoteurl)
+                remoteIndex.load_layerindex(remoteurl)
+
+            if remoteIndex.is_empty():
+                logger.error("Remote layer index %s is empty for branches %s" % (apiurl, branches))
+                return 1
+
+            lIndex = cookerIndex + remoteIndex
+
+            (dependencies, invalidnames) = lIndex.find_dependencies(names=args.layername, ignores=ignore_layers)
+
+        if invalidnames:
+            for invaluename in invalidnames:
+                logger.error('Layer "%s" not found in layer index' % invaluename)
+            return 1
+
+        logger.plain("%s %s %s" % ("Layer".ljust(49), "Git repository (branch)".ljust(54), "Subdirectory"))
+        logger.plain('=' * 125)
+
+        for deplayerbranch in dependencies:
+            layerBranch = dependencies[deplayerbranch][0]
+
+            # TODO: Determine display behavior
+            # This is the local content, uncomment to hide local
+            # layers from the display.
+            #if layerBranch.index.config['TYPE'] == 'cooker':
+            #    continue
+
+            layerDeps = dependencies[deplayerbranch][1:]
+
+            requiredby = []
+            recommendedby = []
+            for dep in layerDeps:
+                if dep.required:
+                    requiredby.append(dep.layer.name)
+                else:
+                    recommendedby.append(dep.layer.name)
+
+            logger.plain('%s %s %s' % (("%s:%s:%s" %
+                                  (layerBranch.index.config['DESCRIPTION'],
+                                  layerBranch.branch.name,
+                                  layerBranch.layer.name)).ljust(50),
+                                  ("%s (%s)" % (layerBranch.layer.vcs_url,
+                                  layerBranch.actual_branch)).ljust(55),
+                                  layerBranch.vcs_subdir
+                                               ))
+            if requiredby:
+                logger.plain('  required by: %s' % ' '.join(requiredby))
+            if recommendedby:
+                logger.plain('  recommended by: %s' % ' '.join(recommendedby))
+
+        if dependencies:
            fetchdir = self.tinfoil.config_data.getVar('BBLAYERS_FETCH_DIR')
            if not fetchdir:
                logger.error("Cannot get BBLAYERS_FETCH_DIR")
@@ -232,26 +156,39 @@ class LayerIndexPlugin(ActionPlugin):
            if not os.path.exists(fetchdir):
                os.makedirs(fetchdir)
            addlayers = []
-            for repourl, subdir in repourls.values():
-                name, layerdir = self.get_fetch_layer(fetchdir, repourl, subdir, not args.show_only)
+
+            for deplayerbranch in dependencies:
+                layerBranch = dependencies[deplayerbranch][0]
+
+                if layerBranch.index.config['TYPE'] == 'cooker':
+                    # Anything loaded via cooker is already local, skip it
+                    continue
+
+                subdir, name, layerdir = self.get_fetch_layer(fetchdir,
+                                                      layerBranch.layer.vcs_url,
+                                                      layerBranch.vcs_subdir,
+                                                      not args.show_only)
                if not name:
                    # Error already shown
                    return 1
                addlayers.append((subdir, name, layerdir))
        if not args.show_only:
-            for subdir, name, layerdir in set(addlayers):
+            localargs = argparse.Namespace()
+            localargs.layerdir = []
+            localargs.force = args.force
+            for subdir, name, layerdir in addlayers:
                if os.path.exists(layerdir):
                    if subdir:
-                        logger.plain("Adding layer \"%s\" to conf/bblayers.conf" % subdir)
+                        logger.plain("Adding layer \"%s\" (%s) to conf/bblayers.conf" % (subdir, layerdir))
                    else:
-                        logger.plain("Adding layer \"%s\" to conf/bblayers.conf" % name)
-                    localargs = argparse.Namespace()
-                    localargs.layerdir = layerdir
-                    localargs.force = args.force
-                    self.do_add_layer(localargs)
+                        logger.plain("Adding layer \"%s\" (%s) to conf/bblayers.conf" % (name, layerdir))
+                    localargs.layerdir.append(layerdir)
                else:
                    break

+            if localargs.layerdir:
+                self.do_add_layer(localargs)
+
    def do_layerindex_show_depends(self, args):
        """Find layer dependencies from layer index.
        """
@@ -260,12 +197,12 @@ class LayerIndexPlugin(ActionPlugin):
        self.do_layerindex_fetch(args)

    def register_commands(self, sp):
-        parser_layerindex_fetch = self.add_command(sp, 'layerindex-fetch', self.do_layerindex_fetch)
+        parser_layerindex_fetch = self.add_command(sp, 'layerindex-fetch', self.do_layerindex_fetch, parserecipes=False)
        parser_layerindex_fetch.add_argument('-n', '--show-only', help='show dependencies and do nothing else', action='store_true')
-        parser_layerindex_fetch.add_argument('-b', '--branch', help='branch name to fetch (default %(default)s)', default='master')
+        parser_layerindex_fetch.add_argument('-b', '--branch', help='branch name to fetch')
        parser_layerindex_fetch.add_argument('-i', '--ignore', help='assume the specified layers do not need to be fetched/added (separate multiple layers with commas, no spaces)', metavar='LAYER')
        parser_layerindex_fetch.add_argument('layername', nargs='+', help='layer to fetch')

-        parser_layerindex_show_depends = self.add_command(sp, 'layerindex-show-depends', self.do_layerindex_show_depends)
-        parser_layerindex_show_depends.add_argument('-b', '--branch', help='branch name to fetch (default %(default)s)', default='master')
+        parser_layerindex_show_depends = self.add_command(sp, 'layerindex-show-depends', self.do_layerindex_show_depends, parserecipes=False)
+        parser_layerindex_show_depends.add_argument('-b', '--branch', help='branch name to fetch')
        parser_layerindex_show_depends.add_argument('layername', nargs='+', help='layer to query')
diff --git a/bitbake/lib/layerindexlib/README b/bitbake/lib/layerindexlib/README
new file mode 100644
index 0000000..5d927af
--- /dev/null
+++ b/bitbake/lib/layerindexlib/README
@@ -0,0 +1,28 @@
+The layerindexlib module is designed to permit programs to work directly
+with layer index information.  (See layers.openembedded.org...)
+
+The layerindexlib module includes a plugin interface that is used to extend
+the basic functionality.  There are two primary plugins available: restapi
+and cooker.
+
+The restapi plugin works with a web based REST Api compatible with the
+layerindex-web project, as well as the ability to store and retrieve the
+information for one or more files on the disk.
+
+The cooker plugin works by reading the information from the current build
+project and processing it as if it were a layer index.
+
+
+TODO:
+
+__init__.py:
+Implement local on-disk caching (using the rest api store/load)
+Implement layer index style query operations on a combined index
+
+common.py:
+Stop network access if BB_NO_NETWORK or allowed hosts is restricted
+
+cooker.py:
+Cooker - Implement recipe parsing
+
+
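Putting the README's description together with the plugin URIs used elsewhere in this patch, a minimal usage sketch (the layer name is hypothetical; d is a normal BitBake datastore):

    import layerindexlib

    index = layerindexlib.LayerIndex(d)
    index.load_layerindex('cooker://', load='layerDependencies')   # current build project
    index.load_layerindex('http://layers.openembedded.org/layerindex/api/;branch=master')
    dependencies, invalid = index.find_dependencies(names=['meta-sketch'])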
diff --git a/bitbake/lib/layerindexlib/__init__.py b/bitbake/lib/layerindexlib/__init__.py
new file mode 100644
index 0000000..cb79cb3
--- /dev/null
+++ b/bitbake/lib/layerindexlib/__init__.py
@@ -0,0 +1,1363 @@
+# Copyright (C) 2016-2018 Wind River Systems, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+# See the GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+import datetime
+
+import logging
+import imp
+import os
+
+from collections import OrderedDict
+from layerindexlib.plugin import LayerIndexPluginUrlError
+
+logger = logging.getLogger('BitBake.layerindexlib')
+
+# Exceptions
+
+class LayerIndexException(Exception):
+    '''LayerIndex Generic Exception'''
+    def __init__(self, message):
+        self.msg = message
+        Exception.__init__(self, message)
+
+    def __str__(self):
+        return self.msg
+
+class LayerIndexUrlError(LayerIndexException):
+    '''Exception raised when unable to access a URL for some reason'''
+    def __init__(self, url, message=""):
+        if message:
+            msg = "Unable to access layerindex url %s: %s" % (url, message)
+        else:
+            msg = "Unable to access layerindex url %s" % url
+        self.url = url
+        LayerIndexException.__init__(self, msg)
+
+class LayerIndexFetchError(LayerIndexException):
+    '''General layerindex fetcher exception when something fails'''
+    def __init__(self, url, message=""):
+        if message:
+            msg = "Unable to fetch layerindex url %s: %s" % (url, message)
+        else:
+            msg = "Unable to fetch layerindex url %s" % url
+        self.url = url
+        LayerIndexException.__init__(self, msg)
+
+
+# Interface to the overall layerindex system
+# the layer may contain one or more individual indexes
+class LayerIndex():
+    def __init__(self, d):
+        if not d:
+            raise LayerIndexException("Must be initialized with bb.data.")
+
+        self.data = d
+
+        # List of LayerIndexObj
+        self.indexes = []
+
+        self.plugins = []
+
+        import bb.utils
+        bb.utils.load_plugins(logger, self.plugins, os.path.dirname(__file__))
+        for plugin in self.plugins:
+            if hasattr(plugin, 'init'):
+                plugin.init(self)
+
+    def __add__(self, other):
+        newIndex = LayerIndex(self.data)
+
+        if self.__class__ != newIndex.__class__ or \
+           other.__class__ != newIndex.__class__:
+            raise TypeError("Cannot add different types.")
+
+        for indexEnt in self.indexes:
+            newIndex.indexes.append(indexEnt)
+
+        for indexEnt in other.indexes:
+            newIndex.indexes.append(indexEnt)
+
+        return newIndex
+
+    def _parse_params(self, params):
+        '''Take a parameter list, return a dictionary of parameters.
+
+           Expected to be called from the data of urllib.parse.urlparse(url).params
+
+           If there are two conflicting parameters, last in wins...
+        '''
+
+        param_dict = {}
+        for param in params.split(';'):
+            if not param:
+                continue
+            item = param.split('=', 1)
+            logger.debug(1, item)
+            param_dict[item[0]] = item[1]
+
+        return param_dict
+
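The __add__ method above is what enables the cookerIndex + remoteIndex expression in bblayers/layerindex.py: the combined object simply concatenates the indexes lists, and later queries walk them in that order, so indexes added first take priority. Sketch:

    local_index = layerindexlib.LayerIndex(d)    # e.g. loaded from 'cooker://'
    remote_index = layerindexlib.LayerIndex(d)   # e.g. loaded from a REST API URL
    combined = local_index + remote_index        # local entries win on conflicts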
+    def _fetch_url(self, url, username=None, password=None, debuglevel=0):
+        '''Fetch data from a specific URL.
+
+           Fetch something from a specific URL.  This is specifically designed to
+           fetch data from a layerindex-web instance, but may be useful for other
+           raw fetch actions.
+
+           It is not designed to be used to fetch recipe sources or similar.  The
+           regular fetcher class should be used for that.
+
+           It is the responsibility of the caller to check BB_NO_NETWORK and related
+           BB_ALLOWED_NETWORKS.
+        '''
+
+        if not url:
+            raise LayerIndexUrlError(url, "empty url")
+
+        import urllib
+        from urllib.request import urlopen, Request
+        from urllib.parse import urlparse
+
+        up = urlparse(url)
+
+        if username:
+            logger.debug(1, "Configuring authentication for %s..." % url)
+            password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
+            password_mgr.add_password(None, "%s://%s" % (up.scheme, up.netloc), username, password)
+            handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
+            opener = urllib.request.build_opener(handler, urllib.request.HTTPSHandler(debuglevel=debuglevel))
+        else:
+            opener = urllib.request.build_opener(urllib.request.HTTPSHandler(debuglevel=debuglevel))
+
+        urllib.request.install_opener(opener)
+
+        logger.debug(1, "Fetching %s (%s)..." % (url, ["without authentication", "with authentication"][bool(username)]))
+
+        try:
+            res = urlopen(Request(url, headers={'User-Agent': 'Mozilla/5.0 (bitbake/lib/layerindex)'}, unverifiable=True))
+        except urllib.error.HTTPError as e:
+            logger.debug(1, "HTTP Error: %s: %s" % (e.code, e.reason))
+            logger.debug(1, " Requested: %s" % (url))
+            logger.debug(1, " Actual:    %s" % (e.geturl()))
+
+            if e.code == 404:
+                logger.debug(1, "Request not found.")
+                raise LayerIndexFetchError(url, e)
+            else:
+                logger.debug(1, "Headers:\n%s" % (e.headers))
+                raise LayerIndexFetchError(url, e)
+        except OSError as e:
+            error = 0
+            reason = ""
+
+            # Process base OSError first...
+            if hasattr(e, 'errno'):
+                error = e.errno
+                reason = e.strerror
+
+            # Process gaierror (socket error) subclass if available.
+            if hasattr(e, 'reason') and hasattr(e.reason, 'errno') and hasattr(e.reason, 'strerror'):
+                error = e.reason.errno
+                reason = e.reason.strerror
+                if error == -2:
+                    raise LayerIndexFetchError(url, "%s: %s" % (e, reason))
+
+            if error and error != 0:
+                raise LayerIndexFetchError(url, "Unexpected exception: [Error %s] %s" % (error, reason))
+            else:
+                raise LayerIndexFetchError(url, "Unable to fetch OSError exception: %s" % e)
+
+        finally:
+            logger.debug(1, "...fetching %s (%s), done." % (url, ["without authentication", "with authentication"][bool(username)]))
+
+        return res
+
+
+    def load_layerindex(self, indexURI, load=['layerDependencies', 'recipes', 'machines', 'distros'], reload=False):
+        '''Load the layerindex.
+
+           indexURI - An index to load.  (Use multiple calls to load multiple indexes)
+
+           reload - If reload is True, then any previously loaded indexes will be forgotten.
+
+           load - List of elements to load.  Default loads all items.
+              Note: plugins may ignore this.
+
+The format of the indexURI:
+
+  <url>;branch=<branch>;cache=<cache>;desc=<description>
+
+  Note: the 'branch' parameter if set can select multiple branches by using
+  comma, such as 'branch=master,morty,pyro'.  However, many operations only look
+  at the -first- branch specified!
+
+  The cache value may be undefined, in this case a network failure will
+  result in an error, otherwise the system will look for a file of the cache
+  name and load that instead.
+
+  For example:
+
+  http://layers.openembedded.org/layerindex/api/;branch=master;desc=OpenEmbedded%20Layer%20Index
+  cooker://
+'''
+        if reload:
+            self.indexes = []
+
+        logger.debug(1, 'Loading: %s' % indexURI)
+
+        if not self.plugins:
+            raise LayerIndexException("No LayerIndex Plugins available")
+
+        for plugin in self.plugins:
+            # Check if the plugin was initialized
+            logger.debug(1, 'Trying %s' % plugin.__class__)
+            if not hasattr(plugin, 'type') or not plugin.type:
+                continue
+            try:
+                # TODO: Implement 'cache', for when the network is not available
+                indexEnt = plugin.load_index(indexURI, load)
+                break
+            except LayerIndexPluginUrlError as e:
+                logger.debug(1, "%s doesn't support %s" % (plugin.type, e.url))
+            except NotImplementedError:
+                pass
+        else:
+            logger.debug(1, "No plugins support %s" % indexURI)
+            raise LayerIndexException("No plugins support %s" % indexURI)
+
+        # Mark CONFIG data as something we've added...
+        indexEnt.config['local'] = []
+        indexEnt.config['local'].append('config')
+
+        # No longer permit changes..
+        indexEnt.lockData()
+
+        self.indexes.append(indexEnt)
+
+    def store_layerindex(self, indexURI, index=None):
+        '''Store one layerindex
+
+Typically this will be used to create a local cache file of a remote index.
+
+  file://<path>;branch=<branch>
+
+We can write out in either the restapi or django formats.  The split option
+will write out the individual elements split by layer and related components.
+'''
+        if not index:
+            logger.warning('No index to write, nothing to do.')
+            return
+
+        if not self.plugins:
+            raise LayerIndexException("No LayerIndex Plugins available")
+
+        for plugin in self.plugins:
+            # Check if the plugin was initialized
+            logger.debug(1, 'Trying %s' % plugin.__class__)
+            if not hasattr(plugin, 'type') or not plugin.type:
+                continue
+            try:
+                plugin.store_index(indexURI, index)
+                break
+            except LayerIndexPluginUrlError as e:
+                logger.debug(1, "%s doesn't support %s" % (plugin.type, e.url))
+            except NotImplementedError:
+                logger.debug(1, "Store not implemented in %s" % plugin.type)
+                pass
+        else:
+            logger.debug(1, "No plugins support %s" % indexURI)
+            raise LayerIndexException("No plugins support %s" % indexURI)
+
+
+    def is_empty(self):
+        '''Return True or False if the index has any usable data.
+
+We check the indexes entries to see if they have a branch set, as well as layerBranches set.
+If not, they are effectively blank.'''
+
+        found = False
+        for index in self.indexes:
+            if index.__bool__():
+                found = True
+                break
+        return not found
+
+
+    def find_vcs_url(self, vcs_url, branch=None):
+        '''Return the first layerBranch with the given vcs_url
+
+           If a branch has not been specified, we will iterate over the branches in
+           the default configuration until the first vcs_url/branch match.'''
+
+        for index in self.indexes:
+            logger.debug(1, ' searching %s' % index.config['DESCRIPTION'])
+            layerBranch = index.find_vcs_url(vcs_url, [branch])
+            if layerBranch:
+                return layerBranch
+        return None
+
+    def find_collection(self, collection, version=None, branch=None):
+        '''Return the first layerBranch with the given collection name
+
+           If a branch has not been specified, we will iterate over the branches in
+           the default configuration until the first collection/branch match.'''
+
+        logger.debug(1, 'find_collection: %s (%s) %s' % (collection, version, branch))
+
+        if branch:
+            branches = [branch]
+        else:
+            branches = None
+
+        for index in self.indexes:
+            logger.debug(1, ' searching %s' % index.config['DESCRIPTION'])
+            layerBranch = index.find_collection(collection, version, branches)
+            if layerBranch:
+                return layerBranch
+        else:
+            logger.debug(1, 'Collection %s (%s) not found for branch (%s)' % (collection, version, branch))
+        return None
+
+    def find_layerbranch(self, name, branch=None):
+        '''Return the layerBranch item for a given name and branch
+
+           If a branch has not been specified, we will iterate over the branches in
+           the default configuration until the first name/branch match.'''
+
+        if branch:
+            branches = [branch]
+        else:
+            branches = None
+
+        for index in self.indexes:
+            layerBranch = index.find_layerbranch(name, branches)
+            if layerBranch:
+                return layerBranch
+        return None
+
+    def find_dependencies(self, names=None, layerbranches=None, ignores=None):
+        '''Return a tuple of all dependencies and valid items for the list of (layer) names
+
+           The dependency scanning happens depth-first.  The returned
+           dependencies should be in the best order to define bblayers.
+
+           names - list of layer names (searching layerItems)
+           branches - when specified (with names) only this list of branches are evaluated
+
+           layerbranches - list of layerbranches to resolve dependencies
+
+           ignores - list of layer names to ignore
+
+           return: (dependencies, invalid)
+
+           dependencies[LayerItem.name] = [ LayerBranch, LayerDependency1, LayerDependency2, ... ]
+           invalid = [ LayerItem.name1, LayerItem.name2, ... ]
+        '''
+
+        invalid = []
+
+        # Convert name/branch to layerbranches
+        if layerbranches is None:
+            layerbranches = []
+
+        for name in names:
+            if ignores and name in ignores:
+                continue
+
+            for index in self.indexes:
+                layerbranch = index.find_layerbranch(name)
+                if not layerbranch:
+                    # Not in this index, hopefully it's in another...
+                    continue
+                layerbranches.append(layerbranch)
+                break
+            else:
+                invalid.append(name)
+
+
+        def _resolve_dependencies(layerbranches, ignores, dependencies, invalid):
+            for layerbranch in layerbranches:
+                if ignores and layerbranch.layer.name in ignores:
+                    continue
+
+                # Get a list of dependencies and then recursively process them
+                for layerdependency in layerbranch.index.layerDependencies_layerBranchId[layerbranch.id]:
+                    deplayerbranch = layerdependency.dependency_layerBranch
+
+                    if ignores and deplayerbranch.layer.name in ignores:
+                        continue
+
+                    # This little block is why we can't re-use the LayerIndexObj version:
+                    # we must be able to satisfy each dependency across layer indexes and
+                    # use the layer index order for priority.  (r stands for replacement below)
+
+                    # If this is the primary index, we can fast path and skip this
+                    if deplayerbranch.index != self.indexes[0]:
+                        # Is there an entry in a prior index for this collection/version?
+                        rdeplayerbranch = self.find_collection(
+                                              collection=deplayerbranch.collection,
+                                              version=deplayerbranch.version
+                                          )
+                        if rdeplayerbranch != deplayerbranch:
+                            logger.debug(1, 'Replaced %s:%s:%s with %s:%s:%s' % \
+                                  (deplayerbranch.index.config['DESCRIPTION'],
+                                   deplayerbranch.branch.name,
+                                   deplayerbranch.layer.name,
+                                   rdeplayerbranch.index.config['DESCRIPTION'],
+                                   rdeplayerbranch.branch.name,
+                                   rdeplayerbranch.layer.name))
+                            deplayerbranch = rdeplayerbranch
+
+                    # New dependency, we need to resolve it now... depth-first
+                    if deplayerbranch.layer.name not in dependencies:
+                        (dependencies, invalid) = _resolve_dependencies([deplayerbranch], ignores, dependencies, invalid)
+
+                    if deplayerbranch.layer.name not in dependencies:
+                        dependencies[deplayerbranch.layer.name] = [deplayerbranch, layerdependency]
+                    else:
+                        if layerdependency not in dependencies[deplayerbranch.layer.name]:
+                            dependencies[deplayerbranch.layer.name].append(layerdependency)
+
+            return (dependencies, invalid)
+
+        # OK, resolve this one...
+        dependencies = OrderedDict()
+        (dependencies, invalid) = _resolve_dependencies(layerbranches, ignores, dependencies, invalid)
+
+        for layerbranch in layerbranches:
+            if layerbranch.layer.name not in dependencies:
+                dependencies[layerbranch.layer.name] = [layerbranch]
+
+        return (dependencies, invalid)
+
+
+    def list_obj(self, object):
+        '''Print via the plain logger object information
+
+This function is used to implement debugging and provide the user info.
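+
+For example (hypothetical calls, assuming an index has already been loaded):
+
+    layerindex.list_obj('branches')
+    layerindex.list_obj('layerBranches')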
+''' + for lix in self.indexes: + if not hasattr(lix, object): + continue + + logger.plain ('') + logger.plain ('Index: %s' % lix.config['DESCRIPTION']) + + output = [] + + if object == 'branches': + logger.plain ('%s %s %s' % ('{:26}'.format('branch'), '{:34}'.format('description'), '{:22}'.format('bitbake branch'))) + logger.plain ('{:-^80}'.format("")) + for branchid in lix.branches: + output.append('%s %s %s' % ( + '{:26}'.format(lix.branches[branchid].name), + '{:34}'.format(lix.branches[branchid].short_description), + '{:22}'.format(lix.branches[branchid].bitbake_branch) + )) + for line in sorted(output): + logger.plain (line) + + continue + + if object == 'layerItems': + logger.plain ('%s %s' % ('{:26}'.format('layer'), '{:34}'.format('description'))) + logger.plain ('{:-^80}'.format("")) + for layerid in lix.layerItems: + output.append('%s %s' % ( + '{:26}'.format(lix.layerItems[layerid].name), + '{:34}'.format(lix.layerItems[layerid].summary) + )) + for line in sorted(output): + logger.plain (line) + + continue + + if object == 'layerBranches': + logger.plain ('%s %s %s' % ('{:26}'.format('layer'), '{:34}'.format('description'), '{:19}'.format('collection:version'))) + logger.plain ('{:-^80}'.format("")) + for layerbranchid in lix.layerBranches: + output.append('%s %s %s' % ( + '{:26}'.format(lix.layerBranches[layerbranchid].layer.name), + '{:34}'.format(lix.layerBranches[layerbranchid].layer.summary), + '{:19}'.format("%s:%s" % + (lix.layerBranches[layerbranchid].collection, + lix.layerBranches[layerbranchid].version) + ) + )) + for line in sorted(output): + logger.plain (line) + + continue + + if object == 'layerDependencies': + logger.plain ('%s %s %s %s' % ('{:19}'.format('branch'), '{:26}'.format('layer'), '{:11}'.format('dependency'), '{:26}'.format('layer'))) + logger.plain ('{:-^80}'.format("")) + for layerDependency in lix.layerDependencies: + if not lix.layerDependencies[layerDependency].dependency_layerBranch: + continue + + output.append('%s %s %s %s' % ( + '{:19}'.format(lix.layerDependencies[layerDependency].layerbranch.branch.name), + '{:26}'.format(lix.layerDependencies[layerDependency].layerbranch.layer.name), + '{:11}'.format('requires' if lix.layerDependencies[layerDependency].required else 'recommends'), + '{:26}'.format(lix.layerDependencies[layerDependency].dependency_layerBranch.layer.name) + )) + for line in sorted(output): + logger.plain (line) + + continue + + if object == 'recipes': + logger.plain ('%s %s %s' % ('{:20}'.format('recipe'), '{:10}'.format('version'), 'layer')) + logger.plain ('{:-^80}'.format("")) + output = [] + for recipe in lix.recipes: + output.append('%s %s %s' % ( + '{:30}'.format(lix.recipes[recipe].pn), + '{:30}'.format(lix.recipes[recipe].pv), + lix.recipes[recipe].layer.name + )) + for line in sorted(output): + logger.plain (line) + + continue + + if object == 'machines': + logger.plain ('%s %s %s' % ('{:24}'.format('machine'), '{:34}'.format('description'), '{:19}'.format('layer'))) + logger.plain ('{:-^80}'.format("")) + for machine in lix.machines: + output.append('%s %s %s' % ( + '{:24}'.format(lix.machines[machine].name), + '{:34}'.format(lix.machines[machine].description)[:34], + '{:19}'.format(lix.machines[machine].layerbranch.layer.name) + )) + for line in sorted(output): + logger.plain (line) + + continue + + if object == 'distros': + logger.plain ('%s %s %s' % ('{:24}'.format('distro'), '{:34}'.format('description'), '{:19}'.format('layer'))) + logger.plain ('{:-^80}'.format("")) + for distro in lix.distros: + 
output.append('%s %s %s' % (
+                               '{:24}'.format(lix.distros[distro].name),
+                               '{:34}'.format(lix.distros[distro].description)[:34],
+                               '{:19}'.format(lix.distros[distro].layerbranch.layer.name)
+                              ))
+                for line in sorted(output):
+                    logger.plain (line)
+
+                continue
+
+            logger.plain ('')
+
+
+# This class holds a single layer index instance
+# The LayerIndexObj is made up of a dictionary of elements, such as:
+#   index['config'] - configuration data for this index
+#   index['branches'] - dictionary of Branch objects, by id number
+#   index['layerItems'] - dictionary of layerItem objects, by id number
+#   ...etc...  (See: http://layers.openembedded.org/layerindex/api/)
+#
+# The class needs to manage the 'index' entries and allow easy adding
+# of new items, as well as simple loading of the items.
+class LayerIndexObj():
+    def __init__(self):
+        super().__setattr__('_index', {})
+        super().__setattr__('_lock', False)
+
+    def __bool__(self):
+        '''False if the index is effectively empty
+
+           We check the index to see if it has a branch set, as well as
+           layerbranches set.  If not, it is effectively blank.'''
+
+        if not bool(self._index):
+            return False
+
+        try:
+            if self.branches and self.layerBranches:
+                return True
+        except AttributeError:
+            pass
+
+        return False
+
+    def __getattr__(self, name):
+        if name.startswith('_'):
+            return super().__getattribute__(name)
+
+        if name not in self._index:
+            raise AttributeError('%s not in index datastore' % name)
+
+        return self._index[name]
+
+    def __setattr__(self, name, value):
+        if self.isLocked():
+            raise TypeError("Can not set attribute '%s': index is locked" % name)
+
+        if name.startswith('_'):
+            super().__setattr__(name, value)
+            return
+
+        self._index[name] = value
+
+    def __delattr__(self, name):
+        if self.isLocked():
+            raise TypeError("Can not delete attribute '%s': index is locked" % name)
+
+        if name.startswith('_'):
+            super().__delattr__(name)
+            return
+
+        self._index.pop(name)
+
+    def lockData(self):
+        '''Lock data object (make it readonly)'''
+        super().__setattr__("_lock", True)
+
+    def unlockData(self):
+        '''Unlock data object (make it writable again)'''
+        super().__setattr__("_lock", False)
+
+        # When the data is unlocked, we have to clear the caches, as
+        # modification is allowed!
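+        # Each cache below is rebuilt lazily by its corresponding @property
+        # the next time it is accessed.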
+        if hasattr(self, '_layerBranches_layerId_branchId'):
+            del(self._layerBranches_layerId_branchId)
+        if hasattr(self, '_layerDependencies_layerBranchId'):
+            del(self._layerDependencies_layerBranchId)
+        if hasattr(self, '_layerBranches_vcsUrl'):
+            del(self._layerBranches_vcsUrl)
+
+    def isLocked(self):
+        '''Is this object locked (readonly)?'''
+        return self._lock
+
+    def add_element(self, indexname, objs):
+        '''Add a layer index object to the index.'''
+        if indexname not in self._index:
+            self._index[indexname] = {}
+
+        for obj in objs:
+            if obj.id in self._index[indexname]:
+                if self._index[indexname][obj.id] == obj:
+                    continue
+                raise LayerIndexError('Conflict adding object %s(%s) to index' % (indexname, obj.id))
+            self._index[indexname][obj.id] = obj
+
+    def add_raw_element(self, indexname, objtype, rawobjs):
+        '''Convert raw layer index data items into layer index item objects and add them to the index'''
+        objs = []
+        for entry in rawobjs:
+            objs.append(objtype(self, entry))
+        self.add_element(indexname, objs)
+
+    # Quick lookup table for searching layerId and branchID combos
+    @property
+    def layerBranches_layerId_branchId(self):
+        def createCache(self):
+            cache = {}
+            for layerbranchid in self.layerBranches:
+                layerbranch = self.layerBranches[layerbranchid]
+                cache["%s:%s" % (layerbranch.layer_id, layerbranch.branch_id)] = layerbranch
+            return cache
+
+        if self.isLocked():
+            cache = getattr(self, '_layerBranches_layerId_branchId', None)
+        else:
+            cache = None
+
+        if not cache:
+            cache = createCache(self)
+
+        if self.isLocked():
+            super().__setattr__('_layerBranches_layerId_branchId', cache)
+
+        return cache
+
+    # Quick lookup table for finding all dependencies of a layerBranch
+    @property
+    def layerDependencies_layerBranchId(self):
+        def createCache(self):
+            cache = {}
+            # This ensures empty lists for all branchids
+            for layerbranchid in self.layerBranches:
+                cache[layerbranchid] = []
+
+            for layerdependencyid in self.layerDependencies:
+                layerdependency = self.layerDependencies[layerdependencyid]
+                cache[layerdependency.layerbranch_id].append(layerdependency)
+            return cache
+
+        if self.isLocked():
+            cache = getattr(self, '_layerDependencies_layerBranchId', None)
+        else:
+            cache = None
+
+        if not cache:
+            cache = createCache(self)
+
+        if self.isLocked():
+            super().__setattr__('_layerDependencies_layerBranchId', cache)
+
+        return cache
+
+    # Quick lookup table for finding all instances of a vcs_url
+    @property
+    def layerBranches_vcsUrl(self):
+        def createCache(self):
+            cache = {}
+            for layerbranchid in self.layerBranches:
+                layerbranch = self.layerBranches[layerbranchid]
+                if layerbranch.layer.vcs_url not in cache:
+                    cache[layerbranch.layer.vcs_url] = [layerbranch]
+                else:
+                    cache[layerbranch.layer.vcs_url].append(layerbranch)
+            return cache
+
+        if self.isLocked():
+            cache = getattr(self, '_layerBranches_vcsUrl', None)
+        else:
+            cache = None
+
+        if not cache:
+            cache = createCache(self)
+
+        if self.isLocked():
+            super().__setattr__('_layerBranches_vcsUrl', cache)
+
+        return cache
+
+
+    def find_vcs_url(self, vcs_url, branches=None):
+        '''Return the first layerBranch with the given vcs_url
+
+           If a list of branches has not been specified, we will iterate on
+           all branches until the first vcs_url is found.'''
+
+        if not self.__bool__():
+            return None
+
+        for layerbranch in self.layerBranches_vcsUrl.get(vcs_url, []):
+            if branches and layerbranch.branch.name not in branches:
+                continue
+
+            return layerbranch
+
+        return None
+
+
+    def find_collection(self, collection, version=None, branches=None):
+        '''Return the first layerBranch with the given collection name
+
+           If a list of branches has not been specified, we will iterate on
+           all branches until the first collection is found.'''
+
+        if not self.__bool__():
+            return None
+
+        for layerbranchid in self.layerBranches:
+            layerbranch = self.layerBranches[layerbranchid]
+            if branches and layerbranch.branch.name not in branches:
+                continue
+
+            if layerbranch.collection == collection and \
+                (version is None or version == layerbranch.version):
+                return layerbranch
+
+        return None
+
+
+    def find_layerbranch(self, name, branches=None):
+        '''Return the first layerbranch whose layer name matches
+
+           If a list of branches has not been specified, we will iterate on
+           all branches until the first layer with that name is found.'''
+
+        if not self.__bool__():
+            return None
+
+        for layerbranchid in self.layerBranches:
+            layerbranch = self.layerBranches[layerbranchid]
+            if branches and layerbranch.branch.name not in branches:
+                continue
+
+            if layerbranch.layer.name == name:
+                return layerbranch
+
+        return None
+
+    def find_dependencies(self, names=None, branches=None, layerbranches=None, ignores=None):
+        '''Return a tuple of all dependencies and valid items for the list of (layer) names
+
+        The dependency scanning happens depth-first.  The returned
+        dependencies should be in the best order to define bblayers.
+
+        names - list of layer names (searching layerItems)
+        branches - when specified (with names) only this list of branches is evaluated
+        layerbranches - list of layerbranches to resolve dependencies
+        ignores - list of layer names to ignore
+
+        return: (dependencies, invalid)
+
+        dependencies[LayerItem.name] = [ LayerBranch, LayerDependency1, LayerDependency2, ... ]
+        invalid = [ LayerItem.name1, LayerItem.name2, ... ]'''
+
+        invalid = []
+
+        # Convert name/branch to layerbranches
+        if layerbranches is None:
+            layerbranches = []
+
+        for name in names:
+            if ignores and name in ignores:
+                continue
+
+            layerbranch = self.find_layerbranch(name, branches)
+            if not layerbranch:
+                invalid.append(name)
+            else:
+                layerbranches.append(layerbranch)
+
+        for layerbranch in layerbranches:
+            if layerbranch.index != self:
+                raise LayerIndexException("Can not resolve dependencies across indexes with this class function!")
+
+        def _resolve_dependencies(layerbranches, ignores, dependencies, invalid):
+            for layerbranch in layerbranches:
+                if ignores and layerbranch.layer.name in ignores:
+                    continue
+
+                for layerdependency in layerbranch.index.layerDependencies_layerBranchId[layerbranch.id]:
+                    deplayerbranch = layerdependency.dependency_layerBranch
+
+                    if ignores and deplayerbranch.layer.name in ignores:
+                        continue
+
+                    # New dependency, we need to resolve it now... depth-first
+                    if deplayerbranch.layer.name not in dependencies:
+                        (dependencies, invalid) = _resolve_dependencies([deplayerbranch], ignores, dependencies, invalid)
+
+                    if deplayerbranch.layer.name not in dependencies:
+                        dependencies[deplayerbranch.layer.name] = [deplayerbranch, layerdependency]
+                    else:
+                        if layerdependency not in dependencies[deplayerbranch.layer.name]:
+                            dependencies[deplayerbranch.layer.name].append(layerdependency)
+
+            return (dependencies, invalid)
+
+        # OK, resolve this one...
+        dependencies = OrderedDict()
+        (dependencies, invalid) = _resolve_dependencies(layerbranches, ignores, dependencies, invalid)
+
+        # If the requested items are not already in the list, add them now
+        for layerbranch in layerbranches:
+            if layerbranch.layer.name not in dependencies:
+                dependencies[layerbranch.layer.name] = [layerbranch]
+
+        return (dependencies, invalid)
+
+
+# Define a basic LayerIndexItemObj.  This object forms the basis for all other
+# objects.  The raw Layer Index data is stored in the _data element, but we
+# do not want users to access the data directly.  So wrap this and protect it
+# from direct manipulation.
+#
+# It is up to the instantiators of the objects to fill them out, and once done
+# lock the objects to prevent further accidental manipulation.
+#
+# Using getattr, setattr and properties we can access and manipulate
+# the data within the data element.
+class LayerIndexItemObj():
+    def __init__(self, index, data=None, lock=False):
+        if data is None:
+            data = {}
+
+        if not isinstance(data, dict):
+            raise TypeError('data (%s) is not a dict' % type(data))
+
+        super().__setattr__('_lock', lock)
+        super().__setattr__('index', index)
+        super().__setattr__('_data', data)
+
+    def __eq__(self, other):
+        if self.__class__ != other.__class__:
+            return False
+        return self._data == other._data
+
+    def __bool__(self):
+        return bool(self._data)
+
+    def __getattr__(self, name):
+        # These are internal to THIS class, and not part of data
+        if name == "index" or name.startswith('_'):
+            return super().__getattribute__(name)
+
+        if name not in self._data:
+            raise AttributeError('%s not in datastore' % name)
+
+        return self._data[name]
+
+    def _setattr(self, name, value, prop=True):
+        '''__setattr__-like function, but with control over property object behavior'''
+        if self.isLocked():
+            raise TypeError("Can not set attribute '%s': Object data is locked" % name)
+
+        if name.startswith('_'):
+            super().__setattr__(name, value)
+            return
+
+        # Since __setattr__ runs before properties, we need to check if
+        # there is a setter property and then execute it
+        # ... or set self._data[name] ourselves
+        propertyobj = getattr(self.__class__, name, None)
+        if prop and isinstance(propertyobj, property):
+            if propertyobj.fset:
+                propertyobj.fset(self, value)
+            else:
+                raise AttributeError('Attribute %s is readonly, and may not be set' % name)
+        else:
+            self._data[name] = value
+
+    def __setattr__(self, name, value):
+        self._setattr(name, value, prop=True)
+
+    def _delattr(self, name, prop=True):
+        # Since __delattr__ runs before properties, we need to check if
+        # there is a deleter property and then execute it
+        # ... or we pop it ourselves..
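+        # (The property implementations call _delattr with prop=False to
+        # remove the raw key from _data without recursing into themselves.)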
+ propertyobj = getattr(self.__class__, name, None) + if prop and isinstance(propertyobj, property): + if propertyobj.fdel: + propertyobj.fdel(self) + else: + raise AttributeError('Attribute %s is readonly, and may not be deleted' % name) + else: + self._data.pop(name) + + def __delattr__(self, name): + self._delattr(name, prop=True) + + def lockData(self): + '''Lock data object (make it readonly)''' + super().__setattr__("_lock", True) + + def unlockData(self): + '''unlock data object (make it readonly)''' + super().__setattr__("_lock", False) + + def isLocked(self): + '''Is this object locked (readonly)?''' + return self._lock + +# Branch object +class Branch(LayerIndexItemObj): + def define_data(self, id, name, bitbake_branch, + short_description=None, sort_priority=1, + updates_enabled=True, updated=None, + update_environment=None): + self.id = id + self.name = name + self.bitbake_branch = bitbake_branch + self.short_description = short_description or name + self.sort_priority = sort_priority + self.updates_enabled = updates_enabled + self.updated = updated or datetime.datetime.today().isoformat() + self.update_environment = update_environment + + @property + def name(self): + return self.__getattr__('name') + + @name.setter + def name(self, value): + self._data['name'] = value + + if self.bitbake_branch == value: + self.bitbake_branch = "" + + @name.deleter + def name(self): + self._delattr('name', prop=False) + + @property + def bitbake_branch(self): + try: + return self.__getattr__('bitbake_branch') + except AttributeError: + return self.name + + @bitbake_branch.setter + def bitbake_branch(self, value): + if self.name == value: + self._data['bitbake_branch'] = "" + else: + self._data['bitbake_branch'] = value + + @bitbake_branch.deleter + def bitbake_branch(self): + self._delattr('bitbake_branch', prop=False) + + +class LayerItem(LayerIndexItemObj): + def define_data(self, id, name, status='P', + layer_type='A', summary=None, + description=None, + vcs_url=None, vcs_web_url=None, + vcs_web_tree_base_url=None, + vcs_web_file_base_url=None, + usage_url=None, + mailing_list_url=None, + index_preference=1, + classic=False, + updated=None): + self.id = id + self.name = name + self.status = status + self.layer_type = layer_type + self.summary = summary or name + self.description = description or summary or name + self.vcs_url = vcs_url + self.vcs_web_url = vcs_web_url + self.vcs_web_tree_base_url = vcs_web_tree_base_url + self.vcs_web_file_base_url = vcs_web_file_base_url + self.index_preference = index_preference + self.classic = classic + self.updated = updated or datetime.datetime.today().isoformat() + + +class LayerBranch(LayerIndexItemObj): + def define_data(self, id, collection, version, layer, branch, + vcs_subdir="", vcs_last_fetch=None, + vcs_last_rev=None, vcs_last_commit=None, + actual_branch="", + updated=None): + self.id = id + self.collection = collection + self.version = version + if isinstance(layer, LayerItem): + self.layer = layer + else: + self.layer_id = layer + + if isinstance(branch, Branch): + self.branch = branch + else: + self.branch_id = branch + + self.vcs_subdir = vcs_subdir + self.vcs_last_fetch = vcs_last_fetch + self.vcs_last_rev = vcs_last_rev + self.vcs_last_commit = vcs_last_commit + self.actual_branch = actual_branch + self.updated = updated or datetime.datetime.today().isoformat() + + # This is a little odd, the _data attribute is 'layer', but it's really + # referring to the layer id.. 
so let's adjust this to make it useful
+    @property
+    def layer_id(self):
+        return self.__getattr__('layer')
+
+    @layer_id.setter
+    def layer_id(self, value):
+        self._setattr('layer', value, prop=False)
+
+    @layer_id.deleter
+    def layer_id(self):
+        self._delattr('layer', prop=False)
+
+    @property
+    def layer(self):
+        try:
+            return self.index.layerItems[self.layer_id]
+        except KeyError:
+            raise AttributeError('Unable to find layerItems in index to map layer_id %s' % self.layer_id)
+        except IndexError:
+            raise AttributeError('Unable to find layer_id %s in index layerItems' % self.layer_id)
+
+    @layer.setter
+    def layer(self, value):
+        if not isinstance(value, LayerItem):
+            raise TypeError('value is not a LayerItem')
+        if self.index != value.index:
+            raise AttributeError('Object and value do not share the same index and thus key set.')
+        self.layer_id = value.id
+
+    @layer.deleter
+    def layer(self):
+        del self.layer_id
+
+    @property
+    def branch_id(self):
+        return self.__getattr__('branch')
+
+    @branch_id.setter
+    def branch_id(self, value):
+        self._setattr('branch', value, prop=False)
+
+    @branch_id.deleter
+    def branch_id(self):
+        self._delattr('branch', prop=False)
+
+    @property
+    def branch(self):
+        try:
+            logger.debug(1, "Get branch object from branches[%s]" % (self.branch_id))
+            return self.index.branches[self.branch_id]
+        except KeyError:
+            raise AttributeError('Unable to find branches in index to map branch_id %s' % self.branch_id)
+        except IndexError:
+            raise AttributeError('Unable to find branch_id %s in index branches' % self.branch_id)
+
+    @branch.setter
+    def branch(self, value):
+        if not isinstance(value, Branch):
+            raise TypeError('value is not a Branch')
+        if self.index != value.index:
+            raise AttributeError('Object and value do not share the same index and thus key set.')
+        self.branch_id = value.id
+
+    @branch.deleter
+    def branch(self):
+        del self.branch_id
+
+    @property
+    def actual_branch(self):
+        if self.__getattr__('actual_branch'):
+            return self.__getattr__('actual_branch')
+        else:
+            return self.branch.name
+
+    @actual_branch.setter
+    def actual_branch(self, value):
+        logger.debug(1, "Set actual_branch to %s .. name is %s" % (value, self.branch.name))
+        if value != self.branch.name:
+            self._setattr('actual_branch', value, prop=False)
+        else:
+            self._setattr('actual_branch', '', prop=False)
+
+    @actual_branch.deleter
+    def actual_branch(self):
+        self._delattr('actual_branch', prop=False)
+
+# Extend LayerIndexItemObj with common LayerBranch manipulations
+# All of the remaining LayerIndex objects refer to a layerbranch, and it is
+# up to the user to follow that back, through the LayerBranch object, into
+# the layer object to get various attributes.  So add an intermediate set
+# of attributes that can easily get us the layerbranch as well as layer.
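+#
+# For example, a LayerDependency object 'dep' (defined below) can use:
+#
+#   dep.layerbranch  -- the LayerBranch object the item belongs to
+#   dep.layer        -- the LayerItem behind that LayerBranch
+#
+# rather than resolving dep.layerbranch_id through the index by hand.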
+ +class LayerIndexItemObj_LayerBranch(LayerIndexItemObj): + @property + def layerbranch_id(self): + return self.__getattr__('layerbranch') + + @layerbranch_id.setter + def layerbranch_id(self, value): + self._setattr('layerbranch', value, prop=False) + + @layerbranch_id.deleter + def layerbranch_id(self): + self._delattr('layerbranch', prop=False) + + @property + def layerbranch(self): + try: + return self.index.layerBranches[self.layerbranch_id] + except KeyError: + raise AttributeError('Unable to find layerBranches in index to map layerbranch_id %s' % self.layerbranch_id) + except IndexError: + raise AttributeError('Unable to find layerbranch_id %s in index branches' % self.layerbranch_id) + + @layerbranch.setter + def layerbranch(self, value): + if not isinstance(value, LayerBranch): + raise TypeError('value (%s) is not a layerBranch' % type(value)) + if self.index != value.index: + raise AttributeError('Object and value do not share the same index and thus key set.') + self.layerbranch_id = value.id + + @layerbranch.deleter + def layerbranch(self): + del self.layerbranch_id + + @property + def layer_id(self): + return self.layerbranch.layer_id + + # Doesn't make sense to set or delete layer_id + + @property + def layer(self): + return self.layerbranch.layer + + # Doesn't make sense to set or delete layer + + +class LayerDependency(LayerIndexItemObj_LayerBranch): + def define_data(self, id, layerbranch, dependency, required=True): + self.id = id + if isinstance(layerbranch, LayerBranch): + self.layerbranch = layerbranch + else: + self.layerbranch_id = layerbranch + if isinstance(dependency, LayerDependency): + self.dependency = dependency + else: + self.dependency_id = dependency + self.required = required + + @property + def dependency_id(self): + return self.__getattr__('dependency') + + @dependency_id.setter + def dependency_id(self, value): + self._setattr('dependency', value, prop=False) + + @dependency_id.deleter + def dependency_id(self): + self._delattr('dependency', prop=False) + + @property + def dependency(self): + try: + return self.index.layerItems[self.dependency_id] + except KeyError: + raise AttributeError('Unable to find layerItems in index to map layerbranch_id %s' % self.dependency_id) + except IndexError: + raise AttributeError('Unable to find dependency_id %s in index layerItems' % self.dependency_id) + + @dependency.setter + def dependency(self, value): + if not isinstance(value, LayerDependency): + raise TypeError('value (%s) is not a dependency' % type(value)) + if self.index != value.index: + raise AttributeError('Object and value do not share the same index and thus key set.') + self.dependency_id = value.id + + @dependency.deleter + def dependency(self): + self._delattr('dependency', prop=False) + + @property + def dependency_layerBranch(self): + layerid = self.dependency_id + branchid = self.layerbranch.branch_id + + try: + return self.index.layerBranches_layerId_branchId["%s:%s" % (layerid, branchid)] + except IndexError: + # layerBranches_layerId_branchId -- but not layerId:branchId + raise AttributeError('Unable to find layerId:branchId %s:%s in index layerBranches_layerId_branchId' % (layerid, branchid)) + except KeyError: + raise AttributeError('Unable to find layerId:branchId %s:%s in layerItems and layerBranches' % (layerid, branchid)) + + # dependency_layerBranch doesn't make sense to set or del + + +class Recipe(LayerIndexItemObj_LayerBranch): + def define_data(self, id, + filename, filepath, pn, pv, layerbranch, + summary="", description="", 
section="", license="", + homepage="", bugtracker="", provides="", bbclassextend="", + inherits="", blacklisted="", updated=None): + self.id = id + self.filename = filename + self.filepath = filepath + self.pn = pn + self.pv = pv + self.summary = summary + self.description = description + self.section = section + self.license = license + self.homepage = homepage + self.bugtracker = bugtracker + self.provides = provides + self.bbclassextend = bbclassextend + self.inherits = inherits + self.updated = updated or datetime.datetime.today().isoformat() + self.blacklisted = blacklisted + if isinstance(layerbranch, LayerBranch): + self.layerbranch = layerbranch + else: + self.layerbranch_id = layerbranch + + @property + def fullpath(self): + return os.path.join(self.filepath, self.filename) + + # Set would need to understand how to split it + # del would we del both parts? + + @property + def inherits(self): + if 'inherits' not in self._data: + # Older indexes may not have this, so emulate it + if '-image-' in self.pn: + return 'image' + return self.__getattr__('inherits') + + @inherits.setter + def inherits(self, value): + return self._setattr('inherits', value, prop=False) + + @inherits.deleter + def inherits(self): + return self._delattr('inherits', prop=False) + + +class Machine(LayerIndexItemObj_LayerBranch): + def define_data(self, id, + name, description, layerbranch, + updated=None): + self.id = id + self.name = name + self.description = description + if isinstance(layerbranch, LayerBranch): + self.layerbranch = layerbranch + else: + self.layerbranch_id = layerbranch + self.updated = updated or datetime.datetime.today().isoformat() + +class Distro(LayerIndexItemObj_LayerBranch): + def define_data(self, id, + name, description, layerbranch, + updated=None): + self.id = id + self.name = name + self.description = description + if isinstance(layerbranch, LayerBranch): + self.layerbranch = layerbranch + else: + self.layerbranch_id = layerbranch + self.updated = updated or datetime.datetime.today().isoformat() + +# When performing certain actions, we may need to sort the data. +# This will allow us to keep it consistent from run to run. +def sort_entry(item): + newitem = item + try: + if type(newitem) == type(dict()): + newitem = OrderedDict(sorted(newitem.items(), key=lambda t: t[0])) + for index in newitem: + newitem[index] = sort_entry(newitem[index]) + elif type(newitem) == type(list()): + newitem.sort(key=lambda obj: obj['id']) + for index, _ in enumerate(newitem): + newitem[index] = sort_entry(newitem[index]) + except: + logger.error('Sort failed for item %s' % type(item)) + pass + + return newitem diff --git a/bitbake/lib/layerindexlib/cooker.py b/bitbake/lib/layerindexlib/cooker.py new file mode 100644 index 0000000..848f0e2 --- /dev/null +++ b/bitbake/lib/layerindexlib/cooker.py @@ -0,0 +1,344 @@ +# Copyright (C) 2016-2018 Wind River Systems, Inc. +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License version 2 as +# published by the Free Software Foundation. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. +# See the GNU General Public License for more details. 
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+import logging
+import json
+import os
+
+from collections import OrderedDict, defaultdict
+
+from urllib.parse import unquote, urlparse
+
+import layerindexlib
+
+import layerindexlib.plugin
+
+logger = logging.getLogger('BitBake.layerindexlib.cooker')
+
+import bb.utils
+
+def plugin_init(plugins):
+    return CookerPlugin()
+
+class CookerPlugin(layerindexlib.plugin.IndexPlugin):
+    def __init__(self):
+        self.type = "cooker"
+
+        self.server_connection = None
+        self.ui_module = None
+        self.server = None
+
+    def _run_command(self, command, path, default=None):
+        try:
+            result, _ = bb.process.run(command, cwd=path)
+            result = result.strip()
+        except bb.process.ExecutionError:
+            result = default
+        return result
+
+    def _handle_git_remote(self, remote):
+        if "://" not in remote:
+            if ':' in remote:
+                # This is assumed to be ssh
+                remote = "ssh://" + remote
+            else:
+                # This is assumed to be a file path
+                remote = "file://" + remote
+        return remote
+
+    def _get_bitbake_info(self):
+        """Return a tuple of bitbake information"""
+
+        # Our path SHOULD be .../bitbake/lib/layerindex/cooker.py
+        bb_path = os.path.dirname(__file__) # .../bitbake/lib/layerindex/cooker.py
+        bb_path = os.path.dirname(bb_path)  # .../bitbake/lib/layerindex
+        bb_path = os.path.dirname(bb_path)  # .../bitbake/lib
+        bb_path = os.path.dirname(bb_path)  # .../bitbake
+        bb_path = self._run_command('git rev-parse --show-toplevel', os.path.dirname(__file__), default=bb_path)
+        bb_branch = self._run_command('git rev-parse --abbrev-ref HEAD', bb_path, default="")
+        bb_rev = self._run_command('git rev-parse HEAD', bb_path, default="")
+        for remotes in self._run_command('git remote -v', bb_path, default="").split("\n"):
+            if not remotes:
+                continue
+            remote = remotes.split("\t")[1].split(" ")[0]
+            if "(fetch)" == remotes.split("\t")[1].split(" ")[1]:
+                bb_remote = self._handle_git_remote(remote)
+                break
+        else:
+            bb_remote = self._handle_git_remote(bb_path)
+
+        return (bb_remote, bb_branch, bb_rev, bb_path)
+
+    def _load_bblayers(self, branches=None):
+        """Load the BBLAYERS and related collection information"""
+
+        d = self.layerindex.data
+
+        if not branches:
+            raise layerindexlib.LayerIndexFetchError("No branches specified for _load_bblayers!")
+
+        index = layerindexlib.LayerIndexObj()
+
+        branchId = 0
+        index.branches = {}
+
+        layerItemId = 0
+        index.layerItems = {}
+
+        layerBranchId = 0
+        index.layerBranches = {}
+
+        bblayers = d.getVar('BBLAYERS').split()
+
+        if not bblayers:
+            # It's blank!  Nothing to process...
+ return index + + collections = d.getVar('BBFILE_COLLECTIONS') + layerconfs = d.varhistory.get_variable_items_files('BBFILE_COLLECTIONS', d) + bbfile_collections = {layer: os.path.dirname(os.path.dirname(path)) for layer, path in layerconfs.items()} + + (_, bb_branch, _, _) = self._get_bitbake_info() + + for branch in branches: + branchId += 1 + index.branches[branchId] = layerindexlib.Branch(index, None) + index.branches[branchId].define_data(branchId, branch, bb_branch) + + for entry in collections.split(): + layerpath = entry + if entry in bbfile_collections: + layerpath = bbfile_collections[entry] + + layername = d.getVar('BBLAYERS_LAYERINDEX_NAME_%s' % entry) or os.path.basename(layerpath) + layerversion = d.getVar('LAYERVERSION_%s' % entry) or "" + layerurl = self._handle_git_remote(layerpath) + + layersubdir = "" + layerrev = "" + layerbranch = "" + + if os.path.isdir(layerpath): + layerbasepath = self._run_command('git rev-parse --show-toplevel', layerpath, default=layerpath) + if os.path.abspath(layerpath) != os.path.abspath(layerbasepath): + layersubdir = os.path.abspath(layerpath)[len(layerbasepath) + 1:] + + layerbranch = self._run_command('git rev-parse --abbrev-ref HEAD', layerpath, default="") + layerrev = self._run_command('git rev-parse HEAD', layerpath, default="") + + for remotes in self._run_command('git remote -v', layerpath, default="").split("\n"): + if not remotes: + layerurl = self._handle_git_remote(layerpath) + else: + remote = remotes.split("\t")[1].split(" ")[0] + if "(fetch)" == remotes.split("\t")[1].split(" ")[1]: + layerurl = self._handle_git_remote(remote) + break + + layerItemId += 1 + index.layerItems[layerItemId] = layerindexlib.LayerItem(index, None) + index.layerItems[layerItemId].define_data(layerItemId, layername, description=layerpath, vcs_url=layerurl) + + for branchId in index.branches: + layerBranchId += 1 + index.layerBranches[layerBranchId] = layerindexlib.LayerBranch(index, None) + index.layerBranches[layerBranchId].define_data(layerBranchId, entry, layerversion, layerItemId, branchId, + vcs_subdir=layersubdir, vcs_last_rev=layerrev, actual_branch=layerbranch) + + return index + + + def load_index(self, url, load): + """ + Fetches layer information from a build configuration. + + The return value is a dictionary containing API, + layer, branch, dependency, recipe, machine, distro, information. + + url type should be 'cooker'. + url path is ignored + """ + + up = urlparse(url) + + if up.scheme != 'cooker': + raise layerindexlib.plugin.LayerIndexPluginUrlError(self.type, url) + + d = self.layerindex.data + + params = self.layerindex._parse_params(up.params) + + # Only reason to pass a branch is to emulate them... 
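+        # For example, a URI parameter like branch=master,mybranch (hypothetical
+        # names) would emulate two branches, 'master' and 'mybranch'.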
+        if 'branch' in params:
+            branches = params['branch'].split(',')
+        else:
+            branches = ['HEAD']
+
+        logger.debug(1, "Loading cooker data branches %s" % branches)
+
+        index = self._load_bblayers(branches=branches)
+
+        index.config = {}
+        index.config['TYPE'] = self.type
+        index.config['URL'] = url
+
+        if 'desc' in params:
+            index.config['DESCRIPTION'] = unquote(params['desc'])
+        else:
+            index.config['DESCRIPTION'] = 'local'
+
+        if 'cache' in params:
+            index.config['CACHE'] = params['cache']
+
+        index.config['BRANCH'] = branches
+
+        # ("layerDependencies", layerindexlib.LayerDependency)
+        layerDependencyId = 0
+        if "layerDependencies" in load:
+            index.layerDependencies = {}
+            for layerBranchId in index.layerBranches:
+                branchName = index.layerBranches[layerBranchId].branch.name
+                collection = index.layerBranches[layerBranchId].collection
+
+                def add_dependency(layerDependencyId, index, deps, required):
+                    try:
+                        depDict = bb.utils.explode_dep_versions2(deps)
+                    except bb.utils.VersionStringException as vse:
+                        bb.fatal('Error parsing LAYERDEPENDS_%s: %s' % (collection, str(vse)))
+
+                    for dep, oplist in list(depDict.items()):
+                        # Find the dependency in the index we are building up
+                        depLayerBranch = index.find_collection(dep, branches=[branchName])
+                        if not depLayerBranch:
+                            # Missing dependency?!
+                            logger.error('Missing dependency %s (%s)' % (dep, branchName))
+                            continue
+
+                        # We assume that the oplist matches...
+                        layerDependencyId += 1
+                        layerDependency = layerindexlib.LayerDependency(index, None)
+                        layerDependency.define_data(id=layerDependencyId,
+                                        required=required, layerbranch=layerBranchId,
+                                        dependency=depLayerBranch.layer_id)
+
+                        logger.debug(1, '%s %s %s' % (layerDependency.layer.name,
+                                        'requires' if required else 'recommends',
+                                        layerDependency.dependency.name))
+                        index.add_element("layerDependencies", [layerDependency])
+
+                    return layerDependencyId
+
+                deps = d.getVar("LAYERDEPENDS_%s" % collection)
+                if deps:
+                    layerDependencyId = add_dependency(layerDependencyId, index, deps, True)
+
+                deps = d.getVar("LAYERRECOMMENDS_%s" % collection)
+                if deps:
+                    layerDependencyId = add_dependency(layerDependencyId, index, deps, False)
+
+        # Need to load recipes here (requires cooker access)
+        recipeId = 0
+        ## TODO: NOT IMPLEMENTED
+        # The code following this is an example of what needs to be
+        # implemented.  However, it does not work as-is.
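+        # (The 'if False' guard below keeps this example permanently disabled.)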
+ if False and 'recipes' in load: + index.recipes = {} + + ret = self.ui_module.main(self.server_connection.connection, self.server_connection.events, config_params) + + all_versions = self._run_command('allProviders') + + all_versions_list = defaultdict(list, all_versions) + for pn in all_versions_list: + for ((pe, pv, pr), fpath) in all_versions_list[pn]: + realfn = bb.cache.virtualfn2realfn(fpath) + + filepath = os.path.dirname(realfn[0]) + filename = os.path.basename(realfn[0]) + + # This is all HORRIBLY slow, and likely unnecessary + #dscon = self._run_command('parseRecipeFile', fpath, False, []) + #connector = myDataStoreConnector(self, dscon.dsindex) + #recipe_data = bb.data.init() + #recipe_data.setVar('_remote_data', connector) + + #summary = recipe_data.getVar('SUMMARY') + #description = recipe_data.getVar('DESCRIPTION') + #section = recipe_data.getVar('SECTION') + #license = recipe_data.getVar('LICENSE') + #homepage = recipe_data.getVar('HOMEPAGE') + #bugtracker = recipe_data.getVar('BUGTRACKER') + #provides = recipe_data.getVar('PROVIDES') + + layer = bb.utils.get_file_layer(realfn[0], self.config_data) + + depBranchId = collection_layerbranch[layer] + + recipeId += 1 + recipe = layerindexlib.Recipe(index, None) + recipe.define_data(id=recipeId, + filename=filename, filepath=filepath, + pn=pn, pv=pv, + summary=pn, description=pn, section='?', + license='?', homepage='?', bugtracker='?', + provides='?', bbclassextend='?', inherits='?', + blacklisted='?', layerbranch=depBranchId) + + index = addElement("recipes", [recipe], index) + + # ("machines", layerindexlib.Machine) + machineId = 0 + if 'machines' in load: + index.machines = {} + + for layerBranchId in index.layerBranches: + # load_bblayers uses the description to cache the actual path... + machine_path = index.layerBranches[layerBranchId].layer.description + machine_path = os.path.join(machine_path, 'conf/machine') + if os.path.isdir(machine_path): + for (dirpath, _, filenames) in os.walk(machine_path): + # Ignore subdirs... + if not dirpath.endswith('conf/machine'): + continue + for fname in filenames: + if fname.endswith('.conf'): + machineId += 1 + machine = layerindexlib.Machine(index, None) + machine.define_data(id=machineId, name=fname[:-5], + description=fname[:-5], + layerbranch=index.layerBranches[layerBranchId]) + + index.add_element("machines", [machine]) + + # ("distros", layerindexlib.Distro) + distroId = 0 + if 'distros' in load: + index.distros = {} + + for layerBranchId in index.layerBranches: + # load_bblayers uses the description to cache the actual path... + distro_path = index.layerBranches[layerBranchId].layer.description + distro_path = os.path.join(distro_path, 'conf/distro') + if os.path.isdir(distro_path): + for (dirpath, _, filenames) in os.walk(distro_path): + # Ignore subdirs... + if not dirpath.endswith('conf/distro'): + continue + for fname in filenames: + if fname.endswith('.conf'): + distroId += 1 + distro = layerindexlib.Distro(index, None) + distro.define_data(id=distroId, name=fname[:-5], + description=fname[:-5], + layerbranch=index.layerBranches[layerBranchId]) + + index.add_element("distros", [distro]) + + return index diff --git a/bitbake/lib/layerindexlib/plugin.py b/bitbake/lib/layerindexlib/plugin.py new file mode 100644 index 0000000..92a2e97 --- /dev/null +++ b/bitbake/lib/layerindexlib/plugin.py @@ -0,0 +1,60 @@ +# Copyright (C) 2016-2018 Wind River Systems, Inc. 
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+# See the GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+# The file contains:
+#   LayerIndex exceptions
+#   Plugin base class
+#   Utility functions for working on layerindex data
+
+import argparse
+import logging
+import os
+import bb.msg
+
+logger = logging.getLogger('BitBake.layerindexlib.plugin')
+
+class LayerIndexPluginException(Exception):
+    """LayerIndex Generic Exception"""
+    def __init__(self, message):
+        self.msg = message
+        Exception.__init__(self, message)
+
+    def __str__(self):
+        return self.msg
+
+class LayerIndexPluginUrlError(LayerIndexPluginException):
+    """Exception raised when a plugin does not support a given URL type"""
+    def __init__(self, plugin, url):
+        msg = "%s does not support %s:" % (plugin, url)
+        self.plugin = plugin
+        self.url = url
+        LayerIndexPluginException.__init__(self, msg)
+
+class IndexPlugin():
+    def __init__(self):
+        self.type = None
+
+    def init(self, layerindex):
+        self.layerindex = layerindex
+
+    def plugin_type(self):
+        return self.type
+
+    def load_index(self, uri, load):
+        raise NotImplementedError('load_index is not implemented')
+
+    def store_index(self, uri, index):
+        raise NotImplementedError('store_index is not implemented')
+
diff --git a/bitbake/lib/layerindexlib/restapi.py b/bitbake/lib/layerindexlib/restapi.py
new file mode 100644
index 0000000..d08eb20
--- /dev/null
+++ b/bitbake/lib/layerindexlib/restapi.py
@@ -0,0 +1,398 @@
+# Copyright (C) 2016-2018 Wind River Systems, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+# See the GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+import logging
+import json
+import os
+from urllib.parse import unquote
+from urllib.parse import urlparse
+
+import bb
+
+import layerindexlib
+import layerindexlib.plugin
+
+logger = logging.getLogger('BitBake.layerindexlib.restapi')
+
+def plugin_init(plugins):
+    return RestApiPlugin()
+
+class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
+    def __init__(self):
+        self.type = "restapi"
+
+    def load_index(self, url, load):
+        """
+            Fetches layer information from a local or remote layer index.
+
+            The return value is a LayerIndexObj.
+
+            url is the url to the rest api of the layer index, such as:
+            http://layers.openembedded.org/layerindex/api/
+
+            Or a local file...
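+
+                file:///path/to/layerindex.json  (a hypothetical local path)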
+ """ + + up = urlparse(url) + + if up.scheme == 'file': + return self.load_index_file(up, url, load) + + if up.scheme == 'http' or up.scheme == 'https': + return self.load_index_web(up, url, load) + + raise layerindexlib.plugin.LayerIndexPluginUrlError(self.type, url) + + + def load_index_file(self, up, url, load): + """ + Fetches layer information from a local file or directory. + + The return value is a LayerIndexObj. + + ud is the parsed url to the local file or directory. + """ + if not os.path.exists(up.path): + raise FileNotFoundError(up.path) + + index = layerindexlib.LayerIndexObj() + + index.config = {} + index.config['TYPE'] = self.type + index.config['URL'] = url + + params = self.layerindex._parse_params(up.params) + + if 'desc' in params: + index.config['DESCRIPTION'] = unquote(params['desc']) + else: + index.config['DESCRIPTION'] = up.path + + if 'cache' in params: + index.config['CACHE'] = params['cache'] + + if 'branch' in params: + branches = params['branch'].split(',') + index.config['BRANCH'] = branches + else: + branches = ['*'] + + + def load_cache(path, index, branches=[]): + logger.debug(1, 'Loading json file %s' % path) + with open(path, 'rt', encoding='utf-8') as f: + pindex = json.load(f) + + # Filter the branches on loaded files... + newpBranch = [] + for branch in branches: + if branch != '*': + if 'branches' in pindex: + for br in pindex['branches']: + if br['name'] == branch: + newpBranch.append(br) + else: + if 'branches' in pindex: + for br in pindex['branches']: + newpBranch.append(br) + + if newpBranch: + index.add_raw_element('branches', layerindexlib.Branch, newpBranch) + else: + logger.debug(1, 'No matching branches (%s) in index file(s)' % branches) + # No matching branches.. return nothing... + return + + for (lName, lType) in [("layerItems", layerindexlib.LayerItem), + ("layerBranches", layerindexlib.LayerBranch), + ("layerDependencies", layerindexlib.LayerDependency), + ("recipes", layerindexlib.Recipe), + ("machines", layerindexlib.Machine), + ("distros", layerindexlib.Distro)]: + if lName in pindex: + index.add_raw_element(lName, lType, pindex[lName]) + + + if not os.path.isdir(up.path): + load_cache(up.path, index, branches) + return index + + logger.debug(1, 'Loading from dir %s...' % (up.path)) + for (dirpath, _, filenames) in os.walk(up.path): + for filename in filenames: + if not filename.endswith('.json'): + continue + fpath = os.path.join(dirpath, filename) + load_cache(fpath, index, branches) + + return index + + + def load_index_web(self, up, url, load): + """ + Fetches layer information from a remote layer index. + + The return value is a LayerIndexObj. + + ud is the parsed url to the rest api of the layer index, such as: + http://layers.openembedded.org/layerindex/api/ + """ + + def _get_json_response(apiurl=None, username=None, password=None, retry=True): + assert apiurl is not None + + logger.debug(1, "fetching %s" % apiurl) + + up = urlparse(apiurl) + + username=up.username + password=up.password + + # Strip username/password and params + if up.port: + up_stripped = up._replace(params="", netloc="%s:%s" % (up.hostname, up.port)) + else: + up_stripped = up._replace(params="", netloc=up.hostname) + + res = self.layerindex._fetch_url(up_stripped.geturl(), username=username, password=password) + + try: + parsed = json.loads(res.read().decode('utf-8')) + except ConnectionResetError: + if retry: + logger.debug(1, "%s: Connection reset by peer. Retrying..." 
% url)
+                    parsed = _get_json_response(apiurl=up_stripped.geturl(), username=username, password=password, retry=False)
+                    logger.debug(1, "%s: retry successful." % url)
+                else:
+                    raise layerindexlib.LayerIndexFetchError('%s: Connection reset by peer.  Is there a firewall blocking your connection?' % apiurl)
+
+            return parsed
+
+        index = layerindexlib.LayerIndexObj()
+
+        index.config = {}
+        index.config['TYPE'] = self.type
+        index.config['URL'] = url
+
+        params = self.layerindex._parse_params(up.params)
+
+        if 'desc' in params:
+            index.config['DESCRIPTION'] = unquote(params['desc'])
+        else:
+            index.config['DESCRIPTION'] = up.hostname
+
+        if 'cache' in params:
+            index.config['CACHE'] = params['cache']
+
+        if 'branch' in params:
+            branches = params['branch'].split(',')
+            index.config['BRANCH'] = branches
+        else:
+            branches = ['*']
+
+        try:
+            index.apilinks = _get_json_response(apiurl=url, username=up.username, password=up.password)
+        except Exception as e:
+            raise layerindexlib.LayerIndexFetchError(url, e)
+
+        # Local raw index set...
+        pindex = {}
+
+        # Load all the requested branches at the same time; a special
+        # branch of '*' means load all branches
+        filter = ""
+        if "*" not in branches:
+            filter = "?filter=name:%s" % "OR".join(branches)
+
+        logger.debug(1, "Loading %s from %s" % (branches, index.apilinks['branches']))
+
+        # The link won't include username/password, so pull it from the original url
+        pindex['branches'] = _get_json_response(index.apilinks['branches'] + filter,
+                                                username=up.username, password=up.password)
+        if not pindex['branches']:
+            logger.debug(1, "No valid branches (%s) found at url %s." % (branches, url))
+            return index
+        index.add_raw_element("branches", layerindexlib.Branch, pindex['branches'])
+
+        # Load all of the layerItems (these can not be easily filtered)
+        logger.debug(1, "Loading %s from %s" % ('layerItems', index.apilinks['layerItems']))
+
+        # The link won't include username/password, so pull it from the original url
+        pindex['layerItems'] = _get_json_response(index.apilinks['layerItems'],
+                                                  username=up.username, password=up.password)
+        if not pindex['layerItems']:
+            logger.debug(1, "No layers were found at url %s." % (url))
+            return index
+        index.add_raw_element("layerItems", layerindexlib.LayerItem, pindex['layerItems'])
+
+        # From this point on load the contents for each branch.  Otherwise we
+        # could run into a timeout.
+        for branch in index.branches:
+            filter = "?filter=branch__name:%s" % index.branches[branch].name
+
+            logger.debug(1, "Loading %s from %s" % ('layerBranches', index.apilinks['layerBranches']))
+
+            # The link won't include username/password, so pull it from the original url
+            pindex['layerBranches'] = _get_json_response(index.apilinks['layerBranches'] + filter,
+                                                         username=up.username, password=up.password)
+            if not pindex['layerBranches']:
+                logger.debug(1, "No valid layer branches (%s) found at url %s." % (branches or "*", url))
+                return index
+            index.add_raw_element("layerBranches", layerindexlib.LayerBranch, pindex['layerBranches'])
+
+            # Load the rest, they all have a similar format
+            # Note: the layer index has a few more items, we can add them if necessary
+            # in the future.
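+            # (Only the four object types listed in the loop below are fetched.)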
+ filter = "?filter=layerbranch__branch__name:%s" % index.branches[branch].name + for (lName, lType) in [("layerDependencies", layerindexlib.LayerDependency), + ("recipes", layerindexlib.Recipe), + ("machines", layerindexlib.Machine), + ("distros", layerindexlib.Distro)]: + if lName not in load: + continue + logger.debug(1, "Loading %s from %s" % (lName, index.apilinks[lName])) + + # The link won't include username/password, so pull it from the original url + pindex[lName] = _get_json_response(index.apilinks[lName] + filter, + username=up.username, password=up.password) + index.add_raw_element(lName, lType, pindex[lName]) + + return index + + def store_index(self, url, index): + """ + Store layer information into a local file/dir. + + The return value is a dictionary containing API, + layer, branch, dependency, recipe, machine, distro, information. + + ud is a parsed url to a directory or file. If the path is a + directory, we will split the files into one file per layer. + If the path is to a file (exists or not) the entire DB will be + dumped into that one file. + """ + + up = urlparse(url) + + if up.scheme != 'file': + raise layerindexlib.plugin.LayerIndexPluginUrlError(self.type, url) + + logger.debug(1, "Storing to %s..." % up.path) + + try: + layerbranches = index.layerBranches + except KeyError: + logger.error('No layerBranches to write.') + return + + + def filter_item(layerbranchid, objects): + filtered = [] + for obj in getattr(index, objects, None): + try: + if getattr(index, objects)[obj].layerbranch_id == layerbranchid: + filtered.append(getattr(index, objects)[obj]._data) + except AttributeError: + logger.debug(1, 'No obj.layerbranch_id: %s' % objects) + # No simple filter method, just include it... + try: + filtered.append(getattr(index, objects)[obj]._data) + except AttributeError: + logger.debug(1, 'No obj._data: %s %s' % (objects, type(obj))) + filtered.append(obj) + return filtered + + + # Write out to a single file. + # Filter out unnecessary items, then sort as we write for determinism + if not os.path.isdir(up.path): + pindex = {} + + pindex['branches'] = [] + pindex['layerItems'] = [] + pindex['layerBranches'] = [] + + for layerbranchid in layerbranches: + if layerbranches[layerbranchid].branch._data not in pindex['branches']: + pindex['branches'].append(layerbranches[layerbranchid].branch._data) + + if layerbranches[layerbranchid].layer._data not in pindex['layerItems']: + pindex['layerItems'].append(layerbranches[layerbranchid].layer._data) + + if layerbranches[layerbranchid]._data not in pindex['layerBranches']: + pindex['layerBranches'].append(layerbranches[layerbranchid]._data) + + for entry in index._index: + # Skip local items, apilinks and items already processed + if entry in index.config['local'] or \ + entry == 'apilinks' or \ + entry == 'branches' or \ + entry == 'layerBranches' or \ + entry == 'layerItems': + continue + if entry not in pindex: + pindex[entry] = [] + pindex[entry].extend(filter_item(layerbranchid, entry)) + + bb.debug(1, 'Writing index to %s' % up.path) + with open(up.path, 'wt') as f: + json.dump(layerindexlib.sort_entry(pindex), f, indent=4) + return + + + # Write out to a directory one file per layerBranch + # Prepare all layer related items, to create a minimal file. 
+ # We have to sort the entries as we write so they are deterministic + for layerbranchid in layerbranches: + pindex = {} + + for entry in index._index: + # Skip local items, apilinks and items already processed + if entry in index.config['local'] or \ + entry == 'apilinks' or \ + entry == 'branches' or \ + entry == 'layerBranches' or \ + entry == 'layerItems': + continue + pindex[entry] = filter_item(layerbranchid, entry) + + # Add the layer we're processing as the first one... + pindex['branches'] = [layerbranches[layerbranchid].branch._data] + pindex['layerItems'] = [layerbranches[layerbranchid].layer._data] + pindex['layerBranches'] = [layerbranches[layerbranchid]._data] + + # We also need to include the layerbranch for any dependencies... + for layerdep in pindex['layerDependencies']: + layerdependency = layerindexlib.LayerDependency(index, layerdep) + + layeritem = layerdependency.dependency + layerbranch = layerdependency.dependency_layerBranch + + # We need to avoid duplicates... + if layeritem._data not in pindex['layerItems']: + pindex['layerItems'].append(layeritem._data) + + if layerbranch._data not in pindex['layerBranches']: + pindex['layerBranches'].append(layerbranch._data) + + # apply mirroring adjustments here.... + + fname = index.config['DESCRIPTION'] + '__' + pindex['branches'][0]['name'] + '__' + pindex['layerItems'][0]['name'] + fname = fname.translate(str.maketrans('/ ', '__')) + fpath = os.path.join(up.path, fname) + + bb.debug(1, 'Writing index to %s' % fpath + '.json') + with open(fpath + '.json', 'wt') as f: + json.dump(layerindexlib.sort_entry(pindex), f, indent=4) diff --git a/bitbake/lib/layerindexlib/tests/__init__.py b/bitbake/lib/layerindexlib/tests/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/bitbake/lib/layerindexlib/tests/common.py b/bitbake/lib/layerindexlib/tests/common.py new file mode 100644 index 0000000..22a5458 --- /dev/null +++ b/bitbake/lib/layerindexlib/tests/common.py @@ -0,0 +1,43 @@ +# Copyright (C) 2017-2018 Wind River Systems, Inc. +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License version 2 as +# published by the Free Software Foundation. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. +# See the GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + +import unittest +import tempfile +import os +import bb + +import logging + +class LayersTest(unittest.TestCase): + + def setUp(self): + self.origdir = os.getcwd() + self.d = bb.data.init() + # At least one variable needs to be set + self.d.setVar('DL_DIR', os.getcwd()) + + if os.environ.get("BB_SKIP_NETTESTS") == "yes": + self.d.setVar('BB_NO_NETWORK', '1') + + self.tempdir = tempfile.mkdtemp() + self.logger = logging.getLogger("BitBake") + + def tearDown(self): + os.chdir(self.origdir) + if os.environ.get("BB_TMPDIR_NOCLEAN") == "yes": + print("Not cleaning up %s. Please remove manually." 
% self.tempdir) + else: + bb.utils.prunedir(self.tempdir) + diff --git a/bitbake/lib/layerindexlib/tests/cooker.py b/bitbake/lib/layerindexlib/tests/cooker.py new file mode 100644 index 0000000..fdbf091 --- /dev/null +++ b/bitbake/lib/layerindexlib/tests/cooker.py @@ -0,0 +1,123 @@ +# Copyright (C) 2018 Wind River Systems, Inc. +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License version 2 as +# published by the Free Software Foundation. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. +# See the GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + +import unittest +import tempfile +import os +import bb + +import layerindexlib +from layerindexlib.tests.common import LayersTest + +import logging + +class LayerIndexCookerTest(LayersTest): + + def setUp(self): + LayersTest.setUp(self) + + # Note this is NOT a comprehensive test of cooker, as we can't easily + # configure the test data. But we can emulate the basics of the layer.conf + # files, so that is what we will do. + + new_topdir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "testdata") + new_bbpath = os.path.join(new_topdir, "build") + + self.d.setVar('TOPDIR', new_topdir) + self.d.setVar('BBPATH', new_bbpath) + + self.d = bb.parse.handle("%s/conf/bblayers.conf" % new_bbpath, self.d, True) + for layer in self.d.getVar('BBLAYERS').split(): + self.d = bb.parse.handle("%s/conf/layer.conf" % layer, self.d, True) + + self.layerindex = layerindexlib.LayerIndex(self.d) + self.layerindex.load_layerindex('cooker://', load=['layerDependencies']) + + def test_layerindex_is_empty(self): + self.assertFalse(self.layerindex.is_empty(), msg="Layerindex is not empty!") + + def test_dependency_resolution(self): + # Verify depth first searching... + (dependencies, invalidnames) = self.layerindex.find_dependencies(names=['meta-python']) + + first = True + for deplayerbranch in dependencies: + layerBranch = dependencies[deplayerbranch][0] + layerDeps = dependencies[deplayerbranch][1:] + + if not first: + continue + + first = False + + # Top of the deps should be openembedded-core, since everything depends on it. + self.assertEqual(layerBranch.layer.name, "openembedded-core", msg='Top dependency not openembedded-core') + + # meta-python should cause an openembedded-core dependency, if not assert! + for dep in layerDeps: + if dep.layer.name == 'meta-python': + break + else: + self.assertTrue(False, msg='meta-python was not found') + + # Only check the first element... + break + else: + if first: + # Empty list, this is bad. + self.assertTrue(False, msg='Empty list of dependencies') + + # Last dep should be the requested item + layerBranch = dependencies[deplayerbranch][0] + self.assertEqual(layerBranch.layer.name, "meta-python", msg='Last dependency not meta-python') + + def test_find_collection(self): + def _check(collection, expected): + self.logger.debug(1, "Looking for collection %s..." 
% collection) + result = self.layerindex.find_collection(collection) + if expected: + self.assertIsNotNone(result, msg="Did not find %s when it shouldn't be there" % collection) + else: + self.assertIsNone(result, msg="Found %s when it should be there" % collection) + + tests = [ ('core', True), + ('openembedded-core', False), + ('networking-layer', True), + ('meta-python', True), + ('openembedded-layer', True), + ('notpresent', False) ] + + for collection,result in tests: + _check(collection, result) + + def test_find_layerbranch(self): + def _check(name, expected): + self.logger.debug(1, "Looking for layerbranch %s..." % name) + result = self.layerindex.find_layerbranch(name) + if expected: + self.assertIsNotNone(result, msg="Did not find %s when it shouldn't be there" % collection) + else: + self.assertIsNone(result, msg="Found %s when it should be there" % collection) + + tests = [ ('openembedded-core', True), + ('core', False), + ('networking-layer', True), + ('meta-python', True), + ('openembedded-layer', True), + ('notpresent', False) ] + + for collection,result in tests: + _check(collection, result) + diff --git a/bitbake/lib/layerindexlib/tests/layerindexobj.py b/bitbake/lib/layerindexlib/tests/layerindexobj.py new file mode 100644 index 0000000..e2fbb95 --- /dev/null +++ b/bitbake/lib/layerindexlib/tests/layerindexobj.py @@ -0,0 +1,226 @@ +# Copyright (C) 2017-2018 Wind River Systems, Inc. +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License version 2 as +# published by the Free Software Foundation. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. +# See the GNU General Public License for more details. 
+# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + +import unittest +import tempfile +import os +import bb + +from layerindexlib.tests.common import LayersTest + +import logging + +class LayerIndexObjectsTest(LayersTest): + def setUp(self): + from layerindexlib import LayerIndexObj, Branch, LayerItem, LayerBranch, LayerDependency, Recipe, Machine, Distro + + LayersTest.setUp(self) + + self.index = LayerIndexObj() + + branchId = 0 + layerItemId = 0 + layerBranchId = 0 + layerDependencyId = 0 + recipeId = 0 + machineId = 0 + distroId = 0 + + self.index.branches = {} + self.index.layerItems = {} + self.index.layerBranches = {} + self.index.layerDependencies = {} + self.index.recipes = {} + self.index.machines = {} + self.index.distros = {} + + branchId += 1 + self.index.branches[branchId] = Branch(self.index) + self.index.branches[branchId].define_data(branchId, + 'test_branch', 'bb_test_branch') + self.index.branches[branchId].lockData() + + layerItemId +=1 + self.index.layerItems[layerItemId] = LayerItem(self.index) + self.index.layerItems[layerItemId].define_data(layerItemId, + 'test_layerItem', vcs_url='git://git_test_url/test_layerItem') + self.index.layerItems[layerItemId].lockData() + + layerBranchId +=1 + self.index.layerBranches[layerBranchId] = LayerBranch(self.index) + self.index.layerBranches[layerBranchId].define_data(layerBranchId, + 'test_collection', '99', layerItemId, + branchId) + + recipeId += 1 + self.index.recipes[recipeId] = Recipe(self.index) + self.index.recipes[recipeId].define_data(recipeId, 'test_git.bb', + 'recipes-test', 'test', 'git', + layerBranchId) + + machineId += 1 + self.index.machines[machineId] = Machine(self.index) + self.index.machines[machineId].define_data(machineId, + 'test_machine', 'test_machine', + layerBranchId) + + distroId += 1 + self.index.distros[distroId] = Distro(self.index) + self.index.distros[distroId].define_data(distroId, + 'test_distro', 'test_distro', + layerBranchId) + + layerItemId +=1 + self.index.layerItems[layerItemId] = LayerItem(self.index) + self.index.layerItems[layerItemId].define_data(layerItemId, 'test_layerItem 2', + vcs_url='git://git_test_url/test_layerItem') + + layerBranchId +=1 + self.index.layerBranches[layerBranchId] = LayerBranch(self.index) + self.index.layerBranches[layerBranchId].define_data(layerBranchId, + 'test_collection_2', '72', layerItemId, + branchId, actual_branch='some_other_branch') + + layerDependencyId += 1 + self.index.layerDependencies[layerDependencyId] = LayerDependency(self.index) + self.index.layerDependencies[layerDependencyId].define_data(layerDependencyId, + layerBranchId, 1) + + layerDependencyId += 1 + self.index.layerDependencies[layerDependencyId] = LayerDependency(self.index) + self.index.layerDependencies[layerDependencyId].define_data(layerDependencyId, + layerBranchId, 1, required=False) + + def test_branch(self): + branch = self.index.branches[1] + self.assertEqual(branch.id, 1) + self.assertEqual(branch.name, 'test_branch') + self.assertEqual(branch.short_description, 'test_branch') + self.assertEqual(branch.bitbake_branch, 'bb_test_branch') + + def test_layerItem(self): + layerItem = self.index.layerItems[1] + self.assertEqual(layerItem.id, 1) + self.assertEqual(layerItem.name, 'test_layerItem') + self.assertEqual(layerItem.summary, 'test_layerItem') + self.assertEqual(layerItem.description, 'test_layerItem') + 
self.assertEqual(layerItem.vcs_url, 'git://git_test_url/test_layerItem') + self.assertEqual(layerItem.vcs_web_url, None) + self.assertIsNone(layerItem.vcs_web_tree_base_url) + self.assertIsNone(layerItem.vcs_web_file_base_url) + self.assertIsNotNone(layerItem.updated) + + layerItem = self.index.layerItems[2] + self.assertEqual(layerItem.id, 2) + self.assertEqual(layerItem.name, 'test_layerItem 2') + self.assertEqual(layerItem.summary, 'test_layerItem 2') + self.assertEqual(layerItem.description, 'test_layerItem 2') + self.assertEqual(layerItem.vcs_url, 'git://git_test_url/test_layerItem') + self.assertIsNone(layerItem.vcs_web_url) + self.assertIsNone(layerItem.vcs_web_tree_base_url) + self.assertIsNone(layerItem.vcs_web_file_base_url) + self.assertIsNotNone(layerItem.updated) + + def test_layerBranch(self): + layerBranch = self.index.layerBranches[1] + self.assertEqual(layerBranch.id, 1) + self.assertEqual(layerBranch.collection, 'test_collection') + self.assertEqual(layerBranch.version, '99') + self.assertEqual(layerBranch.vcs_subdir, '') + self.assertEqual(layerBranch.actual_branch, 'test_branch') + self.assertIsNotNone(layerBranch.updated) + self.assertEqual(layerBranch.layer_id, 1) + self.assertEqual(layerBranch.branch_id, 1) + self.assertEqual(layerBranch.layer, self.index.layerItems[1]) + self.assertEqual(layerBranch.branch, self.index.branches[1]) + + layerBranch = self.index.layerBranches[2] + self.assertEqual(layerBranch.id, 2) + self.assertEqual(layerBranch.collection, 'test_collection_2') + self.assertEqual(layerBranch.version, '72') + self.assertEqual(layerBranch.vcs_subdir, '') + self.assertEqual(layerBranch.actual_branch, 'some_other_branch') + self.assertIsNotNone(layerBranch.updated) + self.assertEqual(layerBranch.layer_id, 2) + self.assertEqual(layerBranch.branch_id, 1) + self.assertEqual(layerBranch.layer, self.index.layerItems[2]) + self.assertEqual(layerBranch.branch, self.index.branches[1]) + + def test_layerDependency(self): + layerDependency = self.index.layerDependencies[1] + self.assertEqual(layerDependency.id, 1) + self.assertEqual(layerDependency.layerbranch_id, 2) + self.assertEqual(layerDependency.layerbranch, self.index.layerBranches[2]) + self.assertEqual(layerDependency.layer_id, 2) + self.assertEqual(layerDependency.layer, self.index.layerItems[2]) + self.assertTrue(layerDependency.required) + self.assertEqual(layerDependency.dependency_id, 1) + self.assertEqual(layerDependency.dependency, self.index.layerItems[1]) + self.assertEqual(layerDependency.dependency_layerBranch, self.index.layerBranches[1]) + + layerDependency = self.index.layerDependencies[2] + self.assertEqual(layerDependency.id, 2) + self.assertEqual(layerDependency.layerbranch_id, 2) + self.assertEqual(layerDependency.layerbranch, self.index.layerBranches[2]) + self.assertEqual(layerDependency.layer_id, 2) + self.assertEqual(layerDependency.layer, self.index.layerItems[2]) + self.assertFalse(layerDependency.required) + self.assertEqual(layerDependency.dependency_id, 1) + self.assertEqual(layerDependency.dependency, self.index.layerItems[1]) + self.assertEqual(layerDependency.dependency_layerBranch, self.index.layerBranches[1]) + + def test_recipe(self): + recipe = self.index.recipes[1] + self.assertEqual(recipe.id, 1) + self.assertEqual(recipe.layerbranch_id, 1) + self.assertEqual(recipe.layerbranch, self.index.layerBranches[1]) + self.assertEqual(recipe.layer_id, 1) + self.assertEqual(recipe.layer, self.index.layerItems[1]) + self.assertEqual(recipe.filename, 'test_git.bb') + 
self.assertEqual(recipe.filepath, 'recipes-test') + self.assertEqual(recipe.fullpath, 'recipes-test/test_git.bb') + self.assertEqual(recipe.summary, "") + self.assertEqual(recipe.description, "") + self.assertEqual(recipe.section, "") + self.assertEqual(recipe.pn, 'test') + self.assertEqual(recipe.pv, 'git') + self.assertEqual(recipe.license, "") + self.assertEqual(recipe.homepage, "") + self.assertEqual(recipe.bugtracker, "") + self.assertEqual(recipe.provides, "") + self.assertIsNotNone(recipe.updated) + self.assertEqual(recipe.inherits, "") + + def test_machine(self): + machine = self.index.machines[1] + self.assertEqual(machine.id, 1) + self.assertEqual(machine.layerbranch_id, 1) + self.assertEqual(machine.layerbranch, self.index.layerBranches[1]) + self.assertEqual(machine.layer_id, 1) + self.assertEqual(machine.layer, self.index.layerItems[1]) + self.assertEqual(machine.name, 'test_machine') + self.assertEqual(machine.description, 'test_machine') + self.assertIsNotNone(machine.updated) + + def test_distro(self): + distro = self.index.distros[1] + self.assertEqual(distro.id, 1) + self.assertEqual(distro.layerbranch_id, 1) + self.assertEqual(distro.layerbranch, self.index.layerBranches[1]) + self.assertEqual(distro.layer_id, 1) + self.assertEqual(distro.layer, self.index.layerItems[1]) + self.assertEqual(distro.name, 'test_distro') + self.assertEqual(distro.description, 'test_distro') + self.assertIsNotNone(distro.updated) diff --git a/bitbake/lib/layerindexlib/tests/restapi.py b/bitbake/lib/layerindexlib/tests/restapi.py new file mode 100644 index 0000000..5876695 --- /dev/null +++ b/bitbake/lib/layerindexlib/tests/restapi.py @@ -0,0 +1,184 @@ +# Copyright (C) 2017-2018 Wind River Systems, Inc. +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License version 2 as +# published by the Free Software Foundation. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. +# See the GNU General Public License for more details. 
+# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + +import unittest +import tempfile +import os +import bb + +import layerindexlib +from layerindexlib.tests.common import LayersTest + +import logging + +def skipIfNoNetwork(): + if os.environ.get("BB_SKIP_NETTESTS") == "yes": + return unittest.skip("Network tests being skipped") + return lambda f: f + +class LayerIndexWebRestApiTest(LayersTest): + + @skipIfNoNetwork() + def setUp(self): + self.assertFalse(os.environ.get("BB_SKIP_NETTESTS") == "yes", msg="BB_SKIP_NETTESTS set, but we tried to test anyway") + LayersTest.setUp(self) + self.layerindex = layerindexlib.LayerIndex(self.d) + self.layerindex.load_layerindex('http://layers.openembedded.org/layerindex/api/;branch=sumo', load=['layerDependencies']) + + @skipIfNoNetwork() + def test_layerindex_is_empty(self): + self.assertFalse(self.layerindex.is_empty(), msg="Layerindex is empty") + + @skipIfNoNetwork() + def test_layerindex_store_file(self): + self.layerindex.store_layerindex('file://%s/file.json' % self.tempdir, self.layerindex.indexes[0]) + + self.assertTrue(os.path.isfile('%s/file.json' % self.tempdir), msg="Temporary file was not created by store_layerindex") + + reload = layerindexlib.LayerIndex(self.d) + reload.load_layerindex('file://%s/file.json' % self.tempdir) + + self.assertFalse(reload.is_empty(), msg="Layerindex is empty") + + # Calculate layerItems in original index that should NOT be in reload + layerItemNames = [] + for itemId in self.layerindex.indexes[0].layerItems: + layerItemNames.append(self.layerindex.indexes[0].layerItems[itemId].name) + + for layerBranchId in self.layerindex.indexes[0].layerBranches: + layerItemNames.remove(self.layerindex.indexes[0].layerBranches[layerBranchId].layer.name) + + for itemId in reload.indexes[0].layerItems: + self.assertFalse(reload.indexes[0].layerItems[itemId].name in layerItemNames, msg="Item reloaded when it shouldn't have been") + + # Compare the original to what we wrote... + for type in self.layerindex.indexes[0]._index: + if type == 'apilinks' or \ + type == 'layerItems' or \ + type in self.layerindex.indexes[0].config['local']: + continue + for id in getattr(self.layerindex.indexes[0], type): + self.logger.debug(1, "type %s" % (type)) + + self.assertTrue(id in getattr(reload.indexes[0], type), msg="Id number not in reloaded index") + + self.logger.debug(1, "%s ? %s" % (getattr(self.layerindex.indexes[0], type)[id], getattr(reload.indexes[0], type)[id])) + + self.assertEqual(getattr(self.layerindex.indexes[0], type)[id], getattr(reload.indexes[0], type)[id], msg="Reloaded contents different") + + @skipIfNoNetwork() + def test_layerindex_store_split(self): + self.layerindex.store_layerindex('file://%s' % self.tempdir, self.layerindex.indexes[0]) + + reload = layerindexlib.LayerIndex(self.d) + reload.load_layerindex('file://%s' % self.tempdir) + + self.assertFalse(reload.is_empty(), msg="Layer index is empty") + + for type in self.layerindex.indexes[0]._index: + if type == 'apilinks' or \ + type == 'layerItems' or \ + type in self.layerindex.indexes[0].config['local']: + continue + for id in getattr(self.layerindex.indexes[0] ,type): + self.logger.debug(1, "type %s" % (type)) + + self.assertTrue(id in getattr(reload.indexes[0], type), msg="Id number missing from reloaded data") + + self.logger.debug(1, "%s ? 
%s" % (getattr(self.layerindex.indexes[0] ,type)[id], getattr(reload.indexes[0], type)[id])) + + self.assertEqual(getattr(self.layerindex.indexes[0] ,type)[id], getattr(reload.indexes[0], type)[id], msg="reloaded data does not match original") + + @skipIfNoNetwork() + def test_dependency_resolution(self): + # Verify depth first searching... + (dependencies, invalidnames) = self.layerindex.find_dependencies(names=['meta-python']) + + first = True + for deplayerbranch in dependencies: + layerBranch = dependencies[deplayerbranch][0] + layerDeps = dependencies[deplayerbranch][1:] + + if not first: + continue + + first = False + + # Top of the deps should be openembedded-core, since everything depends on it. + self.assertEqual(layerBranch.layer.name, "openembedded-core", msg='OpenEmbedded-Core is no the first dependency') + + # meta-python should cause an openembedded-core dependency, if not assert! + for dep in layerDeps: + if dep.layer.name == 'meta-python': + break + else: + self.logger.debug(1, "meta-python was not found") + self.assetTrue(False) + + # Only check the first element... + break + else: + # Empty list, this is bad. + self.logger.debug(1, "Empty list of dependencies") + self.assertIsNotNone(first, msg="Empty list of dependencies") + + # Last dep should be the requested item + layerBranch = dependencies[deplayerbranch][0] + self.assertEqual(layerBranch.layer.name, "meta-python", msg="Last dependency not meta-python") + + @skipIfNoNetwork() + def test_find_collection(self): + def _check(collection, expected): + self.logger.debug(1, "Looking for collection %s..." % collection) + result = self.layerindex.find_collection(collection) + if expected: + self.assertIsNotNone(result, msg="Did not find %s when it should be there" % collection) + else: + self.assertIsNone(result, msg="Found %s when it shouldn't be there" % collection) + + tests = [ ('core', True), + ('openembedded-core', False), + ('networking-layer', True), + ('meta-python', True), + ('openembedded-layer', True), + ('notpresent', False) ] + + for collection,result in tests: + _check(collection, result) + + @skipIfNoNetwork() + def test_find_layerbranch(self): + def _check(name, expected): + self.logger.debug(1, "Looking for layerbranch %s..." % name) + + for index in self.layerindex.indexes: + for layerbranchid in index.layerBranches: + self.logger.debug(1, "Present: %s" % index.layerBranches[layerbranchid].layer.name) + result = self.layerindex.find_layerbranch(name) + if expected: + self.assertIsNotNone(result, msg="Did not find %s when it should be there" % collection) + else: + self.assertIsNone(result, msg="Found %s when it shouldn't be there" % collection) + + tests = [ ('openembedded-core', True), + ('core', False), + ('meta-networking', True), + ('meta-python', True), + ('meta-oe', True), + ('notpresent', False) ] + + for collection,result in tests: + _check(collection, result) + diff --git a/bitbake/lib/layerindexlib/tests/testdata/README b/bitbake/lib/layerindexlib/tests/testdata/README new file mode 100644 index 0000000..36ab40b --- /dev/null +++ b/bitbake/lib/layerindexlib/tests/testdata/README @@ -0,0 +1,11 @@ +This test data is used to verify the 'cooker' module of the layerindex. + +The module consists of a faux project bblayers.conf with four layers defined. 
+ +layer1 - openembedded-core +layer2 - networking-layer +layer3 - meta-python +layer4 - openembedded-layer (meta-oe) + +Since we do not have a fully populated cooker, we use this to test the +basic index generation, and not any deep recipe based contents. diff --git a/bitbake/lib/layerindexlib/tests/testdata/build/conf/bblayers.conf b/bitbake/lib/layerindexlib/tests/testdata/build/conf/bblayers.conf new file mode 100644 index 0000000..40429b2 --- /dev/null +++ b/bitbake/lib/layerindexlib/tests/testdata/build/conf/bblayers.conf @@ -0,0 +1,15 @@ +LAYERSERIES_CORENAMES = "sumo" + +# LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf +# changes incompatibly +LCONF_VERSION = "7" + +BBPATH = "${TOPDIR}" +BBFILES ?= "" + +BBLAYERS ?= " \ + ${TOPDIR}/layer1 \ + ${TOPDIR}/layer2 \ + ${TOPDIR}/layer3 \ + ${TOPDIR}/layer4 \ + " diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf new file mode 100644 index 0000000..966d531 --- /dev/null +++ b/bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf @@ -0,0 +1,17 @@ +# We have a conf and classes directory, add to BBPATH +BBPATH .= ":${LAYERDIR}" +# We have recipes-* directories, add to BBFILES +BBFILES += "${LAYERDIR}/recipes-*/*/*.bb" + +BBFILE_COLLECTIONS += "core" +BBFILE_PATTERN_core = "^${LAYERDIR}/" +BBFILE_PRIORITY_core = "5" + +LAYERSERIES_CORENAMES = "sumo" + +# This should only be incremented on significant changes that will +# cause compatibility issues with other layers +LAYERVERSION_core = "11" +LAYERSERIES_COMPAT_core = "sumo" + +BBLAYERS_LAYERINDEX_NAME_core = "openembedded-core" diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf new file mode 100644 index 0000000..7569d1c --- /dev/null +++ b/bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf @@ -0,0 +1,20 @@ +# We have a conf and classes directory, add to BBPATH +BBPATH .= ":${LAYERDIR}" + +# We have a packages directory, add to BBFILES +BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \ + ${LAYERDIR}/recipes-*/*/*.bbappend" + +BBFILE_COLLECTIONS += "networking-layer" +BBFILE_PATTERN_networking-layer := "^${LAYERDIR}/" +BBFILE_PRIORITY_networking-layer = "5" + +# This should only be incremented on significant changes that will +# cause compatibility issues with other layers +LAYERVERSION_networking-layer = "1" + +LAYERDEPENDS_networking-layer = "core" +LAYERDEPENDS_networking-layer += "openembedded-layer" +LAYERDEPENDS_networking-layer += "meta-python" + +LAYERSERIES_COMPAT_networking-layer = "sumo" diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf new file mode 100644 index 0000000..7089071 --- /dev/null +++ b/bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf @@ -0,0 +1,19 @@ +# We might have a conf and classes directory, append to BBPATH +BBPATH .= ":${LAYERDIR}" + +# We have recipes directories, add to BBFILES +BBFILES += "${LAYERDIR}/recipes*/*/*.bb ${LAYERDIR}/recipes*/*/*.bbappend" + +BBFILE_COLLECTIONS += "meta-python" +BBFILE_PATTERN_meta-python := "^${LAYERDIR}/" +BBFILE_PRIORITY_meta-python = "7" + +# This should only be incremented on significant changes that will +# cause compatibility issues with other layers +LAYERVERSION_meta-python = "1" + +LAYERDEPENDS_meta-python = "core openembedded-layer" + +LAYERSERIES_COMPAT_meta-python = "sumo" + +LICENSE_PATH += 
"${LAYERDIR}/licenses" diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf new file mode 100644 index 0000000..6649ee0 --- /dev/null +++ b/bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf @@ -0,0 +1,22 @@ +# We have a conf and classes directory, append to BBPATH +BBPATH .= ":${LAYERDIR}" + +# We have a recipes directory, add to BBFILES +BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend" + +BBFILE_COLLECTIONS += "openembedded-layer" +BBFILE_PATTERN_openembedded-layer := "^${LAYERDIR}/" + +# Define the priority for recipes (.bb files) from this layer, +# choosing carefully how this layer interacts with all of the +# other layers. + +BBFILE_PRIORITY_openembedded-layer = "6" + +# This should only be incremented on significant changes that will +# cause compatibility issues with other layers +LAYERVERSION_openembedded-layer = "1" + +LAYERDEPENDS_openembedded-layer = "core" + +LAYERSERIES_COMPAT_openembedded-layer = "sumo" diff --git a/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py b/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py index 4c17562..9490635 100644 --- a/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py +++ b/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py @@ -27,8 +27,9 @@ import shutil import time from django.db import transaction from django.db.models import Q -from bldcontrol.models import BuildEnvironment, BRLayer, BRVariable, BRTarget, BRBitbake -from orm.models import CustomImageRecipe, Layer, Layer_Version, ProjectLayer, ToasterSetting +from bldcontrol.models import BuildEnvironment, BuildRequest, BRLayer, BRVariable, BRTarget, BRBitbake, Build +from orm.models import CustomImageRecipe, Layer, Layer_Version, Project, ProjectLayer, ToasterSetting +from orm.models import signal_runbuilds import subprocess from toastermain import settings @@ -38,6 +39,8 @@ from bldcontrol.bbcontroller import BuildEnvironmentController, ShellCmdExceptio import logging logger = logging.getLogger("toaster") +install_dir = os.environ.get('TOASTER_DIR') + from pprint import pprint, pformat class LocalhostBEController(BuildEnvironmentController): @@ -87,10 +90,10 @@ class LocalhostBEController(BuildEnvironmentController): #logger.debug("localhostbecontroller: using HEAD checkout in %s" % local_checkout_path) return local_checkout_path - - def setCloneStatus(self,bitbake,status,total,current): + def setCloneStatus(self,bitbake,status,total,current,repo_name): bitbake.req.build.repos_cloned=current bitbake.req.build.repos_to_clone=total + bitbake.req.build.progress_item=repo_name bitbake.req.build.save() def setLayers(self, bitbake, layers, targets): @@ -100,6 +103,7 @@ class LocalhostBEController(BuildEnvironmentController): layerlist = [] nongitlayerlist = [] + layer_index = 0 git_env = os.environ.copy() # (note: add custom environment settings here) @@ -113,7 +117,7 @@ class LocalhostBEController(BuildEnvironmentController): if bitbake.giturl and bitbake.commit: gitrepos[(bitbake.giturl, bitbake.commit)] = [] gitrepos[(bitbake.giturl, bitbake.commit)].append( - ("bitbake", bitbake.dirpath)) + ("bitbake", bitbake.dirpath, 0)) for layer in layers: # We don't need to git clone the layer for the CustomImageRecipe @@ -124,12 +128,13 @@ class LocalhostBEController(BuildEnvironmentController): # If we have local layers then we don't need clone them # For local layers giturl will be empty if not layer.giturl: - 
nongitlayerlist.append(layer.layer_version.layer.local_source_dir) + nongitlayerlist.append( "%03d:%s" % (layer_index,layer.local_source_dir) ) continue if not (layer.giturl, layer.commit) in gitrepos: gitrepos[(layer.giturl, layer.commit)] = [] - gitrepos[(layer.giturl, layer.commit)].append( (layer.name, layer.dirpath) ) + gitrepos[(layer.giturl, layer.commit)].append( (layer.name,layer.dirpath,layer_index) ) + layer_index += 1 logger.debug("localhostbecontroller, our git repos are %s" % pformat(gitrepos)) @@ -159,9 +164,9 @@ class LocalhostBEController(BuildEnvironmentController): # 3. checkout the repositories clone_count=0 clone_total=len(gitrepos.keys()) - self.setCloneStatus(bitbake,'Started',clone_total,clone_count) + self.setCloneStatus(bitbake,'Started',clone_total,clone_count,'') for giturl, commit in gitrepos.keys(): - self.setCloneStatus(bitbake,'progress',clone_total,clone_count) + self.setCloneStatus(bitbake,'progress',clone_total,clone_count,gitrepos[(giturl, commit)][0][0]) clone_count += 1 localdirname = os.path.join(self.be.sourcedir, self.getGitCloneDirectory(giturl, commit)) @@ -172,8 +177,11 @@ class LocalhostBEController(BuildEnvironmentController): try: localremotes = self._shellcmd("git remote -v", localdirname,env=git_env) - if not giturl in localremotes and commit != 'HEAD': - raise BuildSetupException("Existing git repository at %s, but with different remotes ('%s', expected '%s'). Toaster will not continue out of fear of damaging something." % (localdirname, ", ".join(localremotes.split("\n")), giturl)) + # NOTE: this nice-to-have check breaks when using git remaping to get past firewall + # Re-enable later with .gitconfig remapping checks + #if not giturl in localremotes and commit != 'HEAD': + # raise BuildSetupException("Existing git repository at %s, but with different remotes ('%s', expected '%s'). Toaster will not continue out of fear of damaging something." % (localdirname, ", ".join(localremotes.split("\n")), giturl)) + pass except ShellCmdException: # our localdirname might not be a git repository #- that's fine @@ -192,7 +200,7 @@ class LocalhostBEController(BuildEnvironmentController): if commit != "HEAD": logger.debug("localhostbecontroller: checking out commit %s to %s " % (commit, localdirname)) ref = commit if re.match('^[a-fA-F0-9]+$', commit) else 'origin/%s' % commit - self._shellcmd('git fetch --all && git reset --hard "%s"' % ref, localdirname,env=git_env) + self._shellcmd('git fetch && git reset --hard "%s"' % ref, localdirname,env=git_env) # take the localdirname as poky dir if we can find the oe-init-build-env if self.pokydirname is None and os.path.exists(os.path.join(localdirname, "oe-init-build-env")): @@ -205,21 +213,33 @@ class LocalhostBEController(BuildEnvironmentController): self._shellcmd("git clone -b \"%s\" \"%s\" \"%s\" " % (bitbake.commit, bitbake.giturl, os.path.join(self.pokydirname, 'bitbake')),env=git_env) # verify our repositories - for name, dirpath in gitrepos[(giturl, commit)]: + for name, dirpath, index in gitrepos[(giturl, commit)]: localdirpath = os.path.join(localdirname, dirpath) - logger.debug("localhostbecontroller: localdirpath expected '%s'" % localdirpath) + logger.debug("localhostbecontroller: localdirpath expects '%s'" % localdirpath) if not os.path.exists(localdirpath): raise BuildSetupException("Cannot find layer git path '%s' in checked out repository '%s:%s'. Aborting." 
% (localdirpath, giturl, commit)) if name != "bitbake": - layerlist.append(localdirpath.rstrip("/")) + layerlist.append("%03d:%s" % (index,localdirpath.rstrip("/"))) - self.setCloneStatus(bitbake,'complete',clone_total,clone_count) + self.setCloneStatus(bitbake,'complete',clone_total,clone_count,'') logger.debug("localhostbecontroller: current layer list %s " % pformat(layerlist)) - if self.pokydirname is None and os.path.exists(os.path.join(self.be.sourcedir, "oe-init-build-env")): - logger.debug("localhostbecontroller: selected poky dir name %s" % self.be.sourcedir) - self.pokydirname = self.be.sourcedir + # Resolve self.pokydirname if not resolved yet, consider the scenario + # where all layers are local, that's the else clause + if self.pokydirname is None: + if os.path.exists(os.path.join(self.be.sourcedir, "oe-init-build-env")): + logger.debug("localhostbecontroller: selected poky dir name %s" % self.be.sourcedir) + self.pokydirname = self.be.sourcedir + else: + # Alternatively, scan local layers for relative "oe-init-build-env" location + for layer in layers: + if os.path.exists(os.path.join(layer.layer_version.layer.local_source_dir,"..","oe-init-build-env")): + logger.debug("localhostbecontroller, setting pokydirname to %s" % (layer.layer_version.layer.local_source_dir)) + self.pokydirname = os.path.join(layer.layer_version.layer.local_source_dir,"..") + break + else: + logger.error("pokydirname is not set, you will run into trouble!") # 5. create custom layer and add custom recipes to it for target in targets: @@ -232,7 +252,7 @@ class LocalhostBEController(BuildEnvironmentController): customrecipe, layers) if os.path.isdir(custom_layer_path): - layerlist.append(custom_layer_path) + layerlist.append("%03d:%s" % (layer_index,custom_layer_path)) except CustomImageRecipe.DoesNotExist: continue # not a custom recipe, skip @@ -240,7 +260,11 @@ class LocalhostBEController(BuildEnvironmentController): layerlist.extend(nongitlayerlist) logger.debug("\n\nset layers gives this list %s" % pformat(layerlist)) self.islayerset = True - return layerlist + + # restore the order of layer list for bblayers.conf + layerlist.sort() + sorted_layerlist = [l[4:] for l in layerlist] + return sorted_layerlist def setup_custom_image_recipe(self, customrecipe, layers): """ Set up toaster-custom-images layer and recipe files """ @@ -310,41 +334,141 @@ class LocalhostBEController(BuildEnvironmentController): def triggerBuild(self, bitbake, layers, variables, targets, brbe): layers = self.setLayers(bitbake, layers, targets) + is_merged_attr = bitbake.req.project.merged_attr + + git_env = os.environ.copy() + # (note: add custom environment settings here) + try: + # insure that the project init/build uses the selected bitbake, and not Toaster's + del git_env['TEMPLATECONF'] + del git_env['BBBASEDIR'] + del git_env['BUILDDIR'] + except KeyError: + pass # init build environment from the clone - builddir = '%s-toaster-%d' % (self.be.builddir, bitbake.req.project.id) + if bitbake.req.project.builddir: + builddir = bitbake.req.project.builddir + else: + builddir = '%s-toaster-%d' % (self.be.builddir, bitbake.req.project.id) oe_init = os.path.join(self.pokydirname, 'oe-init-build-env') # init build environment try: custom_script = ToasterSetting.objects.get(name="CUSTOM_BUILD_INIT_SCRIPT").value custom_script = custom_script.replace("%BUILDDIR%" ,builddir) - self._shellcmd("bash -c 'source %s'" % (custom_script)) + self._shellcmd("bash -c 'source %s'" % (custom_script),env=git_env) except ToasterSetting.DoesNotExist: 
self._shellcmd("bash -c 'source %s %s'" % (oe_init, builddir), - self.be.sourcedir) + self.be.sourcedir,env=git_env) # update bblayers.conf - bblconfpath = os.path.join(builddir, "conf/toaster-bblayers.conf") - with open(bblconfpath, 'w') as bblayers: - bblayers.write('# line added by toaster build control\n' - 'BBLAYERS = "%s"' % ' '.join(layers)) - - # write configuration file - confpath = os.path.join(builddir, 'conf/toaster.conf') - with open(confpath, 'w') as conf: - for var in variables: - conf.write('%s="%s"\n' % (var.name, var.value)) - conf.write('INHERIT+="toaster buildhistory"') + if not is_merged_attr: + bblconfpath = os.path.join(builddir, "conf/toaster-bblayers.conf") + with open(bblconfpath, 'w') as bblayers: + bblayers.write('# line added by toaster build control\n' + 'BBLAYERS = "%s"' % ' '.join(layers)) + + # write configuration file + confpath = os.path.join(builddir, 'conf/toaster.conf') + with open(confpath, 'w') as conf: + for var in variables: + conf.write('%s="%s"\n' % (var.name, var.value)) + conf.write('INHERIT+="toaster buildhistory"') + else: + # Append the Toaster-specific values directly to the bblayers.conf + bblconfpath = os.path.join(builddir, "conf/bblayers.conf") + bblconfpath_save = os.path.join(builddir, "conf/bblayers.conf.save") + shutil.copyfile(bblconfpath, bblconfpath_save) + with open(bblconfpath) as bblayers: + content = bblayers.readlines() + do_write = True + was_toaster = False + with open(bblconfpath,'w') as bblayers: + for line in content: + #line = line.strip('\n') + if 'TOASTER_CONFIG_PROLOG' in line: + do_write = False + was_toaster = True + elif 'TOASTER_CONFIG_EPILOG' in line: + do_write = True + elif do_write: + bblayers.write(line) + if not was_toaster: + bblayers.write('\n') + bblayers.write('#=== TOASTER_CONFIG_PROLOG ===\n') + bblayers.write('BBLAYERS = "\\\n') + for layer in layers: + bblayers.write(' %s \\\n' % layer) + bblayers.write(' "\n') + bblayers.write('#=== TOASTER_CONFIG_EPILOG ===\n') + # Append the Toaster-specific values directly to the local.conf + bbconfpath = os.path.join(builddir, "conf/local.conf") + bbconfpath_save = os.path.join(builddir, "conf/local.conf.save") + shutil.copyfile(bbconfpath, bbconfpath_save) + with open(bbconfpath) as f: + content = f.readlines() + do_write = True + was_toaster = False + with open(bbconfpath,'w') as conf: + for line in content: + #line = line.strip('\n') + if 'TOASTER_CONFIG_PROLOG' in line: + do_write = False + was_toaster = True + elif 'TOASTER_CONFIG_EPILOG' in line: + do_write = True + elif do_write: + conf.write(line) + if not was_toaster: + conf.write('\n') + conf.write('#=== TOASTER_CONFIG_PROLOG ===\n') + for var in variables: + if (not var.name.startswith("INTERNAL_")) and (not var.name == "BBLAYERS"): + conf.write('%s="%s"\n' % (var.name, var.value)) + conf.write('#=== TOASTER_CONFIG_EPILOG ===\n') + + # If 'target' is just the project preparation target, then we are done + for target in targets: + if "_PROJECT_PREPARE_" == target.target: + logger.debug('localhostbecontroller: Project has been prepared. 
Done.') + # Update the Build Request and release the build environment + bitbake.req.state = BuildRequest.REQ_COMPLETED + bitbake.req.save() + self.be.lock = BuildEnvironment.LOCK_FREE + self.be.save() + # Close the project build and progress bar + bitbake.req.build.outcome = Build.SUCCEEDED + bitbake.req.build.save() + # Update the project status + bitbake.req.project.set_variable(Project.PROJECT_SPECIFIC_STATUS,Project.PROJECT_SPECIFIC_CLONING_SUCCESS) + signal_runbuilds() + return # clean the Toaster to build environment env_clean = 'unset BBPATH;' # clean BBPATH for <= YP-2.4.0 - # run bitbake server from the clone + # run bitbake server from the clone if available + # otherwise pick it from the PATH bitbake = os.path.join(self.pokydirname, 'bitbake', 'bin', 'bitbake') + if not os.path.exists(bitbake): + logger.info("Bitbake not available under %s, will try to use it from PATH" % + self.pokydirname) + for path in os.environ["PATH"].split(os.pathsep): + if os.path.exists(os.path.join(path, 'bitbake')): + bitbake = os.path.join(path, 'bitbake') + break + else: + logger.error("Looks like Bitbake is not available, please fix your environment") + toasterlayers = os.path.join(builddir,"conf/toaster-bblayers.conf") - self._shellcmd('%s bash -c \"source %s %s; BITBAKE_UI="knotty" %s --read %s --read %s ' - '--server-only -B 0.0.0.0:0\"' % (env_clean, oe_init, - builddir, bitbake, confpath, toasterlayers), self.be.sourcedir) + if not is_merged_attr: + self._shellcmd('%s bash -c \"source %s %s; BITBAKE_UI="knotty" %s --read %s --read %s ' + '--server-only -B 0.0.0.0:0\"' % (env_clean, oe_init, + builddir, bitbake, confpath, toasterlayers), self.be.sourcedir) + else: + self._shellcmd('%s bash -c \"source %s %s; BITBAKE_UI="knotty" %s ' + '--server-only -B 0.0.0.0:0\"' % (env_clean, oe_init, + builddir, bitbake), self.be.sourcedir) # read port number from bitbake.lock self.be.bbport = -1 @@ -390,12 +514,20 @@ class LocalhostBEController(BuildEnvironmentController): log = os.path.join(builddir, 'toaster_ui.log') local_bitbake = os.path.join(os.path.dirname(os.getenv('BBBASEDIR')), 'bitbake') - self._shellcmd(['%s bash -c \"(TOASTER_BRBE="%s" BBSERVER="0.0.0.0:%s" ' + if not is_merged_attr: + self._shellcmd(['%s bash -c \"(TOASTER_BRBE="%s" BBSERVER="0.0.0.0:%s" ' '%s %s -u toasterui --read %s --read %s --token="" >>%s 2>&1;' 'BITBAKE_UI="knotty" BBSERVER=0.0.0.0:%s %s -m)&\"' \ % (env_clean, brbe, self.be.bbport, local_bitbake, bbtargets, confpath, toasterlayers, log, self.be.bbport, bitbake,)], builddir, nowait=True) + else: + self._shellcmd(['%s bash -c \"(TOASTER_BRBE="%s" BBSERVER="0.0.0.0:%s" ' + '%s %s -u toasterui --token="" >>%s 2>&1;' + 'BITBAKE_UI="knotty" BBSERVER=0.0.0.0:%s %s -m)&\"' \ + % (env_clean, brbe, self.be.bbport, local_bitbake, bbtargets, log, + self.be.bbport, bitbake,)], + builddir, nowait=True) logger.debug('localhostbecontroller: Build launched, exiting. 
' 'Follow build logs at %s' % log) diff --git a/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py b/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py index 582114a..14298d9 100644 --- a/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py +++ b/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py @@ -74,8 +74,9 @@ class Command(BaseCommand): print("Loading default settings") call_command("loaddata", "settings") template_conf = os.environ.get("TEMPLATECONF", "") + custom_xml_only = os.environ.get("CUSTOM_XML_ONLY") - if ToasterSetting.objects.filter(name='CUSTOM_XML_ONLY').count() > 0: + if ToasterSetting.objects.filter(name='CUSTOM_XML_ONLY').count() > 0 or (not custom_xml_only == None): # only use the custom settings pass elif "poky" in template_conf: @@ -107,7 +108,10 @@ class Command(BaseCommand): action="ignore", message="^.*No fixture named.*$") print("Importing custom settings if present") - call_command("loaddata", "custom") + try: + call_command("loaddata", "custom") + except: + print("NOTE: optional fixture 'custom' not found") # we run lsupdates after config update print("\nFetching information from the layer index, " diff --git a/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py b/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py index 791e53e..6a55dd4 100644 --- a/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py +++ b/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py @@ -49,7 +49,7 @@ class Command(BaseCommand): # we could not find a BEC; postpone the BR br.state = BuildRequest.REQ_QUEUED br.save() - logger.debug("runbuilds: No build env") + logger.debug("runbuilds: No build env (%s)" % e) return logger.info("runbuilds: starting build %s, environment %s" % diff --git a/bitbake/lib/toaster/orm/fixtures/oe-core.xml b/bitbake/lib/toaster/orm/fixtures/oe-core.xml index 00720c3..fec93ab 100644 --- a/bitbake/lib/toaster/orm/fixtures/oe-core.xml +++ b/bitbake/lib/toaster/orm/fixtures/oe-core.xml @@ -8,9 +8,9 @@ - rocko + sumo git://git.openembedded.org/bitbake - 1.36 + 1.38 HEAD @@ -22,14 +22,19 @@ git://git.openembedded.org/bitbake master + + thud + git://git.openembedded.org/bitbake + 1.40 + - rocko - Openembedded Rocko + sumo + Openembedded Sumo 1 - rocko - Toaster will run your builds using the tip of the <a href=\"http://cgit.openembedded.org/openembedded-core/log/?h=rocko\">OpenEmbedded Rocko</a> branch. + sumo + Toaster will run your builds using the tip of the <a href=\"http://cgit.openembedded.org/openembedded-core/log/?h=sumo\">OpenEmbedded Sumo</a> branch. local @@ -45,6 +50,13 @@ master Toaster will run your builds using the tip of the <a href=\"http://cgit.openembedded.org/openembedded-core/log/\">OpenEmbedded master</a> branch. + + thud + Openembedded Rocko + 1 + thud + Toaster will run your builds using the tip of the <a href=\"http://cgit.openembedded.org/openembedded-core/log/?h=thud\">OpenEmbedded Thud</a> branch. 
+ @@ -59,6 +71,10 @@ 3 openembedded-core + + 4 + openembedded-core + diff --git a/bitbake/lib/toaster/orm/fixtures/poky.xml b/bitbake/lib/toaster/orm/fixtures/poky.xml index 2f39d77..fb9a771 100644 --- a/bitbake/lib/toaster/orm/fixtures/poky.xml +++ b/bitbake/lib/toaster/orm/fixtures/poky.xml @@ -8,9 +8,9 @@ - rocko + sumo git://git.yoctoproject.org/poky - rocko + sumo bitbake @@ -25,15 +25,21 @@ master bitbake + + thud + git://git.yoctoproject.org/poky + thud + bitbake + - rocko - Yocto Project 2.4 "Rocko" + sumo + Yocto Project 2.5 "Sumo" 1 - rocko - Toaster will run your builds using the tip of the <a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=rocko">Yocto Project Rocko branch</a>. + sumo + Toaster will run your builds using the tip of the <a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=sumo">Yocto Project Sumo branch</a>. local @@ -49,6 +55,13 @@ master Toaster will run your builds using the tip of the <a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/">Yocto Project Master branch</a>. + + rocko + Yocto Project 2.6 "Thud" + 1 + thud + Toaster will run your builds using the tip of the <a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=thud">Yocto Project Thud branch</a>. + @@ -87,6 +100,18 @@ 3 meta-yocto-bsp + + 4 + openembedded-core + + + 4 + meta-poky + + + 4 + meta-yocto-bsp + + + + + + + + + + + + + + {% if DEBUG %} + + {% endif %} + + {% block extraheadcontent %} + {% endblock %} + + + + + {% csrf_token %} + + + + + + +
+ {% block pagecontent %} + {% endblock %} +
+ + diff --git a/bitbake/lib/toaster/toastergui/templates/baseprojectspecificpage.html b/bitbake/lib/toaster/toastergui/templates/baseprojectspecificpage.html new file mode 100644 index 0000000..d0b588d --- /dev/null +++ b/bitbake/lib/toaster/toastergui/templates/baseprojectspecificpage.html @@ -0,0 +1,48 @@ +{% extends "base_specific.html" %} + +{% load projecttags %} +{% load humanize %} + +{% block title %} {{title}} - {{project.name}} - Toaster {% endblock %} + +{% block pagecontent %} + +
+ {% include "project_specific_topbar.html" %} + + + +
+ +
+
+ {% block projectinfomain %}{% endblock %} +
+ +
+{% endblock %} + diff --git a/bitbake/lib/toaster/toastergui/templates/customise_btn.html b/bitbake/lib/toaster/toastergui/templates/customise_btn.html index 38c258a..ce46240 100644 --- a/bitbake/lib/toaster/toastergui/templates/customise_btn.html +++ b/bitbake/lib/toaster/toastergui/templates/customise_btn.html @@ -5,7 +5,11 @@ > Customise - + {% endif %}

Project release

@@ -157,5 +159,6 @@
+ {% endblock %} diff --git a/bitbake/lib/toaster/toastergui/templates/project_specific.html b/bitbake/lib/toaster/toastergui/templates/project_specific.html new file mode 100644 index 0000000..f625d18 --- /dev/null +++ b/bitbake/lib/toaster/toastergui/templates/project_specific.html @@ -0,0 +1,162 @@ +{% extends "baseprojectspecificpage.html" %} + +{% load projecttags %} +{% load humanize %} +{% load static %} + +{% block title %} Configuration - {{project.name}} - Toaster {% endblock %} +{% block projectinfomain %} + + + + + + + + +