From: Anton Mikanovich <amikan@ilbers.de>
To: isar-users@googlegroups.com
Cc: Anton Mikanovich <amikan@ilbers.de>
Subject: [PATCH v8 06/20] bitbake: update to Bitbake 2.0.5
Date: Wed, 25 Jan 2023 21:23:23 +0200
Message-ID: <20230125192337.86869-7-amikan@ilbers.de>
In-Reply-To: <20230125192337.86869-1-amikan@ilbers.de>
Upstream commit c90d57497b9bcd237c3ae810ee8edb5b0d2d575a.
Signed-off-by: Anton Mikanovich <amikan@ilbers.de>
---
bitbake/README | 21 +-
bitbake/bin/bitbake | 4 +-
bitbake/bin/bitbake-diffsigs | 5 +-
bitbake/bin/bitbake-getvar | 50 ++
bitbake/bin/bitbake-hashclient | 2 +
bitbake/bin/bitbake-hashserv | 2 +
bitbake/bin/bitbake-layers | 2 +
bitbake/bin/bitbake-prserv | 8 +-
bitbake/bin/bitbake-selftest | 3 +
bitbake/bin/bitbake-server | 3 +-
bitbake/bin/bitbake-worker | 23 +-
bitbake/bin/git-make-shallow | 4 +
bitbake/bin/toaster | 6 +-
bitbake/bin/toaster-eventreplay | 2 +
bitbake/conf/bitbake.conf | 6 +-
bitbake/contrib/hashserv/Dockerfile | 6 +-
bitbake/contrib/prserv/Dockerfile | 62 ++
bitbake/contrib/vim/plugin/newbbappend.vim | 2 +-
bitbake/contrib/vim/syntax/bitbake.vim | 11 +-
bitbake/doc/Makefile | 2 +-
bitbake/doc/README | 6 +-
.../bitbake-user-manual-execution.rst | 112 ++--
.../bitbake-user-manual-fetching.rst | 229 +++++--
.../bitbake-user-manual-hello.rst | 60 +-
.../bitbake-user-manual-intro.rst | 64 +-
.../bitbake-user-manual-metadata.rst | 493 +++++++--------
.../bitbake-user-manual-ref-variables.rst | 583 ++++++++++--------
bitbake/doc/releases.rst | 84 ++-
bitbake/lib/bb/COW.py | 2 +
bitbake/lib/bb/__init__.py | 19 +-
bitbake/lib/bb/asyncrpc/__init__.py | 33 +
bitbake/lib/bb/asyncrpc/client.py | 178 ++++++
bitbake/lib/bb/asyncrpc/serv.py | 288 +++++++++
bitbake/lib/bb/build.py | 104 ++--
bitbake/lib/bb/cache.py | 35 +-
bitbake/lib/bb/checksum.py | 22 +-
bitbake/lib/bb/codeparser.py | 22 +-
bitbake/lib/bb/command.py | 36 +-
bitbake/lib/bb/compress/_pipecompress.py | 196 ++++++
bitbake/lib/bb/compress/lz4.py | 19 +
bitbake/lib/bb/compress/zstd.py | 30 +
bitbake/lib/bb/cooker.py | 298 +++++----
bitbake/lib/bb/cookerdata.py | 47 +-
bitbake/lib/bb/daemonize.py | 44 +-
bitbake/lib/bb/data.py | 59 +-
bitbake/lib/bb/data_smart.py | 213 ++++---
bitbake/lib/bb/event.py | 16 +-
bitbake/lib/bb/exceptions.py | 2 +
bitbake/lib/bb/fetch2/README | 57 ++
bitbake/lib/bb/fetch2/__init__.py | 138 +++--
bitbake/lib/bb/fetch2/crate.py | 136 ++++
bitbake/lib/bb/fetch2/git.py | 54 +-
bitbake/lib/bb/fetch2/gitsm.py | 25 +-
bitbake/lib/bb/fetch2/npm.py | 52 +-
bitbake/lib/bb/fetch2/npmsw.py | 33 +-
bitbake/lib/bb/fetch2/osc.py | 20 +-
bitbake/lib/bb/fetch2/s3.py | 41 +-
bitbake/lib/bb/fetch2/ssh.py | 49 +-
bitbake/lib/bb/fetch2/svn.py | 10 +-
bitbake/lib/bb/fetch2/wget.py | 161 +++--
bitbake/lib/bb/main.py | 9 +-
bitbake/lib/bb/monitordisk.py | 17 +-
bitbake/lib/bb/msg.py | 32 +-
bitbake/lib/bb/parse/__init__.py | 2 +
bitbake/lib/bb/parse/ast.py | 14 +-
bitbake/lib/bb/parse/parse_py/BBHandler.py | 7 +-
bitbake/lib/bb/parse/parse_py/ConfHandler.py | 16 +-
bitbake/lib/bb/persist_data.py | 54 +-
bitbake/lib/bb/process.py | 4 +-
bitbake/lib/bb/progress.py | 9 +-
bitbake/lib/bb/providers.py | 14 +-
bitbake/lib/bb/runqueue.py | 284 +++++----
bitbake/lib/bb/server/process.py | 47 +-
bitbake/lib/bb/server/xmlrpcserver.py | 1 +
bitbake/lib/bb/siggen.py | 186 ++++--
bitbake/lib/bb/taskdata.py | 14 +-
bitbake/lib/bb/tests/codeparser.py | 28 +-
bitbake/lib/bb/tests/compression.py | 100 +++
bitbake/lib/bb/tests/cooker.py | 2 +
bitbake/lib/bb/tests/data.py | 95 +--
.../debian/pool/main/m/minicom/index.html | 59 ++
bitbake/lib/bb/tests/fetch.py | 533 ++++++++++------
bitbake/lib/bb/tests/parse.py | 43 +-
.../bb/tests/runqueue-tests/conf/bitbake.conf | 2 +-
bitbake/lib/bb/tests/runqueue.py | 52 +-
bitbake/lib/bb/tests/utils.py | 20 +-
bitbake/lib/bb/tinfoil.py | 12 +-
bitbake/lib/bb/ui/buildinfohelper.py | 83 +--
bitbake/lib/bb/ui/knotty.py | 100 +--
bitbake/lib/bb/ui/taskexp.py | 5 +
bitbake/lib/bb/ui/uievent.py | 6 +-
bitbake/lib/bb/ui/uihelper.py | 4 +-
bitbake/lib/bb/utils.py | 161 ++++-
bitbake/lib/bblayers/__init__.py | 2 +
bitbake/lib/bblayers/action.py | 4 +-
bitbake/lib/bblayers/common.py | 2 +
bitbake/lib/bblayers/layerindex.py | 18 +-
bitbake/lib/bblayers/query.py | 10 +-
bitbake/lib/codegen.py | 6 +
bitbake/lib/hashserv/__init__.py | 66 +-
bitbake/lib/hashserv/client.py | 152 +----
bitbake/lib/hashserv/server.py | 549 ++++++++---------
bitbake/lib/hashserv/tests.py | 161 ++++-
bitbake/lib/layerindexlib/__init__.py | 9 +-
bitbake/lib/layerindexlib/cooker.py | 2 +-
bitbake/lib/layerindexlib/restapi.py | 4 +-
bitbake/lib/layerindexlib/tests/restapi.py | 2 +-
bitbake/lib/ply/yacc.py | 7 +-
bitbake/lib/prserv/__init__.py | 2 +
bitbake/lib/prserv/client.py | 50 ++
bitbake/lib/prserv/db.py | 67 +-
bitbake/lib/prserv/serv.py | 542 ++++++----------
bitbake/lib/pyinotify.py | 44 +-
.../bldcontrol/localhostbecontroller.py | 4 +-
.../management/commands/runbuilds.py | 83 ++-
.../migrations/0008_models_bigautofield.py | 48 ++
bitbake/lib/toaster/manage.py | 2 +
.../lib/toaster/orm/fixtures/gen_fixtures.py | 445 +++++++++++++
bitbake/lib/toaster/orm/fixtures/oe-core.xml | 48 +-
bitbake/lib/toaster/orm/fixtures/poky.xml | 118 ++--
bitbake/lib/toaster/orm/fixtures/settings.xml | 2 +-
.../orm/management/commands/lsupdates.py | 14 +-
.../migrations/0020_models_bigautofield.py | 173 ++++++
bitbake/lib/toaster/orm/models.py | 5 +-
.../toaster/toastergui/templates/base.html | 2 +-
.../toastergui/templates/configvars.html | 2 +-
.../toaster/toastergui/templates/landing.html | 6 +-
.../templates/landing_not_managed.html | 34 -
.../toastergui/templates/layerdetails.html | 2 +-
.../templates/package_detail_base.html | 2 +-
.../toaster/toastergui/templates/project.html | 2 +-
.../templates/project_specific.html | 2 +-
.../toastergui/templates/projectconf.html | 34 +-
bitbake/lib/toaster/toastergui/views.py | 22 +-
.../management/commands/buildimport.py | 2 +-
bitbake/lib/toaster/toastermain/settings.py | 3 +
bitbake/toaster-requirements.txt | 2 +-
137 files changed, 6117 insertions(+), 2948 deletions(-)
create mode 100755 bitbake/bin/bitbake-getvar
create mode 100644 bitbake/contrib/prserv/Dockerfile
create mode 100644 bitbake/lib/bb/asyncrpc/__init__.py
create mode 100644 bitbake/lib/bb/asyncrpc/client.py
create mode 100644 bitbake/lib/bb/asyncrpc/serv.py
create mode 100644 bitbake/lib/bb/compress/_pipecompress.py
create mode 100644 bitbake/lib/bb/compress/lz4.py
create mode 100644 bitbake/lib/bb/compress/zstd.py
create mode 100644 bitbake/lib/bb/fetch2/README
create mode 100644 bitbake/lib/bb/fetch2/crate.py
create mode 100644 bitbake/lib/bb/tests/compression.py
create mode 100644 bitbake/lib/bb/tests/fetch-testdata/debian/pool/main/m/minicom/index.html
create mode 100644 bitbake/lib/prserv/client.py
create mode 100644 bitbake/lib/toaster/bldcontrol/migrations/0008_models_bigautofield.py
create mode 100755 bitbake/lib/toaster/orm/fixtures/gen_fixtures.py
create mode 100644 bitbake/lib/toaster/orm/migrations/0020_models_bigautofield.py
delete mode 100644 bitbake/lib/toaster/toastergui/templates/landing_not_managed.html
diff --git a/bitbake/README b/bitbake/README
index 96e6007..7b4a2b7 100644
--- a/bitbake/README
+++ b/bitbake/README
@@ -7,7 +7,7 @@ One of BitBake's main users, OpenEmbedded, takes this core and builds embedded
stacks using a task-oriented approach.
For information about Bitbake, see the OpenEmbedded website:
- http://www.openembedded.org/
+ https://www.openembedded.org/
Bitbake plain documentation can be found under the doc directory or its integrated
html version at the Yocto Project website:
@@ -17,7 +17,7 @@ Contributing
------------
Please refer to
-http://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
+https://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
for guidelines on how to submit patches, just note that the latter documentation is intended
for OpenEmbedded (and its core) not bitbake patches (bitbake-devel@lists.openembedded.org)
but in general main guidelines apply. Once the commit(s) have been created, the way to send
@@ -26,10 +26,23 @@ branch, type:
git send-email -M -1 --to bitbake-devel@lists.openembedded.org
+If you're sending a patch related to the BitBake manual, make sure you copy
+the Yocto Project documentation mailing list:
+
+ git send-email -M -1 --to bitbake-devel@lists.openembedded.org --cc docs@lists.yoctoproject.org
+
Mailing list:
- http://lists.openembedded.org/mailman/listinfo/bitbake-devel
+ https://lists.openembedded.org/g/bitbake-devel
Source code:
- http://git.openembedded.org/bitbake/
+ https://git.openembedded.org/bitbake/
+
+Testing:
+
+Bitbake has a testsuite located in lib/bb/tests/ which aims to prevent regressions.
+You can run this with "bitbake-selftest". In particular the fetcher is well covered since
+it has so many corner cases. The datastore has many tests too. Testing with the testsuite is
+recommended before submitting patches, particularly to the fetcher and datastore. We also
+appreciate new test cases and may require them for more obscure issues.
diff --git a/bitbake/bin/bitbake b/bitbake/bin/bitbake
index 7127550..042c918 100755
--- a/bitbake/bin/bitbake
+++ b/bitbake/bin/bitbake
@@ -12,6 +12,8 @@
import os
import sys
+import warnings
+warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),
'lib'))
@@ -26,7 +28,7 @@ from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
-__version__ = "1.50.0"
+__version__ = "2.0.0"
if __name__ == "__main__":
if __version__ != bb.__version__:
diff --git a/bitbake/bin/bitbake-diffsigs b/bitbake/bin/bitbake-diffsigs
index 19420a2..cf4cc70 100755
--- a/bitbake/bin/bitbake-diffsigs
+++ b/bitbake/bin/bitbake-diffsigs
@@ -11,6 +11,7 @@
import os
import sys
import warnings
+warnings.simplefilter("default")
import argparse
import logging
import pickle
@@ -59,7 +60,7 @@ def find_siginfo_task(bbhandler, pn, taskname, sig1=None, sig2=None):
if sig1 and sig2:
sigfiles = find_siginfo(bbhandler, pn, taskname, [sig1, sig2])
- if len(sigfiles) == 0:
+ if not sigfiles:
logger.error('No sigdata files found matching %s %s matching either %s or %s' % (pn, taskname, sig1, sig2))
sys.exit(1)
elif not sig1 in sigfiles:
@@ -85,7 +86,7 @@ def recursecb(key, hash1, hash2):
hashfiles = find_siginfo(tinfoil, key, None, hashes)
recout = []
- if len(hashfiles) == 0:
+ if not hashfiles:
recout.append("Unable to find matching sigdata for %s with hashes %s or %s" % (key, hash1, hash2))
elif not hash1 in hashfiles:
recout.append("Unable to find matching sigdata for %s with hash %s" % (key, hash1))
diff --git a/bitbake/bin/bitbake-getvar b/bitbake/bin/bitbake-getvar
new file mode 100755
index 0000000..5435a8d
--- /dev/null
+++ b/bitbake/bin/bitbake-getvar
@@ -0,0 +1,50 @@
+#! /usr/bin/env python3
+#
+# Copyright (C) 2021 Richard Purdie
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+import argparse
+import io
+import os
+import sys
+import warnings
+warnings.simplefilter("default")
+
+bindir = os.path.dirname(__file__)
+topdir = os.path.dirname(bindir)
+sys.path[0:0] = [os.path.join(topdir, 'lib')]
+
+import bb.tinfoil
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser(description="Bitbake Query Variable")
+ parser.add_argument("variable", help="variable name to query")
+ parser.add_argument("-r", "--recipe", help="Recipe name to query", default=None, required=False)
+ parser.add_argument('-u', '--unexpand', help='Do not expand the value (with --value)', action="store_true")
+ parser.add_argument('-f', '--flag', help='Specify a variable flag to query (with --value)', default=None)
+ parser.add_argument('--value', help='Only report the value, no history and no variable name', action="store_true")
+ args = parser.parse_args()
+
+ if args.unexpand and not args.value:
+ print("--unexpand only makes sense with --value")
+ sys.exit(1)
+
+ if args.flag and not args.value:
+ print("--flag only makes sense with --value")
+ sys.exit(1)
+
+ with bb.tinfoil.Tinfoil(tracking=True) as tinfoil:
+ if args.recipe:
+ tinfoil.prepare(quiet=2)
+ d = tinfoil.parse_recipe(args.recipe)
+ else:
+ tinfoil.prepare(quiet=2, config_only=True)
+ d = tinfoil.config_data
+ if args.flag:
+ print(str(d.getVarFlag(args.variable, args.flag, expand=(not args.unexpand))))
+ elif args.value:
+ print(str(d.getVar(args.variable, expand=(not args.unexpand))))
+ else:
+ bb.data.emit_var(args.variable, d=d, all=True)
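For context, the core of the new tool reduces to a few tinfoil calls; here is a
minimal sketch of the same query done by hand (illustrative only, mirroring the
script above; "MACHINE" is just an example variable):

    import bb.tinfoil

    # Query one variable from the global configuration, as
    # bitbake-getvar does when --recipe is not given.
    with bb.tinfoil.Tinfoil(tracking=True) as tinfoil:
        tinfoil.prepare(quiet=2, config_only=True)
        d = tinfoil.config_data
        print(d.getVar("MACHINE"))  # expanded value, like --value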
diff --git a/bitbake/bin/bitbake-hashclient b/bitbake/bin/bitbake-hashclient
index a892902..494f175 100755
--- a/bitbake/bin/bitbake-hashclient
+++ b/bitbake/bin/bitbake-hashclient
@@ -13,6 +13,8 @@ import pprint
import sys
import threading
import time
+import warnings
+warnings.simplefilter("default")
try:
import tqdm
diff --git a/bitbake/bin/bitbake-hashserv b/bitbake/bin/bitbake-hashserv
index 153f65a..00af76b 100755
--- a/bitbake/bin/bitbake-hashserv
+++ b/bitbake/bin/bitbake-hashserv
@@ -10,6 +10,8 @@ import sys
import logging
import argparse
import sqlite3
+import warnings
+warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
diff --git a/bitbake/bin/bitbake-layers b/bitbake/bin/bitbake-layers
index ff085d6..449434d 100755
--- a/bitbake/bin/bitbake-layers
+++ b/bitbake/bin/bitbake-layers
@@ -14,6 +14,8 @@ import logging
import os
import sys
import argparse
+import warnings
+warnings.simplefilter("default")
bindir = os.path.dirname(__file__)
topdir = os.path.dirname(bindir)
diff --git a/bitbake/bin/bitbake-prserv b/bitbake/bin/bitbake-prserv
index 1e9b6cb..5be42f3 100755
--- a/bitbake/bin/bitbake-prserv
+++ b/bitbake/bin/bitbake-prserv
@@ -1,11 +1,15 @@
#!/usr/bin/env python3
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
import os
import sys,logging
import optparse
+import warnings
+warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)),'lib'))
@@ -36,12 +40,14 @@ def main():
dest="host", type="string", default=PRHOST_DEFAULT)
parser.add_option("--port", help="port number(default: 8585)", action="store",
dest="port", type="int", default=PRPORT_DEFAULT)
+ parser.add_option("-r", "--read-only", help="open database in read-only mode",
+ action="store_true")
options, args = parser.parse_args(sys.argv)
prserv.init_logger(os.path.abspath(options.logfile),options.loglevel)
if options.start:
- ret=prserv.serv.start_daemon(options.dbfile, options.host, options.port,os.path.abspath(options.logfile))
+ ret=prserv.serv.start_daemon(options.dbfile, options.host, options.port,os.path.abspath(options.logfile), options.read_only)
elif options.stop:
ret=prserv.serv.stop_daemon(options.host, options.port)
else:
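In effect the new flag is simply threaded through to the daemon start-up; the
equivalent direct call looks like this (argument order as in the hunk above,
file names and address are placeholders):

    import os
    import prserv.serv

    # Start a PR server that rejects writes, mirroring
    # "bitbake-prserv --start --read-only".
    ret = prserv.serv.start_daemon("prserv.sqlite3", "127.0.0.1", 8585,
                                   os.path.abspath("prserv.log"), True)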
diff --git a/bitbake/bin/bitbake-selftest b/bitbake/bin/bitbake-selftest
index 6c07374..f25f23b 100755
--- a/bitbake/bin/bitbake-selftest
+++ b/bitbake/bin/bitbake-selftest
@@ -7,6 +7,8 @@
import os
import sys, logging
+import warnings
+warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib'))
import unittest
@@ -29,6 +31,7 @@ tests = ["bb.tests.codeparser",
"bb.tests.runqueue",
"bb.tests.siggen",
"bb.tests.utils",
+ "bb.tests.compression",
"hashserv.tests",
"layerindexlib.tests.layerindexobj",
"layerindexlib.tests.restapi",
diff --git a/bitbake/bin/bitbake-server b/bitbake/bin/bitbake-server
index 65796be..f53f88b 100755
--- a/bitbake/bin/bitbake-server
+++ b/bitbake/bin/bitbake-server
@@ -8,6 +8,7 @@
import os
import sys
import warnings
+warnings.simplefilter("default")
import logging
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
@@ -30,8 +31,6 @@ timeout = float(sys.argv[7])
xmlrpcinterface = (sys.argv[8], int(sys.argv[9]))
if xmlrpcinterface[0] == "None":
xmlrpcinterface = (None, xmlrpcinterface[1])
-if timeout == "None":
- timeout = None
# Replace standard fds with our own
with open('/dev/null', 'r') as si:
diff --git a/bitbake/bin/bitbake-worker b/bitbake/bin/bitbake-worker
index 4318ce6..2f3e9f7 100755
--- a/bitbake/bin/bitbake-worker
+++ b/bitbake/bin/bitbake-worker
@@ -1,11 +1,14 @@
#!/usr/bin/env python3
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
import os
import sys
import warnings
+warnings.simplefilter("default")
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), 'lib'))
from bb import fetch2
import logging
@@ -151,6 +154,10 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
fakeenv = {}
umask = None
+ uid = os.getuid()
+ gid = os.getgid()
+
+
taskdep = workerdata["taskdeps"][fn]
if 'umask' in taskdep and taskname in taskdep['umask']:
umask = taskdep['umask'][taskname]
@@ -236,6 +243,7 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
the_data = databuilder.mcdata[mc]
the_data.setVar("BB_WORKERCONTEXT", "1")
the_data.setVar("BB_TASKDEPDATA", taskdepdata)
+ the_data.setVar('BB_CURRENTTASK', taskname.replace("do_", ""))
if cfg.limited_deps:
the_data.setVar("BB_LIMITEDDEPS", "1")
the_data.setVar("BUILDNAME", workerdata["buildname"])
@@ -255,6 +263,13 @@ def fork_off_task(cfg, data, databuilder, workerdata, fn, task, taskname, taskha
bb.utils.set_process_name("%s:%s" % (the_data.getVar("PN"), taskname.replace("do_", "")))
+ if not the_data.getVarFlag(taskname, 'network', False):
+ if bb.utils.is_local_uid(uid):
+ logger.debug("Attempting to disable network for %s" % taskname)
+ bb.utils.disable_network(uid, gid)
+ else:
+ logger.debug("Skipping disable network for %s since %s is not a local uid." % (taskname, uid))
+
# exported_vars() returns a generator which *cannot* be passed to os.environ.update()
# successfully. We also need to unset anything from the environment which shouldn't be there
exports = bb.data.exported_vars(the_data)
@@ -416,14 +431,18 @@ class BitbakeWorker(object):
if self.queue.startswith(b"<" + item + b">"):
index = self.queue.find(b"</" + item + b">")
while index != -1:
- func(self.queue[(len(item) + 2):index])
+ try:
+ func(self.queue[(len(item) + 2):index])
+ except pickle.UnpicklingError:
+ workerlog_write("Unable to unpickle data: %s\n" % ":".join("{:02x}".format(c) for c in self.queue))
+ raise
self.queue = self.queue[(index + len(item) + 3):]
index = self.queue.find(b"</" + item + b">")
def handle_cookercfg(self, data):
self.cookercfg = pickle.loads(data)
self.databuilder = bb.cookerdata.CookerDataBuilder(self.cookercfg, worker=True)
- self.databuilder.parseBaseConfiguration()
+ self.databuilder.parseBaseConfiguration(worker=True)
self.data = self.databuilder.data
def handle_extraconfigdata(self, data):
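The network-isolation hunk above boils down to one decision per task, sketched
here with the same bb.utils helpers the worker calls (everything else is
unchanged worker context):

    import os
    import bb.utils

    def maybe_disable_network(d, taskname):
        # Tasks opt in to network access via the 'network' varflag;
        # all other tasks run with networking disabled when the uid
        # is local (i.e. namespaces can be set up for it).
        if d.getVarFlag(taskname, 'network', False):
            return
        uid, gid = os.getuid(), os.getgid()
        if bb.utils.is_local_uid(uid):
            bb.utils.disable_network(uid, gid)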
diff --git a/bitbake/bin/git-make-shallow b/bitbake/bin/git-make-shallow
index 57069f7..d0532c5 100755
--- a/bitbake/bin/git-make-shallow
+++ b/bitbake/bin/git-make-shallow
@@ -1,5 +1,7 @@
#!/usr/bin/env python3
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -16,6 +18,8 @@ import itertools
import os
import subprocess
import sys
+import warnings
+warnings.simplefilter("default")
version = 1.0
diff --git a/bitbake/bin/toaster b/bitbake/bin/toaster
index 6b90ee1..558a819 100755
--- a/bitbake/bin/toaster
+++ b/bitbake/bin/toaster
@@ -33,7 +33,7 @@ databaseCheck()
$MANAGE migrate --noinput || retval=1
if [ $retval -eq 1 ]; then
- echo "Failed migrations, aborting system start" 1>&2
+ echo "Failed migrations, halting system start" 1>&2
return $retval
fi
# Make sure that checksettings can pick up any value for TEMPLATECONF
@@ -41,7 +41,7 @@ databaseCheck()
$MANAGE checksettings --traceback || retval=1
if [ $retval -eq 1 ]; then
- printf "\nError while checking settings; aborting\n"
+ printf "\nError while checking settings; exiting\n"
return $retval
fi
@@ -248,7 +248,7 @@ fi
# 3) the sqlite db if that is being used.
# 4) pid's we need to clean up on exit/shutdown
export TOASTER_DIR=$TOASTERDIR
-export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE TOASTER_DIR"
+export BB_ENV_PASSTHROUGH_ADDITIONS="$BB_ENV_PASSTHROUGH_ADDITIONS TOASTER_DIR"
# Determine the action. If specified by arguments, fine, if not, toggle it
if [ "$CMD" = "start" ] ; then
diff --git a/bitbake/bin/toaster-eventreplay b/bitbake/bin/toaster-eventreplay
index 8fa4ab7..404b61f 100755
--- a/bitbake/bin/toaster-eventreplay
+++ b/bitbake/bin/toaster-eventreplay
@@ -19,6 +19,8 @@ import sys
import json
import pickle
import codecs
+import warnings
+warnings.simplefilter("default")
from collections import namedtuple
diff --git a/bitbake/conf/bitbake.conf b/bitbake/conf/bitbake.conf
index b3d5fab..baf0a76 100644
--- a/bitbake/conf/bitbake.conf
+++ b/bitbake/conf/bitbake.conf
@@ -31,10 +31,10 @@ OVERRIDES = "local:${MACHINE}:${TARGET_OS}:${TARGET_ARCH}"
P = "${PN}-${PV}"
PERSISTENT_DIR = "${TMPDIR}/cache"
PF = "${PN}-${PV}-${PR}"
-PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
-PR = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[2] or 'r0'}"
+PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
+PR = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[2] or 'r0'}"
PROVIDES = ""
-PV = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
+PV = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
S = "${WORKDIR}/${P}"
SRC_URI = "file://${FILE}"
STAMP = "${TMPDIR}/stamps/${PF}"
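The ``vars_from_file`` calls split a recipe filename into name, version and
revision; roughly the following stand-alone equivalent (a hypothetical
re-implementation for illustration, not the bb.parse code):

    import os

    def vars_from_filename(path):
        # "something_1.2.3.bb" -> ("something", "1.2.3", None)
        base = os.path.splitext(os.path.basename(path))[0]
        parts = base.split("_")[:3]
        return tuple(parts + [None] * (3 - len(parts)))

    assert vars_from_filename("something_1.2.3.bb") == ("something", "1.2.3", None)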
diff --git a/bitbake/contrib/hashserv/Dockerfile b/bitbake/contrib/hashserv/Dockerfile
index d6fc728..74b4a3b 100644
--- a/bitbake/contrib/hashserv/Dockerfile
+++ b/bitbake/contrib/hashserv/Dockerfile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: MIT
#
# Copyright (c) 2021 Joshua Watt <JPEWhacker@gmail.com>
-#
+#
# Dockerfile to build a bitbake hash equivalence server container
#
# From the root of the bitbake repository, run:
@@ -15,5 +15,9 @@ RUN apk add --no-cache python3
COPY bin/bitbake-hashserv /opt/bbhashserv/bin/
COPY lib/hashserv /opt/bbhashserv/lib/hashserv/
+COPY lib/bb /opt/bbhashserv/lib/bb/
+COPY lib/codegen.py /opt/bbhashserv/lib/codegen.py
+COPY lib/ply /opt/bbhashserv/lib/ply/
+COPY lib/bs4 /opt/bbhashserv/lib/bs4/
ENTRYPOINT ["/opt/bbhashserv/bin/bitbake-hashserv"]
diff --git a/bitbake/contrib/prserv/Dockerfile b/bitbake/contrib/prserv/Dockerfile
new file mode 100644
index 0000000..9585fe3
--- /dev/null
+++ b/bitbake/contrib/prserv/Dockerfile
@@ -0,0 +1,62 @@
+# SPDX-License-Identifier: MIT
+#
+# Copyright (c) 2022 Daniel Gomez <daniel@qtec.com>
+#
+# Dockerfile to build a bitbake PR service container
+#
+# From the root of the bitbake repository, run:
+#
+# docker build -f contrib/prserv/Dockerfile . -t prserv
+#
+# Running examples:
+#
+# 1. PR Service in RW mode, port 18585:
+#
+# docker run --detach --tty \
+# --env PORT=18585 \
+# --publish 18585:18585 \
+# --volume $PWD:/var/lib/bbprserv \
+# prserv
+#
+# 2. PR Service in RO mode, default port (8585) and custom LOGFILE:
+#
+# docker run --detach --tty \
+# --env DBMODE="--read-only" \
+# --env LOGFILE=/var/lib/bbprserv/prservro.log \
+# --publish 8585:8585 \
+# --volume $PWD:/var/lib/bbprserv \
+# prserv
+#
+
+FROM alpine:3.14.4
+
+RUN apk add --no-cache python3
+
+COPY bin/bitbake-prserv /opt/bbprserv/bin/
+COPY lib/prserv /opt/bbprserv/lib/prserv/
+COPY lib/bb /opt/bbprserv/lib/bb/
+COPY lib/codegen.py /opt/bbprserv/lib/codegen.py
+COPY lib/ply /opt/bbprserv/lib/ply/
+COPY lib/bs4 /opt/bbprserv/lib/bs4/
+
+ENV PATH=$PATH:/opt/bbprserv/bin
+
+RUN mkdir -p /var/lib/bbprserv
+
+ENV DBFILE=/var/lib/bbprserv/prserv.sqlite3 \
+ LOGFILE=/var/lib/bbprserv/prserv.log \
+ LOGLEVEL=debug \
+ HOST=0.0.0.0 \
+ PORT=8585 \
+ DBMODE=""
+
+ENTRYPOINT [ "/bin/sh", "-c", \
+"bitbake-prserv \
+--file=$DBFILE \
+--log=$LOGFILE \
+--loglevel=$LOGLEVEL \
+--start \
+--host=$HOST \
+--port=$PORT \
+$DBMODE \
+&& tail -f $LOGFILE"]
diff --git a/bitbake/contrib/vim/plugin/newbbappend.vim b/bitbake/contrib/vim/plugin/newbbappend.vim
index e04174c..3f65f79 100644
--- a/bitbake/contrib/vim/plugin/newbbappend.vim
+++ b/bitbake/contrib/vim/plugin/newbbappend.vim
@@ -20,7 +20,7 @@ fun! NewBBAppendTemplate()
set nopaste
" New bbappend template
- 0 put ='FILESEXTRAPATHS_prepend := \"${THISDIR}/${PN}:\"'
+ 0 put ='FILESEXTRAPATHS:prepend := \"${THISDIR}/${PN}:\"'
2
if paste == 1
diff --git a/bitbake/contrib/vim/syntax/bitbake.vim b/bitbake/contrib/vim/syntax/bitbake.vim
index f964621..c5ea80f 100644
--- a/bitbake/contrib/vim/syntax/bitbake.vim
+++ b/bitbake/contrib/vim/syntax/bitbake.vim
@@ -51,9 +51,9 @@ syn region bbString matchgroup=bbQuote start=+'+ skip=+\\$+ end=+'+
syn match bbExport "^export" nextgroup=bbIdentifier skipwhite
syn keyword bbExportFlag export contained nextgroup=bbIdentifier skipwhite
syn match bbIdentifier "[a-zA-Z0-9\-_\.\/\+]\+" display contained
-syn match bbVarDeref "${[a-zA-Z0-9\-_\.\/\+]\+}" contained
+syn match bbVarDeref "${[a-zA-Z0-9\-_:\.\/\+]\+}" contained
syn match bbVarEq "\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)" contained nextgroup=bbVarValue
-syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+]\+\(_[${}a-zA-Z0-9\-_\.\/\+]\+\)\?\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbVarDeref nextgroup=bbVarEq
+syn match bbVarDef "^\(export\s*\)\?\([a-zA-Z0-9\-_\.\/\+][${}a-zA-Z0-9\-_:\.\/\+]*\)\s*\(:=\|+=\|=+\|\.=\|=\.\|?=\|??=\|=\)\@=" contains=bbExportFlag,bbIdentifier,bbOverrideOperator,bbVarDeref nextgroup=bbVarEq
syn match bbVarValue ".*$" contained contains=bbString,bbVarDeref,bbVarPyValue
syn region bbVarPyValue start=+${@+ skip=+\\$+ end=+}+ contained contains=@python
@@ -77,13 +77,15 @@ syn keyword bbOEFunctions do_fetch do_unpack do_patch do_configure do_comp
" Generic Functions
syn match bbFunction "\h[0-9A-Za-z_\-\.]*" display contained contains=bbOEFunctions
+syn keyword bbOverrideOperator append prepend remove contained
+
" BitBake shell metadata
syn include @shell syntax/sh.vim
if exists("b:current_syntax")
unlet b:current_syntax
endif
syn keyword bbShFakeRootFlag fakeroot contained
-syn match bbShFuncDef "^\(fakeroot\s*\)\?\([\.0-9A-Za-z_${}\-\.]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbVarDeref,bbDelimiter nextgroup=bbShFuncRegion skipwhite
+syn match bbShFuncDef "^\(fakeroot\s*\)\?\([\.0-9A-Za-z_:${}\-\.]\+\)\(python\)\@<!\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbFunction,bbOverrideOperator,bbVarDeref,bbDelimiter nextgroup=bbShFuncRegion skipwhite
syn region bbShFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" contained contains=@shell
" Python value inside shell functions
@@ -91,7 +93,7 @@ syn region shDeref start=+${@+ skip=+\\$+ excludenl end=+}+ contained co
" BitBake python metadata
syn keyword bbPyFlag python contained
-syn match bbPyFuncDef "^\(fakeroot\s*\)\?\(python\)\(\s\+[0-9A-Za-z_${}\-\.]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbPyFlag,bbFunction,bbVarDeref,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
+syn match bbPyFuncDef "^\(fakeroot\s*\)\?\(python\)\(\s\+[0-9A-Za-z_:${}\-\.]\+\)\?\(\s*()\s*\)\({\)\@=" contains=bbShFakeRootFlag,bbPyFlag,bbFunction,bbOverrideOperator,bbVarDeref,bbDelimiter nextgroup=bbPyFuncRegion skipwhite
syn region bbPyFuncRegion matchgroup=bbDelimiter start="{\s*$" end="^}\s*$" contained contains=@python
" BitBake 'def'd python functions
@@ -122,5 +124,6 @@ hi def link bbStatement Statement
hi def link bbStatementRest Identifier
hi def link bbOEFunctions Special
hi def link bbVarPyValue PreProc
+hi def link bbOverrideOperator Operator
let b:current_syntax = "bb"
diff --git a/bitbake/doc/Makefile b/bitbake/doc/Makefile
index d40f390..996f01b 100644
--- a/bitbake/doc/Makefile
+++ b/bitbake/doc/Makefile
@@ -3,7 +3,7 @@
# You can set these variables from the command line, and also
# from the environment for the first two.
-SPHINXOPTS ?= -j auto
+SPHINXOPTS ?= -W --keep-going -j auto
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
diff --git a/bitbake/doc/README b/bitbake/doc/README
index cbb5215..d4f56af 100644
--- a/bitbake/doc/README
+++ b/bitbake/doc/README
@@ -8,12 +8,12 @@ Manual Organization
Folders exist for individual manuals as follows:
-* bitbake-user-manual - The BitBake User Manual
+* bitbake-user-manual --- The BitBake User Manual
Each folder is self-contained regarding content and figures.
If you want to find HTML versions of the BitBake manuals on the web,
-go to http://www.openembedded.org/wiki/Documentation.
+go to https://www.openembedded.org/wiki/Documentation.
Sphinx
======
@@ -47,7 +47,7 @@ To install all required packages run:
To build the documentation locally, run:
- $ cd documentation
+ $ cd doc
$ make html
The resulting HTML index page will be _build/html/index.html, and you
diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.rst b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.rst
index 56abf77..7a22e96 100644
--- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.rst
+++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.rst
@@ -16,7 +16,7 @@ data, or simply return information about the execution environment.
This chapter describes BitBake's execution process from start to finish
when you use it to create an image. The execution process is launched
-using the following command form: ::
+using the following command form::
$ bitbake target
@@ -32,7 +32,7 @@ the BitBake command and its options, see ":ref:`The BitBake Command
your project's ``local.conf`` configuration file.
A common method to determine this value for your build host is to run
- the following: ::
+ the following::
$ grep processor /proc/cpuinfo
@@ -40,7 +40,7 @@ the BitBake command and its options, see ":ref:`The BitBake Command
the number of processors, which takes into account hyper-threading.
Thus, a quad-core build host with hyper-threading most likely shows
eight processors, which is the value you would then assign to
- ``BB_NUMBER_THREADS``.
+ :term:`BB_NUMBER_THREADS`.
A possibly simpler solution is that some Linux distributions (e.g.
Debian and Ubuntu) provide the ``ncpus`` command.
@@ -65,13 +65,13 @@ data itself is of various types:
The ``layer.conf`` files are used to construct key variables such as
:term:`BBPATH` and :term:`BBFILES`.
-``BBPATH`` is used to search for configuration and class files under the
-``conf`` and ``classes`` directories, respectively. ``BBFILES`` is used
+:term:`BBPATH` is used to search for configuration and class files under the
+``conf`` and ``classes`` directories, respectively. :term:`BBFILES` is used
to locate both recipe and recipe append files (``.bb`` and
``.bbappend``). If there is no ``bblayers.conf`` file, it is assumed the
-user has set the ``BBPATH`` and ``BBFILES`` directly in the environment.
+user has set the :term:`BBPATH` and :term:`BBFILES` directly in the environment.
-Next, the ``bitbake.conf`` file is located using the ``BBPATH`` variable
+Next, the ``bitbake.conf`` file is located using the :term:`BBPATH` variable
that was just constructed. The ``bitbake.conf`` file may also include
other configuration files using the ``include`` or ``require``
directives.
@@ -79,8 +79,8 @@ directives.
Prior to parsing configuration files, BitBake looks at certain
variables, including:
-- :term:`BB_ENV_WHITELIST`
-- :term:`BB_ENV_EXTRAWHITE`
+- :term:`BB_ENV_PASSTHROUGH`
+- :term:`BB_ENV_PASSTHROUGH_ADDITIONS`
- :term:`BB_PRESERVE_ENV`
- :term:`BB_ORIGENV`
- :term:`BITBAKE_UI`
@@ -104,7 +104,7 @@ BitBake first searches the current working directory for an optional
contain a :term:`BBLAYERS` variable that is a
space-delimited list of 'layer' directories. Recall that if BitBake
cannot find a ``bblayers.conf`` file, then it is assumed the user has
-set the ``BBPATH`` and ``BBFILES`` variables directly in the
+set the :term:`BBPATH` and :term:`BBFILES` variables directly in the
environment.
For each directory (layer) in this list, a ``conf/layer.conf`` file is
@@ -114,7 +114,7 @@ files automatically set up :term:`BBPATH` and other
variables correctly for a given build directory.
BitBake then expects to find the ``conf/bitbake.conf`` file somewhere in
-the user-specified ``BBPATH``. That configuration file generally has
+the user-specified :term:`BBPATH`. That configuration file generally has
include directives to pull in any other metadata such as files specific
to the architecture, the machine, the local environment, and so forth.
@@ -135,11 +135,11 @@ The ``base.bbclass`` file is always included. Other classes that are
specified in the configuration using the
:term:`INHERIT` variable are also included. BitBake
searches for class files in a ``classes`` subdirectory under the paths
-in ``BBPATH`` in the same way as configuration files.
+in :term:`BBPATH` in the same way as configuration files.
A good way to get an idea of the configuration files and the class files
used in your execution environment is to run the following BitBake
-command: ::
+command::
$ bitbake -e > mybb.log
@@ -155,7 +155,7 @@ execution environment.
pair of curly braces in a shell function, the closing curly brace
must not be located at the start of the line without leading spaces.
- Here is an example that causes BitBake to produce a parsing error: ::
+ Here is an example that causes BitBake to produce a parsing error::
fakeroot create_shar() {
cat << "EOF" > ${SDK_DEPLOY}/${TOOLCHAIN_OUTPUTNAME}.sh
@@ -184,13 +184,13 @@ Locating and Parsing Recipes
During the configuration phase, BitBake will have set
:term:`BBFILES`. BitBake now uses it to construct a
list of recipes to parse, along with any append files (``.bbappend``) to
-apply. ``BBFILES`` is a space-separated list of available files and
-supports wildcards. An example would be: ::
+apply. :term:`BBFILES` is a space-separated list of available files and
+supports wildcards. An example would be::
BBFILES = "/path/to/bbfiles/*.bb /path/to/appends/*.bbappend"
BitBake parses each
-recipe and append file located with ``BBFILES`` and stores the values of
+recipe and append file located with :term:`BBFILES` and stores the values of
various variables into the datastore.
.. note::
@@ -201,18 +201,18 @@ For each file, a fresh copy of the base configuration is made, then the
recipe is parsed line by line. Any inherit statements cause BitBake to
find and then parse class files (``.bbclass``) using
:term:`BBPATH` as the search path. Finally, BitBake
-parses in order any append files found in ``BBFILES``.
+parses in order any append files found in :term:`BBFILES`.
One common convention is to use the recipe filename to define pieces of
metadata. For example, in ``bitbake.conf`` the recipe name and version
are used to set the variables :term:`PN` and
-:term:`PV`: ::
+:term:`PV`::
- PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
- PV = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
+ PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
+ PV = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
In this example, a recipe called "something_1.2.3.bb" would set
-``PN`` to "something" and ``PV`` to "1.2.3".
+:term:`PN` to "something" and :term:`PV` to "1.2.3".
By the time parsing is complete for a recipe, BitBake has a list of
tasks that the recipe defines and a set of data consisting of keys and
@@ -228,7 +228,7 @@ and then reload it.
Where possible, subsequent BitBake commands reuse this cache of recipe
information. The validity of this cache is determined by first computing
a checksum of the base configuration data (see
-:term:`BB_HASHCONFIG_WHITELIST`) and
+:term:`BB_HASHCONFIG_IGNORE_VARS`) and
then checking if the checksum matches. If that checksum matches what is
in the cache and the recipe and class files have not changed, BitBake is
able to use the cache. BitBake then reloads the cached information about
@@ -238,7 +238,7 @@ Recipe file collections exist to allow the user to have multiple
repositories of ``.bb`` files that contain the same exact package. For
example, one could easily use them to make one's own local copy of an
upstream repository, but with custom modifications that one does not
-want upstream. Here is an example: ::
+want upstream. Here is an example::
BBFILES = "/stuff/openembedded/*/*.bb /stuff/openembedded.modified/*/*.bb"
BBFILE_COLLECTIONS = "upstream local"
@@ -260,21 +260,21 @@ Providers
Assuming BitBake has been instructed to execute a target and that all
the recipe files have been parsed, BitBake starts to figure out how to
-build the target. BitBake looks through the ``PROVIDES`` list for each
-of the recipes. A ``PROVIDES`` list is the list of names by which the
-recipe can be known. Each recipe's ``PROVIDES`` list is created
+build the target. BitBake looks through the :term:`PROVIDES` list for each
+of the recipes. A :term:`PROVIDES` list is the list of names by which the
+recipe can be known. Each recipe's :term:`PROVIDES` list is created
implicitly through the recipe's :term:`PN` variable and
explicitly through the recipe's :term:`PROVIDES`
variable, which is optional.
-When a recipe uses ``PROVIDES``, that recipe's functionality can be
-found under an alternative name or names other than the implicit ``PN``
+When a recipe uses :term:`PROVIDES`, that recipe's functionality can be
+found under an alternative name or names other than the implicit :term:`PN`
name. As an example, suppose a recipe named ``keyboard_1.0.bb``
-contained the following: ::
+contained the following::
PROVIDES += "fullkeyboard"
-The ``PROVIDES``
+The :term:`PROVIDES`
list for this recipe becomes "keyboard", which is implicit, and
"fullkeyboard", which is explicit. Consequently, the functionality found
in ``keyboard_1.0.bb`` can be found under two different names.
@@ -284,14 +284,14 @@ in ``keyboard_1.0.bb`` can be found under two different names.
Preferences
===========
-The ``PROVIDES`` list is only part of the solution for figuring out a
+The :term:`PROVIDES` list is only part of the solution for figuring out a
target's recipes. Because targets might have multiple providers, BitBake
needs to prioritize providers by determining provider preferences.
A common example in which a target has multiple providers is
-"virtual/kernel", which is on the ``PROVIDES`` list for each kernel
+"virtual/kernel", which is on the :term:`PROVIDES` list for each kernel
recipe. Each machine often selects the best kernel provider by using a
-line similar to the following in the machine configuration file: ::
+line similar to the following in the machine configuration file::
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
@@ -309,10 +309,10 @@ specify a particular version. You can influence the order by using the
:term:`DEFAULT_PREFERENCE` variable.
By default, files have a preference of "0". Setting
-``DEFAULT_PREFERENCE`` to "-1" makes the recipe unlikely to be used
-unless it is explicitly referenced. Setting ``DEFAULT_PREFERENCE`` to
-"1" makes it likely the recipe is used. ``PREFERRED_VERSION`` overrides
-any ``DEFAULT_PREFERENCE`` setting. ``DEFAULT_PREFERENCE`` is often used
+:term:`DEFAULT_PREFERENCE` to "-1" makes the recipe unlikely to be used
+unless it is explicitly referenced. Setting :term:`DEFAULT_PREFERENCE` to
+"1" makes it likely the recipe is used. :term:`PREFERRED_VERSION` overrides
+any :term:`DEFAULT_PREFERENCE` setting. :term:`DEFAULT_PREFERENCE` is often used
to mark newer and more experimental recipe versions until they have
undergone sufficient testing to be considered stable.
@@ -331,7 +331,7 @@ If the first recipe is named ``a_1.1.bb``, then the
Thus, if a recipe named ``a_1.2.bb`` exists, BitBake will choose 1.2 by
default. However, if you define the following variable in a ``.conf``
-file that BitBake parses, you can change that preference: ::
+file that BitBake parses, you can change that preference::
PREFERRED_VERSION_a = "1.1"
@@ -394,7 +394,7 @@ ready to run, those tasks have all their dependencies met, and the
thread threshold has not been exceeded.
It is worth noting that you can greatly speed up the build time by
-properly setting the ``BB_NUMBER_THREADS`` variable.
+properly setting the :term:`BB_NUMBER_THREADS` variable.
As each task completes, a timestamp is written to the directory
specified by the :term:`STAMP` variable. On subsequent
@@ -435,7 +435,7 @@ BitBake writes a shell script to
executes the script. The generated shell script contains all the
exported variables, and the shell functions with all variables expanded.
Output from the shell script goes to the file
-``${T}/log.do_taskname.pid``. Looking at the expanded shell functions in
+``${``\ :term:`T`\ ``}/log.do_taskname.pid``. Looking at the expanded shell functions in
the run file and the output in the log files is a useful debugging
technique.
@@ -477,7 +477,7 @@ changes because it should not affect the output for target packages. The
simplistic approach for excluding the working directory is to set it to
some fixed value and create the checksum for the "run" script. BitBake
goes one step better and uses the
-:term:`BB_HASHBASE_WHITELIST` variable
+:term:`BB_BASEHASH_IGNORE_VARS` variable
to define a list of variables that should never be included when
generating the signatures.
@@ -498,7 +498,7 @@ to the task.
Like the working directory case, situations exist where dependencies
should be ignored. For these cases, you can instruct the build process
-to ignore a dependency by using a line like the following: ::
+to ignore a dependency by using a line like the following::
PACKAGE_ARCHS[vardepsexclude] = "MACHINE"
@@ -508,7 +508,7 @@ even if it does reference it.
Equally, there are cases where we need to add dependencies BitBake is
not able to find. You can accomplish this by using a line like the
-following: ::
+following::
PACKAGE_ARCHS[vardeps] = "MACHINE"
@@ -523,7 +523,7 @@ it cannot figure out dependencies.
Thus far, this section has limited discussion to the direct inputs into
a task. Information based on direct inputs is referred to as the
"basehash" in the code. However, there is still the question of a task's
-indirect inputs - the things that were already built and present in the
+indirect inputs --- the things that were already built and present in the
build directory. The checksum (or signature) for a particular task needs
to add the hashes of all the tasks on which the particular task depends.
Choosing which dependencies to add is a policy decision. However, the
@@ -534,11 +534,11 @@ At the code level, there are a variety of ways both the basehash and the
dependent task hashes can be influenced. Within the BitBake
configuration file, we can give BitBake some extra information to help
it construct the basehash. The following statement effectively results
-in a list of global variable dependency excludes - variables never
+in a list of global variable dependency excludes --- variables never
included in any checksum. This example uses variables from OpenEmbedded
-to help illustrate the concept: ::
+to help illustrate the concept::
- BB_HASHBASE_WHITELIST ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
+ BB_BASEHASH_IGNORE_VARS ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL \
USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE PRSERV_HOST \
PRSERV_DUMPDIR PRSERV_DUMPFILE PRSERV_LOCKDOWN PARALLEL_MAKE \
@@ -557,11 +557,11 @@ OpenEmbedded-Core uses: "OEBasic" and "OEBasicHash". By default, there
is a dummy "noop" signature handler enabled in BitBake. This means that
behavior is unchanged from previous versions. ``OE-Core`` uses the
"OEBasicHash" signature handler by default through this setting in the
-``bitbake.conf`` file: ::
+``bitbake.conf`` file::
BB_SIGNATURE_HANDLER ?= "OEBasicHash"
-The "OEBasicHash" ``BB_SIGNATURE_HANDLER`` is the same as the "OEBasic"
+The "OEBasicHash" :term:`BB_SIGNATURE_HANDLER` is the same as the "OEBasic"
version but adds the task hash to the stamp files. This results in any
metadata change that changes the task hash, automatically causing the
task to be run again. This removes the need to bump
@@ -578,10 +578,7 @@ the build. This information includes:
- ``BB_BASEHASH_``\ *filename:taskname*: The base hashes for each
dependent task.
-- ``BBHASHDEPS_``\ *filename:taskname*: The task dependencies for
- each task.
-
-- ``BB_TASKHASH``: The hash of the currently running task.
+- :term:`BB_TASKHASH`: The hash of the currently running task.
It is worth noting that BitBake's "-S" option lets you debug BitBake's
processing of signatures. The options passed to -S allow different
@@ -648,13 +645,6 @@ compiled binary. To handle this, BitBake calls the
each successful setscene task to know whether or not it needs to obtain
the dependencies of that task.
-Finally, after all the setscene tasks have executed, BitBake calls the
-function listed in
-:term:`BB_SETSCENE_VERIFY_FUNCTION2`
-with the list of tasks BitBake thinks has been "covered". The metadata
-can then ensure that this list is correct and can inform BitBake that it
-wants specific tasks to be run regardless of the setscene result.
-
You can find more information on setscene metadata in the
:ref:`bitbake-user-manual/bitbake-user-manual-metadata:task checksums and setscene`
section of the manual.
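The ``BB_BASEHASH_IGNORE_VARS`` rename above feeds the signature scheme this
chapter describes; the underlying idea reduces to hashing a task's variables
minus the ignored set. A toy sketch (not BitBake's actual siggen code):

    import hashlib

    def basehash(task_vars, ignore_vars):
        # Hash everything except ignored variables, in a stable order.
        data = "".join("%s=%s" % (k, v)
                       for k, v in sorted(task_vars.items())
                       if k not in ignore_vars)
        return hashlib.sha256(data.encode()).hexdigest()

    env = {"CC": "gcc", "PATH": "/usr/bin"}
    moved = dict(env, PATH="/opt/bin")
    # PATH is ignored, so relocating it leaves the signature unchanged.
    assert basehash(env, {"PATH"}) == basehash(moved, {"PATH"})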
diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.rst b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.rst
index bd6cc0e..9c269ca 100644
--- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.rst
+++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.rst
@@ -27,7 +27,7 @@ and unpacking the files is often optionally followed by patching.
Patching, however, is not covered by this module.
The code to execute the first part of this process, a fetch, looks
-something like the following: ::
+something like the following::
src_uri = (d.getVar('SRC_URI') or "").split()
fetcher = bb.fetch2.Fetch(src_uri, d)
@@ -37,7 +37,7 @@ This code sets up an instance of the fetch class. The instance uses a
space-separated list of URLs from the :term:`SRC_URI`
variable and then calls the ``download`` method to download the files.
-The instantiation of the fetch class is usually followed by: ::
+The instantiation of the fetch class is usually followed by::
rootdir = l.getVar('WORKDIR')
fetcher.unpack(rootdir)
This code unpacks the downloaded files to the directory specified by ``WORKDIR``.
examine the OpenEmbedded class file ``base.bbclass``.
-The ``SRC_URI`` and ``WORKDIR`` variables are not hardcoded into the
+The :term:`SRC_URI` and ``WORKDIR`` variables are not hardcoded into the
fetcher, since those fetcher methods can be (and are) called with
different variable names. In OpenEmbedded for example, the shared state
(sstate) code uses the fetch module to fetch the sstate files.
@@ -64,38 +64,38 @@ URLs by looking for source files in a specific search order:
:term:`PREMIRRORS` variable.
- *Source URI:* If pre-mirrors fail, BitBake uses the original URL (e.g
- from ``SRC_URI``).
+ from :term:`SRC_URI`).
- *Mirror Sites:* If fetch failures occur, BitBake next uses mirror
locations as defined by the :term:`MIRRORS` variable.
For each URL passed to the fetcher, the fetcher calls the submodule that
handles that particular URL type. This behavior can be the source of
-some confusion when you are providing URLs for the ``SRC_URI`` variable.
-Consider the following two URLs: ::
+some confusion when you are providing URLs for the :term:`SRC_URI` variable.
+Consider the following two URLs::
- http://git.yoctoproject.org/git/poky;protocol=git
+ https://git.yoctoproject.org/git/poky;protocol=git
git://git.yoctoproject.org/git/poky;protocol=http
In the former case, the URL is passed to the ``wget`` fetcher, which does not
understand "git". Therefore, the latter case is the correct form since the Git
fetcher does know how to use HTTP as a transport.
-Here are some examples that show commonly used mirror definitions: ::
+Here are some examples that show commonly used mirror definitions::
PREMIRRORS ?= "\
- bzr://.*/.\* http://somemirror.org/sources/ \\n \
- cvs://.*/.\* http://somemirror.org/sources/ \\n \
- git://.*/.\* http://somemirror.org/sources/ \\n \
- hg://.*/.\* http://somemirror.org/sources/ \\n \
- osc://.*/.\* http://somemirror.org/sources/ \\n \
- p4://.*/.\* http://somemirror.org/sources/ \\n \
- svn://.*/.\* http://somemirror.org/sources/ \\n"
+ bzr://.*/.\* http://somemirror.org/sources/ \
+ cvs://.*/.\* http://somemirror.org/sources/ \
+ git://.*/.\* http://somemirror.org/sources/ \
+ hg://.*/.\* http://somemirror.org/sources/ \
+ osc://.*/.\* http://somemirror.org/sources/ \
+ p4://.*/.\* http://somemirror.org/sources/ \
+ svn://.*/.\* http://somemirror.org/sources/"
MIRRORS =+ "\
- ftp://.*/.\* http://somemirror.org/sources/ \\n \
- http://.*/.\* http://somemirror.org/sources/ \\n \
- https://.*/.\* http://somemirror.org/sources/ \\n"
+ ftp://.*/.\* http://somemirror.org/sources/ \
+ http://.*/.\* http://somemirror.org/sources/ \
+ https://.*/.\* http://somemirror.org/sources/"
It is useful to note that BitBake
supports cross-URLs. It is possible to mirror a Git repository on an
@@ -110,26 +110,26 @@ which is specified by the :term:`DL_DIR` variable.
File integrity is of key importance for reproducing builds. For
non-local archive downloads, the fetcher code can verify SHA-256 and MD5
checksums to ensure the archives have been downloaded correctly. You can
-specify these checksums by using the ``SRC_URI`` variable with the
-appropriate varflags as follows: ::
+specify these checksums by using the :term:`SRC_URI` variable with the
+appropriate varflags as follows::
SRC_URI[md5sum] = "value"
SRC_URI[sha256sum] = "value"
You can also specify the checksums as
-parameters on the ``SRC_URI`` as shown below: ::
+parameters on the :term:`SRC_URI` as shown below::
SRC_URI = "http://example.com/foobar.tar.bz2;md5sum=4a8e0f237e961fd7785d19d07fdb994d"
If multiple URIs exist, you can specify the checksums either directly as
in the previous example, or you can name the URLs. The following syntax
-shows how you name the URIs: ::
+shows how you name the URIs::
SRC_URI = "http://example.com/foobar.tar.bz2;name=foo"
SRC_URI[foo.md5sum] = 4a8e0f237e961fd7785d19d07fdb994d
After a file has been downloaded and
-has had its checksum checked, a ".done" stamp is placed in ``DL_DIR``.
+has had its checksum checked, a ".done" stamp is placed in :term:`DL_DIR`.
BitBake uses this stamp during subsequent builds to avoid downloading or
comparing a checksum for the file again.
@@ -144,6 +144,10 @@ download without a checksum triggers an error message. The
make any attempted network access a fatal error, which is useful for
checking that mirrors are complete as well as other things.
+If :term:`BB_CHECK_SSL_CERTS` is set to ``0`` then SSL certificate checking will
+be disabled. This variable defaults to ``1`` so SSL certificates are normally
+checked.
+
.. _bb-the-unpack:
The Unpack
@@ -163,8 +167,8 @@ govern the behavior of the unpack stage:
- *dos:* Applies to ``.zip`` and ``.jar`` files and specifies whether
to use DOS line ending conversion on text files.
-- *basepath:* Instructs the unpack stage to strip the specified
- directories from the source path when unpacking.
+- *striplevel:* Strips the specified number of leading components (levels)
+  from file names on extraction.
- *subdir:* Unpacks the specific URL to the specified subdirectory
within the root directory.
@@ -204,7 +208,7 @@ time the ``download()`` method is called.
If you specify a directory, the entire directory is unpacked.
Here are a couple of example URLs, the first relative and the second
-absolute: ::
+absolute::
SRC_URI = "file://relativefile.patch"
SRC_URI = "file:///Users/ich/very_important_software"
@@ -225,7 +229,12 @@ downloaded file is useful for avoiding collisions in
:term:`DL_DIR` when dealing with multiple files that
have the same name.
-Some example URLs are as follows: ::
+If a username and password are specified in the ``SRC_URI``, a Basic
+Authorization header will be added to each request, including across redirects.
+To instead limit the Authorization header to the first request, add
+"redirectauth=0" to the list of parameters.
+
+Some example URLs are as follows::
SRC_URI = "http://oe.handhelds.org/not_there.aac"
SRC_URI = "ftp://oe.handhelds.org/not_there_as_well.aac"
@@ -235,15 +244,13 @@ Some example URLs are as follows: ::
Because URL parameters are delimited by semi-colons, this can
introduce ambiguity when parsing URLs that also contain semi-colons,
- for example:
- ::
+ for example::
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git;a=snapshot;h=a5dd47"
Such URLs should be modified by replacing semi-colons with '&'
- characters:
- ::
+ characters::
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47"
@@ -251,8 +258,7 @@ Some example URLs are as follows: ::
In most cases this should work. Treating semi-colons and '&' in
queries identically is recommended by the World Wide Web Consortium
(W3C). Note that due to the nature of the URL, you may have to
- specify the name of the downloaded file as well:
- ::
+ specify the name of the downloaded file as well::
SRC_URI = "http://abc123.org/git/?p=gcc/gcc.git&a=snapshot&h=a5dd47;downloadfilename=myfile.bz2"
@@ -321,7 +327,7 @@ The supported parameters are as follows:
- *"port":* The port to which the CVS server connects.
-Some example URLs are as follows: ::
+Some example URLs are as follows::
SRC_URI = "cvs://CVSROOT;module=mymodule;tag=some-version;method=ext"
SRC_URI = "cvs://CVSROOT;module=mymodule;date=20060126;localdir=usethat"
@@ -363,7 +369,7 @@ The supported parameters are as follows:
username is different than the username used in the main URL, which
is passed to the subversion command.
-Following are three examples using svn: ::
+Following are three examples using svn::
SRC_URI = "svn://myrepos/proj1;module=vip;protocol=http;rev=667"
SRC_URI = "svn://myrepos/proj1;module=opie;protocol=svn+ssh"
@@ -390,6 +396,19 @@ This fetcher supports the following parameters:
protocol is "file". You can also use "http", "https", "ssh" and
"rsync".
+ .. note::
+
+ When ``protocol`` is "ssh", the URL expected in :term:`SRC_URI` differs
+ from the one that is typically passed to ``git clone`` command and provided
+ by the Git server to fetch from. For example, the URL returned by GitLab
+ server for ``mesa`` when cloning over SSH is
+ ``git@gitlab.freedesktop.org:mesa/mesa.git``, however the expected URL in
+ :term:`SRC_URI` is the following::
+
+ SRC_URI = "git://git@gitlab.freedesktop.org/mesa/mesa.git;branch=main;protocol=ssh;..."
+
+   Note that the ``:`` character is replaced by a ``/`` before the path to the project.
+
- *"nocheckout":* Tells the fetcher to not checkout source code when
unpacking when set to "1". Set this option for the URL where there is
a custom routine to checkout code. The default is "0".
@@ -413,9 +432,9 @@ This fetcher supports the following parameters:
raw Git metadata is provided. This parameter implies the "nocheckout"
parameter as well.
-- *"branch":* The branch(es) of the Git tree to clone. If unset, this
- is assumed to be "master". The number of branch parameters much match
- the number of name parameters.
+- *"branch":* The branch(es) of the Git tree to clone. Unless
+ "nobranch" is set to "1", this is a mandatory parameter. The number of
+ branch parameters must match the number of name parameters.
- *"rev":* The revision to use for the checkout. The default is
"master".
@@ -436,15 +455,23 @@ This fetcher supports the following parameters:
parameter implies no branch and only works when the transfer protocol
is ``file://``.
-Here are some example URLs: ::
+Here are some example URLs::
- SRC_URI = "git://git.oe.handhelds.org/git/vip.git;tag=version-1"
- SRC_URI = "git://git.oe.handhelds.org/git/vip.git;protocol=http"
+ SRC_URI = "git://github.com/fronteed/icheck.git;protocol=https;branch=${PV};tag=${PV}"
+ SRC_URI = "git://github.com/asciidoc/asciidoc-py;protocol=https;branch=main"
+ SRC_URI = "git://git@gitlab.freedesktop.org/mesa/mesa.git;branch=main;protocol=ssh;..."
+
+.. note::
+
+ When using ``git`` as the fetcher of the main source code of your software,
+ ``S`` should be set accordingly::
+
+ S = "${WORKDIR}/git"
.. note::
Specifying passwords directly in ``git://`` urls is not supported.
- There are several reasons: ``SRC_URI`` is often written out to logs and
+ There are several reasons: :term:`SRC_URI` is often written out to logs and
other places, and that could easily leak passwords; it is also all too
easy to share metadata without removing passwords. SSH keys, ``~/.netrc``
and ``~/.ssh/config`` files can be used as alternatives.
@@ -484,7 +511,7 @@ repository.
To use this fetcher, make sure your recipe has proper
:term:`SRC_URI`, :term:`SRCREV`, and
-:term:`PV` settings. Here is an example: ::
+:term:`PV` settings. Here is an example::
SRC_URI = "ccrc://cc.example.org/ccrc;vob=/example_vob;module=/example_module"
SRCREV = "EXAMPLE_CLEARCASE_TAG"
@@ -493,7 +520,7 @@ To use this fetcher, make sure your recipe has proper
The fetcher uses the ``rcleartool`` or
``cleartool`` remote client, depending on which one is available.
-Following are options for the ``SRC_URI`` statement:
+Following are options for the :term:`SRC_URI` statement:
- *vob*: The name, which must include the prepending "/" character,
of the ClearCase VOB. This option is required.
@@ -506,7 +533,7 @@ Following are options for the ``SRC_URI`` statement:
The module and vob options are combined to create the load rule in the
view config spec. As an example, consider the vob and module values from
the SRC_URI statement at the start of this section. Combining those values
- results in the following: ::
+ results in the following::
load /example_vob/example_module
@@ -555,10 +582,10 @@ password if you do not wish to keep those values in a recipe itself. If
you choose not to use ``P4CONFIG``, or to explicitly set variables that
``P4CONFIG`` can contain, you can specify the ``P4PORT`` value, which is
the server's URL and port number, and you can specify a username and
-password directly in your recipe within ``SRC_URI``.
+password directly in your recipe within :term:`SRC_URI`.
Here is an example that relies on ``P4CONFIG`` to specify the server URL
-and port, username, and password, and fetches the Head Revision: ::
+and port, username, and password, and fetches the Head Revision::
SRC_URI = "p4://example-depot/main/source/..."
SRCREV = "${AUTOREV}"
@@ -566,7 +593,7 @@ and port, username, and password, and fetches the Head Revision: ::
S = "${WORKDIR}/p4"
Here is an example that specifies the server URL and port, username, and
-password, and fetches a Revision based on a Label: ::
+password, and fetches a Revision based on a Label::
P4PORT = "tcp:p4server.example.net:1666"
SRC_URI = "p4://user:passwd@example-depot/main/source/..."
@@ -592,7 +619,7 @@ paths locally is desirable, the fetcher supports two parameters:
paths locally for the specified location, even in combination with the
``module`` parameter.
-Here is an example use of the the ``module`` parameter: ::
+Here is an example use of the ``module`` parameter::
SRC_URI = "p4://user:passwd@example-depot/main;module=source/..."
@@ -600,7 +627,7 @@ In this case, the content of the top-level directory ``source/`` will be fetched
to ``${P4DIR}``, including the directory itself. The top-level directory will
be accessible at ``${P4DIR}/source/``.
-Here is an example use of the the ``remotepath`` parameter: ::
+Here is an example use of the ``remotepath`` parameter::
SRC_URI = "p4://user:passwd@example-depot/main;module=source/...;remotepath=keep"
@@ -628,7 +655,7 @@ This fetcher supports the following parameters:
- *"manifest":* Name of the manifest file (default: ``default.xml``).
-Here are some example URLs: ::
+Here are some example URLs::
SRC_URI = "repo://REPOROOT;protocol=git;branch=some_branch;manifest=my_manifest.xml"
SRC_URI = "repo://REPOROOT;protocol=file;branch=some_branch;manifest=my_manifest.xml"
@@ -651,16 +678,108 @@ Such functionality is set by the variable:
delegate access to resources, if this variable is set, the Az Fetcher will
use it when fetching artifacts from the cloud.
-You can specify the AZ_SAS variable as shown below: ::
+You can specify the AZ_SAS variable as shown below::
AZ_SAS = "se=2021-01-01&sp=r&sv=2018-11-09&sr=c&skoid=<skoid>&sig=<signature>"
-Here is an example URL: ::
+Here is an example URL::
SRC_URI = "az://<azure-storage-account>.blob.core.windows.net/<foo_container>/<bar_file>"
It can also be used when setting mirrors definitions using the :term:`PREMIRRORS` variable.
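+
+For example, a premirror entry redirecting source fetches to an Azure
+container might look like the following (the storage account and container
+names are hypothetical)::
+
+   PREMIRRORS:prepend = "git://.*/.* az://<azure-storage-account>.blob.core.windows.net/<mirror-container>/ "
+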
+.. _crate-fetcher:
+
+Crate Fetcher (``crate://``)
+----------------------------
+
+This submodule fetches code for
+`Rust language "crates" <https://doc.rust-lang.org/reference/glossary.html?highlight=crate#crate>`__
+which correspond to Rust libraries and programs to be compiled. Such crates
+are typically shared on https://crates.io/, but this fetcher supports other
+crate registries too.
+
+The format for the :term:`SRC_URI` setting must be::
+
+ SRC_URI = "crate://REGISTRY/NAME/VERSION"
+
+Here is an example URL::
+
+ SRC_URI = "crate://crates.io/glob/0.2.11"
+
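+A Rust recipe typically lists one ``crate://`` entry per dependency next to
+its main source URI, for example (the project URL, crate names and versions
+below are only illustrative)::
+
+   SRC_URI = "git://github.com/example/myproject;protocol=https;branch=main"
+   SRC_URI += "crate://crates.io/glob/0.2.11"
+   SRC_URI += "crate://crates.io/libc/0.2.121"
+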
+.. _npm-fetcher:
+
+NPM Fetcher (``npm://``)
+------------------------
+
+This submodule fetches source code from an
+`NPM <https://en.wikipedia.org/wiki/Npm_(software)>`__
+Javascript package registry.
+
+The format for the :term:`SRC_URI` setting must be::
+
+ SRC_URI = "npm://some.registry.url;ParameterA=xxx;ParameterB=xxx;..."
+
+This fetcher supports the following parameters:
+
+- *"package":* The NPM package name. This is a mandatory parameter.
+
+- *"version":* The NPM package version. This is a mandatory parameter.
+
+- *"downloadfilename":* Specifies the filename used when storing the downloaded file.
+
+- *"destsuffix":* Specifies the directory to use to unpack the package (default: ``npm``).
+
+Note that the NPM fetcher only fetches the package source itself. The dependencies
+can be fetched through the `npmsw-fetcher`_.
+
+Here is an example URL with both fetchers::
+
+ SRC_URI = " \
+ npm://registry.npmjs.org/;package=cute-files;version=${PV} \
+ npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
+ "
+
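+If the default unpack directory is not suitable, the ``destsuffix`` parameter
+described above can be added to the URL, for example (the value below is only
+illustrative)::
+
+   SRC_URI = "npm://registry.npmjs.org/;package=cute-files;version=${PV};destsuffix=${BPN}"
+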
+See :yocto_docs:`Creating Node Package Manager (NPM) Packages
+</dev-manual/common-tasks.html#creating-node-package-manager-npm-packages>`
+in the Yocto Project manual for details about using
+:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
+to automatically create a recipe from an NPM URL.
+
+.. _npmsw-fetcher:
+
+NPM shrinkwrap Fetcher (``npmsw://``)
+-------------------------------------
+
+This submodule fetches source code from an
+`NPM shrinkwrap <https://docs.npmjs.com/cli/v8/commands/npm-shrinkwrap>`__
+description file, which lists the dependencies
+of an NPM package while locking their versions.
+
+The format for the :term:`SRC_URI` setting must be::
+
+ SRC_URI = "npmsw://some.registry.url;ParameterA=xxx;ParameterB=xxx;..."
+
+This fetcher supports the following parameters:
+
+- *"dev":* Set this parameter to ``1`` to install "devDependencies".
+
+- *"destsuffix":* Specifies the directory to use to unpack the dependencies
+ (``${S}`` by default).
+
+Note that the shrinkwrap file can also be provided by the recipe for
+the package which has such dependencies, for example::
+
+ SRC_URI = " \
+ npm://registry.npmjs.org/;package=cute-files;version=${PV} \
+ npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
+ "
+
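+To also install the development dependencies listed in the shrinkwrap file,
+the ``dev`` parameter described above can be appended to the same URL::
+
+   SRC_URI = "npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json;dev=1"
+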
+Such a file can automatically be generated using
+:yocto_docs:`devtool <https://docs.yoctoproject.org/ref-manual/devtool-reference.html>`
+as described in the :yocto_docs:`Creating Node Package Manager (NPM) Packages
+</dev-manual/common-tasks.html#creating-node-package-manager-npm-packages>`
+section of the Yocto Project manual.
+
Other Fetchers
--------------
@@ -670,8 +789,6 @@ Fetch submodules also exist for the following:
- Mercurial (``hg://``)
-- npm (``npm://``)
-
- OSC (``osc://``)
- Secure FTP (``sftp://``)
@@ -686,4 +803,4 @@ submodules. However, you might find the code helpful and readable.
Auto Revisions
==============
-We need to document ``AUTOREV`` and ``SRCREV_FORMAT`` here.
+We need to document ``AUTOREV`` and :term:`SRCREV_FORMAT` here.
diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.rst b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.rst
index e3fd321..722dc5a 100644
--- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.rst
+++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.rst
@@ -20,7 +20,7 @@ Obtaining BitBake
See the :ref:`bitbake-user-manual/bitbake-user-manual-hello:obtaining bitbake` section for
information on how to obtain BitBake. Once you have the source code on
-your machine, the BitBake directory appears as follows: ::
+your machine, the BitBake directory appears as follows::
$ ls -al
total 100
@@ -49,7 +49,7 @@ Setting Up the BitBake Environment
First, you need to be sure that you can run BitBake. Set your working
directory to where your local BitBake files are and run the following
-command: ::
+command::
$ ./bin/bitbake --version
BitBake Build Tool Core version 1.23.0, bitbake version 1.23.0
@@ -61,14 +61,14 @@ The recommended method to run BitBake is from a directory of your
choice. To be able to run BitBake from any directory, you need to add
the executable binary to your shell's environment
``PATH`` variable. First, look at your current ``PATH`` variable by
-entering the following: ::
+entering the following::
$ echo $PATH
Next, add the directory location
for the BitBake binary to the ``PATH``. Here is an example that adds the
``/home/scott-lenovo/bitbake/bin`` directory to the front of the
-``PATH`` variable: ::
+``PATH`` variable::
$ export PATH=/home/scott-lenovo/bitbake/bin:$PATH
@@ -99,7 +99,7 @@ discussion mailing list about the BitBake build tool.
This example was inspired by and drew heavily from
`Mailing List post - The BitBake equivalent of "Hello, World!"
- <http://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html>`_.
+ <https://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html>`_.
As stated earlier, the goal of this example is to eventually compile
"Hello World". However, it is unknown what BitBake needs and what you
@@ -116,7 +116,7 @@ Following is the complete "Hello World" example.
#. **Create a Project Directory:** First, set up a directory for the
"Hello World" project. Here is how you can do so in your home
- directory: ::
+ directory::
$ mkdir ~/hello
$ cd ~/hello
@@ -127,7 +127,7 @@ Following is the complete "Hello World" example.
directory is a good way to isolate your project.
#. **Run BitBake:** At this point, you have nothing but a project
- directory. Run the ``bitbake`` command and see what it does: ::
+ directory. Run the ``bitbake`` command and see what it does::
$ bitbake
The BBPATH variable is not set and bitbake did not
@@ -145,23 +145,23 @@ Following is the complete "Hello World" example.
The majority of this output is specific to environment variables that
are not directly relevant to BitBake. However, the very first
- message regarding the ``BBPATH`` variable and the
+ message regarding the :term:`BBPATH` variable and the
``conf/bblayers.conf`` file is relevant.
When you run BitBake, it begins looking for metadata files. The
:term:`BBPATH` variable is what tells BitBake where
- to look for those files. ``BBPATH`` is not set and you need to set
- it. Without ``BBPATH``, BitBake cannot find any configuration files
+ to look for those files. :term:`BBPATH` is not set and you need to set
+ it. Without :term:`BBPATH`, BitBake cannot find any configuration files
(``.conf``) or recipe files (``.bb``) at all. BitBake also cannot
find the ``bitbake.conf`` file.
-#. **Setting BBPATH:** For this example, you can set ``BBPATH`` in
+#. **Setting BBPATH:** For this example, you can set :term:`BBPATH` in
the same manner that you set ``PATH`` earlier in the appendix. You
should realize, though, that it is much more flexible to set the
- ``BBPATH`` variable up in a configuration file for each project.
+ :term:`BBPATH` variable up in a configuration file for each project.
From your shell, enter the following commands to set and export the
- ``BBPATH`` variable: ::
+ :term:`BBPATH` variable::
$ BBPATH="projectdirectory"
$ export BBPATH
@@ -175,8 +175,8 @@ Following is the complete "Hello World" example.
("~") character as BitBake does not expand that character as the
shell would.
-#. **Run BitBake:** Now that you have ``BBPATH`` defined, run the
- ``bitbake`` command again: ::
+#. **Run BitBake:** Now that you have :term:`BBPATH` defined, run the
+ ``bitbake`` command again::
$ bitbake
ERROR: Traceback (most recent call last):
@@ -205,18 +205,18 @@ Following is the complete "Hello World" example.
recipe files. For this example, you need to create the file in your
project directory and define some key BitBake variables. For more
information on the ``bitbake.conf`` file, see
- http://git.openembedded.org/bitbake/tree/conf/bitbake.conf.
+ https://git.openembedded.org/bitbake/tree/conf/bitbake.conf.
Use the following commands to create the ``conf`` directory in the
- project directory: ::
+ project directory::
$ mkdir conf
From within the ``conf`` directory,
use some editor to create the ``bitbake.conf`` so that it contains
- the following: ::
+ the following::
- PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
+ PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
TMPDIR = "${TOPDIR}/tmp"
CACHE = "${TMPDIR}/cache"
@@ -251,7 +251,7 @@ Following is the complete "Hello World" example.
glossary.
#. **Run BitBake:** After making sure that the ``conf/bitbake.conf`` file
- exists, you can run the ``bitbake`` command again: ::
+ exists, you can run the ``bitbake`` command again::
$ bitbake
ERROR: Traceback (most recent call last):
@@ -278,7 +278,7 @@ Following is the complete "Hello World" example.
in the ``classes`` directory of the project (i.e. ``hello/classes``
in this example).
- Create the ``classes`` directory as follows: ::
+ Create the ``classes`` directory as follows::
$ cd $HOME/hello
$ mkdir classes
@@ -291,7 +291,7 @@ Following is the complete "Hello World" example.
environments BitBake is supporting.
#. **Run BitBake:** After making sure that the ``classes/base.bbclass``
- file exists, you can run the ``bitbake`` command again: ::
+ file exists, you can run the ``bitbake`` command again::
$ bitbake
Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.
@@ -314,7 +314,7 @@ Following is the complete "Hello World" example.
Minimally, you need a recipe file and a layer configuration file in
your layer. The configuration file needs to be in the ``conf``
directory inside the layer. Use these commands to set up the layer
- and the ``conf`` directory: ::
+ and the ``conf`` directory::
$ cd $HOME
$ mkdir mylayer
@@ -322,12 +322,12 @@ Following is the complete "Hello World" example.
$ mkdir conf
Move to the ``conf`` directory and create a ``layer.conf`` file that has the
- following: ::
+ following::
BBPATH .= ":${LAYERDIR}"
- BBFILES += "${LAYERDIR}/\*.bb"
+ BBFILES += "${LAYERDIR}/*.bb"
BBFILE_COLLECTIONS += "mylayer"
- `BBFILE_PATTERN_mylayer := "^${LAYERDIR_RE}/"
+ BBFILE_PATTERN_mylayer := "^${LAYERDIR_RE}/"
For information on these variables, click on :term:`BBFILES`,
:term:`LAYERDIR`, :term:`BBFILE_COLLECTIONS` or :term:`BBFILE_PATTERN_mylayer <BBFILE_PATTERN>`
@@ -335,7 +335,7 @@ Following is the complete "Hello World" example.
You need to create the recipe file next. Inside your layer at the
top-level, use an editor and create a recipe file named
- ``printhello.bb`` that has the following: ::
+ ``printhello.bb`` that has the following::
DESCRIPTION = "Prints Hello World"
PN = 'printhello'
@@ -356,7 +356,7 @@ Following is the complete "Hello World" example.
follow the links to the glossary.
#. **Run BitBake With a Target:** Now that a BitBake target exists, run
- the command and provide that target: ::
+ the command and provide that target::
$ cd $HOME/hello
$ bitbake printhello
@@ -376,7 +376,7 @@ Following is the complete "Hello World" example.
``hello/conf`` for this example).
Set your working directory to the ``hello/conf`` directory and then
- create the ``bblayers.conf`` file so that it contains the following: ::
+ create the ``bblayers.conf`` file so that it contains the following::
BBLAYERS ?= " \
/home/<you>/mylayer \
@@ -386,7 +386,7 @@ Following is the complete "Hello World" example.
#. **Run BitBake With a Target:** Now that you have supplied the
``bblayers.conf`` file, run the ``bitbake`` command and provide the
- target: ::
+ target::
$ bitbake printhello
Parsing recipes: 100% |##################################################################################|
diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.rst b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.rst
index 6f9d392..35ffb88 100644
--- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.rst
+++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.rst
@@ -27,7 +27,7 @@ Linux software stacks using a task-oriented approach.
Conceptually, BitBake is similar to GNU Make in some regards but has
significant differences:
-- BitBake executes tasks according to provided metadata that builds up
+- BitBake executes tasks according to the provided metadata that builds up
the tasks. Metadata is stored in recipe (``.bb``) and related recipe
"append" (``.bbappend``) files, configuration (``.conf``) and
underlying include (``.inc``) files, and in class (``.bbclass``)
@@ -60,11 +60,10 @@ member Chris Larson split the project into two distinct pieces:
- OpenEmbedded, a metadata set utilized by BitBake
Today, BitBake is the primary basis of the
-`OpenEmbedded <http://www.openembedded.org/>`__ project, which is being
-used to build and maintain Linux distributions such as the `Angstrom
-Distribution <http://www.angstrom-distribution.org/>`__, and which is
-also being used as the build tool for Linux projects such as the `Yocto
-Project <http://www.yoctoproject.org>`__.
+`OpenEmbedded <https://www.openembedded.org/>`__ project, which is being
+used to build and maintain Linux distributions such as the `Poky
+Reference Distribution <https://www.yoctoproject.org/software-item/poky/>`__,
+developed under the umbrella of the `Yocto Project <https://www.yoctoproject.org>`__.
Prior to BitBake, no other build tool adequately met the needs of an
aspiring embedded Linux distribution. All of the build systems used by
@@ -248,13 +247,13 @@ underlying, similarly-named recipe files.
When you name an append file, you can use the "``%``" wildcard character
to allow for matching recipe names. For example, suppose you have an
-append file named as follows: ::
+append file named as follows::
busybox_1.21.%.bbappend
That append file
would match any ``busybox_1.21.``\ x\ ``.bb`` version of the recipe. So,
-the append file would match the following recipe names: ::
+the append file would match the following recipe names::
busybox_1.21.1.bb
busybox_1.21.2.bb
@@ -290,7 +289,7 @@ You can obtain BitBake several different ways:
are using. The metadata is generally backwards compatible but not
forward compatible.
- Here is an example that clones the BitBake repository: ::
+ Here is an example that clones the BitBake repository::
$ git clone git://git.openembedded.org/bitbake
@@ -298,7 +297,7 @@ You can obtain BitBake several different ways:
Git repository into a directory called ``bitbake``. Alternatively,
you can designate a directory after the ``git clone`` command if you
want to call the new directory something other than ``bitbake``. Here
- is an example that names the directory ``bbdev``: ::
+ is an example that names the directory ``bbdev``::
$ git clone git://git.openembedded.org/bitbake bbdev
@@ -317,9 +316,9 @@ You can obtain BitBake several different ways:
method for getting BitBake. Cloning the repository makes it easier
to update as patches are added to the stable branches.
- The following example downloads a snapshot of BitBake version 1.17.0: ::
+ The following example downloads a snapshot of BitBake version 1.17.0::
- $ wget http://git.openembedded.org/bitbake/snapshot/bitbake-1.17.0.tar.gz
+ $ wget https://git.openembedded.org/bitbake/snapshot/bitbake-1.17.0.tar.gz
$ tar zxpvf bitbake-1.17.0.tar.gz
After extraction of the tarball using
@@ -347,7 +346,7 @@ execution examples.
Usage and syntax
----------------
-Following is the usage and syntax for BitBake: ::
+Following is the usage and syntax for BitBake::
$ bitbake -h
Usage: bitbake [options] [recipename/target recipe:do_task ...]
@@ -417,8 +416,8 @@ Following is the usage and syntax for BitBake: ::
-l DEBUG_DOMAINS, --log-domains=DEBUG_DOMAINS
Show debug logging for the specified logging domains
-P, --profile Profile the command and save reports.
- -u UI, --ui=UI The user interface to use (knotty, ncurses or taskexp
- - default knotty).
+ -u UI, --ui=UI The user interface to use (knotty, ncurses, taskexp or
+ teamcity - default knotty).
--token=XMLRPCTOKEN Specify the connection token to be used when
connecting to a remote server.
--revisions-changed Set the exit code depending on whether upstream
@@ -433,6 +432,9 @@ Following is the usage and syntax for BitBake: ::
Environment variable BB_SERVER_TIMEOUT.
--no-setscene Do not run any setscene tasks. sstate will be ignored
and everything needed, built.
+ --skip-setscene Skip setscene tasks if they would be executed. Tasks
+ previously restored from sstate will be kept, unlike
+ --no-setscene
--setscene-only Only run setscene tasks, don't run any real tasks.
--remote-server=REMOTE_SERVER
Connect to the specified server.
@@ -469,11 +471,11 @@ default task, which is "build". BitBake obeys inter-task dependencies
when doing so.
The following command runs the build task, which is the default task, on
-the ``foo_1.0.bb`` recipe file: ::
+the ``foo_1.0.bb`` recipe file::
$ bitbake -b foo_1.0.bb
-The following command runs the clean task on the ``foo.bb`` recipe file: ::
+The following command runs the clean task on the ``foo.bb`` recipe file::
$ bitbake -b foo.bb -c clean
@@ -497,13 +499,13 @@ functionality, or when there are multiple versions of a recipe.
The ``bitbake`` command, when not using "--buildfile" or "-b" only
accepts a "PROVIDES". You cannot provide anything else. By default, a
recipe file generally "PROVIDES" its "packagename" as shown in the
-following example: ::
+following example::
$ bitbake foo
This next example "PROVIDES" the
package name and also uses the "-c" option to tell BitBake to just
-execute the ``do_clean`` task: ::
+execute the ``do_clean`` task::
$ bitbake -c clean foo
@@ -514,7 +516,7 @@ The BitBake command line supports specifying different tasks for
individual targets when you specify multiple targets. For example,
suppose you had two targets (or recipes) ``myfirstrecipe`` and
``mysecondrecipe`` and you needed BitBake to run ``taskA`` for the first
-recipe and ``taskB`` for the second recipe: ::
+recipe and ``taskB`` for the second recipe::
$ bitbake myfirstrecipe:do_taskA mysecondrecipe:do_taskB
@@ -534,13 +536,13 @@ current working directory:
- ``pn-buildlist``: Shows a simple list of targets that are to be
built.
-To stop depending on common depends, use the "-I" depend option and
+To stop depending on common depends, use the ``-I`` depend option and
BitBake omits them from the graph. Leaving this information out can
produce more readable graphs. This way, you can remove from the graph
-``DEPENDS`` from inherited classes such as ``base.bbclass``.
+:term:`DEPENDS` from inherited classes such as ``base.bbclass``.
Here are two examples that create dependency graphs. The second example
-omits depends common in OpenEmbedded from the graph: ::
+omits depends common in OpenEmbedded from the graph::
$ bitbake -g foo
@@ -564,7 +566,7 @@ for two separate targets:
.. image:: figures/bb_multiconfig_files.png
:align: center
-The reason for this required file hierarchy is because the ``BBPATH``
+The reason for this required file hierarchy is because the :term:`BBPATH`
variable is not constructed until the layers are parsed. Consequently,
using the configuration file as a pre-configuration file is not possible
unless it is located in the current working directory.
@@ -582,17 +584,17 @@ accomplished by setting the
configuration files for ``target1`` and ``target2`` defined in the build
directory. The following statement in the ``local.conf`` file both
enables BitBake to perform multiple configuration builds and specifies
-the two extra multiconfigs: ::
+the two extra multiconfigs::
BBMULTICONFIG = "target1 target2"
Once the target configuration files are in place and BitBake has been
enabled to perform multiple configuration builds, use the following
-command form to start the builds: ::
+command form to start the builds::
$ bitbake [mc:multiconfigname:]target [[[mc:multiconfigname:]target] ... ]
-Here is an example for two extra multiconfigs: ``target1`` and ``target2``: ::
+Here is an example for two extra multiconfigs: ``target1`` and ``target2``::
$ bitbake mc::target mc:target1:target mc:target2:target
@@ -613,12 +615,12 @@ multiconfig.
To enable dependencies in a multiple configuration build, you must
declare the dependencies in the recipe using the following statement
-form: ::
+form::
task_or_package[mcdepends] = "mc:from_multiconfig:to_multiconfig:recipe_name:task_on_which_to_depend"
To better show how to use this statement, consider an example with two
-multiconfigs: ``target1`` and ``target2``: ::
+multiconfigs: ``target1`` and ``target2``::
image_task[mcdepends] = "mc:target1:target2:image2:rootfs_task"
@@ -629,7 +631,7 @@ completion of the rootfs_task used to build out image2, which is
associated with the "target2" multiconfig.
Once you set up this dependency, you can build the "target1" multiconfig
-using a BitBake command as follows: ::
+using a BitBake command as follows::
$ bitbake mc:target1:image1
@@ -639,7 +641,7 @@ the ``rootfs_task`` for the "target2" multiconfig build.
Having a recipe depend on the root filesystem of another build might not
seem that useful. Consider this change to the statement in the image1
-recipe: ::
+recipe::
image_task[mcdepends] = "mc:target1:target2:image2:image_task"
diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.rst b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.rst
index d4190c2..3378216 100644
--- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.rst
+++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.rst
@@ -26,7 +26,7 @@ assignment. ::
VARIABLE = "value"
As expected, if you include leading or
-trailing spaces as part of an assignment, the spaces are retained: ::
+trailing spaces as part of an assignment, the spaces are retained::
VARIABLE = " value"
VARIABLE = "value "
@@ -40,7 +40,7 @@ blank space (i.e. these are not the same values). ::
You can use single quotes instead of double quotes when setting a
variable's value. Doing so allows you to use values that contain the
-double quote character: ::
+double quote character::
VARIABLE = 'I have a " in my value'
@@ -77,7 +77,7 @@ occurs, you can use BitBake to check the actual value of the suspect
variable. You can make these checks for both configuration and recipe
level changes:
-- For configuration changes, use the following: ::
+- For configuration changes, use the following::
$ bitbake -e
@@ -91,9 +91,10 @@ level changes:
Variables that are exported to the environment are preceded by the
string "export" in the command's output.
-- For recipe changes, use the following: ::
+- To find changes to a given variable in a specific recipe, use the
+ following::
- $ bitbake recipe -e \| grep VARIABLE="
+ $ bitbake recipename -e | grep VARIABLENAME=\"
This command checks to see if the variable actually makes
it into a specific recipe.
@@ -103,20 +104,20 @@ Line Joining
Outside of :ref:`functions <bitbake-user-manual/bitbake-user-manual-metadata:functions>`,
BitBake joins any line ending in
-a backslash character ("\") with the following line before parsing
-statements. The most common use for the "\" character is to split
-variable assignments over multiple lines, as in the following example: ::
+a backslash character ("\\") with the following line before parsing
+statements. The most common use for the "\\" character is to split
+variable assignments over multiple lines, as in the following example::
FOO = "bar \
baz \
qaz"
-Both the "\" character and the newline
+Both the "\\" character and the newline
character that follow it are removed when joining lines. Thus, no
newline characters end up in the value of ``FOO``.
Consider this additional example where the two assignments both assign
-"barbaz" to ``FOO``: ::
+"barbaz" to ``FOO``::
FOO = "barbaz"
FOO = "bar\
@@ -124,7 +125,7 @@ Consider this additional example where the two assignments both assign
.. note::
- BitBake does not interpret escape sequences like "\n" in variable
+ BitBake does not interpret escape sequences like "\\n" in variable
values. For these to have an effect, the value must be passed to some
utility that interprets escape sequences, such as
``printf`` or ``echo -n``.
@@ -149,7 +150,7 @@ The "=" operator does not immediately expand variable references in the
right-hand side. Instead, expansion is deferred until the variable
assigned to is actually used. The result depends on the current values
of the referenced variables. The following example should clarify this
-behavior: ::
+behavior::
A = "${B} baz"
B = "${C} bar"
@@ -158,7 +159,7 @@ behavior: ::
C = "qux"
*At this point, ${A} equals "qux bar baz"*
B = "norf"
- *At this point, ${A} equals "norf baz"\*
+ *At this point, ${A} equals "norf baz"*
Contrast this behavior with the
:ref:`bitbake-user-manual/bitbake-user-manual-metadata:immediate variable
@@ -177,7 +178,7 @@ Setting a default value (?=)
You can use the "?=" operator to achieve a "softer" assignment for a
variable. This type of assignment allows you to define a variable if it
is undefined when the statement is parsed, but to leave the value alone
-if the variable has a value. Here is an example: ::
+if the variable has a value. Here is an example::
A ?= "aval"
@@ -194,28 +195,51 @@ value. However, if ``A`` is not set, the variable is set to "aval".
Setting a weak default value (??=)
----------------------------------
-It is possible to use a "weaker" assignment than in the previous section
-by using the "??=" operator. This assignment behaves identical to "?="
-except that the assignment is made at the end of the parsing process
-rather than immediately. Consequently, when multiple "??=" assignments
-exist, the last one is used. Also, any "=" or "?=" assignment will
-override the value set with "??=". Here is an example: ::
+The weak default value of a variable is the value which that variable
+will expand to if no value has been assigned to it via any of the other
+assignment operators. The "??=" operator takes effect immediately, replacing
+any previously defined weak default value. Here is an example::
- A ??= "somevalue"
- A ??= "someothervalue"
+ W ??= "x"
+ A := "${W}" # Immediate variable expansion
+ W ??= "y"
+ B := "${W}" # Immediate variable expansion
+ W ??= "z"
+ C = "${W}"
+ W ?= "i"
-If ``A`` is set before the above statements are
-parsed, the variable retains its value. If ``A`` is not set, the
-variable is set to "someothervalue".
+After parsing we will have::
-Again, this assignment is a "lazy" or "weak" assignment because it does
-not occur until the end of the parsing process.
+ A = "x"
+ B = "y"
+ C = "i"
+ W = "i"
+
+Appending and prepending non-override style will not substitute the weak
+default value, which means that after parsing::
+
+ W ??= "x"
+ W += "y"
+
+we will have::
+
+ W = " y"
+
+On the other hand, override-style appends/prepends/removes are applied after
+any active weak default value has been substituted::
+
+ W ??= "x"
+ W:append = "y"
+
+After parsing we will have::
+
+ W = "xy"
Immediate variable expansion (:=)
---------------------------------
The ":=" operator results in a variable's contents being expanded
-immediately, rather than when the variable is actually used: ::
+immediately, rather than when the variable is actually used::
T = "123"
A := "test ${T}"
@@ -225,7 +249,7 @@ immediately, rather than when the variable is actually used: ::
C := "${C}append"
In this example, ``A`` contains "test 123", even though the final value
-of ``T`` is "456". The variable ``B`` will end up containing "456
+of ``T`` is "456". The variable ``B`` will end up containing "456
cvalappend". This is because references to undefined variables are
preserved as is during (immediate) expansion. This is in contrast to GNU
Make, where undefined variables expand to nothing. The variable ``C``
@@ -241,14 +265,14 @@ the "+=" and "=+" operators. These operators insert a space between the
current value and prepended or appended value.
These operators take immediate effect during parsing. Here are some
-examples: ::
+examples::
B = "bval"
B += "additionaldata"
C = "cval"
C =+ "test"
-The variable ``B`` contains "bval additionaldata" and ``C`` contains "test
+The variable ``B`` contains "bval additionaldata" and ``C`` contains "test
cval".
.. _appending-and-prepending-without-spaces:
@@ -260,14 +284,14 @@ If you want to append or prepend values without an inserted space, use
the ".=" and "=." operators.
These operators take immediate effect during parsing. Here are some
-examples: ::
+examples::
B = "bval"
B .= "additionaldata"
C = "cval"
C =. "test"
-The variable ``B`` contains "bvaladditionaldata" and ``C`` contains
+The variable ``B`` contains "bvaladditionaldata" and ``C`` contains
"testcval".
Appending and Prepending (Override Style Syntax)
@@ -278,16 +302,16 @@ style syntax. When you use this syntax, no spaces are inserted.
These operators differ from the ":=", ".=", "=.", "+=", and "=+"
operators in that their effects are applied at variable expansion time
-rather than being immediately applied. Here are some examples: ::
+rather than being immediately applied. Here are some examples::
B = "bval"
- B_append = " additional data"
+ B:append = " additional data"
C = "cval"
- C_prepend = "additional data "
+ C:prepend = "additional data "
D = "dval"
- D_append = "additional data"
+ D:append = "additional data"
-The variable ``B``
+The variable ``B``
becomes "bval additional data" and ``C`` becomes "additional data cval".
The variable ``D`` becomes "dvaladditional data".
@@ -309,13 +333,13 @@ syntax. Specifying a value for removal causes all occurrences of that
value to be removed from the variable.
When you use this syntax, BitBake expects one or more strings.
-Surrounding spaces and spacing are preserved. Here is an example: ::
+Surrounding spaces and spacing are preserved. Here is an example::
FOO = "123 456 789 123456 123 456 123 456"
- FOO_remove = "123"
- FOO_remove = "456"
+ FOO:remove = "123"
+ FOO:remove = "456"
FOO2 = " abc def ghi abcdef abc def abc def def"
- FOO2_remove = "\
+ FOO2:remove = "\
def \
abc \
ghi \
@@ -324,40 +348,40 @@ Surrounding spaces and spacing are preserved. Here is an example: ::
The variable ``FOO`` becomes
" 789 123456 " and ``FOO2`` becomes " abcdef ".
-Like "_append" and "_prepend", "_remove" is applied at variable
+Like ":append" and ":prepend", ":remove" is applied at variable
expansion time.
Override Style Operation Advantages
-----------------------------------
-An advantage of the override style operations "_append", "_prepend", and
-"_remove" as compared to the "+=" and "=+" operators is that the
+An advantage of the override style operations ":append", ":prepend", and
+":remove" as compared to the "+=" and "=+" operators is that the
override style operators provide guaranteed operations. For example,
consider a class ``foo.bbclass`` that needs to add the value "val" to
-the variable ``FOO``, and a recipe that uses ``foo.bbclass`` as follows: ::
+the variable ``FOO``, and a recipe that uses ``foo.bbclass`` as follows::
inherit foo
FOO = "initial"
If ``foo.bbclass`` uses the "+=" operator,
as follows, then the final value of ``FOO`` will be "initial", which is
-not what is desired: ::
+not what is desired::
FOO += "val"
If, on the other hand, ``foo.bbclass``
-uses the "_append" operator, then the final value of ``FOO`` will be
-"initial val", as intended: ::
+uses the ":append" operator, then the final value of ``FOO`` will be
+"initial val", as intended::
- FOO_append = " val"
+ FOO:append = " val"
.. note::
- It is never necessary to use "+=" together with "_append". The following
- sequence of assignments appends "barbaz" to FOO: ::
+ It is never necessary to use "+=" together with ":append". The following
+ sequence of assignments appends "barbaz" to FOO::
- FOO_append = "bar"
- FOO_append = "baz"
+ FOO:append = "bar"
+ FOO:append = "baz"
The only effect of changing the second assignment in the previous
@@ -378,10 +402,10 @@ You can find more out about variable flags in general in the
You can define, append, and prepend values to variable flags. All the
standard syntax operations previously mentioned work for variable flags
-except for override style syntax (i.e. "_prepend", "_append", and
-"_remove").
+except for override style syntax (i.e. ":prepend", ":append", and
+":remove").
-Here are some examples showing how to set variable flags: ::
+Here are some examples showing how to set variable flags::
FOO[a] = "abc"
FOO[b] = "123"
@@ -393,7 +417,7 @@ respectively. The ``[a]`` flag becomes "abc 456".
No need exists to pre-define variable flags. You can simply start using
them. One extremely common application is to attach some brief
-documentation to a BitBake variable as follows: ::
+documentation to a BitBake variable as follows::
CACHE[doc] = "The directory holding the cache of the metadata."
@@ -401,7 +425,7 @@ Inline Python Variable Expansion
--------------------------------
You can use inline Python variable expansion to set variables. Here is
-an example: ::
+an example::
DATE = "${@time.strftime('%Y%m%d',time.gmtime())}"
@@ -410,21 +434,21 @@ This example results in the ``DATE`` variable being set to the current date.
Probably the most common use of this feature is to extract the value of
variables from BitBake's internal data dictionary, ``d``. The following
lines select the values of a package name and its version number,
-respectively: ::
+respectively::
- PN = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
- PV = "${@bb.parse.BBHandler.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
+ PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0] or 'defaultpkgname'}"
+ PV = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[1] or '1.0'}"
.. note::
Inline Python expressions work just like variable expansions insofar as the
"=" and ":=" operators are concerned. Given the following assignment, foo()
- is called each time FOO is expanded: ::
+ is called each time FOO is expanded::
FOO = "${@foo()}"
Contrast this with the following immediate assignment, where foo() is only
- called once, while the assignment is parsed: ::
+ called once, while the assignment is parsed::
FOO := "${@foo()}"
@@ -437,7 +461,7 @@ Unsetting variables
It is possible to completely remove a variable or a variable flag from
BitBake's internal data dictionary by using the "unset" keyword. Here is
-an example: ::
+an example::
unset DATE
unset do_fetch[noexec]
@@ -452,7 +476,7 @@ When specifying pathnames for use with BitBake, do not use the tilde
cause BitBake to not recognize the path since BitBake does not expand
this character in the same way a shell would.
-Instead, provide a fuller path as the following example illustrates: ::
+Instead, provide a fuller path as the following example illustrates::
BBLAYERS ?= " \
/home/scott-lenovo/LayerA \
@@ -463,7 +487,7 @@ Exporting Variables to the Environment
You can export variables to the environment of running tasks by using
the ``export`` keyword. For example, in the following example, the
-``do_foo`` task prints "value from the environment" when run: ::
+``do_foo`` task prints "value from the environment" when run::
export ENV_VARIABLE
ENV_VARIABLE = "value from the environment"
@@ -481,7 +505,7 @@ It does not matter whether ``export ENV_VARIABLE`` appears before or
after assignments to ``ENV_VARIABLE``.
It is also possible to combine ``export`` with setting a value for the
-variable. Here is an example: ::
+variable. Here is an example::
export ENV_VARIABLE = "variable-value"
@@ -496,78 +520,78 @@ Conditional Syntax (Overrides)
BitBake uses :term:`OVERRIDES` to control what
variables are overridden after BitBake parses recipes and configuration
-files. This section describes how you can use ``OVERRIDES`` as
+files. This section describes how you can use :term:`OVERRIDES` as
conditional metadata, talks about key expansion in relationship to
-``OVERRIDES``, and provides some examples to help with understanding.
+:term:`OVERRIDES`, and provides some examples to help with understanding.
Conditional Metadata
--------------------
-You can use ``OVERRIDES`` to conditionally select a specific version of
+You can use :term:`OVERRIDES` to conditionally select a specific version of
a variable and to conditionally append or prepend the value of a
variable.
.. note::
- Overrides can only use lower-case characters. Additionally,
- underscores are not permitted in override names as they are used to
+ Overrides can only use lower-case characters, digits and dashes.
+ In particular, colons are not permitted in override names as they are used to
separate overrides from each other and from the variable name.
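+   For instance, ``qemux86-64`` is a valid override name, whereas names
+   containing colons, underscores or upper-case characters are not.
+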
-- *Selecting a Variable:* The ``OVERRIDES`` variable is a
+- *Selecting a Variable:* The :term:`OVERRIDES` variable is a
colon-character-separated list that contains items for which you want
to satisfy conditions. Thus, if you have a variable that is
- conditional on "arm", and "arm" is in ``OVERRIDES``, then the
+ conditional on "arm", and "arm" is in :term:`OVERRIDES`, then the
"arm"-specific version of the variable is used rather than the
- non-conditional version. Here is an example: ::
+ non-conditional version. Here is an example::
OVERRIDES = "architecture:os:machine"
TEST = "default"
- TEST_os = "osspecific"
- TEST_nooverride = "othercondvalue"
+ TEST:os = "osspecific"
+ TEST:nooverride = "othercondvalue"
- In this example, the ``OVERRIDES``
+ In this example, the :term:`OVERRIDES`
variable lists three overrides: "architecture", "os", and "machine".
The variable ``TEST`` by itself has a default value of "default". You
select the os-specific version of the ``TEST`` variable by appending
- the "os" override to the variable (i.e. ``TEST_os``).
+ the "os" override to the variable (i.e. ``TEST:os``).
To better understand this, consider a practical example that assumes
an OpenEmbedded metadata-based Linux kernel recipe file. The
following lines from the recipe file first set the kernel branch
variable ``KBRANCH`` to a default value, then conditionally override
- that value based on the architecture of the build: ::
+ that value based on the architecture of the build::
KBRANCH = "standard/base"
- KBRANCH_qemuarm = "standard/arm-versatile-926ejs"
- KBRANCH_qemumips = "standard/mti-malta32"
- KBRANCH_qemuppc = "standard/qemuppc"
- KBRANCH_qemux86 = "standard/common-pc/base"
- KBRANCH_qemux86-64 = "standard/common-pc-64/base"
- KBRANCH_qemumips64 = "standard/mti-malta64"
+ KBRANCH:qemuarm = "standard/arm-versatile-926ejs"
+ KBRANCH:qemumips = "standard/mti-malta32"
+ KBRANCH:qemuppc = "standard/qemuppc"
+ KBRANCH:qemux86 = "standard/common-pc/base"
+ KBRANCH:qemux86-64 = "standard/common-pc-64/base"
+ KBRANCH:qemumips64 = "standard/mti-malta64"
- *Appending and Prepending:* BitBake also supports append and prepend
operations to variable values based on whether a specific item is
- listed in ``OVERRIDES``. Here is an example: ::
+ listed in :term:`OVERRIDES`. Here is an example::
DEPENDS = "glibc ncurses"
OVERRIDES = "machine:local"
- DEPENDS_append_machine = "libmad"
+ DEPENDS:append:machine = "libmad"
- In this example, ``DEPENDS`` becomes "glibc ncurses libmad".
+ In this example, :term:`DEPENDS` becomes "glibc ncurses libmad".
Again, using an OpenEmbedded metadata-based kernel recipe file as an
example, the following lines will conditionally append to the
- ``KERNEL_FEATURES`` variable based on the architecture: ::
+ ``KERNEL_FEATURES`` variable based on the architecture::
- KERNEL_FEATURES_append = " ${KERNEL_EXTRA_FEATURES}"
- KERNEL_FEATURES_append_qemux86=" cfg/sound.scc cfg/paravirt_kvm.scc"
- KERNEL_FEATURES_append_qemux86-64=" cfg/sound.scc cfg/paravirt_kvm.scc"
+ KERNEL_FEATURES:append = " ${KERNEL_EXTRA_FEATURES}"
+ KERNEL_FEATURES:append:qemux86=" cfg/sound.scc cfg/paravirt_kvm.scc"
+ KERNEL_FEATURES:append:qemux86-64=" cfg/sound.scc cfg/paravirt_kvm.scc"
- *Setting a Variable for a Single Task:* BitBake supports setting a
- variable just for the duration of a single task. Here is an example: ::
+ variable just for the duration of a single task. Here is an example::
- FOO_task-configure = "val 1"
- FOO_task-compile = "val 2"
+ FOO:task-configure = "val 1"
+ FOO:task-compile = "val 2"
In the
previous example, ``FOO`` has the value "val 1" while the
@@ -580,15 +604,25 @@ variable.
``do_compile`` task.
You can also use this syntax with other combinations (e.g.
- "``_prepend``") as shown in the following example: ::
+ "``:prepend``") as shown in the following example::
+
+ EXTRA_OEMAKE:prepend:task-compile = "${PARALLEL_MAKE} "
+
+.. note::
+
+ Before BitBake 1.52 (Honister 3.4), the syntax for :term:`OVERRIDES`
+ used ``_`` instead of ``:``, so you will still find a lot of documentation
+ using ``_append``, ``_prepend``, and ``_remove``, for example.
- EXTRA_OEMAKE_prepend_task-compile = "${PARALLEL_MAKE} "
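+   For instance, an append previously written as ``DEPENDS_append = " libfoo"``
+   is now written as ``DEPENDS:append = " libfoo"``.
+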
+ For details, see the
+ :yocto_docs:`Overrides Syntax Changes </migration-guides/migration-3.4.html#override-syntax-changes>`
+ section in the Yocto Project manual migration notes.
Key Expansion
-------------
Key expansion happens when the BitBake datastore is finalized. To better
-understand this, consider the following example: ::
+understand this, consider the following example::
A${B} = "X"
B = "2"
@@ -612,57 +646,57 @@ users.
There is often confusion concerning the order in which overrides and
various "append" operators take effect. Recall that an append or prepend
-operation using "_append" and "_prepend" does not result in an immediate
+operation using ":append" and ":prepend" does not result in an immediate
assignment as would "+=", ".=", "=+", or "=.". Consider the following
-example: ::
+example::
OVERRIDES = "foo"
A = "Z"
- A_foo_append = "X"
+ A:foo:append = "X"
For this case,
``A`` is unconditionally set to "Z" and "X" is unconditionally and
-immediately appended to the variable ``A_foo``. Because overrides have
-not been applied yet, ``A_foo`` is set to "X" due to the append and
+immediately appended to the variable ``A:foo``. Because overrides have
+not been applied yet, ``A:foo`` is set to "X" due to the append and
``A`` simply equals "Z".
Applying overrides, however, changes things. Since "foo" is listed in
-``OVERRIDES``, the conditional variable ``A`` is replaced with the "foo"
-version, which is equal to "X". So effectively, ``A_foo`` replaces
+:term:`OVERRIDES`, the conditional variable ``A`` is replaced with the "foo"
+version, which is equal to "X". So effectively, ``A:foo`` replaces
``A``.
-This next example changes the order of the override and the append: ::
+This next example changes the order of the override and the append::
OVERRIDES = "foo"
A = "Z"
- A_append_foo = "X"
+ A:append:foo = "X"
For this case, before
-overrides are handled, ``A`` is set to "Z" and ``A_append_foo`` is set
+overrides are handled, ``A`` is set to "Z" and ``A:append:foo`` is set
to "X". Once the override for "foo" is applied, however, ``A`` gets
appended with "X". Consequently, ``A`` becomes "ZX". Notice that spaces
are not appended.
This next example has the order of the appends and overrides reversed
-back as in the first example: ::
+back as in the first example::
OVERRIDES = "foo"
A = "Y"
- A_foo_append = "Z"
- A_foo_append = "X"
+ A:foo:append = "Z"
+ A:foo:append = "X"
For this case, before any overrides are resolved,
``A`` is set to "Y" using an immediate assignment. After this immediate
-assignment, ``A_foo`` is set to "Z", and then further appended with "X"
+assignment, ``A:foo`` is set to "Z", and then further appended with "X"
leaving the variable set to "ZX". Finally, applying the override for
"foo" results in the conditional variable ``A`` becoming "ZX" (i.e.
-``A`` is replaced with ``A_foo``).
+``A`` is replaced with ``A:foo``).
-This final example mixes in some varying operators: ::
+This final example mixes in some varying operators::
A = "1"
- A_append = "2"
- A_append = "3"
+ A:append = "2"
+ A:append = "3"
A += "4"
A .= "5"
@@ -670,7 +704,7 @@ For this case, the type of append
operators are affecting the order of assignments as BitBake passes
through the code multiple times. Initially, ``A`` is set to "1 45"
because of the three statements that use immediate operators. After
-these assignments are made, BitBake applies the "_append" operations.
+these assignments are made, BitBake applies the ":append" operations.
Those operations result in ``A`` becoming "1 4523".
Sharing Functionality
@@ -686,7 +720,7 @@ share the task.
This section presents the mechanisms BitBake provides to allow you to
share functionality between recipes. Specifically, the mechanisms
-include ``include``, ``inherit``, ``INHERIT``, and ``require``
+include ``include``, ``inherit``, :term:`INHERIT`, and ``require``
directives.
Locating Include and Class Files
@@ -702,7 +736,7 @@ current directory for ``include`` and ``require`` directives.
In order for include and class files to be found by BitBake, they need
to be located in a "classes" subdirectory that can be found in
-``BBPATH``.
+:term:`BBPATH`.
``inherit`` Directive
---------------------
@@ -720,12 +754,12 @@ file and then have your recipe inherit that class file.
As an example, your recipes could use the following directive to inherit
an ``autotools.bbclass`` file. The class file would contain common
-functionality for using Autotools that could be shared across recipes: ::
+functionality for using Autotools that could be shared across recipes::
inherit autotools
In this case, BitBake would search for the directory
-``classes/autotools.bbclass`` in ``BBPATH``.
+``classes/autotools.bbclass`` in :term:`BBPATH`.
.. note::
@@ -734,7 +768,7 @@ In this case, BitBake would search for the directory
If you want to use the directive to inherit multiple classes, separate
them with spaces. The following example shows how to inherit both the
-``buildhistory`` and ``rm_work`` classes: ::
+``buildhistory`` and ``rm_work`` classes::
inherit buildhistory rm_work
@@ -742,19 +776,19 @@ An advantage with the inherit directive as compared to both the
:ref:`include <bitbake-user-manual/bitbake-user-manual-metadata:\`\`include\`\` directive>` and :ref:`require <bitbake-user-manual/bitbake-user-manual-metadata:\`\`require\`\` directive>`
directives is that you can inherit class files conditionally. You can
accomplish this by using a variable expression after the ``inherit``
-statement. Here is an example: ::
+statement. Here is an example::
inherit ${VARNAME}
If ``VARNAME`` is
going to be set, it needs to be set before the ``inherit`` statement is
parsed. One way to achieve a conditional inherit in this case is to use
-overrides: ::
+overrides::
VARIABLE = ""
- VARIABLE_someoverride = "myclass"
+ VARIABLE:someoverride = "myclass"
-Another method is by using anonymous Python. Here is an example: ::
+Another method is by using anonymous Python. Here is an example::
python () {
if condition == value:
@@ -764,7 +798,7 @@ Another method is by using anonymous Python. Here is an example: ::
}
Alternatively, you could use an in-line Python expression in the
-following form: ::
+following form::
inherit ${@'classname' if condition else ''}
inherit ${@functionname(params)}
@@ -780,7 +814,7 @@ BitBake understands the ``include`` directive. This directive causes
BitBake to parse whatever file you specify, and to insert that file at
that location. The directive is much like its equivalent in Make except
that if the path specified on the include line is a relative path,
-BitBake locates the first file it can find within ``BBPATH``.
+BitBake locates the first file it can find within :term:`BBPATH`.
The include directive is a more generic method of including
functionality as compared to the :ref:`inherit <bitbake-user-manual/bitbake-user-manual-metadata:\`\`inherit\`\` directive>`
@@ -790,7 +824,7 @@ encapsulated functionality or configuration that does not suit a
``.bbclass`` file.
As an example, suppose you needed a recipe to include some self-test
-definitions: ::
+definitions::
include test_defs.inc
@@ -822,7 +856,7 @@ does not suit a ``.bbclass`` file.
Similar to how BitBake handles :ref:`include <bitbake-user-manual/bitbake-user-manual-metadata:\`\`include\`\` directive>`, if
the path specified on the require line is a relative path, BitBake
-locates the first file it can find within ``BBPATH``.
+locates the first file it can find within :term:`BBPATH`.
As an example, suppose you have two versions of a recipe (e.g.
``foo_1.2.2.bb`` and ``foo_2.0.0.bb``) where each version contains some
@@ -831,7 +865,7 @@ include file named ``foo.inc`` that contains the common definitions
needed to build "foo". You need to be sure ``foo.inc`` is located in the
same directory as your two recipe files as well. Once these conditions
are set up, you can share the functionality using a ``require``
-directive from within each recipe: ::
+directive from within each recipe::
require foo.inc
@@ -844,14 +878,14 @@ class. BitBake only supports this directive when used within a
configuration file.
As an example, suppose you needed to inherit a class file called
-``abc.bbclass`` from a configuration file as follows: ::
+``abc.bbclass`` from a configuration file as follows::
INHERIT += "abc"
This configuration directive causes the named class to be inherited at
the point of the directive during parsing. As with the ``inherit``
directive, the ``.bbclass`` file must be located in a "classes"
-subdirectory in one of the directories specified in ``BBPATH``.
+subdirectory in one of the directories specified in :term:`BBPATH`.
.. note::
@@ -862,7 +896,7 @@ subdirectory in one of the directories specified in ``BBPATH``.
If you want to use the directive to inherit multiple classes, you can
provide them on the same line in the ``local.conf`` file. Use spaces to
separate the classes. The following example shows how to inherit both
-the ``autotools`` and ``pkgconfig`` classes: ::
+the ``autotools`` and ``pkgconfig`` classes::
INHERIT += "autotools pkgconfig"
@@ -893,9 +927,9 @@ Regardless of the type of function, you can only define them in class
Shell Functions
---------------
-Functions written in shell script and executed either directly as
+Functions written in shell script are executed either directly as
functions, tasks, or both. They can also be called by other shell
-functions. Here is an example shell function definition: ::
+functions. Here is an example shell function definition::
some_function () {
echo "Hello World"
@@ -907,19 +941,19 @@ rules. The scripts are executed by ``/bin/sh``, which may not be a bash
shell but might be something such as ``dash``. You should not use
Bash-specific script (bashisms).
-Overrides and override-style operators like ``_append`` and ``_prepend``
+Overrides and override-style operators like ``:append`` and ``:prepend``
can also be applied to shell functions. Most commonly, this application
would be used in a ``.bbappend`` file to modify functions in the main
recipe. It can also be used to modify functions inherited from classes.
-As an example, consider the following: ::
+As an example, consider the following::
do_foo() {
bbplain first
fn
}
- fn_prepend() {
+ fn:prepend() {
bbplain second
}
@@ -927,11 +961,11 @@ As an example, consider the following: ::
bbplain third
}
- do_foo_append() {
+ do_foo:append() {
bbplain fourth
}
-Running ``do_foo`` prints the following: ::
+Running ``do_foo`` prints the following::
recipename do_foo: first
recipename do_foo: second
@@ -943,7 +977,7 @@ Running ``do_foo`` prints the following: ::
Overrides and override-style operators can be applied to any shell
function, not just :ref:`tasks <bitbake-user-manual/bitbake-user-manual-metadata:tasks>`.
-You can use the ``bitbake -e`` recipename command to view the final
+You can use the ``bitbake -e recipename`` command to view the final
assembled function after all overrides have been applied.
BitBake-Style Python Functions
@@ -952,7 +986,7 @@ BitBake-Style Python Functions
These functions are written in Python and executed by BitBake or other
Python functions using ``bb.build.exec_func()``.
-An example BitBake function is: ::
+An example BitBake function is::
python some_python_function () {
d.setVar("TEXT", "Hello World")
@@ -975,9 +1009,9 @@ import these modules. Also in these types of functions, the datastore
Similar to shell functions, you can also apply overrides and
override-style operators to BitBake-style Python functions.
-As an example, consider the following: ::
+As an example, consider the following::
- python do_foo_prepend() {
+ python do_foo:prepend() {
bb.plain("first")
}
@@ -985,17 +1019,17 @@ As an example, consider the following: ::
bb.plain("second")
}
- python do_foo_append() {
+ python do_foo:append() {
bb.plain("third")
}
-Running ``do_foo`` prints the following: ::
+Running ``do_foo`` prints the following::
recipename do_foo: first
recipename do_foo: second
recipename do_foo: third
-You can use the ``bitbake -e`` recipename command to view
+You can use the ``bitbake -e recipename`` command to view
the final assembled function after all overrides have been applied.
Python Functions
@@ -1004,7 +1038,7 @@ Python Functions
These functions are written in Python and are executed by other Python
code. Examples of Python functions are utility functions that you intend
to call from in-line Python or from within other Python functions. Here
-is an example: ::
+is an example::
def get_depends(d):
if d.getVar('SOMECONDITION'):
@@ -1015,7 +1049,7 @@ is an example: ::
SOMECONDITION = "1"
DEPENDS = "${@get_depends(d)}"
-This would result in ``DEPENDS`` containing ``dependencywithcond``.
+This would result in :term:`DEPENDS` containing ``dependencywithcond``.
Here are some things to know about Python functions:
@@ -1056,7 +1090,7 @@ functions and regular Python functions defined with "def":
- Regular Python functions are called with the usual Python syntax.
BitBake-style Python functions are usually tasks and are called
directly by BitBake, but can also be called manually from Python code
- by using the ``bb.build.exec_func()`` function. Here is an example: ::
+ by using the ``bb.build.exec_func()`` function. Here is an example::
bb.build.exec_func("my_bitbake_style_function", d)
@@ -1094,7 +1128,7 @@ Sometimes it is useful to set variables or perform other operations
programmatically during parsing. To do this, you can define special
Python functions, called anonymous Python functions, that run at the end
of parsing. For example, the following conditionally sets a variable
-based on the value of another variable: ::
+based on the value of another variable::
python () {
if d.getVar('SOMEVAR') == 'value':
@@ -1107,7 +1141,7 @@ the name "__anonymous", rather than no name.
Anonymous Python functions always run at the end of parsing, regardless
of where they are defined. If a recipe contains many anonymous
functions, they run in the same order as they are defined within the
-recipe. As an example, consider the following snippet: ::
+recipe. As an example, consider the following snippet::
python () {
d.setVar('FOO', 'foo 2')
@@ -1122,7 +1156,7 @@ recipe. As an example, consider the following snippet: ::
BAR = "bar 1"
The previous example is conceptually
-equivalent to the following snippet: ::
+equivalent to the following snippet::
FOO = "foo 1"
BAR = "bar 1"
@@ -1134,12 +1168,12 @@ equivalent to the following snippet: ::
values set for the variables within the anonymous functions become
available to tasks, which always run after parsing.
-Overrides and override-style operators such as "``_append``" are applied
+Overrides and override-style operators such as "``:append``" are applied
before anonymous functions run. In the following example, ``FOO`` ends
-up with the value "foo from anonymous": ::
+up with the value "foo from anonymous"::
FOO = "foo"
- FOO_append = " from outside"
+ FOO:append = " from outside"
python () {
d.setVar("FOO", "foo from anonymous")
@@ -1164,7 +1198,7 @@ To understand the benefits of this feature, consider the basic scenario
where a class defines a task function and your recipe inherits the
class. In this basic scenario, your recipe inherits the task function as
defined in the class. If desired, your recipe can add to the start and
-end of the function by using the "_prepend" or "_append" operations
+end of the function by using the ":prepend" or ":append" operations
respectively, or it can redefine the function completely. However, if it
redefines the function, there is no means for it to call the class
version of the function. ``EXPORT_FUNCTIONS`` provides a mechanism that
@@ -1173,24 +1207,24 @@ version of the function.
To make use of this technique, you need the following things in place:
-- The class needs to define the function as follows: ::
+- The class needs to define the function as follows::
classname_functionname
For example, if you have a class file
``bar.bbclass`` and a function named ``do_foo``, the class must
- define the function as follows: ::
+ define the function as follows::
bar_do_foo
- The class needs to contain the ``EXPORT_FUNCTIONS`` statement as
- follows: ::
+ follows::
EXPORT_FUNCTIONS functionname
For example, continuing with
the same example, the statement in the ``bar.bbclass`` would be as
- follows: ::
+ follows::
EXPORT_FUNCTIONS do_foo
@@ -1199,7 +1233,7 @@ To make use of this technique, you need the following things in place:
class version of the function, it should call ``bar_do_foo``.
Assuming ``do_foo`` was a shell function and ``EXPORT_FUNCTIONS`` was
used as above, the recipe's function could conditionally call the
- class version of the function as follows: ::
+ class version of the function as follows::
do_foo() {
if [ somecondition ] ; then
@@ -1233,7 +1267,7 @@ Tasks are either :ref:`shell functions <bitbake-user-manual/bitbake-user-manual-
that have been promoted to tasks by using the ``addtask`` command. The
``addtask`` command can also optionally describe dependencies between
the task and other tasks. Here is an example that shows how to define a
-task and declare some dependencies: ::
+task and declare some dependencies::
python do_printdate () {
import time
@@ -1264,12 +1298,12 @@ Additionally, the ``do_printdate`` task becomes dependent upon the
rerun for experimentation purposes, you can make BitBake always
consider the task "out-of-date" by using the
:ref:`[nostamp] <bitbake-user-manual/bitbake-user-manual-metadata:Variable Flags>`
- variable flag, as follows: ::
+ variable flag, as follows::
do_printdate[nostamp] = "1"
You can also explicitly run the task and provide the
- -f option as follows: ::
+ -f option as follows::
$ bitbake recipe -c printdate -f
@@ -1278,7 +1312,7 @@ Additionally, the ``do_printdate`` task becomes dependent upon the
name.
You might wonder about the practical effects of using ``addtask``
-without specifying any dependencies as is done in the following example: ::
+without specifying any dependencies as is done in the following example::
addtask printdate
@@ -1286,7 +1320,7 @@ In this example, assuming dependencies have not been
added through some other means, the only way to run the task is by
explicitly selecting it with ``bitbake recipe -c printdate``. You
can use the ``do_listtasks`` task to list all tasks defined in a recipe
-as shown in the following example: ::
+as shown in the following example::
$ bitbake recipe -c listtasks
@@ -1312,7 +1346,7 @@ Deleting a Task
As well as being able to add tasks, you can delete them. Simply use the
``deltask`` command to delete a task. For example, to delete the example
-task used in the previous sections, you would use: ::
+task used in the previous sections, you would use::
deltask printdate
@@ -1328,7 +1362,7 @@ to run before ``do_a``.
If you want dependencies such as these to remain intact, use the
``[noexec]`` varflag to disable the task instead of using the
-``deltask`` command to delete it: ::
+``deltask`` command to delete it::
do_b[noexec] = "1"
@@ -1342,8 +1376,8 @@ the build machine cannot influence the build.
.. note::
By default, BitBake cleans the environment to include only those
- things exported or listed in its whitelist to ensure that the build
- environment is reproducible and consistent. You can prevent this
+ things exported or listed in its passthrough list to ensure that the
+ build environment is reproducible and consistent. You can prevent this
"cleaning" by setting the :term:`BB_PRESERVE_ENV` variable.
Consequently, if you do want something to get passed into the build task
@@ -1351,14 +1385,14 @@ environment, you must take these two steps:
#. Tell BitBake to load what you want from the environment into the
datastore. You can do so through the
- :term:`BB_ENV_WHITELIST` and
- :term:`BB_ENV_EXTRAWHITE` variables. For
+ :term:`BB_ENV_PASSTHROUGH` and
+ :term:`BB_ENV_PASSTHROUGH_ADDITIONS` variables. For
example, assume you want to prevent the build system from accessing
- your ``$HOME/.ccache`` directory. The following command "whitelists"
- the environment variable ``CCACHE_DIR`` causing BitBake to allow that
- variable into the datastore: ::
+ your ``$HOME/.ccache`` directory. The following command adds the
+ environment variable ``CCACHE_DIR`` to BitBake's passthrough
+ list to allow that variable into the datastore::
- export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE CCACHE_DIR"
+ export BB_ENV_PASSTHROUGH_ADDITIONS="$BB_ENV_PASSTHROUGH_ADDITIONS CCACHE_DIR"
#. Tell BitBake to export what you have loaded into the datastore to the
task environment of every running task. Loading something from the
@@ -1366,7 +1400,7 @@ environment, you must take these two steps:
available in the datastore. To export it to the task environment of
every running task, use a command similar to the following in your
local configuration file ``local.conf`` or your distribution
- configuration file: ::
+ configuration file::
export CCACHE_DIR
@@ -1375,17 +1409,17 @@ environment, you must take these two steps:
A side effect of the previous steps is that BitBake records the
variable as a dependency of the build process in things like the
setscene checksums. If doing so results in unnecessary rebuilds of
- tasks, you can whitelist the variable so that the setscene code
+ tasks, you can also flag the variable so that the setscene code
ignores the dependency when it creates checksums.
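
For example, continuing the ``CCACHE_DIR`` case above, a minimal sketch
uses :term:`BB_BASEHASH_IGNORE_VARS` (documented in the variables
glossary) to exclude the variable from checksum data::

    BB_BASEHASH_IGNORE_VARS += "CCACHE_DIR"
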
Sometimes, it is useful to be able to obtain information from the
original execution environment. BitBake saves a copy of the original
environment into a special variable named :term:`BB_ORIGENV`.
-The ``BB_ORIGENV`` variable returns a datastore object that can be
+The :term:`BB_ORIGENV` variable returns a datastore object that can be
queried using the standard datastore operators such as
``getVar(, False)``. The datastore object is useful, for example, to
-find the original ``DISPLAY`` variable. Here is an example: ::
+find the original ``DISPLAY`` variable. Here is an example::
origenv = d.getVar("BB_ORIGENV", False)
bar = origenv.getVar("BAR", False)
@@ -1398,7 +1432,7 @@ Variable Flags
Variable flags (varflags) help control a task's functionality and
dependencies. BitBake reads and writes varflags to the datastore using
-the following command forms: ::
+the following command forms::
variable = d.getVarFlags("variable")
self.d.setVarFlags("FOO", {"func": True})
@@ -1467,7 +1501,7 @@ functionality of the task:
can result in unpredictable behavior.
- Setting the varflag to a value greater than the value used in
- the ``BB_NUMBER_THREADS`` variable causes ``number_threads`` to
+ the :term:`BB_NUMBER_THREADS` variable causes ``number_threads`` to
have no effect.
- ``[postfuncs]``: List of functions to call after the completion of
@@ -1537,7 +1571,7 @@ intent is to make it easy to do things like email notification on build
failures.
Following is an example event handler that prints the name of the event
-and the content of the ``FILE`` variable: ::
+and the content of the :term:`FILE` variable::
addhandler myclass_eventhandler
python myclass_eventhandler() {
@@ -1576,7 +1610,7 @@ might have an interest in viewing:
- ``bb.event.ConfigParsed()``: Fired when the base configuration, which
consists of ``bitbake.conf``, ``base.bbclass`` and any global
- ``INHERIT`` statements; has been parsed. You can see multiple such
+ :term:`INHERIT` statements, has been parsed. You can see multiple such
events when each of the workers parse the base configuration or if
the server changes configuration and reparses. Any given datastore
only has one such event executed against it, however. If
@@ -1647,13 +1681,18 @@ user interfaces:
.. _variants-class-extension-mechanism:
-Variants - Class Extension Mechanism
-====================================
+Variants --- Class Extension Mechanism
+======================================
+
+BitBake supports multiple incarnations of a recipe file via the
+:term:`BBCLASSEXTEND` variable.
+
+The :term:`BBCLASSEXTEND` variable is a space separated list of classes used
+to "extend" the recipe for each variant. Here is an example that results in a
+second incarnation of the current recipe being available. This second
+incarnation will have the "native" class inherited. ::
-BitBake supports two features that facilitate creating from a single
-recipe file multiple incarnations of that recipe file where all
-incarnations are buildable. These features are enabled through the
-:term:`BBCLASSEXTEND` and :term:`BBVERSIONS` variables.
+ BBCLASSEXTEND = "native"
.. note::
@@ -1663,34 +1702,6 @@ incarnations are buildable. These features are enabled through the
class. For specific examples, see the OE-Core native, nativesdk, and
multilib classes.
-- ``BBCLASSEXTEND``: This variable is a space separated list of
- classes used to "extend" the recipe for each variant. Here is an
- example that results in a second incarnation of the current recipe
- being available. This second incarnation will have the "native" class
- inherited. ::
-
- BBCLASSEXTEND = "native"
-
-- ``BBVERSIONS``: This variable allows a single recipe to build
- multiple versions of a project from a single recipe file. You can
- also specify conditional metadata (using the
- :term:`OVERRIDES` mechanism) for a single
- version, or an optionally named range of versions. Here is an
- example: ::
-
- BBVERSIONS = "1.0 2.0 git"
- SRC_URI_git = "git://someurl/somepath.git"
-
- BBVERSIONS = "1.0.[0-6]:1.0.0+ 1.0.[7-9]:1.0.7+"
- SRC_URI_append_1.0.7+ = "file://some_patch_which_the_new_versions_need.patch;patch=1"
-
- The name of the range defaults to the original version of the recipe. For
- example, in OpenEmbedded, the recipe file ``foo_1.0.0+.bb`` creates a default
- name range of ``1.0.0+``. This is useful because the range name is not only
- placed into overrides, but it is also made available for the metadata to use
- in the variable that defines the base recipe versions for use in ``file://``
- search paths (:term:`FILESPATH`).
-
Dependencies
============
@@ -1719,7 +1730,7 @@ Dependencies Internal to the ``.bb`` File
BitBake uses the ``addtask`` directive to manage dependencies that are
internal to a given recipe file. You can use the ``addtask`` directive
to indicate when a task is dependent on other tasks or when other tasks
-depend on that recipe. Here is an example: ::
+depend on that recipe. Here is an example::
addtask printdate after do_fetch before do_build
@@ -1743,7 +1754,7 @@ task depends on the completion of the ``do_printdate`` task.
- The directive ``addtask mytask after do_configure`` by itself
never causes ``do_mytask`` to run. ``do_mytask`` can still be run
- manually as follows: ::
+ manually as follows::
$ bitbake recipe -c mytask
@@ -1756,13 +1767,13 @@ Build Dependencies
BitBake uses the :term:`DEPENDS` variable to manage
build time dependencies. The ``[deptask]`` varflag for tasks signifies
-the task of each item listed in ``DEPENDS`` that must complete before
-that task can be executed. Here is an example: ::
+the task of each item listed in :term:`DEPENDS` that must complete before
+that task can be executed. Here is an example::
do_configure[deptask] = "do_populate_sysroot"
In this example, the ``do_populate_sysroot`` task
-of each item in ``DEPENDS`` must complete before ``do_configure`` can
+of each item in :term:`DEPENDS` must complete before ``do_configure`` can
execute.
Runtime Dependencies
@@ -1771,8 +1782,8 @@ Runtime Dependencies
BitBake uses the :term:`PACKAGES`, :term:`RDEPENDS`, and :term:`RRECOMMENDS`
variables to manage runtime dependencies.
-The ``PACKAGES`` variable lists runtime packages. Each of those packages
-can have ``RDEPENDS`` and ``RRECOMMENDS`` runtime dependencies. The
+The :term:`PACKAGES` variable lists runtime packages. Each of those packages
+can have :term:`RDEPENDS` and :term:`RRECOMMENDS` runtime dependencies. The
``[rdeptask]`` flag for tasks is used to signify the task of each
runtime-dependency item which must have completed before that task can be
executed. ::
@@ -1780,9 +1791,9 @@ executed. ::
do_package_qa[rdeptask] = "do_packagedata"
In the previous
-example, the ``do_packagedata`` task of each item in ``RDEPENDS`` must
+example, the ``do_packagedata`` task of each item in :term:`RDEPENDS` must
have completed before ``do_package_qa`` can execute.
-Although ``RDEPENDS`` contains entries from the
+Although :term:`RDEPENDS` contains entries from the
runtime dependency namespace, BitBake knows how to map them back
to the build-time dependency namespace, in which the tasks are defined.
@@ -1799,7 +1810,7 @@ dependencies are discovered and added.
The ``[recrdeptask]`` flag is most commonly used in high-level recipes
that need to wait for some task to finish "globally". For example,
-``image.bbclass`` has the following: ::
+``image.bbclass`` has the following::
do_rootfs[recrdeptask] += "do_packagedata"
@@ -1808,7 +1819,7 @@ the current recipe and all recipes reachable (by way of dependencies)
from the image recipe must run before the ``do_rootfs`` task can run.
BitBake allows a task to recursively depend on itself by
-referencing itself in the task list: ::
+referencing itself in the task list::
do_a[recrdeptask] = "do_a do_b"
@@ -1825,7 +1836,7 @@ Inter-Task Dependencies
BitBake uses the ``[depends]`` flag in a more generic form to manage
inter-task dependencies. This more generic form allows for
inter-dependency checks for specific tasks rather than checks for the
-data in ``DEPENDS``. Here is an example: ::
+data in :term:`DEPENDS`. Here is an example::
do_patch[depends] = "quilt-native:do_populate_sysroot"
@@ -1920,7 +1931,7 @@ To help understand how BitBake does this, the section assumes an
OpenEmbedded metadata-based example.
These checksums are stored in :term:`STAMP`. You can
-examine the checksums using the following BitBake command: ::
+examine the checksums using the following BitBake command::
$ bitbake-dumpsigs
@@ -1943,16 +1954,6 @@ The following list describes related variables:
Specifies a function BitBake calls that determines whether BitBake
requires a setscene dependency to be met.
-- :term:`BB_SETSCENE_VERIFY_FUNCTION2`:
- Specifies a function to call that verifies the list of planned task
- execution before the main task execution happens.
-
-- :term:`BB_STAMP_POLICY`: Defines the mode
- for comparing timestamps of stamp files.
-
-- :term:`BB_STAMP_WHITELIST`: Lists stamp
- files that are looked at when the stamp policy is "whitelist".
-
- :term:`BB_TASKHASH`: Within an executing task,
this variable holds the hash of the task as returned by the currently
enabled signature generator.
@@ -1967,7 +1968,7 @@ Wildcard Support in Variables
=============================
Support for wildcard use in variables varies depending on the context in
-which it is used. For example, some variables and file names allow
+which it is used. For example, some variables and filenames allow
limited use of wildcards through the "``%``" and "``*``" characters.
Other variables or names support Python's
`glob <https://docs.python.org/3/library/glob.html>`_ syntax,
diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.rst b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.rst
index 489fa15..12aef3c 100644
--- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.rst
+++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.rst
@@ -23,18 +23,15 @@ overview of their function and contents.
systems extend the functionality of the variable as it is
described here in this glossary.
- - Finally, there are variables mentioned in this glossary that do
- not appear in the BitBake glossary. These other variables are
- variables used in systems that use BitBake.
-
.. glossary::
+ :sorted:
:term:`ASSUME_PROVIDED`
Lists recipe names (:term:`PN` values) BitBake does not
attempt to build. Instead, BitBake assumes these recipes have already
been built.
- In OpenEmbedded-Core, ``ASSUME_PROVIDED`` mostly specifies native
+ In OpenEmbedded-Core, :term:`ASSUME_PROVIDED` mostly specifies native
tools that should not be built. An example is ``git-native``, which
when specified allows for the Git binary from the host to be used
rather than building ``git-native``.
@@ -87,14 +84,25 @@ overview of their function and contents.
- Attempts to access networks not in the host list cause a failure.
- Using ``BB_ALLOWED_NETWORKS`` in conjunction with
+ Using :term:`BB_ALLOWED_NETWORKS` in conjunction with
:term:`PREMIRRORS` is very useful. Adding the
- host you want to use to ``PREMIRRORS`` results in the source code
+ host you want to use to :term:`PREMIRRORS` results in the source code
being fetched from an allowed location and avoids raising an error
when a host that is not allowed is in a
:term:`SRC_URI` statement. This is because the
- fetcher does not attempt to use the host listed in ``SRC_URI`` after
- a successful fetch from the ``PREMIRRORS`` occurs.
+ fetcher does not attempt to use the host listed in :term:`SRC_URI` after
+ a successful fetch from the :term:`PREMIRRORS` occurs.
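+
+ For example, a sketch with one wildcard pattern and one hypothetical
+ host name::
+
+ BB_ALLOWED_NETWORKS = "*.kernel.org git.example.com"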
+
+ :term:`BB_BASEHASH_IGNORE_VARS`
+ Lists variables that are excluded from checksum and dependency data.
+ Variables that are excluded can therefore change without affecting
+ the checksum mechanism. A common example would be the variable for
+ the path of the build. BitBake's output should not (and usually does
+ not) depend on the directory in which it was built.
+
+ :term:`BB_CHECK_SSL_CERTS`
+ Specifies if SSL certificates should be checked when fetching. The default
+ value is ``1`` and certificates are not checked if the value is set to ``0``.
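+
+ For example, to disable certificate checking (a sketch; only advisable
+ for trusted, non-public mirrors)::
+
+ BB_CHECK_SSL_CERTS = "0"
+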
:term:`BB_CONSOLELOG`
Specifies the path to a log file into which BitBake's user interface
@@ -130,14 +138,14 @@ overview of their function and contents.
you to control the build based on these parameters.
Disk space monitoring is disabled by default. When setting this
- variable, use the following form: ::
+ variable, use the following form::
BB_DISKMON_DIRS = "<action>,<dir>,<threshold> [...]"
where:
<action> is:
- ABORT: Immediately abort the build when
+ HALT: Immediately halt the build when
a threshold is broken.
STOPTASKS: Stop the build after the currently
executing tasks have finished when
@@ -166,48 +174,48 @@ overview of their function and contents.
not specify G, M, or K, Kbytes is assumed by
default. Do not use GB, MB, or KB.
- Here are some examples: ::
+ Here are some examples::
- BB_DISKMON_DIRS = "ABORT,${TMPDIR},1G,100K WARN,${SSTATE_DIR},1G,100K"
+ BB_DISKMON_DIRS = "HALT,${TMPDIR},1G,100K WARN,${SSTATE_DIR},1G,100K"
BB_DISKMON_DIRS = "STOPTASKS,${TMPDIR},1G"
- BB_DISKMON_DIRS = "ABORT,${TMPDIR},,100K"
+ BB_DISKMON_DIRS = "HALT,${TMPDIR},,100K"
The first example works only if you also set the
:term:`BB_DISKMON_WARNINTERVAL`
- variable. This example causes the build system to immediately abort
+ variable. This example causes the build system to immediately halt
when either the disk space in ``${TMPDIR}`` drops below 1 Gbyte or
the available free inodes drops below 100 Kbytes. Because two
directories are provided with the variable, the build system also
issues a warning when the disk space in the ``${SSTATE_DIR}``
directory drops below 1 Gbyte or the number of free inodes drops
below 100 Kbytes. Subsequent warnings are issued during intervals as
- defined by the ``BB_DISKMON_WARNINTERVAL`` variable.
+ defined by the :term:`BB_DISKMON_WARNINTERVAL` variable.
The second example stops the build after all currently executing
tasks complete when the minimum disk space in the ``${TMPDIR}``
directory drops below 1 Gbyte. No disk monitoring occurs for the free
inodes in this case.
- The final example immediately aborts the build when the number of
+ The final example immediately halts the build when the number of
free inodes in the ``${TMPDIR}`` directory drops below 100 Kbytes. No
disk space monitoring for the directory itself occurs in this case.
:term:`BB_DISKMON_WARNINTERVAL`
Defines the disk space and free inode warning intervals.
- If you are going to use the ``BB_DISKMON_WARNINTERVAL`` variable, you
+ If you are going to use the :term:`BB_DISKMON_WARNINTERVAL` variable, you
must also use the :term:`BB_DISKMON_DIRS`
variable and define its action as "WARN". During the build,
subsequent warnings are issued each time disk space or number of free
inodes further reduces by the respective interval.
- If you do not provide a ``BB_DISKMON_WARNINTERVAL`` variable and you
- do use ``BB_DISKMON_DIRS`` with the "WARN" action, the disk
+ If you do not provide a :term:`BB_DISKMON_WARNINTERVAL` variable and you
+ do use :term:`BB_DISKMON_DIRS` with the "WARN" action, the disk
monitoring interval defaults to the following:
BB_DISKMON_WARNINTERVAL = "50M,5K"
When specifying the variable in your configuration file, use the
- following form: ::
+ following form::
BB_DISKMON_WARNINTERVAL = "<disk_space_interval>,<disk_inode_interval>"
@@ -223,7 +231,7 @@ overview of their function and contents.
G, M, or K for Gbytes, Mbytes, or Kbytes,
respectively. You cannot use GB, MB, or KB.
- Here is an example: ::
+ Here is an example::
BB_DISKMON_DIRS = "WARN,${SSTATE_DIR},1G,100K"
BB_DISKMON_WARNINTERVAL = "50M,5K"
@@ -235,23 +243,23 @@ overview of their function and contents.
based on the interval occur each time a respective interval is
reached beyond the initial warning (i.e. 1 Gbytes and 100 Kbytes).
- :term:`BB_ENV_WHITELIST`
- Specifies the internal whitelist of variables to allow through from
+ :term:`BB_ENV_PASSTHROUGH`
+ Specifies the internal list of variables to allow through from
the external environment into BitBake's datastore. If the value of
this variable is not specified (which is the default), the following
list is used: :term:`BBPATH`, :term:`BB_PRESERVE_ENV`,
- :term:`BB_ENV_WHITELIST`, and :term:`BB_ENV_EXTRAWHITE`.
+ :term:`BB_ENV_PASSTHROUGH`, and :term:`BB_ENV_PASSTHROUGH_ADDITIONS`.
.. note::
You must set this variable in the external environment in order
for it to work.
- :term:`BB_ENV_EXTRAWHITE`
- Specifies an additional set of variables to allow through (whitelist)
- from the external environment into BitBake's datastore. This list of
- variables are on top of the internal list set in
- :term:`BB_ENV_WHITELIST`.
+ :term:`BB_ENV_PASSTHROUGH_ADDITIONS`
+ Specifies an additional set of variables to allow through from the
+ external environment into BitBake's datastore. This list of variables
+ is on top of the internal list set in
+ :term:`BB_ENV_PASSTHROUGH`.
.. note::
@@ -267,7 +275,7 @@ overview of their function and contents.
:term:`BB_FILENAME`
Contains the filename of the recipe that owns the currently running
task. For example, if the ``do_fetch`` task that resides in the
- ``my-recipe.bb`` is executing, the ``BB_FILENAME`` variable contains
+ ``my-recipe.bb`` is executing, the :term:`BB_FILENAME` variable contains
"/foo/path/my-recipe.bb".
:term:`BB_GENERATE_MIRROR_TARBALLS`
@@ -280,24 +288,61 @@ overview of their function and contents.
BB_GENERATE_MIRROR_TARBALLS = "1"
- :term:`BB_HASHCONFIG_WHITELIST`
- Lists variables that are excluded from base configuration checksum,
- which is used to determine if the cache can be reused.
+ :term:`BB_GENERATE_SHALLOW_TARBALLS`
+ Setting this variable to "1" when :term:`BB_GIT_SHALLOW` is also set to
+ "1" causes bitbake to generate shallow mirror tarballs when fetching git
+ repositories. The number of commits included in the shallow mirror
+ tarballs is controlled by :term:`BB_GIT_SHALLOW_DEPTH`.
- One of the ways BitBake determines whether to re-parse the main
- metadata is through checksums of the variables in the datastore of
- the base configuration data. There are variables that you typically
- want to exclude when checking whether or not to re-parse and thus
- rebuild the cache. As an example, you would usually exclude ``TIME``
- and ``DATE`` because these variables are always changing. If you did
- not exclude them, BitBake would never reuse the cache.
+ If both :term:`BB_GIT_SHALLOW` and :term:`BB_GENERATE_MIRROR_TARBALLS` are
+ enabled, bitbake will generate shallow mirror tarballs by default for git
+ repositories. This separate variable exists so that shallow tarball
+ generation can be enabled without needing to also enable normal mirror
+ generation if it is not desired.
- :term:`BB_HASHBASE_WHITELIST`
- Lists variables that are excluded from checksum and dependency data.
- Variables that are excluded can therefore change without affecting
- the checksum mechanism. A common example would be the variable for
- the path of the build. BitBake's output should not (and usually does
- not) depend on the directory in which it was built.
+ For example usage, see :term:`BB_GIT_SHALLOW`.
+
+ :term:`BB_GIT_SHALLOW`
+ Setting this variable to "1" enables the support for fetching, using and
+ generating mirror tarballs of `shallow git repositories <https://riptutorial.com/git/example/4584/shallow-clone>`_.
+ The external `git-make-shallow <https://git.openembedded.org/bitbake/tree/bin/git-make-shallow>`_
+ script is used for shallow mirror tarball creation.
+
+ When :term:`BB_GIT_SHALLOW` is enabled, bitbake will attempt to fetch a shallow
+ mirror tarball. If the shallow mirror tarball cannot be fetched, it will
+ try to fetch the full mirror tarball and use that.
+
+ When a mirror tarball is not available, a full git clone will be performed
+ regardless of whether this variable is set or not. Support for shallow
+ clones is not currently implemented as git does not directly support
+ shallow cloning a particular git commit hash (it only supports cloning
+ from a tag or branch reference).
+
+ See also :term:`BB_GIT_SHALLOW_DEPTH` and
+ :term:`BB_GENERATE_SHALLOW_TARBALLS`.
+
+ Example usage::
+
+ BB_GIT_SHALLOW ?= "1"
+
+ # Keep only the top commit
+ BB_GIT_SHALLOW_DEPTH ?= "1"
+
+ # This defaults to enabled if both BB_GIT_SHALLOW and
+ # BB_GENERATE_MIRROR_TARBALLS are enabled
+ BB_GENERATE_SHALLOW_TARBALLS ?= "1"
+
+ :term:`BB_GIT_SHALLOW_DEPTH`
+ When used with :term:`BB_GENERATE_SHALLOW_TARBALLS`, this variable sets
+ the number of commits to include in generated shallow mirror tarballs.
+ With a depth of 1, only the commit referenced in :term:`SRCREV` is
+ included in the shallow mirror tarball. Increasing the depth includes
+ additional parent commits, working back through the commit history.
+
+ If this variable is unset, bitbake will default to a depth of 1 when
+ generating shallow mirror tarballs.
+
+ For example usage, see :term:`BB_GIT_SHALLOW`.
:term:`BB_HASHCHECK_FUNCTION`
Specifies the name of the function to call during the "setscene" part
@@ -313,6 +358,51 @@ overview of their function and contents.
However, the more accurate the data returned, the more efficient the
build will be.
+ :term:`BB_HASHCONFIG_IGNORE_VARS`
+ Lists variables that are excluded from base configuration checksum,
+ which is used to determine if the cache can be reused.
+
+ One of the ways BitBake determines whether to re-parse the main
+ metadata is through checksums of the variables in the datastore of
+ the base configuration data. There are variables that you typically
+ want to exclude when checking whether or not to re-parse and thus
+ rebuild the cache. As an example, you would usually exclude ``TIME``
+ and ``DATE`` because these variables are always changing. If you did
+ not exclude them, BitBake would never reuse the cache.
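+
+ For example, a minimal sketch excluding those two variables::
+
+ BB_HASHCONFIG_IGNORE_VARS += "DATE TIME"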
+
+ :term:`BB_HASHSERVE`
+ Specifies the Hash Equivalence server to use.
+
+ If set to ``auto``, BitBake automatically starts its own server
+ over a UNIX domain socket. An option is to connect this server
+ to an upstream one, by setting :term:`BB_HASHSERVE_UPSTREAM`.
+
+ If set to ``unix://path``, BitBake will connect to an existing
+ hash server available over a UNIX domain socket.
+
+ If set to ``host:port``, BitBake will connect to a remote server on the
+ specified host. This allows multiple clients to share the same
+ hash equivalence data.
+
+ The remote server can be started manually through
+ the ``bin/bitbake-hashserv`` script provided by BitBake,
+ which supports UNIX domain sockets too. This script also allows
+ you to start the server in read-only mode, to avoid accepting
+ equivalences that correspond to Shared State caches that are
+ only available on specific clients.
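+
+ Example usage::
+
+ BB_HASHSERVE = "auto"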
+
+ :term:`BB_HASHSERVE_UPSTREAM`
+ Specifies an upstream Hash Equivalence server.
+
+ This optional setting is only useful when a local Hash Equivalence
+ server is started (setting :term:`BB_HASHSERVE` to ``auto``),
+ and you wish the local server to query an upstream server for
+ Hash Equivalence data.
+
+ Example usage::
+
+ BB_HASHSERVE_UPSTREAM = "hashserv.yocto.io:8687"
+
:term:`BB_INVALIDCONF`
Used in combination with the ``ConfigParsed`` event to trigger
re-parsing the base metadata (i.e. all the recipes). The
@@ -327,15 +417,28 @@ overview of their function and contents.
:term:`BB_LOGFMT`
Specifies the name of the log files saved into
- ``${``\ :term:`T`\ ``}``. By default, the ``BB_LOGFMT``
- variable is undefined and the log file names get created using the
- following form: ::
+ ``${``\ :term:`T`\ ``}``. By default, the :term:`BB_LOGFMT`
+ variable is undefined and the log filenames get created using the
+ following form::
log.{task}.{pid}
If you want to force log files to take a specific name, you can set this
variable in a configuration file.
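
For example, a sketch that drops the PID from the log filenames::

    BB_LOGFMT = "log.{task}"
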
+ :term:`BB_MULTI_PROVIDER_ALLOWED`
+ Allows you to suppress BitBake warnings caused when building two
+ separate recipes that provide the same output.
+
+ BitBake normally issues a warning when building two different recipes
+ where each provides the same output. This scenario is usually
+ something the user does not want. However, cases do exist where it
+ makes sense, particularly in the ``virtual/*`` namespace. You can use
+ this variable to suppress BitBake's warnings.
+
+ To use the variable, list provider names (e.g. recipe names,
+ ``virtual/kernel``, and so forth).
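+
+ For example, a sketch using the ``virtual/kernel`` provider mentioned
+ above::
+
+ BB_MULTI_PROVIDER_ALLOWED += "virtual/kernel"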
+
:term:`BB_NICE_LEVEL`
Allows BitBake to run at a specific priority (i.e. nice level).
System permissions usually mean that BitBake can reduce its priority
@@ -351,19 +454,20 @@ overview of their function and contents.
running builds when not connected to the Internet, and when operating
in certain kinds of firewall environments.
+ :term:`BB_NUMBER_PARSE_THREADS`
+ Sets the number of threads BitBake uses when parsing. By default, the
+ number of threads is equal to the number of cores on the system.
+
:term:`BB_NUMBER_THREADS`
The maximum number of tasks BitBake should run in parallel at any one
time. If your host development system supports multiple cores, a good
rule of thumb is to set this variable to twice the number of cores.
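
For example, a sketch following that rule of thumb on a hypothetical
8-core host::

    BB_NUMBER_THREADS = "16"
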
- :term:`BB_NUMBER_PARSE_THREADS`
- Sets the number of threads BitBake uses when parsing. By default, the
- number of threads is equal to the number of cores on the system.
-
:term:`BB_ORIGENV`
Contains a copy of the original external environment in which BitBake
- was run. The copy is taken before any whitelisted variable values are
- filtered into BitBake's datastore.
+ was run. The copy is taken before any variable values configured to
+ pass through from the external environment are filtered into BitBake's
+ datastore.
.. note::
@@ -371,8 +475,8 @@ overview of their function and contents.
queried using the normal datastore operations.
:term:`BB_PRESERVE_ENV`
- Disables whitelisting and instead allows all variables through from
- the external environment into BitBake's datastore.
+ Disables environment filtering and instead allows all variables through
+ from the external environment into BitBake's datastore.
.. note::
@@ -382,8 +486,8 @@ overview of their function and contents.
:term:`BB_RUNFMT`
Specifies the name of the executable script files (i.e. run files)
saved into ``${``\ :term:`T`\ ``}``. By default, the
- ``BB_RUNFMT`` variable is undefined and the run file names get
- created using the following form: ::
+ :term:`BB_RUNFMT` variable is undefined and the run filenames get
+ created using the following form::
run.{task}.{pid}
@@ -399,14 +503,14 @@ overview of their function and contents.
Selects the name of the scheduler to use for the scheduling of
BitBake tasks. Three options exist:
- - *basic* - The basic framework from which everything derives. Using
+ - *basic* --- the basic framework from which everything derives. Using
this option causes tasks to be ordered numerically as they are
parsed.
- - *speed* - Executes tasks first that have more tasks depending on
+ - *speed* --- executes tasks first that have more tasks depending on
them. The "speed" option is the default.
- - *completion* - Causes the scheduler to try to complete a given
+ - *completion* --- causes the scheduler to try to complete a given
recipe once its build has started.
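
For example, to select the completion scheduler instead of the default::

    BB_SCHEDULER = "completion"
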
:term:`BB_SCHEDULERS`
@@ -426,17 +530,6 @@ overview of their function and contents.
The function specified by this variable returns a "True" or "False"
depending on whether the dependency needs to be met.
- :term:`BB_SETSCENE_VERIFY_FUNCTION2`
- Specifies a function to call that verifies the list of planned task
- execution before the main task execution happens. The function is
- called once BitBake has a list of setscene tasks that have run and
- either succeeded or failed.
-
- The function allows for a task list check to see if they make sense.
- Even if BitBake was planning to skip a task, the returned value of
- the function can force BitBake to run the task, which is necessary
- under certain metadata defined circumstances.
-
:term:`BB_SIGNATURE_EXCLUDE_FLAGS`
Lists variable flags (varflags) that can be safely excluded from
checksum and dependency data for keys in the datastore. When
@@ -459,40 +552,17 @@ overview of their function and contents.
:term:`BB_SRCREV_POLICY`
Defines the behavior of the fetcher when it interacts with source
control systems and dynamic source revisions. The
- ``BB_SRCREV_POLICY`` variable is useful when working without a
+ :term:`BB_SRCREV_POLICY` variable is useful when working without a
network.
The variable can be set using one of two policies:
- - *cache* - Retains the value the system obtained previously rather
+ - *cache* --- retains the value the system obtained previously rather
than querying the source control system each time.
- - *clear* - Queries the source controls system every time. With this
+   - *clear* --- queries the source control system every time. With this
policy, there is no cache. The "clear" policy is the default.
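
For example, a sketch for working offline against previously fetched
revisions::

    BB_SRCREV_POLICY = "cache"
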
- :term:`BB_STAMP_POLICY`
- Defines the mode used for how timestamps of stamp files are compared.
- You can set the variable to one of the following modes:
-
- - *perfile* - Timestamp comparisons are only made between timestamps
- of a specific recipe. This is the default mode.
-
- - *full* - Timestamp comparisons are made for all dependencies.
-
- - *whitelist* - Identical to "full" mode except timestamp
- comparisons are made for recipes listed in the
- :term:`BB_STAMP_WHITELIST` variable.
-
- .. note::
-
- Stamp policies are largely obsolete with the introduction of
- setscene tasks.
-
- :term:`BB_STAMP_WHITELIST`
- Lists files whose stamp file timestamps are compared when the stamp
- policy mode is set to "whitelist". For information on stamp policies,
- see the :term:`BB_STAMP_POLICY` variable.
-
:term:`BB_STRICT_CHECKSUM`
Sets a more strict checksum mechanism for non-local URLs. Setting
this variable to a value causes BitBake to report an error if it
@@ -503,7 +573,7 @@ overview of their function and contents.
Allows adjustment of a task's Input/Output priority. During
Autobuilder testing, random failures can occur for tasks due to I/O
starvation. These failures occur during various QEMU runtime
- timeouts. You can use the ``BB_TASK_IONICE_LEVEL`` variable to adjust
+ timeouts. You can use the :term:`BB_TASK_IONICE_LEVEL` variable to adjust
the I/O priority of these tasks.
.. note::
@@ -511,7 +581,7 @@ overview of their function and contents.
This variable works similarly to the :term:`BB_TASK_NICE_LEVEL`
variable except with a task's I/O priorities.
- Set the variable as follows: ::
+ Set the variable as follows::
BB_TASK_IONICE_LEVEL = "class.prio"
@@ -529,7 +599,7 @@ overview of their function and contents.
In order for your I/O priority settings to take effect, you need the
Completely Fair Queuing (CFQ) Scheduler selected for the backing block
device. To select the scheduler, use the following command form where
- device is the device (e.g. sda, sdb, and so forth): ::
+ device is the device (e.g. sda, sdb, and so forth)::
$ sudo sh -c "echo cfq > /sys/block/device/queue/scheduler"
@@ -538,7 +608,7 @@ overview of their function and contents.
You can use this variable in combination with task overrides to raise
or lower priorities of specific tasks. For example, on the `Yocto
- Project <http://www.yoctoproject.org>`__ autobuilder, QEMU emulation
+ Project <https://www.yoctoproject.org>`__ autobuilder, QEMU emulation
in images is given a higher priority as compared to build tasks to
ensure that images do not suffer timeouts on loaded systems.
@@ -570,20 +640,20 @@ overview of their function and contents.
To build a different variant of the recipe with a minimal amount of
code, it usually is as simple as adding the variable to your recipe.
Here are two examples. The "native" variants are from the
- OpenEmbedded-Core metadata: ::
+ OpenEmbedded-Core metadata::
BBCLASSEXTEND =+ "native nativesdk"
BBCLASSEXTEND =+ "multilib:multilib_name"
.. note::
- Internally, the ``BBCLASSEXTEND`` mechanism generates recipe
+ Internally, the :term:`BBCLASSEXTEND` mechanism generates recipe
variants by rewriting variable values and applying overrides such
as ``_class-native``. For example, to generate a native version of
a recipe, a :term:`DEPENDS` on "foo" is
- rewritten to a ``DEPENDS`` on "foo-native".
+ rewritten to a :term:`DEPENDS` on "foo-native".
- Even when using ``BBCLASSEXTEND``, the recipe is only parsed once.
+ Even when using :term:`BBCLASSEXTEND`, the recipe is only parsed once.
Parsing once adds some limitations. For example, it is not
possible to include a different file depending on the variant,
since ``include`` statements are processed when the recipe is
@@ -616,17 +686,17 @@ overview of their function and contents.
This variable is useful in situations where the same recipe appears
in more than one layer. Setting this variable allows you to
prioritize a layer against other layers that contain the same recipe
- - effectively letting you control the precedence for the multiple
+ --- effectively letting you control the precedence for the multiple
layers. The precedence established through this variable stands
regardless of a recipe's version (:term:`PV` variable).
- For example, a layer that has a recipe with a higher ``PV`` value but
- for which the ``BBFILE_PRIORITY`` is set to have a lower precedence
+ For example, a layer that has a recipe with a higher :term:`PV` value but
+ for which the :term:`BBFILE_PRIORITY` is set to have a lower precedence
still has a lower precedence.
- A larger value for the ``BBFILE_PRIORITY`` variable results in a
+ A larger value for the :term:`BBFILE_PRIORITY` variable results in a
higher precedence. For example, the value 6 has a higher precedence
- than the value 5. If not specified, the ``BBFILE_PRIORITY`` variable
- is set based on layer dependencies (see the ``LAYERDEPENDS`` variable
+ than the value 5. If not specified, the :term:`BBFILE_PRIORITY` variable
+ is set based on layer dependencies (see the :term:`LAYERDEPENDS` variable
for more information). The default priority, if unspecified for a
layer with no dependencies, is the lowest defined priority + 1 (or 1
if no priorities are defined).
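
For example, a sketch in the ``layer.conf`` of a hypothetical layer whose
collection is named ``my-layer``::

    BBFILE_PRIORITY_my-layer = "6"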
@@ -649,7 +719,7 @@ overview of their function and contents.
Activates content depending on presence of identified layers. You
identify the layers by the collections that the layers define.
- Use the ``BBFILES_DYNAMIC`` variable to avoid ``.bbappend`` files whose
+ Use the :term:`BBFILES_DYNAMIC` variable to avoid ``.bbappend`` files whose
corresponding ``.bb`` file is in a layer that attempts to modify other
layers through ``.bbappend`` but does not want to introduce a hard
dependency on those other layers.
@@ -658,12 +728,12 @@ overview of their function and contents.
``.bb`` files in case a layer is not present. Use this to avoid a hard
dependency on those other layers.
- Use the following form for ``BBFILES_DYNAMIC``: ::
+ Use the following form for :term:`BBFILES_DYNAMIC`::
collection_name:filename_pattern
The following example identifies two collection names and two filename
- patterns: ::
+ patterns::
BBFILES_DYNAMIC += "\
clang-layer:${LAYERDIR}/bbappends/meta-clang/*/*/*.bbappend \
@@ -671,14 +741,14 @@ overview of their function and contents.
"
When the collection name is prefixed with "!" it will add the file pattern in case
- the layer is absent: ::
+ the layer is absent::
BBFILES_DYNAMIC += "\
!clang-layer:${LAYERDIR}/backfill/meta-clang/*/*/*.bb \
"
This next example shows an error message that occurs because invalid
- entries are found, which cause parsing to abort: ::
+ entries are found, which cause parsing to fail::
ERROR: BBFILES_DYNAMIC entries must be of the form {!}<collection name>:<filename pattern>, not:
/work/my-layer/bbappends/meta-security-isafw/*/*/*.bbappend
@@ -695,13 +765,13 @@ overview of their function and contents.
:term:`BBINCLUDELOGS_LINES`
If :term:`BBINCLUDELOGS` is set, specifies
the maximum number of lines from the task log file to print when
- reporting a failed task. If you do not set ``BBINCLUDELOGS_LINES``,
+ reporting a failed task. If you do not set :term:`BBINCLUDELOGS_LINES`,
the entire log is printed.
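
For example, a sketch limiting the output to the last 20 lines of the
task log (the number is arbitrary)::

    BBINCLUDELOGS_LINES = "20"
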
:term:`BBLAYERS`
Lists the layers to enable during the build. This variable is defined
in the ``bblayers.conf`` configuration file in the build directory.
- Here is an example: ::
+ Here is an example::
BBLAYERS = " \
/home/scottrif/poky/meta \
@@ -721,7 +791,7 @@ overview of their function and contents.
:term:`BBMASK`
Prevents BitBake from processing recipes and recipe append files.
- You can use the ``BBMASK`` variable to "hide" these ``.bb`` and
+ You can use the :term:`BBMASK` variable to "hide" these ``.bb`` and
``.bbappend`` files. BitBake ignores any recipe or recipe append
files that match any of the expressions. It is as if BitBake does not
see them at all. Consequently, matching files are not parsed or
@@ -735,13 +805,13 @@ overview of their function and contents.
The following example uses a complete regular expression to tell
BitBake to ignore all recipe and recipe append files in the
- ``meta-ti/recipes-misc/`` directory: ::
+ ``meta-ti/recipes-misc/`` directory::
BBMASK = "meta-ti/recipes-misc/"
If you want to mask out multiple directories or recipes, you can
specify multiple regular expression fragments. This next example
- masks out multiple directories and individual recipes: ::
+ masks out multiple directories and individual recipes::
BBMASK += "/meta-ti/recipes-misc/ meta-ti/recipes-ti/packagegroup/"
BBMASK += "/meta-oe/recipes-support/"
@@ -758,11 +828,11 @@ overview of their function and contents.
Enables BitBake to perform multiple configuration builds and lists
each separate configuration (multiconfig). You can use this variable
to cause BitBake to build multiple targets where each target has a
- separate configuration. Define ``BBMULTICONFIG`` in your
+ separate configuration. Define :term:`BBMULTICONFIG` in your
``conf/local.conf`` configuration file.
As an example, the following line specifies three multiconfigs, each
- having a separate configuration file: ::
+ having a separate configuration file::
BBMULTICONFIG = "configA configB configC"
@@ -770,7 +840,7 @@ overview of their function and contents.
build directory within a directory named ``conf/multiconfig`` (e.g.
build_directory\ ``/conf/multiconfig/configA.conf``).
- For information on how to use ``BBMULTICONFIG`` in an environment
+ For information on how to use :term:`BBMULTICONFIG` in an environment
that supports building targets with multiple configurations, see the
":ref:`bitbake-user-manual/bitbake-user-manual-intro:executing a multiple configuration build`"
section.
@@ -781,9 +851,9 @@ overview of their function and contents.
variable.
If you run BitBake from a directory outside of the build directory,
- you must be sure to set ``BBPATH`` to point to the build directory.
+ you must be sure to set :term:`BBPATH` to point to the build directory.
Set the variable as you would any environment variable and then run
- BitBake: ::
+ BitBake::
$ BBPATH="build_directory"
$ export BBPATH
@@ -797,16 +867,6 @@ overview of their function and contents.
Allows you to use a configuration file to add to the list of
command-line target recipes you want to build.
- :term:`BBVERSIONS`
- Allows a single recipe to build multiple versions of a project from a
- single recipe file. You also able to specify conditional metadata
- using the :term:`OVERRIDES` mechanism for a
- single version or for an optionally named range of versions.
-
- For more information on ``BBVERSIONS``, see the
- ":ref:`bitbake-user-manual/bitbake-user-manual-metadata:variants - class extension mechanism`"
- section.
-
:term:`BITBAKE_UI`
Used to specify the UI module to use when running BitBake. Using this
variable is equivalent to using the ``-u`` command-line option.
@@ -838,7 +898,7 @@ overview of their function and contents.
The most common usage of this variable is to set it to "-1" within
a recipe for a development version of a piece of software. Using the
variable in this way causes the stable version of the recipe to build
- by default in the absence of ``PREFERRED_VERSION`` being used to
+ by default in the absence of :term:`PREFERRED_VERSION` being used to
build the development version.
.. note::
@@ -851,8 +911,8 @@ overview of their function and contents.
Lists a recipe's build-time dependencies (i.e. other recipe files).
Consider this simple example for two recipes named "a" and "b" that
- produce similarly named packages. In this example, the ``DEPENDS``
- statement appears in the "a" recipe: ::
+ produce similarly named packages. In this example, the :term:`DEPENDS`
+ statement appears in the "a" recipe::
DEPENDS = "b"
@@ -869,7 +929,7 @@ overview of their function and contents.
:term:`DL_DIR`
The central download directory used by the build process to store
- downloads. By default, ``DL_DIR`` gets files suitable for mirroring for
+ downloads. By default, :term:`DL_DIR` gets files suitable for mirroring for
everything except Git repositories. If you want tarballs of Git
repositories, use the :term:`BB_GENERATE_MIRROR_TARBALLS` variable.
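
For example, a common sketch placing downloads under the build
directory (:term:`TOPDIR` is BitBake's top-level build directory)::

    DL_DIR ?= "${TOPDIR}/downloads"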
@@ -884,14 +944,14 @@ overview of their function and contents.
.. note::
- Recipes added to ``EXCLUDE_FROM_WORLD`` may still be built during a world
+ Recipes added to :term:`EXCLUDE_FROM_WORLD` may still be built during a world
build in order to satisfy dependencies of other recipes. Adding a
- recipe to ``EXCLUDE_FROM_WORLD`` only ensures that the recipe is not
+ recipe to :term:`EXCLUDE_FROM_WORLD` only ensures that the recipe is not
explicitly added to the list of build targets in a world build.
:term:`FAKEROOT`
Contains the command to use when running a shell script in a fakeroot
- environment. The ``FAKEROOT`` variable is obsolete and has been
+ environment. The :term:`FAKEROOT` variable is obsolete and has been
replaced by the other ``FAKEROOT*`` variables. See these entries in
the glossary for more information.
@@ -954,9 +1014,9 @@ overview of their function and contents.
Causes the named class or classes to be inherited globally. Anonymous
functions in the class or classes are not executed for the base
configuration and in each individual recipe. The OpenEmbedded build
- system ignores changes to ``INHERIT`` in individual recipes.
+ system ignores changes to :term:`INHERIT` in individual recipes.
- For more information on ``INHERIT``, see the
+ For more information on :term:`INHERIT`, see the
":ref:`bitbake-user-manual/bitbake-user-manual-metadata:\`\`inherit\`\` configuration directive`"
section.
@@ -1004,29 +1064,16 @@ overview of their function and contents.
the build system searches for source code, it first tries the local
download directory. If that location fails, the build system tries
locations defined by :term:`PREMIRRORS`, the
- upstream source, and then locations specified by ``MIRRORS`` in that
+ upstream source, and then locations specified by :term:`MIRRORS` in that
order.
- :term:`MULTI_PROVIDER_WHITELIST`
- Allows you to suppress BitBake warnings caused when building two
- separate recipes that provide the same output.
-
- BitBake normally issues a warning when building two different recipes
- where each provides the same output. This scenario is usually
- something the user does not want. However, cases do exist where it
- makes sense, particularly in the ``virtual/*`` namespace. You can use
- this variable to suppress BitBake's warnings.
-
- To use the variable, list provider names (e.g. recipe names,
- ``virtual/kernel``, and so forth).
-
:term:`OVERRIDES`
- BitBake uses ``OVERRIDES`` to control what variables are overridden
+ BitBake uses :term:`OVERRIDES` to control what variables are overridden
after BitBake parses recipes and configuration files.
Following is a simple example that uses an overrides list based on
machine architectures: ``OVERRIDES = "arm:x86:mips:powerpc"``. You can
- find information on how to use ``OVERRIDES`` in the
+ find information on how to use :term:`OVERRIDES` in the
":ref:`bitbake-user-manual/bitbake-user-manual-metadata:conditional syntax
(overrides)`" section.
@@ -1040,11 +1087,11 @@ overview of their function and contents.
:term:`PACKAGES_DYNAMIC`
A promise that your recipe satisfies runtime dependencies for
optional modules that are found in other recipes.
- ``PACKAGES_DYNAMIC`` does not actually satisfy the dependencies, it
+ :term:`PACKAGES_DYNAMIC` does not actually satisfy the dependencies, it
only states that they should be satisfied. For example, if a hard,
runtime dependency (:term:`RDEPENDS`) of another
package is satisfied during the build through the
- ``PACKAGES_DYNAMIC`` variable, but a package with the module name is
+ :term:`PACKAGES_DYNAMIC` variable, but a package with the module name is
never actually produced, then the other package will be broken.
:term:`PE`
@@ -1074,7 +1121,7 @@ overview of their function and contents.
recipes provide the same item. You should always suffix the variable
with the name of the provided item, and you should set it to the
:term:`PN` of the recipe to which you want to give
- precedence. Some examples: ::
+ precedence. Some examples::
PREFERRED_PROVIDER_virtual/kernel ?= "linux-yocto"
PREFERRED_PROVIDER_virtual/xserver = "xserver-xf86"
@@ -1083,14 +1130,14 @@ overview of their function and contents.
:term:`PREFERRED_PROVIDERS`
Determines which recipe should be given preference for cases where
multiple recipes provide the same item. Functionally,
- ``PREFERRED_PROVIDERS`` is identical to
- :term:`PREFERRED_PROVIDER`. However, the ``PREFERRED_PROVIDERS`` variable
+ :term:`PREFERRED_PROVIDERS` is identical to
+ :term:`PREFERRED_PROVIDER`. However, the :term:`PREFERRED_PROVIDERS` variable
lets you define preferences for multiple situations using the following
- form: ::
+ form::
PREFERRED_PROVIDERS = "xxx:yyy aaa:bbb ..."
- This form is a convenient replacement for the following: ::
+ This form is a convenient replacement for the following::
PREFERRED_PROVIDER_xxx = "yyy"
PREFERRED_PROVIDER_aaa = "bbb"
@@ -1102,11 +1149,11 @@ overview of their function and contents.
select, and you should set :term:`PV` accordingly for
precedence.
- The ``PREFERRED_VERSION`` variable supports limited wildcard use
+ The :term:`PREFERRED_VERSION` variable supports limited wildcard use
through the "``%``" character. You can use the character to match any
number of characters, which can be useful when specifying versions
that contain long revision numbers that potentially change. Here are
- two examples: ::
+ two examples::
PREFERRED_VERSION_python = "2.7.3"
PREFERRED_VERSION_linux-yocto = "4.12%"
@@ -1125,18 +1172,18 @@ overview of their function and contents.
Specifies additional paths from which BitBake gets source code. When
the build system searches for source code, it first tries the local
download directory. If that location fails, the build system tries
- locations defined by ``PREMIRRORS``, the upstream source, and then
+ locations defined by :term:`PREMIRRORS`, the upstream source, and then
locations specified by :term:`MIRRORS` in that order.
Typically, you would add a specific server for the build system to
attempt before any others by adding something like the following to
- your configuration: ::
+ your configuration::
- PREMIRRORS_prepend = "\
- git://.*/.* http://www.yoctoproject.org/sources/ \n \
- ftp://.*/.* http://www.yoctoproject.org/sources/ \n \
- http://.*/.* http://www.yoctoproject.org/sources/ \n \
- https://.*/.* http://www.yoctoproject.org/sources/ \n"
+ PREMIRRORS:prepend = "\
+ git://.*/.* http://downloads.yoctoproject.org/mirror/sources/ \
+ ftp://.*/.* http://downloads.yoctoproject.org/mirror/sources/ \
+ http://.*/.* http://downloads.yoctoproject.org/mirror/sources/ \
+ https://.*/.* http://downloads.yoctoproject.org/mirror/sources/"
These changes cause the build system to intercept Git, FTP, HTTP, and
HTTPS requests and direct them to the ``http://`` sources mirror. You can
@@ -1145,25 +1192,25 @@ overview of their function and contents.
:term:`PROVIDES`
A list of aliases by which a particular recipe can be known. By
- default, a recipe's own ``PN`` is implicitly already in its
- ``PROVIDES`` list. If a recipe uses ``PROVIDES``, the additional
+ default, a recipe's own :term:`PN` is implicitly already in its
+ :term:`PROVIDES` list. If a recipe uses :term:`PROVIDES`, the additional
aliases are synonyms for the recipe and can be useful for satisfying
dependencies of other recipes during the build as specified by
- ``DEPENDS``.
+ :term:`DEPENDS`.
- Consider the following example ``PROVIDES`` statement from a recipe
- file ``libav_0.8.11.bb``: ::
+ Consider the following example :term:`PROVIDES` statement from a recipe
+ file ``libav_0.8.11.bb``::
PROVIDES += "libpostproc"
- The ``PROVIDES`` statement results in the "libav" recipe also being known
+ The :term:`PROVIDES` statement results in the "libav" recipe also being known
as "libpostproc".
In addition to providing recipes under alternate names, the
- ``PROVIDES`` mechanism is also used to implement virtual targets. A
+ :term:`PROVIDES` mechanism is also used to implement virtual targets. A
virtual target is a name that corresponds to some particular
functionality (e.g. a Linux kernel). Recipes that provide the
- functionality in question list the virtual target in ``PROVIDES``.
+ functionality in question list the virtual target in :term:`PROVIDES`.
Recipes that depend on the functionality in question can include the
virtual target in :term:`DEPENDS` to leave the
choice of provider open.
@@ -1175,12 +1222,12 @@ overview of their function and contents.
:term:`PRSERV_HOST`
The network based :term:`PR` service host and port.
- Following is an example of how the ``PRSERV_HOST`` variable is set: ::
+ Following is an example of how the :term:`PRSERV_HOST` variable is set::
PRSERV_HOST = "localhost:0"
You must set the variable if you want to automatically start a local PR
- service. You can set ``PRSERV_HOST`` to other values to use a remote PR
+ service. You can set :term:`PRSERV_HOST` to other values to use a remote PR
service.
:term:`PV`
@@ -1192,26 +1239,26 @@ overview of their function and contents.
a package in this list cannot be found during the build, you will get
a build error.
- Because the ``RDEPENDS`` variable applies to packages being built,
+ Because the :term:`RDEPENDS` variable applies to packages being built,
you should always use the variable in a form with an attached package
name. For example, suppose you are building a development package
that depends on the ``perl`` package. In this case, you would use the
- following ``RDEPENDS`` statement: ::
+ following :term:`RDEPENDS` statement::
- RDEPENDS_${PN}-dev += "perl"
+ RDEPENDS:${PN}-dev += "perl"
In the example, the development package depends on the ``perl`` package.
- Thus, the ``RDEPENDS`` variable has the ``${PN}-dev`` package name as part
+ Thus, the :term:`RDEPENDS` variable has the ``${PN}-dev`` package name as part
of the variable.
BitBake supports specifying versioned dependencies. Although the
syntax varies depending on the packaging format, BitBake hides these
differences from you. Here is the general syntax to specify versions
- with the ``RDEPENDS`` variable: ::
+ with the :term:`RDEPENDS` variable::
- RDEPENDS_${PN} = "package (operator version)"
+ RDEPENDS:${PN} = "package (operator version)"
- For ``operator``, you can specify the following: ::
+ For ``operator``, you can specify the following::
=
<
@@ -1220,9 +1267,9 @@ overview of their function and contents.
>=
For example, the following sets up a dependency on version 1.2 or
- greater of the package ``foo``: ::
+ greater of the package ``foo``::
- RDEPENDS_${PN} = "foo (>= 1.2)"
+ RDEPENDS:${PN} = "foo (>= 1.2)"
For information on build-time dependencies, see the :term:`DEPENDS`
variable.
@@ -1233,41 +1280,41 @@ overview of their function and contents.
:term:`REQUIRED_VERSION`
If there are multiple versions of a recipe available, this variable
- determines which version should be given preference. ``REQUIRED_VERSION``
+ determines which version should be given preference. :term:`REQUIRED_VERSION`
works in exactly the same manner as :term:`PREFERRED_VERSION`, except
that if the specified version is not available then an error message
is shown and the build fails immediately.
- If both ``REQUIRED_VERSION`` and ``PREFERRED_VERSION`` are set for
- the same recipe, the ``REQUIRED_VERSION`` value applies.
+ If both :term:`REQUIRED_VERSION` and :term:`PREFERRED_VERSION` are set for
+ the same recipe, the :term:`REQUIRED_VERSION` value applies.
:term:`RPROVIDES`
A list of package name aliases that a package also provides. These
aliases are useful for satisfying runtime dependencies of other
packages both during the build and on the target (as specified by
- ``RDEPENDS``).
+ :term:`RDEPENDS`).
As with all package-controlling variables, you must always use the
variable in conjunction with a package name override. Here is an
- example: ::
+ example::
- RPROVIDES_${PN} = "widget-abi-2"
+ RPROVIDES:${PN} = "widget-abi-2"
:term:`RRECOMMENDS`
A list of packages that extends the usability of a package being
built. The package being built does not depend on this list of
packages in order to successfully build, but needs them for the
extended usability. To specify runtime dependencies for packages, see
- the ``RDEPENDS`` variable.
+ the :term:`RDEPENDS` variable.
BitBake supports specifying versioned recommends. Although the syntax
varies depending on the packaging format, BitBake hides these
differences from you. Here is the general syntax to specify versions
- with the ``RRECOMMENDS`` variable: ::
+ with the :term:`RRECOMMENDS` variable::
- RRECOMMENDS_${PN} = "package (operator version)"
+ RRECOMMENDS:${PN} = "package (operator version)"
- For ``operator``, you can specify the following: ::
+ For ``operator``, you can specify the following::
=
<
@@ -1276,78 +1323,114 @@ overview of their function and contents.
>=
For example, the following sets up a recommend on version
- 1.2 or greater of the package ``foo``: ::
+ 1.2 or greater of the package ``foo``::
- RRECOMMENDS_${PN} = "foo (>= 1.2)"
+ RRECOMMENDS:${PN} = "foo (>= 1.2)"
:term:`SECTION`
The section in which packages should be categorized.
:term:`SRC_URI`
- The list of source files - local or remote. This variable tells
+ The list of source files --- local or remote. This variable tells
BitBake which bits to pull for the build and how to pull them. For
example, if the recipe or append file needs to fetch a single tarball
- from the Internet, the recipe or append file uses a ``SRC_URI`` entry
- that specifies that tarball. On the other hand, if the recipe or
- append file needs to fetch a tarball and include a custom file, the
- recipe or append file needs an ``SRC_URI`` variable that specifies
- all those sources.
-
- The following list explains the available URI protocols:
+ from the Internet, the recipe or append file uses a :term:`SRC_URI`
+ entry that specifies that tarball. On the other hand, if the recipe or
+ append file needs to fetch a tarball, apply two patches, and include
+ a custom file, the recipe or append file needs an :term:`SRC_URI`
+ variable that specifies all those sources.
+
+ The following list explains the available URI protocols. URI
+ protocols are highly dependent on particular BitBake Fetcher
+ submodules. Depending on the fetcher BitBake uses, various URL
+ parameters are employed. For specifics on the supported Fetchers, see
+ the :ref:`bitbake-user-manual/bitbake-user-manual-fetching:fetchers`
+ section.
- - ``file://`` : Fetches files, which are usually files shipped
- with the metadata, from the local machine. The path is relative to
- the :term:`FILESPATH` variable.
+ - ``az://``: Fetches files from an Azure Storage account using HTTPS.
- - ``bzr://`` : Fetches files from a Bazaar revision control
+ - ``bzr://``: Fetches files from a Bazaar revision control
repository.
- - ``git://`` : Fetches files from a Git revision control
+ - ``ccrc://``: Fetches files from a ClearCase repository.
+
+ - ``cvs://``: Fetches files from a CVS revision control
repository.
- - ``osc://`` : Fetches files from an OSC (OpenSUSE Build service)
- revision control repository.
+ - ``file://``: Fetches files, which are usually files shipped
+ with the Metadata, from the local machine.
+ The path is relative to the :term:`FILESPATH`
+ variable. Thus, the build system searches, in order, from the
+ following directories, which are assumed to be subdirectories of
+ the directory in which the recipe file (``.bb``) or append file
+ (``.bbappend``) resides:
- - ``repo://`` : Fetches files from a repo (Git) repository.
+ - ``${BPN}``: the base recipe name without any special suffix
+ or version numbers.
- - ``http://`` : Fetches files from the Internet using HTTP.
+ - ``${BP}`` - ``${BPN}-${PV}``: the base recipe name and
+ version but without any special package name suffix.
- - ``https://`` : Fetches files from the Internet using HTTPS.
+ - ``files``: files within a directory, which is named ``files``
+ and is also alongside the recipe or append file.
- - ``ftp://`` : Fetches files from the Internet using FTP.
+ - ``ftp://``: Fetches files from the Internet using FTP.
- - ``cvs://`` : Fetches files from a CVS revision control
+ - ``git://``: Fetches files from a Git revision control
repository.
- - ``hg://`` : Fetches files from a Mercurial (``hg``) revision
- control repository.
+ - ``gitsm://``: Fetches submodules from a Git revision control
+ repository.
- - ``p4://`` : Fetches files from a Perforce (``p4``) revision
+ - ``hg://``: Fetches files from a Mercurial (``hg``) revision
control repository.
- - ``ssh://`` : Fetches files from a secure shell.
+ - ``http://``: Fetches files from the Internet using HTTP.
- - ``svn://`` : Fetches files from a Subversion (``svn``) revision
+ - ``https://``: Fetches files from the Internet using HTTPS.
+
+ - ``npm://``: Fetches JavaScript modules from a registry.
+
+ - ``osc://``: Fetches files from an OSC (OpenSUSE Build service)
+ revision control repository.
+
+ - ``p4://``: Fetches files from a Perforce (``p4``) revision
control repository.
- - ``az://`` : Fetches files from an Azure Storage account using HTTPS.
+ - ``repo://``: Fetches files from a repo (Git) repository.
+
+ - ``ssh://``: Fetches files from a secure shell.
+
+ - ``svn://``: Fetches files from a Subversion (``svn``) revision
+ control repository.
Here are some additional options worth mentioning:
- - ``unpack`` : Controls whether or not to unpack the file if it is
- an archive. The default action is to unpack the file.
+ - ``downloadfilename``: Specifies the filename used when storing
+ the downloaded file.
+
+ - ``name``: Specifies a name to be used for association with
+ :term:`SRC_URI` checksums or :term:`SRCREV` when you have more than one
+ file or git repository specified in :term:`SRC_URI`. For example::
+
+ SRC_URI = "git://example.com/foo.git;branch=main;name=first \
+ git://example.com/bar.git;branch=main;name=second \
+ http://example.com/file.tar.gz;name=third"
- - ``subdir`` : Places the file (or extracts its contents) into the
+ SRCREV_first = "f1d2d2f924e986ac86fdf7b36c94bcdf32beec15"
+ SRCREV_second = "e242ed3bffccdf271b7fbaf34ed72d089537b42f"
+ SRC_URI[third.sha256sum] = "13550350a8681c84c861aac2e5b440161c2b33a3e4f302ac680ca5b686de48de"
+
+ - ``subdir``: Places the file (or extracts its contents) into the
specified subdirectory. This option is useful for unusual tarballs
or other archives that do not have their files already in a
subdirectory within the archive.
- - ``name`` : Specifies a name to be used for association with
- ``SRC_URI`` checksums when you have more than one file specified
- in ``SRC_URI``.
+ - ``subpath``: Limits the checkout to a specific subpath of the
+ tree when the Git fetcher is used.
- - ``downloadfilename`` : Specifies the filename used when storing
- the downloaded file.
+ - ``unpack``: Controls whether or not to unpack the file if it is
+ an archive. The default action is to unpack the file.
:term:`SRCDATE`
The date of the source code used to build the package. This variable
@@ -1359,7 +1442,7 @@ overview of their function and contents.
variable applies only when using Subversion, Git, Mercurial and
Bazaar. If you want to build a fixed revision and you want to avoid
performing a query on the remote repository every time BitBake parses
- your recipe, you should specify a ``SRCREV`` that is a full revision
+ your recipe, you should specify a :term:`SRCREV` that is a full revision
identifier and not just a tag.
:term:`SRCREV_FORMAT`
@@ -1368,10 +1451,10 @@ overview of their function and contents.
:term:`SRC_URI`.
The system needs help constructing these values under these
- circumstances. Each component in the ``SRC_URI`` is assigned a name
- and these are referenced in the ``SRCREV_FORMAT`` variable. Consider
+ circumstances. Each component in the :term:`SRC_URI` is assigned a name
+ and these are referenced in the :term:`SRCREV_FORMAT` variable. Consider
an example with URLs named "machine" and "meta". In this case,
- ``SRCREV_FORMAT`` could look like "machine_meta" and those names
+ :term:`SRCREV_FORMAT` could look like "machine_meta" and those names
would have the SCM versions substituted into each position. Only one
``AUTOINC`` placeholder is added, and only if needed; it is placed at
the start of the returned string.
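
As an illustrative sketch (repository URLs invented), a recipe that
fetches the two trees named above might contain::

   SRC_URI = "git://example.com/machine.git;branch=main;name=machine \
              git://example.com/meta.git;branch=main;name=meta"
   SRCREV_FORMAT = "machine_meta"

Each repository's revision is then substituted into its named position
in the resulting string.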
@@ -1383,7 +1466,7 @@ overview of their function and contents.
:term:`STAMPCLEAN`
Specifies the base path used to create recipe stamp files. Unlike the
- :term:`STAMP` variable, ``STAMPCLEAN`` can contain
+ :term:`STAMP` variable, :term:`STAMPCLEAN` can contain
wildcards to match the range of files a clean operation should
remove. BitBake uses a clean operation to remove any other stamps it
should be removing when creating a new stamp.
diff --git a/bitbake/doc/releases.rst b/bitbake/doc/releases.rst
index d68d715..6635032 100644
--- a/bitbake/doc/releases.rst
+++ b/bitbake/doc/releases.rst
@@ -1,32 +1,74 @@
.. SPDX-License-Identifier: CC-BY-2.5
-=========================
- Current Release Manuals
-=========================
+===========================
+ Supported Release Manuals
+===========================
+
+******************************
+Release Series 3.4 (honister)
+******************************
+
+- :yocto_docs:`3.4 BitBake User Manual </3.4/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.4.1 BitBake User Manual </3.4.1/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.4.2 BitBake User Manual </3.4.2/bitbake-user-manual/bitbake-user-manual.html>`
+
+******************************
+Release Series 3.3 (hardknott)
+******************************
+
+- :yocto_docs:`3.3 BitBake User Manual </3.3/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.3.1 BitBake User Manual </3.3.1/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.3.2 BitBake User Manual </3.3.2/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.3.3 BitBake User Manual </3.3.3/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.3.4 BitBake User Manual </3.3.4/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.3.5 BitBake User Manual </3.3.5/bitbake-user-manual/bitbake-user-manual.html>`
****************************
-3.1 'dunfell' Release Series
+Release Series 3.1 (dunfell)
****************************
- :yocto_docs:`3.1 BitBake User Manual </3.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.1 BitBake User Manual </3.1.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.1.2 BitBake User Manual </3.1.2/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.1.3 BitBake User Manual </3.1.3/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.1.4 BitBake User Manual </3.1.4/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.1.5 BitBake User Manual </3.1.5/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.1.6 BitBake User Manual </3.1.6/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.1.7 BitBake User Manual </3.1.7/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.1.8 BitBake User Manual </3.1.8/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.1.9 BitBake User Manual </3.1.9/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.1.10 BitBake User Manual </3.1.10/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.1.11 BitBake User Manual </3.1.11/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.1.12 BitBake User Manual </3.1.12/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.1.13 BitBake User Manual </3.1.13/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.1.14 BitBake User Manual </3.1.14/bitbake-user-manual/bitbake-user-manual.html>`
==========================
- Previous Release Manuals
+ Outdated Release Manuals
==========================
+*******************************
+Release Series 3.2 (gatesgarth)
+*******************************
+
+- :yocto_docs:`3.2 BitBake User Manual </3.2/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.2.1 BitBake User Manual </3.2.1/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.2.2 BitBake User Manual </3.2.2/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.2.3 BitBake User Manual </3.2.3/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.2.4 BitBake User Manual </3.2.4/bitbake-user-manual/bitbake-user-manual.html>`
+
*************************
-3.0 'zeus' Release Series
+Release Series 3.0 (zeus)
*************************
- :yocto_docs:`3.0 BitBake User Manual </3.0/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.1 BitBake User Manual </3.0.1/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.2 BitBake User Manual </3.0.2/bitbake-user-manual/bitbake-user-manual.html>`
- :yocto_docs:`3.0.3 BitBake User Manual </3.0.3/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`3.0.4 BitBake User Manual </3.0.4/bitbake-user-manual/bitbake-user-manual.html>`
****************************
-2.7 'warrior' Release Series
+Release Series 2.7 (warrior)
****************************
- :yocto_docs:`2.7 BitBake User Manual </2.7/bitbake-user-manual/bitbake-user-manual.html>`
@@ -36,7 +78,7 @@
- :yocto_docs:`2.7.4 BitBake User Manual </2.7.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
-2.6 'thud' Release Series
+Release Series 2.6 (thud)
*************************
- :yocto_docs:`2.6 BitBake User Manual </2.6/bitbake-user-manual/bitbake-user-manual.html>`
@@ -46,16 +88,16 @@
- :yocto_docs:`2.6.4 BitBake User Manual </2.6.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
-2.5 'sumo' Release Series
+Release Series 2.5 (sumo)
*************************
-- :yocto_docs:`2.5 BitBake User Manual </2.5/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`2.5.1 BitBake User Manual </2.5.1/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`2.5.2 BitBake User Manual </2.5.2/bitbake-user-manual/bitbake-user-manual.html>`
-- :yocto_docs:`2.5.3 BitBake User Manual </2.5.3/bitbake-user-manual/bitbake-user-manual.html>`
+- :yocto_docs:`2.5 Documentation </2.5>`
+- :yocto_docs:`2.5.1 Documentation </2.5.1>`
+- :yocto_docs:`2.5.2 Documentation </2.5.2>`
+- :yocto_docs:`2.5.3 Documentation </2.5.3>`
**************************
-2.4 'rocko' Release Series
+Release Series 2.4 (rocko)
**************************
- :yocto_docs:`2.4 BitBake User Manual </2.4/bitbake-user-manual/bitbake-user-manual.html>`
@@ -65,7 +107,7 @@
- :yocto_docs:`2.4.4 BitBake User Manual </2.4.4/bitbake-user-manual/bitbake-user-manual.html>`
*************************
-2.3 'pyro' Release Series
+Release Series 2.3 (pyro)
*************************
- :yocto_docs:`2.3 BitBake User Manual </2.3/bitbake-user-manual/bitbake-user-manual.html>`
@@ -75,7 +117,7 @@
- :yocto_docs:`2.3.4 BitBake User Manual </2.3.4/bitbake-user-manual/bitbake-user-manual.html>`
**************************
-2.2 'morty' Release Series
+Release Series 2.2 (morty)
**************************
- :yocto_docs:`2.2 BitBake User Manual </2.2/bitbake-user-manual/bitbake-user-manual.html>`
@@ -84,7 +126,7 @@
- :yocto_docs:`2.2.3 BitBake User Manual </2.2.3/bitbake-user-manual/bitbake-user-manual.html>`
****************************
-2.1 'krogoth' Release Series
+Release Series 2.1 (krogoth)
****************************
- :yocto_docs:`2.1 BitBake User Manual </2.1/bitbake-user-manual/bitbake-user-manual.html>`
@@ -93,7 +135,7 @@
- :yocto_docs:`2.1.3 BitBake User Manual </2.1.3/bitbake-user-manual/bitbake-user-manual.html>`
***************************
-2.0 'jethro' Release Series
+Release Series 2.0 (jethro)
***************************
- :yocto_docs:`1.9 BitBake User Manual </1.9/bitbake-user-manual/bitbake-user-manual.html>`
@@ -103,7 +145,7 @@
- :yocto_docs:`2.0.3 BitBake User Manual </2.0.3/bitbake-user-manual/bitbake-user-manual.html>`
*************************
-1.8 'fido' Release Series
+Release Series 1.8 (fido)
*************************
- :yocto_docs:`1.8 BitBake User Manual </1.8/bitbake-user-manual/bitbake-user-manual.html>`
@@ -111,7 +153,7 @@
- :yocto_docs:`1.8.2 BitBake User Manual </1.8.2/bitbake-user-manual/bitbake-user-manual.html>`
**************************
-1.7 'dizzy' Release Series
+Release Series 1.7 (dizzy)
**************************
- :yocto_docs:`1.7 BitBake User Manual </1.7/bitbake-user-manual/bitbake-user-manual.html>`
@@ -120,7 +162,7 @@
- :yocto_docs:`1.7.3 BitBake User Manual </1.7.3/bitbake-user-manual/bitbake-user-manual.html>`
**************************
-1.6 'daisy' Release Series
+Release Series 1.6 (daisy)
**************************
- :yocto_docs:`1.6 BitBake User Manual </1.6/bitbake-user-manual/bitbake-user-manual.html>`
diff --git a/bitbake/lib/bb/COW.py b/bitbake/lib/bb/COW.py
index 23c22b6..76bc08a 100644
--- a/bitbake/lib/bb/COW.py
+++ b/bitbake/lib/bb/COW.py
@@ -3,6 +3,8 @@
#
# Copyright (C) 2006 Tim Ansell
#
+# SPDX-License-Identifier: GPL-2.0-only
+#
# Please Note:
# Be careful when using mutable types (ie Dict and Lists) - operations involving these are SLOW.
# Assign a file to __warn__ to get warnings about slow operations.
diff --git a/bitbake/lib/bb/__init__.py b/bitbake/lib/bb/__init__.py
index 9f61dae..b8333bd 100644
--- a/bitbake/lib/bb/__init__.py
+++ b/bitbake/lib/bb/__init__.py
@@ -9,11 +9,11 @@
# SPDX-License-Identifier: GPL-2.0-only
#
-__version__ = "1.50.0"
+__version__ = "2.0.0"
import sys
-if sys.version_info < (3, 5, 0):
- raise RuntimeError("Sorry, python 3.5.0 or later is required for this version of bitbake")
+if sys.version_info < (3, 6, 0):
+ raise RuntimeError("Sorry, python 3.6.0 or later is required for this version of bitbake")
class BBHandledException(Exception):
@@ -71,6 +71,13 @@ class BBLoggerMixin(object):
def verbnote(self, msg, *args, **kwargs):
return self.log(logging.INFO + 2, msg, *args, **kwargs)
+ def warnonce(self, msg, *args, **kwargs):
+ return self.log(logging.WARNING - 1, msg, *args, **kwargs)
+
+ def erroronce(self, msg, *args, **kwargs):
+ return self.log(logging.ERROR - 1, msg, *args, **kwargs)
+
+
Logger = logging.getLoggerClass()
class BBLogger(Logger, BBLoggerMixin):
def __init__(self, name, *args, **kwargs):
@@ -157,9 +164,15 @@ def verbnote(*args):
def warn(*args):
mainlogger.warning(''.join(args))
+def warnonce(*args):
+ mainlogger.warnonce(''.join(args))
+
def error(*args, **kwargs):
mainlogger.error(''.join(args), extra=kwargs)
+def erroronce(*args):
+ mainlogger.erroronce(''.join(args))
+
def fatal(*args, **kwargs):
mainlogger.critical(''.join(args), extra=kwargs)
raise BBHandledException()
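
A minimal usage sketch of the new helpers (message text invented; the
once-per-run suppression is assumed to be handled by the log filters in
``bb.msg``, which this series also touches)::

   import bb

   bb.warnonce("fetcher: falling back to a mirror")  # expected once
   bb.warnonce("fetcher: falling back to a mirror")  # repeat suppressed
   bb.erroronce("layer X is incompatible")           # error-level variant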
diff --git a/bitbake/lib/bb/asyncrpc/__init__.py b/bitbake/lib/bb/asyncrpc/__init__.py
new file mode 100644
index 0000000..9a85e99
--- /dev/null
+++ b/bitbake/lib/bb/asyncrpc/__init__.py
@@ -0,0 +1,33 @@
+#
+# Copyright BitBake Contributors
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+import itertools
+import json
+
+# The Python async server defaults to a 64K receive buffer, so we hardcode our
+# maximum chunk size. It would be better if the client and server reported to
+# each other what the maximum chunk sizes were, but that will slow down the
+# connection setup with a round trip delay so I'd rather not do that unless it
+# is necessary
+DEFAULT_MAX_CHUNK = 32 * 1024
+
+
+def chunkify(msg, max_chunk):
+ if len(msg) < max_chunk - 1:
+ yield ''.join((msg, "\n"))
+ else:
+ yield ''.join((json.dumps({
+ 'chunk-stream': None
+ }), "\n"))
+
+ args = [iter(msg)] * (max_chunk - 1)
+ for m in map(''.join, itertools.zip_longest(*args, fillvalue='')):
+ yield ''.join(itertools.chain(m, "\n"))
+ yield "\n"
+
+
+from .client import AsyncClient, Client
+from .serv import AsyncServer, AsyncServerConnection, ClientError, ServerError
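
A small driver showing the framing ``chunkify()`` produces (inputs
invented)::

   import json
   from bb.asyncrpc import chunkify, DEFAULT_MAX_CHUNK

   # A message shorter than the chunk limit becomes one
   # newline-terminated line.
   print(list(chunkify(json.dumps({"ping": {}}), DEFAULT_MAX_CHUNK)))

   # An oversized message is announced with a {"chunk-stream": null}
   # marker, split into (max_chunk - 1)-character lines, and terminated
   # by an empty line.
   print(list(chunkify("x" * 10, 5)))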
diff --git a/bitbake/lib/bb/asyncrpc/client.py b/bitbake/lib/bb/asyncrpc/client.py
new file mode 100644
index 0000000..fa042bb
--- /dev/null
+++ b/bitbake/lib/bb/asyncrpc/client.py
@@ -0,0 +1,178 @@
+#
+# Copyright BitBake Contributors
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+import abc
+import asyncio
+import json
+import os
+import socket
+import sys
+from . import chunkify, DEFAULT_MAX_CHUNK
+
+
+class AsyncClient(object):
+ def __init__(self, proto_name, proto_version, logger, timeout=30):
+ self.reader = None
+ self.writer = None
+ self.max_chunk = DEFAULT_MAX_CHUNK
+ self.proto_name = proto_name
+ self.proto_version = proto_version
+ self.logger = logger
+ self.timeout = timeout
+
+ async def connect_tcp(self, address, port):
+ async def connect_sock():
+ return await asyncio.open_connection(address, port)
+
+ self._connect_sock = connect_sock
+
+ async def connect_unix(self, path):
+ async def connect_sock():
+ # AF_UNIX has path length issues, so chdir here to work around them
+ cwd = os.getcwd()
+ try:
+ os.chdir(os.path.dirname(path))
+ # The socket must be opened synchronously so that CWD doesn't get
+ # changed out from underneath us, so we pass it as a sock into asyncio
+ sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM, 0)
+ sock.connect(os.path.basename(path))
+ finally:
+ os.chdir(cwd)
+ return await asyncio.open_unix_connection(sock=sock)
+
+ self._connect_sock = connect_sock
+
+ async def setup_connection(self):
+ s = '%s %s\n\n' % (self.proto_name, self.proto_version)
+ self.writer.write(s.encode("utf-8"))
+ await self.writer.drain()
+
+ async def connect(self):
+ if self.reader is None or self.writer is None:
+ (self.reader, self.writer) = await self._connect_sock()
+ await self.setup_connection()
+
+ async def close(self):
+ self.reader = None
+
+ if self.writer is not None:
+ self.writer.close()
+ self.writer = None
+
+ async def _send_wrapper(self, proc):
+ count = 0
+ while True:
+ try:
+ await self.connect()
+ return await proc()
+ except (
+ OSError,
+ ConnectionError,
+ json.JSONDecodeError,
+ UnicodeDecodeError,
+ ) as e:
+ self.logger.warning("Error talking to server: %s" % e)
+ if count >= 3:
+ if not isinstance(e, ConnectionError):
+ raise ConnectionError(str(e))
+ raise e
+ await self.close()
+ count += 1
+
+ async def send_message(self, msg):
+ async def get_line():
+ try:
+ line = await asyncio.wait_for(self.reader.readline(), self.timeout)
+ except asyncio.TimeoutError:
+ raise ConnectionError("Timed out waiting for server")
+
+ if not line:
+ raise ConnectionError("Connection closed")
+
+ line = line.decode("utf-8")
+
+ if not line.endswith("\n"):
+ raise ConnectionError("Bad message %r" % (line))
+
+ return line
+
+ async def proc():
+ for c in chunkify(json.dumps(msg), self.max_chunk):
+ self.writer.write(c.encode("utf-8"))
+ await self.writer.drain()
+
+ l = await get_line()
+
+ m = json.loads(l)
+ if m and "chunk-stream" in m:
+ lines = []
+ while True:
+ l = (await get_line()).rstrip("\n")
+ if not l:
+ break
+ lines.append(l)
+
+ m = json.loads("".join(lines))
+
+ return m
+
+ return await self._send_wrapper(proc)
+
+ async def ping(self):
+ return await self.send_message(
+ {'ping': {}}
+ )
+
+
+class Client(object):
+ def __init__(self):
+ self.client = self._get_async_client()
+ self.loop = asyncio.new_event_loop()
+
+ # Override any pre-existing loop.
+ # Without this, the PR server export selftest triggers a hang
+ # when running with Python 3.7. The drawback is that there is
+ # potential for issues if the PR and hash equiv (or some new)
+ # clients need to both be instantiated in the same process.
+ # This should be revisited if/when Python 3.9 becomes the
+ # minimum required version for BitBake, as it seems not
+ # required (but harmless) with it.
+ asyncio.set_event_loop(self.loop)
+
+ self._add_methods('connect_tcp', 'ping')
+
+ @abc.abstractmethod
+ def _get_async_client(self):
+ pass
+
+ def _get_downcall_wrapper(self, downcall):
+ def wrapper(*args, **kwargs):
+ return self.loop.run_until_complete(downcall(*args, **kwargs))
+
+ return wrapper
+
+ def _add_methods(self, *methods):
+ for m in methods:
+ downcall = getattr(self.client, m)
+ setattr(self, m, self._get_downcall_wrapper(downcall))
+
+ def connect_unix(self, path):
+ self.loop.run_until_complete(self.client.connect_unix(path))
+ self.loop.run_until_complete(self.client.connect())
+
+ @property
+ def max_chunk(self):
+ return self.client.max_chunk
+
+ @max_chunk.setter
+ def max_chunk(self, value):
+ self.client.max_chunk = value
+
+ def close(self):
+ self.loop.run_until_complete(self.client.close())
+ if sys.version_info >= (3, 6):
+ self.loop.run_until_complete(self.loop.shutdown_asyncgens())
+ self.loop.close()
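
A concrete client only has to supply the asynchronous half; a minimal
sketch against the API above (the "ping-proto" name, port and class
names are invented)::

   import logging
   from bb.asyncrpc import AsyncClient, Client

   class AsyncPingClient(AsyncClient):
       def __init__(self):
           super().__init__("ping-proto", "1.0", logging.getLogger("Ping"))

   class PingClient(Client):
       def _get_async_client(self):
           return AsyncPingClient()

   client = PingClient()
   client.connect_tcp("localhost", 8686)
   print(client.ping())   # {'alive': True} from a matching server
   client.close()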
diff --git a/bitbake/lib/bb/asyncrpc/serv.py b/bitbake/lib/bb/asyncrpc/serv.py
new file mode 100644
index 0000000..e14df18
--- /dev/null
+++ b/bitbake/lib/bb/asyncrpc/serv.py
@@ -0,0 +1,288 @@
+#
+# Copyright BitBake Contributors
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+import abc
+import asyncio
+import json
+import os
+import signal
+import socket
+import sys
+import multiprocessing
+from . import chunkify, DEFAULT_MAX_CHUNK
+
+
+class ClientError(Exception):
+ pass
+
+
+class ServerError(Exception):
+ pass
+
+
+class AsyncServerConnection(object):
+ def __init__(self, reader, writer, proto_name, logger):
+ self.reader = reader
+ self.writer = writer
+ self.proto_name = proto_name
+ self.max_chunk = DEFAULT_MAX_CHUNK
+ self.handlers = {
+ 'chunk-stream': self.handle_chunk,
+ 'ping': self.handle_ping,
+ }
+ self.logger = logger
+
+ async def process_requests(self):
+ try:
+ self.addr = self.writer.get_extra_info('peername')
+ self.logger.debug('Client %r connected' % (self.addr,))
+
+ # Read protocol and version
+ client_protocol = await self.reader.readline()
+ if client_protocol is None:
+ return
+
+ (client_proto_name, client_proto_version) = client_protocol.decode('utf-8').rstrip().split()
+ if client_proto_name != self.proto_name:
+ self.logger.debug('Rejecting invalid protocol %s' % (client_proto_name))
+ return
+
+ self.proto_version = tuple(int(v) for v in client_proto_version.split('.'))
+ if not self.validate_proto_version():
+ self.logger.debug('Rejecting invalid protocol version %s' % (client_proto_version))
+ return
+
+ # Read headers. Currently, no headers are implemented, so look for
+ # an empty line to signal the end of the headers
+ while True:
+ line = await self.reader.readline()
+ if line is None:
+ return
+
+ line = line.decode('utf-8').rstrip()
+ if not line:
+ break
+
+ # Handle messages
+ while True:
+ d = await self.read_message()
+ if d is None:
+ break
+ await self.dispatch_message(d)
+ await self.writer.drain()
+ except ClientError as e:
+ self.logger.error(str(e))
+ finally:
+ self.writer.close()
+
+ async def dispatch_message(self, msg):
+ for k in self.handlers.keys():
+ if k in msg:
+ self.logger.debug('Handling %s' % k)
+ await self.handlers[k](msg[k])
+ return
+
+ raise ClientError("Unrecognized command %r" % msg)
+
+ def write_message(self, msg):
+ for c in chunkify(json.dumps(msg), self.max_chunk):
+ self.writer.write(c.encode('utf-8'))
+
+ async def read_message(self):
+ l = await self.reader.readline()
+ if not l:
+ return None
+
+ try:
+ message = l.decode('utf-8')
+
+ if not message.endswith('\n'):
+ return None
+
+ return json.loads(message)
+ except (json.JSONDecodeError, UnicodeDecodeError) as e:
+ self.logger.error('Bad message from client: %r' % message)
+ raise e
+
+ async def handle_chunk(self, request):
+ lines = []
+ try:
+ while True:
+ l = await self.reader.readline()
+ l = l.rstrip(b"\n").decode("utf-8")
+ if not l:
+ break
+ lines.append(l)
+
+ msg = json.loads(''.join(lines))
+ except (json.JSONDecodeError, UnicodeDecodeError) as e:
+ self.logger.error('Bad message from client: %r' % lines)
+ raise e
+
+ if 'chunk-stream' in msg:
+ raise ClientError("Nested chunks are not allowed")
+
+ await self.dispatch_message(msg)
+
+ async def handle_ping(self, request):
+ response = {'alive': True}
+ self.write_message(response)
+
+
+class AsyncServer(object):
+ def __init__(self, logger):
+ self._cleanup_socket = None
+ self.logger = logger
+ self.start = None
+ self.address = None
+ self.loop = None
+
+ def start_tcp_server(self, host, port):
+ def start_tcp():
+ self.server = self.loop.run_until_complete(
+ asyncio.start_server(self.handle_client, host, port)
+ )
+
+ for s in self.server.sockets:
+ self.logger.debug('Listening on %r' % (s.getsockname(),))
+ # Newer python does this automatically. Do it manually here for
+ # maximum compatibility
+ s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
+ s.setsockopt(socket.SOL_TCP, socket.TCP_QUICKACK, 1)
+
+ name = self.server.sockets[0].getsockname()
+ if self.server.sockets[0].family == socket.AF_INET6:
+ self.address = "[%s]:%d" % (name[0], name[1])
+ else:
+ self.address = "%s:%d" % (name[0], name[1])
+
+ self.start = start_tcp
+
+ def start_unix_server(self, path):
+ def cleanup():
+ os.unlink(path)
+
+ def start_unix():
+ cwd = os.getcwd()
+ try:
+ # Work around path length limits in AF_UNIX
+ os.chdir(os.path.dirname(path))
+ self.server = self.loop.run_until_complete(
+ asyncio.start_unix_server(self.handle_client, os.path.basename(path))
+ )
+ finally:
+ os.chdir(cwd)
+
+ self.logger.debug('Listening on %r' % path)
+
+ self._cleanup_socket = cleanup
+ self.address = "unix://%s" % os.path.abspath(path)
+
+ self.start = start_unix
+
+ @abc.abstractmethod
+ def accept_client(self, reader, writer):
+ pass
+
+ async def handle_client(self, reader, writer):
+ # writer.transport.set_write_buffer_limits(0)
+ try:
+ client = self.accept_client(reader, writer)
+ await client.process_requests()
+ except Exception as e:
+ import traceback
+ self.logger.error('Error from client: %s' % str(e), exc_info=True)
+ traceback.print_exc()
+ writer.close()
+ self.logger.debug('Client disconnected')
+
+ def run_loop_forever(self):
+ try:
+ self.loop.run_forever()
+ except KeyboardInterrupt:
+ pass
+
+ def signal_handler(self):
+ self.logger.debug("Got exit signal")
+ self.loop.stop()
+
+ def _serve_forever(self):
+ try:
+ self.loop.add_signal_handler(signal.SIGTERM, self.signal_handler)
+ signal.pthread_sigmask(signal.SIG_UNBLOCK, [signal.SIGTERM])
+
+ self.run_loop_forever()
+ self.server.close()
+
+ self.loop.run_until_complete(self.server.wait_closed())
+ self.logger.debug('Server shutting down')
+ finally:
+ if self._cleanup_socket is not None:
+ self._cleanup_socket()
+
+ def serve_forever(self):
+ """
+ Serve requests in the current process
+ """
+ # Create loop and override any loop that may have existed in
+ # a parent process. It is possible that the usecases of
+ # serve_forever might be constrained enough to allow using
+ # get_event_loop here, but better safe than sorry for now.
+ self.loop = asyncio.new_event_loop()
+ asyncio.set_event_loop(self.loop)
+ self.start()
+ self._serve_forever()
+
+ def serve_as_process(self, *, prefunc=None, args=()):
+ """
+ Serve requests in a child process
+ """
+ def run(queue):
+ # Create loop and override any loop that may have existed
+ # in a parent process. Without doing this and instead
+ # using get_event_loop, at the very minimum the hashserv
+ # unit tests will hang when running the second test.
+ # This happens since get_event_loop in the spawned server
+ # process for the second testcase ends up with the loop
+ # from the hashserv client created in the unit test process
+ # when running the first testcase. The problem is somewhat
+ # more general, though, as any potential use of asyncio in
+ # Cooker could create a loop that needs to replaced in this
+ # new process.
+ self.loop = asyncio.new_event_loop()
+ asyncio.set_event_loop(self.loop)
+ try:
+ self.start()
+ finally:
+ queue.put(self.address)
+ queue.close()
+
+ if prefunc is not None:
+ prefunc(self, *args)
+
+ self._serve_forever()
+
+ if sys.version_info >= (3, 6):
+ self.loop.run_until_complete(self.loop.shutdown_asyncgens())
+ self.loop.close()
+
+ queue = multiprocessing.Queue()
+
+ # Temporarily block SIGTERM. The server process will inherit this
+ # block which will ensure it doesn't receive the SIGTERM until the
+ # handler is ready for it
+ mask = signal.pthread_sigmask(signal.SIG_BLOCK, [signal.SIGTERM])
+ try:
+ self.process = multiprocessing.Process(target=run, args=(queue,))
+ self.process.start()
+
+ self.address = queue.get()
+ queue.close()
+ queue.join_thread()
+
+ return self.process
+ finally:
+ signal.pthread_sigmask(signal.SIG_SETMASK, mask)
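
The matching server side subclasses ``AsyncServer`` and
``AsyncServerConnection``; a minimal sketch (same invented "ping-proto"
name, relying on the built-in ping handler)::

   import logging
   from bb.asyncrpc import AsyncServer, AsyncServerConnection

   logger = logging.getLogger("PingServ")

   class PingConnection(AsyncServerConnection):
       def validate_proto_version(self):
           # Illustrative policy: accept any 1.x client
           return self.proto_version[0] == 1

   class PingServer(AsyncServer):
       def accept_client(self, reader, writer):
           return PingConnection(reader, writer, "ping-proto", logger)

   server = PingServer(logger)
   server.start_tcp_server("localhost", 8686)
   server.serve_forever()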
diff --git a/bitbake/lib/bb/build.py b/bitbake/lib/bb/build.py
index 163572b..55f68b9 100644
--- a/bitbake/lib/bb/build.py
+++ b/bitbake/lib/bb/build.py
@@ -295,8 +295,8 @@ def exec_func_python(func, d, runfile, cwd=None):
lineno = int(d.getVarFlag(func, "lineno", False))
bb.methodpool.insert_method(func, text, fn, lineno - 1)
- comp = utils.better_compile(code, func, "exec_python_func() autogenerated")
- utils.better_exec(comp, {"d": d}, code, "exec_python_func() autogenerated")
+ comp = utils.better_compile(code, func, "exec_func_python() autogenerated")
+ utils.better_exec(comp, {"d": d}, code, "exec_func_python() autogenerated")
finally:
# We want any stdout/stderr to be printed before any other log messages to make debugging
# more accurate. In some cases we seem to lose stdout/stderr entirely in logging tests without this.
@@ -569,7 +569,6 @@ exit $ret
def _task_data(fn, task, d):
localdata = bb.data.createCopy(d)
localdata.setVar('BB_FILENAME', fn)
- localdata.setVar('BB_CURRENTTASK', task[3:])
localdata.setVar('OVERRIDES', 'task-%s:%s' %
(task[3:].replace('_', '-'), d.getVar('OVERRIDES', False)))
localdata.finalize()
@@ -583,7 +582,7 @@ def _exec_task(fn, task, d, quieterr):
running it with its own local metadata, and with some useful variables set.
"""
if not d.getVarFlag(task, 'task', False):
- event.fire(TaskInvalid(task, d), d)
+ event.fire(TaskInvalid(task, fn, d), d)
logger.error("No such task: %s" % task)
return 1
@@ -686,51 +685,55 @@ def _exec_task(fn, task, d, quieterr):
try:
try:
event.fire(TaskStarted(task, fn, logfn, flags, localdata), localdata)
- except (bb.BBHandledException, SystemExit):
- return 1
- try:
for func in (prefuncs or '').split():
exec_func(func, localdata)
exec_func(task, localdata)
for func in (postfuncs or '').split():
exec_func(func, localdata)
- except bb.BBHandledException:
- event.fire(TaskFailed(task, fn, logfn, localdata, True), localdata)
- return 1
- except (Exception, SystemExit) as exc:
- if quieterr:
- event.fire(TaskFailedSilent(task, fn, logfn, localdata), localdata)
- else:
- errprinted = errchk.triggered
- # If the output is already on stdout, we've printed the information in the
- # logs once already so don't duplicate
- if verboseStdoutLogging:
- errprinted = True
+ finally:
+ # Need to flush and close the logs before sending events where the
+ # UI may try to look at the logs.
+ sys.stdout.flush()
+ sys.stderr.flush()
+
+ bblogger.removeHandler(handler)
+
+ # Restore the backup fds
+ os.dup2(osi[0], osi[1])
+ os.dup2(oso[0], oso[1])
+ os.dup2(ose[0], ose[1])
+
+ # Close the backup fds
+ os.close(osi[0])
+ os.close(oso[0])
+ os.close(ose[0])
+
+ logfile.close()
+ if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
+ logger.debug2("Zero size logfn %s, removing", logfn)
+ bb.utils.remove(logfn)
+ bb.utils.remove(loglink)
+ except (Exception, SystemExit) as exc:
+ handled = False
+ if isinstance(exc, bb.BBHandledException):
+ handled = True
+
+ if quieterr:
+ if not handled:
+ logger.warning(repr(exc))
+ event.fire(TaskFailedSilent(task, fn, logfn, localdata), localdata)
+ else:
+ errprinted = errchk.triggered
+ # If the output is already on stdout, we've printed the information in the
+ # logs once already so don't duplicate
+ if verboseStdoutLogging or handled:
+ errprinted = True
+ if not handled:
logger.error(repr(exc))
- event.fire(TaskFailed(task, fn, logfn, localdata, errprinted), localdata)
- return 1
- finally:
- sys.stdout.flush()
- sys.stderr.flush()
-
- bblogger.removeHandler(handler)
-
- # Restore the backup fds
- os.dup2(osi[0], osi[1])
- os.dup2(oso[0], oso[1])
- os.dup2(ose[0], ose[1])
-
- # Close the backup fds
- os.close(osi[0])
- os.close(oso[0])
- os.close(ose[0])
+ event.fire(TaskFailed(task, fn, logfn, localdata, errprinted), localdata)
+ return 1
- logfile.close()
- if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
- logger.debug2("Zero size logfn %s, removing", logfn)
- bb.utils.remove(logfn)
- bb.utils.remove(loglink)
event.fire(TaskSucceeded(task, fn, logfn, localdata), localdata)
if not localdata.getVarFlag(task, 'nostamp', False) and not localdata.getVarFlag(task, 'selfstamp', False):
@@ -832,11 +835,7 @@ def stamp_cleanmask_internal(taskname, d, file_name):
return [cleanmask, cleanmask.replace(taskflagname, taskflagname + "_setscene")]
-def make_stamp(task, d, file_name = None):
- """
- Creates/updates a stamp for a given task
- (d can be a data dict or dataCache)
- """
+def clean_stamp(task, d, file_name = None):
cleanmask = stamp_cleanmask_internal(task, d, file_name)
for mask in cleanmask:
for name in glob.glob(mask):
@@ -847,6 +846,14 @@ def make_stamp(task, d, file_name = None):
if name.endswith('.taint'):
continue
os.unlink(name)
+ return
+
+def make_stamp(task, d, file_name = None):
+ """
+ Creates/updates a stamp for a given task
+ (d can be a data dict or dataCache)
+ """
+ clean_stamp(task, d, file_name)
stamp = stamp_internal(task, d, file_name)
# Remove the file and recreate to force timestamp
@@ -935,6 +942,11 @@ def add_tasks(tasklist, d):
task_deps[name] = {}
if name in flags:
deptask = d.expand(flags[name])
+ if name in ['noexec', 'fakeroot', 'nostamp']:
+ if deptask != '1':
+ bb.warn("In a future version of BitBake, setting the '{}' flag to something other than '1' "
+ "will result in the flag not being set. See YP bug #13808.".format(name))
+
task_deps[name][task] = deptask
getTask('mcdepends')
getTask('depends')
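
The warning targets metadata that sets these flags to arbitrary truthy
strings; a hypothetical recipe fragment::

   do_deploy[noexec] = "1"     # supported form
   do_deploy[nostamp] = "yes"  # warns now, flag ignored in the future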
diff --git a/bitbake/lib/bb/cache.py b/bitbake/lib/bb/cache.py
index 5f9c0a7..92e9a3c 100644
--- a/bitbake/lib/bb/cache.py
+++ b/bitbake/lib/bb/cache.py
@@ -54,12 +54,12 @@ class RecipeInfoCommon(object):
@classmethod
def pkgvar(cls, var, packages, metadata):
- return dict((pkg, cls.depvar("%s_%s" % (var, pkg), metadata))
+ return dict((pkg, cls.depvar("%s:%s" % (var, pkg), metadata))
for pkg in packages)
@classmethod
def taskvar(cls, var, tasks, metadata):
- return dict((task, cls.getvar("%s_task-%s" % (var, task), metadata))
+ return dict((task, cls.getvar("%s:task-%s" % (var, task), metadata))
for task in tasks)
@classmethod
@@ -284,36 +284,15 @@ def parse_recipe(bb_data, bbfile, appends, mc=''):
Parse a recipe
"""
- chdir_back = False
-
bb_data.setVar("__BBMULTICONFIG", mc)
- # expand tmpdir to include this topdir
- bb_data.setVar('TMPDIR', bb_data.getVar('TMPDIR') or "")
bbfile_loc = os.path.abspath(os.path.dirname(bbfile))
- oldpath = os.path.abspath(os.getcwd())
bb.parse.cached_mtime_noerror(bbfile_loc)
- # The ConfHandler first looks if there is a TOPDIR and if not
- # then it would call getcwd().
- # Previously, we chdir()ed to bbfile_loc, called the handler
- # and finally chdir()ed back, a couple of thousand times. We now
- # just fill in TOPDIR to point to bbfile_loc if there is no TOPDIR yet.
- if not bb_data.getVar('TOPDIR', False):
- chdir_back = True
- bb_data.setVar('TOPDIR', bbfile_loc)
- try:
- if appends:
- bb_data.setVar('__BBAPPEND', " ".join(appends))
- bb_data = bb.parse.handle(bbfile, bb_data)
- if chdir_back:
- os.chdir(oldpath)
- return bb_data
- except:
- if chdir_back:
- os.chdir(oldpath)
- raise
-
+ if appends:
+ bb_data.setVar('__BBAPPEND', " ".join(appends))
+ bb_data = bb.parse.handle(bbfile, bb_data)
+ return bb_data
class NoCache(object):
@@ -640,7 +619,7 @@ class Cache(NoCache):
for f in flist:
if not f:
continue
- f, exist = f.split(":")
+ f, exist = f.rsplit(":", 1)
if (exist == "True" and not os.path.exists(f)) or (exist == "False" and os.path.exists(f)):
self.logger.debug2("%s's file checksum list file %s changed",
fn, f)
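
Using ``rsplit`` keeps colon-bearing paths intact by splitting only at
the final separator (value invented)::

   f, exist = "/path/with:colon/file.patch:True".rsplit(":", 1)
   # f = '/path/with:colon/file.patch', exist = 'True'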
diff --git a/bitbake/lib/bb/checksum.py b/bitbake/lib/bb/checksum.py
index 1d50a26..557793d 100644
--- a/bitbake/lib/bb/checksum.py
+++ b/bitbake/lib/bb/checksum.py
@@ -11,10 +11,13 @@ import os
import stat
import bb.utils
import logging
+import re
from bb.cache import MultiProcessCache
logger = logging.getLogger("BitBake.Cache")
+filelist_regex = re.compile(r'(?:(?<=:True)|(?<=:False))\s+')
+
# mtime cache (non-persistent)
# based upon the assumption that files do not change during bitbake run
class FileMtimeCache(object):
@@ -50,6 +53,7 @@ class FileChecksumCache(MultiProcessCache):
MultiProcessCache.__init__(self)
def get_checksum(self, f):
+ f = os.path.normpath(f)
entry = self.cachedata[0].get(f)
cmtime = self.mtime_cache.cached_mtime(f)
if entry:
@@ -84,22 +88,36 @@ class FileChecksumCache(MultiProcessCache):
return None
return checksum
+ #
+ # Changing the format of file-checksums is problematic as both OE and Bitbake have
+ # knowledge of them. We need to encode a new piece of data, the portion of the path
+ # we care about from a checksum perspective. This means that files that change subdirectory
+ # are tracked by the task hashes. To do this, we do something horrible and put a "/./" into
+ # the path. The filesystem handles it but it gives us a marker to know which subsection
+ # of the path to cache.
+ #
def checksum_dir(pth):
# Handle directories recursively
if pth == "/":
bb.fatal("Refusing to checksum /")
+ pth = pth.rstrip("/")
dirchecksums = []
for root, dirs, files in os.walk(pth, topdown=True):
[dirs.remove(d) for d in list(dirs) if d in localdirsexclude]
for name in files:
- fullpth = os.path.join(root, name)
+ fullpth = os.path.join(root, name).replace(pth, os.path.join(pth, "."))
checksum = checksum_file(fullpth)
if checksum:
dirchecksums.append((fullpth, checksum))
return dirchecksums
checksums = []
- for pth in filelist.split():
+ for pth in filelist_regex.split(filelist):
+ if not pth:
+ continue
+ pth = pth.strip()
+ if not pth:
+ continue
exist = pth.split(":")[1]
if exist == "False":
continue
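
The new regex splits only on whitespace that directly follows a
``:True``/``:False`` marker, so paths containing spaces survive; a
quick check (paths invented)::

   import re
   filelist_regex = re.compile(r'(?:(?<=:True)|(?<=:False))\s+')

   filelist = "/srv/my files/a.patch:True /srv/missing.patch:False"
   print(filelist_regex.split(filelist))
   # ['/srv/my files/a.patch:True', '/srv/missing.patch:False']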
diff --git a/bitbake/lib/bb/codeparser.py b/bitbake/lib/bb/codeparser.py
index 0cec452..9d66d3a 100644
--- a/bitbake/lib/bb/codeparser.py
+++ b/bitbake/lib/bb/codeparser.py
@@ -1,4 +1,6 @@
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -195,6 +197,10 @@ class BufferedLogger(Logger):
self.target.handle(record)
self.buffer = []
+class DummyLogger:
+ def flush(self):
+ return
+
class PythonParser():
getvars = (".getVar", ".appendVar", ".prependVar", "oe.utils.conditional")
getvarflags = (".getVarFlag", ".appendVarFlag", ".prependVarFlag")
@@ -276,7 +282,9 @@ class PythonParser():
self.contains = {}
self.execs = set()
self.references = set()
- self.log = BufferedLogger('BitBake.Data.PythonParser', logging.DEBUG, log)
+ self._log = log
+ # Defer init as expensive
+ self.log = DummyLogger()
self.unhandled_message = "in call of %s, argument '%s' is not a string literal"
self.unhandled_message = "while parsing %s, %s" % (name, self.unhandled_message)
@@ -303,6 +311,9 @@ class PythonParser():
self.contains[i] = set(codeparsercache.pythoncacheextras[h].contains[i])
return
+ # Need to parse so take the hit on the real log buffer
+ self.log = BufferedLogger('BitBake.Data.PythonParser', logging.DEBUG, self._log)
+
# We can't add to the linenumbers for compile, we can pad to the correct number of blank lines though
node = "\n" * int(lineno) + node
code = compile(check_indent(str(node)), filename, "exec",
@@ -321,7 +332,11 @@ class ShellParser():
self.funcdefs = set()
self.allexecs = set()
self.execs = set()
- self.log = BufferedLogger('BitBake.Data.%s' % name, logging.DEBUG, log)
+ self._name = name
+ self._log = log
+ # Defer init as expensive
+ self.log = DummyLogger()
+
self.unhandled_template = "unable to handle non-literal command '%s'"
self.unhandled_template = "while parsing %s, %s" % (name, self.unhandled_template)
@@ -340,6 +355,9 @@ class ShellParser():
self.execs = set(codeparsercache.shellcacheextras[h].execs)
return self.execs
+ # Need to parse so take the hit on the real log buffer
+ self.log = BufferedLogger('BitBake.Data.%s' % self._name, logging.DEBUG, self._log)
+
self._parse_shell(value)
self.execs = set(cmd for cmd in self.allexecs if cmd not in self.funcdefs)
diff --git a/bitbake/lib/bb/command.py b/bitbake/lib/bb/command.py
index dd77cdd..ec86885 100644
--- a/bitbake/lib/bb/command.py
+++ b/bitbake/lib/bb/command.py
@@ -20,6 +20,7 @@ Commands are queued in a CommandQueue
from collections import OrderedDict, defaultdict
+import io
import bb.event
import bb.cooker
import bb.remotedata
@@ -64,9 +65,17 @@ class Command:
# Ensure cooker is ready for commands
if command != "updateConfig" and command != "setFeatures":
- self.cooker.init_configdata()
- if not self.remotedatastores:
- self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
+ try:
+ self.cooker.init_configdata()
+ if not self.remotedatastores:
+ self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
+ except (Exception, SystemExit) as exc:
+ import traceback
+ if isinstance(exc, bb.BBHandledException):
+ # We need to start returning real exceptions here. Until we do, we can't
+ # tell if an exception is an instance of bb.BBHandledException
+ return None, "bb.BBHandledException()\n" + traceback.format_exc()
+ return None, traceback.format_exc()
if hasattr(CommandsSync, command):
# Can run synchronous commands straight away
@@ -500,6 +509,17 @@ class CommandsSync:
d = command.remotedatastores[dsindex].varhistory
return getattr(d, method)(*args, **kwargs)
+ def dataStoreConnectorVarHistCmdEmit(self, command, params):
+ dsindex = params[0]
+ var = params[1]
+ oval = params[2]
+ val = params[3]
+ d = command.remotedatastores[params[4]]
+
+ o = io.StringIO()
+ command.remotedatastores[dsindex].varhistory.emit(var, oval, val, o, d)
+ return o.getvalue()
+
def dataStoreConnectorIncHistCmd(self, command, params):
dsindex = params[0]
method = params[1]
@@ -647,6 +667,16 @@ class CommandsAsync:
command.finishAsyncCommand()
findFilesMatchingInDir.needcache = False
+ def testCookerCommandEvent(self, command, params):
+ """
+ Dummy command used by OEQA selftest to test tinfoil without IO
+ """
+ pattern = params[0]
+
+ command.cooker.testCookerCommandEvent(pattern)
+ command.finishAsyncCommand()
+ testCookerCommandEvent.needcache = False
+
def findConfigFilePath(self, command, params):
"""
Find the path of the requested configuration file
diff --git a/bitbake/lib/bb/compress/_pipecompress.py b/bitbake/lib/bb/compress/_pipecompress.py
new file mode 100644
index 0000000..4a403d6
--- /dev/null
+++ b/bitbake/lib/bb/compress/_pipecompress.py
@@ -0,0 +1,196 @@
+#
+# Copyright BitBake Contributors
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Helper library to implement streaming compression and decompression using an
+# external process
+#
+# This library should not be used directly by end users; a wrapper library for the
+# specific compression tool should be created
+
+import builtins
+import io
+import os
+import subprocess
+
+
+def open_wrap(
+ cls, filename, mode="rb", *, encoding=None, errors=None, newline=None, **kwargs
+):
+ """
+ Open a compressed file in binary or text mode.
+
+ Users should not call this directly. A specific compression library can use
+ this helper to provide its own "open" command
+
+ The filename argument can be an actual filename (a str or bytes object), or
+ an existing file object to read from or write to.
+
+ The mode argument can be "r", "rb", "w", "wb", "x", "xb", "a" or "ab" for
+ binary mode, or "rt", "wt", "xt" or "at" for text mode. The default mode is
+ "rb".
+
+ For binary mode, this function is equivalent to the cls constructor:
+ cls(filename, mode). In this case, the encoding, errors and newline
+ arguments must not be provided.
+
+ For text mode, a cls object is created, and wrapped in an
+ io.TextIOWrapper instance with the specified encoding, error handling
+ behavior, and line ending(s).
+ """
+ if "t" in mode:
+ if "b" in mode:
+ raise ValueError("Invalid mode: %r" % (mode,))
+ else:
+ if encoding is not None:
+ raise ValueError("Argument 'encoding' not supported in binary mode")
+ if errors is not None:
+ raise ValueError("Argument 'errors' not supported in binary mode")
+ if newline is not None:
+ raise ValueError("Argument 'newline' not supported in binary mode")
+
+ file_mode = mode.replace("t", "")
+ if isinstance(filename, (str, bytes, os.PathLike, int)):
+ binary_file = cls(filename, file_mode, **kwargs)
+ elif hasattr(filename, "read") or hasattr(filename, "write"):
+ binary_file = cls(None, file_mode, fileobj=filename, **kwargs)
+ else:
+ raise TypeError("filename must be a str or bytes object, or a file")
+
+ if "t" in mode:
+ return io.TextIOWrapper(
+ binary_file, encoding, errors, newline, write_through=True
+ )
+ else:
+ return binary_file
+
+
+class CompressionError(OSError):
+ pass
+
+
+class PipeFile(io.RawIOBase):
+ """
+ Class that implements generically piping to/from a compression program
+
+ Derived classes should add the functions get_compress() and get_decompress()
+ that return the required commands. Input will be piped into stdin and the
+ (de)compressed output should be written to stdout, e.g.:
+
+ class FooFile(PipeFile):
+ def get_decompress(self):
+ return ["fooc", "--decompress", "--stdout"]
+
+ def get_compress(self):
+ return ["fooc", "--compress", "--stdout"]
+
+ """
+
+ READ = 0
+ WRITE = 1
+
+ def __init__(self, filename=None, mode="rb", *, stderr=None, fileobj=None):
+ if "t" in mode or "U" in mode:
+ raise ValueError("Invalid mode: {!r}".format(mode))
+
+ if "b" not in mode:
+ mode += "b"
+
+ if mode.startswith("r"):
+ self.mode = self.READ
+ elif mode.startswith("w"):
+ self.mode = self.WRITE
+ else:
+ raise ValueError("Invalid mode %r" % mode)
+
+ if fileobj is not None:
+ self.fileobj = fileobj
+ else:
+ self.fileobj = builtins.open(filename, mode or "rb")
+
+ if self.mode == self.READ:
+ self.p = subprocess.Popen(
+ self.get_decompress(),
+ stdin=self.fileobj,
+ stdout=subprocess.PIPE,
+ stderr=stderr,
+ close_fds=True,
+ )
+ self.pipe = self.p.stdout
+ else:
+ self.p = subprocess.Popen(
+ self.get_compress(),
+ stdin=subprocess.PIPE,
+ stdout=self.fileobj,
+ stderr=stderr,
+ close_fds=True,
+ )
+ self.pipe = self.p.stdin
+
+ self.__closed = False
+
+ def _check_process(self):
+ if self.p is None:
+ return
+
+ returncode = self.p.wait()
+ if returncode:
+ raise CompressionError("Process died with %d" % returncode)
+ self.p = None
+
+ def close(self):
+ if self.closed:
+ return
+
+ self.pipe.close()
+ if self.p is not None:
+ self._check_process()
+ self.fileobj.close()
+
+ self.__closed = True
+
+ @property
+ def closed(self):
+ return self.__closed
+
+ def fileno(self):
+ return self.pipe.fileno()
+
+ def flush(self):
+ self.pipe.flush()
+
+ def isatty(self):
+ return self.pipe.isatty()
+
+ def readable(self):
+ return self.mode == self.READ
+
+ def writable(self):
+ return self.mode == self.WRITE
+
+ def readinto(self, b):
+ if self.mode != self.READ:
+ import errno
+
+ raise OSError(
+ errno.EBADF, "read() on write-only %s object" % self.__class__.__name__
+ )
+ size = self.pipe.readinto(b)
+ if size == 0:
+ self._check_process()
+ return size
+
+ def write(self, data):
+ if self.mode != self.WRITE:
+ import errno
+
+ raise OSError(
+ errno.EBADF, "write() on read-only %s object" % self.__class__.__name__
+ )
+ data = self.pipe.write(data)
+
+ if not data:
+ self._check_process()
+
+ return data
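
A minimal sketch of how these helpers are meant to be consumed, assuming a
hypothetical GzipFile wrapper around a gzip binary on PATH (the class name and
binary are illustrative only; the lz4 and zstd modules below follow the same
shape):

    import bb.compress._pipecompress

    class GzipFile(bb.compress._pipecompress.PipeFile):
        # pipe through an external gzip binary assumed to be on PATH
        def get_compress(self):
            return ["gzip", "-c"]

        def get_decompress(self):
            return ["gzip", "-d", "-c"]

    def open(*args, **kwargs):
        return bb.compress._pipecompress.open_wrap(GzipFile, *args, **kwargs)

    # "wt" wraps the pipe in io.TextIOWrapper; "rb"/"wb" return it directly
    with open("log.gz", "wt", encoding="utf-8") as f:
        f.write("hello\n")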
diff --git a/bitbake/lib/bb/compress/lz4.py b/bitbake/lib/bb/compress/lz4.py
new file mode 100644
index 0000000..88b0989
--- /dev/null
+++ b/bitbake/lib/bb/compress/lz4.py
@@ -0,0 +1,19 @@
+#
+# Copyright BitBake Contributors
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+import bb.compress._pipecompress
+
+
+def open(*args, **kwargs):
+ return bb.compress._pipecompress.open_wrap(LZ4File, *args, **kwargs)
+
+
+class LZ4File(bb.compress._pipecompress.PipeFile):
+ def get_compress(self):
+ return ["lz4c", "-z", "-c"]
+
+ def get_decompress(self):
+ return ["lz4c", "-d", "-c"]
diff --git a/bitbake/lib/bb/compress/zstd.py b/bitbake/lib/bb/compress/zstd.py
new file mode 100644
index 0000000..cdbbe9d
--- /dev/null
+++ b/bitbake/lib/bb/compress/zstd.py
@@ -0,0 +1,30 @@
+#
+# Copyright BitBake Contributors
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+import bb.compress._pipecompress
+import shutil
+
+
+def open(*args, **kwargs):
+ return bb.compress._pipecompress.open_wrap(ZstdFile, *args, **kwargs)
+
+
+class ZstdFile(bb.compress._pipecompress.PipeFile):
+ def __init__(self, *args, num_threads=1, compresslevel=3, **kwargs):
+ self.num_threads = num_threads
+ self.compresslevel = compresslevel
+ super().__init__(*args, **kwargs)
+
+ def _get_zstd(self):
+ if self.num_threads == 1 or not shutil.which("pzstd"):
+ return ["zstd"]
+ return ["pzstd", "-p", "%d" % self.num_threads]
+
+ def get_compress(self):
+ return self._get_zstd() + ["-c", "-%d" % self.compresslevel]
+
+ def get_decompress(self):
+ return self._get_zstd() + ["-d", "-c"]
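
A usage sketch for the module above (the file name and payload are made up;
pzstd is only selected when num_threads > 1 and the binary is found):

    import bb.compress.zstd

    # written through "pzstd -p 4" if available, otherwise plain "zstd"
    with bb.compress.zstd.open("sigdata.zst", "wt", encoding="utf-8",
                               num_threads=4, compresslevel=3) as f:
        f.write("example payload\n")

    with bb.compress.zstd.open("sigdata.zst", "rt", encoding="utf-8") as f:
        print(f.read())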
diff --git a/bitbake/lib/bb/cooker.py b/bitbake/lib/bb/cooker.py
index c946800..2adf4d2 100644
--- a/bitbake/lib/bb/cooker.py
+++ b/bitbake/lib/bb/cooker.py
@@ -13,7 +13,6 @@ import sys, os, glob, os.path, re, time
import itertools
import logging
import multiprocessing
-import sre_constants
import threading
from io import StringIO, UnsupportedOperation
from contextlib import closing
@@ -159,6 +158,9 @@ class BBCooker:
for f in featureSet:
self.featureset.setFeature(f)
+ self.orig_syspath = sys.path.copy()
+ self.orig_sysmodules = [*sys.modules]
+
self.configuration = bb.cookerdata.CookerConfiguration()
self.idleCallBackRegister = idleCallBackRegister
@@ -166,27 +168,15 @@ class BBCooker:
bb.debug(1, "BBCooker starting %s" % time.time())
sys.stdout.flush()
- self.configwatcher = pyinotify.WatchManager()
- bb.debug(1, "BBCooker pyinotify1 %s" % time.time())
- sys.stdout.flush()
+ self.configwatcher = None
+ self.confignotifier = None
- self.configwatcher.bbseen = set()
- self.configwatcher.bbwatchedfiles = set()
- self.confignotifier = pyinotify.Notifier(self.configwatcher, self.config_notifications)
- bb.debug(1, "BBCooker pyinotify2 %s" % time.time())
- sys.stdout.flush()
self.watchmask = pyinotify.IN_CLOSE_WRITE | pyinotify.IN_CREATE | pyinotify.IN_DELETE | \
pyinotify.IN_DELETE_SELF | pyinotify.IN_MODIFY | pyinotify.IN_MOVE_SELF | \
pyinotify.IN_MOVED_FROM | pyinotify.IN_MOVED_TO
- self.watcher = pyinotify.WatchManager()
- bb.debug(1, "BBCooker pyinotify3 %s" % time.time())
- sys.stdout.flush()
- self.watcher.bbseen = set()
- self.watcher.bbwatchedfiles = set()
- self.notifier = pyinotify.Notifier(self.watcher, self.notifications)
- bb.debug(1, "BBCooker pyinotify complete %s" % time.time())
- sys.stdout.flush()
+ self.watcher = None
+ self.notifier = None
# If being called by something like tinfoil, we need to clean cached data
# which may now be invalid
@@ -199,7 +189,7 @@ class BBCooker:
self.inotify_modified_files = []
- def _process_inotify_updates(server, cooker, abort):
+ def _process_inotify_updates(server, cooker, halt):
cooker.process_inotify_updates()
return 1.0
@@ -237,9 +227,29 @@ class BBCooker:
sys.stdout.flush()
self.handlePRServ()
+ def setupConfigWatcher(self):
+ if self.configwatcher:
+ self.configwatcher.close()
+ self.confignotifier = None
+ self.configwatcher = None
+ self.configwatcher = pyinotify.WatchManager()
+ self.configwatcher.bbseen = set()
+ self.configwatcher.bbwatchedfiles = set()
+ self.confignotifier = pyinotify.Notifier(self.configwatcher, self.config_notifications)
+
+ def setupParserWatcher(self):
+ if self.watcher:
+ self.watcher.close()
+ self.notifier = None
+ self.watcher = None
+ self.watcher = pyinotify.WatchManager()
+ self.watcher.bbseen = set()
+ self.watcher.bbwatchedfiles = set()
+ self.notifier = pyinotify.Notifier(self.watcher, self.notifications)
+
def process_inotify_updates(self):
for n in [self.confignotifier, self.notifier]:
- if n.check_events(timeout=0):
+ if n and n.check_events(timeout=0):
# read notified events and enqeue them
n.read_events()
n.process_events()
@@ -253,6 +263,12 @@ class BBCooker:
return
if not event.pathname in self.configwatcher.bbwatchedfiles:
return
+ if "IN_ISDIR" in event.maskname:
+ if "IN_CREATE" in event.maskname or "IN_DELETE" in event.maskname:
+ if event.pathname in self.configwatcher.bbseen:
+ self.configwatcher.bbseen.remove(event.pathname)
+ # Could remove all entries starting with the directory but for now...
+ bb.parse.clear_cache()
if not event.pathname in self.inotify_modified_files:
self.inotify_modified_files.append(event.pathname)
self.baseconfig_valid = False
@@ -266,6 +282,12 @@ class BBCooker:
if event.pathname.endswith("bitbake-cookerdaemon.log") \
or event.pathname.endswith("bitbake.lock"):
return
+ if "IN_ISDIR" in event.maskname:
+ if "IN_CREATE" in event.maskname or "IN_DELETE" in event.maskname:
+ if event.pathname in self.watcher.bbseen:
+ self.watcher.bbseen.remove(event.pathname)
+ # Could remove all entries starting with the directory but for now...
+ bb.parse.clear_cache()
if not event.pathname in self.inotify_modified_files:
self.inotify_modified_files.append(event.pathname)
self.parsecache_valid = False
@@ -330,6 +352,13 @@ class BBCooker:
self.state = state.initial
self.caches_array = []
+ sys.path = self.orig_syspath.copy()
+ for mod in [*sys.modules]:
+ if mod not in self.orig_sysmodules:
+ del sys.modules[mod]
+
+ self.setupConfigWatcher()
+
# Need to preserve BB_CONSOLELOG over resets
consolelog = None
if hasattr(self, "data"):
@@ -382,7 +411,7 @@ class BBCooker:
try:
self.prhost = prserv.serv.auto_start(self.data)
except prserv.serv.PRServiceConfigError as e:
- bb.fatal("Unable to start PR Server, exiting")
+ bb.fatal("Unable to start PR Server, exiting, check the bitbake-cookerdaemon.log")
if self.data.getVar("BB_HASHSERVE") == "auto":
# Create a new hash server bound to a unix domain socket
@@ -405,8 +434,7 @@ class BBCooker:
sync=False,
upstream=upstream,
)
- self.hashserv.process = multiprocessing.Process(target=self.hashserv.serve_forever)
- self.hashserv.process.start()
+ self.hashserv.serve_as_process()
self.data.setVar("BB_HASHSERVE", self.hashservaddr)
self.databuilder.origdata.setVar("BB_HASHSERVE", self.hashservaddr)
self.databuilder.data.setVar("BB_HASHSERVE", self.hashservaddr)
@@ -506,7 +534,7 @@ class BBCooker:
logger.debug("Base environment change, triggering reparse")
self.reset()
- def runCommands(self, server, data, abort):
+ def runCommands(self, server, data, halt):
"""
Run any queued asynchronous command
This is done by the idle handler so it runs in true context rather than
@@ -556,6 +584,8 @@ class BBCooker:
if not orig_tracking:
self.enableDataTracking()
self.reset()
+ # reset() resets to the UI requested value so we have to redo this
+ self.enableDataTracking()
def mc_base(p):
if p.startswith('mc:'):
@@ -579,7 +609,7 @@ class BBCooker:
if pkgs_to_build[0] in set(ignore.split()):
bb.fatal("%s is in ASSUME_PROVIDED" % pkgs_to_build[0])
- taskdata, runlist = self.buildTaskData(pkgs_to_build, None, self.configuration.abort, allowincomplete=True)
+ taskdata, runlist = self.buildTaskData(pkgs_to_build, None, self.configuration.halt, allowincomplete=True)
mc = runlist[0][0]
fn = runlist[0][3]
@@ -608,7 +638,7 @@ class BBCooker:
data.emit_env(env, envdata, True)
logger.plain(env.getvalue())
- # emit the metadata which isnt valid shell
+ # emit the metadata which isn't valid shell
for e in sorted(envdata.keys()):
if envdata.getVarFlag(e, 'func', False) and envdata.getVarFlag(e, 'python', False):
logger.plain("\npython %s () {\n%s}\n", e, envdata.getVar(e, False))
@@ -617,7 +647,7 @@ class BBCooker:
self.disableDataTracking()
self.reset()
- def buildTaskData(self, pkgs_to_build, task, abort, allowincomplete=False):
+ def buildTaskData(self, pkgs_to_build, task, halt, allowincomplete=False):
"""
Prepare a runqueue and taskdata object for iteration over pkgs_to_build
"""
@@ -664,7 +694,7 @@ class BBCooker:
localdata = {}
for mc in self.multiconfigs:
- taskdata[mc] = bb.taskdata.TaskData(abort, skiplist=self.skiplist, allowincomplete=allowincomplete)
+ taskdata[mc] = bb.taskdata.TaskData(halt, skiplist=self.skiplist, allowincomplete=allowincomplete)
localdata[mc] = data.createCopy(self.databuilder.mcdata[mc])
bb.data.expandKeys(localdata[mc])
@@ -713,19 +743,18 @@ class BBCooker:
taskdata[mc].add_unresolved(localdata[mc], self.recipecaches[mc])
mcdeps |= set(taskdata[mc].get_mcdepends())
new = False
- for mc in self.multiconfigs:
- for k in mcdeps:
- if k in seen:
- continue
- l = k.split(':')
- depmc = l[2]
- if depmc not in self.multiconfigs:
- bb.fatal("Multiconfig dependency %s depends on nonexistent multiconfig configuration named configuration %s" % (k,depmc))
- else:
- logger.debug("Adding providers for multiconfig dependency %s" % l[3])
- taskdata[depmc].add_provider(localdata[depmc], self.recipecaches[depmc], l[3])
- seen.add(k)
- new = True
+ for k in mcdeps:
+ if k in seen:
+ continue
+ l = k.split(':')
+ depmc = l[2]
+ if depmc not in self.multiconfigs:
+ bb.fatal("Multiconfig dependency %s depends on nonexistent multiconfig configuration named configuration %s" % (k,depmc))
+ else:
+ logger.debug("Adding providers for multiconfig dependency %s" % l[3])
+ taskdata[depmc].add_provider(localdata[depmc], self.recipecaches[depmc], l[3])
+ seen.add(k)
+ new = True
for mc in self.multiconfigs:
taskdata[mc].add_unresolved(localdata[mc], self.recipecaches[mc])
@@ -738,7 +767,7 @@ class BBCooker:
Prepare a runqueue and taskdata object for iteration over pkgs_to_build
"""
- # We set abort to False here to prevent unbuildable targets raising
+ # We set halt to False here to prevent unbuildable targets raising
# an exception when we're just generating data
taskdata, runlist = self.buildTaskData(pkgs_to_build, task, False, allowincomplete=True)
@@ -1081,6 +1110,11 @@ class BBCooker:
if matches:
bb.event.fire(bb.event.FilesMatchingFound(filepattern, matches), self.data)
+ def testCookerCommandEvent(self, filepattern):
+ # Dummy command used by OEQA selftest to test tinfoil without IO
+ matches = ["A", "B"]
+ bb.event.fire(bb.event.FilesMatchingFound(filepattern, matches), self.data)
+
def findProviders(self, mc=''):
return bb.providers.findProviders(self.databuilder.mcdata[mc], self.recipecaches[mc], self.recipecaches[mc].pkg_pn)
@@ -1421,7 +1455,7 @@ class BBCooker:
# Setup taskdata structure
taskdata = {}
- taskdata[mc] = bb.taskdata.TaskData(self.configuration.abort)
+ taskdata[mc] = bb.taskdata.TaskData(self.configuration.halt)
taskdata[mc].add_provider(self.databuilder.mcdata[mc], self.recipecaches[mc], item)
if quietlog:
@@ -1437,11 +1471,11 @@ class BBCooker:
rq = bb.runqueue.RunQueue(self, self.data, self.recipecaches, taskdata, runlist)
- def buildFileIdle(server, rq, abort):
+ def buildFileIdle(server, rq, halt):
msg = None
interrupted = 0
- if abort or self.state == state.forceshutdown:
+ if halt or self.state == state.forceshutdown:
rq.finish_runqueue(True)
msg = "Forced shutdown"
interrupted = 2
@@ -1483,10 +1517,10 @@ class BBCooker:
Attempt to build the targets specified
"""
- def buildTargetsIdle(server, rq, abort):
+ def buildTargetsIdle(server, rq, halt):
msg = None
interrupted = 0
- if abort or self.state == state.forceshutdown:
+ if halt or self.state == state.forceshutdown:
rq.finish_runqueue(True)
msg = "Forced shutdown"
interrupted = 2
@@ -1529,7 +1563,7 @@ class BBCooker:
bb.event.fire(bb.event.BuildInit(packages), self.data)
- taskdata, runlist = self.buildTaskData(targets, task, self.configuration.abort)
+ taskdata, runlist = self.buildTaskData(targets, task, self.configuration.halt)
buildname = self.data.getVar("BUILDNAME", False)
@@ -1597,7 +1631,7 @@ class BBCooker:
if self.state in (state.shutdown, state.forceshutdown, state.error):
if hasattr(self.parser, 'shutdown'):
- self.parser.shutdown(clean=False, force = True)
+ self.parser.shutdown(clean=False)
self.parser.final_cleanup()
raise bb.BBHandledException()
@@ -1605,6 +1639,8 @@ class BBCooker:
self.updateCacheSync()
if self.state != state.parsing and not self.parsecache_valid:
+ self.setupParserWatcher()
+
bb.parse.siggen.reset(self.data)
self.parseConfiguration ()
if CookerFeatures.SEND_SANITYEVENTS in self.featureset:
@@ -1664,7 +1700,7 @@ class BBCooker:
# Return a copy, don't modify the original
pkgs_to_build = pkgs_to_build[:]
- if len(pkgs_to_build) == 0:
+ if not pkgs_to_build:
raise NothingToBuild
ignore = (self.data.getVar("ASSUME_PROVIDED") or "").split()
@@ -1711,6 +1747,8 @@ class BBCooker:
def post_serve(self):
self.shutdown(force=True)
prserv.serv.auto_shutdown()
+ if hasattr(bb.parse, "siggen"):
+ bb.parse.siggen.exit()
if self.hashserv:
self.hashserv.process.terminate()
self.hashserv.process.join()
@@ -1724,13 +1762,15 @@ class BBCooker:
self.state = state.shutdown
if self.parser:
- self.parser.shutdown(clean=not force, force=force)
+ self.parser.shutdown(clean=not force)
self.parser.final_cleanup()
def finishcommand(self):
self.state = state.initial
def reset(self):
+ if hasattr(bb.parse, "siggen"):
+ bb.parse.siggen.exit()
self.initConfigurationData()
self.handlePRServ()
@@ -1759,7 +1799,7 @@ class CookerCollectFiles(object):
def __init__(self, priorities, mc=''):
self.mc = mc
self.bbappends = []
- # Priorities is a list of tupples, with the second element as the pattern.
+ # Priorities is a list of tuples, with the second element as the pattern.
# We need to sort the list with the longest pattern first, and so on to
# the shortest. This allows nested layers to be properly evaluated.
self.bbfile_config_priorities = sorted(priorities, key=lambda tup: tup[1], reverse=True)
@@ -1803,10 +1843,10 @@ class CookerCollectFiles(object):
files.sort( key=lambda fileitem: self.calc_bbfile_priority(fileitem)[0] )
config.setVar("BBFILES_PRIORITIZED", " ".join(files))
- if not len(files):
+ if not files:
files = self.get_bbfiles()
- if not len(files):
+ if not files:
collectlog.error("no recipe files to build, check your BBPATH and BBFILES?")
bb.event.fire(CookerExit(), eventdata)
@@ -1866,7 +1906,7 @@ class CookerCollectFiles(object):
try:
re.compile(mask)
bbmasks.append(mask)
- except sre_constants.error:
+ except re.error:
collectlog.critical("BBMASK contains an invalid regular expression, ignoring: %s" % mask)
# Then validate the combined regular expressions. This should never
@@ -1874,7 +1914,7 @@ class CookerCollectFiles(object):
bbmask = "|".join(bbmasks)
try:
bbmask_compiled = re.compile(bbmask)
- except sre_constants.error:
+ except re.error:
collectlog.critical("BBMASK is not a valid regular expression, ignoring: %s" % bbmask)
bbmask = None
@@ -1992,15 +2032,30 @@ class ParsingFailure(Exception):
Exception.__init__(self, realexception, recipe)
class Parser(multiprocessing.Process):
- def __init__(self, jobs, results, quit, init, profile):
+ def __init__(self, jobs, results, quit, profile):
self.jobs = jobs
self.results = results
self.quit = quit
- self.init = init
multiprocessing.Process.__init__(self)
self.context = bb.utils.get_context().copy()
self.handlers = bb.event.get_class_handlers().copy()
self.profile = profile
+ self.queue_signals = False
+ self.signal_received = []
+ self.signal_threadlock = threading.Lock()
+
+ def catch_sig(self, signum, frame):
+ if self.queue_signals:
+ self.signal_received.append(signum)
+ else:
+ self.handle_sig(signum, frame)
+
+ def handle_sig(self, signum, frame):
+ if signum == signal.SIGTERM:
+ signal.signal(signal.SIGTERM, signal.SIG_DFL)
+ os.kill(os.getpid(), signal.SIGTERM)
+ elif signum == signal.SIGINT:
+ signal.default_int_handler(signum, frame)
def run(self):
@@ -2020,36 +2075,48 @@ class Parser(multiprocessing.Process):
prof.dump_stats(logfile)
def realrun(self):
- if self.init:
- self.init()
+ # Signal handling here is hard. We must not terminate any process or thread holding the write
+ # lock for the event stream as it will not be released, ever, and things will hang.
+ # Python handles signals in the main thread/process but they can be raised from any thread and
+ # we want to defer processing of any SIGTERM/SIGINT signal until we're outside the critical section
+ # and don't hold the lock (see server/process.py). We therefore always catch the signals (so any
+ # new thread should also do so) and defer their handling, but we perform the deferred handling
+ # with the local thread lock held (a threading lock, not a multiprocessing one) so that no other
+ # thread in the process can be in the critical section.
+ signal.signal(signal.SIGTERM, self.catch_sig)
+ signal.signal(signal.SIGHUP, signal.SIG_DFL)
+ signal.signal(signal.SIGINT, self.catch_sig)
+ bb.utils.set_process_name(multiprocessing.current_process().name)
+ multiprocessing.util.Finalize(None, bb.codeparser.parser_cache_save, exitpriority=1)
+ multiprocessing.util.Finalize(None, bb.fetch.fetcher_parse_save, exitpriority=1)
pending = []
- while True:
- try:
- self.quit.get_nowait()
- except queue.Empty:
- pass
- else:
- self.results.close()
- self.results.join_thread()
- break
-
- if pending:
- result = pending.pop()
- else:
+ try:
+ while True:
try:
- job = self.jobs.pop()
- except IndexError:
- self.results.close()
- self.results.join_thread()
+ self.quit.get_nowait()
+ except queue.Empty:
+ pass
+ else:
break
- result = self.parse(*job)
- # Clear the siggen cache after parsing to control memory usage, its huge
- bb.parse.siggen.postparsing_clean_cache()
- try:
- self.results.put(result, timeout=0.25)
- except queue.Full:
- pending.append(result)
+
+ if pending:
+ result = pending.pop()
+ else:
+ try:
+ job = self.jobs.pop()
+ except IndexError:
+ break
+ result = self.parse(*job)
+ # Clear the siggen cache after parsing to control memory usage, it's huge
+ bb.parse.siggen.postparsing_clean_cache()
+ try:
+ self.results.put(result, timeout=0.25)
+ except queue.Full:
+ pending.append(result)
+ finally:
+ self.results.close()
+ self.results.join_thread()
def parse(self, mc, cache, filename, appends):
try:
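
The deferral pattern described in the comment above, reduced to a standalone
sketch (all names here are illustrative, not part of the patch):

    import os
    import signal
    import threading

    queued = []
    queue_signals = False
    signal_lock = threading.Lock()

    def handle_sig(signum, frame):
        if signum == signal.SIGTERM:
            signal.signal(signal.SIGTERM, signal.SIG_DFL)
            os.kill(os.getpid(), signal.SIGTERM)
        elif signum == signal.SIGINT:
            signal.default_int_handler(signum, frame)

    def catch_sig(signum, frame):
        # inside the critical section: remember the signal, replay it later
        if queue_signals:
            queued.append(signum)
        else:
            handle_sig(signum, frame)

    signal.signal(signal.SIGTERM, catch_sig)
    signal.signal(signal.SIGINT, catch_sig)

    def do_critical_work():
        pass  # stand-in for writes to the shared event stream

    with signal_lock:
        queue_signals = True
        do_critical_work()
        queue_signals = False
        while queued:  # replayed with the thread lock still held
            handle_sig(queued.pop(0), None)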
@@ -2070,12 +2137,12 @@ class Parser(multiprocessing.Process):
tb = sys.exc_info()[2]
exc.recipe = filename
exc.traceback = list(bb.exceptions.extract_traceback(tb, context=3))
- return True, exc
+ return True, None, exc
# Need to turn BaseExceptions into Exceptions here so we gracefully shutdown
# and for example a worker thread doesn't just exit on its own in response to
# a SystemExit event for example.
except BaseException as exc:
- return True, ParsingFailure(exc, filename)
+ return True, None, ParsingFailure(exc, filename)
finally:
bb.event.LogHandler.filter = origfilter
@@ -2126,13 +2193,6 @@ class CookerParser(object):
self.processes = []
if self.toparse:
bb.event.fire(bb.event.ParseStarted(self.toparse), self.cfgdata)
- def init():
- signal.signal(signal.SIGTERM, signal.SIG_DFL)
- signal.signal(signal.SIGHUP, signal.SIG_DFL)
- signal.signal(signal.SIGINT, signal.SIG_IGN)
- bb.utils.set_process_name(multiprocessing.current_process().name)
- multiprocessing.util.Finalize(None, bb.codeparser.parser_cache_save, exitpriority=1)
- multiprocessing.util.Finalize(None, bb.fetch.fetcher_parse_save, exitpriority=1)
self.parser_quit = multiprocessing.Queue(maxsize=self.num_processes)
self.result_queue = multiprocessing.Queue()
@@ -2142,14 +2202,14 @@ class CookerParser(object):
self.jobs = chunkify(list(self.willparse), self.num_processes)
for i in range(0, self.num_processes):
- parser = Parser(self.jobs[i], self.result_queue, self.parser_quit, init, self.cooker.configuration.profile)
+ parser = Parser(self.jobs[i], self.result_queue, self.parser_quit, self.cooker.configuration.profile)
parser.start()
self.process_names.append(parser.name)
self.processes.append(parser)
self.results = itertools.chain(self.results, self.parse_generator())
- def shutdown(self, clean=True, force=False):
+ def shutdown(self, clean=True):
if not self.toparse:
return
if self.haveshutdown:
@@ -2163,6 +2223,8 @@ class CookerParser(object):
self.total)
bb.event.fire(event, self.cfgdata)
+ else:
+ bb.error("Parsing halted due to errors, see error messages above")
for process in self.processes:
self.parser_quit.put(None)
@@ -2176,11 +2238,24 @@ class CookerParser(object):
break
for process in self.processes:
- if force:
- process.join(.1)
+ process.join(0.5)
+
+ for process in self.processes:
+ if process.exitcode is None:
+ os.kill(process.pid, signal.SIGINT)
+
+ for process in self.processes:
+ process.join(0.5)
+
+ for process in self.processes:
+ if process.exitcode is None:
process.terminate()
- else:
- process.join()
+
+ for process in self.processes:
+ process.join()
+ # Process.close() was added in Python 3.7 and cleans up zombies
+ if hasattr(process, "close"):
+ process.close()
self.parser_quit.close()
# Allow data left in the cancel queue to be discarded
@@ -2230,14 +2305,10 @@ class CookerParser(object):
result = self.result_queue.get(timeout=0.25)
except queue.Empty:
empty = True
- pass
+ yield None, None, None
else:
empty = False
- value = result[1]
- if isinstance(value, BaseException):
- raise value
- else:
- yield result
+ yield result
if not (self.parsed >= self.toparse):
raise bb.parse.ParseError("Not all recipes parsed, parser thread killed/died? Exiting.", None)
@@ -2248,24 +2319,31 @@ class CookerParser(object):
parsed = None
try:
parsed, mc, result = next(self.results)
+ if isinstance(result, BaseException):
+ # Turn exceptions back into exceptions
+ raise result
+ if parsed is None:
+ # Timeout, loop back through the main loop
+ return True
+
except StopIteration:
self.shutdown()
return False
except bb.BBHandledException as exc:
self.error += 1
- logger.error('Failed to parse recipe: %s' % exc.recipe)
- self.shutdown(clean=False, force=True)
+ logger.debug('Failed to parse recipe: %s' % exc.recipe)
+ self.shutdown(clean=False)
return False
except ParsingFailure as exc:
self.error += 1
logger.error('Unable to parse %s: %s' %
(exc.recipe, bb.exceptions.to_string(exc.realexception)))
- self.shutdown(clean=False, force=True)
+ self.shutdown(clean=False)
return False
except bb.parse.ParseError as exc:
self.error += 1
logger.error(str(exc))
- self.shutdown(clean=False, force=True)
+ self.shutdown(clean=False)
return False
except bb.data_smart.ExpansionError as exc:
self.error += 1
@@ -2274,7 +2352,7 @@ class CookerParser(object):
tb = list(itertools.dropwhile(lambda e: e.filename.startswith(bbdir), exc.traceback))
logger.error('ExpansionError during parsing %s', value.recipe,
exc_info=(etype, value, tb))
- self.shutdown(clean=False, force=True)
+ self.shutdown(clean=False)
return False
except Exception as exc:
self.error += 1
@@ -2286,7 +2364,7 @@ class CookerParser(object):
# Most likely, an exception occurred during raising an exception
import traceback
logger.error('Exception during parse: %s' % traceback.format_exc())
- self.shutdown(clean=False, force=True)
+ self.shutdown(clean=False)
return False
self.current += 1
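
The new shutdown sequence above escalates in stages instead of terminating
workers immediately. As a standalone sketch (the sleeping workers are a
stand-in for parser processes):

    import os
    import signal
    import time
    import multiprocessing

    def stop_processes(processes):
        for p in processes:      # 1. give each worker a chance to exit
            p.join(0.5)
        for p in processes:      # 2. interrupt any stragglers
            if p.exitcode is None:
                os.kill(p.pid, signal.SIGINT)
        for p in processes:
            p.join(0.5)
        for p in processes:      # 3. terminate whatever is still alive
            if p.exitcode is None:
                p.terminate()
        for p in processes:      # 4. reap, and release resources (3.7+)
            p.join()
            if hasattr(p, "close"):
                p.close()

    if __name__ == "__main__":
        procs = [multiprocessing.Process(target=time.sleep, args=(10,))
                 for _ in range(4)]
        for p in procs:
            p.start()
        stop_processes(procs)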
diff --git a/bitbake/lib/bb/cookerdata.py b/bitbake/lib/bb/cookerdata.py
index ba657c0..fe5696c 100644
--- a/bitbake/lib/bb/cookerdata.py
+++ b/bitbake/lib/bb/cookerdata.py
@@ -57,7 +57,7 @@ class ConfigParameters(object):
def updateToServer(self, server, environment):
options = {}
- for o in ["abort", "force", "invalidate_stamp",
+ for o in ["halt", "force", "invalidate_stamp",
"dry_run", "dump_signatures",
"extra_assume_provided", "profile",
"prefile", "postfile", "server_timeout",
@@ -86,7 +86,7 @@ class ConfigParameters(object):
action['msg'] = "Only one target can be used with the --environment option."
elif self.options.buildfile and len(self.options.pkgs_to_build) > 0:
action['msg'] = "No target should be used with the --environment and --buildfile options."
- elif len(self.options.pkgs_to_build) > 0:
+ elif self.options.pkgs_to_build:
action['action'] = ["showEnvironmentTarget", self.options.pkgs_to_build]
else:
action['action'] = ["showEnvironment", self.options.buildfile]
@@ -124,7 +124,7 @@ class CookerConfiguration(object):
self.prefile = []
self.postfile = []
self.cmd = None
- self.abort = True
+ self.halt = True
self.force = False
self.profile = False
self.nosetscene = False
@@ -210,7 +210,7 @@ def findConfigFile(configfile, data):
#
# We search for a conf/bblayers.conf under an entry in BBPATH or in cwd working
-# up to /. If that fails, we search for a conf/bitbake.conf in BBPATH.
+# up to /. If that fails, bitbake falls back to the current working directory.
#
def findTopdir():
@@ -223,11 +223,8 @@ def findTopdir():
layerconf = findConfigFile("bblayers.conf", d)
if layerconf:
return os.path.dirname(os.path.dirname(layerconf))
- if bbpath:
- bitbakeconf = bb.utils.which(bbpath, "conf/bitbake.conf")
- if bitbakeconf:
- return os.path.dirname(os.path.dirname(bitbakeconf))
- return None
+
+ return os.path.abspath(os.getcwd())
class CookerDataBuilder(object):
@@ -250,6 +247,9 @@ class CookerDataBuilder(object):
self.savedenv = bb.data.init()
for k in cookercfg.env:
self.savedenv.setVar(k, cookercfg.env[k])
+ if k in bb.data_smart.bitbake_renamed_vars:
+ bb.error('Variable %s from the shell environment has been renamed to %s' % (k, bb.data_smart.bitbake_renamed_vars[k]))
+ bb.fatal("Exiting to allow enviroment variables to be corrected")
filtered_keys = bb.utils.approved_variables()
bb.data.inheritFromOS(self.basedata, self.savedenv, filtered_keys)
@@ -261,12 +261,12 @@ class CookerDataBuilder(object):
self.data = self.basedata
self.mcdata = {}
- def parseBaseConfiguration(self):
+ def parseBaseConfiguration(self, worker=False):
data_hash = hashlib.sha256()
try:
self.data = self.parseConfigurationFiles(self.prefiles, self.postfiles)
- if self.data.getVar("BB_WORKERCONTEXT", False) is None:
+ if self.data.getVar("BB_WORKERCONTEXT", False) is None and not worker:
bb.fetch.fetcher_init(self.data)
bb.parse.init_parser(self.data)
bb.codeparser.parser_cache_init(self.data)
@@ -310,6 +310,26 @@ class CookerDataBuilder(object):
logger.exception("Error parsing configuration files")
raise bb.BBHandledException()
+
+ # Handle obsolete variable names
+ d = self.data
+ renamedvars = d.getVarFlags('BB_RENAMED_VARIABLES') or {}
+ renamedvars.update(bb.data_smart.bitbake_renamed_vars)
+ issues = False
+ for v in renamedvars:
+ if d.getVar(v) != None or d.hasOverrides(v):
+ issues = True
+ loginfo = {}
+ history = d.varhistory.get_variable_refs(v)
+ for h in history:
+ for line in history[h]:
+ loginfo = {'file' : h, 'line' : line}
+ bb.data.data_smart._print_rename_error(v, loginfo, renamedvars)
+ if not history:
+ bb.data.data_smart._print_rename_error(v, loginfo, renamedvars)
+ if issues:
+ raise bb.BBHandledException()
+
# Create a copy so we can reset at a later date when UIs disconnect
self.origdata = self.data
self.data = bb.data.createCopy(self.origdata)
@@ -417,6 +437,9 @@ class CookerDataBuilder(object):
" invoked bitbake from the wrong directory?")
raise SystemExit(msg)
+ if not data.getVar("TOPDIR"):
+ data.setVar("TOPDIR", os.path.abspath(os.getcwd()))
+
data = parse_config_file(os.path.join("conf", "bitbake.conf"), data)
# Parse files for loading *after* bitbake.conf and any includes
@@ -428,7 +451,7 @@ class CookerDataBuilder(object):
for bbclass in bbclasses:
data = _inherit(bbclass, data)
- # Nomally we only register event handlers at the end of parsing .bb files
+ # Normally we only register event handlers at the end of parsing .bb files
# We register any handlers we've found so far here...
for var in data.getVar('__BBHANDLERS', False) or []:
handlerfn = data.getVarFlag(var, "filename", False)
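
The rename table consulted above can be extended by metadata through
BB_RENAMED_VARIABLES varflags. At the datastore level that looks roughly like
the sketch below (variable names invented for illustration):

    import bb.data_smart

    d = bb.data_smart.DataSmart()
    # a layer registers a rename; a value containing a space is shown verbatim
    d.setVarFlag("BB_RENAMED_VARIABLES", "MYVAR", "MYVAR_NEW")
    # touching the old name now logs an error and flags the parse as failed
    d.setVar("MYVAR", "1")
    print(d.getVar("_FAILPARSINGERRORHANDLED"))  # True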
diff --git a/bitbake/lib/bb/daemonize.py b/bitbake/lib/bb/daemonize.py
index c187fcf..7689404 100644
--- a/bitbake/lib/bb/daemonize.py
+++ b/bitbake/lib/bb/daemonize.py
@@ -1,4 +1,6 @@
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -74,26 +76,26 @@ def createDaemon(function, logfile):
with open('/dev/null', 'r') as si:
os.dup2(si.fileno(), sys.stdin.fileno())
- try:
- so = open(logfile, 'a+')
- os.dup2(so.fileno(), sys.stdout.fileno())
- os.dup2(so.fileno(), sys.stderr.fileno())
- except io.UnsupportedOperation:
- sys.stdout = open(logfile, 'a+')
+ with open(logfile, 'a+') as so:
+ try:
+ os.dup2(so.fileno(), sys.stdout.fileno())
+ os.dup2(so.fileno(), sys.stderr.fileno())
+ except io.UnsupportedOperation:
+ sys.stdout = so
- # Have stdout and stderr be the same so log output matches chronologically
- # and there aren't two seperate buffers
- sys.stderr = sys.stdout
+ # Have stdout and stderr be the same so log output matches chronologically
+ # and there aren't two separate buffers
+ sys.stderr = sys.stdout
- try:
- function()
- except Exception as e:
- traceback.print_exc()
- finally:
- bb.event.print_ui_queue()
- # os._exit() doesn't flush open files like os.exit() does. Manually flush
- # stdout and stderr so that any logging output will be seen, particularly
- # exception tracebacks.
- sys.stdout.flush()
- sys.stderr.flush()
- os._exit(0)
+ try:
+ function()
+ except Exception as e:
+ traceback.print_exc()
+ finally:
+ bb.event.print_ui_queue()
+ # os._exit() doesn't flush open files like sys.exit() does. Manually flush
+ # stdout and stderr so that any logging output will be seen, particularly
+ # exception tracebacks.
+ sys.stdout.flush()
+ sys.stderr.flush()
+ os._exit(0)
diff --git a/bitbake/lib/bb/data.py b/bitbake/lib/bb/data.py
index 9702285..c09d9b0 100644
--- a/bitbake/lib/bb/data.py
+++ b/bitbake/lib/bb/data.py
@@ -226,7 +226,7 @@ def emit_func(func, o=sys.__stdout__, d = init()):
deps = newdeps
seen |= deps
newdeps = set()
- for dep in deps:
+ for dep in sorted(deps):
if d.getVarFlag(dep, "func", False) and not d.getVarFlag(dep, "python", False):
emit_var(dep, o, d, False) and o.write('\n')
newdeps |= bb.codeparser.ShellParser(dep, logger).parse_shell(d.getVar(dep))
@@ -272,34 +272,37 @@ def update_data(d):
"""Performs final steps upon the datastore, including application of overrides"""
d.finalize(parent = True)
-def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
+def build_dependencies(key, keys, shelldeps, varflagsexcl, ignored_vars, d):
deps = set()
try:
if key[-1] == ']':
vf = key[:-1].split('[')
+ if vf[1] == "vardepvalueexclude":
+ return deps, ""
value, parser = d.getVarFlag(vf[0], vf[1], False, retparser=True)
deps |= parser.references
deps = deps | (keys & parser.execs)
return deps, value
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
vardeps = varflags.get("vardeps")
+ exclusions = varflags.get("vardepsexclude", "").split()
- def handle_contains(value, contains, d):
- newvalue = ""
+ def handle_contains(value, contains, exclusions, d):
+ newvalue = []
+ if value:
+ newvalue.append(str(value))
for k in sorted(contains):
+ if k in exclusions or k in ignored_vars:
+ continue
l = (d.getVar(k) or "").split()
for item in sorted(contains[k]):
for word in item.split():
if not word in l:
- newvalue += "\n%s{%s} = Unset" % (k, item)
+ newvalue.append("\n%s{%s} = Unset" % (k, item))
break
else:
- newvalue += "\n%s{%s} = Set" % (k, item)
- if not newvalue:
- return value
- if not value:
- return newvalue
- return value + newvalue
+ newvalue.append("\n%s{%s} = Set" % (k, item))
+ return "".join(newvalue)
def handle_remove(value, deps, removes, d):
for r in sorted(removes):
@@ -318,7 +321,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
parser.parse_python(value, filename=varflags.get("filename"), lineno=varflags.get("lineno"))
deps = deps | parser.references
deps = deps | (keys & parser.execs)
- value = handle_contains(value, parser.contains, d)
+ value = handle_contains(value, parser.contains, exclusions, d)
else:
value, parsedvar = d.getVarFlag(key, "_content", False, retparser=True)
parser = bb.codeparser.ShellParser(key, logger)
@@ -326,7 +329,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
deps = deps | shelldeps
deps = deps | parsedvar.references
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
- value = handle_contains(value, parsedvar.contains, d)
+ value = handle_contains(value, parsedvar.contains, exclusions, d)
if hasattr(parsedvar, "removes"):
value = handle_remove(value, deps, parsedvar.removes, d)
if vardeps is None:
@@ -341,7 +344,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
value, parser = d.getVarFlag(key, "_content", False, retparser=True)
deps |= parser.references
deps = deps | (keys & parser.execs)
- value = handle_contains(value, parser.contains, d)
+ value = handle_contains(value, parser.contains, exclusions, d)
if hasattr(parser, "removes"):
value = handle_remove(value, deps, parser.removes, d)
@@ -361,7 +364,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
deps |= set(varfdeps)
deps |= set((vardeps or "").split())
- deps -= set(varflags.get("vardepsexclude", "").split())
+ deps -= set(exclusions)
except bb.parse.SkipRecipe:
raise
except Exception as e:
@@ -371,7 +374,7 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
#bb.note("Variable %s references %s and calls %s" % (key, str(deps), str(execs)))
#d.setVarFlag(key, "vardeps", deps)
-def generate_dependencies(d, whitelist):
+def generate_dependencies(d, ignored_vars):
keys = set(key for key in d if not key.startswith("__"))
shelldeps = set(key for key in d.getVar("__exportlist", False) if d.getVarFlag(key, "export", False) and not d.getVarFlag(key, "unexport", False))
@@ -382,22 +385,22 @@ def generate_dependencies(d, whitelist):
tasklist = d.getVar('__BBTASKS', False) or []
for task in tasklist:
- deps[task], values[task] = build_dependencies(task, keys, shelldeps, varflagsexcl, d)
+ deps[task], values[task] = build_dependencies(task, keys, shelldeps, varflagsexcl, ignored_vars, d)
newdeps = deps[task]
seen = set()
while newdeps:
- nextdeps = newdeps - whitelist
+ nextdeps = newdeps - ignored_vars
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
if dep not in deps:
- deps[dep], values[dep] = build_dependencies(dep, keys, shelldeps, varflagsexcl, d)
+ deps[dep], values[dep] = build_dependencies(dep, keys, shelldeps, varflagsexcl, ignored_vars, d)
newdeps |= deps[dep]
newdeps -= seen
#print "For %s: %s" % (task, str(deps[task]))
return tasklist, deps, values
-def generate_dependency_hash(tasklist, gendeps, lookupcache, whitelist, fn):
+def generate_dependency_hash(tasklist, gendeps, lookupcache, ignored_vars, fn):
taskdeps = {}
basehash = {}
@@ -406,9 +409,11 @@ def generate_dependency_hash(tasklist, gendeps, lookupcache, whitelist, fn):
if data is None:
bb.error("Task %s from %s seems to be empty?!" % (task, fn))
- data = ''
+ data = []
+ else:
+ data = [data]
- gendeps[task] -= whitelist
+ gendeps[task] -= ignored_vars
newdeps = gendeps[task]
seen = set()
while newdeps:
@@ -416,20 +421,20 @@ def generate_dependency_hash(tasklist, gendeps, lookupcache, whitelist, fn):
seen |= nextdeps
newdeps = set()
for dep in nextdeps:
- if dep in whitelist:
+ if dep in ignored_vars:
continue
- gendeps[dep] -= whitelist
+ gendeps[dep] -= ignored_vars
newdeps |= gendeps[dep]
newdeps -= seen
alldeps = sorted(seen)
for dep in alldeps:
- data = data + dep
+ data.append(dep)
var = lookupcache[dep]
if var is not None:
- data = data + str(var)
+ data.append(str(var))
k = fn + ":" + task
- basehash[k] = hashlib.sha256(data.encode("utf-8")).hexdigest()
+ basehash[k] = hashlib.sha256("".join(data).encode("utf-8")).hexdigest()
taskdeps[task] = alldeps
return taskdeps, basehash
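
The basehash construction above, in isolation (the task body, dependency names
and values here are invented):

    import hashlib

    lookupcache = {"CC": "gcc", "CFLAGS": "-O2"}   # dep -> expanded value
    taskbody = "do_compile() { ${CC} ${CFLAGS} main.c; }"

    data = [taskbody]
    for dep in sorted(lookupcache):                # alldeps is sorted
        data.append(dep)
        if lookupcache[dep] is not None:
            data.append(str(lookupcache[dep]))

    basehash = hashlib.sha256("".join(data).encode("utf-8")).hexdigest()
    print(basehash)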
diff --git a/bitbake/lib/bb/data_smart.py b/bitbake/lib/bb/data_smart.py
index 65857a9..dd20ca5 100644
--- a/bitbake/lib/bb/data_smart.py
+++ b/bitbake/lib/bb/data_smart.py
@@ -26,13 +26,25 @@ from bb.COW import COWDictBase
logger = logging.getLogger("BitBake.Data")
-__setvar_keyword__ = ["_append", "_prepend", "_remove"]
-__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>[^A-Z]*))?$')
+__setvar_keyword__ = [":append", ":prepend", ":remove"]
+__setvar_regexp__ = re.compile(r'(?P<base>.*?)(?P<keyword>:append|:prepend|:remove)(:(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[a-zA-Z0-9\-_+./~:]+?}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
__whitespace_split__ = re.compile(r'(\s)')
__override_regexp__ = re.compile(r'[a-z0-9]+')
+bitbake_renamed_vars = {
+ "BB_ENV_WHITELIST": "BB_ENV_PASSTHROUGH",
+ "BB_ENV_EXTRAWHITE": "BB_ENV_PASSTHROUGH_ADDITIONS",
+ "BB_HASHBASE_WHITELIST": "BB_BASEHASH_IGNORE_VARS",
+ "BB_HASHCONFIG_WHITELIST": "BB_HASHCONFIG_IGNORE_VARS",
+ "BB_HASHTASK_WHITELIST": "BB_TASKHASH_IGNORE_TASKS",
+ "BB_SETSCENE_ENFORCE_WHITELIST": "BB_SETSCENE_ENFORCE_IGNORE_TASKS",
+ "MULTI_PROVIDER_WHITELIST": "BB_MULTI_PROVIDER_ALLOWED",
+ "BB_STAMP_WHITELIST": "is a deprecated variable and support has been removed",
+ "BB_STAMP_POLICY": "is a deprecated variable and support has been removed",
+}
+
def infer_caller_details(loginfo, parent = False, varval = True):
"""Save the caller the trouble of specifying everything."""
# Save effort.
@@ -140,6 +152,9 @@ class DataContext(dict):
self['d'] = metadata
def __missing__(self, key):
+ # Skip commonly accessed invalid variables
+ if key in ['bb', 'oe', 'int', 'bool', 'time', 'str', 'os']:
+ raise KeyError(key)
value = self.metadata.getVar(key)
if value is None or self.metadata.getVarFlag(key, 'func', False):
raise KeyError(key)
@@ -151,6 +166,7 @@ class ExpansionError(Exception):
self.expression = expression
self.variablename = varname
self.exception = exception
+ self.varlist = [varname or expression or ""]
if varname:
if expression:
self.msg = "Failure expanding variable %s, expression was %s which triggered exception %s: %s" % (varname, expression, type(exception).__name__, exception)
@@ -160,8 +176,14 @@ class ExpansionError(Exception):
self.msg = "Failure expanding expression %s which triggered exception %s: %s" % (expression, type(exception).__name__, exception)
Exception.__init__(self, self.msg)
self.args = (varname, expression, exception)
+
+ def addVar(self, varname):
+ if varname:
+ self.varlist.append(varname)
+
def __str__(self):
- return self.msg
+ chain = "\nThe variable dependency chain for the failure is: " + " -> ".join(self.varlist)
+ return self.msg + chain
class IncludeHistory(object):
def __init__(self, parent = None, filename = '[TOP LEVEL]'):
@@ -277,7 +299,7 @@ class VariableHistory(object):
for (r, override) in d.overridedata[var]:
for event in self.variable(r):
loginfo = event.copy()
- if 'flag' in loginfo and not loginfo['flag'].startswith("_"):
+ if 'flag' in loginfo and not loginfo['flag'].startswith(("_", ":")):
continue
loginfo['variable'] = var
loginfo['op'] = 'override[%s]:%s' % (override, loginfo['op'])
@@ -329,6 +351,16 @@ class VariableHistory(object):
lines.append(line)
return lines
+ def get_variable_refs(self, var):
+ """Return a dict of file/line references"""
+ var_history = self.variable(var)
+ refs = {}
+ for event in var_history:
+ if event['file'] not in refs:
+ refs[event['file']] = []
+ refs[event['file']].append(event['line'])
+ return refs
+
def get_variable_items_files(self, var):
"""
Use variable history to map items added to a list variable and
@@ -342,7 +374,7 @@ class VariableHistory(object):
for event in history:
if 'flag' in event:
continue
- if event['op'] == '_remove':
+ if event['op'] == ':remove':
continue
if isset and event['op'] == 'set?':
continue
@@ -363,6 +395,23 @@ class VariableHistory(object):
else:
self.variables[var] = []
+def _print_rename_error(var, loginfo, renamedvars, fullvar=None):
+ info = ""
+ if "file" in loginfo:
+ info = " file: %s" % loginfo["file"]
+ if "line" in loginfo:
+ info += " line: %s" % loginfo["line"]
+ if fullvar and fullvar != var:
+ info += " referenced as: %s" % fullvar
+ if info:
+ info = " (%s)" % info.strip()
+ renameinfo = renamedvars[var]
+ if " " in renameinfo:
+ # A space signals a string to display instead of a rename
+ bb.erroronce('Variable %s %s%s' % (var, renameinfo, info))
+ else:
+ bb.erroronce('Variable %s has been renamed to %s%s' % (var, renameinfo, info))
+
class DataSmart(MutableMapping):
def __init__(self):
self.dict = {}
@@ -370,6 +419,8 @@ class DataSmart(MutableMapping):
self.inchistory = IncludeHistory()
self.varhistory = VariableHistory(self)
self._tracking = False
+ self._var_renames = {}
+ self._var_renames.update(bitbake_renamed_vars)
self.expand_cache = {}
@@ -407,7 +458,8 @@ class DataSmart(MutableMapping):
raise
if s == olds:
break
- except ExpansionError:
+ except ExpansionError as e:
+ e.addVar(varname)
raise
except bb.parse.SkipRecipe:
raise
@@ -480,10 +532,26 @@ class DataSmart(MutableMapping):
else:
self.initVar(var)
+ def hasOverrides(self, var):
+ return var in self.overridedata
def setVar(self, var, value, **loginfo):
#print("var=" + str(var) + " val=" + str(value))
- var = var.replace(":", "_")
+
+ if not var.startswith("__anon_") and ("_append" in var or "_prepend" in var or "_remove" in var):
+ info = "%s" % var
+ if "file" in loginfo:
+ info += " file: %s" % loginfo["file"]
+ if "line" in loginfo:
+ info += " line: %s" % loginfo["line"]
+ bb.fatal("Variable %s contains an operation using the old override syntax. Please convert this layer/metadata before attempting to use with a newer bitbake." % info)
+
+ shortvar = var.split(":", 1)[0]
+ if shortvar in self._var_renames:
+ _print_rename_error(shortvar, loginfo, self._var_renames, fullvar=var)
+ # Mark that we have seen a renamed variable
+ self.setVar("_FAILPARSINGERRORHANDLED", True)
+
self.expand_cache = {}
parsing=False
if 'parsing' in loginfo:
@@ -512,7 +580,7 @@ class DataSmart(MutableMapping):
# pay the cookie monster
# more cookies for the cookie monster
- if '_' in var:
+ if ':' in var:
self._setvar_update_overrides(base, **loginfo)
if base in self.overridevars:
@@ -523,27 +591,27 @@ class DataSmart(MutableMapping):
self._makeShadowCopy(var)
if not parsing:
- if "_append" in self.dict[var]:
- del self.dict[var]["_append"]
- if "_prepend" in self.dict[var]:
- del self.dict[var]["_prepend"]
- if "_remove" in self.dict[var]:
- del self.dict[var]["_remove"]
+ if ":append" in self.dict[var]:
+ del self.dict[var][":append"]
+ if ":prepend" in self.dict[var]:
+ del self.dict[var][":prepend"]
+ if ":remove" in self.dict[var]:
+ del self.dict[var][":remove"]
if var in self.overridedata:
active = []
self.need_overrides()
for (r, o) in self.overridedata[var]:
if o in self.overridesset:
active.append(r)
- elif "_" in o:
- if set(o.split("_")).issubset(self.overridesset):
+ elif ":" in o:
+ if set(o.split(":")).issubset(self.overridesset):
active.append(r)
for a in active:
self.delVar(a)
del self.overridedata[var]
# more cookies for the cookie monster
- if '_' in var:
+ if ':' in var:
self._setvar_update_overrides(var, **loginfo)
# setting var
@@ -569,8 +637,8 @@ class DataSmart(MutableMapping):
def _setvar_update_overrides(self, var, **loginfo):
# aka pay the cookie monster
- override = var[var.rfind('_')+1:]
- shortvar = var[:var.rfind('_')]
+ override = var[var.rfind(':')+1:]
+ shortvar = var[:var.rfind(':')]
while override and __override_regexp__.match(override):
if shortvar not in self.overridedata:
self.overridedata[shortvar] = []
@@ -579,9 +647,9 @@ class DataSmart(MutableMapping):
self.overridedata[shortvar] = list(self.overridedata[shortvar])
self.overridedata[shortvar].append([var, override])
override = None
- if "_" in shortvar:
- override = var[shortvar.rfind('_')+1:]
- shortvar = var[:shortvar.rfind('_')]
+ if ":" in shortvar:
+ override = var[shortvar.rfind(':')+1:]
+ shortvar = var[:shortvar.rfind(':')]
if len(shortvar) == 0:
override = None
@@ -592,8 +660,6 @@ class DataSmart(MutableMapping):
"""
Rename the variable key to newkey
"""
- key = key.replace(":", "_")
- newkey = newkey.replace(":", "_")
if key == newkey:
bb.warn("Calling renameVar with equivalent keys (%s) is invalid" % key)
return
@@ -607,10 +673,11 @@ class DataSmart(MutableMapping):
self.varhistory.record(**loginfo)
self.setVar(newkey, val, ignore=True, parsing=True)
- for i in (__setvar_keyword__):
- src = self.getVarFlag(key, i, False)
- if src is None:
+ srcflags = self.getVarFlags(key, False, True) or {}
+ for i in srcflags:
+ if i not in (__setvar_keyword__):
continue
+ src = srcflags[i]
dest = self.getVarFlag(newkey, i, False) or []
dest.extend(src)
@@ -622,7 +689,7 @@ class DataSmart(MutableMapping):
self.overridedata[newkey].append([v.replace(key, newkey), o])
self.renameVar(v, v.replace(key, newkey))
- if '_' in newkey and val is None:
+ if ':' in newkey and val is None:
self._setvar_update_overrides(newkey, **loginfo)
loginfo['variable'] = key
@@ -634,15 +701,14 @@ class DataSmart(MutableMapping):
def appendVar(self, var, value, **loginfo):
loginfo['op'] = 'append'
self.varhistory.record(**loginfo)
- self.setVar(var + "_append", value, ignore=True, parsing=True)
+ self.setVar(var + ":append", value, ignore=True, parsing=True)
def prependVar(self, var, value, **loginfo):
loginfo['op'] = 'prepend'
self.varhistory.record(**loginfo)
- self.setVar(var + "_prepend", value, ignore=True, parsing=True)
+ self.setVar(var + ":prepend", value, ignore=True, parsing=True)
def delVar(self, var, **loginfo):
- var = var.replace(":", "_")
self.expand_cache = {}
loginfo['detail'] = ""
@@ -651,9 +717,9 @@ class DataSmart(MutableMapping):
self.dict[var] = {}
if var in self.overridedata:
del self.overridedata[var]
- if '_' in var:
- override = var[var.rfind('_')+1:]
- shortvar = var[:var.rfind('_')]
+ if ':' in var:
+ override = var[var.rfind(':')+1:]
+ shortvar = var[:var.rfind(':')]
while override and override.islower():
try:
if shortvar in self.overridedata:
@@ -663,16 +729,23 @@ class DataSmart(MutableMapping):
except ValueError as e:
pass
override = None
- if "_" in shortvar:
- override = var[shortvar.rfind('_')+1:]
- shortvar = var[:shortvar.rfind('_')]
+ if ":" in shortvar:
+ override = var[shortvar.rfind(':')+1:]
+ shortvar = var[:shortvar.rfind(':')]
if len(shortvar) == 0:
override = None
def setVarFlag(self, var, flag, value, **loginfo):
- var = var.replace(":", "_")
self.expand_cache = {}
+ if var == "BB_RENAMED_VARIABLES":
+ self._var_renames[flag] = value
+
+ if var in self._var_renames:
+ _print_rename_error(var, loginfo, self._var_renames)
+ # Mark that we have seen a renamed variable
+ self.setVar("_FAILPARSINGERRORHANDLED", True)
+
if 'op' not in loginfo:
loginfo['op'] = "set"
loginfo['flag'] = flag
@@ -681,7 +754,7 @@ class DataSmart(MutableMapping):
self._makeShadowCopy(var)
self.dict[var][flag] = value
- if flag == "_defaultval" and '_' in var:
+ if flag == "_defaultval" and ':' in var:
self._setvar_update_overrides(var, **loginfo)
if flag == "_defaultval" and var in self.overridevars:
self._setvar_update_overridevars(var, value)
@@ -694,7 +767,6 @@ class DataSmart(MutableMapping):
self.dict["__exportlist"]["_content"].add(var)
def getVarFlag(self, var, flag, expand=True, noweakdefault=False, parsing=False, retparser=False):
- var = var.replace(":", "_")
if flag == "_content":
cachename = var
else:
@@ -714,11 +786,11 @@ class DataSmart(MutableMapping):
active = {}
self.need_overrides()
for (r, o) in overridedata:
- # What about double overrides both with "_" in the name?
+ # FIXME What about double overrides both with "_" in the name?
if o in self.overridesset:
active[o] = r
- elif "_" in o:
- if set(o.split("_")).issubset(self.overridesset):
+ elif ":" in o:
+ if set(o.split(":")).issubset(self.overridesset):
active[o] = r
mod = True
@@ -726,10 +798,10 @@ class DataSmart(MutableMapping):
mod = False
for o in self.overrides:
for a in active.copy():
- if a.endswith("_" + o):
+ if a.endswith(":" + o):
t = active[a]
del active[a]
- active[a.replace("_" + o, "")] = t
+ active[a.replace(":" + o, "")] = t
mod = True
elif a == o:
match = active[a]
@@ -748,31 +820,31 @@ class DataSmart(MutableMapping):
value = copy.copy(local_var["_defaultval"])
- if flag == "_content" and local_var is not None and "_append" in local_var and not parsing:
- if not value:
- value = ""
+ if flag == "_content" and local_var is not None and ":append" in local_var and not parsing:
self.need_overrides()
- for (r, o) in local_var["_append"]:
+ for (r, o) in local_var[":append"]:
match = True
if o:
- for o2 in o.split("_"):
+ for o2 in o.split(":"):
if not o2 in self.overrides:
match = False
if match:
+ if value is None:
+ value = ""
value = value + r
- if flag == "_content" and local_var is not None and "_prepend" in local_var and not parsing:
- if not value:
- value = ""
+ if flag == "_content" and local_var is not None and ":prepend" in local_var and not parsing:
self.need_overrides()
- for (r, o) in local_var["_prepend"]:
+ for (r, o) in local_var[":prepend"]:
match = True
if o:
- for o2 in o.split("_"):
+ for o2 in o.split(":"):
if not o2 in self.overrides:
match = False
if match:
+ if value is None:
+ value = ""
value = r + value
parser = None
@@ -781,12 +853,12 @@ class DataSmart(MutableMapping):
if expand:
value = parser.value
- if value and flag == "_content" and local_var is not None and "_remove" in local_var and not parsing:
+ if value and flag == "_content" and local_var is not None and ":remove" in local_var and not parsing:
self.need_overrides()
- for (r, o) in local_var["_remove"]:
+ for (r, o) in local_var[":remove"]:
match = True
if o:
- for o2 in o.split("_"):
+ for o2 in o.split(":"):
if not o2 in self.overrides:
match = False
if match:
@@ -799,7 +871,7 @@ class DataSmart(MutableMapping):
expanded_removes[r] = self.expand(r).split()
parser.removes = set()
- val = ""
+ val = []
for v in __whitespace_split__.split(parser.value):
skip = False
for r in removes:
@@ -808,8 +880,8 @@ class DataSmart(MutableMapping):
skip = True
if skip:
continue
- val = val + v
- parser.value = val
+ val.append(v)
+ parser.value = "".join(val)
if expand:
value = parser.value
@@ -822,7 +894,6 @@ class DataSmart(MutableMapping):
return value
def delVarFlag(self, var, flag, **loginfo):
- var = var.replace(":", "_")
self.expand_cache = {}
local_var, _ = self._findVar(var)
@@ -840,7 +911,6 @@ class DataSmart(MutableMapping):
del self.dict[var][flag]
def appendVarFlag(self, var, flag, value, **loginfo):
- var = var.replace(":", "_")
loginfo['op'] = 'append'
loginfo['flag'] = flag
self.varhistory.record(**loginfo)
@@ -848,7 +918,6 @@ class DataSmart(MutableMapping):
self.setVarFlag(var, flag, newvalue, ignore=True)
def prependVarFlag(self, var, flag, value, **loginfo):
- var = var.replace(":", "_")
loginfo['op'] = 'prepend'
loginfo['flag'] = flag
self.varhistory.record(**loginfo)
@@ -856,7 +925,6 @@ class DataSmart(MutableMapping):
self.setVarFlag(var, flag, newvalue, ignore=True)
def setVarFlags(self, var, flags, **loginfo):
- var = var.replace(":", "_")
self.expand_cache = {}
infer_caller_details(loginfo)
if not var in self.dict:
@@ -871,13 +939,12 @@ class DataSmart(MutableMapping):
self.dict[var][i] = flags[i]
def getVarFlags(self, var, expand = False, internalflags=False):
- var = var.replace(":", "_")
local_var, _ = self._findVar(var)
flags = {}
if local_var:
for i in local_var:
- if i.startswith("_") and not internalflags:
+ if i.startswith(("_", ":")) and not internalflags:
continue
flags[i] = local_var[i]
if expand and i in expand:
@@ -888,7 +955,6 @@ class DataSmart(MutableMapping):
def delVarFlags(self, var, **loginfo):
- var = var.replace(":", "_")
self.expand_cache = {}
if not var in self.dict:
self._makeShadowCopy(var)
@@ -919,6 +985,7 @@ class DataSmart(MutableMapping):
data.inchistory = self.inchistory.copy()
data._tracking = self._tracking
+ data._var_renames = self._var_renames
data.overrides = None
data.overridevars = copy.copy(self.overridevars)
@@ -941,7 +1008,7 @@ class DataSmart(MutableMapping):
value = self.getVar(variable, False)
for key in keys:
referrervalue = self.getVar(key, False)
- if referrervalue and ref in referrervalue:
+ if referrervalue and isinstance(referrervalue, str) and ref in referrervalue:
self.setVar(key, referrervalue.replace(ref, value))
def localkeys(self):
@@ -976,8 +1043,8 @@ class DataSmart(MutableMapping):
for (r, o) in self.overridedata[var]:
if o in self.overridesset:
overrides.add(var)
- elif "_" in o:
- if set(o.split("_")).issubset(self.overridesset):
+ elif ":" in o:
+ if set(o.split(":")).issubset(self.overridesset):
overrides.add(var)
for k in keylist(self.dict):
@@ -1007,10 +1074,10 @@ class DataSmart(MutableMapping):
d = self.createCopy()
bb.data.expandKeys(d)
- config_whitelist = set((d.getVar("BB_HASHCONFIG_WHITELIST") or "").split())
+ config_ignore_vars = set((d.getVar("BB_HASHCONFIG_IGNORE_VARS") or "").split())
keys = set(key for key in iter(d) if not key.startswith("__"))
for key in keys:
- if key in config_whitelist:
+ if key in config_ignore_vars:
continue
value = d.getVar(key, False) or ""
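
A quick illustration of the override-syntax switch against a bare datastore
(real metadata goes through the parser, so this is only a sketch):

    import bb.data_smart

    d = bb.data_smart.DataSmart()
    d.setVar("DEPENDS", "libfoo")
    d.setVar("DEPENDS:append", " libbar")   # new colon syntax
    print(d.getVar("DEPENDS"))              # -> "libfoo libbar"

    # the old underscore form is now a hard error at setVar() time:
    #   d.setVar("DEPENDS_append", " libbaz")  -> bb.fatal(...)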
diff --git a/bitbake/lib/bb/event.py b/bitbake/lib/bb/event.py
index 0454c75..9766860 100644
--- a/bitbake/lib/bb/event.py
+++ b/bitbake/lib/bb/event.py
@@ -40,7 +40,7 @@ class HeartbeatEvent(Event):
"""Triggered at regular time intervals of 10 seconds. Other events can fire much more often
(runQueueTaskStarted when there are many short tasks) or not at all for long periods
of time (again runQueueTaskStarted, when there is just one long-running task), so this
- event is more suitable for doing some task-independent work occassionally."""
+ event is more suitable for doing some task-independent work occasionally."""
def __init__(self, time):
Event.__init__(self)
self.time = time
@@ -132,8 +132,14 @@ def print_ui_queue():
if not _uiready:
from bb.msg import BBLogFormatter
# Flush any existing buffered content
- sys.stdout.flush()
- sys.stderr.flush()
+ try:
+ sys.stdout.flush()
+ except:
+ pass
+ try:
+ sys.stderr.flush()
+ except:
+ pass
stdout = logging.StreamHandler(sys.stdout)
stderr = logging.StreamHandler(sys.stderr)
formatter = BBLogFormatter("%(levelname)s: %(message)s")
@@ -486,7 +492,7 @@ class BuildCompleted(BuildBase, OperationCompleted):
BuildBase.__init__(self, n, p, failures)
class DiskFull(Event):
- """Disk full case build aborted"""
+ """Disk full case build halted"""
def __init__(self, dev, type, freespace, mountpoint):
Event.__init__(self)
self._dev = dev
@@ -764,7 +770,7 @@ class LogHandler(logging.Handler):
class MetadataEvent(Event):
"""
Generic event that target for OE-Core classes
- to report information during asynchrous execution
+ to report information during asynchronous execution
"""
def __init__(self, eventtype, eventdata):
Event.__init__(self)
diff --git a/bitbake/lib/bb/exceptions.py b/bitbake/lib/bb/exceptions.py
index ecbad59..801db9c 100644
--- a/bitbake/lib/bb/exceptions.py
+++ b/bitbake/lib/bb/exceptions.py
@@ -1,4 +1,6 @@
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
diff --git a/bitbake/lib/bb/fetch2/README b/bitbake/lib/bb/fetch2/README
new file mode 100644
index 0000000..67b787e
--- /dev/null
+++ b/bitbake/lib/bb/fetch2/README
@@ -0,0 +1,57 @@
+There are expectations of users of the fetcher code. This file attempts to document
+some of the constraints that are present. Some are obvious, some are less so. It is
+documented in the context of how OE uses it but the API calls are generic.
+
+a) network access for sources is only expected to happen in the do_fetch task.
+ This is not enforced or tested but is required so that we can:
+
+ i) audit the sources used (i.e. for license/manifest reasons)
+ ii) support offline builds with a suitable cache
+ iii) allow work to continue even with downtime upstream
+ iv) allow for changes upstream in incompatible ways
+ v) allow rebuilding of the software in X years time
+
+b) network access is not expected in do_unpack task.
+
+c) you can take DL_DIR and use it as a mirror for offline builds.
+
+d) access to the network is only made when explicitly configured in recipes
+ (e.g. use of AUTOREV, or use of git tags which change revision).
+
+e) fetcher output is deterministic (i.e. if you fetch configuration XXX now it
+   will match exactly in a future clean build with a new DL_DIR).
+   One specific pain point example is git tags. They can be replaced and change,
+ so the git fetcher has to resolve them with the network. We use git revisions
+ where possible to avoid this and ensure determinism.
+
+f) network access is expected to work with the standard linux proxy variables
+ so that access behind firewalls works (the fetcher sets these in the
+ environment but only in the do_fetch tasks).
+
+g) access during parsing has to be minimal; a "git ls-remote" for an AUTOREV
+   git recipe might be ok but you can't expect to check out a git tree.
+
+h) we need to provide revision information during parsing such that a version
+ for the recipe can be constructed.
+
+i) versions are expected to be able to increase in a way which sorts, allowing
+   package feeds to operate (see the PR server, required for git revisions to sort).
+
+j) An API to query for possible version upgrades of a url is highly desirable
+   to allow our automated upgrade code to function (it is implied this always
+   has network access).
+
+k) Where fixes or changes to behaviour in the fetcher are made, we ask that
+ test cases are added (run with "bitbake-selftest bb.tests.fetch"). We do
+ have fairly extensive test coverage of the fetcher as it is the only way
+   to track all of its corner cases; sadly it still doesn't give complete
+   coverage.
+
+l) If using tools during parse time, they will have to be in ASSUME_PROVIDED
+ in OE's context as we can't build git-native, then parse a recipe and use
+ git ls-remote.
+
+Not all fetchers support all features; autorev is optional and doesn't make
+sense for some. Upgrade detection means different things in different contexts
+too.
+
diff --git a/bitbake/lib/bb/fetch2/__init__.py b/bitbake/lib/bb/fetch2/__init__.py
index 1005ec1..a314062 100644
--- a/bitbake/lib/bb/fetch2/__init__.py
+++ b/bitbake/lib/bb/fetch2/__init__.py
@@ -113,7 +113,7 @@ class MissingParameterError(BBFetchException):
self.args = (missing, url)
class ParameterError(BBFetchException):
- """Exception raised when a url cannot be proccessed due to invalid parameters."""
+ """Exception raised when a url cannot be processed due to invalid parameters."""
def __init__(self, message, url):
msg = "URL: '%s' has invalid parameters. %s" % (url, message)
self.url = url
@@ -182,7 +182,7 @@ class URI(object):
Some notes about relative URIs: while it's specified that
a URI beginning with <scheme>:// should either be directly
followed by a hostname or a /, the old URI handling of the
- fetch2 library did not comform to this. Therefore, this URI
+ fetch2 library did not conform to this. Therefore, this URI
class has some kludges to make sure that URIs are parsed in
a way comforming to bitbake's current usage. This URI class
supports the following:
@@ -199,7 +199,7 @@ class URI(object):
file://hostname/absolute/path.diff (would be IETF compliant)
Note that the last case only applies to a list of
- "whitelisted" schemes (currently only file://), that requires
+ explicitly allowed schemes (currently only file://), that requires
its URIs to not have a network location.
"""
@@ -402,24 +402,24 @@ def encodeurl(decoded):
if not type:
raise MissingParameterError('type', "encoded from the data %s" % str(decoded))
- url = '%s://' % type
+ url = ['%s://' % type]
if user and type != "file":
- url += "%s" % user
+ url.append("%s" % user)
if pswd:
- url += ":%s" % pswd
- url += "@"
+ url.append(":%s" % pswd)
+ url.append("@")
if host and type != "file":
- url += "%s" % host
+ url.append("%s" % host)
if path:
# Standardise path to ensure comparisons work
while '//' in path:
path = path.replace("//", "/")
- url += "%s" % urllib.parse.quote(path)
+ url.append("%s" % urllib.parse.quote(path))
if p:
for parm in p:
- url += ";%s=%s" % (parm, p[parm])
+ url.append(";%s=%s" % (parm, p[parm]))
- return url
+ return "".join(url)
def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
if not ud.url or not uri_find or not uri_replace:
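A minimal sketch of the (unchanged) external behaviour, assuming the
(type, host, path, user, pswd, params) tuple layout returned by decodeurl:

    from bb.fetch2 import decodeurl, encodeurl

    # Round-trip a URL; the list-based join yields the same string as before.
    decoded = decodeurl("git://example.com/repo.git;branch=main")
    assert encodeurl(decoded) == "git://example.com/repo.git;branch=main"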
@@ -471,8 +471,15 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
uri_decoded[5] = {}
elif ud.localpath and ud.method.supports_checksum(ud):
basename = os.path.basename(ud.localpath)
- if basename and not result_decoded[loc].endswith(basename):
- result_decoded[loc] = os.path.join(result_decoded[loc], basename)
+ if basename:
+ uri_basename = os.path.basename(uri_decoded[loc])
+ # Prefix with a slash as a sentinel in case
+ # result_decoded[loc] does not contain one.
+ path = "/" + result_decoded[loc]
+ if uri_basename and basename != uri_basename and path.endswith("/" + uri_basename):
+ result_decoded[loc] = path[1:-len(uri_basename)] + basename
+ elif not path.endswith("/" + basename):
+ result_decoded[loc] = os.path.join(path[1:], basename)
else:
return None
result = encodeurl(result_decoded)
@@ -758,6 +765,12 @@ def get_srcrev(d, method_name='sortable_revision'):
that fetcher provides a method with the given name and the same signature as sortable_revision.
"""
+ d.setVar("__BBSEENSRCREV", "1")
+ recursion = d.getVar("__BBINSRCREV")
+ if recursion:
+ raise FetchError("There are recursive references in fetcher variables, likely through SRC_URI")
+ d.setVar("__BBINSRCREV", True)
+
scms = []
fetcher = Fetch(d.getVar('SRC_URI').split(), d)
urldata = fetcher.ud
@@ -765,13 +778,14 @@ def get_srcrev(d, method_name='sortable_revision'):
if urldata[u].method.supports_srcrev():
scms.append(u)
- if len(scms) == 0:
+ if not scms:
raise FetchError("SRCREV was used yet no valid SCM was found in SRC_URI")
if len(scms) == 1 and len(urldata[scms[0]].names) == 1:
autoinc, rev = getattr(urldata[scms[0]].method, method_name)(urldata[scms[0]], d, urldata[scms[0]].names[0])
if len(rev) > 10:
rev = rev[:10]
+ d.delVar("__BBINSRCREV")
if autoinc:
return "AUTOINC+" + rev
return rev
@@ -806,12 +820,49 @@ def get_srcrev(d, method_name='sortable_revision'):
if seenautoinc:
format = "AUTOINC+" + format
+ d.delVar("__BBINSRCREV")
return format
def localpath(url, d):
fetcher = bb.fetch2.Fetch([url], d)
return fetcher.localpath(url)
+# Need to export PATH as binary could be in metadata paths
+# rather than host provided
+# Also include some other variables.
+FETCH_EXPORT_VARS = ['HOME', 'PATH',
+ 'HTTP_PROXY', 'http_proxy',
+ 'HTTPS_PROXY', 'https_proxy',
+ 'FTP_PROXY', 'ftp_proxy',
+ 'FTPS_PROXY', 'ftps_proxy',
+ 'NO_PROXY', 'no_proxy',
+ 'ALL_PROXY', 'all_proxy',
+ 'GIT_PROXY_COMMAND',
+ 'GIT_SSH',
+ 'GIT_SSH_COMMAND',
+ 'GIT_SSL_CAINFO',
+ 'GIT_SMART_HTTP',
+ 'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
+ 'SOCKS5_USER', 'SOCKS5_PASSWD',
+ 'DBUS_SESSION_BUS_ADDRESS',
+ 'P4CONFIG',
+ 'SSL_CERT_FILE',
+ 'AWS_PROFILE',
+ 'AWS_ACCESS_KEY_ID',
+ 'AWS_SECRET_ACCESS_KEY',
+ 'AWS_DEFAULT_REGION']
+
+def get_fetcher_environment(d):
+ newenv = {}
+ origenv = d.getVar("BB_ORIGENV")
+ for name in bb.fetch2.FETCH_EXPORT_VARS:
+ value = d.getVar(name)
+ if not value and origenv:
+ value = origenv.getVar(name)
+ if value:
+ newenv[name] = value
+ return newenv
+
def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
"""
Run cmd returning the command output
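A hedged usage sketch for the new helper (assumes a populated datastore d; the
helper only copies variables that are actually set):

    import subprocess
    import bb.fetch2, bb.utils

    # Run an external tool with the fetcher's proxy/SSL environment applied.
    env = bb.fetch2.get_fetcher_environment(d)
    with bb.utils.environment(**env):
        subprocess.check_call(["git", "ls-remote", "https://example.com/repo.git"])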
@@ -820,25 +871,7 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
Optionally remove the files/directories listed in cleanup upon failure
"""
- # Need to export PATH as binary could be in metadata paths
- # rather than host provided
- # Also include some other variables.
- # FIXME: Should really include all export varaiables?
- exportvars = ['HOME', 'PATH',
- 'HTTP_PROXY', 'http_proxy',
- 'HTTPS_PROXY', 'https_proxy',
- 'FTP_PROXY', 'ftp_proxy',
- 'FTPS_PROXY', 'ftps_proxy',
- 'NO_PROXY', 'no_proxy',
- 'ALL_PROXY', 'all_proxy',
- 'GIT_PROXY_COMMAND',
- 'GIT_SSH',
- 'GIT_SSL_CAINFO',
- 'GIT_SMART_HTTP',
- 'SSH_AUTH_SOCK', 'SSH_AGENT_PID',
- 'SOCKS5_USER', 'SOCKS5_PASSWD',
- 'DBUS_SESSION_BUS_ADDRESS',
- 'P4CONFIG']
+ exportvars = FETCH_EXPORT_VARS
if not cleanup:
cleanup = []
@@ -1064,6 +1097,8 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
def ensure_symlink(target, link_name):
if not os.path.exists(link_name):
+ dirname = os.path.dirname(link_name)
+ bb.utils.mkdirhier(dirname)
if os.path.islink(link_name):
# Broken symbolic link
os.unlink(link_name)
@@ -1147,11 +1182,11 @@ def srcrev_internal_helper(ud, d, name):
pn = d.getVar("PN")
attempts = []
if name != '' and pn:
- attempts.append("SRCREV_%s_pn-%s" % (name, pn))
+ attempts.append("SRCREV_%s:pn-%s" % (name, pn))
if name != '':
attempts.append("SRCREV_%s" % name)
if pn:
- attempts.append("SRCREV_pn-%s" % pn)
+ attempts.append("SRCREV:pn-%s" % pn)
attempts.append("SRCREV")
for a in attempts:
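With this change, per-recipe SRCREVs use a colon before the pn- suffix, e.g.
(recipe names and hashes are illustrative):

    SRCREV:pn-mypackage = "0123456789abcdef0123456789abcdef01234567"
    SRCREV_meta:pn-mypackage = "fedcba9876543210fedcba9876543210fedcba98"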
@@ -1441,30 +1476,33 @@ class FetchMethod(object):
cmd = None
if unpack:
+ tar_cmd = 'tar --extract --no-same-owner'
+ if 'striplevel' in urldata.parm:
+ tar_cmd += ' --strip-components=%s' % urldata.parm['striplevel']
if file.endswith('.tar'):
- cmd = 'tar x --no-same-owner -f %s' % file
+ cmd = '%s -f %s' % (tar_cmd, file)
elif file.endswith('.tgz') or file.endswith('.tar.gz') or file.endswith('.tar.Z'):
- cmd = 'tar xz --no-same-owner -f %s' % file
+ cmd = '%s -z -f %s' % (tar_cmd, file)
elif file.endswith('.tbz') or file.endswith('.tbz2') or file.endswith('.tar.bz2'):
- cmd = 'bzip2 -dc %s | tar x --no-same-owner -f -' % file
+ cmd = 'bzip2 -dc %s | %s -f -' % (file, tar_cmd)
elif file.endswith('.gz') or file.endswith('.Z') or file.endswith('.z'):
cmd = 'gzip -dc %s > %s' % (file, efile)
elif file.endswith('.bz2'):
cmd = 'bzip2 -dc %s > %s' % (file, efile)
elif file.endswith('.txz') or file.endswith('.tar.xz'):
- cmd = 'xz -dc %s | tar x --no-same-owner -f -' % file
+ cmd = 'xz -dc %s | %s -f -' % (file, tar_cmd)
elif file.endswith('.xz'):
cmd = 'xz -dc %s > %s' % (file, efile)
elif file.endswith('.tar.lz'):
- cmd = 'lzip -dc %s | tar x --no-same-owner -f -' % file
+ cmd = 'lzip -dc %s | %s -f -' % (file, tar_cmd)
elif file.endswith('.lz'):
cmd = 'lzip -dc %s > %s' % (file, efile)
elif file.endswith('.tar.7z'):
- cmd = '7z x -so %s | tar x --no-same-owner -f -' % file
+ cmd = '7z x -so %s | %s -f -' % (file, tar_cmd)
elif file.endswith('.7z'):
cmd = '7za x -y %s 1>/dev/null' % file
elif file.endswith('.tzst') or file.endswith('.tar.zst'):
- cmd = 'zstd --decompress --stdout %s | tar x --no-same-owner -f -' % file
+ cmd = 'zstd --decompress --stdout %s | %s -f -' % (file, tar_cmd)
elif file.endswith('.zst'):
cmd = 'zstd --decompress --stdout %s > %s' % (file, efile)
elif file.endswith('.zip') or file.endswith('.jar'):
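A hypothetical recipe line exercising the new parameter (URL illustrative):

    SRC_URI = "https://example.com/pkg-1.0.tar.gz;striplevel=2"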
@@ -1497,7 +1535,7 @@ class FetchMethod(object):
raise UnpackError("Unable to unpack deb/ipk package - does not contain data.tar.* file", urldata.url)
else:
raise UnpackError("Unable to unpack deb/ipk package - could not list contents", urldata.url)
- cmd = 'ar x %s %s && tar --no-same-owner -xpf %s && rm %s' % (file, datafile, datafile, datafile)
+ cmd = 'ar x %s %s && %s -p -f %s && rm %s' % (file, datafile, tar_cmd, datafile, datafile)
# If 'subdir' param exists, create a dir and use it as destination for unpack cmd
if 'subdir' in urldata.parm:
@@ -1623,7 +1661,7 @@ class Fetch(object):
if localonly and cache:
raise Exception("bb.fetch2.Fetch.__init__: cannot set cache and localonly at same time")
- if len(urls) == 0:
+ if not urls:
urls = d.getVar("SRC_URI").split()
self.urls = urls
self.d = d
@@ -1712,7 +1750,9 @@ class Fetch(object):
self.d.setVar("BB_NO_NETWORK", "1")
firsterr = None
- verified_stamp = m.verify_donestamp(ud, self.d)
+ verified_stamp = False
+ if done:
+ verified_stamp = m.verify_donestamp(ud, self.d)
if not done and (not verified_stamp or m.need_update(ud, self.d)):
try:
if not trusted_network(self.d, ud.url):
@@ -1771,7 +1811,11 @@ class Fetch(object):
def checkstatus(self, urls=None):
"""
- Check all urls exist upstream
+ Check all URLs exist upstream.
+
+ Returns None if the URLs exist, raises FetchError if the check wasn't
+ successful but there wasn't an error (such as file not found), and
+ raises other exceptions in error cases.
"""
if not urls:
@@ -1916,6 +1960,7 @@ from . import clearcase
from . import npm
from . import npmsw
from . import az
+from . import crate
methods.append(local.Local())
methods.append(wget.Wget())
@@ -1936,3 +1981,4 @@ methods.append(clearcase.ClearCase())
methods.append(npm.Npm())
methods.append(npmsw.NpmShrinkWrap())
methods.append(az.Az())
+methods.append(crate.Crate())
diff --git a/bitbake/lib/bb/fetch2/crate.py b/bitbake/lib/bb/fetch2/crate.py
new file mode 100644
index 0000000..f4ddc78
--- /dev/null
+++ b/bitbake/lib/bb/fetch2/crate.py
@@ -0,0 +1,136 @@
+# ex:ts=4:sw=4:sts=4:et
+# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
+"""
+BitBake 'Fetch' implementation for crates.io
+"""
+
+# Copyright (C) 2016 Doug Goldstein
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Based on functions from the base bb module, Copyright 2003 Holger Schurig
+
+import hashlib
+import json
+import os
+import subprocess
+import bb
+from bb.fetch2 import logger, subprocess_setup, UnpackError
+from bb.fetch2.wget import Wget
+
+
+class Crate(Wget):
+
+ """Class to fetch crates via wget"""
+
+ def _cargo_bitbake_path(self, rootdir):
+ return os.path.join(rootdir, "cargo_home", "bitbake")
+
+ def supports(self, ud, d):
+ """
+ Check to see if a given url is for this fetcher
+ """
+ return ud.type in ['crate']
+
+ def recommends_checksum(self, urldata):
+ return False
+
+ def urldata_init(self, ud, d):
+ """
+ Sets up to download the respective crate from crates.io
+ """
+
+ if ud.type == 'crate':
+ self._crate_urldata_init(ud, d)
+
+ super(Crate, self).urldata_init(ud, d)
+
+ def _crate_urldata_init(self, ud, d):
+ """
+ Sets up the download for a crate
+ """
+
+        # URL syntax is: crate://HOST/NAME/VERSION
+ # break the URL apart by /
+ parts = ud.url.split('/')
+ if len(parts) < 5:
+ raise bb.fetch2.ParameterError("Invalid URL: Must be crate://HOST/NAME/VERSION", ud.url)
+
+ # last field is version
+ version = parts[len(parts) - 1]
+ # second to last field is name
+ name = parts[len(parts) - 2]
+        # host (this is to allow custom crate registries to be specified)
+ host = '/'.join(parts[2:len(parts) - 2])
+
+ # if using upstream just fix it up nicely
+ if host == 'crates.io':
+ host = 'crates.io/api/v1/crates'
+
+ ud.url = "https://%s/%s/%s/download" % (host, name, version)
+ ud.parm['downloadfilename'] = "%s-%s.crate" % (name, version)
+ ud.parm['name'] = name
+
+ logger.debug("Fetching %s to %s" % (ud.url, ud.parm['downloadfilename']))
+
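For example, a recipe would reference a crate as (name and version illustrative):

    SRC_URI = "crate://crates.io/glob/0.2.11"

which this code rewrites to https://crates.io/api/v1/crates/glob/0.2.11/download.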
+ def unpack(self, ud, rootdir, d):
+ """
+ Uses the crate to build the necessary paths for cargo to utilize it
+ """
+ if ud.type == 'crate':
+ return self._crate_unpack(ud, rootdir, d)
+ else:
+ super(Crate, self).unpack(ud, rootdir, d)
+
+ def _crate_unpack(self, ud, rootdir, d):
+ """
+ Unpacks a crate
+ """
+ thefile = ud.localpath
+
+ # possible metadata we need to write out
+ metadata = {}
+
+ # change to the rootdir to unpack but save the old working dir
+ save_cwd = os.getcwd()
+ os.chdir(rootdir)
+
+ pn = d.getVar('BPN')
+ if pn == ud.parm.get('name'):
+ cmd = "tar -xz --no-same-owner -f %s" % thefile
+ else:
+ cargo_bitbake = self._cargo_bitbake_path(rootdir)
+
+ cmd = "tar -xz --no-same-owner -f %s -C %s" % (thefile, cargo_bitbake)
+
+ # ensure we've got these paths made
+ bb.utils.mkdirhier(cargo_bitbake)
+
+ # generate metadata necessary
+ with open(thefile, 'rb') as f:
+ # get the SHA256 of the original tarball
+ tarhash = hashlib.sha256(f.read()).hexdigest()
+
+ metadata['files'] = {}
+ metadata['package'] = tarhash
+
+ path = d.getVar('PATH')
+ if path:
+ cmd = "PATH=\"%s\" %s" % (path, cmd)
+ bb.note("Unpacking %s to %s/" % (thefile, os.getcwd()))
+
+ ret = subprocess.call(cmd, preexec_fn=subprocess_setup, shell=True)
+
+ os.chdir(save_cwd)
+
+ if ret != 0:
+ raise UnpackError("Unpack command %s failed with return value %s" % (cmd, ret), ud.url)
+
+ # if we have metadata to write out..
+ if len(metadata) > 0:
+ cratepath = os.path.splitext(os.path.basename(thefile))[0]
+ bbpath = self._cargo_bitbake_path(rootdir)
+ mdfile = '.cargo-checksum.json'
+ mdpath = os.path.join(bbpath, cratepath, mdfile)
+ with open(mdpath, "w") as f:
+ json.dump(metadata, f)
diff --git a/bitbake/lib/bb/fetch2/git.py b/bitbake/lib/bb/fetch2/git.py
index d17e2f0..f0df6fb 100644
--- a/bitbake/lib/bb/fetch2/git.py
+++ b/bitbake/lib/bb/fetch2/git.py
@@ -146,6 +146,7 @@ class Git(FetchMethod):
# github stopped supporting git protocol
# https://github.blog/2021-09-01-improving-git-protocol-security-github/#no-more-unauthenticated-git
ud.proto = "https"
+ bb.warn("URL: %s uses git protocol which is no longer supported by github. Please change to ;protocol=https in the url." % ud.url)
if not ud.proto in ('git', 'file', 'ssh', 'http', 'https', 'rsync'):
raise bb.fetch2.ParameterError("Invalid protocol type", ud.url)
@@ -169,11 +170,18 @@ class Git(FetchMethod):
ud.nocheckout = 1
ud.unresolvedrev = {}
- branches = ud.parm.get("branch", "master").split(',')
+ branches = ud.parm.get("branch", "").split(',')
+ if branches == [""] and not ud.nobranch:
+            bb.warn("URL: %s does not set any branch parameter. The future default branch used by tools and repositories is uncertain and we will therefore soon require this to be set in all git urls." % ud.url)
+ branches = ["master"]
if len(branches) != len(ud.names):
raise bb.fetch2.ParameterError("The number of name and branch parameters is not balanced", ud.url)
- ud.cloneflags = "-s -n"
+ ud.noshared = d.getVar("BB_GIT_NOSHARED") == "1"
+
+ ud.cloneflags = "-n"
+ if not ud.noshared:
+ ud.cloneflags += " -s"
if ud.bareclone:
ud.cloneflags += " --mirror"
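Both behaviours are controlled from metadata, e.g. (values illustrative):

    # Silence the branch warning by being explicit in the recipe:
    SRC_URI = "git://example.com/repo.git;protocol=https;branch=main"

    # Disable shared clones globally, e.g. in local.conf:
    BB_GIT_NOSHARED = "1"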
@@ -232,7 +240,7 @@ class Git(FetchMethod):
for name in ud.names:
ud.unresolvedrev[name] = 'HEAD'
- ud.basecmd = d.getVar("FETCHCMD_git") or "git -c core.fsyncobjectfiles=0"
+ ud.basecmd = d.getVar("FETCHCMD_git") or "git -c core.fsyncobjectfiles=0 -c gc.autoDetach=false -c core.pager=cat"
write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0"
ud.write_tarballs = write_tarballs != "0" or ud.rebaseable
@@ -391,7 +399,7 @@ class Git(FetchMethod):
if self._contains_lfs(ud, d, ud.clonedir) and self._need_lfs(ud):
# Unpack temporary working copy, use it to run 'git checkout' to force pre-fetching
- # of all LFS blobs needed at the the srcrev.
+ # of all LFS blobs needed at the srcrev.
#
# It would be nice to just do this inline here by running 'git-lfs fetch'
# on the bare clonedir, but that operation requires a working copy on some
@@ -454,7 +462,10 @@ class Git(FetchMethod):
logger.info("Creating tarball of git repository")
with create_atomic(ud.fullmirror) as tfile:
- runfetchcmd("tar -czf %s ." % tfile, d, workdir=ud.clonedir)
+ mtime = runfetchcmd("git log --all -1 --format=%cD", d,
+ quiet=True, workdir=ud.clonedir)
+ runfetchcmd("tar -czf %s --owner oe:0 --group oe:0 --mtime \"%s\" ."
+ % (tfile, mtime), d, workdir=ud.clonedir)
runfetchcmd("touch %s.done" % ud.fullmirror, d)
def clone_shallow_local(self, ud, dest, d):
@@ -516,13 +527,24 @@ class Git(FetchMethod):
def unpack(self, ud, destdir, d):
""" unpack the downloaded src to destdir"""
- subdir = ud.parm.get("subpath", "")
- if subdir != "":
- readpathspec = ":%s" % subdir
- def_destsuffix = "%s/" % os.path.basename(subdir.rstrip('/'))
- else:
- readpathspec = ""
- def_destsuffix = "git/"
+ subdir = ud.parm.get("subdir")
+ subpath = ud.parm.get("subpath")
+ readpathspec = ""
+ def_destsuffix = "git/"
+
+ if subpath:
+ readpathspec = ":%s" % subpath
+ def_destsuffix = "%s/" % os.path.basename(subpath.rstrip('/'))
+
+ if subdir:
+ # If 'subdir' param exists, create a dir and use it as destination for unpack cmd
+ if os.path.isabs(subdir):
+ if not os.path.realpath(subdir).startswith(os.path.realpath(destdir)):
+ raise bb.fetch2.UnpackError("subdir argument isn't a subdirectory of unpack root %s" % destdir, ud.url)
+ destdir = subdir
+ else:
+ destdir = os.path.join(destdir, subdir)
+ def_destsuffix = ""
destsuffix = ud.parm.get("destsuffix", def_destsuffix)
destdir = ud.destdir = os.path.join(destdir, destsuffix)
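A hypothetical url combining the two parameters: subpath limits what is checked
out, subdir relocates the destination under the unpack root (paths illustrative):

    SRC_URI = "git://example.com/repo.git;protocol=https;branch=main;subpath=src/lib;subdir=external"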
@@ -569,7 +591,7 @@ class Git(FetchMethod):
bb.note("Repository %s has LFS content but it is not being fetched" % (repourl))
if not ud.nocheckout:
- if subdir != "":
+ if subpath:
runfetchcmd("%s read-tree %s%s" % (ud.basecmd, ud.revisions[ud.names[0]], readpathspec), d,
workdir=destdir)
runfetchcmd("%s checkout-index -q -f -a" % ud.basecmd, d, workdir=destdir)
@@ -708,6 +730,12 @@ class Git(FetchMethod):
"""
Compute the HEAD revision for the url
"""
+ if not d.getVar("__BBSEENSRCREV"):
+ raise bb.fetch2.FetchError("Recipe uses a floating tag/branch without a fixed SRCREV yet doesn't call bb.fetch2.get_srcrev() (use SRCPV in PV for OE).")
+
+ # Ensure we mark as not cached
+ bb.fetch2.get_autorev(d)
+
output = self._lsremote(ud, d, "")
# Tags of the form ^{} may not work, need to fallback to other form
if ud.unresolvedrev[name][:5] == "refs/" or ud.usehead:
diff --git a/bitbake/lib/bb/fetch2/gitsm.py b/bitbake/lib/bb/fetch2/gitsm.py
index a4527bf..25d5db0 100644
--- a/bitbake/lib/bb/fetch2/gitsm.py
+++ b/bitbake/lib/bb/fetch2/gitsm.py
@@ -88,7 +88,7 @@ class GitSM(Git):
subrevision[m] = module_hash.split()[2]
# Convert relative to absolute uri based on parent uri
- if uris[m].startswith('..'):
+ if uris[m].startswith('..') or uris[m].startswith('./'):
newud = copy.copy(ud)
newud.path = os.path.realpath(os.path.join(newud.path, uris[m]))
uris[m] = Git._get_repo_url(self, newud)
@@ -115,6 +115,9 @@ class GitSM(Git):
# This has to be a file reference
proto = "file"
url = "gitsm://" + uris[module]
+ if "{}{}".format(ud.host, ud.path) in url:
+                        raise bb.fetch2.FetchError("Submodule refers to the parent repository. This will cause a deadlock situation in the current version of BitBake. " \
+                                                   "Consider using the git fetcher instead.")
url += ';protocol=%s' % proto
url += ";name=%s" % module
@@ -140,16 +143,6 @@ class GitSM(Git):
if Git.need_update(self, ud, d):
return True
- try:
- # Check for the nugget dropped by the download operation
- known_srcrevs = runfetchcmd("%s config --get-all bitbake.srcrev" % \
- (ud.basecmd), d, workdir=ud.clonedir)
-
- if ud.revisions[ud.names[0]] in known_srcrevs.split():
- return False
- except bb.fetch2.FetchError:
- pass
-
need_update_list = []
def need_update_submodule(ud, url, module, modpath, workdir, d):
url += ";bareclone=1;nobranch=1"
@@ -172,13 +165,8 @@ class GitSM(Git):
shutil.rmtree(tmpdir)
else:
self.process_submodules(ud, ud.clonedir, need_update_submodule, d)
- if len(need_update_list) == 0:
- # We already have the required commits of all submodules. Drop
- # a nugget so we don't need to check again.
- runfetchcmd("%s config --add bitbake.srcrev %s" % \
- (ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=ud.clonedir)
- if len(need_update_list) > 0:
+ if need_update_list:
logger.debug('gitsm: Submodules requiring update: %s' % (' '.join(need_update_list)))
return True
@@ -209,9 +197,6 @@ class GitSM(Git):
shutil.rmtree(tmpdir)
else:
self.process_submodules(ud, ud.clonedir, download_submodule, d)
- # Drop a nugget for the srcrev we've fetched (used by need_update)
- runfetchcmd("%s config --add bitbake.srcrev %s" % \
- (ud.basecmd, ud.revisions[ud.names[0]]), d, workdir=ud.clonedir)
def unpack(self, ud, destdir, d):
def unpack_submodules(ud, url, module, modpath, workdir, d):
diff --git a/bitbake/lib/bb/fetch2/npm.py b/bitbake/lib/bb/fetch2/npm.py
index 4789850..8a179a3 100644
--- a/bitbake/lib/bb/fetch2/npm.py
+++ b/bitbake/lib/bb/fetch2/npm.py
@@ -52,9 +52,13 @@ def npm_filename(package, version):
"""Get the filename of a npm package"""
return npm_package(package) + "-" + version + ".tgz"
-def npm_localfile(package, version):
+def npm_localfile(package, version=None):
"""Get the local filename of a npm package"""
- return os.path.join("npm2", npm_filename(package, version))
+ if version is not None:
+ filename = npm_filename(package, version)
+ else:
+ filename = package
+ return os.path.join("npm2", filename)
def npm_integrity(integrity):
"""
@@ -69,17 +73,31 @@ def npm_unpack(tarball, destdir, d):
bb.utils.mkdirhier(destdir)
cmd = "tar --extract --gzip --file=%s" % shlex.quote(tarball)
cmd += " --no-same-owner"
+ cmd += " --delay-directory-restore"
cmd += " --strip-components=1"
runfetchcmd(cmd, d, workdir=destdir)
+ runfetchcmd("chmod -R +X '%s'" % (destdir), d, quiet=True, workdir=destdir)
class NpmEnvironment(object):
"""
Using a npm config file seems more reliable than using cli arguments.
This class allows to create a controlled environment for npm commands.
"""
- def __init__(self, d, configs=None):
+ def __init__(self, d, configs=[], npmrc=None):
self.d = d
- self.configs = configs
+
+ self.user_config = tempfile.NamedTemporaryFile(mode="w", buffering=1)
+ for key, value in configs:
+ self.user_config.write("%s=%s\n" % (key, value))
+
+ if npmrc:
+ self.global_config_name = npmrc
+ else:
+ self.global_config_name = "/dev/null"
+
+ def __del__(self):
+ if self.user_config:
+ self.user_config.close()
def run(self, cmd, args=None, configs=None, workdir=None):
"""Run npm command in a controlled environment"""
@@ -87,23 +105,19 @@ class NpmEnvironment(object):
d = bb.data.createCopy(self.d)
d.setVar("HOME", tmpdir)
- cfgfile = os.path.join(tmpdir, "npmrc")
-
if not workdir:
workdir = tmpdir
def _run(cmd):
- cmd = "NPM_CONFIG_USERCONFIG=%s " % cfgfile + cmd
- cmd = "NPM_CONFIG_GLOBALCONFIG=%s " % cfgfile + cmd
+ cmd = "NPM_CONFIG_USERCONFIG=%s " % (self.user_config.name) + cmd
+ cmd = "NPM_CONFIG_GLOBALCONFIG=%s " % (self.global_config_name) + cmd
return runfetchcmd(cmd, d, workdir=workdir)
- if self.configs:
- for key, value in self.configs:
- _run("npm config set %s %s" % (key, shlex.quote(value)))
-
if configs:
+            bb.warn("Use of the configs argument of the NpmEnvironment.run() function"
+                    " is deprecated. Please use the args argument instead.")
for key, value in configs:
- _run("npm config set %s %s" % (key, shlex.quote(value)))
+ cmd += " --%s=%s" % (key, shlex.quote(value))
if args:
for key, value in args:
@@ -142,12 +156,12 @@ class Npm(FetchMethod):
raise ParameterError("Invalid 'version' parameter", ud.url)
# Extract the 'registry' part of the url
- ud.registry = re.sub(r"^npm://", "http://", ud.url.split(";")[0])
+ ud.registry = re.sub(r"^npm://", "https://", ud.url.split(";")[0])
# Using the 'downloadfilename' parameter as local filename
# or the npm package name.
if "downloadfilename" in ud.parm:
- ud.localfile = d.expand(ud.parm["downloadfilename"])
+ ud.localfile = npm_localfile(d.expand(ud.parm["downloadfilename"]))
else:
ud.localfile = npm_localfile(ud.package, ud.version)
@@ -165,14 +179,14 @@ class Npm(FetchMethod):
def _resolve_proxy_url(self, ud, d):
def _npm_view():
- configs = []
- configs.append(("json", "true"))
- configs.append(("registry", ud.registry))
+ args = []
+ args.append(("json", "true"))
+ args.append(("registry", ud.registry))
pkgver = shlex.quote(ud.package + "@" + ud.version)
cmd = ud.basecmd + " view %s" % pkgver
env = NpmEnvironment(d)
check_network_access(d, cmd, ud.registry)
- view_string = env.run(cmd, configs=configs)
+ view_string = env.run(cmd, args=args)
if not view_string:
raise FetchError("Unavailable package %s" % pkgver, ud.url)
diff --git a/bitbake/lib/bb/fetch2/npmsw.py b/bitbake/lib/bb/fetch2/npmsw.py
index fdecbc6..a8c4d35 100644
--- a/bitbake/lib/bb/fetch2/npmsw.py
+++ b/bitbake/lib/bb/fetch2/npmsw.py
@@ -24,6 +24,7 @@ import bb
from bb.fetch2 import Fetch
from bb.fetch2 import FetchMethod
from bb.fetch2 import ParameterError
+from bb.fetch2 import runfetchcmd
from bb.fetch2 import URI
from bb.fetch2.npm import npm_integrity
from bb.fetch2.npm import npm_localfile
@@ -80,13 +81,18 @@ class NpmShrinkWrap(FetchMethod):
extrapaths = []
destsubdirs = [os.path.join("node_modules", dep) for dep in deptree]
destsuffix = os.path.join(*destsubdirs)
+ unpack = True
integrity = params.get("integrity", None)
resolved = params.get("resolved", None)
version = params.get("version", None)
# Handle registry sources
- if is_semver(version) and resolved and integrity:
+ if is_semver(version) and integrity:
+ # Handle duplicate dependencies without url
+ if not resolved:
+ return
+
localfile = npm_localfile(name, version)
uri = URI(resolved)
@@ -111,7 +117,7 @@ class NpmShrinkWrap(FetchMethod):
# Handle http tarball sources
elif version.startswith("http") and integrity:
- localfile = os.path.join("npm2", os.path.basename(version))
+ localfile = npm_localfile(os.path.basename(version))
uri = URI(version)
uri.params["downloadfilename"] = localfile
@@ -125,6 +131,8 @@ class NpmShrinkWrap(FetchMethod):
# Handle git sources
elif version.startswith("git"):
+ if version.startswith("github:"):
+ version = "git+https://github.com/" + version[len("github:"):]
regex = re.compile(r"""
^
git\+
@@ -150,7 +158,12 @@ class NpmShrinkWrap(FetchMethod):
url = str(uri)
- # local tarball sources and local link sources are unsupported
+ # Handle local tarball and link sources
+ elif version.startswith("file"):
+ localpath = version[5:]
+ if not version.endswith(".tgz"):
+ unpack = False
+
else:
raise ParameterError("Unsupported dependency: %s" % name, ud.url)
@@ -159,6 +172,7 @@ class NpmShrinkWrap(FetchMethod):
"localpath": localpath,
"extrapaths": extrapaths,
"destsuffix": destsuffix,
+ "unpack": unpack,
})
try:
@@ -179,7 +193,7 @@ class NpmShrinkWrap(FetchMethod):
# This fetcher resolves multiple URIs from a shrinkwrap file and then
# forwards it to a proxy fetcher. The management of the donestamp file,
# the lockfile and the checksums are forwarded to the proxy fetcher.
- ud.proxy = Fetch([dep["url"] for dep in ud.deps], data)
+ ud.proxy = Fetch([dep["url"] for dep in ud.deps if dep["url"]], data)
ud.needdonestamp = False
@staticmethod
@@ -241,7 +255,16 @@ class NpmShrinkWrap(FetchMethod):
for dep in manual:
depdestdir = os.path.join(destdir, dep["destsuffix"])
- npm_unpack(dep["localpath"], depdestdir, d)
+ if dep["url"]:
+ npm_unpack(dep["localpath"], depdestdir, d)
+ else:
+                    depsrcdir = os.path.join(destdir, dep["localpath"])
+ if dep["unpack"]:
+ npm_unpack(depsrcdir, depdestdir, d)
+ else:
+ bb.utils.mkdirhier(depdestdir)
+ cmd = 'cp -fpPRH "%s/." .' % (depsrcdir)
+ runfetchcmd(cmd, d, workdir=depdestdir)
def clean(self, ud, d):
"""Clean any existing full or partial download"""
diff --git a/bitbake/lib/bb/fetch2/osc.py b/bitbake/lib/bb/fetch2/osc.py
index d9ce443..bf4c2f0 100644
--- a/bitbake/lib/bb/fetch2/osc.py
+++ b/bitbake/lib/bb/fetch2/osc.py
@@ -1,4 +1,6 @@
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
"""
@@ -36,6 +38,7 @@ class Osc(FetchMethod):
# Create paths to osc checkouts
oscdir = d.getVar("OSCDIR") or (d.getVar("DL_DIR") + "/osc")
relpath = self._strip_leading_slashes(ud.path)
+ ud.oscdir = oscdir
ud.pkgdir = os.path.join(oscdir, ud.host)
ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)
@@ -43,13 +46,13 @@ class Osc(FetchMethod):
ud.revision = ud.parm['rev']
else:
pv = d.getVar("PV", False)
- rev = bb.fetch2.srcrev_internal_helper(ud, d)
+ rev = bb.fetch2.srcrev_internal_helper(ud, d, '')
if rev:
ud.revision = rev
else:
ud.revision = ""
- ud.localfile = d.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.path.replace('/', '.'), ud.revision))
+ ud.localfile = d.expand('%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), relpath.replace('/', '.'), ud.revision))
def _buildosccommand(self, ud, d, command):
"""
@@ -86,7 +89,7 @@ class Osc(FetchMethod):
logger.debug2("Fetch: checking for module directory '" + ud.moddir + "'")
- if os.access(os.path.join(d.getVar('OSCDIR'), ud.path, ud.module), os.R_OK):
+ if os.access(ud.moddir, os.R_OK):
oscupdatecmd = self._buildosccommand(ud, d, "update")
logger.info("Update "+ ud.url)
# update sources there
@@ -114,20 +117,23 @@ class Osc(FetchMethod):
Generate a .oscrc to be used for this run.
"""
- config_path = os.path.join(d.getVar('OSCDIR'), "oscrc")
+ config_path = os.path.join(ud.oscdir, "oscrc")
+ if not os.path.exists(ud.oscdir):
+ bb.utils.mkdirhier(ud.oscdir)
+
if (os.path.exists(config_path)):
os.remove(config_path)
f = open(config_path, 'w')
+ proto = ud.parm.get('proto', 'https')
f.write("[general]\n")
- f.write("apisrv = %s\n" % ud.host)
- f.write("scheme = http\n")
+ f.write("apiurl = %s://%s\n" % (proto, ud.host))
f.write("su-wrapper = su -c\n")
f.write("build-root = %s\n" % d.getVar('WORKDIR'))
f.write("urllist = %s\n" % d.getVar("OSCURLLIST"))
f.write("extra-pkgs = gzip\n")
f.write("\n")
- f.write("[%s]\n" % ud.host)
+ f.write("[%s://%s]\n" % (proto, ud.host))
f.write("user = %s\n" % ud.parm["user"])
f.write("pass = %s\n" % ud.parm["pswd"])
f.close()
diff --git a/bitbake/lib/bb/fetch2/s3.py b/bitbake/lib/bb/fetch2/s3.py
index ffca73c..6b8ffd5 100644
--- a/bitbake/lib/bb/fetch2/s3.py
+++ b/bitbake/lib/bb/fetch2/s3.py
@@ -18,10 +18,47 @@ The aws tool must be correctly installed and configured prior to use.
import os
import bb
import urllib.request, urllib.parse, urllib.error
+import re
from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
+def convertToBytes(value, unit):
+    value = float(value)
+    if unit == "KiB":
+        value = value*1024.0
+    elif unit == "MiB":
+        value = value*1024.0*1024.0
+    elif unit == "GiB":
+        value = value*1024.0*1024.0*1024.0
+ return value
+
+class S3ProgressHandler(bb.progress.LineFilterProgressHandler):
+ """
+ Extract progress information from s3 cp output, e.g.:
+ Completed 5.1 KiB/8.8 GiB (12.0 MiB/s) with 1 file(s) remaining
+ """
+ def __init__(self, d):
+ super(S3ProgressHandler, self).__init__(d)
+ # Send an initial progress event so the bar gets shown
+ self._fire_progress(0)
+
+ def writeline(self, line):
+ percs = re.findall(r'^Completed (\d+.{0,1}\d*) (\w+)\/(\d+.{0,1}\d*) (\w+) (\(.+\)) with\s+', line)
+ if percs:
+ completed = (percs[-1][0])
+ completedUnit = (percs[-1][1])
+ total = (percs[-1][2])
+ totalUnit = (percs[-1][3])
+ completed = convertToBytes(completed, completedUnit)
+ total = convertToBytes(total, totalUnit)
+ progress = (completed/total)*100.0
+ rate = percs[-1][4]
+ self.update(progress, rate)
+ return False
+ return True
+
+
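A quick sanity check of the conversion, using the figures from the example line
in the docstring above:

    convertToBytes("5.1", "KiB")   # 5222.4
    convertToBytes("8.8", "GiB")   # 9448928051.2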
class S3(FetchMethod):
"""Class to fetch urls via 'aws s3'"""
@@ -52,7 +89,9 @@ class S3(FetchMethod):
cmd = '%s cp s3://%s%s %s' % (ud.basecmd, ud.host, ud.path, ud.localpath)
bb.fetch2.check_network_access(d, cmd, ud.url)
- runfetchcmd(cmd, d)
+
+ progresshandler = S3ProgressHandler(d)
+ runfetchcmd(cmd, d, False, log=progresshandler)
# Additional sanity checks copied from the wget class (although there
# are no known issues which mean these are required, treat the aws cli
diff --git a/bitbake/lib/bb/fetch2/ssh.py b/bitbake/lib/bb/fetch2/ssh.py
index 2c8557e..8d082b3 100644
--- a/bitbake/lib/bb/fetch2/ssh.py
+++ b/bitbake/lib/bb/fetch2/ssh.py
@@ -32,6 +32,7 @@ IETF secsh internet draft:
import re, os
from bb.fetch2 import check_network_access, FetchMethod, ParameterError, runfetchcmd
+import urllib
__pattern__ = re.compile(r'''
@@ -40,9 +41,9 @@ __pattern__ = re.compile(r'''
( # Optional username/password block
(?P<user>\S+) # username
(:(?P<pass>\S+))? # colon followed by the password (optional)
- )?
(?P<cparam>(;[^;]+)*)? # connection parameters block (optional)
@
+ )?
(?P<host>\S+?) # non-greedy match of the host
(:(?P<port>[0-9]+))? # colon followed by the port (optional)
/
@@ -70,6 +71,7 @@ class SSH(FetchMethod):
"git:// prefix with protocol=ssh", urldata.url)
m = __pattern__.match(urldata.url)
path = m.group('path')
+ path = urllib.parse.unquote(path)
host = m.group('host')
urldata.localpath = os.path.join(d.getVar('DL_DIR'),
os.path.basename(os.path.normpath(path)))
@@ -96,6 +98,11 @@ class SSH(FetchMethod):
fr += '@%s' % host
else:
fr = host
+
+ if path[0] != '~':
+ path = '/%s' % path
+ path = urllib.parse.unquote(path)
+
fr += ':%s' % path
cmd = 'scp -B -r %s %s %s/' % (
@@ -108,3 +115,43 @@ class SSH(FetchMethod):
runfetchcmd(cmd, d)
+ def checkstatus(self, fetch, urldata, d):
+ """
+ Check the status of the url
+ """
+ m = __pattern__.match(urldata.url)
+ path = m.group('path')
+ host = m.group('host')
+ port = m.group('port')
+ user = m.group('user')
+ password = m.group('pass')
+
+ if port:
+ portarg = '-P %s' % port
+ else:
+ portarg = ''
+
+ if user:
+ fr = user
+ if password:
+ fr += ':%s' % password
+ fr += '@%s' % host
+ else:
+ fr = host
+
+ if path[0] != '~':
+ path = '/%s' % path
+ path = urllib.parse.unquote(path)
+
+ cmd = 'ssh -o BatchMode=true %s %s [ -f %s ]' % (
+ portarg,
+ fr,
+ path
+ )
+
+ check_network_access(d, cmd, urldata.url)
+
+ if runfetchcmd(cmd, d):
+ return True
+
+ return False
diff --git a/bitbake/lib/bb/fetch2/svn.py b/bitbake/lib/bb/fetch2/svn.py
index 80102b4..d40e4d2 100644
--- a/bitbake/lib/bb/fetch2/svn.py
+++ b/bitbake/lib/bb/fetch2/svn.py
@@ -57,7 +57,12 @@ class Svn(FetchMethod):
if 'rev' in ud.parm:
ud.revision = ud.parm['rev']
- ud.localfile = d.expand('%s_%s_%s_%s_.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision))
+ # Whether to use the @REV peg-revision syntax in the svn command or not
+ ud.pegrevision = True
+ if 'nopegrevision' in ud.parm:
+ ud.pegrevision = False
+
+ ud.localfile = d.expand('%s_%s_%s_%s_%s.tar.gz' % (ud.module.replace('/', '.'), ud.host, ud.path.replace('/', '.'), ud.revision, ["0", "1"][ud.pegrevision]))
def _buildsvncommand(self, ud, d, command):
"""
@@ -98,7 +103,8 @@ class Svn(FetchMethod):
if ud.revision:
options.append("-r %s" % ud.revision)
- suffix = "@%s" % (ud.revision)
+ if ud.pegrevision:
+ suffix = "@%s" % (ud.revision)
if command == "fetch":
transportuser = ud.parm.get("transportuser", "")
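A hypothetical url opting out of the peg-revision syntax (host and module
illustrative):

    SRC_URI = "svn://svn.example.com/repos;module=trunk;protocol=https;rev=1234;nopegrevision=1"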
diff --git a/bitbake/lib/bb/fetch2/wget.py b/bitbake/lib/bb/fetch2/wget.py
index 7fa2a87..b3a3de5 100644
--- a/bitbake/lib/bb/fetch2/wget.py
+++ b/bitbake/lib/bb/fetch2/wget.py
@@ -52,18 +52,24 @@ class WgetProgressHandler(bb.progress.LineFilterProgressHandler):
class Wget(FetchMethod):
+ """Class to fetch urls via 'wget'"""
# CDNs like CloudFlare may do a 'browser integrity test' which can fail
# with the standard wget/urllib User-Agent, so pretend to be a modern
# browser.
user_agent = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"
- """Class to fetch urls via 'wget'"""
+ def check_certs(self, d):
+ """
+ Should certificates be checked?
+ """
+ return (d.getVar("BB_CHECK_SSL_CERTS") or "1") != "0"
+
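Certificate checking is now on by default and can be disabled explicitly, e.g.
in local.conf:

    BB_CHECK_SSL_CERTS = "0"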
def supports(self, ud, d):
"""
Check to see if a given url can be fetched with wget.
"""
- return ud.type in ['http', 'https', 'ftp']
+ return ud.type in ['http', 'https', 'ftp', 'ftps']
def recommends_checksum(self, urldata):
return True
@@ -82,7 +88,10 @@ class Wget(FetchMethod):
if not ud.localfile:
ud.localfile = d.expand(urllib.parse.unquote(ud.host + ud.path).replace("/", "."))
- self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30 --passive-ftp --no-check-certificate"
+ self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30 --passive-ftp"
+
+ if not self.check_certs(d):
+ self.basecmd += " --no-check-certificate"
def _runwget(self, ud, d, command, quiet, workdir=None):
@@ -103,7 +112,17 @@ class Wget(FetchMethod):
fetchcmd += " -O %s" % shlex.quote(localpath)
if ud.user and ud.pswd:
- fetchcmd += " --user=%s --password=%s --auth-no-challenge" % (ud.user, ud.pswd)
+ fetchcmd += " --auth-no-challenge"
+ if ud.parm.get("redirectauth", "1") == "1":
+ # An undocumented feature of wget is that if the
+ # username/password are specified on the URI, wget will only
+ # send the Authorization header to the first host and not to
+ # any hosts that it is redirected to. With the increasing
+ # usage of temporary AWS URLs, this difference now matters as
+ # AWS will reject any request that has authentication both in
+ # the query parameters (from the redirect) and in the
+ # Authorization header.
+ fetchcmd += " --user=%s --password=%s" % (ud.user, ud.pswd)
uri = ud.url.split(";")[0]
if os.path.exists(ud.localpath):
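A hypothetical url opting out of authenticated redirects (credentials and host
illustrative):

    SRC_URI = "https://user:pass@example.com/releases/pkg-1.0.tar.gz;redirectauth=0"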
@@ -209,7 +228,7 @@ class Wget(FetchMethod):
# We let the request fail and expect it to be
# tried once more ("try_again" in check_status()),
# with the dead connection removed from the cache.
- # If it still fails, we give up, which can happend for bad
+ # If it still fails, we give up, which can happen for bad
# HTTP proxy settings.
fetch.connection_cache.remove_connection(h.host, h.port)
raise urllib.error.URLError(err)
@@ -282,64 +301,82 @@ class Wget(FetchMethod):
newreq = urllib.request.HTTPRedirectHandler.redirect_request(self, req, fp, code, msg, headers, newurl)
newreq.get_method = req.get_method
return newreq
- exported_proxies = export_proxies(d)
-
- handlers = [FixedHTTPRedirectHandler, HTTPMethodFallback]
- if exported_proxies:
- handlers.append(urllib.request.ProxyHandler())
- handlers.append(CacheHTTPHandler())
- # Since Python 2.7.9 ssl cert validation is enabled by default
- # see PEP-0476, this causes verification errors on some https servers
- # so disable by default.
- import ssl
- if hasattr(ssl, '_create_unverified_context'):
- handlers.append(urllib.request.HTTPSHandler(context=ssl._create_unverified_context()))
- opener = urllib.request.build_opener(*handlers)
-
- try:
- uri = ud.url.split(";")[0]
- r = urllib.request.Request(uri)
- r.get_method = lambda: "HEAD"
- # Some servers (FusionForge, as used on Alioth) require that the
- # optional Accept header is set.
- r.add_header("Accept", "*/*")
- r.add_header("User-Agent", self.user_agent)
- def add_basic_auth(login_str, request):
- '''Adds Basic auth to http request, pass in login:password as string'''
- import base64
- encodeuser = base64.b64encode(login_str.encode('utf-8')).decode("utf-8")
- authheader = "Basic %s" % encodeuser
- r.add_header("Authorization", authheader)
-
- if ud.user and ud.pswd:
- add_basic_auth(ud.user + ':' + ud.pswd, r)
- try:
- import netrc
- n = netrc.netrc()
- login, unused, password = n.authenticators(urllib.parse.urlparse(uri).hostname)
- add_basic_auth("%s:%s" % (login, password), r)
- except (TypeError, ImportError, IOError, netrc.NetrcParseError):
- pass
-
- with opener.open(r, timeout=30) as response:
- pass
- except urllib.error.URLError as e:
- if try_again:
- logger.debug2("checkstatus: trying again")
- return self.checkstatus(fetch, ud, d, False)
- else:
- # debug for now to avoid spamming the logs in e.g. remote sstate searches
- logger.debug2("checkstatus() urlopen failed: %s" % e)
- return False
- except ConnectionResetError as e:
- if try_again:
- logger.debug2("checkstatus: trying again")
- return self.checkstatus(fetch, ud, d, False)
+ # We need to update the environment here as both the proxy and HTTPS
+ # handlers need variables set. The proxy needs http_proxy and friends to
+ # be set, and HTTPSHandler ends up calling into openssl to load the
+        # certificates. In buildtools configurations this will be looking in the
+        # wrong place for certificates by default: we set SSL_CERT_FILE to the
+        # right location in the buildtools environment script, but as BitBake
+        # prunes the environment this is lost. When binaries are executed,
+ # runfetchcmd ensures these values are in the environment, but this is
+ # pure Python so we need to update the environment.
+ #
+        # Avoid trampling the environment too much by using bb.utils.environment
+ # to scope the changes to the build_opener request, which is when the
+ # environment lookups happen.
+ newenv = bb.fetch2.get_fetcher_environment(d)
+
+ with bb.utils.environment(**newenv):
+ import ssl
+
+ if self.check_certs(d):
+ context = ssl.create_default_context()
else:
- # debug for now to avoid spamming the logs in e.g. remote sstate searches
- logger.debug2("checkstatus() urlopen failed: %s" % e)
- return False
+ context = ssl._create_unverified_context()
+
+ handlers = [FixedHTTPRedirectHandler,
+ HTTPMethodFallback,
+ urllib.request.ProxyHandler(),
+ CacheHTTPHandler(),
+ urllib.request.HTTPSHandler(context=context)]
+ opener = urllib.request.build_opener(*handlers)
+
+ try:
+ uri = ud.url.split(";")[0]
+ r = urllib.request.Request(uri)
+ r.get_method = lambda: "HEAD"
+ # Some servers (FusionForge, as used on Alioth) require that the
+ # optional Accept header is set.
+ r.add_header("Accept", "*/*")
+ r.add_header("User-Agent", self.user_agent)
+ def add_basic_auth(login_str, request):
+ '''Adds Basic auth to http request, pass in login:password as string'''
+ import base64
+ encodeuser = base64.b64encode(login_str.encode('utf-8')).decode("utf-8")
+ authheader = "Basic %s" % encodeuser
+ r.add_header("Authorization", authheader)
+
+ if ud.user and ud.pswd:
+ add_basic_auth(ud.user + ':' + ud.pswd, r)
+
+ try:
+ import netrc
+ n = netrc.netrc()
+ login, unused, password = n.authenticators(urllib.parse.urlparse(uri).hostname)
+ add_basic_auth("%s:%s" % (login, password), r)
+ except (TypeError, ImportError, IOError, netrc.NetrcParseError):
+ pass
+
+ with opener.open(r, timeout=30) as response:
+ pass
+ except urllib.error.URLError as e:
+ if try_again:
+ logger.debug2("checkstatus: trying again")
+ return self.checkstatus(fetch, ud, d, False)
+ else:
+ # debug for now to avoid spamming the logs in e.g. remote sstate searches
+ logger.debug2("checkstatus() urlopen failed: %s" % e)
+ return False
+ except ConnectionResetError as e:
+ if try_again:
+ logger.debug2("checkstatus: trying again")
+ return self.checkstatus(fetch, ud, d, False)
+ else:
+ # debug for now to avoid spamming the logs in e.g. remote sstate searches
+ logger.debug2("checkstatus() urlopen failed: %s" % e)
+ return False
+
return True
def _parse_path(self, regex, s):
@@ -548,7 +585,7 @@ class Wget(FetchMethod):
# src.rpm extension was added only for rpm package. Can be removed if the rpm
# packaged will always be considered as having to be manually upgraded
- psuffix_regex = r"(tar\.gz|tgz|tar\.bz2|zip|xz|tar\.lz|rpm|bz2|orig\.tar\.gz|tar\.xz|src\.tar\.gz|src\.tgz|svnr\d+\.tar\.bz2|stable\.tar\.gz|src\.rpm)"
+ psuffix_regex = r"(tar\.\w+|tgz|zip|xz|rpm|bz2|orig\.tar\.\w+|src\.tar\.\w+|src\.tgz|svnr\d+\.tar\.\w+|stable\.tar\.\w+|src\.rpm)"
# match name, version and archive type of a package
package_regex_comp = re.compile(r"(?P<name>%s?\.?v?)(?P<pver>%s)(?P<arch>%s)?[\.-](?P<type>%s$)"
diff --git a/bitbake/lib/bb/main.py b/bitbake/lib/bb/main.py
index 06bad49..93eda36 100755
--- a/bitbake/lib/bb/main.py
+++ b/bitbake/lib/bb/main.py
@@ -112,13 +112,6 @@ def _showwarning(message, category, filename, lineno, file=None, line=None):
warnlog.warning(s)
warnings.showwarning = _showwarning
-warnings.filterwarnings("ignore")
-warnings.filterwarnings("default", module="(<string>$|(oe|bb)\.)")
-warnings.filterwarnings("ignore", category=PendingDeprecationWarning)
-warnings.filterwarnings("ignore", category=ImportWarning)
-warnings.filterwarnings("ignore", category=DeprecationWarning, module="<string>$")
-warnings.filterwarnings("ignore", message="With-statements now directly support multiple context managers")
-
def create_bitbake_parser():
parser = optparse.OptionParser(
@@ -134,7 +127,7 @@ def create_bitbake_parser():
help="Execute tasks from a specific .bb recipe directly. WARNING: Does "
"not handle any dependencies from other recipes.")
- parser.add_option("-k", "--continue", action="store_false", dest="abort", default=True,
+ parser.add_option("-k", "--continue", action="store_false", dest="halt", default=True,
help="Continue as much as possible after an error. While the target that "
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
diff --git a/bitbake/lib/bb/monitordisk.py b/bitbake/lib/bb/monitordisk.py
index 98f2109..a1b9100 100644
--- a/bitbake/lib/bb/monitordisk.py
+++ b/bitbake/lib/bb/monitordisk.py
@@ -76,7 +76,12 @@ def getDiskData(BBDirs):
return None
action = pathSpaceInodeRe.group(1)
- if action not in ("ABORT", "STOPTASKS", "WARN"):
+ if action == "ABORT":
+ # Emit a deprecation warning
+ logger.warnonce("The BB_DISKMON_DIRS \"ABORT\" action has been renamed to \"HALT\", update configuration")
+ action = "HALT"
+
+ if action not in ("HALT", "STOPTASKS", "WARN"):
printErr("Unknown disk space monitor action: %s" % action)
return None
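With the rename, a disk monitor configuration would look like (thresholds
illustrative):

    BB_DISKMON_DIRS = "HALT,${TMPDIR},100M,1K WARN,${DL_DIR},1G,100K"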
@@ -177,7 +182,7 @@ class diskMonitor:
# use them to avoid printing too many warning messages
self.preFreeS = {}
self.preFreeI = {}
- # This is for STOPTASKS and ABORT, to avoid printing the message
+ # This is for STOPTASKS and HALT, to avoid printing the message
# repeatedly while waiting for the tasks to finish
self.checked = {}
for k in self.devDict:
@@ -219,8 +224,8 @@ class diskMonitor:
self.checked[k] = True
rq.finish_runqueue(False)
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
- elif action == "ABORT" and not self.checked[k]:
- logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
+ elif action == "HALT" and not self.checked[k]:
+ logger.error("Immediately halt since the disk space monitor action is \"HALT\"!")
self.checked[k] = True
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'disk', freeSpace, path), self.configuration)
@@ -245,8 +250,8 @@ class diskMonitor:
self.checked[k] = True
rq.finish_runqueue(False)
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
- elif action == "ABORT" and not self.checked[k]:
- logger.error("Immediately abort since the disk space monitor action is \"ABORT\"!")
+ elif action == "HALT" and not self.checked[k]:
+ logger.error("Immediately halt since the disk space monitor action is \"HALT\"!")
self.checked[k] = True
rq.finish_runqueue(True)
bb.event.fire(bb.event.DiskFull(dev, 'inode', freeInode, path), self.configuration)
diff --git a/bitbake/lib/bb/msg.py b/bitbake/lib/bb/msg.py
index 291b38f..93575d8 100644
--- a/bitbake/lib/bb/msg.py
+++ b/bitbake/lib/bb/msg.py
@@ -30,7 +30,9 @@ class BBLogFormatter(logging.Formatter):
PLAIN = logging.INFO + 1
VERBNOTE = logging.INFO + 2
ERROR = logging.ERROR
+ ERRORONCE = logging.ERROR - 1
WARNING = logging.WARNING
+ WARNONCE = logging.WARNING - 1
CRITICAL = logging.CRITICAL
levelnames = {
@@ -42,7 +44,9 @@ class BBLogFormatter(logging.Formatter):
PLAIN : '',
VERBNOTE: 'NOTE',
WARNING : 'WARNING',
+ WARNONCE : 'WARNING',
ERROR : 'ERROR',
+ ERRORONCE : 'ERROR',
CRITICAL: 'ERROR',
}
@@ -58,7 +62,9 @@ class BBLogFormatter(logging.Formatter):
PLAIN : BASECOLOR,
VERBNOTE: BASECOLOR,
WARNING : YELLOW,
+ WARNONCE : YELLOW,
ERROR : RED,
+ ERRORONCE : RED,
CRITICAL: RED,
}
@@ -121,6 +127,22 @@ class BBLogFilter(object):
return True
return False
+class LogFilterShowOnce(logging.Filter):
+ def __init__(self):
+ self.seen_warnings = set()
+ self.seen_errors = set()
+
+ def filter(self, record):
+ if record.levelno == bb.msg.BBLogFormatter.WARNONCE:
+ if record.msg in self.seen_warnings:
+ return False
+ self.seen_warnings.add(record.msg)
+ if record.levelno == bb.msg.BBLogFormatter.ERRORONCE:
+ if record.msg in self.seen_errors:
+ return False
+ self.seen_errors.add(record.msg)
+ return True
+
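A minimal usage sketch, assuming a BitBake-created logger (monitordisk.py above
uses the same warnonce helper):

    logger.warnonce("this warning is emitted once, then filtered")
    logger.warnonce("this warning is emitted once, then filtered")  # filtered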
class LogFilterGEQLevel(logging.Filter):
def __init__(self, level):
self.strlevel = str(level)
@@ -206,6 +228,7 @@ def logger_create(name, output=sys.stderr, level=logging.INFO, preserve_handlers
"""Standalone logger creation function"""
logger = logging.getLogger(name)
console = logging.StreamHandler(output)
+ console.addFilter(bb.msg.LogFilterShowOnce())
format = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
if color == 'always' or (color == 'auto' and output.isatty()):
format.enable_color()
@@ -293,10 +316,17 @@ def setLoggingConfig(defaultconfig, userconfigfile=None):
# Convert all level parameters to integers in case users want to use the
# bitbake defined level names
- for h in logconfig["handlers"].values():
+ for name, h in logconfig["handlers"].items():
if "level" in h:
h["level"] = bb.msg.stringToLevel(h["level"])
+ # Every handler needs its own instance of the once filter.
+ once_filter_name = name + ".showonceFilter"
+ logconfig.setdefault("filters", {})[once_filter_name] = {
+ "()": "bb.msg.LogFilterShowOnce",
+ }
+ h.setdefault("filters", []).append(once_filter_name)
+
for l in logconfig["loggers"].values():
if "level" in l:
l["level"] = bb.msg.stringToLevel(l["level"])
diff --git a/bitbake/lib/bb/parse/__init__.py b/bitbake/lib/bb/parse/__init__.py
index c01807b..3476095 100644
--- a/bitbake/lib/bb/parse/__init__.py
+++ b/bitbake/lib/bb/parse/__init__.py
@@ -113,6 +113,8 @@ def init(fn, data):
return h['init'](data)
def init_parser(d):
+ if hasattr(bb.parse, "siggen"):
+ bb.parse.siggen.exit()
bb.parse.siggen = bb.siggen.init(d)
def resolve_file(fn, d):
diff --git a/bitbake/lib/bb/parse/ast.py b/bitbake/lib/bb/parse/ast.py
index db2bdc3..9e0a0f5 100644
--- a/bitbake/lib/bb/parse/ast.py
+++ b/bitbake/lib/bb/parse/ast.py
@@ -97,7 +97,6 @@ class DataNode(AstNode):
def eval(self, data):
groupd = self.groupd
key = groupd["var"]
- key = key.replace(":", "_")
loginfo = {
'variable': key,
'file': self.filename,
@@ -131,6 +130,10 @@ class DataNode(AstNode):
else:
val = groupd["value"]
+ if ":append" in key or ":remove" in key or ":prepend" in key:
+ if op in ["append", "prepend", "postdot", "predot", "ques"]:
+ bb.warn(key + " " + groupd[op] + " is not a recommended operator combination, please replace it.")
+
flag = None
if 'flag' in groupd and groupd['flag'] is not None:
flag = groupd['flag']
@@ -146,7 +149,7 @@ class DataNode(AstNode):
data.setVar(key, val, parsing=True, **loginfo)
class MethodNode(AstNode):
- tr_tbl = str.maketrans('/.+-@%&', '_______')
+ tr_tbl = str.maketrans('/.+-@%&~', '________')
def __init__(self, filename, lineno, func_name, body, python, fakeroot):
AstNode.__init__(self, filename, lineno)
@@ -208,7 +211,6 @@ class ExportFuncsNode(AstNode):
def eval(self, data):
for func in self.n:
- func = func.replace(":", "_")
calledfunc = self.classname + "_" + func
if data.getVar(func, False) and not data.getVarFlag(func, 'export_func', False):
@@ -221,7 +223,7 @@ class ExportFuncsNode(AstNode):
for flag in [ "func", "python" ]:
if data.getVarFlag(calledfunc, flag, False):
data.setVarFlag(func, flag, data.getVarFlag(calledfunc, flag, False))
- for flag in [ "dirs" ]:
+ for flag in ["dirs", "cleandirs", "fakeroot"]:
if data.getVarFlag(func, flag, False):
data.setVarFlag(calledfunc, flag, data.getVarFlag(func, flag, False))
data.setVarFlag(func, "filename", "autogenerated")
@@ -331,6 +333,10 @@ def runAnonFuncs(d):
def finalize(fn, d, variant = None):
saved_handlers = bb.event.get_handlers().copy()
try:
+ # Found renamed variables. Exit immediately
+ if d.getVar("_FAILPARSINGERRORHANDLED", False) == True:
+ raise bb.BBHandledException()
+
for var in d.getVar('__BBHANDLERS', False) or []:
# try to add the handler
handlerfn = d.getVarFlag(var, "filename", False)
diff --git a/bitbake/lib/bb/parse/parse_py/BBHandler.py b/bitbake/lib/bb/parse/parse_py/BBHandler.py
index 152ef6a..6841573 100644
--- a/bitbake/lib/bb/parse/parse_py/BBHandler.py
+++ b/bitbake/lib/bb/parse/parse_py/BBHandler.py
@@ -19,9 +19,6 @@ from . import ConfHandler
from .. import resolve_file, ast, logger, ParseError
from .ConfHandler import include, init
-# For compatibility
-bb.deprecate_import(__name__, "bb.parse", ["vars_from_file"])
-
__func_start_regexp__ = re.compile(r"(((?P<py>python(?=(\s|\()))|(?P<fr>fakeroot(?=\s)))\s*)*(?P<func>[\w\.\-\+\{\}\$:]+)?\s*\(\s*\)\s*{$" )
__inherit_regexp__ = re.compile(r"inherit\s+(.+)" )
__export_func_regexp__ = re.compile(r"EXPORT_FUNCTIONS\s+(.+)" )
@@ -181,10 +178,10 @@ def feeder(lineno, s, fn, root, statements, eof=False):
if s and s[0] == '#':
if len(__residue__) != 0 and __residue__[0][0] != "#":
- bb.fatal("There is a comment on line %s of file %s (%s) which is in the middle of a multiline expression.\nBitbake used to ignore these but no longer does so, please fix your metadata as errors are likely as a result of this change." % (lineno, fn, s))
+ bb.fatal("There is a comment on line %s of file %s:\n'''\n%s\n'''\nwhich is in the middle of a multiline expression. This syntax is invalid, please correct it." % (lineno, fn, s))
if len(__residue__) != 0 and __residue__[0][0] == "#" and (not s or s[0] != "#"):
- bb.fatal("There is a confusing multiline, partially commented expression on line %s of file %s (%s).\nPlease clarify whether this is all a comment or should be parsed." % (lineno, fn, s))
+ bb.fatal("There is a confusing multiline partially commented expression on line %s of file %s:\n%s\nPlease clarify whether this is all a comment or should be parsed." % (lineno - len(__residue__), fn, "\n".join(__residue__)))
if s and s[-1] == '\\':
__residue__.append(s[:-1])
diff --git a/bitbake/lib/bb/parse/parse_py/ConfHandler.py b/bitbake/lib/bb/parse/parse_py/ConfHandler.py
index 0834fe3..451e68d 100644
--- a/bitbake/lib/bb/parse/parse_py/ConfHandler.py
+++ b/bitbake/lib/bb/parse/parse_py/ConfHandler.py
@@ -48,10 +48,7 @@ __unset_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)$" )
__unset_flag_regexp__ = re.compile( r"unset\s+([a-zA-Z0-9\-_+.${}/~]+)\[([a-zA-Z0-9\-_+.]+)\]$" )
def init(data):
- topdir = data.getVar('TOPDIR', False)
- if not topdir:
- data.setVar('TOPDIR', os.getcwd())
-
+ return
def supports(fn, d):
return fn[-5:] == ".conf"
@@ -128,16 +125,21 @@ def handle(fn, data, include):
s = f.readline()
if not s:
break
+ origlineno = lineno
+ origline = s
w = s.strip()
# skip empty lines
if not w:
continue
s = s.rstrip()
while s[-1] == '\\':
- s2 = f.readline().rstrip()
+ line = f.readline()
+ origline += line
+ s2 = line.rstrip()
lineno = lineno + 1
if (not s2 or s2 and s2[0] != "#") and s[0] == "#" :
- bb.fatal("There is a confusing multiline, partially commented expression on line %s of file %s (%s).\nPlease clarify whether this is all a comment or should be parsed." % (lineno, fn, s))
+ bb.fatal("There is a confusing multiline, partially commented expression starting on line %s of file %s:\n%s\nPlease clarify whether this is all a comment or should be parsed." % (origlineno, fn, origline))
+
s = s[:-1] + s2
# skip comments
if s[0] == '#':
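As a sketch of the condition the improved message reports, with hypothetical input, the equivalent standalone check is:

    # a '#' line must not be continued by a non-comment line
    s, s2 = '# BAD = "first \\', 'second"'
    if (not s2 or s2[0] != "#") and s[0] == "#":
        print("confusing multiline, partially commented expression")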
@@ -150,8 +152,6 @@ def handle(fn, data, include):
if oldfile:
data.setVar('FILE', oldfile)
- f.close()
-
for f in confFilters:
f(fn, data)
diff --git a/bitbake/lib/bb/persist_data.py b/bitbake/lib/bb/persist_data.py
index 6f32d81..ce84a15 100644
--- a/bitbake/lib/bb/persist_data.py
+++ b/bitbake/lib/bb/persist_data.py
@@ -19,7 +19,6 @@ import logging
import os.path
import sqlite3
import sys
-import warnings
from collections.abc import Mapping
sqlversion = sqlite3.sqlite_version_info
@@ -64,7 +63,7 @@ class SQLTable(collections.abc.MutableMapping):
"""
Decorator that starts a database transaction and creates a database
cursor for performing queries. If no exception is thrown, the
- database results are commited. If an exception occurs, the database
+ database results are committed. If an exception occurs, the database
is rolled back. In all cases, the cursor is closed after the
function ends.
@@ -209,7 +208,7 @@ class SQLTable(collections.abc.MutableMapping):
def __lt__(self, other):
if not isinstance(other, Mapping):
- raise NotImplemented
+ raise NotImplementedError()
return len(self) < len(other)
@@ -239,55 +238,6 @@ class SQLTable(collections.abc.MutableMapping):
def has_key(self, key):
return key in self
-
-class PersistData(object):
- """Deprecated representation of the bitbake persistent data store"""
- def __init__(self, d):
- warnings.warn("Use of PersistData is deprecated. Please use "
- "persist(domain, d) instead.",
- category=DeprecationWarning,
- stacklevel=2)
-
- self.data = persist(d)
- logger.debug("Using '%s' as the persistent data cache",
- self.data.filename)
-
- def addDomain(self, domain):
- """
- Add a domain (pending deprecation)
- """
- return self.data[domain]
-
- def delDomain(self, domain):
- """
- Removes a domain and all the data it contains
- """
- del self.data[domain]
-
- def getKeyValues(self, domain):
- """
- Return a list of key + value pairs for a domain
- """
- return list(self.data[domain].items())
-
- def getValue(self, domain, key):
- """
- Return the value of a key for a domain
- """
- return self.data[domain][key]
-
- def setValue(self, domain, key, value):
- """
- Sets the value of a key for a domain
- """
- self.data[domain][key] = value
-
- def delValue(self, domain, key):
- """
- Deletes a key/value pair
- """
- del self.data[domain][key]
-
def persist(domain, d):
"""Convenience factory for SQLTable objects based upon metadata"""
import bb.utils
diff --git a/bitbake/lib/bb/process.py b/bitbake/lib/bb/process.py
index af5d804..4c7b6d3 100644
--- a/bitbake/lib/bb/process.py
+++ b/bitbake/lib/bb/process.py
@@ -1,4 +1,6 @@
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -142,7 +144,7 @@ def _logged_communicate(pipe, log, input, extrafiles):
while pipe.poll() is None:
read_all_pipes(log, rin, outdata, errdata)
- # Pocess closed, drain all pipes...
+ # Process closed, drain all pipes...
read_all_pipes(log, rin, outdata, errdata)
finally:
log.flush()
diff --git a/bitbake/lib/bb/progress.py b/bitbake/lib/bb/progress.py
index d051ba0..9518be7 100644
--- a/bitbake/lib/bb/progress.py
+++ b/bitbake/lib/bb/progress.py
@@ -94,12 +94,15 @@ class LineFilterProgressHandler(ProgressHandler):
while True:
breakpos = self._linebuffer.find('\n') + 1
if breakpos == 0:
- break
+ # handle the case where a progress line ends with only '\r'
+ breakpos = self._linebuffer.find('\r') + 1
+ if breakpos == 0:
+ break
line = self._linebuffer[:breakpos]
self._linebuffer = self._linebuffer[breakpos:]
# Drop any line feeds and anything that precedes them
lbreakpos = line.rfind('\r') + 1
- if lbreakpos:
+ if lbreakpos and lbreakpos != breakpos:
line = line[lbreakpos:]
if self.writeline(filter_color(line)):
super().write(line)
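A minimal sketch of the new '\r'-only handling; the sample buffer is hypothetical:

    buf = "fetch  10%\rfetch  20%\r"
    breakpos = buf.find('\n') + 1
    if breakpos == 0:
        # no '\n' found, fall back to '\r' as the line terminator
        breakpos = buf.find('\r') + 1
    print(repr(buf[:breakpos]))  # 'fetch  10%\r'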
@@ -145,7 +148,7 @@ class MultiStageProgressReporter:
for tasks made up of python code spread across multiple
classes / functions - the progress reporter object can
be passed around or stored at the object level and calls
- to next_stage() and update() made whereever needed.
+ to next_stage() and update() made wherever needed.
"""
def __init__(self, d, stage_weights, debug=False):
"""
diff --git a/bitbake/lib/bb/providers.py b/bitbake/lib/bb/providers.py
index 3ec11a4..e11a463 100644
--- a/bitbake/lib/bb/providers.py
+++ b/bitbake/lib/bb/providers.py
@@ -94,7 +94,7 @@ def versionVariableMatch(cfgData, keyword, pn):
# pn can contain '_', e.g. gcc-cross-x86_64 and an override cannot
# hence we do this manually rather than use OVERRIDES
- ver = cfgData.getVar("%s_VERSION_pn-%s" % (keyword, pn))
+ ver = cfgData.getVar("%s_VERSION:pn-%s" % (keyword, pn))
if not ver:
ver = cfgData.getVar("%s_VERSION_%s" % (keyword, pn))
if not ver:
@@ -133,7 +133,7 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
if required_v is not None:
if preferred_v is not None:
- logger.warn("REQUIRED_VERSION and PREFERRED_VERSION for package %s%s are both set using REQUIRED_VERSION %s", pn, itemstr, required_v)
+ logger.warning("REQUIRED_VERSION and PREFERRED_VERSION for package %s%s are both set using REQUIRED_VERSION %s", pn, itemstr, required_v)
else:
logger.debug("REQUIRED_VERSION is set for package %s%s", pn, itemstr)
# REQUIRED_VERSION always takes precedence over PREFERRED_VERSION
@@ -173,7 +173,7 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
pv_str = '%s:%s' % (preferred_e, pv_str)
if preferred_file is None:
if not required:
- logger.warn("preferred version %s of %s not available%s", pv_str, pn, itemstr)
+ logger.warning("preferred version %s of %s not available%s", pv_str, pn, itemstr)
available_vers = []
for file_set in pkg_pn:
for f in file_set:
@@ -185,7 +185,7 @@ def findPreferredProvider(pn, cfgData, dataCache, pkg_pn = None, item = None):
available_vers.append(ver_str)
if available_vers:
available_vers.sort()
- logger.warn("versions of %s available: %s", pn, ' '.join(available_vers))
+ logger.warning("versions of %s available: %s", pn, ' '.join(available_vers))
if required:
logger.error("required version %s of %s not available%s", pv_str, pn, itemstr)
else:
@@ -396,8 +396,8 @@ def getRuntimeProviders(dataCache, rdepend):
return rproviders
# Only search dynamic packages if we can't find anything in other variables
- for pattern in dataCache.packages_dynamic:
- pattern = pattern.replace(r'+', r"\+")
+ for pat_key in dataCache.packages_dynamic:
+ pattern = pat_key.replace(r'+', r"\+")
if pattern in regexp_cache:
regexp = regexp_cache[pattern]
else:
@@ -408,7 +408,7 @@ def getRuntimeProviders(dataCache, rdepend):
raise
regexp_cache[pattern] = regexp
if regexp.match(rdepend):
- rproviders += dataCache.packages_dynamic[pattern]
+ rproviders += dataCache.packages_dynamic[pat_key]
logger.debug("Assuming %s is a dynamic package, but it may not exist" % rdepend)
return rproviders
diff --git a/bitbake/lib/bb/runqueue.py b/bitbake/lib/bb/runqueue.py
index cd10da8..48e2540 100644
--- a/bitbake/lib/bb/runqueue.py
+++ b/bitbake/lib/bb/runqueue.py
@@ -24,6 +24,7 @@ import pickle
from multiprocessing import Process
import shlex
import pprint
+import time
bblogger = logging.getLogger("BitBake")
logger = logging.getLogger("BitBake.RunQueue")
@@ -159,6 +160,55 @@ class RunQueueScheduler(object):
self.buildable.append(tid)
self.rev_prio_map = None
+ self.is_pressure_usable()
+
+ def is_pressure_usable(self):
+ """
+ If monitoring pressure, return True if the pressure files can be opened and read. For example,
+ openSUSE's /proc/pressure/* files have readable file permissions, but reading them returns the
+ error EOPNOTSUPP (Operation not supported).
+ """
+ if self.rq.max_cpu_pressure or self.rq.max_io_pressure or self.rq.max_memory_pressure:
+ try:
+ with open("/proc/pressure/cpu") as cpu_pressure_fds, \
+ open("/proc/pressure/io") as io_pressure_fds, \
+ open("/proc/pressure/memory") as memory_pressure_fds:
+
+ self.prev_cpu_pressure = cpu_pressure_fds.readline().split()[4].split("=")[1]
+ self.prev_io_pressure = io_pressure_fds.readline().split()[4].split("=")[1]
+ self.prev_memory_pressure = memory_pressure_fds.readline().split()[4].split("=")[1]
+ self.prev_pressure_time = time.time()
+ self.check_pressure = True
+ except:
+ bb.note("The /proc/pressure files can't be read. Continuing build without monitoring pressure")
+ self.check_pressure = False
+ else:
+ self.check_pressure = False
+
+ def exceeds_max_pressure(self):
+ """
+ Monitor the difference in total pressure at least once per second; if
+ BB_PRESSURE_MAX_{CPU|IO|MEMORY} is set, return True when above the threshold.
+ """
+ if self.check_pressure:
+ with open("/proc/pressure/cpu") as cpu_pressure_fds, \
+ open("/proc/pressure/io") as io_pressure_fds, \
+ open("/proc/pressure/memory") as memory_pressure_fds:
+ # extract "total" from /proc/pressure/{cpu|io}
+ curr_cpu_pressure = cpu_pressure_fds.readline().split()[4].split("=")[1]
+ curr_io_pressure = io_pressure_fds.readline().split()[4].split("=")[1]
+ curr_memory_pressure = memory_pressure_fds.readline().split()[4].split("=")[1]
+ exceeds_cpu_pressure = self.rq.max_cpu_pressure and (float(curr_cpu_pressure) - float(self.prev_cpu_pressure)) > self.rq.max_cpu_pressure
+ exceeds_io_pressure = self.rq.max_io_pressure and (float(curr_io_pressure) - float(self.prev_io_pressure)) > self.rq.max_io_pressure
+ exceeds_memory_pressure = self.rq.max_memory_pressure and (float(curr_memory_pressure) - float(self.prev_memory_pressure)) > self.rq.max_memory_pressure
+ now = time.time()
+ if now - self.prev_pressure_time > 1.0:
+ self.prev_cpu_pressure = curr_cpu_pressure
+ self.prev_io_pressure = curr_io_pressure
+ self.prev_memory_pressure = curr_memory_pressure
+ self.prev_pressure_time = now
+ return (exceeds_cpu_pressure or exceeds_io_pressure or exceeds_memory_pressure)
+ return False
def next_buildable_task(self):
"""
@@ -172,6 +222,12 @@ class RunQueueScheduler(object):
if not buildable:
return None
+ # Bitbake requires that at least one task be active. Only check for pressure in
+ # that case; otherwise the pressure limitation could leave no tasks active and
+ # no new tasks started, at times breaking the scheduler.
+ if self.rq.stats.active and self.exceeds_max_pressure():
+ return None
+
# Filter out tasks that have a max number of threads that have been exceeded
skip_buildable = {}
for running in self.rq.runq_running.difference(self.rq.runq_complete):
@@ -385,10 +441,9 @@ class RunQueueData:
self.rq = rq
self.warn_multi_bb = False
- self.stampwhitelist = cfgData.getVar("BB_STAMP_WHITELIST") or ""
- self.multi_provider_whitelist = (cfgData.getVar("MULTI_PROVIDER_WHITELIST") or "").split()
- self.setscenewhitelist = get_setscene_enforce_whitelist(cfgData, targets)
- self.setscenewhitelist_checked = False
+ self.multi_provider_allowed = (cfgData.getVar("BB_MULTI_PROVIDER_ALLOWED") or "").split()
+ self.setscene_ignore_tasks = get_setscene_enforce_ignore_tasks(cfgData, targets)
+ self.setscene_ignore_tasks_checked = False
self.setscene_enforce = (cfgData.getVar('BB_SETSCENE_ENFORCE') == "1")
self.init_progress_reporter = bb.progress.DummyMultiStageProcessProgressReporter()
@@ -486,7 +541,7 @@ class RunQueueData:
msgs.append(" Task %s (dependent Tasks %s)\n" % (dep, self.runq_depends_names(self.runtaskentries[dep].depends)))
msgs.append("\n")
if len(valid_chains) > 10:
- msgs.append("Aborted dependency loops search after 10 matches.\n")
+ msgs.append("Halted dependency loops search after 10 matches.\n")
raise TooManyLoops
continue
scan = False
@@ -547,7 +602,7 @@ class RunQueueData:
next_points.append(revdep)
task_done[revdep] = True
endpoints = next_points
- if len(next_points) == 0:
+ if not next_points:
break
# Circular dependency sanity check
@@ -589,7 +644,7 @@ class RunQueueData:
found = False
for mc in self.taskData:
- if len(taskData[mc].taskentries) > 0:
+ if taskData[mc].taskentries:
found = True
break
if not found:
@@ -773,7 +828,7 @@ class RunQueueData:
# Find the dependency chain endpoints
endpoints = set()
for tid in self.runtaskentries:
- if len(deps[tid]) == 0:
+ if not deps[tid]:
endpoints.add(tid)
# Iterate the chains collating dependencies
while endpoints:
@@ -784,11 +839,11 @@ class RunQueueData:
cumulativedeps[dep].update(cumulativedeps[tid])
if tid in deps[dep]:
deps[dep].remove(tid)
- if len(deps[dep]) == 0:
+ if not deps[dep]:
next.add(dep)
endpoints = next
#for tid in deps:
- # if len(deps[tid]) != 0:
+ # if deps[tid]:
# bb.warn("Sanity test failure, dependencies left for %s (%s)" % (tid, deps[tid]))
# Loop here since recrdeptasks can depend upon other recrdeptasks and we have to
@@ -956,7 +1011,7 @@ class RunQueueData:
del self.runtaskentries[tid]
if self.cooker.configuration.runall:
- if len(self.runtaskentries) == 0:
+ if not self.runtaskentries:
bb.msg.fatal("RunQueue", "Could not find any tasks with the tasknames %s to run within the recipes of the taskgraphs of the targets %s" % (str(self.cooker.configuration.runall), str(self.targets)))
self.init_progress_reporter.next_stage()
@@ -981,7 +1036,7 @@ class RunQueueData:
delcount.add(tid)
del self.runtaskentries[tid]
- if len(self.runtaskentries) == 0:
+ if not self.runtaskentries:
bb.msg.fatal("RunQueue", "Could not find any tasks with the tasknames %s to run within the taskgraphs of the targets %s" % (str(self.cooker.configuration.runonly), str(self.targets)))
#
@@ -989,8 +1044,8 @@ class RunQueueData:
#
# Check to make sure we still have tasks to run
- if len(self.runtaskentries) == 0:
- if not taskData[''].abort:
+ if not self.runtaskentries:
+ if not taskData[''].halt:
bb.msg.fatal("RunQueue", "All buildable tasks have been run but the build is incomplete (--continue mode). Errors for the tasks that failed will have been printed above.")
else:
bb.msg.fatal("RunQueue", "No active tasks and not in --continue mode?! Please report this bug.")
@@ -1013,7 +1068,7 @@ class RunQueueData:
endpoints = []
for tid in self.runtaskentries:
revdeps = self.runtaskentries[tid].revdeps
- if len(revdeps) == 0:
+ if not revdeps:
endpoints.append(tid)
for dep in revdeps:
if dep in self.runtaskentries[tid].depends:
@@ -1049,7 +1104,7 @@ class RunQueueData:
for prov in prov_list:
if len(prov_list[prov]) < 2:
continue
- if prov in self.multi_provider_whitelist:
+ if prov in self.multi_provider_allowed:
continue
seen_pn = []
# If two versions of the same PN are being built its fatal, we don't support it.
@@ -1059,12 +1114,12 @@ class RunQueueData:
seen_pn.append(pn)
else:
bb.fatal("Multiple versions of %s are due to be built (%s). Only one version of a given PN should be built in any given build. You likely need to set PREFERRED_VERSION_%s to select the correct version or don't depend on multiple versions." % (pn, " ".join(prov_list[prov]), pn))
- msg = "Multiple .bb files are due to be built which each provide %s:\n %s" % (prov, "\n ".join(prov_list[prov]))
+ msgs = ["Multiple .bb files are due to be built which each provide %s:\n %s" % (prov, "\n ".join(prov_list[prov]))]
#
# Construct a list of things which uniquely depend on each provider
# since this may help the user figure out which dependency is triggering this warning
#
- msg += "\nA list of tasks depending on these providers is shown and may help explain where the dependency comes from."
+ msgs.append("\nA list of tasks depending on these providers is shown and may help explain where the dependency comes from.")
deplist = {}
commondeps = None
for provfn in prov_list[prov]:
@@ -1084,12 +1139,12 @@ class RunQueueData:
commondeps &= deps
deplist[provfn] = deps
for provfn in deplist:
- msg += "\n%s has unique dependees:\n %s" % (provfn, "\n ".join(deplist[provfn] - commondeps))
+ msgs.append("\n%s has unique dependees:\n %s" % (provfn, "\n ".join(deplist[provfn] - commondeps)))
#
# Construct a list of provides and runtime providers for each recipe
# (rprovides has to cover RPROVIDES, PACKAGES, PACKAGES_DYNAMIC)
#
- msg += "\nIt could be that one recipe provides something the other doesn't and should. The following provider and runtime provider differences may be helpful."
+ msgs.append("\nIt could be that one recipe provides something the other doesn't and should. The following provider and runtime provider differences may be helpful.")
provide_results = {}
rprovide_results = {}
commonprovs = None
@@ -1116,29 +1171,18 @@ class RunQueueData:
else:
commonrprovs &= rprovides
rprovide_results[provfn] = rprovides
- #msg += "\nCommon provides:\n %s" % ("\n ".join(commonprovs))
- #msg += "\nCommon rprovides:\n %s" % ("\n ".join(commonrprovs))
+ #msgs.append("\nCommon provides:\n %s" % ("\n ".join(commonprovs)))
+ #msgs.append("\nCommon rprovides:\n %s" % ("\n ".join(commonrprovs)))
for provfn in prov_list[prov]:
- msg += "\n%s has unique provides:\n %s" % (provfn, "\n ".join(provide_results[provfn] - commonprovs))
- msg += "\n%s has unique rprovides:\n %s" % (provfn, "\n ".join(rprovide_results[provfn] - commonrprovs))
+ msgs.append("\n%s has unique provides:\n %s" % (provfn, "\n ".join(provide_results[provfn] - commonprovs)))
+ msgs.append("\n%s has unique rprovides:\n %s" % (provfn, "\n ".join(rprovide_results[provfn] - commonrprovs)))
if self.warn_multi_bb:
- logger.verbnote(msg)
+ logger.verbnote("".join(msgs))
else:
- logger.error(msg)
+ logger.error("".join(msgs))
self.init_progress_reporter.next_stage()
-
- # Create a whitelist usable by the stamp checks
- self.stampfnwhitelist = {}
- for mc in self.taskData:
- self.stampfnwhitelist[mc] = []
- for entry in self.stampwhitelist.split():
- if entry not in self.taskData[mc].build_targets:
- continue
- fn = self.taskData.build_targets[entry][0]
- self.stampfnwhitelist[mc].append(fn)
-
self.init_progress_reporter.next_stage()
# Iterate over the task list looking for tasks with a 'setscene' function
@@ -1186,9 +1230,9 @@ class RunQueueData:
# Iterate over the task list and call into the siggen code
dealtwith = set()
todeal = set(self.runtaskentries)
- while len(todeal) > 0:
+ while todeal:
for tid in todeal.copy():
- if len(self.runtaskentries[tid].depends - dealtwith) == 0:
+ if not (self.runtaskentries[tid].depends - dealtwith):
dealtwith.add(tid)
todeal.remove(tid)
self.prepare_task_hash(tid)
@@ -1227,7 +1271,6 @@ class RunQueue:
self.cfgData = cfgData
self.rqdata = RunQueueData(self, cooker, cfgData, dataCaches, taskData, targets)
- self.stamppolicy = cfgData.getVar("BB_STAMP_POLICY") or "perfile"
self.hashvalidate = cfgData.getVar("BB_HASHCHECK_FUNCTION") or None
self.depvalidate = cfgData.getVar("BB_SETSCENE_DEPVALID") or None
@@ -1356,14 +1399,6 @@ class RunQueue:
if taskname is None:
taskname = tn
- if self.stamppolicy == "perfile":
- fulldeptree = False
- else:
- fulldeptree = True
- stampwhitelist = []
- if self.stamppolicy == "whitelist":
- stampwhitelist = self.rqdata.stampfnwhitelist[mc]
-
stampfile = bb.build.stampfile(taskname, self.rqdata.dataCaches[mc], taskfn)
# If the stamp is missing, it's not current
@@ -1395,7 +1430,7 @@ class RunQueue:
continue
if t3 and t3 > t2:
continue
- if fn == fn2 or (fulldeptree and fn2 not in stampwhitelist):
+ if fn == fn2:
if not t2:
logger.debug2('Stampfile %s does not exist', stampfile2)
iscurrent = False
@@ -1445,7 +1480,7 @@ class RunQueue:
"""
Run the tasks in a queue prepared by rqdata.prepare()
Upon failure, optionally try to recover the build using any alternate providers
- (if the abort on failure configuration option isn't set)
+ (if the halt on failure configuration option isn't set)
"""
retval = True
@@ -1498,10 +1533,10 @@ class RunQueue:
self.rqexe = RunQueueExecute(self)
# If we don't have any setscene functions, skip execution
- if len(self.rqdata.runq_setscene_tids) == 0:
+ if not self.rqdata.runq_setscene_tids:
logger.info('No setscene tasks')
for tid in self.rqdata.runtaskentries:
- if len(self.rqdata.runtaskentries[tid].depends) == 0:
+ if not self.rqdata.runtaskentries[tid].depends:
self.rqexe.setbuildable(tid)
self.rqexe.tasks_notcovered.add(tid)
self.rqexe.sqdone = True
@@ -1695,7 +1730,7 @@ class RunQueue:
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
h = self.rqdata.runtaskentries[tid].hash
- matches = bb.siggen.find_siginfo(pn, taskname, [], self.cfgData)
+ matches = bb.siggen.find_siginfo(pn, taskname, [], self.cooker.databuilder.mcdata[mc])
match = None
for m in matches:
if h in m:
@@ -1720,6 +1755,9 @@ class RunQueueExecute:
self.number_tasks = int(self.cfgData.getVar("BB_NUMBER_THREADS") or 1)
self.scheduler = self.cfgData.getVar("BB_SCHEDULER") or "speed"
+ self.max_cpu_pressure = self.cfgData.getVar("BB_PRESSURE_MAX_CPU")
+ self.max_io_pressure = self.cfgData.getVar("BB_PRESSURE_MAX_IO")
+ self.max_memory_pressure = self.cfgData.getVar("BB_PRESSURE_MAX_MEMORY")
self.sq_buildable = set()
self.sq_running = set()
@@ -1754,6 +1792,29 @@ class RunQueueExecute:
if self.number_tasks <= 0:
bb.fatal("Invalid BB_NUMBER_THREADS %s" % self.number_tasks)
+ lower_limit = 1.0
+ upper_limit = 1000000.0
+ if self.max_cpu_pressure:
+ self.max_cpu_pressure = float(self.max_cpu_pressure)
+ if self.max_cpu_pressure < lower_limit:
+ bb.fatal("Invalid BB_PRESSURE_MAX_CPU %s, minimum value is %s." % (self.max_cpu_pressure, lower_limit))
+ if self.max_cpu_pressure > upper_limit:
+ bb.warn("Your build will be largely unregulated since BB_PRESSURE_MAX_CPU is set to %s. It is very unlikely that such high pressure will be experienced." % (self.max_cpu_pressure))
+
+ if self.max_io_pressure:
+ self.max_io_pressure = float(self.max_io_pressure)
+ if self.max_io_pressure < lower_limit:
+ bb.fatal("Invalid BB_PRESSURE_MAX_IO %s, minimum value is %s." % (self.max_io_pressure, lower_limit))
+ if self.max_io_pressure > upper_limit:
+ bb.warn("Your build will be largely unregulated since BB_PRESSURE_MAX_IO is set to %s. It is very unlikely that such high pressure will be experienced." % (self.max_io_pressure))
+
+ if self.max_memory_pressure:
+ self.max_memory_pressure = float(self.max_memory_pressure)
+ if self.max_memory_pressure < lower_limit:
+ bb.fatal("Invalid BB_PRESSURE_MAX_MEMORY %s, minimum value is %s." % (self.max_memory_pressure, lower_limit))
+ if self.max_memory_pressure > upper_limit:
+ bb.warn("Your build will be largely unregulated since BB_PRESSURE_MAX_MEMORY is set to %s. It is very unlikely that such high pressure will be experienced." % (self.max_io_pressure))
+
# List of setscene tasks which we've covered
self.scenequeue_covered = set()
# List of tasks which are covered (including setscene ones)
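A sketch of how one of the new knobs is consumed; the value is hypothetical and would come from the configuration (e.g. BB_PRESSURE_MAX_CPU = "15000" in local.conf):

    max_cpu_pressure = "15000"  # hypothetical getVar("BB_PRESSURE_MAX_CPU") result
    if max_cpu_pressure:
        max_cpu_pressure = float(max_cpu_pressure)
        if max_cpu_pressure < 1.0:
            raise SystemExit("minimum value is 1.0")
        if max_cpu_pressure > 1000000.0:
            print("warning: the build will be largely unregulated")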
@@ -1778,7 +1839,7 @@ class RunQueueExecute:
bb.fatal("Invalid scheduler '%s'. Available schedulers: %s" %
(self.scheduler, ", ".join(obj.name for obj in schedulers)))
- #if len(self.rqdata.runq_setscene_tids) > 0:
+ #if self.rqdata.runq_setscene_tids:
self.sqdata = SQData()
build_scenequeue_data(self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self)
@@ -1819,7 +1880,7 @@ class RunQueueExecute:
# worker must have died?
pass
- if len(self.failed_tids) != 0:
+ if self.failed_tids:
self.rq.state = runQueueFailed
return
@@ -1835,7 +1896,7 @@ class RunQueueExecute:
self.rq.read_workers()
return self.rq.active_fds()
- if len(self.failed_tids) != 0:
+ if self.failed_tids:
self.rq.state = runQueueFailed
return True
@@ -1933,7 +1994,7 @@ class RunQueueExecute:
self.stats.taskFailed()
self.failed_tids.append(task)
- fakeroot_log = ""
+ fakeroot_log = []
if fakerootlog and os.path.exists(fakerootlog):
with open(fakerootlog) as fakeroot_log_file:
fakeroot_failed = False
@@ -1943,14 +2004,14 @@ class RunQueueExecute:
fakeroot_failed = True
if 'doing new pid setup and server start' in line:
break
- fakeroot_log = line + fakeroot_log
+ fakeroot_log.append(line)
if not fakeroot_failed:
- fakeroot_log = None
+ fakeroot_log = []
- bb.event.fire(runQueueTaskFailed(task, self.stats, exitcode, self.rq, fakeroot_log=fakeroot_log), self.cfgData)
+ bb.event.fire(runQueueTaskFailed(task, self.stats, exitcode, self.rq, fakeroot_log=("".join(fakeroot_log) or None)), self.cfgData)
- if self.rqdata.taskData[''].abort:
+ if self.rqdata.taskData[''].halt:
self.rq.state = runQueueCleanUp
def task_skip(self, task, reason):
@@ -1999,7 +2060,7 @@ class RunQueueExecute:
if x not in self.tasks_scenequeue_done:
logger.error("Task %s was never processed by the setscene code" % x)
err = True
- if len(self.rqdata.runtaskentries[x].depends) == 0 and x not in self.runq_buildable:
+ if not self.rqdata.runtaskentries[x].depends and x not in self.runq_buildable:
logger.error("Task %s was never marked as buildable by the setscene code" % x)
err = True
return err
@@ -2023,7 +2084,7 @@ class RunQueueExecute:
# Find the next setscene to run
for nexttask in self.sorted_setscene_tids:
if nexttask in self.sq_buildable and nexttask not in self.sq_running and self.sqdata.stamps[nexttask] not in self.build_stamps.values():
- if nexttask not in self.sqdata.unskippable and len(self.sqdata.sq_revdeps[nexttask]) > 0 and self.sqdata.sq_revdeps[nexttask].issubset(self.scenequeue_covered) and self.check_dependencies(nexttask, self.sqdata.sq_revdeps[nexttask]):
+ if nexttask not in self.sqdata.unskippable and self.sqdata.sq_revdeps[nexttask] and self.sqdata.sq_revdeps[nexttask].issubset(self.scenequeue_covered) and self.check_dependencies(nexttask, self.sqdata.sq_revdeps[nexttask]):
if nexttask not in self.rqdata.target_tids:
logger.debug2("Skipping setscene for task %s" % nexttask)
self.sq_task_skip(nexttask)
@@ -2128,9 +2189,9 @@ class RunQueueExecute:
if task is not None:
(mc, fn, taskname, taskfn) = split_tid_mcfn(task)
- if self.rqdata.setscenewhitelist is not None:
- if self.check_setscenewhitelist(task):
- self.task_fail(task, "setscene whitelist")
+ if self.rqdata.setscene_ignore_tasks is not None:
+ if self.check_setscene_ignore_tasks(task):
+ self.task_fail(task, "setscene ignore_tasks")
return True
if task in self.tasks_covered:
@@ -2187,19 +2248,18 @@ class RunQueueExecute:
if self.can_start_task():
return True
- if self.stats.active > 0 or len(self.sq_live) > 0:
+ if self.stats.active > 0 or self.sq_live:
self.rq.read_workers()
return self.rq.active_fds()
# No more tasks can be run. If we have deferred setscene tasks we should run them.
if self.sq_deferred:
- tid = self.sq_deferred.pop(list(self.sq_deferred.keys())[0])
- logger.warning("Runqeueue deadlocked on deferred tasks, forcing task %s" % tid)
- if tid not in self.runq_complete:
- self.sq_task_failoutright(tid)
+ deferred_tid = list(self.sq_deferred.keys())[0]
+ blocking_tid = self.sq_deferred.pop(deferred_tid)
+ logger.warning("Runqeueue deadlocked on deferred tasks, forcing task %s blocked by %s" % (deferred_tid, blocking_tid))
return True
- if len(self.failed_tids) != 0:
+ if self.failed_tids:
self.rq.state = runQueueFailed
return True
@@ -2278,7 +2338,7 @@ class RunQueueExecute:
covered.intersection_update(self.tasks_scenequeue_done)
for tid in notcovered | covered:
- if len(self.rqdata.runtaskentries[tid].depends) == 0:
+ if not self.rqdata.runtaskentries[tid].depends:
self.setbuildable(tid)
elif self.rqdata.runtaskentries[tid].depends.issubset(self.runq_complete):
self.setbuildable(tid)
@@ -2320,6 +2380,9 @@ class RunQueueExecute:
self.rqdata.runtaskentries[hashtid].unihash = unihash
bb.parse.siggen.set_unihash(hashtid, unihash)
toprocess.add(hashtid)
+ if torehash:
+ # Need to save after set_unihash above
+ bb.parse.siggen.save_unitaskhashes()
# Work out all tasks which depend upon these
total = set()
@@ -2337,7 +2400,7 @@ class RunQueueExecute:
# Now iterate those tasks in dependency order to regenerate their taskhash/unihash
next = set()
for p in total:
- if len(self.rqdata.runtaskentries[p].depends) == 0:
+ if not self.rqdata.runtaskentries[p].depends:
next.add(p)
elif self.rqdata.runtaskentries[p].depends.isdisjoint(total):
next.add(p)
@@ -2347,7 +2410,7 @@ class RunQueueExecute:
current = next.copy()
next = set()
for tid in current:
- if len(self.rqdata.runtaskentries[p].depends) and not self.rqdata.runtaskentries[tid].depends.isdisjoint(total):
+ if self.rqdata.runtaskentries[p].depends and not self.rqdata.runtaskentries[tid].depends.isdisjoint(total):
continue
orighash = self.rqdata.runtaskentries[tid].hash
dc = bb.parse.siggen.get_data_caches(self.rqdata.dataCaches, mc_from_tid(tid))
@@ -2413,7 +2476,7 @@ class RunQueueExecute:
self.tasks_scenequeue_done.remove(tid)
for dep in self.sqdata.sq_covered_tasks[tid]:
if dep in self.runq_complete and dep not in self.runq_tasksrun:
- bb.error("Task %s marked as completed but now needing to rerun? Aborting build." % dep)
+ bb.error("Task %s marked as completed but now needing to rerun? Halting build." % dep)
self.failed_tids.append(tid)
self.rq.state = runQueueCleanUp
return
@@ -2434,7 +2497,7 @@ class RunQueueExecute:
if not harddepfail and self.sqdata.sq_revdeps[tid].issubset(self.scenequeue_covered | self.scenequeue_notcovered):
if tid not in self.sq_buildable:
self.sq_buildable.add(tid)
- if len(self.sqdata.sq_revdeps[tid]) == 0:
+ if not self.sqdata.sq_revdeps[tid]:
self.sq_buildable.add(tid)
if tid in self.sqdata.outrightfail:
@@ -2459,11 +2522,14 @@ class RunQueueExecute:
if update_tasks:
self.sqdone = False
- for tid in [t[0] for t in update_tasks]:
- h = pending_hash_index(tid, self.rqdata)
- if h in self.sqdata.hashes and tid != self.sqdata.hashes[h]:
- self.sq_deferred[tid] = self.sqdata.hashes[h]
- bb.note("Deferring %s after %s" % (tid, self.sqdata.hashes[h]))
+ for mc in sorted(self.sqdata.multiconfigs):
+ for tid in sorted([t[0] for t in update_tasks]):
+ if mc_from_tid(tid) != mc:
+ continue
+ h = pending_hash_index(tid, self.rqdata)
+ if h in self.sqdata.hashes and tid != self.sqdata.hashes[h]:
+ self.sq_deferred[tid] = self.sqdata.hashes[h]
+ bb.note("Deferring %s after %s" % (tid, self.sqdata.hashes[h]))
update_scenequeue_data([t[0] for t in update_tasks], self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self, summary=False)
for (tid, harddepfail, origvalid) in update_tasks:
@@ -2522,11 +2588,11 @@ class RunQueueExecute:
self.scenequeue_updatecounters(task)
def sq_check_taskfail(self, task):
- if self.rqdata.setscenewhitelist is not None:
+ if self.rqdata.setscene_ignore_tasks is not None:
realtask = task.split('_setscene')[0]
(mc, fn, taskname, taskfn) = split_tid_mcfn(realtask)
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
- if not check_setscene_enforce_whitelist(pn, taskname, self.rqdata.setscenewhitelist):
+ if not check_setscene_enforce_ignore_tasks(pn, taskname, self.rqdata.setscene_ignore_tasks):
logger.error('Task %s.%s failed' % (pn, taskname + "_setscene"))
self.rq.state = runQueueCleanUp
@@ -2589,8 +2655,8 @@ class RunQueueExecute:
#bb.note("Task %s: " % task + str(taskdepdata).replace("], ", "],\n"))
return taskdepdata
- def check_setscenewhitelist(self, tid):
- # Check task that is going to run against the whitelist
+ def check_setscene_ignore_tasks(self, tid):
+ # Check task that is going to run against the ignore tasks list
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
# Ignore covered tasks
if tid in self.tasks_covered:
@@ -2604,14 +2670,15 @@ class RunQueueExecute:
return False
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
- if not check_setscene_enforce_whitelist(pn, taskname, self.rqdata.setscenewhitelist):
+ if not check_setscene_enforce_ignore_tasks(pn, taskname, self.rqdata.setscene_ignore_tasks):
if tid in self.rqdata.runq_setscene_tids:
- msg = 'Task %s.%s attempted to execute unexpectedly and should have been setscened' % (pn, taskname)
+ msg = ['Task %s.%s attempted to execute unexpectedly and should have been setscened' % (pn, taskname)]
else:
- msg = 'Task %s.%s attempted to execute unexpectedly' % (pn, taskname)
+ msg = ['Task %s.%s attempted to execute unexpectedly' % (pn, taskname)]
for t in self.scenequeue_notcovered:
- msg = msg + "\nTask %s, unihash %s, taskhash %s" % (t, self.rqdata.runtaskentries[t].unihash, self.rqdata.runtaskentries[t].hash)
- logger.error(msg + '\nThis is usually due to missing setscene tasks. Those missing in this build were: %s' % pprint.pformat(self.scenequeue_notcovered))
+ msg.append("\nTask %s, unihash %s, taskhash %s" % (t, self.rqdata.runtaskentries[t].unihash, self.rqdata.runtaskentries[t].hash))
+ msg.append('\nThis is usually due to missing setscene tasks. Those missing in this build were: %s' % pprint.pformat(self.scenequeue_notcovered))
+ logger.error("".join(msg))
return True
return False
@@ -2650,7 +2717,7 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
for tid in rqdata.runtaskentries:
sq_revdeps[tid] = copy.copy(rqdata.runtaskentries[tid].revdeps)
sq_revdeps_squash[tid] = set()
- if (len(sq_revdeps[tid]) == 0) and tid not in rqdata.runq_setscene_tids:
+ if not sq_revdeps[tid] and tid not in rqdata.runq_setscene_tids:
#bb.warn("Added endpoint %s" % (tid))
endpoints[tid] = set()
@@ -2684,16 +2751,15 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
sq_revdeps_squash[point] = set()
if point in rqdata.runq_setscene_tids:
sq_revdeps_squash[point] = tasks
- tasks = set()
continue
for dep in rqdata.runtaskentries[point].depends:
if point in sq_revdeps[dep]:
sq_revdeps[dep].remove(point)
if tasks:
sq_revdeps_squash[dep] |= tasks
- if len(sq_revdeps[dep]) == 0 and dep not in rqdata.runq_setscene_tids:
+ if not sq_revdeps[dep] and dep not in rqdata.runq_setscene_tids:
newendpoints[dep] = task
- if len(newendpoints) != 0:
+ if newendpoints:
process_endpoints(newendpoints)
process_endpoints(endpoints)
@@ -2705,7 +2771,7 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
# Take the build endpoints (no revdeps) and find the sstate tasks they depend upon
new = True
for tid in rqdata.runtaskentries:
- if len(rqdata.runtaskentries[tid].revdeps) == 0:
+ if not rqdata.runtaskentries[tid].revdeps:
sqdata.unskippable.add(tid)
sqdata.unskippable |= sqrq.cantskip
while new:
@@ -2714,7 +2780,7 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
for tid in sorted(orig, reverse=True):
if tid in rqdata.runq_setscene_tids:
continue
- if len(rqdata.runtaskentries[tid].depends) == 0:
+ if not rqdata.runtaskentries[tid].depends:
# These are tasks which have no setscene tasks in their chain, need to mark as directly buildable
sqrq.setbuildable(tid)
sqdata.unskippable |= rqdata.runtaskentries[tid].depends
@@ -2729,8 +2795,8 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
for taskcounter, tid in enumerate(rqdata.runtaskentries):
if tid in rqdata.runq_setscene_tids:
pass
- elif len(sq_revdeps_squash[tid]) != 0:
- bb.msg.fatal("RunQueue", "Something went badly wrong during scenequeue generation, aborting. Please report this problem.")
+ elif sq_revdeps_squash[tid]:
+ bb.msg.fatal("RunQueue", "Something went badly wrong during scenequeue generation, halting. Please report this problem.")
else:
del sq_revdeps_squash[tid]
rqdata.init_progress_reporter.update(taskcounter)
@@ -2794,7 +2860,7 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
sqdata.multiconfigs = set()
for tid in sqdata.sq_revdeps:
sqdata.multiconfigs.add(mc_from_tid(tid))
- if len(sqdata.sq_revdeps[tid]) == 0:
+ if not sqdata.sq_revdeps[tid]:
sqrq.sq_buildable.add(tid)
rqdata.init_progress_reporter.finish()
@@ -3048,7 +3114,7 @@ class runQueuePipe():
raise
end = len(self.queue)
found = True
- while found and len(self.queue):
+ while found and self.queue:
found = False
index = self.queue.find(b"</event>")
while index != -1 and self.queue.startswith(b"<event>"):
@@ -3086,16 +3152,16 @@ class runQueuePipe():
def close(self):
while self.read():
continue
- if len(self.queue) > 0:
+ if self.queue:
print("Warning, worker left partial message: %s" % self.queue)
self.input.close()
-def get_setscene_enforce_whitelist(d, targets):
+def get_setscene_enforce_ignore_tasks(d, targets):
if d.getVar('BB_SETSCENE_ENFORCE') != '1':
return None
- whitelist = (d.getVar("BB_SETSCENE_ENFORCE_WHITELIST") or "").split()
+ ignore_tasks = (d.getVar("BB_SETSCENE_ENFORCE_IGNORE_TASKS") or "").split()
outlist = []
- for item in whitelist[:]:
+ for item in ignore_tasks[:]:
if item.startswith('%:'):
for (mc, target, task, fn) in targets:
outlist.append(target + ':' + item.split(':')[1])
@@ -3103,12 +3169,12 @@ def get_setscene_enforce_whitelist(d, targets):
outlist.append(item)
return outlist
-def check_setscene_enforce_whitelist(pn, taskname, whitelist):
+def check_setscene_enforce_ignore_tasks(pn, taskname, ignore_tasks):
import fnmatch
- if whitelist is not None:
+ if ignore_tasks is not None:
item = '%s:%s' % (pn, taskname)
- for whitelist_item in whitelist:
- if fnmatch.fnmatch(item, whitelist_item):
+ for ignore_task in ignore_tasks:
+ if fnmatch.fnmatch(item, ignore_task):
return True
return False
return True
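A sketch of the matching logic with hypothetical entries; patterns are "pn:taskname" strings and fnmatch wildcards are honoured:

    import fnmatch
    ignore_tasks = ["quilt-native:*", "mytarget:do_patch"]  # hypothetical
    item = "quilt-native:do_unpack"
    print(any(fnmatch.fnmatch(item, pat) for pat in ignore_tasks))  # True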
diff --git a/bitbake/lib/bb/server/process.py b/bitbake/lib/bb/server/process.py
index fcdce19..f2c5c15 100644
--- a/bitbake/lib/bb/server/process.py
+++ b/bitbake/lib/bb/server/process.py
@@ -26,6 +26,8 @@ import errno
import re
import datetime
import pickle
+import traceback
+import gc
import bb.server.xmlrpcserver
from bb import daemonize
from multiprocessing import queues
@@ -217,8 +219,9 @@ class ProcessServer():
self.command_channel_reply.send(self.cooker.command.runCommand(command))
serverlog("Command Completed")
except Exception as e:
- serverlog('Exception in server main event loop running command %s (%s)' % (command, str(e)))
- logger.exception('Exception in server main event loop running command %s (%s)' % (command, str(e)))
+ stack = traceback.format_exc()
+ serverlog('Exception in server main event loop running command %s (%s)' % (command, stack))
+ logger.exception('Exception in server main event loop running command %s (%s)' % (command, stack))
if self.xmlrpc in ready:
self.xmlrpc.handle_requests()
@@ -241,9 +244,6 @@ class ProcessServer():
ready = self.idle_commands(.1, fds)
- if len(threading.enumerate()) != 1:
- serverlog("More than one thread left?: " + str(threading.enumerate()))
-
serverlog("Exiting")
# Remove the socket file so we don't get any more connections to avoid races
try:
@@ -261,6 +261,9 @@ class ProcessServer():
self.cooker.post_serve()
+ if len(threading.enumerate()) != 1:
+ serverlog("More than one thread left?: " + str(threading.enumerate()))
+
# Flush logs before we release the lock
sys.stdout.flush()
sys.stderr.flush()
@@ -324,10 +327,10 @@ class ProcessServer():
if e.errno != errno.ENOENT:
raise
- msg = "Delaying shutdown due to active processes which appear to be holding bitbake.lock"
+ msg = ["Delaying shutdown due to active processes which appear to be holding bitbake.lock"]
if procs:
- msg += ":\n%s" % str(procs.decode("utf-8"))
- serverlog(msg)
+ msg.append(":\n%s" % str(procs.decode("utf-8")))
+ serverlog("".join(msg))
def idle_commands(self, delay, fds=None):
nextsleep = delay
@@ -554,7 +557,7 @@ def execServer(lockfd, readypipeinfd, lockname, sockname, server_timeout, xmlrpc
server.run()
finally:
- # Flush any ,essages/errors to the logfile before exit
+ # Flush any messages/errors to the logfile before exit
sys.stdout.flush()
sys.stderr.flush()
@@ -735,10 +738,32 @@ class ConnectionWriter(object):
# Why bb.event needs this I have no idea
self.event = self
- def send(self, obj):
- obj = multiprocessing.reduction.ForkingPickler.dumps(obj)
+ def _send(self, obj):
+ gc.disable()
with self.wlock:
self.writer.send_bytes(obj)
+ gc.enable()
+
+ def send(self, obj):
+ obj = multiprocessing.reduction.ForkingPickler.dumps(obj)
+ # See notes/code in CookerParser
+ # We must not terminate holding this lock else processes will hang.
+ # For SIGTERM, raising afterwards avoids this.
+ # For SIGINT, we don't want to have written partial data to the pipe.
+ # pthread_sigmask block/unblock would be nice but doesn't work, https://bugs.python.org/issue47139
+ process = multiprocessing.current_process()
+ if process and hasattr(process, "queue_signals"):
+ with process.signal_threadlock:
+ process.queue_signals = True
+ self._send(obj)
+ process.queue_signals = False
+ try:
+ for sig in process.signal_received.pop():
+ process.handle_sig(sig, None)
+ except IndexError:
+ pass
+ else:
+ self._send(obj)
def fileno(self):
return self.writer.fileno()
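A sketch of the guarded send path: garbage collection is paused around the locked pipe write so a GC-triggered signal handler cannot run mid-write (see the bugs.python.org/issue47139 note above):

    import gc
    gc.disable()
    # ... self.writer.send_bytes(obj) under self.wlock in the real code ...
    gc.enable()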
diff --git a/bitbake/lib/bb/server/xmlrpcserver.py b/bitbake/lib/bb/server/xmlrpcserver.py
index 2fa71be..01f5553 100644
--- a/bitbake/lib/bb/server/xmlrpcserver.py
+++ b/bitbake/lib/bb/server/xmlrpcserver.py
@@ -11,6 +11,7 @@ import hashlib
import time
import inspect
from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
+import bb.server.xmlrpcclient
import bb
diff --git a/bitbake/lib/bb/siggen.py b/bitbake/lib/bb/siggen.py
index 767aeb0..9a20fc8 100644
--- a/bitbake/lib/bb/siggen.py
+++ b/bitbake/lib/bb/siggen.py
@@ -1,4 +1,6 @@
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -11,6 +13,8 @@ import pickle
import bb.data
import difflib
import simplediff
+import json
+import bb.compress.zstd
from bb.checksum import FileChecksumCache
from bb import runqueue
import hashserv
@@ -19,6 +23,17 @@ import hashserv.client
logger = logging.getLogger('BitBake.SigGen')
hashequiv_logger = logging.getLogger('BitBake.SigGen.HashEquiv')
+class SetEncoder(json.JSONEncoder):
+ def default(self, obj):
+ if isinstance(obj, set):
+ return dict(_set_object=list(sorted(obj)))
+ return json.JSONEncoder.default(self, obj)
+
+def SetDecoder(dct):
+ if '_set_object' in dct:
+ return set(dct['_set_object'])
+ return dct
+
def init(d):
siggens = [obj for obj in globals().values()
if type(obj) is type and issubclass(obj, SignatureGenerator)]
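The encoder/decoder pair round-trips Python sets through JSON; a self-contained sketch using the classes introduced above:

    import json

    class SetEncoder(json.JSONEncoder):
        def default(self, obj):
            if isinstance(obj, set):
                return dict(_set_object=sorted(obj))
            return json.JSONEncoder.default(self, obj)

    def SetDecoder(dct):
        if '_set_object' in dct:
            return set(dct['_set_object'])
        return dct

    blob = json.dumps({"deps": {"b", "a"}}, cls=SetEncoder)
    print(json.loads(blob, object_hook=SetDecoder))  # {'deps': {'a', 'b'}}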
@@ -27,7 +42,6 @@ def init(d):
for sg in siggens:
if desired == sg.name:
return sg(d)
- break
else:
logger.error("Invalid signature generator '%s', using default 'noop'\n"
"Available generators: %s", desired,
@@ -143,6 +157,9 @@ class SignatureGenerator(object):
return DataCacheProxy()
+ def exit(self):
+ return
+
class SignatureGeneratorBasic(SignatureGenerator):
"""
"""
@@ -159,8 +176,8 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.gendeps = {}
self.lookupcache = {}
self.setscenetasks = set()
- self.basewhitelist = set((data.getVar("BB_HASHBASE_WHITELIST") or "").split())
- self.taskwhitelist = None
+ self.basehash_ignore_vars = set((data.getVar("BB_BASEHASH_IGNORE_VARS") or "").split())
+ self.taskhash_ignore_tasks = None
self.init_rundepcheck(data)
checksum_cache_file = data.getVar("BB_HASH_CHECKSUM_CACHE_FILE")
if checksum_cache_file:
@@ -175,18 +192,18 @@ class SignatureGeneratorBasic(SignatureGenerator):
self.tidtopn = {}
def init_rundepcheck(self, data):
- self.taskwhitelist = data.getVar("BB_HASHTASK_WHITELIST") or None
- if self.taskwhitelist:
- self.twl = re.compile(self.taskwhitelist)
+ self.taskhash_ignore_tasks = data.getVar("BB_TASKHASH_IGNORE_TASKS") or None
+ if self.taskhash_ignore_tasks:
+ self.twl = re.compile(self.taskhash_ignore_tasks)
else:
self.twl = None
def _build_data(self, fn, d):
ignore_mismatch = ((d.getVar("BB_HASH_IGNORE_MISMATCH") or '') == '1')
- tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d, self.basewhitelist)
+ tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d, self.basehash_ignore_vars)
- taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, self.basewhitelist, fn)
+ taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, self.basehash_ignore_vars, fn)
for task in tasklist:
tid = fn + ":" + task
@@ -228,7 +245,7 @@ class SignatureGeneratorBasic(SignatureGenerator):
# self.dump_sigtask(fn, task, d.getVar("STAMP"), False)
for task in taskdeps:
- d.setVar("BB_BASEHASH_task-%s" % task, self.basehash[fn + ":" + task])
+ d.setVar("BB_BASEHASH:task-%s" % task, self.basehash[fn + ":" + task])
def postparsing_clean_cache(self):
#
@@ -240,7 +257,8 @@ class SignatureGeneratorBasic(SignatureGenerator):
def rundep_check(self, fn, recipename, task, dep, depname, dataCaches):
# Return True if we should keep the dependency, False to drop it
- # We only manipulate the dependencies for packages not in the whitelist
+ # We only manipulate the dependencies for packages not in the ignore
+ # list
if self.twl and not self.twl.search(recipename):
# then process the actual dependencies
if self.twl.search(depname):
@@ -315,6 +333,8 @@ class SignatureGeneratorBasic(SignatureGenerator):
for (f, cs) in self.file_checksum_values[tid]:
if cs:
+ if "/./" in f:
+ data = data + "./" + f.split("/./")[1]
data = data + cs
if tid in self.taints:
@@ -325,7 +345,7 @@ class SignatureGeneratorBasic(SignatureGenerator):
h = hashlib.sha256(data.encode("utf-8")).hexdigest()
self.taskhash[tid] = h
- #d.setVar("BB_TASKHASH_task-%s" % task, taskhash[task])
+ #d.setVar("BB_TASKHASH:task-%s" % task, taskhash[task])
return h
def writeout_file_checksum_cache(self):
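A sketch of the new "/./" handling, with a hypothetical path: the marker splits off the relative portion that should feed the hash:

    f = "/work/recipe-1.0/./src/main.c"  # hypothetical checksummed path
    data = ""
    if "/./" in f:
        data = data + "./" + f.split("/./")[1]
    print(data)  # ./src/main.c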
@@ -357,22 +377,27 @@ class SignatureGeneratorBasic(SignatureGenerator):
data = {}
data['task'] = task
- data['basewhitelist'] = self.basewhitelist
- data['taskwhitelist'] = self.taskwhitelist
+ data['basehash_ignore_vars'] = self.basehash_ignore_vars
+ data['taskhash_ignore_tasks'] = self.taskhash_ignore_tasks
data['taskdeps'] = self.taskdeps[fn][task]
data['basehash'] = self.basehash[tid]
data['gendeps'] = {}
data['varvals'] = {}
data['varvals'][task] = self.lookupcache[fn][task]
for dep in self.taskdeps[fn][task]:
- if dep in self.basewhitelist:
+ if dep in self.basehash_ignore_vars:
continue
data['gendeps'][dep] = self.gendeps[fn][dep]
data['varvals'][dep] = self.lookupcache[fn][dep]
if runtime and tid in self.taskhash:
data['runtaskdeps'] = self.runtaskdeps[tid]
- data['file_checksum_values'] = [(os.path.basename(f), cs) for f,cs in self.file_checksum_values[tid]]
+ data['file_checksum_values'] = []
+ for f,cs in self.file_checksum_values[tid]:
+ if "/./" in f:
+ data['file_checksum_values'].append(("./" + f.split("/./")[1], cs))
+ else:
+ data['file_checksum_values'].append((os.path.basename(f), cs))
data['runtaskhashes'] = {}
for dep in data['runtaskdeps']:
data['runtaskhashes'][dep] = self.get_unihash(dep)
@@ -396,13 +421,13 @@ class SignatureGeneratorBasic(SignatureGenerator):
bb.error("Taskhash mismatch %s versus %s for %s" % (computed_taskhash, self.taskhash[tid], tid))
sigfile = sigfile.replace(self.taskhash[tid], computed_taskhash)
- fd, tmpfile = tempfile.mkstemp(dir=os.path.dirname(sigfile), prefix="sigtask.")
+ fd, tmpfile = bb.utils.mkstemp(dir=os.path.dirname(sigfile), prefix="sigtask.")
try:
- with os.fdopen(fd, "wb") as stream:
- p = pickle.dump(data, stream, -1)
- stream.flush()
+ with bb.compress.zstd.open(fd, "wt", encoding="utf-8", num_threads=1) as f:
+ json.dump(data, f, sort_keys=True, separators=(",", ":"), cls=SetEncoder)
+ f.flush()
os.chmod(tmpfile, 0o664)
- os.rename(tmpfile, sigfile)
+ bb.utils.rename(tmpfile, sigfile)
except (OSError, IOError) as err:
try:
os.unlink(tmpfile)
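A sketch of the new on-disk siginfo flow, using a plain file in place of bb.compress.zstd and bb.utils.rename for brevity; the keys are hypothetical:

    import json, os, tempfile
    data = {"task": "do_compile", "basehash": "deadbeef"}
    fd, tmpfile = tempfile.mkstemp(prefix="sigtask.")
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        json.dump(data, f, sort_keys=True, separators=(",", ":"))
    os.chmod(tmpfile, 0o664)
    os.rename(tmpfile, "sigtask.json")  # the real code uses bb.utils.rename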
@@ -468,6 +493,18 @@ class SignatureGeneratorUniHashMixIn(object):
self._client = hashserv.create_client(self.server)
return self._client
+ def reset(self, data):
+ if getattr(self, '_client', None) is not None:
+ self._client.close()
+ self._client = None
+ return super().reset(data)
+
+ def exit(self):
+ if getattr(self, '_client', None) is not None:
+ self._client.close()
+ self._client = None
+ return super().exit()
+
def get_stampfile_hash(self, tid):
if tid in self.taskhash:
# If a unique hash is reported, use it as the stampfile hash. This
@@ -542,7 +579,7 @@ class SignatureGeneratorUniHashMixIn(object):
hashequiv_logger.debug((1, 2)[unihash == taskhash], 'Found unihash %s in place of %s for %s from %s' % (unihash, taskhash, tid, self.server))
else:
hashequiv_logger.debug2('No reported unihash for %s:%s from %s' % (tid, taskhash, self.server))
- except hashserv.client.HashConnectionError as e:
+ except ConnectionError as e:
bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
self.set_unihash(tid, unihash)
@@ -563,7 +600,7 @@ class SignatureGeneratorUniHashMixIn(object):
if self.setscenetasks and tid not in self.setscenetasks:
return
- # This can happen if locked sigs are in action. Detect and just abort
+ # This can happen if locked sigs are in action. Detect and just exit
if taskhash != self.taskhash[tid]:
return
@@ -621,7 +658,7 @@ class SignatureGeneratorUniHashMixIn(object):
d.setVar('BB_UNIHASH', new_unihash)
else:
hashequiv_logger.debug('Reported task %s as unihash %s to %s' % (taskhash, unihash, self.server))
- except hashserv.client.HashConnectionError as e:
+ except ConnectionError as e:
bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
finally:
if sigfile:
@@ -661,7 +698,7 @@ class SignatureGeneratorUniHashMixIn(object):
# TODO: What to do here?
hashequiv_logger.verbose('Task %s unihash reported as unwanted hash %s' % (tid, finalunihash))
- except hashserv.client.HashConnectionError as e:
+ except ConnectionError as e:
bb.warn('Error contacting Hash Equivalence Server %s: %s' % (self.server, str(e)))
return False
@@ -774,6 +811,16 @@ def clean_basepaths_list(a):
b.append(clean_basepath(x))
return b
+# Handle renamed fields
+def handle_renames(data):
+ if 'basewhitelist' in data:
+ data['basehash_ignore_vars'] = data['basewhitelist']
+ del data['basewhitelist']
+ if 'taskwhitelist' in data:
+ data['taskhash_ignore_tasks'] = data['taskwhitelist']
+ del data['taskwhitelist']
+
+
def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
output = []
@@ -794,20 +841,21 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
formatparams.update(values)
return formatstr.format(**formatparams)
- with open(a, 'rb') as f:
- p1 = pickle.Unpickler(f)
- a_data = p1.load()
- with open(b, 'rb') as f:
- p2 = pickle.Unpickler(f)
- b_data = p2.load()
+ with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
+ a_data = json.load(f, object_hook=SetDecoder)
+ with bb.compress.zstd.open(b, "rt", encoding="utf-8", num_threads=1) as f:
+ b_data = json.load(f, object_hook=SetDecoder)
+
+ for data in [a_data, b_data]:
+ handle_renames(data)
- def dict_diff(a, b, whitelist=set()):
+ def dict_diff(a, b, ignored_vars=set()):
sa = set(a.keys())
sb = set(b.keys())
common = sa & sb
changed = set()
for i in common:
- if a[i] != b[i] and i not in whitelist:
+ if a[i] != b[i] and i not in ignored_vars:
changed.add(i)
added = sb - sa
removed = sa - sb
@@ -815,11 +863,11 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
def file_checksums_diff(a, b):
from collections import Counter
- # Handle old siginfo format
- if isinstance(a, dict):
- a = [(os.path.basename(f), cs) for f, cs in a.items()]
- if isinstance(b, dict):
- b = [(os.path.basename(f), cs) for f, cs in b.items()]
+
+ # Convert lists back to tuples
+ a = [(f[0], f[1]) for f in a]
+ b = [(f[0], f[1]) for f in b]
+
# Compare lists, ensuring we can handle duplicate filenames if they exist
removedcount = Counter(a)
removedcount.subtract(b)
@@ -846,15 +894,15 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
removed = [x[0] for x in removed]
return changed, added, removed
- if 'basewhitelist' in a_data and a_data['basewhitelist'] != b_data['basewhitelist']:
- output.append(color_format("{color_title}basewhitelist changed{color_default} from '%s' to '%s'") % (a_data['basewhitelist'], b_data['basewhitelist']))
- if a_data['basewhitelist'] and b_data['basewhitelist']:
- output.append("changed items: %s" % a_data['basewhitelist'].symmetric_difference(b_data['basewhitelist']))
+ if 'basehash_ignore_vars' in a_data and a_data['basehash_ignore_vars'] != b_data['basehash_ignore_vars']:
+ output.append(color_format("{color_title}basehash_ignore_vars changed{color_default} from '%s' to '%s'") % (a_data['basehash_ignore_vars'], b_data['basehash_ignore_vars']))
+ if a_data['basehash_ignore_vars'] and b_data['basehash_ignore_vars']:
+ output.append("changed items: %s" % a_data['basehash_ignore_vars'].symmetric_difference(b_data['basehash_ignore_vars']))
- if 'taskwhitelist' in a_data and a_data['taskwhitelist'] != b_data['taskwhitelist']:
- output.append(color_format("{color_title}taskwhitelist changed{color_default} from '%s' to '%s'") % (a_data['taskwhitelist'], b_data['taskwhitelist']))
- if a_data['taskwhitelist'] and b_data['taskwhitelist']:
- output.append("changed items: %s" % a_data['taskwhitelist'].symmetric_difference(b_data['taskwhitelist']))
+ if 'taskhash_ignore_tasks' in a_data and a_data['taskhash_ignore_tasks'] != b_data['taskhash_ignore_tasks']:
+ output.append(color_format("{color_title}taskhash_ignore_tasks changed{color_default} from '%s' to '%s'") % (a_data['taskhash_ignore_tasks'], b_data['taskhash_ignore_tasks']))
+ if a_data['taskhash_ignore_tasks'] and b_data['taskhash_ignore_tasks']:
+ output.append("changed items: %s" % a_data['taskhash_ignore_tasks'].symmetric_difference(b_data['taskhash_ignore_tasks']))
if a_data['taskdeps'] != b_data['taskdeps']:
output.append(color_format("{color_title}Task dependencies changed{color_default} from:\n%s\nto:\n%s") % (sorted(a_data['taskdeps']), sorted(b_data['taskdeps'])))
@@ -862,23 +910,23 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
if a_data['basehash'] != b_data['basehash'] and not collapsed:
output.append(color_format("{color_title}basehash changed{color_default} from %s to %s") % (a_data['basehash'], b_data['basehash']))
- changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'], a_data['basewhitelist'] & b_data['basewhitelist'])
+ changed, added, removed = dict_diff(a_data['gendeps'], b_data['gendeps'], a_data['basehash_ignore_vars'] & b_data['basehash_ignore_vars'])
if changed:
- for dep in changed:
+ for dep in sorted(changed):
output.append(color_format("{color_title}List of dependencies for variable %s changed from '{color_default}%s{color_title}' to '{color_default}%s{color_title}'") % (dep, a_data['gendeps'][dep], b_data['gendeps'][dep]))
if a_data['gendeps'][dep] and b_data['gendeps'][dep]:
output.append("changed items: %s" % a_data['gendeps'][dep].symmetric_difference(b_data['gendeps'][dep]))
if added:
- for dep in added:
+ for dep in sorted(added):
output.append(color_format("{color_title}Dependency on variable %s was added") % (dep))
if removed:
- for dep in removed:
+ for dep in sorted(removed):
output.append(color_format("{color_title}Dependency on Variable %s was removed") % (dep))
changed, added, removed = dict_diff(a_data['varvals'], b_data['varvals'])
if changed:
- for dep in changed:
+ for dep in sorted(changed):
oldval = a_data['varvals'][dep]
newval = b_data['varvals'][dep]
if newval and oldval and ('\n' in oldval or '\n' in newval):
@@ -902,9 +950,9 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
output.append(color_format("{color_title}Variable {var} value changed from '{color_default}{oldval}{color_title}' to '{color_default}{newval}{color_title}'{color_default}", var=dep, oldval=oldval, newval=newval))
if not 'file_checksum_values' in a_data:
- a_data['file_checksum_values'] = {}
+ a_data['file_checksum_values'] = []
if not 'file_checksum_values' in b_data:
- b_data['file_checksum_values'] = {}
+ b_data['file_checksum_values'] = []
changed, added, removed = file_checksums_diff(a_data['file_checksum_values'], b_data['file_checksum_values'])
if changed:
@@ -948,7 +996,7 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
b = clean_basepaths(b_data['runtaskhashes'])
changed, added, removed = dict_diff(a, b)
if added:
- for dep in added:
+ for dep in sorted(added):
bdep_found = False
if removed:
for bdep in removed:
@@ -958,7 +1006,7 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
if not bdep_found:
output.append(color_format("{color_title}Dependency on task %s was added{color_default} with hash %s") % (dep, b[dep]))
if removed:
- for dep in removed:
+ for dep in sorted(removed):
adep_found = False
if added:
for adep in added:
@@ -968,9 +1016,9 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
if not adep_found:
output.append(color_format("{color_title}Dependency on task %s was removed{color_default} with hash %s") % (dep, a[dep]))
if changed:
- for dep in changed:
+ for dep in sorted(changed):
if not collapsed:
- output.append(color_format("{color_title}Hash for dependent task %s changed{color_default} from %s to %s") % (dep, a[dep], b[dep]))
+ output.append(color_format("{color_title}Hash for task dependency %s changed{color_default} from %s to %s") % (dep, a[dep], b[dep]))
if callable(recursecb):
recout = recursecb(dep, a[dep], b[dep])
if recout:
@@ -980,7 +1028,6 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
# If a dependent hash changed, might as well print the line above and then defer to the changes in
# that hash since in all likelihood, they're the same changes this task also saw.
output = [output[-1]] + recout
- break
a_taint = a_data.get('taint', None)
b_taint = b_data.get('taint', None)
@@ -1018,6 +1065,8 @@ def calc_taskhash(sigdata):
for c in sigdata['file_checksum_values']:
if c[1]:
+ if "./" in c[0]:
+ data = data + c[0]
data = data + c[1]
if 'taint' in sigdata:
@@ -1032,32 +1081,33 @@ def calc_taskhash(sigdata):
def dump_sigfile(a):
output = []
- with open(a, 'rb') as f:
- p1 = pickle.Unpickler(f)
- a_data = p1.load()
+ with bb.compress.zstd.open(a, "rt", encoding="utf-8", num_threads=1) as f:
+ a_data = json.load(f, object_hook=SetDecoder)
+
+ handle_renames(a_data)
- output.append("basewhitelist: %s" % (a_data['basewhitelist']))
+ output.append("basehash_ignore_vars: %s" % (sorted(a_data['basehash_ignore_vars'])))
- output.append("taskwhitelist: %s" % (a_data['taskwhitelist']))
+ output.append("taskhash_ignore_tasks: %s" % (sorted(a_data['taskhash_ignore_tasks'] or [])))
output.append("Task dependencies: %s" % (sorted(a_data['taskdeps'])))
output.append("basehash: %s" % (a_data['basehash']))
- for dep in a_data['gendeps']:
- output.append("List of dependencies for variable %s is %s" % (dep, a_data['gendeps'][dep]))
+ for dep in sorted(a_data['gendeps']):
+ output.append("List of dependencies for variable %s is %s" % (dep, sorted(a_data['gendeps'][dep])))
- for dep in a_data['varvals']:
+ for dep in sorted(a_data['varvals']):
output.append("Variable %s value is %s" % (dep, a_data['varvals'][dep]))
if 'runtaskdeps' in a_data:
- output.append("Tasks this task depends on: %s" % (a_data['runtaskdeps']))
+ output.append("Tasks this task depends on: %s" % (sorted(a_data['runtaskdeps'])))
if 'file_checksum_values' in a_data:
- output.append("This task depends on the checksums of files: %s" % (a_data['file_checksum_values']))
+ output.append("This task depends on the checksums of files: %s" % (sorted(a_data['file_checksum_values'])))
if 'runtaskhashes' in a_data:
- for dep in a_data['runtaskhashes']:
+ for dep in sorted(a_data['runtaskhashes']):
output.append("Hash for dependent task %s is %s" % (dep, a_data['runtaskhashes'][dep]))
if 'taint' in a_data:
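
The siggen hunks above switch the sigdata format from pickle to zstd-compressed JSON and rename the whitelist fields to basehash_ignore_vars/taskhash_ignore_tasks. A minimal sketch of loading one of the new files outside the signature generator, assuming the SetDecoder object_hook defined in this module is importable:

    import json
    import bb.compress.zstd
    from bb.siggen import SetDecoder  # assumption: the JSON object_hook lives in this module

    def load_sigdata(path):
        # Sigdata files are now zstd-compressed JSON rather than pickle.
        with bb.compress.zstd.open(path, "rt", encoding="utf-8", num_threads=1) as f:
            return json.load(f, object_hook=SetDecoder)
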
diff --git a/bitbake/lib/bb/taskdata.py b/bitbake/lib/bb/taskdata.py
index 47bad6d..66545a6 100644
--- a/bitbake/lib/bb/taskdata.py
+++ b/bitbake/lib/bb/taskdata.py
@@ -39,7 +39,7 @@ class TaskData:
"""
BitBake Task Data implementation
"""
- def __init__(self, abort = True, skiplist = None, allowincomplete = False):
+ def __init__(self, halt = True, skiplist = None, allowincomplete = False):
self.build_targets = {}
self.run_targets = {}
@@ -57,7 +57,7 @@ class TaskData:
self.failed_rdeps = []
self.failed_fns = []
- self.abort = abort
+ self.halt = halt
self.allowincomplete = allowincomplete
self.skiplist = skiplist
@@ -328,7 +328,7 @@ class TaskData:
try:
self.add_provider_internal(cfgData, dataCache, item)
except bb.providers.NoProvider:
- if self.abort:
+ if self.halt:
raise
self.remove_buildtarget(item)
@@ -451,12 +451,12 @@ class TaskData:
for target in self.build_targets:
if fn in self.build_targets[target]:
self.build_targets[target].remove(fn)
- if len(self.build_targets[target]) == 0:
+ if not self.build_targets[target]:
self.remove_buildtarget(target, missing_list)
for target in self.run_targets:
if fn in self.run_targets[target]:
self.run_targets[target].remove(fn)
- if len(self.run_targets[target]) == 0:
+ if not self.run_targets[target]:
self.remove_runtarget(target, missing_list)
def remove_buildtarget(self, target, missing_list=None):
@@ -479,7 +479,7 @@ class TaskData:
fn = tid.rsplit(":",1)[0]
self.fail_fn(fn, missing_list)
- if self.abort and target in self.external_targets:
+ if self.halt and target in self.external_targets:
logger.error("Required build target '%s' has no buildable providers.\nMissing or unbuildable dependency chain was: %s", target, missing_list)
raise bb.providers.NoProvider(target)
@@ -516,7 +516,7 @@ class TaskData:
self.add_provider_internal(cfgData, dataCache, target)
added = added + 1
except bb.providers.NoProvider:
- if self.abort and target in self.external_targets and not self.allowincomplete:
+ if self.halt and target in self.external_targets and not self.allowincomplete:
raise
if not self.allowincomplete:
self.remove_buildtarget(target)
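
The taskdata.py rename is mechanical: the abort flag and keyword become halt with unchanged semantics, i.e. a missing provider for an external target still raises bb.providers.NoProvider when the flag is set. A short sketch of the updated constructor call (callers still passing abort= would now get a TypeError):

    import bb.taskdata

    # BitBake 2.0: 'halt' replaces the old 'abort' keyword.
    taskdata = bb.taskdata.TaskData(halt=True, skiplist=None, allowincomplete=False)
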
diff --git a/bitbake/lib/bb/tests/codeparser.py b/bitbake/lib/bb/tests/codeparser.py
index f485204..71ed382 100644
--- a/bitbake/lib/bb/tests/codeparser.py
+++ b/bitbake/lib/bb/tests/codeparser.py
@@ -318,7 +318,7 @@ d.getVar(a(), False)
"filename": "example.bb",
})
- deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
+ deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
self.assertEqual(deps, set(["somevar", "bar", "something", "inexpand", "test", "test2", "a"]))
@@ -365,7 +365,7 @@ esac
self.d.setVarFlags("FOO", {"func": True})
self.setEmptyVars(execs)
- deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
+ deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
self.assertEqual(deps, set(["somevar", "inverted"] + execs))
@@ -375,7 +375,7 @@ esac
self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
self.d.setVarFlag("FOO", "vardeps", "oe_libinstall")
- deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
+ deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
self.assertEqual(deps, set(["oe_libinstall"]))
@@ -384,7 +384,7 @@ esac
self.d.setVar("FOO", "foo=oe_libinstall; eval $foo")
self.d.setVarFlag("FOO", "vardeps", "${@'oe_libinstall'}")
- deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), self.d)
+ deps, values = bb.data.build_dependencies("FOO", set(self.d.keys()), set(), set(), set(), self.d)
self.assertEqual(deps, set(["oe_libinstall"]))
@@ -399,7 +399,7 @@ esac
# Check dependencies
self.d.setVar('ANOTHERVAR', expr)
self.d.setVar('TESTVAR', 'anothervalue testval testval2')
- deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), self.d)
+ deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), self.d)
self.assertEqual(sorted(values.splitlines()),
sorted([expr,
'TESTVAR{anothervalue} = Set',
@@ -412,6 +412,24 @@ esac
# Check final value
self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['anothervalue', 'yetanothervalue', 'lastone'])
+ def test_contains_vardeps_excluded(self):
+ # Check that the ignored_vars option to build_dependencies is handled by the contains functionality
+ varval = '${TESTVAR2} ${@bb.utils.filter("TESTVAR", "somevalue anothervalue", d)}'
+ self.d.setVar('ANOTHERVAR', varval)
+ self.d.setVar('TESTVAR', 'anothervalue testval testval2')
+ self.d.setVar('TESTVAR2', 'testval3')
+ deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(["TESTVAR"]), self.d)
+ self.assertEqual(sorted(values.splitlines()), sorted([varval]))
+ self.assertEqual(deps, set(["TESTVAR2"]))
+ self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['testval3', 'anothervalue'])
+
+ # Check that the vardepsexclude flag is handled by the contains functionality
+ self.d.setVarFlag('ANOTHERVAR', 'vardepsexclude', 'TESTVAR')
+ deps, values = bb.data.build_dependencies("ANOTHERVAR", set(self.d.keys()), set(), set(), set(), self.d)
+ self.assertEqual(sorted(values.splitlines()), sorted([varval]))
+ self.assertEqual(deps, set(["TESTVAR2"]))
+ self.assertEqual(self.d.getVar('ANOTHERVAR').split(), ['testval3', 'anothervalue'])
+
#Currently no wildcard support
#def test_vardeps_wildcards(self):
# self.d.setVar("oe_libinstall", "echo test")
diff --git a/bitbake/lib/bb/tests/compression.py b/bitbake/lib/bb/tests/compression.py
new file mode 100644
index 0000000..95af3f9
--- /dev/null
+++ b/bitbake/lib/bb/tests/compression.py
@@ -0,0 +1,100 @@
+#
+# Copyright BitBake Contributors
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+from pathlib import Path
+import bb.compress.lz4
+import bb.compress.zstd
+import contextlib
+import os
+import shutil
+import tempfile
+import unittest
+import subprocess
+
+
+class CompressionTests(object):
+ def setUp(self):
+ self._t = tempfile.TemporaryDirectory()
+ self.tmpdir = Path(self._t.name)
+ self.addCleanup(self._t.cleanup)
+
+ def _file_helper(self, mode_suffix, data):
+ tmp_file = self.tmpdir / "compressed"
+
+ with self.do_open(tmp_file, mode="w" + mode_suffix) as f:
+ f.write(data)
+
+ with self.do_open(tmp_file, mode="r" + mode_suffix) as f:
+ read_data = f.read()
+
+ self.assertEqual(read_data, data)
+
+ def test_text_file(self):
+ self._file_helper("t", "Hello")
+
+ def test_binary_file(self):
+ self._file_helper("b", "Hello".encode("utf-8"))
+
+ def _pipe_helper(self, mode_suffix, data):
+ rfd, wfd = os.pipe()
+ with open(rfd, "rb") as r, open(wfd, "wb") as w:
+ with self.do_open(r, mode="r" + mode_suffix) as decompress:
+ with self.do_open(w, mode="w" + mode_suffix) as compress:
+ compress.write(data)
+ read_data = decompress.read()
+
+ self.assertEqual(read_data, data)
+
+ def test_text_pipe(self):
+ self._pipe_helper("t", "Hello")
+
+ def test_binary_pipe(self):
+ self._pipe_helper("b", "Hello".encode("utf-8"))
+
+ def test_bad_decompress(self):
+ tmp_file = self.tmpdir / "compressed"
+ with tmp_file.open("wb") as f:
+ f.write(b"\x00")
+
+ with self.assertRaises(OSError):
+ with self.do_open(tmp_file, mode="rb", stderr=subprocess.DEVNULL) as f:
+ data = f.read()
+
+
+class LZ4Tests(CompressionTests, unittest.TestCase):
+ def setUp(self):
+ if shutil.which("lz4c") is None:
+ self.skipTest("'lz4c' not found")
+ super().setUp()
+
+ @contextlib.contextmanager
+ def do_open(self, *args, **kwargs):
+ with bb.compress.lz4.open(*args, **kwargs) as f:
+ yield f
+
+
+class ZStdTests(CompressionTests, unittest.TestCase):
+ def setUp(self):
+ if shutil.which("zstd") is None:
+ self.skipTest("'zstd' not found")
+ super().setUp()
+
+ @contextlib.contextmanager
+ def do_open(self, *args, **kwargs):
+ with bb.compress.zstd.open(*args, **kwargs) as f:
+ yield f
+
+
+class PZStdTests(CompressionTests, unittest.TestCase):
+ def setUp(self):
+ if shutil.which("pzstd") is None:
+ self.skipTest("'pzstd' not found")
+ super().setUp()
+
+ @contextlib.contextmanager
+ def do_open(self, *args, **kwargs):
+ with bb.compress.zstd.open(*args, num_threads=2, **kwargs) as f:
+ yield f
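
This new test module exercises bb.compress.lz4 and bb.compress.zstd, whose open() wraps the external lz4c/zstd tools in a file-like object (the PZStd variant above passes num_threads=2 and requires the pzstd tool). A small usage sketch, assuming the zstd binary is on PATH:

    import bb.compress.zstd

    # Round-trip a text payload through the external compressor.
    with bb.compress.zstd.open("/tmp/demo.zst", "wt", encoding="utf-8") as f:
        f.write("Hello")

    with bb.compress.zstd.open("/tmp/demo.zst", "rt", encoding="utf-8") as f:
        assert f.read() == "Hello"
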
diff --git a/bitbake/lib/bb/tests/cooker.py b/bitbake/lib/bb/tests/cooker.py
index c82d4b7..9e524ae 100644
--- a/bitbake/lib/bb/tests/cooker.py
+++ b/bitbake/lib/bb/tests/cooker.py
@@ -1,6 +1,8 @@
#
# BitBake Tests for cooker.py
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
diff --git a/bitbake/lib/bb/tests/data.py b/bitbake/lib/bb/tests/data.py
index 1d4a64b..e667c7c 100644
--- a/bitbake/lib/bb/tests/data.py
+++ b/bitbake/lib/bb/tests/data.py
@@ -245,35 +245,35 @@ class TestConcatOverride(unittest.TestCase):
def test_prepend(self):
self.d.setVar("TEST", "${VAL}")
- self.d.setVar("TEST_prepend", "${FOO}:")
+ self.d.setVar("TEST:prepend", "${FOO}:")
self.assertEqual(self.d.getVar("TEST"), "foo:val")
def test_append(self):
self.d.setVar("TEST", "${VAL}")
- self.d.setVar("TEST_append", ":${BAR}")
+ self.d.setVar("TEST:append", ":${BAR}")
self.assertEqual(self.d.getVar("TEST"), "val:bar")
def test_multiple_append(self):
self.d.setVar("TEST", "${VAL}")
- self.d.setVar("TEST_prepend", "${FOO}:")
- self.d.setVar("TEST_append", ":val2")
- self.d.setVar("TEST_append", ":${BAR}")
+ self.d.setVar("TEST:prepend", "${FOO}:")
+ self.d.setVar("TEST:append", ":val2")
+ self.d.setVar("TEST:append", ":${BAR}")
self.assertEqual(self.d.getVar("TEST"), "foo:val:val2:bar")
def test_append_unset(self):
- self.d.setVar("TEST_prepend", "${FOO}:")
- self.d.setVar("TEST_append", ":val2")
- self.d.setVar("TEST_append", ":${BAR}")
+ self.d.setVar("TEST:prepend", "${FOO}:")
+ self.d.setVar("TEST:append", ":val2")
+ self.d.setVar("TEST:append", ":${BAR}")
self.assertEqual(self.d.getVar("TEST"), "foo::val2:bar")
def test_remove(self):
self.d.setVar("TEST", "${VAL} ${BAR}")
- self.d.setVar("TEST_remove", "val")
+ self.d.setVar("TEST:remove", "val")
self.assertEqual(self.d.getVar("TEST"), " bar")
def test_remove_cleared(self):
self.d.setVar("TEST", "${VAL} ${BAR}")
- self.d.setVar("TEST_remove", "val")
+ self.d.setVar("TEST:remove", "val")
self.d.setVar("TEST", "${VAL} ${BAR}")
self.assertEqual(self.d.getVar("TEST"), "val bar")
@@ -281,42 +281,42 @@ class TestConcatOverride(unittest.TestCase):
# (including that whitespace is preserved)
def test_remove_inactive_override(self):
self.d.setVar("TEST", "${VAL} ${BAR} 123")
- self.d.setVar("TEST_remove_inactiveoverride", "val")
+ self.d.setVar("TEST:remove:inactiveoverride", "val")
self.assertEqual(self.d.getVar("TEST"), "val bar 123")
def test_doubleref_remove(self):
self.d.setVar("TEST", "${VAL} ${BAR}")
- self.d.setVar("TEST_remove", "val")
+ self.d.setVar("TEST:remove", "val")
self.d.setVar("TEST_TEST", "${TEST} ${TEST}")
self.assertEqual(self.d.getVar("TEST_TEST"), " bar bar")
def test_empty_remove(self):
self.d.setVar("TEST", "")
- self.d.setVar("TEST_remove", "val")
+ self.d.setVar("TEST:remove", "val")
self.assertEqual(self.d.getVar("TEST"), "")
def test_remove_expansion(self):
self.d.setVar("BAR", "Z")
self.d.setVar("TEST", "${BAR}/X Y")
- self.d.setVar("TEST_remove", "${BAR}/X")
+ self.d.setVar("TEST:remove", "${BAR}/X")
self.assertEqual(self.d.getVar("TEST"), " Y")
def test_remove_expansion_items(self):
self.d.setVar("TEST", "A B C D")
self.d.setVar("BAR", "B D")
- self.d.setVar("TEST_remove", "${BAR}")
+ self.d.setVar("TEST:remove", "${BAR}")
self.assertEqual(self.d.getVar("TEST"), "A C ")
def test_remove_preserve_whitespace(self):
# When the removal isn't active, the original value should be preserved
self.d.setVar("TEST", " A B")
- self.d.setVar("TEST_remove", "C")
+ self.d.setVar("TEST:remove", "C")
self.assertEqual(self.d.getVar("TEST"), " A B")
def test_remove_preserve_whitespace2(self):
# When the removal is active preserve the whitespace
self.d.setVar("TEST", " A B")
- self.d.setVar("TEST_remove", "B")
+ self.d.setVar("TEST:remove", "B")
self.assertEqual(self.d.getVar("TEST"), " A ")
class TestOverrides(unittest.TestCase):
@@ -329,70 +329,70 @@ class TestOverrides(unittest.TestCase):
self.assertEqual(self.d.getVar("TEST"), "testvalue")
def test_one_override(self):
- self.d.setVar("TEST_bar", "testvalue2")
+ self.d.setVar("TEST:bar", "testvalue2")
self.assertEqual(self.d.getVar("TEST"), "testvalue2")
def test_one_override_unset(self):
- self.d.setVar("TEST2_bar", "testvalue2")
+ self.d.setVar("TEST2:bar", "testvalue2")
self.assertEqual(self.d.getVar("TEST2"), "testvalue2")
- self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST2', 'OVERRIDES', 'TEST2_bar'])
+ self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST2', 'OVERRIDES', 'TEST2:bar'])
def test_multiple_override(self):
- self.d.setVar("TEST_bar", "testvalue2")
- self.d.setVar("TEST_local", "testvalue3")
- self.d.setVar("TEST_foo", "testvalue4")
+ self.d.setVar("TEST:bar", "testvalue2")
+ self.d.setVar("TEST:local", "testvalue3")
+ self.d.setVar("TEST:foo", "testvalue4")
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
- self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST_foo', 'OVERRIDES', 'TEST_bar', 'TEST_local'])
+ self.assertCountEqual(list(self.d.keys()), ['TEST', 'TEST:foo', 'OVERRIDES', 'TEST:bar', 'TEST:local'])
def test_multiple_combined_overrides(self):
- self.d.setVar("TEST_local_foo_bar", "testvalue3")
+ self.d.setVar("TEST:local:foo:bar", "testvalue3")
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
def test_multiple_overrides_unset(self):
- self.d.setVar("TEST2_local_foo_bar", "testvalue3")
+ self.d.setVar("TEST2:local:foo:bar", "testvalue3")
self.assertEqual(self.d.getVar("TEST2"), "testvalue3")
def test_keyexpansion_override(self):
self.d.setVar("LOCAL", "local")
- self.d.setVar("TEST_bar", "testvalue2")
- self.d.setVar("TEST_${LOCAL}", "testvalue3")
- self.d.setVar("TEST_foo", "testvalue4")
+ self.d.setVar("TEST:bar", "testvalue2")
+ self.d.setVar("TEST:${LOCAL}", "testvalue3")
+ self.d.setVar("TEST:foo", "testvalue4")
bb.data.expandKeys(self.d)
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
def test_rename_override(self):
- self.d.setVar("ALTERNATIVE_ncurses-tools_class-target", "a")
+ self.d.setVar("ALTERNATIVE:ncurses-tools:class-target", "a")
self.d.setVar("OVERRIDES", "class-target")
- self.d.renameVar("ALTERNATIVE_ncurses-tools", "ALTERNATIVE_lib32-ncurses-tools")
- self.assertEqual(self.d.getVar("ALTERNATIVE_lib32-ncurses-tools"), "a")
+ self.d.renameVar("ALTERNATIVE:ncurses-tools", "ALTERNATIVE:lib32-ncurses-tools")
+ self.assertEqual(self.d.getVar("ALTERNATIVE:lib32-ncurses-tools"), "a")
def test_underscore_override(self):
- self.d.setVar("TEST_bar", "testvalue2")
- self.d.setVar("TEST_some_val", "testvalue3")
- self.d.setVar("TEST_foo", "testvalue4")
+ self.d.setVar("TEST:bar", "testvalue2")
+ self.d.setVar("TEST:some_val", "testvalue3")
+ self.d.setVar("TEST:foo", "testvalue4")
self.d.setVar("OVERRIDES", "foo:bar:some_val")
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
def test_remove_with_override(self):
- self.d.setVar("TEST_bar", "testvalue2")
- self.d.setVar("TEST_some_val", "testvalue3 testvalue5")
- self.d.setVar("TEST_some_val_remove", "testvalue3")
- self.d.setVar("TEST_foo", "testvalue4")
+ self.d.setVar("TEST:bar", "testvalue2")
+ self.d.setVar("TEST:some_val", "testvalue3 testvalue5")
+ self.d.setVar("TEST:some_val:remove", "testvalue3")
+ self.d.setVar("TEST:foo", "testvalue4")
self.d.setVar("OVERRIDES", "foo:bar:some_val")
self.assertEqual(self.d.getVar("TEST"), " testvalue5")
def test_append_and_override_1(self):
- self.d.setVar("TEST_append", "testvalue2")
- self.d.setVar("TEST_bar", "testvalue3")
+ self.d.setVar("TEST:append", "testvalue2")
+ self.d.setVar("TEST:bar", "testvalue3")
self.assertEqual(self.d.getVar("TEST"), "testvalue3testvalue2")
def test_append_and_override_2(self):
- self.d.setVar("TEST_append_bar", "testvalue2")
+ self.d.setVar("TEST:append:bar", "testvalue2")
self.assertEqual(self.d.getVar("TEST"), "testvaluetestvalue2")
def test_append_and_override_3(self):
- self.d.setVar("TEST_bar_append", "testvalue2")
+ self.d.setVar("TEST:bar:append", "testvalue2")
self.assertEqual(self.d.getVar("TEST"), "testvalue2")
# Test an override with _<numeric> in it based on a real world OE issue
@@ -400,11 +400,16 @@ class TestOverrides(unittest.TestCase):
self.d.setVar("TARGET_ARCH", "x86_64")
self.d.setVar("PN", "test-${TARGET_ARCH}")
self.d.setVar("VERSION", "1")
- self.d.setVar("VERSION_pn-test-${TARGET_ARCH}", "2")
+ self.d.setVar("VERSION:pn-test-${TARGET_ARCH}", "2")
self.d.setVar("OVERRIDES", "pn-${PN}")
bb.data.expandKeys(self.d)
self.assertEqual(self.d.getVar("VERSION"), "2")
+ def test_append_and_unused_override(self):
+ # Regression test: an unused override append used to return "" instead of None
+ self.d.setVar("BAR:append:unusedoverride", "testvalue2")
+ self.assertEqual(self.d.getVar("BAR"), None)
+
class TestKeyExpansion(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
@@ -498,7 +503,7 @@ class TaskHash(unittest.TestCase):
d.setVar("VAR", "val")
# Adding an inactive removal shouldn't change the hash
d.setVar("BAR", "notbar")
- d.setVar("MYCOMMAND_remove", "${BAR}")
+ d.setVar("MYCOMMAND:remove", "${BAR}")
nexthash = gettask_bashhash("mytask", d)
self.assertEqual(orighash, nexthash)
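
These data.py test updates track the BitBake 2.0 switch from '_' to ':' as the override separator (TEST_append becomes TEST:append, and so on), which removes the ambiguity with underscores in ordinary variable names such as TEST_some_val. A short sketch of the new spelling against the datastore API:

    import bb.data

    d = bb.data.init()
    d.setVar("TEST", "val")
    d.setVar("TEST:append", ":extra")     # old spelling: TEST_append
    d.setVar("TEST:foo", "override-val")  # active only when 'foo' is in OVERRIDES
    d.setVar("OVERRIDES", "foo")
    # Override selection happens first, then the append is applied.
    assert d.getVar("TEST") == "override-val:extra"
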
diff --git a/bitbake/lib/bb/tests/fetch-testdata/debian/pool/main/m/minicom/index.html b/bitbake/lib/bb/tests/fetch-testdata/debian/pool/main/m/minicom/index.html
new file mode 100644
index 0000000..4a1eb4d
--- /dev/null
+++ b/bitbake/lib/bb/tests/fetch-testdata/debian/pool/main/m/minicom/index.html
@@ -0,0 +1,59 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
+<html>
+ <head>
+ <title>Index of /debian/pool/main/m/minicom</title>
+ </head>
+ <body>
+<h1>Index of /debian/pool/main/m/minicom</h1>
+ <table>
+ <tr><th valign="top"><img src="/icons/blank.gif" alt="[ICO]"></th><th><a href="?C=N;O=D">Name</a></th><th><a href="?C=M;O=A">Last modified</a></th><th><a href="?C=S;O=A">Size</a></th></tr>
+ <tr><th colspan="4"><hr></th></tr>
+<tr><td valign="top"><img src="/icons/back.gif" alt="[PARENTDIR]"></td><td><a href="/debian/pool/main/m/">Parent Directory</a></td><td> </td><td align="right"> - </td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1+deb8u1.debian.tar.xz">minicom_2.7-1+deb8u1.debian.tar.xz</a></td><td align="right">2017-04-24 08:22 </td><td align="right"> 14K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1+deb8u1.dsc">minicom_2.7-1+deb8u1.dsc</a></td><td align="right">2017-04-24 08:22 </td><td align="right">1.9K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1+deb8u1_amd64.deb">minicom_2.7-1+deb8u1_amd64.deb</a></td><td align="right">2017-04-25 21:10 </td><td align="right">257K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1+deb8u1_armel.deb">minicom_2.7-1+deb8u1_armel.deb</a></td><td align="right">2017-04-26 00:58 </td><td align="right">246K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1+deb8u1_armhf.deb">minicom_2.7-1+deb8u1_armhf.deb</a></td><td align="right">2017-04-26 00:58 </td><td align="right">245K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1+deb8u1_i386.deb">minicom_2.7-1+deb8u1_i386.deb</a></td><td align="right">2017-04-25 21:41 </td><td align="right">258K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1.debian.tar.xz">minicom_2.7-1.1.debian.tar.xz</a></td><td align="right">2017-04-22 09:34 </td><td align="right"> 14K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1.dsc">minicom_2.7-1.1.dsc</a></td><td align="right">2017-04-22 09:34 </td><td align="right">1.9K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_amd64.deb">minicom_2.7-1.1_amd64.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">261K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_arm64.deb">minicom_2.7-1.1_arm64.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">250K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_armel.deb">minicom_2.7-1.1_armel.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">255K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_armhf.deb">minicom_2.7-1.1_armhf.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">254K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_i386.deb">minicom_2.7-1.1_i386.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">266K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_mips.deb">minicom_2.7-1.1_mips.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">258K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_mips64el.deb">minicom_2.7-1.1_mips64el.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">259K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_mipsel.deb">minicom_2.7-1.1_mipsel.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">259K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_ppc64el.deb">minicom_2.7-1.1_ppc64el.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">253K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7-1.1_s390x.deb">minicom_2.7-1.1_s390x.deb</a></td><td align="right">2017-04-22 15:29 </td><td align="right">261K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_amd64.deb">minicom_2.7.1-1+b1_amd64.deb</a></td><td align="right">2018-05-06 08:14 </td><td align="right">262K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_arm64.deb">minicom_2.7.1-1+b1_arm64.deb</a></td><td align="right">2018-05-06 07:58 </td><td align="right">250K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_armel.deb">minicom_2.7.1-1+b1_armel.deb</a></td><td align="right">2018-05-06 08:45 </td><td align="right">253K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_armhf.deb">minicom_2.7.1-1+b1_armhf.deb</a></td><td align="right">2018-05-06 10:42 </td><td align="right">253K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_i386.deb">minicom_2.7.1-1+b1_i386.deb</a></td><td align="right">2018-05-06 08:55 </td><td align="right">266K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_mips.deb">minicom_2.7.1-1+b1_mips.deb</a></td><td align="right">2018-05-06 08:14 </td><td align="right">258K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_mipsel.deb">minicom_2.7.1-1+b1_mipsel.deb</a></td><td align="right">2018-05-06 12:13 </td><td align="right">259K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_ppc64el.deb">minicom_2.7.1-1+b1_ppc64el.deb</a></td><td align="right">2018-05-06 09:10 </td><td align="right">260K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b1_s390x.deb">minicom_2.7.1-1+b1_s390x.deb</a></td><td align="right">2018-05-06 08:14 </td><td align="right">257K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1+b2_mips64el.deb">minicom_2.7.1-1+b2_mips64el.deb</a></td><td align="right">2018-05-06 09:41 </td><td align="right">260K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1.debian.tar.xz">minicom_2.7.1-1.debian.tar.xz</a></td><td align="right">2017-08-13 15:40 </td><td align="right"> 14K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.7.1-1.dsc">minicom_2.7.1-1.dsc</a></td><td align="right">2017-08-13 15:40 </td><td align="right">1.8K</td></tr>
+<tr><td valign="top"><img src="/icons/compressed.gif" alt="[ ]"></td><td><a href="minicom_2.7.1.orig.tar.gz">minicom_2.7.1.orig.tar.gz</a></td><td align="right">2017-08-13 15:40 </td><td align="right">855K</td></tr>
+<tr><td valign="top"><img src="/icons/compressed.gif" alt="[ ]"></td><td><a href="minicom_2.7.orig.tar.gz">minicom_2.7.orig.tar.gz</a></td><td align="right">2014-01-01 09:36 </td><td align="right">843K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2.debian.tar.xz">minicom_2.8-2.debian.tar.xz</a></td><td align="right">2021-06-15 03:47 </td><td align="right"> 14K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2.dsc">minicom_2.8-2.dsc</a></td><td align="right">2021-06-15 03:47 </td><td align="right">1.8K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_amd64.deb">minicom_2.8-2_amd64.deb</a></td><td align="right">2021-06-15 03:58 </td><td align="right">280K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_arm64.deb">minicom_2.8-2_arm64.deb</a></td><td align="right">2021-06-15 04:13 </td><td align="right">275K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_armel.deb">minicom_2.8-2_armel.deb</a></td><td align="right">2021-06-15 04:13 </td><td align="right">271K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_armhf.deb">minicom_2.8-2_armhf.deb</a></td><td align="right">2021-06-15 04:13 </td><td align="right">272K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_i386.deb">minicom_2.8-2_i386.deb</a></td><td align="right">2021-06-15 04:13 </td><td align="right">285K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_mips64el.deb">minicom_2.8-2_mips64el.deb</a></td><td align="right">2021-06-15 04:13 </td><td align="right">277K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_mipsel.deb">minicom_2.8-2_mipsel.deb</a></td><td align="right">2021-06-15 04:13 </td><td align="right">278K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_ppc64el.deb">minicom_2.8-2_ppc64el.deb</a></td><td align="right">2021-06-15 04:13 </td><td align="right">286K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8-2_s390x.deb">minicom_2.8-2_s390x.deb</a></td><td align="right">2021-06-15 03:58 </td><td align="right">275K</td></tr>
+<tr><td valign="top"><img src="/icons/unknown.gif" alt="[ ]"></td><td><a href="minicom_2.8.orig.tar.bz2">minicom_2.8.orig.tar.bz2</a></td><td align="right">2021-01-03 12:44 </td><td align="right">598K</td></tr>
+ <tr><th colspan="4"><hr></th></tr>
+</table>
+<address>Apache Server at ftp.debian.org Port 80</address>
+</body></html>
diff --git a/bitbake/lib/bb/tests/fetch.py b/bitbake/lib/bb/tests/fetch.py
index 3b64584..7bace41 100644
--- a/bitbake/lib/bb/tests/fetch.py
+++ b/bitbake/lib/bb/tests/fetch.py
@@ -11,6 +11,7 @@ import hashlib
import tempfile
import collections
import os
+import tarfile
from bb.fetch2 import URI
from bb.fetch2 import FetchMethod
import bb
@@ -18,7 +19,7 @@ from bb.tests.support.httpserver import HTTPService
def skipIfNoNetwork():
if os.environ.get("BB_SKIP_NETTESTS") == "yes":
- return unittest.skip("Network tests being skipped")
+ return unittest.skip("network test")
return lambda f: f
class URITest(unittest.TestCase):
@@ -376,7 +377,7 @@ class FetcherTest(unittest.TestCase):
def setUp(self):
self.origdir = os.getcwd()
self.d = bb.data.init()
- self.tempdir = tempfile.mkdtemp()
+ self.tempdir = tempfile.mkdtemp(prefix="bitbake-fetch-")
self.dldir = os.path.join(self.tempdir, "download")
os.mkdir(self.dldir)
self.d.setVar("DL_DIR", self.dldir)
@@ -393,57 +394,78 @@ class FetcherTest(unittest.TestCase):
bb.process.run('chmod u+rw -R %s' % self.tempdir)
bb.utils.prunedir(self.tempdir)
+ def git(self, cmd, cwd=None):
+ if isinstance(cmd, str):
+ cmd = 'git ' + cmd
+ else:
+ cmd = ['git'] + cmd
+ if cwd is None:
+ cwd = self.gitdir
+ return bb.process.run(cmd, cwd=cwd)[0]
+
+ def git_init(self, cwd=None):
+ self.git('init', cwd=cwd)
+ if not self.git(['config', 'user.email'], cwd=cwd):
+ self.git(['config', 'user.email', 'you@example.com'], cwd=cwd)
+ if not self.git(['config', 'user.name'], cwd=cwd):
+ self.git(['config', 'user.name', 'Your Name'], cwd=cwd)
+
class MirrorUriTest(FetcherTest):
replaceuris = {
- ("git://git.invalid.infradead.org/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "http://somewhere.org/somedir/")
+ ("git://git.invalid.infradead.org/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "http://somewhere.org/somedir/")
: "http://somewhere.org/somedir/git2_git.invalid.infradead.org.mtd-utils.git.tar.gz",
- ("git://git.invalid.infradead.org/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/([^/]+/)*([^/]*)", "git://somewhere.org/somedir/\\2;protocol=http")
- : "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
- ("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/([^/]+/)*([^/]*)", "git://somewhere.org/somedir/\\2;protocol=http")
- : "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
- ("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/([^/]+/)*([^/]*)", "git://somewhere.org/\\2;protocol=http")
- : "git://somewhere.org/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
+ ("git://git.invalid.infradead.org/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/([^/]+/)*([^/]*)", "git://somewhere.org/somedir/\\2;protocol=http")
+ : "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
+ ("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/([^/]+/)*([^/]*)", "git://somewhere.org/somedir/\\2;protocol=http")
+ : "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
+ ("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/([^/]+/)*([^/]*)", "git://somewhere.org/\\2;protocol=http")
+ : "git://somewhere.org/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
("git://someserver.org/bitbake;tag=1234567890123456789012345678901234567890", "git://someserver.org/bitbake", "git://git.openembedded.org/bitbake")
: "git://git.openembedded.org/bitbake;tag=1234567890123456789012345678901234567890",
- ("file://sstate-xyz.tgz", "file://.*", "file:///somewhere/1234/sstate-cache")
+ ("file://sstate-xyz.tgz", "file://.*", "file:///somewhere/1234/sstate-cache")
: "file:///somewhere/1234/sstate-cache/sstate-xyz.tgz",
- ("file://sstate-xyz.tgz", "file://.*", "file:///somewhere/1234/sstate-cache/")
+ ("file://sstate-xyz.tgz", "file://.*", "file:///somewhere/1234/sstate-cache/")
: "file:///somewhere/1234/sstate-cache/sstate-xyz.tgz",
- ("http://somewhere.org/somedir1/somedir2/somefile_1.2.3.tar.gz", "http://.*/.*", "http://somewhere2.org/somedir3")
+ ("http://somewhere.org/somedir1/somedir2/somefile_1.2.3.tar.gz", "http://.*/.*", "http://somewhere2.org/somedir3")
: "http://somewhere2.org/somedir3/somefile_1.2.3.tar.gz",
- ("http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere2.org/somedir3/somefile_1.2.3.tar.gz")
+ ("http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere2.org/somedir3/somefile_1.2.3.tar.gz")
: "http://somewhere2.org/somedir3/somefile_1.2.3.tar.gz",
("http://www.apache.org/dist/subversion/subversion-1.7.1.tar.bz2", "http://www.apache.org/dist", "http://archive.apache.org/dist")
: "http://archive.apache.org/dist/subversion/subversion-1.7.1.tar.bz2",
("http://www.apache.org/dist/subversion/subversion-1.7.1.tar.bz2", "http://.*/.*", "file:///somepath/downloads/")
: "file:///somepath/downloads/subversion-1.7.1.tar.bz2",
- ("git://git.invalid.infradead.org/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "git://somewhere.org/somedir/BASENAME;protocol=http")
- : "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
- ("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "git://somewhere.org/somedir/BASENAME;protocol=http")
- : "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
- ("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "git://somewhere.org/somedir/MIRRORNAME;protocol=http")
- : "git://somewhere.org/somedir/git.invalid.infradead.org.foo.mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
+ ("git://git.invalid.infradead.org/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "git://somewhere.org/somedir/BASENAME;protocol=http")
+ : "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
+ ("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "git://somewhere.org/somedir/BASENAME;protocol=http")
+ : "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
+ ("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "git://somewhere.org/somedir/MIRRORNAME;protocol=http")
+ : "git://somewhere.org/somedir/git.invalid.infradead.org.foo.mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
("http://somewhere.org/somedir1/somedir2/somefile_1.2.3.tar.gz", "http://.*/.*", "http://somewhere2.org")
: "http://somewhere2.org/somefile_1.2.3.tar.gz",
("http://somewhere.org/somedir1/somedir2/somefile_1.2.3.tar.gz", "http://.*/.*", "http://somewhere2.org/")
: "http://somewhere2.org/somefile_1.2.3.tar.gz",
("git://someserver.org/bitbake;tag=1234567890123456789012345678901234567890;branch=master", "git://someserver.org/bitbake;branch=master", "git://git.openembedded.org/bitbake;protocol=http")
: "git://git.openembedded.org/bitbake;tag=1234567890123456789012345678901234567890;branch=master;protocol=http",
-
("git://user1@someserver.org/bitbake;tag=1234567890123456789012345678901234567890;branch=master", "git://someserver.org/bitbake;branch=master", "git://user2@git.openembedded.org/bitbake;protocol=http")
: "git://user2@git.openembedded.org/bitbake;tag=1234567890123456789012345678901234567890;branch=master;protocol=http",
-
+ ("git://someserver.org/bitbake;tag=1234567890123456789012345678901234567890;protocol=git;branch=master", "git://someserver.org/bitbake", "git://someotherserver.org/bitbake;protocol=https")
+ : "git://someotherserver.org/bitbake;tag=1234567890123456789012345678901234567890;protocol=https;branch=master",
+ ("gitsm://git.qemu.org/git/seabios.git/;protocol=https;name=roms/seabios;subpath=roms/seabios;bareclone=1;nobranch=1;rev=1234567890123456789012345678901234567890", "gitsm://.*/.*", "http://petalinux.xilinx.com/sswreleases/rel-v${XILINX_VER_MAIN}/downloads") : "http://petalinux.xilinx.com/sswreleases/rel-v%24%7BXILINX_VER_MAIN%7D/downloads/git2_git.qemu.org.git.seabios.git..tar.gz",
+ ("https://somewhere.org/example/1.0.0/example;downloadfilename=some-example-1.0.0.tgz", "https://.*/.*", "file:///mirror/PATH")
+ : "file:///mirror/example/1.0.0/some-example-1.0.0.tgz;downloadfilename=some-example-1.0.0.tgz",
+ ("https://somewhere.org/example-1.0.0.tgz;downloadfilename=some-example-1.0.0.tgz", "https://.*/.*", "file:///mirror/some-example-1.0.0.tgz")
+ : "file:///mirror/some-example-1.0.0.tgz;downloadfilename=some-example-1.0.0.tgz",
#Renaming files doesn't work
#("http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere2.org/somedir3/somefile_2.3.4.tar.gz") : "http://somewhere2.org/somedir3/somefile_2.3.4.tar.gz"
#("file://sstate-xyz.tgz", "file://.*/.*", "file:///somewhere/1234/sstate-cache") : "file:///somewhere/1234/sstate-cache/sstate-xyz.tgz",
}
- mirrorvar = "http://.*/.* file:///somepath/downloads/ \n" \
- "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n" \
- "https://.*/.* file:///someotherpath/downloads/ \n" \
- "http://.*/.* file:///someotherpath/downloads/ \n"
+ mirrorvar = "http://.*/.* file:///somepath/downloads/ " \
+ "git://someserver.org/bitbake git://git.openembedded.org/bitbake " \
+ "https://.*/.* file:///someotherpath/downloads/ " \
+ "http://.*/.* file:///someotherpath/downloads/"
def test_urireplace(self):
for k, v in self.replaceuris.items():
@@ -468,8 +490,8 @@ class MirrorUriTest(FetcherTest):
def test_mirror_of_mirror(self):
# Test if mirror of a mirror works
- mirrorvar = self.mirrorvar + " http://.*/.* http://otherdownloads.yoctoproject.org/downloads/ \n"
- mirrorvar = mirrorvar + " http://otherdownloads.yoctoproject.org/.* http://downloads2.yoctoproject.org/downloads/ \n"
+ mirrorvar = self.mirrorvar + " http://.*/.* http://otherdownloads.yoctoproject.org/downloads/"
+ mirrorvar = mirrorvar + " http://otherdownloads.yoctoproject.org/.* http://downloads2.yoctoproject.org/downloads/"
fetcher = bb.fetch.FetchData("http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", self.d)
mirrors = bb.fetch2.mirror_from_string(mirrorvar)
uris, uds = bb.fetch2.build_mirroruris(fetcher, mirrors, self.d)
@@ -478,8 +500,8 @@ class MirrorUriTest(FetcherTest):
'http://otherdownloads.yoctoproject.org/downloads/bitbake-1.0.tar.gz',
'http://downloads2.yoctoproject.org/downloads/bitbake-1.0.tar.gz'])
- recmirrorvar = "https://.*/[^/]* http://AAAA/A/A/A/ \n" \
- "https://.*/[^/]* https://BBBB/B/B/B/ \n"
+ recmirrorvar = "https://.*/[^/]* http://AAAA/A/A/A/ " \
+ "https://.*/[^/]* https://BBBB/B/B/B/"
def test_recursive(self):
fetcher = bb.fetch.FetchData("https://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", self.d)
@@ -493,15 +515,15 @@ class MirrorUriTest(FetcherTest):
class GitDownloadDirectoryNamingTest(FetcherTest):
def setUp(self):
super(GitDownloadDirectoryNamingTest, self).setUp()
- self.recipe_url = "git://git.openembedded.org/bitbake"
+ self.recipe_url = "git://git.openembedded.org/bitbake;branch=master"
self.recipe_dir = "git.openembedded.org.bitbake"
- self.mirror_url = "git://github.com/openembedded/bitbake.git;protocol=https"
+ self.mirror_url = "git://github.com/openembedded/bitbake.git;protocol=https;branch=master"
self.mirror_dir = "github.com.openembedded.bitbake.git"
self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40')
def setup_mirror_rewrite(self):
- self.d.setVar("PREMIRRORS", self.recipe_url + " " + self.mirror_url + " \n")
+ self.d.setVar("PREMIRRORS", self.recipe_url + " " + self.mirror_url)
@skipIfNoNetwork()
def test_that_directory_is_named_after_recipe_url_when_no_mirroring_is_used(self):
@@ -541,16 +563,16 @@ class GitDownloadDirectoryNamingTest(FetcherTest):
class TarballNamingTest(FetcherTest):
def setUp(self):
super(TarballNamingTest, self).setUp()
- self.recipe_url = "git://git.openembedded.org/bitbake"
+ self.recipe_url = "git://git.openembedded.org/bitbake;branch=master"
self.recipe_tarball = "git2_git.openembedded.org.bitbake.tar.gz"
- self.mirror_url = "git://github.com/openembedded/bitbake.git;protocol=https"
+ self.mirror_url = "git://github.com/openembedded/bitbake.git;protocol=https;branch=master"
self.mirror_tarball = "git2_github.com.openembedded.bitbake.git.tar.gz"
self.d.setVar('BB_GENERATE_MIRROR_TARBALLS', '1')
self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40')
def setup_mirror_rewrite(self):
- self.d.setVar("PREMIRRORS", self.recipe_url + " " + self.mirror_url + " \n")
+ self.d.setVar("PREMIRRORS", self.recipe_url + " " + self.mirror_url)
@skipIfNoNetwork()
def test_that_the_recipe_tarball_is_created_when_no_mirroring_is_used(self):
@@ -575,9 +597,9 @@ class TarballNamingTest(FetcherTest):
class GitShallowTarballNamingTest(FetcherTest):
def setUp(self):
super(GitShallowTarballNamingTest, self).setUp()
- self.recipe_url = "git://git.openembedded.org/bitbake"
+ self.recipe_url = "git://git.openembedded.org/bitbake;branch=master"
self.recipe_tarball = "gitshallow_git.openembedded.org.bitbake_82ea737-1_master.tar.gz"
- self.mirror_url = "git://github.com/openembedded/bitbake.git;protocol=https"
+ self.mirror_url = "git://github.com/openembedded/bitbake.git;protocol=https;branch=master"
self.mirror_tarball = "gitshallow_github.com.openembedded.bitbake.git_82ea737-1_master.tar.gz"
self.d.setVar('BB_GIT_SHALLOW', '1')
@@ -585,7 +607,7 @@ class GitShallowTarballNamingTest(FetcherTest):
self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40')
def setup_mirror_rewrite(self):
- self.d.setVar("PREMIRRORS", self.recipe_url + " " + self.mirror_url + " \n")
+ self.d.setVar("PREMIRRORS", self.recipe_url + " " + self.mirror_url)
@skipIfNoNetwork()
def test_that_the_tarball_is_named_after_recipe_url_when_no_mirroring_is_used(self):
@@ -607,6 +629,37 @@ class GitShallowTarballNamingTest(FetcherTest):
self.assertIn(self.mirror_tarball, dir)
+class CleanTarballTest(FetcherTest):
+ def setUp(self):
+ super(CleanTarballTest, self).setUp()
+ self.recipe_url = "git://git.openembedded.org/bitbake"
+ self.recipe_tarball = "git2_git.openembedded.org.bitbake.tar.gz"
+
+ self.d.setVar('BB_GENERATE_MIRROR_TARBALLS', '1')
+ self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40')
+
+ @skipIfNoNetwork()
+ def test_that_the_tarball_contents_does_not_leak_info(self):
+ fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
+
+ fetcher.download()
+
+ fetcher.unpack(self.unpackdir)
+ mtime = bb.process.run('git log --all -1 --format=%ct',
+ cwd=os.path.join(self.unpackdir, 'git'))
+ self.assertEqual(len(mtime), 2)
+ mtime = int(mtime[0])
+
+ archive = tarfile.open(os.path.join(self.dldir, self.recipe_tarball))
+ self.assertNotEqual(len(archive.members), 0)
+ for member in archive.members:
+ self.assertEqual(member.uname, 'oe')
+ self.assertEqual(member.uid, 0)
+ self.assertEqual(member.gname, 'oe')
+ self.assertEqual(member.gid, 0)
+ self.assertEqual(member.mtime, mtime)
+
+
class FetcherLocalTest(FetcherTest):
def setUp(self):
def touch(fn):
@@ -624,6 +677,9 @@ class FetcherLocalTest(FetcherTest):
os.makedirs(os.path.join(self.localsrcdir, 'dir', 'subdir'))
touch(os.path.join(self.localsrcdir, 'dir', 'subdir', 'e'))
touch(os.path.join(self.localsrcdir, r'backslash\x2dsystemd-unit.device'))
+ bb.process.run('tar cf archive.tar -C dir .', cwd=self.localsrcdir)
+ bb.process.run('tar czf archive.tar.gz -C dir .', cwd=self.localsrcdir)
+ bb.process.run('tar cjf archive.tar.bz2 -C dir .', cwd=self.localsrcdir)
self.d.setVar("FILESPATH", self.localsrcdir)
def fetchUnpack(self, uris):
@@ -678,61 +734,58 @@ class FetcherLocalTest(FetcherTest):
with self.assertRaises(bb.fetch2.UnpackError):
self.fetchUnpack(['file://a;subdir=/bin/sh'])
- def test_local_gitfetch_usehead(self):
+ def test_local_striplevel(self):
+ tree = self.fetchUnpack(['file://archive.tar;subdir=bar;striplevel=1'])
+ self.assertEqual(tree, ['bar/c', 'bar/d', 'bar/subdir/e'])
+
+ def test_local_striplevel_gzip(self):
+ tree = self.fetchUnpack(['file://archive.tar.gz;subdir=bar;striplevel=1'])
+ self.assertEqual(tree, ['bar/c', 'bar/d', 'bar/subdir/e'])
+
+ def test_local_striplevel_bzip2(self):
+ tree = self.fetchUnpack(['file://archive.tar.bz2;subdir=bar;striplevel=1'])
+ self.assertEqual(tree, ['bar/c', 'bar/d', 'bar/subdir/e'])
+
+ def dummyGitTest(self, suffix):
# Create dummy local Git repo
src_dir = tempfile.mkdtemp(dir=self.tempdir,
prefix='gitfetch_localusehead_')
- src_dir = os.path.abspath(src_dir)
- bb.process.run("git init", cwd=src_dir)
- bb.process.run("git config user.email 'you@example.com'", cwd=src_dir)
- bb.process.run("git config user.name 'Your Name'", cwd=src_dir)
- bb.process.run("git commit --allow-empty -m'Dummy commit'",
- cwd=src_dir)
+ self.gitdir = os.path.abspath(src_dir)
+ self.git_init()
+ self.git(['commit', '--allow-empty', '-m', 'Dummy commit'])
# Use other branch than master
- bb.process.run("git checkout -b my-devel", cwd=src_dir)
- bb.process.run("git commit --allow-empty -m'Dummy commit 2'",
- cwd=src_dir)
- stdout = bb.process.run("git rev-parse HEAD", cwd=src_dir)
- orig_rev = stdout[0].strip()
+ self.git(['checkout', '-b', 'my-devel'])
+ self.git(['commit', '--allow-empty', '-m', 'Dummy commit 2'])
+ orig_rev = self.git(['rev-parse', 'HEAD']).strip()
# Fetch and check revision
self.d.setVar("SRCREV", "AUTOINC")
- url = "git://" + src_dir + ";protocol=file;usehead=1"
+ self.d.setVar("__BBSEENSRCREV", "1")
+ url = "git://" + self.gitdir + ";branch=master;protocol=file;" + suffix
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
fetcher.unpack(self.unpackdir)
- stdout = bb.process.run("git rev-parse HEAD",
- cwd=os.path.join(self.unpackdir, 'git'))
- unpack_rev = stdout[0].strip()
+ unpack_rev = self.git(['rev-parse', 'HEAD'],
+ cwd=os.path.join(self.unpackdir, 'git')).strip()
self.assertEqual(orig_rev, unpack_rev)
+ def test_local_gitfetch_usehead(self):
+ self.dummyGitTest("usehead=1")
+
def test_local_gitfetch_usehead_withname(self):
- # Create dummy local Git repo
- src_dir = tempfile.mkdtemp(dir=self.tempdir,
- prefix='gitfetch_localusehead_')
- src_dir = os.path.abspath(src_dir)
- bb.process.run("git init", cwd=src_dir)
- bb.process.run("git config user.email 'you@example.com'", cwd=src_dir)
- bb.process.run("git config user.name 'Your Name'", cwd=src_dir)
- bb.process.run("git commit --allow-empty -m'Dummy commit'",
- cwd=src_dir)
- # Use other branch than master
- bb.process.run("git checkout -b my-devel", cwd=src_dir)
- bb.process.run("git commit --allow-empty -m'Dummy commit 2'",
- cwd=src_dir)
- stdout = bb.process.run("git rev-parse HEAD", cwd=src_dir)
- orig_rev = stdout[0].strip()
+ self.dummyGitTest("usehead=1;name=newName")
- # Fetch and check revision
- self.d.setVar("SRCREV", "AUTOINC")
- url = "git://" + src_dir + ";protocol=file;usehead=1;name=newName"
- fetcher = bb.fetch.Fetch([url], self.d)
- fetcher.download()
- fetcher.unpack(self.unpackdir)
- stdout = bb.process.run("git rev-parse HEAD",
- cwd=os.path.join(self.unpackdir, 'git'))
- unpack_rev = stdout[0].strip()
- self.assertEqual(orig_rev, unpack_rev)
+ def test_local_gitfetch_shared(self):
+ self.dummyGitTest("usehead=1;name=sharedName")
+ alt = os.path.join(self.unpackdir, 'git/.git/objects/info/alternates')
+ self.assertTrue(os.path.exists(alt))
+
+ def test_local_gitfetch_noshared(self):
+ self.d.setVar('BB_GIT_NOSHARED', '1')
+ self.unpackdir += '_noshared'
+ self.dummyGitTest("usehead=1;name=noSharedName")
+ alt = os.path.join(self.unpackdir, 'git/.git/objects/info/alternates')
+ self.assertFalse(os.path.exists(alt))
class FetcherNoNetworkTest(FetcherTest):
def setUp(self):
@@ -840,12 +893,12 @@ class FetcherNoNetworkTest(FetcherTest):
class FetcherNetworkTest(FetcherTest):
@skipIfNoNetwork()
def test_fetch(self):
- fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
+ fetcher = bb.fetch.Fetch(["https://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "https://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.1.tar.gz"), 57892)
self.d.setVar("BB_NO_NETWORK", "1")
- fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
+ fetcher = bb.fetch.Fetch(["https://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "https://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
fetcher.download()
fetcher.unpack(self.unpackdir)
self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.0/")), 9)
@@ -853,21 +906,21 @@ class FetcherNetworkTest(FetcherTest):
@skipIfNoNetwork()
def test_fetch_mirror(self):
- self.d.setVar("MIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
+ self.d.setVar("MIRRORS", "http://.*/.* https://downloads.yoctoproject.org/releases/bitbake")
fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
@skipIfNoNetwork()
def test_fetch_mirror_of_mirror(self):
- self.d.setVar("MIRRORS", "http://.*/.* http://invalid2.yoctoproject.org/ \n http://invalid2.yoctoproject.org/.* http://downloads.yoctoproject.org/releases/bitbake")
+ self.d.setVar("MIRRORS", "http://.*/.* http://invalid2.yoctoproject.org/ http://invalid2.yoctoproject.org/.* https://downloads.yoctoproject.org/releases/bitbake")
fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
@skipIfNoNetwork()
def test_fetch_file_mirror_of_mirror(self):
- self.d.setVar("MIRRORS", "http://.*/.* file:///some1where/ \n file:///some1where/.* file://some2where/ \n file://some2where/.* http://downloads.yoctoproject.org/releases/bitbake")
+ self.d.setVar("MIRRORS", "http://.*/.* file:///some1where/ file:///some1where/.* file://some2where/ file://some2where/.* https://downloads.yoctoproject.org/releases/bitbake")
fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
os.mkdir(self.dldir + "/some2where")
fetcher.download()
@@ -875,16 +928,46 @@ class FetcherNetworkTest(FetcherTest):
@skipIfNoNetwork()
def test_fetch_premirror(self):
- self.d.setVar("PREMIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
+ self.d.setVar("PREMIRRORS", "http://.*/.* https://downloads.yoctoproject.org/releases/bitbake")
fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
fetcher.download()
self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+ @skipIfNoNetwork()
+ def test_fetch_specify_downloadfilename(self):
+ fetcher = bb.fetch.Fetch(["https://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz;downloadfilename=bitbake-v1.0.0.tar.gz"], self.d)
+ fetcher.download()
+ self.assertEqual(os.path.getsize(self.dldir + "/bitbake-v1.0.0.tar.gz"), 57749)
+
+ @skipIfNoNetwork()
+ def test_fetch_premirror_specify_downloadfilename_regex_uri(self):
+ self.d.setVar("PREMIRRORS", "http://.*/.* https://downloads.yoctoproject.org/releases/bitbake/")
+ fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/1.0.tar.gz;downloadfilename=bitbake-1.0.tar.gz"], self.d)
+ fetcher.download()
+ self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+
+ @skipIfNoNetwork()
+ # BZ13039
+ def test_fetch_premirror_specify_downloadfilename_specific_uri(self):
+ self.d.setVar("PREMIRRORS", "http://invalid.yoctoproject.org/releases/bitbake https://downloads.yoctoproject.org/releases/bitbake")
+ fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/1.0.tar.gz;downloadfilename=bitbake-1.0.tar.gz"], self.d)
+ fetcher.download()
+ self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+
+ @skipIfNoNetwork()
+ def test_fetch_premirror_use_downloadfilename_to_fetch(self):
+ # Ensure downloadfilename is used when fetching from premirror.
+ self.d.setVar("PREMIRRORS", "http://.*/.* https://downloads.yoctoproject.org/releases/bitbake")
+ fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz;downloadfilename=bitbake-1.0.tar.gz"], self.d)
+ fetcher.download()
+ self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+
@skipIfNoNetwork()
def gitfetcher(self, url1, url2):
def checkrevision(self, fetcher):
fetcher.unpack(self.unpackdir)
- revision = bb.process.run("git rev-parse HEAD", shell=True, cwd=self.unpackdir + "/git")[0].strip()
+ revision = self.git(['rev-parse', 'HEAD'],
+ cwd=os.path.join(self.unpackdir, 'git')).strip()
self.assertEqual(revision, "270a05b0b4ba0959fe0624d2a4885d7b70426da5")
self.d.setVar("BB_GENERATE_MIRROR_TARBALLS", "1")
@@ -902,19 +985,19 @@ class FetcherNetworkTest(FetcherTest):
@skipIfNoNetwork()
def test_gitfetch(self):
- url1 = url2 = "git://git.openembedded.org/bitbake"
+ url1 = url2 = "git://git.openembedded.org/bitbake;branch=master"
self.gitfetcher(url1, url2)
@skipIfNoNetwork()
def test_gitfetch_goodsrcrev(self):
# SRCREV is set but matches rev= parameter
- url1 = url2 = "git://git.openembedded.org/bitbake;rev=270a05b0b4ba0959fe0624d2a4885d7b70426da5"
+ url1 = url2 = "git://git.openembedded.org/bitbake;rev=270a05b0b4ba0959fe0624d2a4885d7b70426da5;branch=master"
self.gitfetcher(url1, url2)
@skipIfNoNetwork()
def test_gitfetch_badsrcrev(self):
# SRCREV is set but does not match rev= parameter
- url1 = url2 = "git://git.openembedded.org/bitbake;rev=dead05b0b4ba0959fe0624d2a4885d7b70426da5"
+ url1 = url2 = "git://git.openembedded.org/bitbake;rev=dead05b0b4ba0959fe0624d2a4885d7b70426da5;branch=master"
self.assertRaises(bb.fetch.FetchError, self.gitfetcher, url1, url2)
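The ';branch=master' suffix added to nearly every git URL in this file reflects a 2.0 behaviour change: the git fetcher now wants the branch named explicitly instead of silently assuming master. In the fetcher API this looks like (git.example.com is a hypothetical host):

    # A branchless git:// URL now draws a fetcher complaint; be explicit.
    url = "git://git.example.com/project;protocol=https;branch=main"
    fetcher = bb.fetch.Fetch([url], d)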
@skipIfNoNetwork()
@@ -929,7 +1012,7 @@ class FetcherNetworkTest(FetcherTest):
# `usehead=1' and instead fetch the specified SRCREV. See
# test_local_gitfetch_usehead() for a positive use of the usehead
# feature.
- url = "git://git.openembedded.org/bitbake;usehead=1"
+ url = "git://git.openembedded.org/bitbake;usehead=1;branch=master"
self.assertRaises(bb.fetch.ParameterError, self.gitfetcher, url, url)
@skipIfNoNetwork()
@@ -938,20 +1021,20 @@ class FetcherNetworkTest(FetcherTest):
# `usehead=1' and instead fetch the specified SRCREV. See
# test_local_gitfetch_usehead() for a positive use of the usehead
# feature.
- url = "git://git.openembedded.org/bitbake;usehead=1;name=newName"
+ url = "git://git.openembedded.org/bitbake;usehead=1;name=newName;branch=master"
self.assertRaises(bb.fetch.ParameterError, self.gitfetcher, url, url)
@skipIfNoNetwork()
def test_gitfetch_finds_local_tarball_for_mirrored_url_when_previous_downloaded_by_the_recipe_url(self):
- recipeurl = "git://git.openembedded.org/bitbake"
- mirrorurl = "git://someserver.org/bitbake"
- self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
+ recipeurl = "git://git.openembedded.org/bitbake;branch=master"
+ mirrorurl = "git://someserver.org/bitbake;branch=master"
+ self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake")
self.gitfetcher(recipeurl, mirrorurl)
@skipIfNoNetwork()
def test_gitfetch_finds_local_tarball_when_previous_downloaded_from_a_premirror(self):
- recipeurl = "git://someserver.org/bitbake"
- self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
+ recipeurl = "git://someserver.org/bitbake;branch=master"
+ self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake")
self.gitfetcher(recipeurl, recipeurl)
@skipIfNoNetwork()
@@ -960,16 +1043,16 @@ class FetcherNetworkTest(FetcherTest):
recipeurl = "git://someserver.org/bitbake"
self.sourcedir = self.unpackdir.replace("unpacked", "sourcemirror.git")
os.chdir(self.tempdir)
- bb.process.run("git clone %s %s 2> /dev/null" % (realurl, self.sourcedir), shell=True)
- self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file \n" % (recipeurl, self.sourcedir))
+ self.git(['clone', realurl, self.sourcedir], cwd=self.tempdir)
+ self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file" % (recipeurl, self.sourcedir))
self.gitfetcher(recipeurl, recipeurl)
@skipIfNoNetwork()
def test_git_submodule(self):
# URL with ssh submodules
- url = "gitsm://git.yoctoproject.org/git-submodule-test;branch=ssh-gitsm-tests;rev=049da4a6cb198d7c0302e9e8b243a1443cb809a7"
+ url = "gitsm://git.yoctoproject.org/git-submodule-test;branch=ssh-gitsm-tests;rev=049da4a6cb198d7c0302e9e8b243a1443cb809a7;branch=master"
# Original URL (comment this if you have ssh access to git.yoctoproject.org)
- url = "gitsm://git.yoctoproject.org/git-submodule-test;branch=master;rev=a2885dd7d25380d23627e7544b7bbb55014b16ee"
+ url = "gitsm://git.yoctoproject.org/git-submodule-test;branch=master;rev=a2885dd7d25380d23627e7544b7bbb55014b16ee;branch=master"
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
# Previous cwd has been deleted
@@ -1059,7 +1142,7 @@ class FetcherNetworkTest(FetcherTest):
""" Prevent regression on deeply nested submodules not being checked out properly, even though they were fetched. """
# This repository also has submodules where the module (name), path and url do not align
- url = "gitsm://github.com/azure/iotedge.git;protocol=https;rev=d76e0316c6f324345d77c48a83ce836d09392699"
+ url = "gitsm://github.com/azure/iotedge.git;protocol=https;rev=d76e0316c6f324345d77c48a83ce836d09392699;branch=main"
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
# Previous cwd has been deleted
@@ -1165,43 +1248,43 @@ class SVNTest(FetcherTest):
class TrustedNetworksTest(FetcherTest):
def test_trusted_network(self):
# Ensure trusted_network returns True when the host IS in the list.
- url = "git://Someserver.org/foo;rev=1"
+ url = "git://Someserver.org/foo;rev=1;branch=master"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_wild_trusted_network(self):
# Ensure trusted_network returns true when the *.host IS in the list.
- url = "git://Someserver.org/foo;rev=1"
+ url = "git://Someserver.org/foo;rev=1;branch=master"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org *.someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_prefix_wild_trusted_network(self):
# Ensure trusted_network returns true when the prefix matches *.host.
- url = "git://git.Someserver.org/foo;rev=1"
+ url = "git://git.Someserver.org/foo;rev=1;branch=master"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org *.someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_two_prefix_wild_trusted_network(self):
# Ensure trusted_network returns true when the prefix matches *.host.
- url = "git://something.git.Someserver.org/foo;rev=1"
+ url = "git://something.git.Someserver.org/foo;rev=1;branch=master"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org *.someserver.org server2.org server3.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_port_trusted_network(self):
# Ensure trusted_network returns True, even if the url specifies a port.
- url = "git://someserver.org:8080/foo;rev=1"
+ url = "git://someserver.org:8080/foo;rev=1;branch=master"
self.d.setVar("BB_ALLOWED_NETWORKS", "someserver.org")
self.assertTrue(bb.fetch.trusted_network(self.d, url))
def test_untrusted_network(self):
# Ensure trusted_network returns False when the host is NOT in the list.
- url = "git://someserver.org/foo;rev=1"
+ url = "git://someserver.org/foo;rev=1;branch=master"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org server2.org server3.org")
self.assertFalse(bb.fetch.trusted_network(self.d, url))
def test_wild_untrusted_network(self):
# Ensure trusted_network returns False when the host is NOT in the list.
- url = "git://*.someserver.org/foo;rev=1"
+ url = "git://*.someserver.org/foo;rev=1;branch=master"
self.d.setVar("BB_ALLOWED_NETWORKS", "server1.org server2.org server3.org")
self.assertFalse(bb.fetch.trusted_network(self.d, url))
@@ -1239,32 +1322,32 @@ class FetchLatestVersionTest(FetcherTest):
: "1.99.4",
# version pattern "vX.Y"
# mirror of git.infradead.org since network issues interfered with testing
- ("mtd-utils", "git://git.yoctoproject.org/mtd-utils.git", "ca39eb1d98e736109c64ff9c1aa2a6ecca222d8f", "")
+ ("mtd-utils", "git://git.yoctoproject.org/mtd-utils.git;branch=master", "ca39eb1d98e736109c64ff9c1aa2a6ecca222d8f", "")
: "1.5.0",
# version pattern "pkg_name-X.Y"
# mirror of git://anongit.freedesktop.org/git/xorg/proto/presentproto since network issues interfered with testing
- ("presentproto", "git://git.yoctoproject.org/bbfetchtests-presentproto", "24f3a56e541b0a9e6c6ee76081f441221a120ef9", "")
+ ("presentproto", "git://git.yoctoproject.org/bbfetchtests-presentproto;branch=master", "24f3a56e541b0a9e6c6ee76081f441221a120ef9", "")
: "1.0",
# version pattern "pkg_name-vX.Y.Z"
- ("dtc", "git://git.yoctoproject.org/bbfetchtests-dtc.git", "65cc4d2748a2c2e6f27f1cf39e07a5dbabd80ebf", "")
+ ("dtc", "git://git.yoctoproject.org/bbfetchtests-dtc.git;branch=master", "65cc4d2748a2c2e6f27f1cf39e07a5dbabd80ebf", "")
: "1.4.0",
# combination version pattern
- ("sysprof", "git://gitlab.gnome.org/GNOME/sysprof.git;protocol=https", "cd44ee6644c3641507fb53b8a2a69137f2971219", "")
+ ("sysprof", "git://gitlab.gnome.org/GNOME/sysprof.git;protocol=https;branch=master", "cd44ee6644c3641507fb53b8a2a69137f2971219", "")
: "1.2.0",
("u-boot-mkimage", "git://git.denx.de/u-boot.git;branch=master;protocol=git", "62c175fbb8a0f9a926c88294ea9f7e88eb898f6c", "")
: "2014.01",
# version pattern "yyyymmdd"
- ("mobile-broadband-provider-info", "git://gitlab.gnome.org/GNOME/mobile-broadband-provider-info.git;protocol=https", "4ed19e11c2975105b71b956440acdb25d46a347d", "")
+ ("mobile-broadband-provider-info", "git://gitlab.gnome.org/GNOME/mobile-broadband-provider-info.git;protocol=https;branch=master", "4ed19e11c2975105b71b956440acdb25d46a347d", "")
: "20120614",
# packages with a valid UPSTREAM_CHECK_GITTAGREGEX
# mirror of git://anongit.freedesktop.org/xorg/driver/xf86-video-omap since network issues interfered with testing
- ("xf86-video-omap", "git://git.yoctoproject.org/bbfetchtests-xf86-video-omap", "ae0394e687f1a77e966cf72f895da91840dffb8f", "(?P<pver>(\d+\.(\d\.?)*))")
+ ("xf86-video-omap", "git://git.yoctoproject.org/bbfetchtests-xf86-video-omap;branch=master", "ae0394e687f1a77e966cf72f895da91840dffb8f", r"(?P<pver>(\d+\.(\d\.?)*))")
: "0.4.3",
- ("build-appliance-image", "git://git.yoctoproject.org/poky", "b37dd451a52622d5b570183a81583cc34c2ff555", "(?P<pver>(([0-9][\.|_]?)+[0-9]))")
+ ("build-appliance-image", "git://git.yoctoproject.org/poky;branch=master", "b37dd451a52622d5b570183a81583cc34c2ff555", r"(?P<pver>(([0-9][\.|_]?)+[0-9]))")
: "11.0.0",
- ("chkconfig-alternatives-native", "git://github.com/kergoth/chkconfig;branch=sysroot;protocol=https", "cd437ecbd8986c894442f8fce1e0061e20f04dee", "chkconfig\-(?P<pver>((\d+[\.\-_]*)+))")
+ ("chkconfig-alternatives-native", "git://github.com/kergoth/chkconfig;branch=sysroot;protocol=https", "cd437ecbd8986c894442f8fce1e0061e20f04dee", r"chkconfig\-(?P<pver>((\d+[\.\-_]*)+))")
: "1.3.59",
- ("remake", "git://github.com/rocky/remake.git;protocol=https", "f05508e521987c8494c92d9c2871aec46307d51d", "(?P<pver>(\d+\.(\d+\.)*\d*(\+dbg\d+(\.\d+)*)*))")
+ ("remake", "git://github.com/rocky/remake.git;protocol=https;branch=master", "f05508e521987c8494c92d9c2871aec46307d51d", r"(?P<pver>(\d+\.(\d+\.)*\d*(\+dbg\d+(\.\d+)*)*))")
: "3.82+dbg0.9",
}
@@ -1284,10 +1367,10 @@ class FetchLatestVersionTest(FetcherTest):
#
# packages with versions only in current directory
#
- # http://downloads.yoctoproject.org/releases/eglibc/eglibc-2.18-svnr23787.tar.bz2
+ # https://downloads.yoctoproject.org/releases/eglibc/eglibc-2.18-svnr23787.tar.bz2
("eglic", "/releases/eglibc/eglibc-2.18-svnr23787.tar.bz2", "", "")
: "2.19",
- # http://downloads.yoctoproject.org/releases/gnu-config/gnu-config-20120814.tar.bz2
+ # https://downloads.yoctoproject.org/releases/gnu-config/gnu-config-20120814.tar.bz2
("gnu-config", "/releases/gnu-config/gnu-config-20120814.tar.bz2", "", "")
: "20120814",
#
@@ -1304,12 +1387,18 @@ class FetchLatestVersionTest(FetcherTest):
#
# http://www.cups.org/software/1.7.2/cups-1.7.2-source.tar.bz2
# https://github.com/apple/cups/releases
- ("cups", "/software/1.7.2/cups-1.7.2-source.tar.bz2", "/apple/cups/releases", "(?P<name>cups\-)(?P<pver>((\d+[\.\-_]*)+))\-source\.tar\.gz")
+ ("cups", "/software/1.7.2/cups-1.7.2-source.tar.bz2", "/apple/cups/releases", r"(?P<name>cups\-)(?P<pver>((\d+[\.\-_]*)+))\-source\.tar\.gz")
: "2.0.0",
# http://download.oracle.com/berkeley-db/db-5.3.21.tar.gz
# http://ftp.debian.org/debian/pool/main/d/db5.3/
- ("db", "/berkeley-db/db-5.3.21.tar.gz", "/debian/pool/main/d/db5.3/", "(?P<name>db5\.3_)(?P<pver>\d+(\.\d+)+).+\.orig\.tar\.xz")
+ ("db", "/berkeley-db/db-5.3.21.tar.gz", "/debian/pool/main/d/db5.3/", r"(?P<name>db5\.3_)(?P<pver>\d+(\.\d+)+).+\.orig\.tar\.xz")
: "5.3.10",
+ #
+ # packages where the tarball compression changed in the new version
+ #
+ # http://ftp.debian.org/debian/pool/main/m/minicom/minicom_2.7.1.orig.tar.gz
+ ("minicom", "/debian/pool/main/m/minicom/minicom_2.7.1.orig.tar.gz", "", "")
+ : "2.8",
}
@skipIfNoNetwork()
@@ -1350,13 +1439,13 @@ class FetchLatestVersionTest(FetcherTest):
class FetchCheckStatusTest(FetcherTest):
- test_wget_uris = ["http://downloads.yoctoproject.org/releases/sato/sato-engine-0.1.tar.gz",
- "http://downloads.yoctoproject.org/releases/sato/sato-engine-0.2.tar.gz",
- "http://downloads.yoctoproject.org/releases/sato/sato-engine-0.3.tar.gz",
+ test_wget_uris = ["https://downloads.yoctoproject.org/releases/sato/sato-engine-0.1.tar.gz",
+ "https://downloads.yoctoproject.org/releases/sato/sato-engine-0.2.tar.gz",
+ "https://downloads.yoctoproject.org/releases/sato/sato-engine-0.3.tar.gz",
"https://yoctoproject.org/",
"https://docs.yoctoproject.org",
- "http://downloads.yoctoproject.org/releases/opkg/opkg-0.1.7.tar.gz",
- "http://downloads.yoctoproject.org/releases/opkg/opkg-0.3.0.tar.gz",
+ "https://downloads.yoctoproject.org/releases/opkg/opkg-0.1.7.tar.gz",
+ "https://downloads.yoctoproject.org/releases/opkg/opkg-0.3.0.tar.gz",
"ftp://sourceware.org/pub/libffi/libffi-1.20.tar.gz",
# GitHub releases are hosted on Amazon S3, which doesn't support HEAD
"https://github.com/kergoth/tslib/releases/download/1.1/tslib-1.1.tar.xz"
@@ -1395,9 +1484,7 @@ class GitMakeShallowTest(FetcherTest):
FetcherTest.setUp(self)
self.gitdir = os.path.join(self.tempdir, 'gitshallow')
bb.utils.mkdirhier(self.gitdir)
- bb.process.run('git init', cwd=self.gitdir)
- bb.process.run('git config user.email "you@example.com"', cwd=self.gitdir)
- bb.process.run('git config user.name "Your Name"', cwd=self.gitdir)
+ self.git_init()
def assertRefs(self, expected_refs):
actual_refs = self.git(['for-each-ref', '--format=%(refname)']).splitlines()
@@ -1411,13 +1498,6 @@ class GitMakeShallowTest(FetcherTest):
actual_count = len(revs.splitlines())
self.assertEqual(expected_count, actual_count, msg='Object count `%d` is not the expected `%d`' % (actual_count, expected_count))
- def git(self, cmd):
- if isinstance(cmd, str):
- cmd = 'git ' + cmd
- else:
- cmd = ['git'] + cmd
- return bb.process.run(cmd, cwd=self.gitdir)[0]
-
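The per-class git() wrappers deleted here, and again in GitShallowTest and GitLfsTest below, move into the shared FetcherTest base class alongside a git_init() helper. Presumably the consolidated helpers look roughly like this (a sketch reconstructed from the removed code, not the verbatim upstream class):

    def git(self, cmd, cwd=None):
        # Run a git command (string or argument list) and return its stdout.
        if isinstance(cmd, str):
            cmd = 'git ' + cmd
        else:
            cmd = ['git'] + cmd
        if cwd is None:
            cwd = self.gitdir
        return bb.process.run(cmd, cwd=cwd)[0]

    def git_init(self, cwd=None):
        # Create a repository with a throwaway identity so commits succeed
        # on CI machines that have no global git config.
        self.git('init', cwd=cwd)
        self.git('config user.email "you@example.com"', cwd=cwd)
        self.git('config user.name "Your Name"', cwd=cwd)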
def make_shallow(self, args=None):
if args is None:
args = ['HEAD']
@@ -1520,15 +1600,13 @@ class GitShallowTest(FetcherTest):
self.srcdir = os.path.join(self.tempdir, 'gitsource')
bb.utils.mkdirhier(self.srcdir)
- self.git('init', cwd=self.srcdir)
- self.git('config user.email "you@example.com"', cwd=self.srcdir)
- self.git('config user.name "Your Name"', cwd=self.srcdir)
+ self.git_init(cwd=self.srcdir)
self.d.setVar('WORKDIR', self.tempdir)
self.d.setVar('S', self.gitdir)
self.d.delVar('PREMIRRORS')
self.d.delVar('MIRRORS')
- uri = 'git://%s;protocol=file;subdir=${S}' % self.srcdir
+ uri = 'git://%s;protocol=file;subdir=${S};branch=master' % self.srcdir
self.d.setVar('SRC_URI', uri)
self.d.setVar('SRCREV', '${AUTOREV}')
self.d.setVar('AUTOREV', '${@bb.fetch2.get_autorev(d)}')
@@ -1536,6 +1614,7 @@ class GitShallowTest(FetcherTest):
self.d.setVar('BB_GIT_SHALLOW', '1')
self.d.setVar('BB_GENERATE_MIRROR_TARBALLS', '0')
self.d.setVar('BB_GENERATE_SHALLOW_TARBALLS', '1')
+ self.d.setVar("__BBSEENSRCREV", "1")
def assertRefs(self, expected_refs, cwd=None):
if cwd is None:
@@ -1553,15 +1632,6 @@ class GitShallowTest(FetcherTest):
actual_count = len(revs.splitlines())
self.assertEqual(expected_count, actual_count, msg='Object count `%d` is not the expected `%d`' % (actual_count, expected_count))
- def git(self, cmd, cwd=None):
- if isinstance(cmd, str):
- cmd = 'git ' + cmd
- else:
- cmd = ['git'] + cmd
- if cwd is None:
- cwd = self.gitdir
- return bb.process.run(cmd, cwd=cwd)[0]
-
def add_empty_file(self, path, cwd=None, msg=None):
if msg is None:
msg = path
@@ -1756,9 +1826,7 @@ class GitShallowTest(FetcherTest):
smdir = os.path.join(self.tempdir, 'gitsubmodule')
bb.utils.mkdirhier(smdir)
- self.git('init', cwd=smdir)
- self.git('config user.email "you@example.com"', cwd=smdir)
- self.git('config user.name "Your Name"', cwd=smdir)
+ self.git_init(cwd=smdir)
# Make this look like it was cloned from a remote...
self.git('config --add remote.origin.url "%s"' % smdir, cwd=smdir)
self.git('config --add remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"', cwd=smdir)
@@ -1766,11 +1834,11 @@ class GitShallowTest(FetcherTest):
self.add_empty_file('bsub', cwd=smdir)
self.git('submodule init', cwd=self.srcdir)
- self.git('submodule add file://%s' % smdir, cwd=self.srcdir)
+ self.git('-c protocol.file.allow=always submodule add file://%s' % smdir, cwd=self.srcdir)
self.git('submodule update', cwd=self.srcdir)
self.git('commit -m submodule -a', cwd=self.srcdir)
- uri = 'gitsm://%s;protocol=file;subdir=${S}' % self.srcdir
+ uri = 'gitsm://%s;protocol=file;subdir=${S};branch=master' % self.srcdir
fetcher, ud = self.fetch_shallow(uri)
# Verify the main repository is shallow
@@ -1788,9 +1856,7 @@ class GitShallowTest(FetcherTest):
smdir = os.path.join(self.tempdir, 'gitsubmodule')
bb.utils.mkdirhier(smdir)
- self.git('init', cwd=smdir)
- self.git('config user.email "you@example.com"', cwd=smdir)
- self.git('config user.name "Your Name"', cwd=smdir)
+ self.git_init(cwd=smdir)
# Make this look like it was cloned from a remote...
self.git('config --add remote.origin.url "%s"' % smdir, cwd=smdir)
self.git('config --add remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"', cwd=smdir)
@@ -1798,7 +1864,7 @@ class GitShallowTest(FetcherTest):
self.add_empty_file('bsub', cwd=smdir)
self.git('submodule init', cwd=self.srcdir)
- self.git('submodule add file://%s' % smdir, cwd=self.srcdir)
+ self.git('-c protocol.file.allow=always submodule add file://%s' % smdir, cwd=self.srcdir)
self.git('submodule update', cwd=self.srcdir)
self.git('commit -m submodule -a', cwd=self.srcdir)
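The '-c protocol.file.allow=always' insertion is needed because git 2.38.1 and later (the CVE-2022-39253 fix) refuse file:// submodule URLs by default; passing the override per invocation keeps the tests from touching the user's global configuration:

    # One-shot override, scoped to this command only; 'smdir' is the local
    # submodule repository created a few lines earlier.
    self.git('-c protocol.file.allow=always submodule add file://%s' % smdir,
             cwd=self.srcdir)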
@@ -1809,8 +1875,8 @@ class GitShallowTest(FetcherTest):
# Set up the mirror
mirrordir = os.path.join(self.tempdir, 'mirror')
- os.rename(self.dldir, mirrordir)
- self.d.setVar('PREMIRRORS', 'gitsm://.*/.* file://%s/\n' % mirrordir)
+ bb.utils.rename(self.dldir, mirrordir)
+ self.d.setVar('PREMIRRORS', 'gitsm://.*/.* file://%s/' % mirrordir)
# Fetch from the mirror
bb.utils.remove(self.dldir, recurse=True)
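os.rename() fails with EXDEV when source and destination live on different filesystems, which can happen when TMPDIR and DL_DIR are separate mounts; bb.utils.rename() is presumably a thin wrapper that falls back to a copy-based move in that case, roughly:

    import errno
    import os
    import shutil

    def rename(old, new):
        # Sketch of the assumed semantics: try the cheap atomic rename first,
        # fall back to shutil.move() when crossing filesystem boundaries.
        try:
            os.rename(old, new)
        except OSError as e:
            if e.errno == errno.EXDEV:
                shutil.move(old, new)
            else:
                raise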
@@ -1836,7 +1902,7 @@ class GitShallowTest(FetcherTest):
self.git('commit --author "Foo Bar <foo@bar>" -m annex-c -a', cwd=self.srcdir)
bb.process.run('chmod u+w -R %s' % self.srcdir)
- uri = 'gitannex://%s;protocol=file;subdir=${S}' % self.srcdir
+ uri = 'gitannex://%s;protocol=file;subdir=${S};branch=master' % self.srcdir
fetcher, ud = self.fetch_shallow(uri)
self.assertRevCount(1)
@@ -1925,9 +1991,9 @@ class GitShallowTest(FetcherTest):
# Set up the mirror
mirrordir = os.path.join(self.tempdir, 'mirror')
bb.utils.mkdirhier(mirrordir)
- self.d.setVar('PREMIRRORS', 'git://.*/.* file://%s/\n' % mirrordir)
+ self.d.setVar('PREMIRRORS', 'git://.*/.* file://%s/' % mirrordir)
- os.rename(os.path.join(self.dldir, mirrortarball),
+ bb.utils.rename(os.path.join(self.dldir, mirrortarball),
os.path.join(mirrordir, mirrortarball))
# Fetch from the mirror
@@ -2083,7 +2149,7 @@ class GitShallowTest(FetcherTest):
self.d.setVar('SRCREV', 'e5939ff608b95cdd4d0ab0e1935781ab9a276ac0')
self.d.setVar('BB_GIT_SHALLOW', '1')
self.d.setVar('BB_GENERATE_SHALLOW_TARBALLS', '1')
- fetcher = bb.fetch.Fetch(["git://git.yoctoproject.org/fstests"], self.d)
+ fetcher = bb.fetch.Fetch(["git://git.yoctoproject.org/fstests;branch=master"], self.d)
fetcher.download()
bb.utils.remove(self.dldir + "/*.tar.gz")
@@ -2098,7 +2164,7 @@ class GitLfsTest(FetcherTest):
self.gitdir = os.path.join(self.tempdir, 'git')
self.srcdir = os.path.join(self.tempdir, 'gitsource')
-
+
self.d.setVar('WORKDIR', self.tempdir)
self.d.setVar('S', self.gitdir)
self.d.delVar('PREMIRRORS')
@@ -2106,25 +2172,15 @@ class GitLfsTest(FetcherTest):
self.d.setVar('SRCREV', '${AUTOREV}')
self.d.setVar('AUTOREV', '${@bb.fetch2.get_autorev(d)}')
+ self.d.setVar("__BBSEENSRCREV", "1")
bb.utils.mkdirhier(self.srcdir)
- self.git('init', cwd=self.srcdir)
- self.git('config user.email "you@example.com"', cwd=self.srcdir)
- self.git('config user.name "Your Name"', cwd=self.srcdir)
+ self.git_init(cwd=self.srcdir)
with open(os.path.join(self.srcdir, '.gitattributes'), 'wt') as attrs:
attrs.write('*.mp3 filter=lfs -text')
self.git(['add', '.gitattributes'], cwd=self.srcdir)
self.git(['commit', '-m', "attributes", '.gitattributes'], cwd=self.srcdir)
- def git(self, cmd, cwd=None):
- if isinstance(cmd, str):
- cmd = 'git ' + cmd
- else:
- cmd = ['git'] + cmd
- if cwd is None:
- cwd = self.gitdir
- return bb.process.run(cmd, cwd=cwd)[0]
-
def fetch(self, uri=None, download=True):
uris = self.d.getVar('SRC_URI').split()
uri = uris[0]
@@ -2139,7 +2195,7 @@ class GitLfsTest(FetcherTest):
def test_lfs_enabled(self):
import shutil
- uri = 'git://%s;protocol=file;subdir=${S};lfs=1' % self.srcdir
+ uri = 'git://%s;protocol=file;lfs=1;branch=master' % self.srcdir
self.d.setVar('SRC_URI', uri)
# Careful: suppress initial attempt at downloading until
@@ -2164,7 +2220,7 @@ class GitLfsTest(FetcherTest):
def test_lfs_disabled(self):
import shutil
- uri = 'git://%s;protocol=file;subdir=${S};lfs=0' % self.srcdir
+ uri = 'git://%s;protocol=file;lfs=0;branch=master' % self.srcdir
self.d.setVar('SRC_URI', uri)
# In contrast to test_lfs_enabled(), allow the implicit download
@@ -2188,13 +2244,13 @@ class GitLfsTest(FetcherTest):
class GitURLWithSpacesTest(FetcherTest):
test_git_urls = {
- "git://tfs-example.org:22/tfs/example%20path/example.git" : {
- 'url': 'git://tfs-example.org:22/tfs/example%20path/example.git',
+ "git://tfs-example.org:22/tfs/example%20path/example.git;branch=master" : {
+ 'url': 'git://tfs-example.org:22/tfs/example%20path/example.git;branch=master',
'gitsrcname': 'tfs-example.org.22.tfs.example_path.example.git',
'path': '/tfs/example path/example.git'
},
- "git://tfs-example.org:22/tfs/example%20path/example%20repo.git" : {
- 'url': 'git://tfs-example.org:22/tfs/example%20path/example%20repo.git',
+ "git://tfs-example.org:22/tfs/example%20path/example%20repo.git;branch=master" : {
+ 'url': 'git://tfs-example.org:22/tfs/example%20path/example%20repo.git;branch=master',
'gitsrcname': 'tfs-example.org.22.tfs.example_path.example_repo.git',
'path': '/tfs/example path/example repo.git'
}
@@ -2218,11 +2274,48 @@ class GitURLWithSpacesTest(FetcherTest):
self.assertEqual(ud.clonedir, os.path.join(self.dldir, "git2", ref['gitsrcname']))
self.assertEqual(ud.fullmirror, os.path.join(self.dldir, "git2_" + ref['gitsrcname'] + '.tar.gz'))
+class CrateTest(FetcherTest):
+ @skipIfNoNetwork()
+ def test_crate_url(self):
+
+ uri = "crate://crates.io/glob/0.2.11"
+ self.d.setVar('SRC_URI', uri)
+
+ uris = self.d.getVar('SRC_URI').split()
+ d = self.d
+
+ fetcher = bb.fetch2.Fetch(uris, self.d)
+ fetcher.download()
+ fetcher.unpack(self.tempdir)
+ self.assertEqual(sorted(os.listdir(self.tempdir)), ['cargo_home', 'download', 'unpacked'])
+ self.assertEqual(sorted(os.listdir(self.tempdir + "/download")), ['glob-0.2.11.crate', 'glob-0.2.11.crate.done'])
+ self.assertTrue(os.path.exists(self.tempdir + "/cargo_home/bitbake/glob-0.2.11/.cargo-checksum.json"))
+ self.assertTrue(os.path.exists(self.tempdir + "/cargo_home/bitbake/glob-0.2.11/src/lib.rs"))
+
+ @skipIfNoNetwork()
+ def test_crate_url_multi(self):
+
+ uri = "crate://crates.io/glob/0.2.11 crate://crates.io/time/0.1.35"
+ self.d.setVar('SRC_URI', uri)
+
+ uris = self.d.getVar('SRC_URI').split()
+ d = self.d
+
+ fetcher = bb.fetch2.Fetch(uris, self.d)
+ fetcher.download()
+ fetcher.unpack(self.tempdir)
+ self.assertEqual(sorted(os.listdir(self.tempdir)), ['cargo_home', 'download', 'unpacked'])
+ self.assertEqual(sorted(os.listdir(self.tempdir + "/download")), ['glob-0.2.11.crate', 'glob-0.2.11.crate.done', 'time-0.1.35.crate', 'time-0.1.35.crate.done'])
+ self.assertTrue(os.path.exists(self.tempdir + "/cargo_home/bitbake/glob-0.2.11/.cargo-checksum.json"))
+ self.assertTrue(os.path.exists(self.tempdir + "/cargo_home/bitbake/glob-0.2.11/src/lib.rs"))
+ self.assertTrue(os.path.exists(self.tempdir + "/cargo_home/bitbake/time-0.1.35/.cargo-checksum.json"))
+ self.assertTrue(os.path.exists(self.tempdir + "/cargo_home/bitbake/time-0.1.35/src/lib.rs"))
+
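CrateTest exercises the new crate:// fetcher added in this release: crate://crates.io/<name>/<version> downloads the corresponding .crate archive, and unpack() lays the source out under cargo_home/ complete with the .cargo-checksum.json that cargo expects. Usage would look roughly like:

    # Fetch two crates in one pass; each unpacks to cargo_home/bitbake/<name>-<version>/.
    uris = ["crate://crates.io/glob/0.2.11",
            "crate://crates.io/time/0.1.35"]
    fetcher = bb.fetch2.Fetch(uris, d)
    fetcher.download()
    fetcher.unpack(workdir)  # 'workdir' is a hypothetical unpack destination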
class NPMTest(FetcherTest):
def skipIfNoNpm():
import shutil
if not shutil.which('npm'):
- return unittest.skip('npm not installed, tests being skipped')
+ return unittest.skip('npm not installed')
return lambda f: f
@skipIfNoNpm()
@@ -2267,11 +2360,42 @@ class NPMTest(FetcherTest):
ud = fetcher.ud[fetcher.urls[0]]
fetcher.download()
self.assertTrue(os.path.exists(ud.localpath))
+
+ # Setup the mirror by renaming the download directory
+ mirrordir = os.path.join(self.tempdir, 'mirror')
+ bb.utils.rename(self.dldir, mirrordir)
+ os.mkdir(self.dldir)
+
+ # Configure the premirror to be used
+ self.d.setVar('PREMIRRORS', 'https?$://.*/.* file://%s/npm2' % mirrordir)
+ self.d.setVar('BB_FETCH_PREMIRRORONLY', '1')
+
+ # Fetch again
+ self.assertFalse(os.path.exists(ud.localpath))
+ # The npm fetcher doesn't cope with the .resolved file disappearing
+ # while the fetcher object exists, which happens when we rename the
+ # download directory to "mirror" above. Thus we need a new fetcher to go
+ # with the now empty download directory.
+ fetcher = bb.fetch.Fetch([url], self.d)
+ ud = fetcher.ud[fetcher.urls[0]]
+ fetcher.download()
+ self.assertTrue(os.path.exists(ud.localpath))
+
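Two details in the reworked premirror test are worth calling out: npm artifacts now land in an npm2/ subdirectory of DL_DIR (hence the file://%s/npm2 premirror path), and BB_FETCH_PREMIRRORONLY forbids any fallback to the live registry, so a passing fetch genuinely proves the mirror satisfied it:

    # With PREMIRRORONLY set, a fetch the mirror cannot satisfy fails instead
    # of quietly contacting registry.npmjs.org ('/srv/mirror' is hypothetical).
    d.setVar('PREMIRRORS', 'https?$://.*/.* file:///srv/mirror/npm2')
    d.setVar('BB_FETCH_PREMIRRORONLY', '1')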
+ @skipIfNoNpm()
+ @skipIfNoNetwork()
+ def test_npm_premirrors_with_specified_filename(self):
+ url = 'npm://registry.npmjs.org;package=@savoirfairelinux/node-server-example;version=1.0.0'
+ # Fetch once to get a tarball
+ fetcher = bb.fetch.Fetch([url], self.d)
+ ud = fetcher.ud[fetcher.urls[0]]
+ fetcher.download()
+ self.assertTrue(os.path.exists(ud.localpath))
# Setup the mirror
mirrordir = os.path.join(self.tempdir, 'mirror')
bb.utils.mkdirhier(mirrordir)
- os.replace(ud.localpath, os.path.join(mirrordir, os.path.basename(ud.localpath)))
- self.d.setVar('PREMIRRORS', 'https?$://.*/.* file://%s/\n' % mirrordir)
+ mirrorfilename = os.path.join(mirrordir, os.path.basename(ud.localpath))
+ os.replace(ud.localpath, mirrorfilename)
+ self.d.setVar('PREMIRRORS', 'https?$://.*/.* file://%s' % mirrorfilename)
self.d.setVar('BB_FETCH_PREMIRRORONLY', '1')
# Fetch again
self.assertFalse(os.path.exists(ud.localpath))
@@ -2291,7 +2415,7 @@ class NPMTest(FetcherTest):
mirrordir = os.path.join(self.tempdir, 'mirror')
bb.utils.mkdirhier(mirrordir)
os.replace(ud.localpath, os.path.join(mirrordir, os.path.basename(ud.localpath)))
- self.d.setVar('MIRRORS', 'https?$://.*/.* file://%s/\n' % mirrordir)
+ self.d.setVar('MIRRORS', 'https?$://.*/.* file://%s/' % mirrordir)
# Update the resolved url to an invalid url
with open(ud.resolvefile, 'r') as f:
url = f.read()
@@ -2310,7 +2434,7 @@ class NPMTest(FetcherTest):
url = 'npm://registry.npmjs.org;package=@savoirfairelinux/node-server-example;version=1.0.0;destsuffix=foo/bar;downloadfilename=foo-bar.tgz'
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
- self.assertTrue(os.path.exists(os.path.join(self.dldir, 'foo-bar.tgz')))
+ self.assertTrue(os.path.exists(os.path.join(self.dldir, 'npm2', 'foo-bar.tgz')))
fetcher.unpack(self.unpackdir)
unpackdir = os.path.join(self.unpackdir, 'foo', 'bar')
self.assertTrue(os.path.exists(os.path.join(unpackdir, 'package.json')))
@@ -2340,7 +2464,7 @@ class NPMTest(FetcherTest):
@skipIfNoNpm()
@skipIfNoNetwork()
def test_npm_registry_alternate(self):
- url = 'npm://registry.freajs.org;package=@savoirfairelinux/node-server-example;version=1.0.0'
+ url = 'npm://skimdb.npmjs.com;package=@savoirfairelinux/node-server-example;version=1.0.0'
fetcher = bb.fetch.Fetch([url], self.d)
fetcher.download()
fetcher.unpack(self.unpackdir)
@@ -2607,7 +2731,7 @@ class NPMTest(FetcherTest):
mirrordir = os.path.join(self.tempdir, 'mirror')
bb.utils.mkdirhier(mirrordir)
os.replace(ud.localpath, os.path.join(mirrordir, os.path.basename(ud.localpath)))
- self.d.setVar('PREMIRRORS', 'https?$://.*/.* file://%s/\n' % mirrordir)
+ self.d.setVar('PREMIRRORS', 'https?$://.*/.* file://%s/' % mirrordir)
self.d.setVar('BB_FETCH_PREMIRRORONLY', '1')
# Fetch again
self.assertFalse(os.path.exists(ud.localpath))
@@ -2636,7 +2760,7 @@ class NPMTest(FetcherTest):
mirrordir = os.path.join(self.tempdir, 'mirror')
bb.utils.mkdirhier(mirrordir)
os.replace(ud.localpath, os.path.join(mirrordir, os.path.basename(ud.localpath)))
- self.d.setVar('MIRRORS', 'https?$://.*/.* file://%s/\n' % mirrordir)
+ self.d.setVar('MIRRORS', 'https?$://.*/.* file://%s/' % mirrordir)
# Fetch again with invalid url
self.assertFalse(os.path.exists(ud.localpath))
swfile = self.create_shrinkwrap_file({
@@ -2651,3 +2775,30 @@ class NPMTest(FetcherTest):
fetcher = bb.fetch.Fetch(['npmsw://' + swfile], self.d)
fetcher.download()
self.assertTrue(os.path.exists(ud.localpath))
+
+class GitSharedTest(FetcherTest):
+ def setUp(self):
+ super(GitSharedTest, self).setUp()
+ self.recipe_url = "git://git.openembedded.org/bitbake;branch=master"
+ self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40')
+ self.d.setVar("__BBSEENSRCREV", "1")
+
+ @skipIfNoNetwork()
+ def test_shared_unpack(self):
+ fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
+
+ fetcher.download()
+ fetcher.unpack(self.unpackdir)
+ alt = os.path.join(self.unpackdir, 'git/.git/objects/info/alternates')
+ self.assertTrue(os.path.exists(alt))
+
+ @skipIfNoNetwork()
+ def test_noshared_unpack(self):
+ self.d.setVar('BB_GIT_NOSHARED', '1')
+ self.unpackdir += '_noshared'
+ fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
+
+ fetcher.download()
+ fetcher.unpack(self.unpackdir)
+ alt = os.path.join(self.unpackdir, 'git/.git/objects/info/alternates')
+ self.assertFalse(os.path.exists(alt))
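GitSharedTest covers the new BB_GIT_NOSHARED knob: by default unpack() produces a clone that shares objects with the DL_DIR checkout via .git/objects/info/alternates, while setting the variable forces a self-contained clone, trading disk space for independence from DL_DIR:

    # Opt out of shared clones, e.g. when the workdir may move or outlive DL_DIR.
    d.setVar('BB_GIT_NOSHARED', '1')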
diff --git a/bitbake/lib/bb/tests/parse.py b/bitbake/lib/bb/tests/parse.py
index 9e21e18..1a3b749 100644
--- a/bitbake/lib/bb/tests/parse.py
+++ b/bitbake/lib/bb/tests/parse.py
@@ -98,8 +98,8 @@ exportD = "d"
overridetest = """
-RRECOMMENDS_${PN} = "a"
-RRECOMMENDS_${PN}_libc = "b"
+RRECOMMENDS:${PN} = "a"
+RRECOMMENDS:${PN}:libc = "b"
OVERRIDES = "libc:${PN}"
PN = "gtk+"
"""
@@ -110,16 +110,16 @@ PN = "gtk+"
self.assertEqual(d.getVar("RRECOMMENDS"), "b")
bb.data.expandKeys(d)
self.assertEqual(d.getVar("RRECOMMENDS"), "b")
- d.setVar("RRECOMMENDS_gtk+", "c")
+ d.setVar("RRECOMMENDS:gtk+", "c")
self.assertEqual(d.getVar("RRECOMMENDS"), "c")
overridetest2 = """
EXTRA_OECONF = ""
-EXTRA_OECONF_class-target = "b"
-EXTRA_OECONF_append = " c"
+EXTRA_OECONF:class-target = "b"
+EXTRA_OECONF:append = " c"
"""
- def test_parse_overrides(self):
+ def test_parse_overrides2(self):
f = self.parsehelper(self.overridetest2)
d = bb.parse.handle(f.name, self.d)['']
d.appendVar("EXTRA_OECONF", " d")
@@ -128,7 +128,7 @@ EXTRA_OECONF_append = " c"
overridetest3 = """
DESCRIPTION = "A"
-DESCRIPTION_${PN}-dev = "${DESCRIPTION} B"
+DESCRIPTION:${PN}-dev = "${DESCRIPTION} B"
PN = "bc"
"""
@@ -136,15 +136,15 @@ PN = "bc"
f = self.parsehelper(self.overridetest3)
d = bb.parse.handle(f.name, self.d)['']
bb.data.expandKeys(d)
- self.assertEqual(d.getVar("DESCRIPTION_bc-dev"), "A B")
+ self.assertEqual(d.getVar("DESCRIPTION:bc-dev"), "A B")
d.setVar("DESCRIPTION", "E")
- d.setVar("DESCRIPTION_bc-dev", "C D")
+ d.setVar("DESCRIPTION:bc-dev", "C D")
d.setVar("OVERRIDES", "bc-dev")
self.assertEqual(d.getVar("DESCRIPTION"), "C D")
classextend = """
-VAR_var_override1 = "B"
+VAR_var:override1 = "B"
EXTRA = ":override1"
OVERRIDES = "nothing${EXTRA}"
@@ -194,3 +194,26 @@ deltask ${EMPTYVAR}
self.assertTrue('addtask ignored: " do_patch"' in stdout)
#self.assertTrue('dependent task do_foo for do_patch does not exist' in stdout)
+ broken_multiline_comment = """
+# First line of comment \\
+# Second line of comment \\
+
+"""
+ def test_parse_broken_multiline_comment(self):
+ f = self.parsehelper(self.broken_multiline_comment)
+ with self.assertRaises(bb.BBHandledException):
+ d = bb.parse.handle(f.name, self.d)['']
+
+
+ comment_in_var = """
+VAR = " \\
+ SOMEVAL \\
+# some comment \\
+ SOMEOTHERVAL \\
+"
+"""
+ def test_parse_comment_in_var(self):
+ f = self.parsehelper(self.comment_in_var)
+ with self.assertRaises(bb.BBHandledException):
+ d = bb.parse.handle(f.name, self.d)['']
+
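Both new tests pin down parser behaviour that 1.x accepted silently: a comment line ending in a backslash continuation, or a comment embedded inside a continued assignment, now raises bb.BBHandledException instead of yielding a surprising value. The metadata fix is simply to keep comments out of continuations, which parses cleanly (a hypothetical companion test case):

    comment_ok = """
    VAR = " \\
        SOMEVAL \\
        SOMEOTHERVAL \\
    "
    """
    f = self.parsehelper(comment_ok)
    d = bb.parse.handle(f.name, self.d)['']   # no exception raised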
diff --git a/bitbake/lib/bb/tests/runqueue-tests/conf/bitbake.conf b/bitbake/lib/bb/tests/runqueue-tests/conf/bitbake.conf
index efebf00..05d7fd0 100644
--- a/bitbake/lib/bb/tests/runqueue-tests/conf/bitbake.conf
+++ b/bitbake/lib/bb/tests/runqueue-tests/conf/bitbake.conf
@@ -12,6 +12,6 @@ STAMP = "${TMPDIR}/stamps/${PN}"
T = "${TMPDIR}/workdir/${PN}/temp"
BB_NUMBER_THREADS = "4"
-BB_HASHBASE_WHITELIST = "BB_CURRENT_MC BB_HASHSERVE TMPDIR TOPDIR SLOWTASKS SSTATEVALID FILE"
+BB_BASEHASH_IGNORE_VARS = "BB_CURRENT_MC BB_HASHSERVE TMPDIR TOPDIR SLOWTASKS SSTATEVALID FILE BB_CURRENTTASK"
include conf/multiconfig/${BB_CURRENT_MC}.conf
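This rename is part of the 2.0 inclusive-language sweep, and the old names trigger errors rather than acting as aliases, so configurations must be updated wholesale. The renames visible in this patch:

    BB_HASHBASE_WHITELIST          ->  BB_BASEHASH_IGNORE_VARS
    BB_ENV_EXTRAWHITE              ->  BB_ENV_PASSTHROUGH_ADDITIONS
    BB_SETSCENE_ENFORCE_WHITELIST  ->  BB_SETSCENE_ENFORCE_IGNORE_TASKS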
diff --git a/bitbake/lib/bb/tests/runqueue.py b/bitbake/lib/bb/tests/runqueue.py
index 4f335b8..061a5a1 100644
--- a/bitbake/lib/bb/tests/runqueue.py
+++ b/bitbake/lib/bb/tests/runqueue.py
@@ -29,13 +29,14 @@ class RunQueueTests(unittest.TestCase):
def run_bitbakecmd(self, cmd, builddir, sstatevalid="", slowtasks="", extraenv=None, cleanup=False):
env = os.environ.copy()
env["BBPATH"] = os.path.realpath(os.path.join(os.path.dirname(__file__), "runqueue-tests"))
- env["BB_ENV_EXTRAWHITE"] = "SSTATEVALID SLOWTASKS"
+ env["BB_ENV_PASSTHROUGH_ADDITIONS"] = "SSTATEVALID SLOWTASKS TOPDIR"
env["SSTATEVALID"] = sstatevalid
env["SLOWTASKS"] = slowtasks
+ env["TOPDIR"] = builddir
if extraenv:
for k in extraenv:
env[k] = extraenv[k]
- env["BB_ENV_EXTRAWHITE"] = env["BB_ENV_EXTRAWHITE"] + " " + k
+ env["BB_ENV_PASSTHROUGH_ADDITIONS"] = env["BB_ENV_PASSTHROUGH_ADDITIONS"] + " " + k
try:
output = subprocess.check_output(cmd, env=env, stderr=subprocess.STDOUT, universal_newlines=True, cwd=builddir)
print(output)
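BB_ENV_PASSTHROUGH_ADDITIONS plays the role the old whitelist did: variables named there survive BitBake's environment scrubbing and become visible to the datastore. The harness pattern, in sketch form:

    env = os.environ.copy()
    # Let the named variables pass through into the bitbake server environment.
    env["BB_ENV_PASSTHROUGH_ADDITIONS"] = "SSTATEVALID SLOWTASKS TOPDIR"
    env["SSTATEVALID"] = "a1:do_package"     # hypothetical value
    subprocess.check_output(["bitbake", "a1"], env=env, cwd=builddir)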
@@ -58,6 +59,8 @@ class RunQueueTests(unittest.TestCase):
expected = ['a1:' + x for x in self.alltasks]
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
+
def test_single_setscenevalid(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "a1"]
@@ -68,6 +71,8 @@ class RunQueueTests(unittest.TestCase):
'a1:populate_sysroot', 'a1:build']
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
+
def test_intermediate_setscenevalid(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "a1"]
@@ -77,6 +82,8 @@ class RunQueueTests(unittest.TestCase):
'a1:populate_sysroot_setscene', 'a1:build']
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
+
def test_intermediate_notcovered(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "a1"]
@@ -86,6 +93,8 @@ class RunQueueTests(unittest.TestCase):
'a1:package_qa_setscene', 'a1:build', 'a1:populate_sysroot_setscene']
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
+
def test_all_setscenevalid(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "a1"]
@@ -95,6 +104,8 @@ class RunQueueTests(unittest.TestCase):
'a1:package_qa_setscene', 'a1:build', 'a1:populate_sysroot_setscene']
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
+
def test_no_settasks(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "a1", "-c", "patch"]
@@ -103,6 +114,8 @@ class RunQueueTests(unittest.TestCase):
expected = ['a1:fetch', 'a1:unpack', 'a1:patch']
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
+
def test_mix_covered_notcovered(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "a1:do_patch", "a1:do_populate_sysroot"]
@@ -111,6 +124,7 @@ class RunQueueTests(unittest.TestCase):
expected = ['a1:fetch', 'a1:unpack', 'a1:patch', 'a1:populate_sysroot_setscene']
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
# Test targets with intermediate setscene tasks alongside a target with no intermediate setscene tasks
def test_mixed_direct_tasks_setscene_tasks(self):
@@ -122,6 +136,8 @@ class RunQueueTests(unittest.TestCase):
'a1:package_qa_setscene', 'a1:build', 'a1:populate_sysroot_setscene']
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
+
# This test slows down the execution of do_package_setscene until after other real tasks have
# started running, which tests for a bug where tasks were being lost from the buildable list of
# real tasks if they weren't in tasks_covered or tasks_notcovered
@@ -136,12 +152,14 @@ class RunQueueTests(unittest.TestCase):
'a1:populate_sysroot', 'a1:build']
self.assertEqual(set(tasks), set(expected))
- def test_setscenewhitelist(self):
+ self.shutdown(tempdir)
+
+ def test_setscene_ignore_tasks(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "a1"]
extraenv = {
"BB_SETSCENE_ENFORCE" : "1",
- "BB_SETSCENE_ENFORCE_WHITELIST" : "a1:do_package_write_rpm a1:do_build"
+ "BB_SETSCENE_ENFORCE_IGNORE_TASKS" : "a1:do_package_write_rpm a1:do_build"
}
sstatevalid = "a1:do_package a1:do_package_qa a1:do_packagedata a1:do_package_write_ipk a1:do_populate_lic a1:do_populate_sysroot"
tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv)
@@ -149,6 +167,8 @@ class RunQueueTests(unittest.TestCase):
'a1:populate_sysroot_setscene', 'a1:package_setscene']
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
+
# Tests for problems with dependencies between setscene tasks
def test_no_setscenevalid_harddeps(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
@@ -162,6 +182,8 @@ class RunQueueTests(unittest.TestCase):
'd1:populate_sysroot', 'd1:build']
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
+
def test_no_setscenevalid_withdeps(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "b1"]
@@ -172,6 +194,8 @@ class RunQueueTests(unittest.TestCase):
expected.remove('a1:package_qa')
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
+
def test_single_a1_setscenevalid_withdeps(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "b1"]
@@ -182,6 +206,8 @@ class RunQueueTests(unittest.TestCase):
'a1:populate_sysroot'] + ['b1:' + x for x in self.alltasks]
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
+
def test_single_b1_setscenevalid_withdeps(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "b1"]
@@ -193,6 +219,8 @@ class RunQueueTests(unittest.TestCase):
expected.remove('b1:package')
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
+
def test_intermediate_setscenevalid_withdeps(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "b1"]
@@ -203,6 +231,8 @@ class RunQueueTests(unittest.TestCase):
expected.remove('b1:package')
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
+
def test_all_setscenevalid_withdeps(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
cmd = ["bitbake", "b1"]
@@ -213,6 +243,8 @@ class RunQueueTests(unittest.TestCase):
'b1:packagedata_setscene', 'b1:package_qa_setscene', 'b1:populate_sysroot_setscene']
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
+
def test_multiconfig_setscene_optimise(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
extraenv = {
@@ -232,6 +264,8 @@ class RunQueueTests(unittest.TestCase):
expected.remove(x)
self.assertEqual(set(tasks), set(expected))
+ self.shutdown(tempdir)
+
def test_multiconfig_bbmask(self):
# This test validates that multiconfigs can independently mask off
# recipes they do not want with BBMASK. It works by having recipes
@@ -248,6 +282,8 @@ class RunQueueTests(unittest.TestCase):
cmd = ["bitbake", "mc:mc-1:fails-mc2", "mc:mc_2:fails-mc1"]
self.run_bitbakecmd(cmd, tempdir, "", extraenv=extraenv)
+ self.shutdown(tempdir)
+
def test_multiconfig_mcdepends(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
extraenv = {
@@ -278,7 +314,8 @@ class RunQueueTests(unittest.TestCase):
["mc_2:a1:%s" % t for t in rerun_tasks]
self.assertEqual(set(tasks), set(expected))
- @unittest.skipIf(sys.version_info < (3, 5, 0), 'Python 3.5 or later required')
+ self.shutdown(tempdir)
+
def test_hashserv_single(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
extraenv = {
@@ -304,7 +341,6 @@ class RunQueueTests(unittest.TestCase):
self.shutdown(tempdir)
- @unittest.skipIf(sys.version_info < (3, 5, 0), 'Python 3.5 or later required')
def test_hashserv_double(self):
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
extraenv = {
@@ -329,7 +365,6 @@ class RunQueueTests(unittest.TestCase):
self.shutdown(tempdir)
- @unittest.skipIf(sys.version_info < (3, 5, 0), 'Python 3.5 or later required')
def test_hashserv_multiple_setscene(self):
# Runs e1:do_package_setscene twice
with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
@@ -361,7 +396,6 @@ class RunQueueTests(unittest.TestCase):
def shutdown(self, tempdir):
# Wait for the hashserve socket to disappear, otherwise we'll see races with the tempdir cleanup
- while (os.path.exists(tempdir + "/hashserve.sock") or os.path.exists(tempdir + "cache/hashserv.db-wal")):
+ while (os.path.exists(tempdir + "/hashserve.sock") or os.path.exists(tempdir + "cache/hashserv.db-wal") or os.path.exists(tempdir + "/bitbake.lock")):
time.sleep(0.5)
-
diff --git a/bitbake/lib/bb/tests/utils.py b/bitbake/lib/bb/tests/utils.py
index a7ff33d..c363f62 100644
--- a/bitbake/lib/bb/tests/utils.py
+++ b/bitbake/lib/bb/tests/utils.py
@@ -418,7 +418,7 @@ MULTILINE = " stuff \\
['MULTILINE'],
handle_var)
- testvalue = re.sub('\s+', ' ', value_in_callback.strip())
+ testvalue = re.sub(r'\s+', ' ', value_in_callback.strip())
self.assertEqual(expected_value, testvalue)
class EditBbLayersConf(unittest.TestCase):
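The r'' prefixes added here and in the fetch.py regex table above change nothing about the patterns; they only stop newer Python releases from warning about invalid string escapes such as '\s' and '\d' (a DeprecationWarning that becomes a SyntaxWarning in 3.12):

    import re
    # Raw string: '\s' reaches the regex engine untouched, with no warning.
    testvalue = re.sub(r'\s+', ' ', '  stuff \t here  '.strip())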
@@ -666,3 +666,21 @@ class GetReferencedVars(unittest.TestCase):
layers = [{"SRC_URI"}, {"QT_GIT", "QT_MODULE", "QT_MODULE_BRANCH_PARAM", "QT_GIT_PROTOCOL"}, {"QT_GIT_PROJECT", "QT_MODULE_BRANCH", "BPN"}, {"PN", "SPECIAL_PKGSUFFIX"}]
self.check_referenced("${SRC_URI}", layers)
+
+
+class EnvironmentTests(unittest.TestCase):
+ def test_environment(self):
+ os.environ["A"] = "this is A"
+ self.assertIn("A", os.environ)
+ self.assertEqual(os.environ["A"], "this is A")
+ self.assertNotIn("B", os.environ)
+
+ with bb.utils.environment(B="this is B"):
+ self.assertIn("A", os.environ)
+ self.assertEqual(os.environ["A"], "this is A")
+ self.assertIn("B", os.environ)
+ self.assertEqual(os.environ["B"], "this is B")
+
+ self.assertIn("A", os.environ)
+ self.assertEqual(os.environ["A"], "this is A")
+ self.assertNotIn("B", os.environ)
diff --git a/bitbake/lib/bb/tinfoil.py b/bitbake/lib/bb/tinfoil.py
index 796a98f..e68a3b8 100644
--- a/bitbake/lib/bb/tinfoil.py
+++ b/bitbake/lib/bb/tinfoil.py
@@ -52,6 +52,10 @@ class TinfoilDataStoreConnectorVarHistory:
def remoteCommand(self, cmd, *args, **kwargs):
return self.tinfoil.run_command('dataStoreConnectorVarHistCmd', self.dsindex, cmd, args, kwargs)
+ def emit(self, var, oval, val, o, d):
+ ret = self.tinfoil.run_command('dataStoreConnectorVarHistCmdEmit', self.dsindex, var, oval, val, d.dsindex)
+ o.write(ret)
+
def __getattr__(self, name):
if not hasattr(bb.data_smart.VariableHistory, name):
raise AttributeError("VariableHistory has no such method %s" % name)
@@ -444,7 +448,7 @@ class Tinfoil:
self.run_actions(config_params)
self.recipes_parsed = True
- def run_command(self, command, *params):
+ def run_command(self, command, *params, handle_events=True):
"""
Run a command on the server (as implemented in bb.command).
Note that there are two types of command - synchronous and
@@ -464,7 +468,7 @@ class Tinfoil:
try:
result = self.server_connection.connection.runCommand(commandline)
finally:
- while True:
+ while handle_events:
event = self.wait_event()
if not event:
break
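run_command() grows a handle_events keyword: by default it still drains and dispatches queued server events after the command returns, but callers that manage the event queue themselves can now opt out. For example (assuming an initialised Tinfoil instance):

    # Skip the implicit event drain; the caller will call wait_event() itself.
    result = tinfoil.run_command('getVariable', 'MACHINE', handle_events=False)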
@@ -489,7 +493,7 @@ class Tinfoil:
Wait for an event from the server for the specified time.
A timeout of 0 means don't wait if there are no events in the queue.
Returns the next event in the queue or None if the timeout was
- reached. Note that in order to recieve any events you will
+ reached. Note that in order to receive any events you will
first need to set the internal event mask using set_event_mask()
(otherwise whatever event mask the UI set up will be in effect).
"""
@@ -757,7 +761,7 @@ class Tinfoil:
if parseprogress:
parseprogress.update(event.progress)
else:
- bb.warn("Got ProcessProgress event for someting that never started?")
+ bb.warn("Got ProcessProgress event for something that never started?")
continue
if isinstance(event, bb.event.ProcessFinished):
if self.quiet > 1:
diff --git a/bitbake/lib/bb/ui/buildinfohelper.py b/bitbake/lib/bb/ui/buildinfohelper.py
index 43aa592..129bb32 100644
--- a/bitbake/lib/bb/ui/buildinfohelper.py
+++ b/bitbake/lib/bb/ui/buildinfohelper.py
@@ -45,7 +45,7 @@ from pprint import pformat
import logging
from datetime import datetime, timedelta
-from django.db import transaction, connection
+from django.db import transaction
# pylint: disable=invalid-name
@@ -227,6 +227,12 @@ class ORMWrapper(object):
build.completed_on = timezone.now()
build.outcome = outcome
build.save()
+
+ # We force a sync point here to force the outcome status commit,
+ # which resolves a race condition with the build completion takedown
+ transaction.set_autocommit(True)
+ transaction.set_autocommit(False)
+
signal_runbuilds()
def update_target_set_license_manifest(self, target, license_manifest_path):
@@ -483,14 +489,14 @@ class ORMWrapper(object):
# we already created the root directory, so ignore any
# entry for it
- if len(path) == 0:
+ if not path:
continue
parent_path = "/".join(path.split("/")[:len(path.split("/")) - 1])
- if len(parent_path) == 0:
+ if not parent_path:
parent_path = "/"
parent_obj = self._cached_get(Target_File, target = target_obj, path = parent_path, inodetype = Target_File.ITYPE_DIRECTORY)
- tf_obj = Target_File.objects.create(
+ Target_File.objects.create(
target = target_obj,
path = path,
size = size,
@@ -555,7 +561,7 @@ class ORMWrapper(object):
parent_obj = Target_File.objects.get(target = target_obj, path = parent_path, inodetype = Target_File.ITYPE_DIRECTORY)
- tf_obj = Target_File.objects.create(
+ Target_File.objects.create(
target = target_obj,
path = path,
size = size,
@@ -571,7 +577,7 @@ class ORMWrapper(object):
assert isinstance(build_obj, Build)
assert isinstance(target_obj, Target)
- errormsg = ""
+ errormsg = []
for p in packagedict:
# Search name switches round the installed name vs package name
# by default installed name == package name
@@ -633,10 +639,10 @@ class ORMWrapper(object):
packagefile_objects.append(Package_File( package = packagedict[p]['object'],
path = targetpath,
size = targetfilesize))
- if len(packagefile_objects):
+ if packagefile_objects:
Package_File.objects.bulk_create(packagefile_objects)
except KeyError as e:
- errormsg += " stpi: Key error, package %s key %s \n" % ( p, e )
+ errormsg.append(" stpi: Key error, package %s key %s \n" % (p, e))
# save disk installed size
packagedict[p]['object'].installed_size = packagedict[p]['size']
@@ -673,13 +679,13 @@ class ORMWrapper(object):
logger.warning("Could not add dependency to the package %s "
"because %s is an unknown package", p, px)
- if len(packagedeps_objs) > 0:
+ if packagedeps_objs:
Package_Dependency.objects.bulk_create(packagedeps_objs)
else:
logger.info("No package dependencies created")
- if len(errormsg) > 0:
- logger.warning("buildinfohelper: target_package_info could not identify recipes: \n%s", errormsg)
+ if errormsg:
+ logger.warning("buildinfohelper: target_package_info could not identify recipes: \n%s", "".join(errormsg))
def save_target_image_file_information(self, target_obj, file_name, file_size):
Target_Image_File.objects.create(target=target_obj,
@@ -767,7 +773,7 @@ class ORMWrapper(object):
packagefile_objects.append(Package_File( package = bp_object,
path = path,
size = package_info['FILES_INFO'][path] ))
- if len(packagefile_objects):
+ if packagefile_objects:
Package_File.objects.bulk_create(packagefile_objects)
def _po_byname(p):
@@ -809,7 +815,7 @@ class ORMWrapper(object):
packagedeps_objs.append(Package_Dependency( package = bp_object,
depends_on = _po_byname(p), dep_type = Package_Dependency.TYPE_RCONFLICTS))
- if len(packagedeps_objs) > 0:
+ if packagedeps_objs:
Package_Dependency.objects.bulk_create(packagedeps_objs)
return bp_object
@@ -826,7 +832,7 @@ class ORMWrapper(object):
desc = vardump[root_var]['doc']
if desc is None:
desc = ''
- if len(desc):
+ if desc:
HelpText.objects.get_or_create(build=build_obj,
area=HelpText.VARIABLE,
key=k, text=desc)
@@ -846,7 +852,7 @@ class ORMWrapper(object):
file_name = vh['file'],
line_number = vh['line'],
operation = vh['op']))
- if len(varhist_objects):
+ if varhist_objects:
VariableHistory.objects.bulk_create(varhist_objects)
@@ -893,9 +899,6 @@ class BuildInfoHelper(object):
self.task_order = 0
self.autocommit_step = 1
self.server = server
- # we use manual transactions if the database doesn't autocommit on us
- if not connection.features.autocommits_when_autocommit_is_off:
- transaction.set_autocommit(False)
self.orm_wrapper = ORMWrapper()
self.has_build_history = has_build_history
self.tmp_dir = self.server.runCommand(["getVariable", "TMPDIR"])[0]
@@ -1059,27 +1062,6 @@ class BuildInfoHelper(object):
return recipe_info
- def _get_path_information(self, task_object):
- self._ensure_build()
-
- assert isinstance(task_object, Task)
- build_stats_format = "{tmpdir}/buildstats/{buildname}/{package}/"
- build_stats_path = []
-
- for t in self.internal_state['targets']:
- buildname = self.internal_state['build'].build_name
- pe, pv = task_object.recipe.version.split(":",1)
- if len(pe) > 0:
- package = task_object.recipe.name + "-" + pe + "_" + pv
- else:
- package = task_object.recipe.name + "-" + pv
-
- build_stats_path.append(build_stats_format.format(tmpdir=self.tmp_dir,
- buildname=buildname,
- package=package))
-
- return build_stats_path
-
################################
## external available methods to store information
@@ -1313,12 +1295,11 @@ class BuildInfoHelper(object):
task_information['outcome'] = Task.OUTCOME_FAILED
del self.internal_state['taskdata'][identifier]
- if not connection.features.autocommits_when_autocommit_is_off:
- # we force a sync point here, to get the progress bar to show
- if self.autocommit_step % 3 == 0:
- transaction.set_autocommit(True)
- transaction.set_autocommit(False)
- self.autocommit_step += 1
+ # we force a sync point here, to get the progress bar to show
+ if self.autocommit_step % 3 == 0:
+ transaction.set_autocommit(True)
+ transaction.set_autocommit(False)
+ self.autocommit_step += 1
self.orm_wrapper.get_update_task_object(task_information, True) # must exist
@@ -1404,7 +1385,7 @@ class BuildInfoHelper(object):
assert 'pn' in event._depgraph
assert 'tdepends' in event._depgraph
- errormsg = ""
+ errormsg = []
# save layer version priorities
if 'layer-priorities' in event._depgraph.keys():
@@ -1496,7 +1477,7 @@ class BuildInfoHelper(object):
elif dep in self.internal_state['recipes']:
dependency = self.internal_state['recipes'][dep]
else:
- errormsg += " stpd: KeyError saving recipe dependency for %s, %s \n" % (recipe, dep)
+ errormsg.append(" stpd: KeyError saving recipe dependency for %s, %s \n" % (recipe, dep))
continue
recipe_dep = Recipe_Dependency(recipe=target,
depends_on=dependency,
@@ -1537,8 +1518,8 @@ class BuildInfoHelper(object):
taskdeps_objects.append(Task_Dependency( task = target, depends_on = dep ))
Task_Dependency.objects.bulk_create(taskdeps_objects)
- if len(errormsg) > 0:
- logger.warning("buildinfohelper: dependency info not identify recipes: \n%s", errormsg)
+ if errormsg:
+ logger.warning("buildinfohelper: dependency info not identify recipes: \n%s", "".join(errormsg))
def store_build_package_information(self, event):
@@ -1618,7 +1599,7 @@ class BuildInfoHelper(object):
if 'backlog' in self.internal_state:
# if we have a backlog of events, do our best to save them here
- if len(self.internal_state['backlog']):
+ if self.internal_state['backlog']:
tempevent = self.internal_state['backlog'].pop()
logger.debug("buildinfohelper: Saving stored event %s "
% tempevent)
@@ -1990,8 +1971,6 @@ class BuildInfoHelper(object):
# Do not skip command line build events
self.store_log_event(tempevent,False)
- if not connection.features.autocommits_when_autocommit_is_off:
- transaction.set_autocommit(True)
# unset the brbe; this is to prevent subsequent command-line builds
# being incorrectly attached to the previous Toaster-triggered build;
diff --git a/bitbake/lib/bb/ui/knotty.py b/bitbake/lib/bb/ui/knotty.py
index 04285ea..61cf0a3 100644
--- a/bitbake/lib/bb/ui/knotty.py
+++ b/bitbake/lib/bb/ui/knotty.py
@@ -21,10 +21,11 @@ import fcntl
import struct
import copy
import atexit
+from itertools import groupby
from bb.ui import uihelper
-featureSet = [bb.cooker.CookerFeatures.SEND_SANITYEVENTS]
+featureSet = [bb.cooker.CookerFeatures.SEND_SANITYEVENTS, bb.cooker.CookerFeatures.BASEDATASTORE_TRACKING]
logger = logging.getLogger("BitBake")
interactive = sys.stdout.isatty()
@@ -227,7 +228,9 @@ class TerminalFilter(object):
def keepAlive(self, t):
if not self.cuu:
- print("Bitbake still alive (%ds)" % t)
+ print("Bitbake still alive (no events for %ds). Active tasks:" % t)
+ for t in self.helper.running_tasks:
+ print(t)
sys.stdout.flush()
def updateFooter(self):
@@ -249,58 +252,68 @@ class TerminalFilter(object):
return
tasks = []
for t in runningpids:
+ start_time = activetasks[t].get("starttime", None)
+ if start_time:
+ msg = "%s - %s (pid %s)" % (activetasks[t]["title"], self.elapsed(currenttime - start_time), activetasks[t]["pid"])
+ else:
+ msg = "%s (pid %s)" % (activetasks[t]["title"], activetasks[t]["pid"])
progress = activetasks[t].get("progress", None)
if progress is not None:
pbar = activetasks[t].get("progressbar", None)
rate = activetasks[t].get("rate", None)
- start_time = activetasks[t].get("starttime", None)
if not pbar or pbar.bouncing != (progress < 0):
if progress < 0:
- pbar = BBProgress("0: %s (pid %s)" % (activetasks[t]["title"], activetasks[t]["pid"]), 100, widgets=[' ', progressbar.BouncingSlider(), ''], extrapos=3, resize_handler=self.sigwinch_handle)
+ pbar = BBProgress("0: %s" % msg, 100, widgets=[' ', progressbar.BouncingSlider(), ''], extrapos=3, resize_handler=self.sigwinch_handle)
pbar.bouncing = True
else:
- pbar = BBProgress("0: %s (pid %s)" % (activetasks[t]["title"], activetasks[t]["pid"]), 100, widgets=[' ', progressbar.Percentage(), ' ', progressbar.Bar(), ''], extrapos=5, resize_handler=self.sigwinch_handle)
+ pbar = BBProgress("0: %s" % msg, 100, widgets=[' ', progressbar.Percentage(), ' ', progressbar.Bar(), ''], extrapos=5, resize_handler=self.sigwinch_handle)
pbar.bouncing = False
activetasks[t]["progressbar"] = pbar
- tasks.append((pbar, progress, rate, start_time))
+ tasks.append((pbar, msg, progress, rate, start_time))
else:
- start_time = activetasks[t].get("starttime", None)
- if start_time:
- tasks.append("%s - %s (pid %s)" % (activetasks[t]["title"], self.elapsed(currenttime - start_time), activetasks[t]["pid"]))
- else:
- tasks.append("%s (pid %s)" % (activetasks[t]["title"], activetasks[t]["pid"]))
+ tasks.append(msg)
if self.main.shutdown:
- content = "Waiting for %s running tasks to finish:" % len(activetasks)
+ content = pluralise("Waiting for %s running task to finish",
+ "Waiting for %s running tasks to finish", len(activetasks))
+ if not self.quiet:
+ content += ':'
print(content)
else:
+ scene_tasks = "%s of %s" % (self.helper.setscene_current, self.helper.setscene_total)
+ cur_tasks = "%s of %s" % (self.helper.tasknumber_current, self.helper.tasknumber_total)
+
+ content = ''
+ if not self.quiet:
+ msg = "Setscene tasks: %s" % scene_tasks
+ content += msg + "\n"
+ print(msg)
+
if self.quiet:
- content = "Running tasks (%s of %s)" % (self.helper.tasknumber_current, self.helper.tasknumber_total)
+ msg = "Running tasks (%s, %s)" % (scene_tasks, cur_tasks)
elif not len(activetasks):
- content = "No currently running tasks (%s of %s)" % (self.helper.tasknumber_current, self.helper.tasknumber_total)
+ msg = "No currently running tasks (%s)" % cur_tasks
else:
- content = "Currently %2s running tasks (%s of %s)" % (len(activetasks), self.helper.tasknumber_current, self.helper.tasknumber_total)
+ msg = "Currently %2s running tasks (%s)" % (len(activetasks), cur_tasks)
maxtask = self.helper.tasknumber_total
if not self.main_progress or self.main_progress.maxval != maxtask:
widgets = [' ', progressbar.Percentage(), ' ', progressbar.Bar()]
self.main_progress = BBProgress("Running tasks", maxtask, widgets=widgets, resize_handler=self.sigwinch_handle)
self.main_progress.start(False)
- self.main_progress.setmessage(content)
- progress = self.helper.tasknumber_current - 1
- if progress < 0:
- progress = 0
- content = self.main_progress.update(progress)
+ self.main_progress.setmessage(msg)
+ progress = max(0, self.helper.tasknumber_current - 1)
+ content += self.main_progress.update(progress)
print('')
- lines = 1 + int(len(content) / (self.columns + 1))
- if self.quiet == 0:
- for tasknum, task in enumerate(tasks[:(self.rows - 2)]):
+ lines = self.getlines(content)
+ if not self.quiet:
+ for tasknum, task in enumerate(tasks[:(self.rows - 1 - lines)]):
if isinstance(task, tuple):
- pbar, progress, rate, start_time = task
+ pbar, msg, progress, rate, start_time = task
if not pbar.start_time:
pbar.start(False)
if start_time:
pbar.start_time = start_time
- pbar.setmessage('%s:%s' % (tasknum, pbar.msg.split(':', 1)[1]))
+ pbar.setmessage('%s: %s' % (tasknum, msg))
pbar.setextra(rate)
if progress > -1:
content = pbar.update(progress)
@@ -310,11 +323,17 @@ class TerminalFilter(object):
else:
content = "%s: %s" % (tasknum, task)
print(content)
- lines = lines + 1 + int(len(content) / (self.columns + 1))
+ lines = lines + self.getlines(content)
self.footer_present = lines
self.lastpids = runningpids[:]
self.lastcount = self.helper.tasknumber_current
+ def getlines(self, content):
+ lines = 0
+ for line in content.split("\n"):
+ lines = lines + 1 + int(len(line) / (self.columns + 1))
+ return lines
+
def finish(self):
if self.stdinbackup:
fd = sys.stdin.fileno()
@@ -539,6 +558,13 @@ def main(server, eventHandler, params, tf = TerminalFilter):
except OSError:
pass
+ # Add the logging domains specified by the user on the command line
+ for (domainarg, iterator) in groupby(params.debug_domains):
+ dlevel = len(tuple(iterator))
+ l = logconfig["loggers"].setdefault("BitBake.%s" % domainarg, {})
+ l["level"] = logging.DEBUG - dlevel + 1
+ l.setdefault("handlers", []).extend(["BitBake.verbconsole"])
+
conf = bb.msg.setLoggingConfig(logconfig, logconfigfile)
if sys.stdin.isatty() and sys.stdout.isatty():
@@ -597,7 +623,8 @@ def main(server, eventHandler, params, tf = TerminalFilter):
warnings = 0
taskfailures = []
- printinterval = 5000
+ printintervaldelta = 10 * 60 # 10 minutes
+ printinterval = printintervaldelta
lastprint = time.time()
termfilter = tf(main, helper, console_handlers, params.options.quiet)
@@ -607,7 +634,7 @@ def main(server, eventHandler, params, tf = TerminalFilter):
try:
if (lastprint + printinterval) <= time.time():
termfilter.keepAlive(printinterval)
- printinterval += 5000
+ printinterval += printintervaldelta
event = eventHandler.waitEvent(0)
if event is None:
if main.shutdown > 1:
@@ -638,8 +665,8 @@ def main(server, eventHandler, params, tf = TerminalFilter):
if isinstance(event, logging.LogRecord):
lastprint = time.time()
- printinterval = 5000
- if event.levelno >= bb.msg.BBLogFormatter.ERROR:
+ printinterval = printintervaldelta
+ if event.levelno >= bb.msg.BBLogFormatter.ERRORONCE:
errors = errors + 1
return_value = 1
elif event.levelno == bb.msg.BBLogFormatter.WARNING:
@@ -653,10 +680,10 @@ def main(server, eventHandler, params, tf = TerminalFilter):
continue
# Prefix task messages with recipe/task
- if event.taskpid in helper.pidmap and event.levelno != bb.msg.BBLogFormatter.PLAIN:
+ if event.taskpid in helper.pidmap and event.levelno not in [bb.msg.BBLogFormatter.PLAIN, bb.msg.BBLogFormatter.WARNONCE, bb.msg.BBLogFormatter.ERRORONCE]:
taskinfo = helper.running_tasks[helper.pidmap[event.taskpid]]
event.msg = taskinfo['title'] + ': ' + event.msg
- if hasattr(event, 'fn'):
+ if hasattr(event, 'fn') and event.levelno not in [bb.msg.BBLogFormatter.WARNONCE, bb.msg.BBLogFormatter.ERRORONCE]:
event.msg = event.fn + ': ' + event.msg
logging.getLogger(event.name).handle(event)
continue
@@ -850,7 +877,6 @@ def main(server, eventHandler, params, tf = TerminalFilter):
state_force_shutdown()
main.shutdown = main.shutdown + 1
- pass
except Exception as e:
import traceback
sys.stderr.write(traceback.format_exc())
@@ -867,11 +893,11 @@ def main(server, eventHandler, params, tf = TerminalFilter):
for failure in taskfailures:
summary += "\n %s" % failure
if warnings:
- summary += pluralise("\nSummary: There was %s WARNING message shown.",
- "\nSummary: There were %s WARNING messages shown.", warnings)
+ summary += pluralise("\nSummary: There was %s WARNING message.",
+ "\nSummary: There were %s WARNING messages.", warnings)
if return_value and errors:
- summary += pluralise("\nSummary: There was %s ERROR message shown, returning a non-zero exit code.",
- "\nSummary: There were %s ERROR messages shown, returning a non-zero exit code.", errors)
+ summary += pluralise("\nSummary: There was %s ERROR message, returning a non-zero exit code.",
+ "\nSummary: There were %s ERROR messages, returning a non-zero exit code.", errors)
if summary and params.options.quiet == 0:
print(summary)
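
As an aside for reviewers, the new -D handling above boils down to counting
repeated domain arguments with itertools.groupby. A minimal standalone sketch
(not part of the patch; the input list is a made-up stand-in for
params.debug_domains):

    import logging
    from itertools import groupby

    # Hypothetical command line: -D Fetcher -D Fetcher -D RunQueue
    debug_domains = ["Fetcher", "Fetcher", "RunQueue"]

    logconfig = {"loggers": {}}
    for domainarg, iterator in groupby(debug_domains):
        dlevel = len(tuple(iterator))            # repetitions set the verbosity
        l = logconfig["loggers"].setdefault("BitBake.%s" % domainarg, {})
        l["level"] = logging.DEBUG - dlevel + 1  # DEBUG, then DEBUG-1, ...
        l.setdefault("handlers", []).extend(["BitBake.verbconsole"])

    print(logconfig)   # BitBake.Fetcher at DEBUG-1, BitBake.RunQueue at DEBUG
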
diff --git a/bitbake/lib/bb/ui/taskexp.py b/bitbake/lib/bb/ui/taskexp.py
index 2b24671..c00eaf6 100644
--- a/bitbake/lib/bb/ui/taskexp.py
+++ b/bitbake/lib/bb/ui/taskexp.py
@@ -8,6 +8,7 @@
#
import sys
+import traceback
try:
import gi
@@ -196,6 +197,7 @@ def main(server, eventHandler, params):
gtkgui.start()
try:
+ params.updateToServer(server, os.environ.copy())
params.updateFromServer(server)
cmdline = params.parseActions()
if not cmdline:
@@ -218,6 +220,9 @@ def main(server, eventHandler, params):
except client.Fault as x:
print("XMLRPC Fault getting commandline:\n %s" % x)
return
+ except Exception as e:
+ print("Exception in startup:\n %s" % traceback.format_exc())
+ return
if gtkthread.quit.isSet():
return
diff --git a/bitbake/lib/bb/ui/uievent.py b/bitbake/lib/bb/ui/uievent.py
index 8607d05..d595f17 100644
--- a/bitbake/lib/bb/ui/uievent.py
+++ b/bitbake/lib/bb/ui/uievent.py
@@ -44,7 +44,7 @@ class BBUIEventQueue:
for count_tries in range(5):
ret = self.BBServer.registerEventHandler(self.host, self.port)
- if isinstance(ret, collections.Iterable):
+ if isinstance(ret, collections.abc.Iterable):
self.EventHandle, error = ret
else:
self.EventHandle = ret
@@ -73,13 +73,13 @@ class BBUIEventQueue:
self.eventQueueLock.acquire()
- if len(self.eventQueue) == 0:
+ if not self.eventQueue:
self.eventQueueLock.release()
return None
item = self.eventQueue.pop(0)
- if len(self.eventQueue) == 0:
+ if not self.eventQueue:
self.eventQueueNotify.clear()
self.eventQueueLock.release()
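
The collections.Iterable alias used here was deprecated since Python 3.3 and
removed in 3.10, hence the switch to collections.abc. A small sketch of the
same duck-typing check (the two return shapes are inferred from the
surrounding code):

    import collections.abc

    def unpack_handle(ret):
        # Iterable replies carry (handle, error); older servers return the
        # handle alone.
        if isinstance(ret, collections.abc.Iterable):
            handle, error = ret
        else:
            handle, error = ret, None
        return handle, error

    print(unpack_handle((42, None)))   # (42, None)
    print(unpack_handle(42))           # (42, None)
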
diff --git a/bitbake/lib/bb/ui/uihelper.py b/bitbake/lib/bb/ui/uihelper.py
index 52fdae3..82913e0 100644
--- a/bitbake/lib/bb/ui/uihelper.py
+++ b/bitbake/lib/bb/ui/uihelper.py
@@ -50,8 +50,10 @@ class BBUIHelper:
removetid(event.pid, tid)
self.failed_tasks.append( { 'title' : "%s %s" % (event._package, event._task)})
elif isinstance(event, bb.runqueue.runQueueTaskStarted) or isinstance(event, bb.runqueue.sceneQueueTaskStarted):
- self.tasknumber_current = event.stats.completed + event.stats.active + event.stats.failed + event.stats.setscene_active + 1
+ self.tasknumber_current = event.stats.completed + event.stats.active + event.stats.failed
self.tasknumber_total = event.stats.total
+ self.setscene_current = event.stats.setscene_active + event.stats.setscene_covered + event.stats.setscene_notcovered
+ self.setscene_total = event.stats.setscene_total
self.needUpdate = True
elif isinstance(event, bb.build.TaskProgress):
if event.pid > 0 and event.pid in self.pidmap:
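
This split of ordinary and setscene counters is what feeds the two-line
progress display added to knotty above. A toy recomputation with invented
numbers standing in for event.stats:

    class Stats:   # hypothetical stand-in for a runqueue stats event
        completed, active, failed, total = 40, 4, 1, 1375
        setscene_active, setscene_covered = 2, 100
        setscene_notcovered, setscene_total = 20, 208

    tasknumber_current = Stats.completed + Stats.active + Stats.failed
    setscene_current = (Stats.setscene_active + Stats.setscene_covered
                        + Stats.setscene_notcovered)
    print("Setscene tasks: %s of %s" % (setscene_current, Stats.setscene_total))
    print("Running tasks (%s of %s)" % (tasknumber_current, Stats.total))
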
diff --git a/bitbake/lib/bb/utils.py b/bitbake/lib/bb/utils.py
index 74cbc69..92d44c5 100644
--- a/bitbake/lib/bb/utils.py
+++ b/bitbake/lib/bb/utils.py
@@ -27,6 +27,9 @@ import errno
import signal
import collections
import copy
+import ctypes
+import random
+import tempfile
from subprocess import getstatusoutput
from contextlib import contextmanager
from ctypes import cdll
@@ -252,7 +255,7 @@ def explode_dep_versions(s):
"""
Take an RDEPENDS style string of format:
"DEPEND1 (optional version) DEPEND2 (optional version) ..."
- skip null value and items appeared in dependancy string multiple times
+ skip null value and items appeared in dependency string multiple times
and return a dictionary of dependencies and versions.
"""
r = explode_dep_versions2(s)
@@ -380,7 +383,7 @@ def _print_exception(t, value, tb, realfile, text, context):
error.append("Exception: %s" % ''.join(exception))
- # If the exception is from spwaning a task, let's be helpful and display
+ # If the exception is from spawning a task, let's be helpful and display
# the output (which hopefully includes stderr).
if isinstance(value, subprocess.CalledProcessError) and value.output:
error.append("Subprocess output:")
@@ -428,12 +431,14 @@ def better_eval(source, locals, extraglobals = None):
return eval(source, ctx, locals)
@contextmanager
-def fileslocked(files):
+def fileslocked(files, *args, **kwargs):
"""Context manager for locking and unlocking file locks."""
locks = []
if files:
for lockfile in files:
- locks.append(bb.utils.lockfile(lockfile))
+ l = bb.utils.lockfile(lockfile, *args, **kwargs)
+ if l is not None:
+ locks.append(l)
try:
yield
@@ -452,13 +457,16 @@ def lockfile(name, shared=False, retry=True, block=False):
consider the possibility of sending a signal to the process to break
out - at which point you want block=True rather than retry=True.
"""
- if len(name) > 255:
- root, ext = os.path.splitext(name)
- name = root[:255 - len(ext)] + ext
+ basename = os.path.basename(name)
+ if len(basename) > 255:
+ root, ext = os.path.splitext(basename)
+ basename = root[:255 - len(ext)] + ext
dirname = os.path.dirname(name)
mkdirhier(dirname)
+ name = os.path.join(dirname, basename)
+
if not os.access(dirname, os.W_OK):
logger.error("Unable to acquire lock '%s', directory is not writable",
name)
@@ -537,7 +545,7 @@ def md5_file(filename):
Return the hex string representation of the MD5 checksum of filename.
"""
import hashlib
- return _hasher(hashlib.md5(), filename)
+ return _hasher(hashlib.new('MD5', usedforsecurity=False), filename)
def sha256_file(filename):
"""
@@ -588,8 +596,8 @@ def preserved_envvars():
v = [
'BBPATH',
'BB_PRESERVE_ENV',
- 'BB_ENV_WHITELIST',
- 'BB_ENV_EXTRAWHITE',
+ 'BB_ENV_PASSTHROUGH',
+ 'BB_ENV_PASSTHROUGH_ADDITIONS',
]
return v + preserved_envvars_exported()
@@ -620,21 +628,21 @@ def filter_environment(good_vars):
def approved_variables():
"""
- Determine and return the list of whitelisted variables which are approved
+ Determine and return the list of variables which are approved
to remain in the environment.
"""
if 'BB_PRESERVE_ENV' in os.environ:
return os.environ.keys()
approved = []
- if 'BB_ENV_WHITELIST' in os.environ:
- approved = os.environ['BB_ENV_WHITELIST'].split()
- approved.extend(['BB_ENV_WHITELIST'])
+ if 'BB_ENV_PASSTHROUGH' in os.environ:
+ approved = os.environ['BB_ENV_PASSTHROUGH'].split()
+ approved.extend(['BB_ENV_PASSTHROUGH'])
else:
approved = preserved_envvars()
- if 'BB_ENV_EXTRAWHITE' in os.environ:
- approved.extend(os.environ['BB_ENV_EXTRAWHITE'].split())
- if 'BB_ENV_EXTRAWHITE' not in approved:
- approved.extend(['BB_ENV_EXTRAWHITE'])
+ if 'BB_ENV_PASSTHROUGH_ADDITIONS' in os.environ:
+ approved.extend(os.environ['BB_ENV_PASSTHROUGH_ADDITIONS'].split())
+ if 'BB_ENV_PASSTHROUGH_ADDITIONS' not in approved:
+ approved.extend(['BB_ENV_PASSTHROUGH_ADDITIONS'])
return approved
def clean_environment():
@@ -688,8 +696,8 @@ def remove(path, recurse=False, ionice=False):
return
if recurse:
for name in glob.glob(path):
- if _check_unsafe_delete_path(path):
- raise Exception('bb.utils.remove: called with dangerous path "%s" and recurse=True, refusing to delete!' % path)
+ if _check_unsafe_delete_path(name):
+ raise Exception('bb.utils.remove: called with dangerous path "%s" and recurse=True, refusing to delete!' % name)
# shutil.rmtree(name) would be ideal but its too slow
cmd = []
if ionice:
@@ -747,7 +755,7 @@ def movefile(src, dest, newmtime = None, sstat = None):
if not sstat:
sstat = os.lstat(src)
except Exception as e:
- print("movefile: Stating source file failed...", e)
+ logger.warning("movefile: Stating source file failed...", e)
return None
destexists = 1
@@ -775,7 +783,7 @@ def movefile(src, dest, newmtime = None, sstat = None):
os.unlink(src)
return os.lstat(dest)
except Exception as e:
- print("movefile: failed to properly create symlink:", dest, "->", target, e)
+ logger.warning("movefile: failed to properly create symlink:", dest, "->", target, e)
return None
renamefailed = 1
@@ -787,12 +795,12 @@ def movefile(src, dest, newmtime = None, sstat = None):
if sstat[stat.ST_DEV] == dstat[stat.ST_DEV]:
try:
- os.rename(src, destpath)
+ bb.utils.rename(src, destpath)
renamefailed = 0
except Exception as e:
if e.errno != errno.EXDEV:
# Some random error.
- print("movefile: Failed to move", src, "to", dest, e)
+ logger.warning("movefile: Failed to move", src, "to", dest, e)
return None
# Invalid cross-device-link 'bind' mounted or actually Cross-Device
@@ -801,16 +809,16 @@ def movefile(src, dest, newmtime = None, sstat = None):
if stat.S_ISREG(sstat[stat.ST_MODE]):
try: # For safety copy then move it over.
shutil.copyfile(src, destpath + "#new")
- os.rename(destpath + "#new", destpath)
+ bb.utils.rename(destpath + "#new", destpath)
didcopy = 1
except Exception as e:
- print('movefile: copy', src, '->', dest, 'failed.', e)
+ logger.warning('movefile: copy %s -> %s failed: %s', src, dest, e)
return None
else:
#we don't yet handle special, so we need to fall back to /bin/mv
a = getstatusoutput("/bin/mv -f " + "'" + src + "' '" + dest + "'")
if a[0] != 0:
- print("movefile: Failed to move special file:" + src + "' to '" + dest + "'", a)
+ logger.warning("movefile: Failed to move special file:" + src + "' to '" + dest + "'", a)
return None # failure
try:
if didcopy:
@@ -818,7 +826,7 @@ def movefile(src, dest, newmtime = None, sstat = None):
os.chmod(destpath, stat.S_IMODE(sstat[stat.ST_MODE])) # Sticky is reset on chown
os.unlink(src)
except Exception as e:
- print("movefile: Failed to chown/chmod/unlink", dest, e)
+ logger.warning("movefile: Failed to chown/chmod/unlink", dest, e)
return None
if newmtime:
@@ -879,7 +887,7 @@ def copyfile(src, dest, newmtime = None, sstat = None):
# For safety copy then move it over.
shutil.copyfile(src, dest + "#new")
- os.rename(dest + "#new", dest)
+ bb.utils.rename(dest + "#new", dest)
except Exception as e:
logger.warning("copyfile: copy %s to %s failed (%s)" % (src, dest, e))
return False
@@ -1183,7 +1191,7 @@ def edit_metadata(meta_lines, variables, varfunc, match_overrides=False):
variables: a list of variable names to look for. Functions
may also be specified, but must be specified with '()' at
the end of the name. Note that the function doesn't have
- any intrinsic understanding of _append, _prepend, _remove,
+ any intrinsic understanding of :append, :prepend, :remove,
or overrides, so these are considered as part of the name.
These values go into a regular expression, so regular
expression syntax is allowed.
@@ -1595,6 +1603,36 @@ def set_process_name(name):
except:
pass
+def disable_network(uid=None, gid=None):
+ """
+ Disable networking in the current process if the kernel supports it, else
+ just return after logging to debug. To do this we need to create a new user
+ namespace, then map back to the original uid/gid.
+ """
+ libc = ctypes.CDLL('libc.so.6')
+
+ # From sched.h
+ # New user namespace
+ CLONE_NEWUSER = 0x10000000
+ # New network namespace
+ CLONE_NEWNET = 0x40000000
+
+ if uid is None:
+ uid = os.getuid()
+ if gid is None:
+ gid = os.getgid()
+
+ ret = libc.unshare(CLONE_NEWNET | CLONE_NEWUSER)
+ if ret != 0:
+ logger.debug("System doesn't suport disabling network without admin privs")
+ return
+ with open("/proc/self/uid_map", "w") as f:
+ f.write("%s %s 1" % (uid, uid))
+ with open("/proc/self/setgroups", "w") as f:
+ f.write("deny")
+ with open("/proc/self/gid_map", "w") as f:
+ f.write("%s %s 1" % (gid, gid))
+
def export_proxies(d):
""" export common proxies variables from datastore to environment """
import os
@@ -1676,3 +1714,66 @@ def is_semver(version):
return False
return True
+
+# Wrapper around os.rename which can handle cross device problems
+# e.g. from container filesystems
+def rename(src, dst):
+ try:
+ os.rename(src, dst)
+ except OSError as err:
+ if err.errno == errno.EXDEV:
+ # Invalid cross-device link error
+ shutil.move(src, dst)
+ else:
+ raise err
+
+@contextmanager
+def environment(**envvars):
+ """
+ Context manager to selectively update the environment with the specified mapping.
+ """
+ backup = dict(os.environ)
+ try:
+ os.environ.update(envvars)
+ yield
+ finally:
+ for var in envvars:
+ if var in backup:
+ os.environ[var] = backup[var]
+ elif var in os.environ:
+ del os.environ[var]
+
+def is_local_uid(uid=''):
+ """
+ Check whether uid is a local one or not.
+ Can't use pwd module since it gets all UIDs, not local ones only.
+ """
+ if not uid:
+ uid = os.getuid()
+ with open('/etc/passwd', 'r') as f:
+ for line in f:
+ line_split = line.split(':')
+ if len(line_split) < 3:
+ continue
+ if str(uid) == line_split[2]:
+ return True
+ return False
+
+def mkstemp(suffix=None, prefix=None, dir=None, text=False):
+ """
+ Generates a unique filename, independent of time.
+
+ mkstemp() in glibc (at least) generates unique file names based on the
+ current system time. When combined with highly parallel builds, and
+ operating over NFS (e.g. shared sstate/downloads) this can result in
+ conflicts and race conditions.
+
+ This function adds additional entropy to the file name so that a collision
+ is independent of time and thus extremely unlikely.
+ """
+ entropy = "".join(random.choices("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890", k=20))
+ if prefix:
+ prefix = prefix + entropy
+ else:
+ prefix = tempfile.gettempprefix() + entropy
+ return tempfile.mkstemp(suffix=suffix, prefix=prefix, dir=dir, text=text)
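
Of the helpers added to bb.utils above, environment() is the most likely to be
used directly from other code. A usage sketch, assuming bitbake's lib/
directory is on sys.path:

    import os
    import bb.utils

    os.environ.pop("http_proxy", None)      # start from a known state

    # The value exists only inside the block; the previous state, including
    # the variable being unset, is restored on exit even after an exception.
    with bb.utils.environment(http_proxy="http://proxy.example:3128"):
        print(os.environ["http_proxy"])     # http://proxy.example:3128

    print("http_proxy" in os.environ)       # False
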
diff --git a/bitbake/lib/bblayers/__init__.py b/bitbake/lib/bblayers/__init__.py
index 4e7c09d..78efd29 100644
--- a/bitbake/lib/bblayers/__init__.py
+++ b/bitbake/lib/bblayers/__init__.py
@@ -1,4 +1,6 @@
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
diff --git a/bitbake/lib/bblayers/action.py b/bitbake/lib/bblayers/action.py
index f05f5d3..454c251 100644
--- a/bitbake/lib/bblayers/action.py
+++ b/bitbake/lib/bblayers/action.py
@@ -1,4 +1,6 @@
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -53,7 +55,7 @@ class ActionPlugin(LayerPlugin):
except (bb.tinfoil.TinfoilUIException, bb.BBHandledException):
# Restore the back up copy of bblayers.conf
shutil.copy2(backup, bblayers_conf)
- bb.fatal("Parse failure with the specified layer added, aborting.")
+ bb.fatal("Parse failure with the specified layer added, exiting.")
else:
for item in notadded:
sys.stderr.write("Specified layer %s is already in BBLAYERS\n" % item)
diff --git a/bitbake/lib/bblayers/common.py b/bitbake/lib/bblayers/common.py
index 6c76ef3..f7b9cee 100644
--- a/bitbake/lib/bblayers/common.py
+++ b/bitbake/lib/bblayers/common.py
@@ -1,4 +1,6 @@
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
diff --git a/bitbake/lib/bblayers/layerindex.py b/bitbake/lib/bblayers/layerindex.py
index b2f27b2..0ac8fd2 100644
--- a/bitbake/lib/bblayers/layerindex.py
+++ b/bitbake/lib/bblayers/layerindex.py
@@ -1,4 +1,6 @@
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -159,12 +161,17 @@ class LayerIndexPlugin(ActionPlugin):
logger.plain(' recommended by: %s' % ' '.join(recommendedby))
if dependencies:
- fetchdir = self.tinfoil.config_data.getVar('BBLAYERS_FETCH_DIR')
- if not fetchdir:
- logger.error("Cannot get BBLAYERS_FETCH_DIR")
- return 1
+ if args.fetchdir:
+ fetchdir = args.fetchdir
+ else:
+ fetchdir = self.tinfoil.config_data.getVar('BBLAYERS_FETCH_DIR')
+ if not fetchdir:
+ logger.error("Cannot get BBLAYERS_FETCH_DIR")
+ return 1
+
if not os.path.exists(fetchdir):
os.makedirs(fetchdir)
+
addlayers = []
for deplayerbranch in dependencies:
@@ -206,6 +213,8 @@ class LayerIndexPlugin(ActionPlugin):
"""
args.show_only = True
args.ignore = []
+ args.fetchdir = ""
+ args.shallow = True
self.do_layerindex_fetch(args)
def register_commands(self, sp):
@@ -214,6 +223,7 @@ class LayerIndexPlugin(ActionPlugin):
parser_layerindex_fetch.add_argument('-b', '--branch', help='branch name to fetch')
parser_layerindex_fetch.add_argument('-s', '--shallow', help='do only shallow clones (--depth=1)', action='store_true')
parser_layerindex_fetch.add_argument('-i', '--ignore', help='assume the specified layers do not need to be fetched/added (separate multiple layers with commas, no spaces)', metavar='LAYER')
+ parser_layerindex_fetch.add_argument('-f', '--fetchdir', help='directory to fetch the layer(s) into (will be created if it does not exist)')
parser_layerindex_fetch.add_argument('layername', nargs='+', help='layer to fetch')
parser_layerindex_show_depends = self.add_command(sp, 'layerindex-show-depends', self.do_layerindex_show_depends, parserecipes=False)
diff --git a/bitbake/lib/bblayers/query.py b/bitbake/lib/bblayers/query.py
index 947422a..9142ec4 100644
--- a/bitbake/lib/bblayers/query.py
+++ b/bitbake/lib/bblayers/query.py
@@ -1,4 +1,6 @@
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -154,7 +156,7 @@ skipped recipes will also be listed, with a " (skipped)" suffix.
def print_item(f, pn, ver, layer, ispref):
if not selected_layer or layer == selected_layer:
if not bare and f in skiplist:
- skipped = ' (skipped)'
+ skipped = ' (skipped: %s)' % self.tinfoil.cooker.skiplist[f].skipreason
else:
skipped = ''
if show_filenames:
@@ -441,10 +443,10 @@ NOTE: .bbappend files can impact the dependencies.
line = fnfile.readline()
# The "require/include xxx" in conf/machine/*.conf, .inc and .bbclass
- conf_re = re.compile(".*/conf/machine/[^\/]*\.conf$")
- inc_re = re.compile(".*\.inc$")
+ conf_re = re.compile(r".*/conf/machine/[^\/]*\.conf$")
+ inc_re = re.compile(r".*\.inc$")
# The "inherit xxx" in .bbclass
- bbclass_re = re.compile(".*\.bbclass$")
+ bbclass_re = re.compile(r".*\.bbclass$")
for layerdir in self.bblayers:
layername = self.get_layer_name(layerdir)
for dirpath, dirnames, filenames in os.walk(layerdir):
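
The r'' prefixes added above silence Python's DeprecationWarning for invalid
escape sequences such as '\.' in ordinary string literals; the compiled
patterns match exactly as before. For instance:

    import re

    inc_re = re.compile(r".*\.inc$")   # raw string: '\.' reaches re unchanged
    print(bool(inc_re.match("meta/conf/distro/include/security_flags.inc")))  # True
    print(bool(inc_re.match("recipes-core/foo/foo.bb")))                      # False
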
diff --git a/bitbake/lib/codegen.py b/bitbake/lib/codegen.py
index 62a6748..6955a7a 100644
--- a/bitbake/lib/codegen.py
+++ b/bitbake/lib/codegen.py
@@ -401,6 +401,12 @@ class SourceGenerator(NodeVisitor):
def visit_Num(self, node):
self.write(repr(node.n))
+ def visit_Constant(self, node):
+ # Python 3.8 deprecated visit_Num(), visit_Str(), visit_Bytes(),
+ # visit_NameConstant() and visit_Ellipsis(). They can be removed once we
+ # require 3.8+.
+ self.write(repr(node.value))
+
def visit_Tuple(self, node):
self.write('(')
idx = -1
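
For context on the codegen.py hunk: since Python 3.8 the parser emits
ast.Constant for all literals, so a NodeVisitor that only defines the legacy
visit_Num()/visit_Str() methods silently misses them. A quick demonstration:

    import ast

    tree = ast.parse("x = 42")
    node = tree.body[0].value
    print(type(node).__name__)   # "Constant" on Python 3.8+
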
diff --git a/bitbake/lib/hashserv/__init__.py b/bitbake/lib/hashserv/__init__.py
index 5f2e101..9cb3fd5 100644
--- a/bitbake/lib/hashserv/__init__.py
+++ b/bitbake/lib/hashserv/__init__.py
@@ -22,46 +22,68 @@ ADDR_TYPE_TCP = 1
# is necessary
DEFAULT_MAX_CHUNK = 32 * 1024
-TABLE_DEFINITION = (
- ("method", "TEXT NOT NULL"),
- ("outhash", "TEXT NOT NULL"),
- ("taskhash", "TEXT NOT NULL"),
- ("unihash", "TEXT NOT NULL"),
- ("created", "DATETIME"),
+UNIHASH_TABLE_DEFINITION = (
+ ("method", "TEXT NOT NULL", "UNIQUE"),
+ ("taskhash", "TEXT NOT NULL", "UNIQUE"),
+ ("unihash", "TEXT NOT NULL", ""),
+)
+
+UNIHASH_TABLE_COLUMNS = tuple(name for name, _, _ in UNIHASH_TABLE_DEFINITION)
+
+OUTHASH_TABLE_DEFINITION = (
+ ("method", "TEXT NOT NULL", "UNIQUE"),
+ ("taskhash", "TEXT NOT NULL", "UNIQUE"),
+ ("outhash", "TEXT NOT NULL", "UNIQUE"),
+ ("created", "DATETIME", ""),
# Optional fields
- ("owner", "TEXT"),
- ("PN", "TEXT"),
- ("PV", "TEXT"),
- ("PR", "TEXT"),
- ("task", "TEXT"),
- ("outhash_siginfo", "TEXT"),
+ ("owner", "TEXT", ""),
+ ("PN", "TEXT", ""),
+ ("PV", "TEXT", ""),
+ ("PR", "TEXT", ""),
+ ("task", "TEXT", ""),
+ ("outhash_siginfo", "TEXT", ""),
)
-TABLE_COLUMNS = tuple(name for name, _ in TABLE_DEFINITION)
+OUTHASH_TABLE_COLUMNS = tuple(name for name, _, _ in OUTHASH_TABLE_DEFINITION)
+
+def _make_table(cursor, name, definition):
+ cursor.execute('''
+ CREATE TABLE IF NOT EXISTS {name} (
+ id INTEGER PRIMARY KEY AUTOINCREMENT,
+ {fields}
+ UNIQUE({unique})
+ )
+ '''.format(
+ name=name,
+ fields=" ".join("%s %s," % (name, typ) for name, typ, _ in definition),
+ unique=", ".join(name for name, _, flags in definition if "UNIQUE" in flags)
+ ))
+
def setup_database(database, sync=True):
db = sqlite3.connect(database)
db.row_factory = sqlite3.Row
with closing(db.cursor()) as cursor:
- cursor.execute('''
- CREATE TABLE IF NOT EXISTS tasks_v2 (
- id INTEGER PRIMARY KEY AUTOINCREMENT,
- %s
- UNIQUE(method, outhash, taskhash)
- )
- ''' % " ".join("%s %s," % (name, typ) for name, typ in TABLE_DEFINITION))
+ _make_table(cursor, "unihashes_v2", UNIHASH_TABLE_DEFINITION)
+ _make_table(cursor, "outhashes_v2", OUTHASH_TABLE_DEFINITION)
+
cursor.execute('PRAGMA journal_mode = WAL')
cursor.execute('PRAGMA synchronous = %s' % ('NORMAL' if sync else 'OFF'))
# Drop old indexes
cursor.execute('DROP INDEX IF EXISTS taskhash_lookup')
cursor.execute('DROP INDEX IF EXISTS outhash_lookup')
+ cursor.execute('DROP INDEX IF EXISTS taskhash_lookup_v2')
+ cursor.execute('DROP INDEX IF EXISTS outhash_lookup_v2')
+
+ # TODO: Upgrade from tasks_v2?
+ cursor.execute('DROP TABLE IF EXISTS tasks_v2')
# Create new indexes
- cursor.execute('CREATE INDEX IF NOT EXISTS taskhash_lookup_v2 ON tasks_v2 (method, taskhash, created)')
- cursor.execute('CREATE INDEX IF NOT EXISTS outhash_lookup_v2 ON tasks_v2 (method, outhash)')
+ cursor.execute('CREATE INDEX IF NOT EXISTS taskhash_lookup_v3 ON unihashes_v2 (method, taskhash)')
+ cursor.execute('CREATE INDEX IF NOT EXISTS outhash_lookup_v3 ON outhashes_v2 (method, outhash)')
return db
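
For reference, this reproduces the statement _make_table() assembles for the
unihashes_v2 table from UNIHASH_TABLE_DEFINITION above:

    definition = (
        ("method",   "TEXT NOT NULL", "UNIQUE"),
        ("taskhash", "TEXT NOT NULL", "UNIQUE"),
        ("unihash",  "TEXT NOT NULL", ""),
    )
    sql = '''
        CREATE TABLE IF NOT EXISTS {name} (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            {fields}
            UNIQUE({unique})
        )
    '''.format(
        name="unihashes_v2",
        fields=" ".join("%s %s," % (name, typ) for name, typ, _ in definition),
        unique=", ".join(n for n, _, flags in definition if "UNIQUE" in flags),
    )
    print(sql)   # ... method TEXT NOT NULL, taskhash TEXT NOT NULL,
                 #     unihash TEXT NOT NULL, UNIQUE(method, taskhash) ...
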
diff --git a/bitbake/lib/hashserv/client.py b/bitbake/lib/hashserv/client.py
index e05c1eb..b2aa102 100644
--- a/bitbake/lib/hashserv/client.py
+++ b/bitbake/lib/hashserv/client.py
@@ -3,115 +3,28 @@
# SPDX-License-Identifier: GPL-2.0-only
#
-import asyncio
-import json
import logging
import socket
-import os
-from . import chunkify, DEFAULT_MAX_CHUNK, create_async_client
+import bb.asyncrpc
+from . import create_async_client
logger = logging.getLogger("hashserv.client")
-class HashConnectionError(Exception):
- pass
-
-
-class AsyncClient(object):
+class AsyncClient(bb.asyncrpc.AsyncClient):
MODE_NORMAL = 0
MODE_GET_STREAM = 1
def __init__(self):
- self.reader = None
- self.writer = None
+ super().__init__('OEHASHEQUIV', '1.1', logger)
self.mode = self.MODE_NORMAL
- self.max_chunk = DEFAULT_MAX_CHUNK
-
- async def connect_tcp(self, address, port):
- async def connect_sock():
- return await asyncio.open_connection(address, port)
-
- self._connect_sock = connect_sock
-
- async def connect_unix(self, path):
- async def connect_sock():
- return await asyncio.open_unix_connection(path)
-
- self._connect_sock = connect_sock
-
- async def connect(self):
- if self.reader is None or self.writer is None:
- (self.reader, self.writer) = await self._connect_sock()
-
- self.writer.write("OEHASHEQUIV 1.1\n\n".encode("utf-8"))
- await self.writer.drain()
-
- cur_mode = self.mode
- self.mode = self.MODE_NORMAL
- await self._set_mode(cur_mode)
-
- async def close(self):
- self.reader = None
-
- if self.writer is not None:
- self.writer.close()
- self.writer = None
-
- async def _send_wrapper(self, proc):
- count = 0
- while True:
- try:
- await self.connect()
- return await proc()
- except (
- OSError,
- HashConnectionError,
- json.JSONDecodeError,
- UnicodeDecodeError,
- ) as e:
- logger.warning("Error talking to server: %s" % e)
- if count >= 3:
- if not isinstance(e, HashConnectionError):
- raise HashConnectionError(str(e))
- raise e
- await self.close()
- count += 1
-
- async def send_message(self, msg):
- async def get_line():
- line = await self.reader.readline()
- if not line:
- raise HashConnectionError("Connection closed")
-
- line = line.decode("utf-8")
-
- if not line.endswith("\n"):
- raise HashConnectionError("Bad message %r" % message)
-
- return line
-
- async def proc():
- for c in chunkify(json.dumps(msg), self.max_chunk):
- self.writer.write(c.encode("utf-8"))
- await self.writer.drain()
-
- l = await get_line()
-
- m = json.loads(l)
- if m and "chunk-stream" in m:
- lines = []
- while True:
- l = (await get_line()).rstrip("\n")
- if not l:
- break
- lines.append(l)
-
- m = json.loads("".join(lines))
-
- return m
- return await self._send_wrapper(proc)
+ async def setup_connection(self):
+ await super().setup_connection()
+ cur_mode = self.mode
+ self.mode = self.MODE_NORMAL
+ await self._set_mode(cur_mode)
async def send_stream(self, msg):
async def proc():
@@ -119,7 +32,7 @@ class AsyncClient(object):
await self.writer.drain()
l = await self.reader.readline()
if not l:
- raise HashConnectionError("Connection closed")
+ raise ConnectionError("Connection closed")
return l.decode("utf-8").rstrip()
return await self._send_wrapper(proc)
@@ -128,11 +41,11 @@ class AsyncClient(object):
if new_mode == self.MODE_NORMAL and self.mode == self.MODE_GET_STREAM:
r = await self.send_stream("END")
if r != "ok":
- raise HashConnectionError("Bad response from server %r" % r)
+ raise ConnectionError("Bad response from server %r" % r)
elif new_mode == self.MODE_GET_STREAM and self.mode == self.MODE_NORMAL:
r = await self.send_message({"get-stream": None})
if r != "ok":
- raise HashConnectionError("Bad response from server %r" % r)
+ raise ConnectionError("Bad response from server %r" % r)
elif new_mode != self.mode:
raise Exception(
"Undefined mode transition %r -> %r" % (self.mode, new_mode)
@@ -189,45 +102,20 @@ class AsyncClient(object):
return (await self.send_message({"backfill-wait": None}))["tasks"]
-class Client(object):
+class Client(bb.asyncrpc.Client):
def __init__(self):
- self.client = AsyncClient()
- self.loop = asyncio.new_event_loop()
-
- for call in (
+ super().__init__()
+ self._add_methods(
"connect_tcp",
- "close",
"get_unihash",
"report_unihash",
"report_unihash_equiv",
"get_taskhash",
+ "get_outhash",
"get_stats",
"reset_stats",
"backfill_wait",
- ):
- downcall = getattr(self.client, call)
- setattr(self, call, self._get_downcall_wrapper(downcall))
-
- def _get_downcall_wrapper(self, downcall):
- def wrapper(*args, **kwargs):
- return self.loop.run_until_complete(downcall(*args, **kwargs))
-
- return wrapper
-
- def connect_unix(self, path):
- # AF_UNIX has path length issues so chdir here to workaround
- cwd = os.getcwd()
- try:
- os.chdir(os.path.dirname(path))
- self.loop.run_until_complete(self.client.connect_unix(os.path.basename(path)))
- self.loop.run_until_complete(self.client.connect())
- finally:
- os.chdir(cwd)
-
- @property
- def max_chunk(self):
- return self.client.max_chunk
-
- @max_chunk.setter
- def max_chunk(self, value):
- self.client.max_chunk = value
+ )
+
+ def _get_async_client(self):
+ return AsyncClient()
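
The synchronous Client now inherits from bb.asyncrpc.Client, whose
_add_methods() replaces the deleted per-call wrappers. A sketch of the pattern
reconstructed from the removed _get_downcall_wrapper(); treat the exact
bb.asyncrpc internals as an assumption rather than a documented API:

    import asyncio

    def make_blocking(loop, coro_fn):
        # Expose a coroutine as a plain function by driving a private loop.
        def wrapper(*args, **kwargs):
            return loop.run_until_complete(coro_fn(*args, **kwargs))
        return wrapper

    async def get_unihash(method, taskhash):   # stand-in coroutine
        return "unihash-for-%s" % taskhash

    loop = asyncio.new_event_loop()
    sync_get_unihash = make_blocking(loop, get_unihash)
    print(sync_get_unihash("TestMethod", "abc123"))
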
diff --git a/bitbake/lib/hashserv/server.py b/bitbake/lib/hashserv/server.py
index df0fa0a..d40a2ab 100644
--- a/bitbake/lib/hashserv/server.py
+++ b/bitbake/lib/hashserv/server.py
@@ -5,16 +5,14 @@
from contextlib import closing, contextmanager
from datetime import datetime
+import enum
import asyncio
-import json
import logging
import math
-import os
-import signal
-import socket
-import sys
import time
-from . import chunkify, DEFAULT_MAX_CHUNK, create_async_client, TABLE_COLUMNS
+from . import create_async_client, UNIHASH_TABLE_COLUMNS, OUTHASH_TABLE_COLUMNS
+import bb.asyncrpc
+
logger = logging.getLogger('hashserv.server')
@@ -109,80 +107,78 @@ class Stats(object):
return {k: getattr(self, k) for k in ('num', 'total_time', 'max_time', 'average', 'stdev')}
-class ClientError(Exception):
- pass
+@enum.unique
+class Resolve(enum.Enum):
+ FAIL = enum.auto()
+ IGNORE = enum.auto()
+ REPLACE = enum.auto()
+
-class ServerError(Exception):
- pass
+def insert_table(cursor, table, data, on_conflict):
+ resolve = {
+ Resolve.FAIL: "",
+ Resolve.IGNORE: " OR IGNORE",
+ Resolve.REPLACE: " OR REPLACE",
+ }[on_conflict]
-def insert_task(cursor, data, ignore=False):
keys = sorted(data.keys())
- query = '''INSERT%s INTO tasks_v2 (%s) VALUES (%s)''' % (
- " OR IGNORE" if ignore else "",
- ', '.join(keys),
- ', '.join(':' + k for k in keys))
+ query = 'INSERT{resolve} INTO {table} ({fields}) VALUES({values})'.format(
+ resolve=resolve,
+ table=table,
+ fields=", ".join(keys),
+ values=", ".join(":" + k for k in keys),
+ )
+ prevrowid = cursor.lastrowid
cursor.execute(query, data)
-
-async def copy_from_upstream(client, db, method, taskhash):
- d = await client.get_taskhash(method, taskhash, True)
- if d is not None:
- # Filter out unknown columns
- d = {k: v for k, v in d.items() if k in TABLE_COLUMNS}
- keys = sorted(d.keys())
-
- with closing(db.cursor()) as cursor:
- insert_task(cursor, d)
- db.commit()
-
- return d
-
-async def copy_outhash_from_upstream(client, db, method, outhash, taskhash):
- d = await client.get_outhash(method, outhash, taskhash)
+ logging.debug(
+ "Inserting %r into %s, %s",
+ data,
+ table,
+ on_conflict
+ )
+ return (cursor.lastrowid, cursor.lastrowid != prevrowid)
+
+def insert_unihash(cursor, data, on_conflict):
+ return insert_table(cursor, "unihashes_v2", data, on_conflict)
+
+def insert_outhash(cursor, data, on_conflict):
+ return insert_table(cursor, "outhashes_v2", data, on_conflict)
+
+async def copy_unihash_from_upstream(client, db, method, taskhash):
+ d = await client.get_taskhash(method, taskhash)
if d is not None:
- # Filter out unknown columns
- d = {k: v for k, v in d.items() if k in TABLE_COLUMNS}
- keys = sorted(d.keys())
-
with closing(db.cursor()) as cursor:
- insert_task(cursor, d)
+ insert_unihash(
+ cursor,
+ {k: v for k, v in d.items() if k in UNIHASH_TABLE_COLUMNS},
+ Resolve.IGNORE,
+ )
db.commit()
-
return d
-class ServerClient(object):
- FAST_QUERY = 'SELECT taskhash, method, unihash FROM tasks_v2 WHERE method=:method AND taskhash=:taskhash ORDER BY created ASC LIMIT 1'
- ALL_QUERY = 'SELECT * FROM tasks_v2 WHERE method=:method AND taskhash=:taskhash ORDER BY created ASC LIMIT 1'
- OUTHASH_QUERY = '''
- -- Find tasks with a matching outhash (that is, tasks that
- -- are equivalent)
- SELECT * FROM tasks_v2 WHERE method=:method AND outhash=:outhash
- -- If there is an exact match on the taskhash, return it.
- -- Otherwise return the oldest matching outhash of any
- -- taskhash
- ORDER BY CASE WHEN taskhash=:taskhash THEN 1 ELSE 2 END,
- created ASC
+class ServerCursor(object):
+ def __init__(self, db, cursor, upstream):
+ self.db = db
+ self.cursor = cursor
+ self.upstream = upstream
- -- Only return one row
- LIMIT 1
- '''
+class ServerClient(bb.asyncrpc.AsyncServerConnection):
def __init__(self, reader, writer, db, request_stats, backfill_queue, upstream, read_only):
- self.reader = reader
- self.writer = writer
+ super().__init__(reader, writer, 'OEHASHEQUIV', logger)
self.db = db
self.request_stats = request_stats
- self.max_chunk = DEFAULT_MAX_CHUNK
+ self.max_chunk = bb.asyncrpc.DEFAULT_MAX_CHUNK
self.backfill_queue = backfill_queue
self.upstream = upstream
- self.handlers = {
+ self.handlers.update({
'get': self.handle_get,
'get-outhash': self.handle_get_outhash,
'get-stream': self.handle_get_stream,
'get-stats': self.handle_get_stats,
- 'chunk-stream': self.handle_chunk,
- }
+ })
if not read_only:
self.handlers.update({
@@ -192,56 +188,19 @@ class ServerClient(object):
'backfill-wait': self.handle_backfill_wait,
})
+ def validate_proto_version(self):
+ return (self.proto_version > (1, 0) and self.proto_version <= (1, 1))
+
async def process_requests(self):
if self.upstream is not None:
self.upstream_client = await create_async_client(self.upstream)
else:
self.upstream_client = None
- try:
-
-
- self.addr = self.writer.get_extra_info('peername')
- logger.debug('Client %r connected' % (self.addr,))
-
- # Read protocol and version
- protocol = await self.reader.readline()
- if protocol is None:
- return
-
- (proto_name, proto_version) = protocol.decode('utf-8').rstrip().split()
- if proto_name != 'OEHASHEQUIV':
- return
-
- proto_version = tuple(int(v) for v in proto_version.split('.'))
- if proto_version < (1, 0) or proto_version > (1, 1):
- return
-
- # Read headers. Currently, no headers are implemented, so look for
- # an empty line to signal the end of the headers
- while True:
- line = await self.reader.readline()
- if line is None:
- return
+ await super().process_requests()
- line = line.decode('utf-8').rstrip()
- if not line:
- break
-
- # Handle messages
- while True:
- d = await self.read_message()
- if d is None:
- break
- await self.dispatch_message(d)
- await self.writer.drain()
- except ClientError as e:
- logger.error(str(e))
- finally:
- if self.upstream_client is not None:
- await self.upstream_client.close()
-
- self.writer.close()
+ if self.upstream_client is not None:
+ await self.upstream_client.close()
async def dispatch_message(self, msg):
for k in self.handlers.keys():
@@ -255,81 +214,107 @@ class ServerClient(object):
await self.handlers[k](msg[k])
return
- raise ClientError("Unrecognized command %r" % msg)
+ raise bb.asyncrpc.ClientError("Unrecognized command %r" % msg)
- def write_message(self, msg):
- for c in chunkify(json.dumps(msg), self.max_chunk):
- self.writer.write(c.encode('utf-8'))
-
- async def read_message(self):
- l = await self.reader.readline()
- if not l:
- return None
-
- try:
- message = l.decode('utf-8')
+ async def handle_get(self, request):
+ method = request['method']
+ taskhash = request['taskhash']
+ fetch_all = request.get('all', False)
- if not message.endswith('\n'):
- return None
+ with closing(self.db.cursor()) as cursor:
+ d = await self.get_unihash(cursor, method, taskhash, fetch_all)
- return json.loads(message)
- except (json.JSONDecodeError, UnicodeDecodeError) as e:
- logger.error('Bad message from client: %r' % message)
- raise e
+ self.write_message(d)
- async def handle_chunk(self, request):
- lines = []
- try:
- while True:
- l = await self.reader.readline()
- l = l.rstrip(b"\n").decode("utf-8")
- if not l:
- break
- lines.append(l)
+ async def get_unihash(self, cursor, method, taskhash, fetch_all=False):
+ d = None
+
+ if fetch_all:
+ cursor.execute(
+ '''
+ SELECT *, unihashes_v2.unihash AS unihash FROM outhashes_v2
+ INNER JOIN unihashes_v2 ON unihashes_v2.method=outhashes_v2.method AND unihashes_v2.taskhash=outhashes_v2.taskhash
+ WHERE outhashes_v2.method=:method AND outhashes_v2.taskhash=:taskhash
+ ORDER BY outhashes_v2.created ASC
+ LIMIT 1
+ ''',
+ {
+ 'method': method,
+ 'taskhash': taskhash,
+ }
- msg = json.loads(''.join(lines))
- except (json.JSONDecodeError, UnicodeDecodeError) as e:
- logger.error('Bad message from client: %r' % message)
- raise e
+ )
+ row = cursor.fetchone()
- if 'chunk-stream' in msg:
- raise ClientError("Nested chunks are not allowed")
+ if row is not None:
+ d = {k: row[k] for k in row.keys()}
+ elif self.upstream_client is not None:
+ d = await self.upstream_client.get_taskhash(method, taskhash, True)
+ self.update_unified(cursor, d)
+ self.db.commit()
+ else:
+ row = self.query_equivalent(cursor, method, taskhash)
+
+ if row is not None:
+ d = {k: row[k] for k in row.keys()}
+ elif self.upstream_client is not None:
+ d = await self.upstream_client.get_taskhash(method, taskhash)
+ d = {k: v for k, v in d.items() if k in UNIHASH_TABLE_COLUMNS}
+ insert_unihash(cursor, d, Resolve.IGNORE)
+ self.db.commit()
- await self.dispatch_message(msg)
+ return d
- async def handle_get(self, request):
+ async def handle_get_outhash(self, request):
method = request['method']
+ outhash = request['outhash']
taskhash = request['taskhash']
- if request.get('all', False):
- row = self.query_equivalent(method, taskhash, self.ALL_QUERY)
- else:
- row = self.query_equivalent(method, taskhash, self.FAST_QUERY)
-
- if row is not None:
- logger.debug('Found equivalent task %s -> %s', (row['taskhash'], row['unihash']))
- d = {k: row[k] for k in row.keys()}
- elif self.upstream_client is not None:
- d = await copy_from_upstream(self.upstream_client, self.db, method, taskhash)
- else:
- d = None
+ with closing(self.db.cursor()) as cursor:
+ d = await self.get_outhash(cursor, method, outhash, taskhash)
self.write_message(d)
- async def handle_get_outhash(self, request):
- with closing(self.db.cursor()) as cursor:
- cursor.execute(self.OUTHASH_QUERY,
- {k: request[k] for k in ('method', 'outhash', 'taskhash')})
-
- row = cursor.fetchone()
+ async def get_outhash(self, cursor, method, outhash, taskhash):
+ d = None
+ cursor.execute(
+ '''
+ SELECT *, unihashes_v2.unihash AS unihash FROM outhashes_v2
+ INNER JOIN unihashes_v2 ON unihashes_v2.method=outhashes_v2.method AND unihashes_v2.taskhash=outhashes_v2.taskhash
+ WHERE outhashes_v2.method=:method AND outhashes_v2.outhash=:outhash
+ ORDER BY outhashes_v2.created ASC
+ LIMIT 1
+ ''',
+ {
+ 'method': method,
+ 'outhash': outhash,
+ }
+ )
+ row = cursor.fetchone()
if row is not None:
- logger.debug('Found equivalent outhash %s -> %s', (row['outhash'], row['unihash']))
d = {k: row[k] for k in row.keys()}
- else:
- d = None
+ elif self.upstream_client is not None:
+ d = await self.upstream_client.get_outhash(method, outhash, taskhash)
+ self.update_unified(cursor, d)
+ self.db.commit()
- self.write_message(d)
+ return d
+
+ def update_unified(self, cursor, data):
+ if data is None:
+ return
+
+ insert_unihash(
+ cursor,
+ {k: v for k, v in data.items() if k in UNIHASH_TABLE_COLUMNS},
+ Resolve.IGNORE
+ )
+ insert_outhash(
+ cursor,
+ {k: v for k, v in data.items() if k in OUTHASH_TABLE_COLUMNS},
+ Resolve.IGNORE
+ )
async def handle_get_stream(self, request):
self.write_message('ok')
@@ -357,7 +342,12 @@ class ServerClient(object):
(method, taskhash) = l.split()
#logger.debug('Looking up %s %s' % (method, taskhash))
- row = self.query_equivalent(method, taskhash, self.FAST_QUERY)
+ cursor = self.db.cursor()
+ try:
+ row = self.query_equivalent(cursor, method, taskhash)
+ finally:
+ cursor.close()
+
if row is not None:
msg = ('%s\n' % row['unihash']).encode('utf-8')
#logger.debug('Found equivalent task %s -> %s', (row['taskhash'], row['unihash']))
@@ -384,55 +374,82 @@ class ServerClient(object):
async def handle_report(self, data):
with closing(self.db.cursor()) as cursor:
- cursor.execute(self.OUTHASH_QUERY,
- {k: data[k] for k in ('method', 'outhash', 'taskhash')})
+ outhash_data = {
+ 'method': data['method'],
+ 'outhash': data['outhash'],
+ 'taskhash': data['taskhash'],
+ 'created': datetime.now()
+ }
- row = cursor.fetchone()
+ for k in ('owner', 'PN', 'PV', 'PR', 'task', 'outhash_siginfo'):
+ if k in data:
+ outhash_data[k] = data[k]
+
+ # Insert the new entry, unless it already exists
+ (rowid, inserted) = insert_outhash(cursor, outhash_data, Resolve.IGNORE)
+
+ if inserted:
+ # If this row is new, check if it is equivalent to another
+ # output hash
+ cursor.execute(
+ '''
+ SELECT outhashes_v2.taskhash AS taskhash, unihashes_v2.unihash AS unihash FROM outhashes_v2
+ INNER JOIN unihashes_v2 ON unihashes_v2.method=outhashes_v2.method AND unihashes_v2.taskhash=outhashes_v2.taskhash
+ -- Select any matching output hash except the one we just inserted
+ WHERE outhashes_v2.method=:method AND outhashes_v2.outhash=:outhash AND outhashes_v2.taskhash!=:taskhash
+ -- Pick the oldest hash
+ ORDER BY outhashes_v2.created ASC
+ LIMIT 1
+ ''',
+ {
+ 'method': data['method'],
+ 'outhash': data['outhash'],
+ 'taskhash': data['taskhash'],
+ }
+ )
+ row = cursor.fetchone()
- if row is None and self.upstream_client:
- # Try upstream
- row = await copy_outhash_from_upstream(self.upstream_client,
- self.db,
- data['method'],
- data['outhash'],
- data['taskhash'])
-
- # If no matching outhash was found, or one *was* found but it
- # wasn't an exact match on the taskhash, a new entry for this
- # taskhash should be added
- if row is None or row['taskhash'] != data['taskhash']:
- # If a row matching the outhash was found, the unihash for
- # the new taskhash should be the same as that one.
- # Otherwise the caller provided unihash is used.
- unihash = data['unihash']
if row is not None:
+ # A matching output hash was found. Set our taskhash to the
+ # same unihash since they are equivalent
unihash = row['unihash']
+ resolve = Resolve.IGNORE
+ else:
+ # No matching output hash was found. This is probably the
+ # first outhash to be added.
+ unihash = data['unihash']
+ resolve = Resolve.IGNORE
+
+ # Query upstream to see if it has a unihash we can use
+ if self.upstream_client is not None:
+ upstream_data = await self.upstream_client.get_outhash(data['method'], data['outhash'], data['taskhash'])
+ if upstream_data is not None:
+ unihash = upstream_data['unihash']
+
+
+ insert_unihash(
+ cursor,
+ {
+ 'method': data['method'],
+ 'taskhash': data['taskhash'],
+ 'unihash': unihash,
+ },
+ resolve
+ )
+
+ unihash_data = await self.get_unihash(cursor, data['method'], data['taskhash'])
+ if unihash_data is not None:
+ unihash = unihash_data['unihash']
+ else:
+ unihash = data['unihash']
- insert_data = {
- 'method': data['method'],
- 'outhash': data['outhash'],
- 'taskhash': data['taskhash'],
- 'unihash': unihash,
- 'created': datetime.now()
- }
-
- for k in ('owner', 'PN', 'PV', 'PR', 'task', 'outhash_siginfo'):
- if k in data:
- insert_data[k] = data[k]
-
- insert_task(cursor, insert_data)
- self.db.commit()
-
- logger.info('Adding taskhash %s with unihash %s',
- data['taskhash'], unihash)
+ self.db.commit()
- d = {
- 'taskhash': data['taskhash'],
- 'method': data['method'],
- 'unihash': unihash
- }
- else:
- d = {k: row[k] for k in ('taskhash', 'method', 'unihash')}
+ d = {
+ 'taskhash': data['taskhash'],
+ 'method': data['method'],
+ 'unihash': unihash,
+ }
self.write_message(d)
@@ -440,23 +457,16 @@ class ServerClient(object):
with closing(self.db.cursor()) as cursor:
insert_data = {
'method': data['method'],
- 'outhash': "",
'taskhash': data['taskhash'],
'unihash': data['unihash'],
- 'created': datetime.now()
}
-
- for k in ('owner', 'PN', 'PV', 'PR', 'task', 'outhash_siginfo'):
- if k in data:
- insert_data[k] = data[k]
-
- insert_task(cursor, insert_data, ignore=True)
+ insert_unihash(cursor, insert_data, Resolve.IGNORE)
self.db.commit()
# Fetch the unihash that will be reported for the taskhash. If the
# unihash matches, it means this row was inserted (or the mapping
# was already valid)
- row = self.query_equivalent(data['method'], data['taskhash'], self.FAST_QUERY)
+ row = self.query_equivalent(cursor, data['method'], data['taskhash'])
if row['unihash'] == data['unihash']:
logger.info('Adding taskhash equivalence for %s with unihash %s',
@@ -489,84 +499,32 @@ class ServerClient(object):
await self.backfill_queue.join()
self.write_message(d)
- def query_equivalent(self, method, taskhash, query):
+ def query_equivalent(self, cursor, method, taskhash):
# This is part of the inner loop and must be as fast as possible
- try:
- cursor = self.db.cursor()
- cursor.execute(query, {'method': method, 'taskhash': taskhash})
- return cursor.fetchone()
- except:
- cursor.close()
+ cursor.execute(
+ 'SELECT taskhash, method, unihash FROM unihashes_v2 WHERE method=:method AND taskhash=:taskhash',
+ {
+ 'method': method,
+ 'taskhash': taskhash,
+ }
+ )
+ return cursor.fetchone()
-class Server(object):
- def __init__(self, db, loop=None, upstream=None, read_only=False):
+class Server(bb.asyncrpc.AsyncServer):
+ def __init__(self, db, upstream=None, read_only=False):
if upstream and read_only:
- raise ServerError("Read-only hashserv cannot pull from an upstream server")
+ raise bb.asyncrpc.ServerError("Read-only hashserv cannot pull from an upstream server")
+
+ super().__init__(logger)
self.request_stats = Stats()
self.db = db
-
- if loop is None:
- self.loop = asyncio.new_event_loop()
- self.close_loop = True
- else:
- self.loop = loop
- self.close_loop = False
-
self.upstream = upstream
self.read_only = read_only
- self._cleanup_socket = None
-
- def start_tcp_server(self, host, port):
- self.server = self.loop.run_until_complete(
- asyncio.start_server(self.handle_client, host, port)
- )
-
- for s in self.server.sockets:
- logger.info('Listening on %r' % (s.getsockname(),))
- # Newer python does this automatically. Do it manually here for
- # maximum compatibility
- s.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 1)
- s.setsockopt(socket.SOL_TCP, socket.TCP_QUICKACK, 1)
-
- name = self.server.sockets[0].getsockname()
- if self.server.sockets[0].family == socket.AF_INET6:
- self.address = "[%s]:%d" % (name[0], name[1])
- else:
- self.address = "%s:%d" % (name[0], name[1])
-
- def start_unix_server(self, path):
- def cleanup():
- os.unlink(path)
-
- cwd = os.getcwd()
- try:
- # Work around path length limits in AF_UNIX
- os.chdir(os.path.dirname(path))
- self.server = self.loop.run_until_complete(
- asyncio.start_unix_server(self.handle_client, os.path.basename(path))
- )
- finally:
- os.chdir(cwd)
-
- logger.info('Listening on %r' % path)
-
- self._cleanup_socket = cleanup
- self.address = "unix://%s" % os.path.abspath(path)
-
- async def handle_client(self, reader, writer):
- # writer.transport.set_write_buffer_limits(0)
- try:
- client = ServerClient(reader, writer, self.db, self.request_stats, self.backfill_queue, self.upstream, self.read_only)
- await client.process_requests()
- except Exception as e:
- import traceback
- logger.error('Error from client: %s' % str(e), exc_info=True)
- traceback.print_exc()
- writer.close()
- logger.info('Client disconnected')
+ def accept_client(self, reader, writer):
+ return ServerClient(reader, writer, self.db, self.request_stats, self.backfill_queue, self.upstream, self.read_only)
@contextmanager
def _backfill_worker(self):
@@ -579,7 +537,7 @@ class Server(object):
self.backfill_queue.task_done()
break
method, taskhash = item
- await copy_from_upstream(client, self.db, method, taskhash)
+ await copy_unihash_from_upstream(client, self.db, method, taskhash)
self.backfill_queue.task_done()
finally:
await client.close()
@@ -597,31 +555,8 @@ class Server(object):
else:
yield
- def serve_forever(self):
- def signal_handler():
- self.loop.stop()
+ def run_loop_forever(self):
+ self.backfill_queue = asyncio.Queue()
- asyncio.set_event_loop(self.loop)
- try:
- self.backfill_queue = asyncio.Queue()
-
- self.loop.add_signal_handler(signal.SIGTERM, signal_handler)
-
- with self._backfill_worker():
- try:
- self.loop.run_forever()
- except KeyboardInterrupt:
- pass
-
- self.server.close()
-
- self.loop.run_until_complete(self.server.wait_closed())
- logger.info('Server shutting down')
- finally:
- if self.close_loop:
- if sys.version_info >= (3, 6):
- self.loop.run_until_complete(self.loop.shutdown_asyncgens())
- self.loop.close()
-
- if self._cleanup_socket is not None:
- self._cleanup_socket()
+ with self._backfill_worker():
+ super().run_loop_forever()
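
A standalone illustration of the new insert semantics used throughout
server.py: INSERT OR IGNORE plus a lastrowid comparison tells the caller
whether a row was actually inserted, which is how insert_table() produces its
(rowid, inserted) return value:

    import sqlite3

    db = sqlite3.connect(":memory:")
    cur = db.cursor()
    cur.execute("CREATE TABLE unihashes_v2 ("
                "id INTEGER PRIMARY KEY AUTOINCREMENT, "
                "method TEXT NOT NULL, taskhash TEXT NOT NULL, "
                "unihash TEXT NOT NULL, UNIQUE(method, taskhash))")

    def insert_ignore(cur, data):
        prev = cur.lastrowid
        cur.execute("INSERT OR IGNORE INTO unihashes_v2 "
                    "(method, taskhash, unihash) "
                    "VALUES(:method, :taskhash, :unihash)", data)
        return (cur.lastrowid, cur.lastrowid != prev)

    row = {"method": "TestMethod", "taskhash": "aa", "unihash": "bb"}
    print(insert_ignore(cur, row))   # (1, True): new row
    print(insert_ignore(cur, row))   # (1, False): duplicate ignored
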
diff --git a/bitbake/lib/hashserv/tests.py b/bitbake/lib/hashserv/tests.py
index 1a69648..f6b85ae 100644
--- a/bitbake/lib/hashserv/tests.py
+++ b/bitbake/lib/hashserv/tests.py
@@ -6,7 +6,6 @@
#
from . import create_server, create_client
-from .client import HashConnectionError
import hashlib
import logging
import multiprocessing
@@ -16,28 +15,32 @@ import tempfile
import threading
import unittest
import socket
-
-def _run_server(server, idx):
- # logging.basicConfig(level=logging.DEBUG, filename='bbhashserv.log', filemode='w',
- # format='%(levelname)s %(filename)s:%(lineno)d %(message)s')
- sys.stdout = open('bbhashserv-%d.log' % idx, 'w')
+import time
+import signal
+
+def server_prefunc(server, idx):
+ logging.basicConfig(level=logging.DEBUG, filename='bbhashserv-%d.log' % idx, filemode='w',
+ format='%(levelname)s %(filename)s:%(lineno)d %(message)s')
+ server.logger.debug("Running server %d" % idx)
+ sys.stdout = open('bbhashserv-stdout-%d.log' % idx, 'w')
sys.stderr = sys.stdout
- server.serve_forever()
-
class HashEquivalenceTestSetup(object):
METHOD = 'TestMethod'
server_index = 0
- def start_server(self, dbpath=None, upstream=None, read_only=False):
+ def start_server(self, dbpath=None, upstream=None, read_only=False, prefunc=server_prefunc):
self.server_index += 1
if dbpath is None:
dbpath = os.path.join(self.temp_dir.name, "db%d.sqlite" % self.server_index)
- def cleanup_thread(thread):
- thread.terminate()
- thread.join()
+ def cleanup_server(server):
+ if server.process.exitcode is not None:
+ return
+
+ server.process.terminate()
+ server.process.join()
server = create_server(self.get_server_addr(self.server_index),
dbpath,
@@ -45,9 +48,8 @@ class HashEquivalenceTestSetup(object):
read_only=read_only)
server.dbpath = dbpath
- server.thread = multiprocessing.Process(target=_run_server, args=(server, self.server_index))
- server.thread.start()
- self.addCleanup(cleanup_thread, server.thread)
+ server.serve_as_process(prefunc=prefunc, args=(self.server_index,))
+ self.addCleanup(cleanup_server, server)
def cleanup_client(client):
client.close()
@@ -138,12 +140,17 @@ class HashEquivalenceCommonTests(object):
})
self.assertEqual(result['unihash'], unihash, 'Server returned bad unihash')
- result = self.client.get_taskhash(self.METHOD, taskhash, True)
- self.assertEqual(result['taskhash'], taskhash)
- self.assertEqual(result['unihash'], unihash)
- self.assertEqual(result['method'], self.METHOD)
- self.assertEqual(result['outhash'], outhash)
- self.assertEqual(result['outhash_siginfo'], siginfo)
+ result_unihash = self.client.get_taskhash(self.METHOD, taskhash, True)
+ self.assertEqual(result_unihash['taskhash'], taskhash)
+ self.assertEqual(result_unihash['unihash'], unihash)
+ self.assertEqual(result_unihash['method'], self.METHOD)
+
+ result_outhash = self.client.get_outhash(self.METHOD, outhash, taskhash)
+ self.assertEqual(result_outhash['taskhash'], taskhash)
+ self.assertEqual(result_outhash['method'], self.METHOD)
+ self.assertEqual(result_outhash['unihash'], unihash)
+ self.assertEqual(result_outhash['outhash'], outhash)
+ self.assertEqual(result_outhash['outhash_siginfo'], siginfo)
def test_stress(self):
def query_server(failures):
@@ -258,6 +265,39 @@ class HashEquivalenceCommonTests(object):
result = down_client.report_unihash(taskhash6, self.METHOD, outhash5, unihash6)
self.assertEqual(result['unihash'], unihash5, 'Server failed to copy unihash from upstream')
+ # Tests read through from the upstream server
+ taskhash7 = '9d81d76242cc7cfaf7bf74b94b9cd2e29324ed74'
+ outhash7 = '8470d56547eea6236d7c81a644ce74670ca0bbda998e13c629ef6bb3f0d60b69'
+ unihash7 = '05d2a63c81e32f0a36542ca677e8ad852365c538'
+ self.client.report_unihash(taskhash7, self.METHOD, outhash7, unihash7)
+
+ result = down_client.get_taskhash(self.METHOD, taskhash7, True)
+ self.assertEqual(result['unihash'], unihash7, 'Server failed to copy unihash from upstream')
+ self.assertEqual(result['outhash'], outhash7, 'Server failed to copy unihash from upstream')
+ self.assertEqual(result['taskhash'], taskhash7, 'Server failed to copy unihash from upstream')
+ self.assertEqual(result['method'], self.METHOD)
+
+ taskhash8 = '86978a4c8c71b9b487330b0152aade10c1ee58aa'
+ outhash8 = 'ca8c128e9d9e4a28ef24d0508aa20b5cf880604eacd8f65c0e366f7e0cc5fbcf'
+ unihash8 = 'd8bcf25369d40590ad7d08c84d538982f2023e01'
+ self.client.report_unihash(taskhash8, self.METHOD, outhash8, unihash8)
+
+ result = down_client.get_outhash(self.METHOD, outhash8, taskhash8)
+ self.assertEqual(result['unihash'], unihash8, 'Server failed to copy unihash from upstream')
+ self.assertEqual(result['outhash'], outhash8, 'Server failed to copy unihash from upstream')
+ self.assertEqual(result['taskhash'], taskhash8, 'Server failed to copy unihash from upstream')
+ self.assertEqual(result['method'], self.METHOD)
+
+ taskhash9 = 'ae6339531895ddf5b67e663e6a374ad8ec71d81c'
+ outhash9 = 'afc78172c81880ae10a1fec994b5b4ee33d196a001a1b66212a15ebe573e00b5'
+ unihash9 = '6662e699d6e3d894b24408ff9a4031ef9b038ee8'
+ self.client.report_unihash(taskhash9, self.METHOD, outhash9, unihash9)
+
+ result = down_client.get_taskhash(self.METHOD, taskhash9, False)
+ self.assertEqual(result['unihash'], unihash9, 'Server failed to copy unihash from upstream')
+ self.assertEqual(result['taskhash'], taskhash9, 'Server failed to copy taskhash from upstream')
+ self.assertEqual(result['method'], self.METHOD)
+
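
These read-through cases assume a downstream server chained to an upstream
one. The wiring lives elsewhere in the test setup, but roughly (the
'upstream' keyword is an assumption from the wider suite):

    up_client, up_server = self.start_server()
    down_client, down_server = self.start_server(upstream=up_server.address)
    up_client.report_unihash(taskhash, self.METHOD, outhash, unihash)
    # Lookups via down_client now fall through to up_server and the
    # fetched rows are cached in the downstream database.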
def test_ro_server(self):
(ro_client, ro_server) = self.start_server(dbpath=self.server.dbpath, read_only=True)
@@ -277,13 +317,90 @@ class HashEquivalenceCommonTests(object):
outhash2 = '3c979c3db45c569f51ab7626a4651074be3a9d11a84b1db076f5b14f7d39db44'
unihash2 = '90e9bc1d1f094c51824adca7f8ea79a048d68824'
- with self.assertRaises(HashConnectionError):
+ with self.assertRaises(ConnectionError):
ro_client.report_unihash(taskhash2, self.METHOD, outhash2, unihash2)
# Ensure that the database was not modified
self.assertClientGetHash(self.client, taskhash2, None)
+ def test_slow_server_start(self):
+ # Ensures that the server will exit correctly even if it gets a SIGTERM
+ # before entering the main loop
+
+ event = multiprocessing.Event()
+
+ def prefunc(server, idx):
+ nonlocal event
+ server_prefunc(server, idx)
+ event.wait()
+
+ def do_nothing(signum, frame):
+ pass
+
+ old_signal = signal.signal(signal.SIGTERM, do_nothing)
+ self.addCleanup(signal.signal, signal.SIGTERM, old_signal)
+
+ _, server = self.start_server(prefunc=prefunc)
+ server.process.terminate()
+ time.sleep(30)
+ event.set()
+ server.process.join(300)
+ self.assertIsNotNone(server.process.exitcode, "Server did not exit in a timely manner!")
+
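
The do_nothing handler appears to be installed before the fork so the child
inherits it: the early SIGTERM is swallowed instead of killing the child
outright, and the server is still expected to exit cleanly once its main
loop starts. Reduced to its essentials (a sketch that skips server_prefunc):

    event = multiprocessing.Event()
    _, server = self.start_server(prefunc=lambda s, i: event.wait())
    server.process.terminate()   # SIGTERM lands before the main loop
    event.set()                  # now let the child proceed
    server.process.join(300)     # it must still exit on the pending TERM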
+ def test_diverging_report_race(self):
+ # Tests that a reported task will correctly pick up an updated unihash
+
+ # This is a baseline report added to the database to ensure that there
+ # is something to match against as equivalent
+ outhash1 = 'afd11c366050bcd75ad763e898e4430e2a60659b26f83fbb22201a60672019fa'
+ taskhash1 = '3bde230c743fc45ab61a065d7a1815fbfa01c4740e4c895af2eb8dc0f684a4ab'
+ unihash1 = '3bde230c743fc45ab61a065d7a1815fbfa01c4740e4c895af2eb8dc0f684a4ab'
+ result = self.client.report_unihash(taskhash1, self.METHOD, outhash1, unihash1)
+
+ # Add a report that is equivalent to Task 1. It should ignore the
+ # provided unihash and report the unihash from task 1
+ taskhash2 = '6259ae8263bd94d454c086f501c37e64c4e83cae806902ca95b4ab513546b273'
+ unihash2 = taskhash2
+ result = self.client.report_unihash(taskhash2, self.METHOD, outhash1, unihash2)
+ self.assertEqual(result['unihash'], unihash1)
+
+ # Add another report for Task 2, but with a different outhash (e.g. the
+ # task is non-deterministic). It should still be marked with the Task 1
+ # unihash because it has the Task 2 taskhash, which is equivalent to
+ # Task 1
+ outhash3 = 'd2187ee3a8966db10b34fe0e863482288d9a6185cb8ef58a6c1c6ace87a2f24c'
+ result = self.client.report_unihash(taskhash2, self.METHOD, outhash3, unihash2)
+ self.assertEqual(result['unihash'], unihash1)
+
+
+ def test_diverging_report_reverse_race(self):
+ # Same idea as the previous test, but Tasks 2 and 3 are reported in
+ # the opposite order
+
+ outhash1 = 'afd11c366050bcd75ad763e898e4430e2a60659b26f83fbb22201a60672019fa'
+ taskhash1 = '3bde230c743fc45ab61a065d7a1815fbfa01c4740e4c895af2eb8dc0f684a4ab'
+ unihash1 = '3bde230c743fc45ab61a065d7a1815fbfa01c4740e4c895af2eb8dc0f684a4ab'
+ result = self.client.report_unihash(taskhash1, self.METHOD, outhash1, unihash1)
+
+ taskhash2 = '6259ae8263bd94d454c086f501c37e64c4e83cae806902ca95b4ab513546b273'
+ unihash2 = taskhash2
+
+ # Report Task 3 first. Since there is nothing else in the database it
+ # will use the client provided unihash
+ outhash3 = 'd2187ee3a8966db10b34fe0e863482288d9a6185cb8ef58a6c1c6ace87a2f24c'
+ result = self.client.report_unihash(taskhash2, self.METHOD, outhash3, unihash2)
+ self.assertEqual(result['unihash'], unihash2)
+
+ # Report Task 2. This is equivalent to Task 1 but there is already a mapping for
+ # taskhash2 so it will report unihash2
+ result = self.client.report_unihash(taskhash2, self.METHOD, outhash1, unihash2)
+ self.assertEqual(result['unihash'], unihash2)
+
+ # The originally reported unihash for Task 3 should be unchanged even if it
+ # shares a taskhash with Task 2
+ self.assertClientGetHash(self.client, taskhash2, unihash2)
+
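
Taken together, the two tests pin down a resolution rule that can be
modelled as follows (a behavioural sketch only, not the hashserv
implementation):

    by_taskhash, by_outhash = {}, {}

    def resolve(taskhash, outhash, unihash):
        if taskhash in by_taskhash:
            return by_taskhash[taskhash]    # first taskhash mapping wins
        if outhash in by_outhash:
            unihash = by_outhash[outhash]   # equivalent output reuses unihash
        by_taskhash[taskhash] = unihash
        by_outhash.setdefault(outhash, unihash)
        return unihash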
class TestHashEquivalenceUnixServer(HashEquivalenceTestSetup, HashEquivalenceCommonTests, unittest.TestCase):
def get_server_addr(self, server_idx):
return "unix://" + os.path.join(self.temp_dir.name, 'sock%d' % server_idx)
diff --git a/bitbake/lib/layerindexlib/__init__.py b/bitbake/lib/layerindexlib/__init__.py
index 9ca127b..ac03d89 100644
--- a/bitbake/lib/layerindexlib/__init__.py
+++ b/bitbake/lib/layerindexlib/__init__.py
@@ -6,7 +6,6 @@
import datetime
import logging
-import imp
import os
from collections import OrderedDict
@@ -199,7 +198,7 @@ The format of the indexURI:
For example:
- http://layers.openembedded.org/layerindex/api/;branch=master;desc=OpenEmbedded%20Layer%20Index
+ https://layers.openembedded.org/layerindex/api/;branch=master;desc=OpenEmbedded%20Layer%20Index
cooker://
'''
if reload:
@@ -577,7 +576,7 @@ This function is used to implement debugging and provide the user info.
# index['config'] - configuration data for this index
# index['branches'] - dictionary of Branch objects, by id number
# index['layerItems'] - dictionary of layerItem objects, by id number
-# ...etc... (See: http://layers.openembedded.org/layerindex/api/)
+# ...etc... (See: https://layers.openembedded.org/layerindex/api/)
#
# The class needs to manage the 'index' entries and allow easily adding
# of new items, as well as simply loading of the items.
@@ -1279,7 +1278,7 @@ class Recipe(LayerIndexItemObj_LayerBranch):
filename, filepath, pn, pv, layerbranch,
summary="", description="", section="", license="",
homepage="", bugtracker="", provides="", bbclassextend="",
- inherits="", blacklisted="", updated=None):
+ inherits="", disallowed="", updated=None):
self.id = id
self.filename = filename
self.filepath = filepath
@@ -1295,7 +1294,7 @@ class Recipe(LayerIndexItemObj_LayerBranch):
self.bbclassextend = bbclassextend
self.inherits = inherits
self.updated = updated or datetime.datetime.today().isoformat()
- self.blacklisted = blacklisted
+ self.disallowed = disallowed
if isinstance(layerbranch, LayerBranch):
self.layerbranch = layerbranch
else:
diff --git a/bitbake/lib/layerindexlib/cooker.py b/bitbake/lib/layerindexlib/cooker.py
index 2de6e5f..ced3e06 100644
--- a/bitbake/lib/layerindexlib/cooker.py
+++ b/bitbake/lib/layerindexlib/cooker.py
@@ -279,7 +279,7 @@ class CookerPlugin(layerindexlib.plugin.IndexPlugin):
summary=pn, description=pn, section='?',
license='?', homepage='?', bugtracker='?',
provides='?', bbclassextend='?', inherits='?',
- blacklisted='?', layerbranch=depBranchId)
+ disallowed='?', layerbranch=depBranchId)
index = addElement("recipes", [recipe], index)
diff --git a/bitbake/lib/layerindexlib/restapi.py b/bitbake/lib/layerindexlib/restapi.py
index 26a1c96..81d99b0 100644
--- a/bitbake/lib/layerindexlib/restapi.py
+++ b/bitbake/lib/layerindexlib/restapi.py
@@ -31,7 +31,7 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
The return value is a LayerIndexObj.
url is the url to the rest api of the layer index, such as:
- http://layers.openembedded.org/layerindex/api/
+ https://layers.openembedded.org/layerindex/api/
Or a local file...
"""
@@ -138,7 +138,7 @@ class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
The return value is a LayerIndexObj.
ud is the parsed url to the rest api of the layer index, such as:
- http://layers.openembedded.org/layerindex/api/
+ https://layers.openembedded.org/layerindex/api/
"""
def _get_json_response(apiurl=None, username=None, password=None, retry=True):
diff --git a/bitbake/lib/layerindexlib/tests/restapi.py b/bitbake/lib/layerindexlib/tests/restapi.py
index 33b5c1c..71f0ae8 100644
--- a/bitbake/lib/layerindexlib/tests/restapi.py
+++ b/bitbake/lib/layerindexlib/tests/restapi.py
@@ -22,7 +22,7 @@ class LayerIndexWebRestApiTest(LayersTest):
self.assertFalse(os.environ.get("BB_SKIP_NETTESTS") == "yes", msg="BB_SKIP_NETTESTS set, but we tried to test anyway")
LayersTest.setUp(self)
self.layerindex = layerindexlib.LayerIndex(self.d)
- self.layerindex.load_layerindex('http://layers.openembedded.org/layerindex/api/;branch=sumo', load=['layerDependencies'])
+ self.layerindex.load_layerindex('https://layers.openembedded.org/layerindex/api/;branch=sumo', load=['layerDependencies'])
@skipIfNoNetwork()
def test_layerindex_is_empty(self):
diff --git a/bitbake/lib/ply/yacc.py b/bitbake/lib/ply/yacc.py
index 46e7dc9..767c4e4 100644
--- a/bitbake/lib/ply/yacc.py
+++ b/bitbake/lib/ply/yacc.py
@@ -2797,11 +2797,8 @@ class ParserReflect(object):
# Compute a signature over the grammar
def signature(self):
try:
- from hashlib import md5
- except ImportError:
- from md5 import md5
- try:
- sig = md5()
+ import hashlib
+ sig = hashlib.new('MD5', usedforsecurity=False)
if self.start:
sig.update(self.start.encode('latin-1'))
if self.prec:
diff --git a/bitbake/lib/prserv/__init__.py b/bitbake/lib/prserv/__init__.py
index 9961040..38ced81 100644
--- a/bitbake/lib/prserv/__init__.py
+++ b/bitbake/lib/prserv/__init__.py
@@ -1,4 +1,6 @@
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
diff --git a/bitbake/lib/prserv/client.py b/bitbake/lib/prserv/client.py
new file mode 100644
index 0000000..69ab7a4
--- /dev/null
+++ b/bitbake/lib/prserv/client.py
@@ -0,0 +1,50 @@
+#
+# Copyright BitBake Contributors
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+import logging
+import bb.asyncrpc
+
+logger = logging.getLogger("BitBake.PRserv")
+
+class PRAsyncClient(bb.asyncrpc.AsyncClient):
+ def __init__(self):
+ super().__init__('PRSERVICE', '1.0', logger)
+
+ async def getPR(self, version, pkgarch, checksum):
+ response = await self.send_message(
+ {'get-pr': {'version': version, 'pkgarch': pkgarch, 'checksum': checksum}}
+ )
+ if response:
+ return response['value']
+
+ async def importone(self, version, pkgarch, checksum, value):
+ response = await self.send_message(
+ {'import-one': {'version': version, 'pkgarch': pkgarch, 'checksum': checksum, 'value': value}}
+ )
+ if response:
+ return response['value']
+
+ async def export(self, version, pkgarch, checksum, colinfo):
+ response = await self.send_message(
+ {'export': {'version': version, 'pkgarch': pkgarch, 'checksum': checksum, 'colinfo': colinfo}}
+ )
+ if response:
+ return (response['metainfo'], response['datainfo'])
+
+ async def is_readonly(self):
+ response = await self.send_message(
+ {'is-readonly': {}}
+ )
+ if response:
+ return response['readonly']
+
+class PRClient(bb.asyncrpc.Client):
+ def __init__(self):
+ super().__init__()
+ self._add_methods('getPR', 'importone', 'export', 'is_readonly')
+
+ def _get_async_client(self):
+ return PRAsyncClient()
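
A minimal usage sketch for the new client (host and port are illustrative;
connect_tcp and the sync wrappers are exercised the same way by prserv.serv
further down):

    from prserv.client import PRClient

    conn = PRClient()                     # sync facade over PRAsyncClient
    conn.connect_tcp('localhost', 8585)   # address is illustrative
    pr = conn.getPR('1.0-r0', 'x86_64', 'deadbeef')  # next PR value
    readonly = conn.is_readonly()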
diff --git a/bitbake/lib/prserv/db.py b/bitbake/lib/prserv/db.py
index cb2a246..b4bda70 100644
--- a/bitbake/lib/prserv/db.py
+++ b/bitbake/lib/prserv/db.py
@@ -1,4 +1,6 @@
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
@@ -30,21 +32,29 @@ if sqlversion[0] < 3 or (sqlversion[0] == 3 and sqlversion[1] < 3):
#
class PRTable(object):
- def __init__(self, conn, table, nohist):
+ def __init__(self, conn, table, nohist, read_only):
self.conn = conn
self.nohist = nohist
+ self.read_only = read_only
self.dirty = False
if nohist:
self.table = "%s_nohist" % table
else:
self.table = "%s_hist" % table
- self._execute("CREATE TABLE IF NOT EXISTS %s \
- (version TEXT NOT NULL, \
- pkgarch TEXT NOT NULL, \
- checksum TEXT NOT NULL, \
- value INTEGER, \
- PRIMARY KEY (version, pkgarch, checksum));" % self.table)
+ if self.read_only:
+ data = self._execute(
+ "SELECT count(*) FROM sqlite_master \
+ WHERE type='table' AND name='%s'" % (self.table))
+ # _execute() returns a cursor (always truthy), so fetch the count
+ table_exists = data.fetchone()[0]
+ if not table_exists:
+ raise prserv.NotFoundError
+ else:
+ self._execute("CREATE TABLE IF NOT EXISTS %s \
+ (version TEXT NOT NULL, \
+ pkgarch TEXT NOT NULL, \
+ checksum TEXT NOT NULL, \
+ value INTEGER, \
+ PRIMARY KEY (version, pkgarch, checksum));" % self.table)
def _execute(self, *query):
"""Execute a query, waiting to acquire a lock if necessary"""
@@ -59,8 +69,9 @@ class PRTable(object):
raise exc
def sync(self):
- self.conn.commit()
- self._execute("BEGIN EXCLUSIVE TRANSACTION")
+ if not self.read_only:
+ self.conn.commit()
+ self._execute("BEGIN EXCLUSIVE TRANSACTION")
def sync_if_dirty(self):
if self.dirty:
@@ -75,6 +86,15 @@ class PRTable(object):
return row[0]
else:
#no value found, try to insert
+ if self.read_only:
+ data = self._execute("SELECT ifnull(max(value)+1,0) FROM %s where version=? AND pkgarch=?;" % (self.table),
+ (version, pkgarch))
+ row = data.fetchone()
+ if row is not None:
+ return row[0]
+ else:
+ return 0
+
try:
self._execute("INSERT INTO %s VALUES (?, ?, ?, (select ifnull(max(value)+1,0) from %s where version=? AND pkgarch=?));"
% (self.table,self.table),
@@ -103,6 +123,15 @@ class PRTable(object):
return row[0]
else:
#no value found, try to insert
+ if self.read_only:
+ data = self._execute("SELECT ifnull(max(value)+1,0) FROM %s where version=? AND pkgarch=?;" % (self.table),
+ (version, pkgarch))
+ row = data.fetchone()
+ if row is not None:
+ return row[0]
+ else:
+ return 0
+
try:
self._execute("INSERT OR REPLACE INTO %s VALUES (?, ?, ?, (select ifnull(max(value)+1,0) from %s where version=? AND pkgarch=?));"
% (self.table,self.table),
@@ -128,6 +157,9 @@ class PRTable(object):
return self._getValueHist(version, pkgarch, checksum)
def _importHist(self, version, pkgarch, checksum, value):
+ if self.read_only:
+ return None
+
val = None
data = self._execute("SELECT value FROM %s WHERE version=? AND pkgarch=? AND checksum=?;" % self.table,
(version, pkgarch, checksum))
@@ -152,6 +184,9 @@ class PRTable(object):
return val
def _importNohist(self, version, pkgarch, checksum, value):
+ if self.read_only:
+ return None
+
try:
#try to insert
self._execute("INSERT INTO %s VALUES (?, ?, ?, ?);" % (self.table),
@@ -245,19 +280,23 @@ class PRTable(object):
class PRData(object):
"""Object representing the PR database"""
- def __init__(self, filename, nohist=True):
+ def __init__(self, filename, nohist=True, read_only=False):
self.filename=os.path.abspath(filename)
self.nohist=nohist
+ self.read_only = read_only
#build directory hierarchy
try:
os.makedirs(os.path.dirname(self.filename))
except OSError as e:
if e.errno != errno.EEXIST:
raise e
- self.connection=sqlite3.connect(self.filename, isolation_level="EXCLUSIVE", check_same_thread = False)
+ uri = "file:%s%s" % (self.filename, "?mode=ro" if self.read_only else "")
+ logger.debug("Opening PRServ database '%s'" % (uri))
+ self.connection=sqlite3.connect(uri, uri=True, isolation_level="EXCLUSIVE", check_same_thread = False)
self.connection.row_factory=sqlite3.Row
- self.connection.execute("pragma synchronous = off;")
- self.connection.execute("PRAGMA journal_mode = MEMORY;")
+ if not self.read_only:
+ self.connection.execute("pragma synchronous = off;")
+ self.connection.execute("PRAGMA journal_mode = MEMORY;")
self._tables={}
def disconnect(self):
@@ -270,7 +309,7 @@ class PRData(object):
if tblname in self._tables:
return self._tables[tblname]
else:
- tableobj = self._tables[tblname] = PRTable(self.connection, tblname, self.nohist)
+ tableobj = self._tables[tblname] = PRTable(self.connection, tblname, self.nohist, self.read_only)
return tableobj
def __delitem__(self, tblname):
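
The read-only path leans on sqlite's URI mode. A standalone sketch of what
the connect above does (path illustrative):

    import sqlite3

    uri = 'file:/var/cache/prserv.sqlite3?mode=ro'   # illustrative path
    conn = sqlite3.connect(uri, uri=True, isolation_level="EXCLUSIVE",
                           check_same_thread=False)
    # With mode=ro every write raises sqlite3.OperationalError, which is
    # why PRTable short-circuits imports and skips commit() when
    # read_only is set.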
diff --git a/bitbake/lib/prserv/serv.py b/bitbake/lib/prserv/serv.py
index 25dcf8a..c686b20 100644
--- a/bitbake/lib/prserv/serv.py
+++ b/bitbake/lib/prserv/serv.py
@@ -1,354 +1,212 @@
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
import os,sys,logging
import signal, time
-from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
-import threading
-import queue
import socket
import io
import sqlite3
-import bb.server.xmlrpcclient
import prserv
import prserv.db
import errno
-import select
+import bb.asyncrpc
logger = logging.getLogger("BitBake.PRserv")
-if sys.hexversion < 0x020600F0:
- print("Sorry, python 2.6 or later is required.")
- sys.exit(1)
-
-class Handler(SimpleXMLRPCRequestHandler):
- def _dispatch(self,method,params):
- try:
- value=self.server.funcs[method](*params)
- except:
- import traceback
- traceback.print_exc()
- raise
- return value
-
PIDPREFIX = "/tmp/PRServer_%s_%s.pid"
singleton = None
-
-class PRServer(SimpleXMLRPCServer):
- def __init__(self, dbfile, logfile, interface, daemon=True):
- ''' constructor '''
+class PRServerClient(bb.asyncrpc.AsyncServerConnection):
+ def __init__(self, reader, writer, table, read_only):
+ super().__init__(reader, writer, 'PRSERVICE', logger)
+ self.handlers.update({
+ 'get-pr': self.handle_get_pr,
+ 'import-one': self.handle_import_one,
+ 'export': self.handle_export,
+ 'is-readonly': self.handle_is_readonly,
+ })
+ self.table = table
+ self.read_only = read_only
+
+ def validate_proto_version(self):
+ return (self.proto_version == (1, 0))
+
+ async def dispatch_message(self, msg):
try:
- SimpleXMLRPCServer.__init__(self, interface,
- logRequests=False, allow_none=True)
- except socket.error:
- ip=socket.gethostbyname(interface[0])
- port=interface[1]
- msg="PR Server unable to bind to %s:%s\n" % (ip, port)
- sys.stderr.write(msg)
- raise PRServiceConfigError
-
- self.dbfile=dbfile
- self.daemon=daemon
- self.logfile=logfile
- self.working_thread=None
- self.host, self.port = self.socket.getsockname()
- self.pidfile=PIDPREFIX % (self.host, self.port)
-
- self.register_function(self.getPR, "getPR")
- self.register_function(self.quit, "quit")
- self.register_function(self.ping, "ping")
- self.register_function(self.export, "export")
- self.register_function(self.dump_db, "dump_db")
- self.register_function(self.importone, "importone")
- self.register_introspection_functions()
-
- self.quitpipein, self.quitpipeout = os.pipe()
-
- self.requestqueue = queue.Queue()
- self.handlerthread = threading.Thread(target = self.process_request_thread)
- self.handlerthread.daemon = False
-
- def process_request_thread(self):
- """Same as in BaseServer but as a thread.
-
- In addition, exception handling is done here.
-
- """
- iter_count = 1
- # 60 iterations between syncs or sync if dirty every ~30 seconds
- iterations_between_sync = 60
-
- bb.utils.set_process_name("PRServ Handler")
-
- while not self.quitflag:
- try:
- (request, client_address) = self.requestqueue.get(True, 30)
- except queue.Empty:
- self.table.sync_if_dirty()
- continue
- if request is None:
- continue
- try:
- self.finish_request(request, client_address)
- self.shutdown_request(request)
- iter_count = (iter_count + 1) % iterations_between_sync
- if iter_count == 0:
- self.table.sync_if_dirty()
- except:
- self.handle_error(request, client_address)
- self.shutdown_request(request)
- self.table.sync()
- self.table.sync_if_dirty()
-
- def sigint_handler(self, signum, stack):
- if self.table:
+ await super().dispatch_message(msg)
+ except:
self.table.sync()
+ raise
- def sigterm_handler(self, signum, stack):
- if self.table:
- self.table.sync()
- self.quit()
- self.requestqueue.put((None, None))
+ self.table.sync_if_dirty()
- def process_request(self, request, client_address):
- self.requestqueue.put((request, client_address))
+ async def handle_get_pr(self, request):
+ version = request['version']
+ pkgarch = request['pkgarch']
+ checksum = request['checksum']
- def export(self, version=None, pkgarch=None, checksum=None, colinfo=True):
+ response = None
try:
- return self.table.export(version, pkgarch, checksum, colinfo)
+ value = self.table.getValue(version, pkgarch, checksum)
+ response = {'value': value}
+ except prserv.NotFoundError:
+ logger.error("can not find value for (%s, %s)",version, checksum)
except sqlite3.Error as exc:
logger.error(str(exc))
- return None
-
- def dump_db(self):
- """
- Returns a script (string) that reconstructs the state of the
- entire database at the time this function is called. The script
- language is defined by the backing database engine, which is a
- function of server configuration.
- Returns None if the database engine does not support dumping to
- script or if some other error is encountered in processing.
- """
- buff = io.StringIO()
- try:
- self.table.sync()
- self.table.dump_db(buff)
- return buff.getvalue()
- except Exception as exc:
- logger.error(str(exc))
- return None
- finally:
- buff.close()
- def importone(self, version, pkgarch, checksum, value):
- return self.table.importone(version, pkgarch, checksum, value)
+ self.write_message(response)
+
+ async def handle_import_one(self, request):
+ response = None
+ if not self.read_only:
+ version = request['version']
+ pkgarch = request['pkgarch']
+ checksum = request['checksum']
+ value = request['value']
- def ping(self):
- return not self.quitflag
+ value = self.table.importone(version, pkgarch, checksum, value)
+ if value is not None:
+ response = {'value': value}
- def getinfo(self):
- return (self.host, self.port)
+ self.write_message(response)
+
+ async def handle_export(self, request):
+ version = request['version']
+ pkgarch = request['pkgarch']
+ checksum = request['checksum']
+ colinfo = request['colinfo']
- def getPR(self, version, pkgarch, checksum):
try:
- return self.table.getValue(version, pkgarch, checksum)
- except prserv.NotFoundError:
- logger.error("can not find value for (%s, %s)",version, checksum)
- return None
+ (metainfo, datainfo) = self.table.export(version, pkgarch, checksum, colinfo)
except sqlite3.Error as exc:
logger.error(str(exc))
- return None
+ metainfo = datainfo = None
+
+ response = {'metainfo': metainfo, 'datainfo': datainfo}
+ self.write_message(response)
- def quit(self):
- self.quitflag=True
- os.write(self.quitpipeout, b"q")
- os.close(self.quitpipeout)
- return
+ async def handle_is_readonly(self, request):
+ response = {'readonly': self.read_only}
+ self.write_message(response)
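
For orientation, the request/response shapes the four handlers above
exchange (framing and transport live in bb.asyncrpc; shown as Python
literals):

    {'get-pr': {'version': v, 'pkgarch': a, 'checksum': c}} -> {'value': 42}
    {'import-one': {..., 'value': n}}  -> {'value': n} or None (read-only)
    {'export': {..., 'colinfo': c}}    -> {'metainfo': m, 'datainfo': d}
    {'is-readonly': {}}                -> {'readonly': False}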
- def work_forever(self,):
- self.quitflag = False
- # This timeout applies to the poll in TCPServer, we need the select
- # below to wake on our quit pipe closing. We only ever call into handle_request
- # if there is data there.
- self.timeout = 0.01
+class PRServer(bb.asyncrpc.AsyncServer):
+ def __init__(self, dbfile, read_only=False):
+ super().__init__(logger)
+ self.dbfile = dbfile
+ self.table = None
+ self.read_only = read_only
- bb.utils.set_process_name("PRServ")
+ def accept_client(self, reader, writer):
+ return PRServerClient(reader, writer, self.table, self.read_only)
- # DB connection must be created after all forks
- self.db = prserv.db.PRData(self.dbfile)
+ def _serve_forever(self):
+ self.db = prserv.db.PRData(self.dbfile, read_only=self.read_only)
self.table = self.db["PRMAIN"]
- logger.info("Started PRServer with DBfile: %s, IP: %s, PORT: %s, PID: %s" %
- (self.dbfile, self.host, self.port, str(os.getpid())))
-
- self.handlerthread.start()
- while not self.quitflag:
- ready = select.select([self.fileno(), self.quitpipein], [], [], 30)
- if self.quitflag:
- break
- if self.fileno() in ready[0]:
- self.handle_request()
- self.handlerthread.join()
- self.db.disconnect()
- logger.info("PRServer: stopping...")
- self.server_close()
- os.close(self.quitpipein)
- return
+ logger.info("Started PRServer with DBfile: %s, Address: %s, PID: %s" %
+ (self.dbfile, self.address, str(os.getpid())))
- def start(self):
- if self.daemon:
- pid = self.daemonize()
- else:
- pid = self.fork()
- self.pid = pid
-
- # Ensure both the parent sees this and the child from the work_forever log entry above
- logger.info("Started PRServer with DBfile: %s, IP: %s, PORT: %s, PID: %s" %
- (self.dbfile, self.host, self.port, str(pid)))
-
- def delpid(self):
- os.remove(self.pidfile)
-
- def daemonize(self):
- """
- See Advanced Programming in the UNIX, Sec 13.3
- """
- try:
- pid = os.fork()
- if pid > 0:
- os.waitpid(pid, 0)
- #parent return instead of exit to give control
- return pid
- except OSError as e:
- raise Exception("%s [%d]" % (e.strerror, e.errno))
-
- os.setsid()
- """
- fork again to make sure the daemon is not session leader,
- which prevents it from acquiring controlling terminal
- """
- try:
- pid = os.fork()
- if pid > 0: #parent
- os._exit(0)
- except OSError as e:
- raise Exception("%s [%d]" % (e.strerror, e.errno))
+ super()._serve_forever()
- self.cleanup_handles()
- os._exit(0)
+ self.table.sync_if_dirty()
+ self.db.disconnect()
- def fork(self):
- try:
- pid = os.fork()
- if pid > 0:
- self.socket.close() # avoid ResourceWarning in parent
- return pid
- except OSError as e:
- raise Exception("%s [%d]" % (e.strerror, e.errno))
-
- bb.utils.signal_on_parent_exit("SIGTERM")
- self.cleanup_handles()
- os._exit(0)
-
- def cleanup_handles(self):
- signal.signal(signal.SIGINT, self.sigint_handler)
- signal.signal(signal.SIGTERM, self.sigterm_handler)
- os.chdir("/")
-
- sys.stdout.flush()
- sys.stderr.flush()
-
- # We could be called from a python thread with io.StringIO as
- # stdout/stderr or it could be 'real' unix fd forking where we need
- # to physically close the fds to prevent the program launching us from
- # potentially hanging on a pipe. Handle both cases.
- si = open('/dev/null', 'r')
- try:
- os.dup2(si.fileno(),sys.stdin.fileno())
- except (AttributeError, io.UnsupportedOperation):
- sys.stdin = si
- so = open(self.logfile, 'a+')
- try:
- os.dup2(so.fileno(),sys.stdout.fileno())
- except (AttributeError, io.UnsupportedOperation):
- sys.stdout = so
- try:
- os.dup2(so.fileno(),sys.stderr.fileno())
- except (AttributeError, io.UnsupportedOperation):
- sys.stderr = so
-
- # Clear out all log handlers prior to the fork() to avoid calling
- # event handlers not part of the PRserver
- for logger_iter in logging.Logger.manager.loggerDict.keys():
- logging.getLogger(logger_iter).handlers = []
-
- # Ensure logging makes it to the logfile
- streamhandler = logging.StreamHandler()
- streamhandler.setLevel(logging.DEBUG)
- formatter = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
- streamhandler.setFormatter(formatter)
- logger.addHandler(streamhandler)
-
- # write pidfile
- pid = str(os.getpid())
- with open(self.pidfile, 'w') as pf:
- pf.write("%s\n" % pid)
-
- self.work_forever()
- self.delpid()
+ def signal_handler(self):
+ super().signal_handler()
+ if self.table:
+ self.table.sync()
class PRServSingleton(object):
- def __init__(self, dbfile, logfile, interface):
+ def __init__(self, dbfile, logfile, host, port):
self.dbfile = dbfile
self.logfile = logfile
- self.interface = interface
- self.host = None
- self.port = None
-
- def start(self):
- self.prserv = PRServer(self.dbfile, self.logfile, self.interface, daemon=False)
- self.prserv.start()
- self.host, self.port = self.prserv.getinfo()
-
- def getinfo(self):
- return (self.host, self.port)
-
-class PRServerConnection(object):
- def __init__(self, host, port):
- if is_local_special(host, port):
- host, port = singleton.getinfo()
self.host = host
self.port = port
- self.connection, self.transport = bb.server.xmlrpcclient._create_server(self.host, self.port)
-
- def terminate(self):
- try:
- logger.info("Terminating PRServer...")
- self.connection.quit()
- except Exception as exc:
- sys.stderr.write("%s\n" % str(exc))
- def getPR(self, version, pkgarch, checksum):
- return self.connection.getPR(version, pkgarch, checksum)
+ def start(self):
+ self.prserv = PRServer(self.dbfile)
+ self.prserv.start_tcp_server(socket.gethostbyname(self.host), self.port)
+ self.process = self.prserv.serve_as_process()
- def ping(self):
- return self.connection.ping()
+ if not self.prserv.address:
+ raise PRServiceConfigError
+ if not self.port:
+ self.port = int(self.prserv.address.rsplit(':', 1)[1])
- def export(self,version=None, pkgarch=None, checksum=None, colinfo=True):
- return self.connection.export(version, pkgarch, checksum, colinfo)
+def run_as_daemon(func, pidfile, logfile):
+ """
+ See Advanced Programming in the UNIX Environment, Sec 13.3
+ """
+ try:
+ pid = os.fork()
+ if pid > 0:
+ os.waitpid(pid, 0)
+ # parent returns instead of exiting to hand control back
+ return pid
+ except OSError as e:
+ raise Exception("%s [%d]" % (e.strerror, e.errno))
- def dump_db(self):
- return self.connection.dump_db()
+ os.setsid()
+ """
+ fork again to make sure the daemon is not session leader,
+ which prevents it from acquiring controlling terminal
+ """
+ try:
+ pid = os.fork()
+ if pid > 0: #parent
+ os._exit(0)
+ except OSError as e:
+ raise Exception("%s [%d]" % (e.strerror, e.errno))
- def importone(self, version, pkgarch, checksum, value):
- return self.connection.importone(version, pkgarch, checksum, value)
+ os.chdir("/")
- def getinfo(self):
- return self.host, self.port
+ sys.stdout.flush()
+ sys.stderr.flush()
-def start_daemon(dbfile, host, port, logfile):
+ # We could be called from a python thread with io.StringIO as
+ # stdout/stderr or it could be 'real' unix fd forking where we need
+ # to physically close the fds to prevent the program launching us from
+ # potentially hanging on a pipe. Handle both cases.
+ si = open('/dev/null', 'r')
+ try:
+ os.dup2(si.fileno(),sys.stdin.fileno())
+ except (AttributeError, io.UnsupportedOperation):
+ sys.stdin = si
+ so = open(logfile, 'a+')
+ try:
+ os.dup2(so.fileno(),sys.stdout.fileno())
+ except (AttributeError, io.UnsupportedOperation):
+ sys.stdout = so
+ try:
+ os.dup2(so.fileno(),sys.stderr.fileno())
+ except (AttributeError, io.UnsupportedOperation):
+ sys.stderr = so
+
+ # Clear out all log handlers prior to the fork() to avoid calling
+ # event handlers not part of the PRserver
+ for logger_iter in logging.Logger.manager.loggerDict.keys():
+ logging.getLogger(logger_iter).handlers = []
+
+ # Ensure logging makes it to the logfile
+ streamhandler = logging.StreamHandler()
+ streamhandler.setLevel(logging.DEBUG)
+ formatter = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
+ streamhandler.setFormatter(formatter)
+ logger.addHandler(streamhandler)
+
+ # write pidfile
+ pid = str(os.getpid())
+ with open(pidfile, 'w') as pf:
+ pf.write("%s\n" % pid)
+
+ func()
+ os.remove(pidfile)
+ os._exit(0)
+
+def start_daemon(dbfile, host, port, logfile, read_only=False):
ip = socket.gethostbyname(host)
pidfile = PIDPREFIX % (ip, port)
try:
@@ -362,15 +220,13 @@ def start_daemon(dbfile, host, port, logfile):
% pidfile)
return 1
- server = PRServer(os.path.abspath(dbfile), os.path.abspath(logfile), (ip,port))
- server.start()
+ dbfile = os.path.abspath(dbfile)
+ def daemon_main():
+ server = PRServer(dbfile, read_only=read_only)
+ server.start_tcp_server(ip, port)
+ server.serve_forever()
- # Sometimes, the port (i.e. localhost:0) indicated by the user does not match with
- # the one the server actually is listening, so at least warn the user about it
- _,rport = server.getinfo()
- if port != rport:
- sys.stdout.write("Server is listening at port %s instead of %s\n"
- % (rport,port))
+ run_as_daemon(daemon_main, pidfile, os.path.abspath(logfile))
return 0
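
A hedged usage sketch of the reworked entry point (arguments illustrative):

    # Serve PRs from an existing database without ever writing to it:
    start_daemon('cache/prserv.sqlite3', 'localhost', 8585, 'prserv.log',
                 read_only=True)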
def stop_daemon(host, port):
@@ -400,25 +256,13 @@ def stop_daemon(host, port):
return 1
try:
- PRServerConnection(ip, port).terminate()
- except:
- logger.critical("Stop PRService %s:%d failed" % (host,port))
+ if is_running(pid):
+ print("Sending SIGTERM to pr-server.")
+ os.kill(pid, signal.SIGTERM)
+ time.sleep(0.1)
- try:
- if pid:
- wait_timeout = 0
- print("Waiting for pr-server to exit.")
- while is_running(pid) and wait_timeout < 50:
- time.sleep(0.1)
- wait_timeout += 1
-
- if is_running(pid):
- print("Sending SIGTERM to pr-server.")
- os.kill(pid,signal.SIGTERM)
- time.sleep(0.1)
-
- if os.path.exists(pidfile):
- os.remove(pidfile)
+ if os.path.exists(pidfile):
+ os.remove(pidfile)
except OSError as e:
err = str(e)
@@ -436,7 +280,7 @@ def is_running(pid):
return True
def is_local_special(host, port):
- if host.strip().upper() == 'localhost'.upper() and (not port):
+ if (host == 'localhost' or host == '127.0.0.1') and not port:
return True
else:
return False
@@ -460,7 +304,9 @@ def auto_start(d):
'Usage: PRSERV_HOST = "<hostname>:<port>"']))
raise PRServiceConfigError
- if is_local_special(host_params[0], int(host_params[1])):
+ host = host_params[0].strip().lower()
+ port = int(host_params[1])
+ if is_local_special(host, port):
import bb.utils
cachedir = (d.getVar("PERSISTENT_DIR") or d.getVar("CACHE"))
if not cachedir:
@@ -474,39 +320,43 @@ def auto_start(d):
auto_shutdown()
if not singleton:
bb.utils.mkdirhier(cachedir)
- singleton = PRServSingleton(os.path.abspath(dbfile), os.path.abspath(logfile), ("localhost",0))
+ singleton = PRServSingleton(os.path.abspath(dbfile), os.path.abspath(logfile), host, port)
singleton.start()
if singleton:
- host, port = singleton.getinfo()
- else:
- host = host_params[0]
- port = int(host_params[1])
+ host = singleton.host
+ port = singleton.port
try:
- connection = PRServerConnection(host,port)
- connection.ping()
- realhost, realport = connection.getinfo()
- return str(realhost) + ":" + str(realport)
-
+ ping(host, port)
+ return str(host) + ":" + str(port)
+
except Exception:
logger.critical("PRservice %s:%d not available" % (host, port))
raise PRServiceConfigError
def auto_shutdown():
global singleton
- if singleton:
- host, port = singleton.getinfo()
- try:
- PRServerConnection(host, port).terminate()
- except:
- logger.critical("Stop PRService %s:%d failed" % (host,port))
-
- try:
- os.waitpid(singleton.prserv.pid, 0)
- except ChildProcessError:
- pass
+ if singleton and singleton.process:
+ singleton.process.terminate()
+ singleton.process.join()
singleton = None
def ping(host, port):
- conn=PRServerConnection(host, port)
+ from . import client
+
+ conn = client.PRClient()
+ conn.connect_tcp(host, port)
return conn.ping()
+
+def connect(host, port):
+ from . import client
+
+ global singleton
+
+ if host.strip().lower() == 'localhost' and not port:
+ host = 'localhost'
+ port = singleton.port
+
+ conn = client.PRClient()
+ conn.connect_tcp(host, port)
+ return conn
diff --git a/bitbake/lib/pyinotify.py b/bitbake/lib/pyinotify.py
index 6ae40a2..3c5dab0 100644
--- a/bitbake/lib/pyinotify.py
+++ b/bitbake/lib/pyinotify.py
@@ -52,7 +52,6 @@ from collections import deque
from datetime import datetime, timedelta
import time
import re
-import asyncore
import glob
import locale
import subprocess
@@ -596,14 +595,24 @@ class _ProcessEvent:
@type event: Event object
@return: By convention when used from the ProcessEvent class:
- Returning False or None (default value) means keep on
- executing next chained functors (see chain.py example).
+ executing next chained functors (see chain.py example).
- Returning True instead means do not execute next
processing functions.
@rtype: bool
@raise ProcessEventError: Event object undispatchable,
unknown event.
"""
- stripped_mask = event.mask - (event.mask & IN_ISDIR)
+ stripped_mask = event.mask & ~IN_ISDIR
+ # Bitbake hack - we see event masks of 0x6, i.e., IN_MODIFY | IN_ATTRIB.
+ # The kernel inotify code can set more than one of the bits in the mask;
+ # fsnotify_change() in linux/fsnotify.h is quite clear that IN_ATTRIB,
+ # IN_MODIFY and IN_ACCESS can arrive together.
+ # This breaks the code below, which assumes only one mask bit is ever
+ # set in an event. We don't care about attrib or access in bitbake, so
+ # drop those.
+ if stripped_mask & IN_MODIFY:
+ stripped_mask &= ~(IN_ATTRIB | IN_ACCESS)
+
maskname = EventsCodes.ALL_VALUES.get(stripped_mask)
if maskname is None:
raise ProcessEventError("Unknown mask 0x%08x" % stripped_mask)
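
The mask arithmetic, worked through with the standard inotify flag values
(IN_ACCESS=0x1, IN_MODIFY=0x2, IN_ATTRIB=0x4, IN_ISDIR=0x40000000 from the
inotify ABI):

    IN_ACCESS, IN_MODIFY, IN_ATTRIB = 0x1, 0x2, 0x4
    IN_ISDIR = 0x40000000
    mask = IN_MODIFY | IN_ATTRIB        # the 0x6 case described above
    stripped = mask & ~IN_ISDIR         # no-op here, IN_ISDIR not set
    if stripped & IN_MODIFY:
        stripped &= ~(IN_ATTRIB | IN_ACCESS)
    assert stripped == IN_MODIFY        # single bit again, lookup succeeds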
@@ -1475,35 +1484,6 @@ class ThreadedNotifier(threading.Thread, Notifier):
self.loop()
-class AsyncNotifier(asyncore.file_dispatcher, Notifier):
- """
- This notifier inherits from asyncore.file_dispatcher in order to be able to
- use pyinotify along with the asyncore framework.
-
- """
- def __init__(self, watch_manager, default_proc_fun=None, read_freq=0,
- threshold=0, timeout=None, channel_map=None):
- """
- Initializes the async notifier. The only additional parameter is
- 'channel_map' which is the optional asyncore private map. See
- Notifier class for the meaning of the others parameters.
-
- """
- Notifier.__init__(self, watch_manager, default_proc_fun, read_freq,
- threshold, timeout)
- asyncore.file_dispatcher.__init__(self, self._fd, channel_map)
-
- def handle_read(self):
- """
- When asyncore tells us we can read from the fd, we proceed processing
- events. This method can be overridden for handling a notification
- differently.
-
- """
- self.read_events()
- self.process_events()
-
-
class TornadoAsyncNotifier(Notifier):
"""
Tornado ioloop adapter.
diff --git a/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py b/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py
index 75674cc..577e765 100644
--- a/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py
+++ b/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py
@@ -200,7 +200,7 @@ class LocalhostBEController(BuildEnvironmentController):
localdirpath = os.path.join(localdirname, dirpath)
logger.debug("localhostbecontroller: localdirpath expects '%s'" % localdirpath)
if not os.path.exists(localdirpath):
- raise BuildSetupException("Cannot find layer git path '%s' in checked out repository '%s:%s'. Aborting." % (localdirpath, giturl, commit))
+ raise BuildSetupException("Cannot find layer git path '%s' in checked out repository '%s:%s'. Exiting." % (localdirpath, giturl, commit))
if name != "bitbake":
layerlist.append("%03d:%s" % (index,localdirpath.rstrip("/")))
@@ -467,7 +467,7 @@ class LocalhostBEController(BuildEnvironmentController):
logger.debug("localhostbecontroller: waiting for bblock content to appear")
time.sleep(1)
else:
- raise BuildSetupException("Cannot find bitbake server lock file '%s'. Aborting." % bblock)
+ raise BuildSetupException("Cannot find bitbake server lock file '%s'. Exiting." % bblock)
with open(bblock) as fplock:
for line in fplock:
diff --git a/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py b/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py
index 19f659e..834e32b 100644
--- a/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py
+++ b/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py
@@ -180,6 +180,77 @@ class Command(BaseCommand):
except Exception as e:
logger.warning("runbuilds: schedule exception %s" % str(e))
+ # Test to see if a build died prematurely due to a bitbake crash
+ def check_dead_builds(self):
+ do_cleanup = False
+ try:
+ for br in BuildRequest.objects.filter(state=BuildRequest.REQ_INPROGRESS):
+ # Get the build directory
+ if br.project.builddir:
+ builddir = br.project.builddir
+ else:
+ builddir = '%s-toaster-%d' % (br.environment.builddir,br.project.id)
+ # Check log to see if there is a recent traceback
+ toaster_ui_log = os.path.join(builddir, 'toaster_ui.log')
+ test_file = os.path.join(builddir, '._toaster_check.txt')
+ os.system("tail -n 50 %s > %s" % (os.path.join(builddir, 'toaster_ui.log'),test_file))
+ traceback_text = ''
+ is_traceback = False
+ with open(test_file,'r') as test_file_fd:
+ test_file_tail = test_file_fd.readlines()
+ for line in test_file_tail:
+ if line.startswith('Traceback (most recent call last):'):
+ traceback_text = line
+ is_traceback = True
+ elif line.startswith('NOTE: ToasterUI waiting for events'):
+ # Ignore any traceback from before the new build started
+ traceback_text = ''
+ is_traceback = False
+ elif line.startswith('Note: Toaster traceback auto-stop'):
+ # Ignore any traceback already caught by a previous check
+ traceback_text = ''
+ is_traceback = False
+ elif is_traceback:
+ traceback_text += line
+ # Test the results
+ is_stop = False
+ if is_traceback:
+ # Found a traceback
+ errtype = 'Bitbake crash'
+ errmsg = 'Bitbake crash\n' + traceback_text
+ state = BuildRequest.REQ_FAILED
+ # Clean up bitbake files
+ bitbake_lock = os.path.join(builddir, 'bitbake.lock')
+ if os.path.isfile(bitbake_lock):
+ os.remove(bitbake_lock)
+ bitbake_sock = os.path.join(builddir, 'bitbake.sock')
+ if os.path.isfile(bitbake_sock):
+ os.remove(bitbake_sock)
+ if os.path.isfile(test_file):
+ os.remove(test_file)
+ # Add note to ignore this traceback on next check
+ os.system('echo "Note: Toaster traceback auto-stop" >> %s' % toaster_ui_log)
+ is_stop = True
+ # Add more tests here
+ #elif ...
+ # Stop the build request?
+ if is_stop:
+ brerror = BRError(
+ req = br,
+ errtype = errtype,
+ errmsg = errmsg,
+ traceback = traceback_text,
+ )
+ brerror.save()
+ br.state = state
+ br.save()
+ do_cleanup = True
+ # Do cleanup
+ if do_cleanup:
+ self.cleanup()
+ except Exception as e:
+ logger.error("runbuilds: Error in check_dead_builds %s" % e)
+
def handle(self, **options):
pidfile_path = os.path.join(os.environ.get("BUILDDIR", "."),
".runbuilds.pid")
@@ -187,10 +258,18 @@ class Command(BaseCommand):
with open(pidfile_path, 'w') as pidfile:
pidfile.write("%s" % os.getpid())
+ # Clean up any stale/failed builds from previous Toaster run
self.runbuild()
signal.signal(signal.SIGUSR1, lambda sig, frame: None)
while True:
- signal.pause()
- self.runbuild()
+ sigset = signal.sigtimedwait([signal.SIGUSR1], 5)
+ if sigset:
+ # A pending SIGUSR1 was consumed: schedule builds
+ self.runbuild()
+ else:
+ # Check for build exceptions
+ self.check_dead_builds()
+
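
signal.sigtimedwait() returns None when the timeout expires without a
pending signal, which is what turns the old fully-blocking signal.pause()
into a 5-second poll. A standalone sketch (blocking the signal so
sigtimedwait can dequeue it):

    import signal

    signal.pthread_sigmask(signal.SIG_BLOCK, [signal.SIGUSR1])
    info = signal.sigtimedwait([signal.SIGUSR1], 5)
    if info is not None:
        print('SIGUSR1 was pending: schedule builds')
    else:
        print('timed out: look for crashed builds instead')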
diff --git a/bitbake/lib/toaster/bldcontrol/migrations/0008_models_bigautofield.py b/bitbake/lib/toaster/bldcontrol/migrations/0008_models_bigautofield.py
new file mode 100644
index 0000000..45b477d
--- /dev/null
+++ b/bitbake/lib/toaster/bldcontrol/migrations/0008_models_bigautofield.py
@@ -0,0 +1,48 @@
+# Generated by Django 3.2.12 on 2022-03-06 03:28
+
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('bldcontrol', '0007_brlayers_optional_gitinfo'),
+ ]
+
+ operations = [
+ migrations.AlterField(
+ model_name='brbitbake',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='brerror',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='brlayer',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='brtarget',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='brvariable',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='buildenvironment',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='buildrequest',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ ]
diff --git a/bitbake/lib/toaster/manage.py b/bitbake/lib/toaster/manage.py
index ae32619..f8de49c 100755
--- a/bitbake/lib/toaster/manage.py
+++ b/bitbake/lib/toaster/manage.py
@@ -1,5 +1,7 @@
#!/usr/bin/env python3
#
+# Copyright BitBake Contributors
+#
# SPDX-License-Identifier: GPL-2.0-only
#
diff --git a/bitbake/lib/toaster/orm/fixtures/gen_fixtures.py b/bitbake/lib/toaster/orm/fixtures/gen_fixtures.py
new file mode 100755
index 0000000..0d5f453
--- /dev/null
+++ b/bitbake/lib/toaster/orm/fixtures/gen_fixtures.py
@@ -0,0 +1,445 @@
+#!/usr/bin/env python3
+# ex:ts=4:sw=4:sts=4:et
+# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
+#
+# Generate Toaster Fixtures for 'poky.xml' and 'oe-core.xml'
+#
+# Copyright (C) 2022 Wind River Systems
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Edit the 'current_releases' table for each new release cycle
+#
+# Usage: ./gen_fixtures.py --all
+#
+
+import os
+import sys
+import argparse
+
+verbose = False
+
+####################################
+# Releases
+#
+# https://wiki.yoctoproject.org/wiki/Releases
+#
+# NOTE: for the current releases table, it helps to keep continuing releases
+# in the same table positions since this minimizes the patch diff for review.
+# The order of the table does not matter since Toaster presents them sorted.
+#
+# Traditionally, the two most current releases are included in addition to the
+# 'master' branch and the local installation's 'HEAD'.
+# It is also policy to include all active LTS releases.
+#
+
+# [Codename, Yocto Project Version, Release Date, Current Version, Support Level, Poky Version, BitBake branch]
+current_releases = [
+ # Release slot #1
+ ['Kirkstone','3.5','April 2022','','Future - Long Term Support (until Apr. 2024)','27.0','1.54'],
+# ['Dunfell','3.1','April 2021','3.1.5 (March 2022)','Stable - Support for 13 months (until Apr. 2022)','23.0','1.46'],
+ # Release slot #2 'local'
+ ['HEAD','HEAD','','Local Yocto Project','HEAD','','HEAD'],
+ # Release slot #3 'master'
+ ['Master','master','','Yocto Project master','master','','master'],
+ # Release slot #4
+ ['Honister','3.4','October 2021','3.4.2 (February 2022)','Support for 7 months (until May 2022)','26.0','1.52'],
+# ['Gatesgarth','3.2','Oct 2020','3.2.4 (May 2021)','EOL','24.0','1.48'],
+ # Optional Release slot #4
+ ['Hardknott','3.3','April 2021','3.3.5 (March 2022)','Stable - Support for 13 months (until Apr. 2022)','25.0','1.50'],
+]
+
+default_poky_layers = [
+ 'openembedded-core',
+ 'meta-poky',
+ 'meta-yocto-bsp',
+]
+
+default_oe_core_layers = [
+ 'openembedded-core',
+]
+
+####################################
+# Templates
+
+prolog_template = '''\
+<?xml version="1.0" encoding="utf-8"?>
+<django-objects version="1.0">
+ <!-- Set the project default value for DISTRO -->
+ <object model="orm.toastersetting" pk="1">
+ <field type="CharField" name="name">DEFCONF_DISTRO</field>
+ <field type="CharField" name="value">{{distro}}</field>
+ </object>
+'''
+
+#<!-- Bitbake versions which correspond to the metadata release -->')
+bitbakeversion_poky_template = '''\
+ <object model="orm.bitbakeversion" pk="{{bitbake_id}}">
+ <field type="CharField" name="name">{{name}}</field>
+ <field type="CharField" name="giturl">git://git.yoctoproject.org/poky</field>
+ <field type="CharField" name="branch">{{branch}}</field>
+ <field type="CharField" name="dirpath">bitbake</field>
+ </object>
+'''
+bitbakeversion_oecore_template = '''\
+ <object model="orm.bitbakeversion" pk="{{bitbake_id}}">
+ <field type="CharField" name="name">{{name}}</field>
+ <field type="CharField" name="giturl">git://git.openembedded.org/bitbake</field>
+ <field type="CharField" name="branch">{{bitbakeversion}}</field>
+ </object>
+'''
+
+# <!-- Releases available -->
+releases_available_template = '''\
+ <object model="orm.release" pk="{{ra_count}}">
+ <field type="CharField" name="name">{{name}}</field>
+ <field type="CharField" name="description">{{description}}</field>
+ <field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">{{ra_count}}</field>
+ <field type="CharField" name="branch_name">{{release}}</field>
+ <field type="TextField" name="helptext">Toaster will run your builds {{help_source}}.</field>
+ </object>
+'''
+
+# <!-- Default project layers for each release -->
+default_layers_template = '''\
+ <object model="orm.releasedefaultlayer" pk="{{rdl_count}}">
+ <field rel="ManyToOneRel" to="orm.release" name="release">{{release_id}}</field>
+ <field type="CharField" name="layer_name">{{layer}}</field>
+ </object>
+'''
+
+default_layers_preface = '''\
+ <!-- Default layers provided by poky
+ openembedded-core
+ meta-poky
+ meta-yocto-bsp
+ -->
+'''
+
+layer_poky_template = '''\
+ <object model="orm.layer" pk="{{layer_id}}">
+ <field type="CharField" name="name">{{layer}}</field>
+ <field type="CharField" name="layer_index_url"></field>
+ <field type="CharField" name="vcs_url">{{vcs_url}}</field>
+ <field type="CharField" name="vcs_web_url">{{vcs_web_url}}</field>
+ <field type="CharField" name="vcs_web_tree_base_url">{{vcs_web_tree_base_url}}</field>
+ <field type="CharField" name="vcs_web_file_base_url">{{vcs_web_file_base_url}}</field>
+ </object>
+'''
+
+layer_oe_core_template = '''\
+ <object model="orm.layer" pk="{{layer_id}}">
+ <field type="CharField" name="name">{{layer}}</field>
+ <field type="CharField" name="vcs_url">{{vcs_url}}</field>
+ <field type="CharField" name="vcs_web_url">{{vcs_web_url}}</field>
+ <field type="CharField" name="vcs_web_tree_base_url">{{vcs_web_tree_base_url}}</field>
+ <field type="CharField" name="vcs_web_file_base_url">{{vcs_web_file_base_url}}</field>
+ </object>
+'''
+
+layer_version_template = '''\
+ <object model="orm.layer_version" pk="{{lv_count}}">
+ <field rel="ManyToOneRel" to="orm.layer" name="layer">{{layer_id}}</field>
+ <field type="IntegerField" name="layer_source">0</field>
+ <field rel="ManyToOneRel" to="orm.release" name="release">{{release_id}}</field>
+ <field type="CharField" name="branch">{{branch}}</field>
+ <field type="CharField" name="dirpath">{{dirpath}}</field>
+ </object>
+'''
+
+layer_version_HEAD_template = '''\
+ <object model="orm.layer_version" pk="{{lv_count}}">
+ <field rel="ManyToOneRel" to="orm.layer" name="layer">{{layer_id}}</field>
+ <field type="IntegerField" name="layer_source">0</field>
+ <field rel="ManyToOneRel" to="orm.release" name="release">{{release_id}}</field>
+ <field type="CharField" name="branch">{{branch}}</field>
+ <field type="CharField" name="commit">{{commit}}</field>
+ <field type="CharField" name="dirpath">{{dirpath}}</field>
+ </object>
+'''
+
+layer_version_oe_core_template = '''\
+ <object model="orm.layer_version" pk="1">
+ <field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
+ <field rel="ManyToOneRel" to="orm.release" name="release">2</field>
+ <field type="CharField" name="local_path">OE-CORE-LAYER-DIR</field>
+ <field type="CharField" name="branch">HEAD</field>
+ <field type="CharField" name="dirpath">meta</field>
+ <field type="IntegerField" name="layer_source">0</field>
+ </object>
+'''
+
+epilog_template = '''\
+</django-objects>
+'''
+
+#################################
+# Helper Routines
+#
+
+def print_str(str,fd):
+ # Avoid extra newline at end
+ if str and (str[-1] == '\n'):
+ str = str[0:-1]
+ print(str,file=fd)
+
+def print_template(template,params,fd):
+ for line in template.split('\n'):
+ p = line.find('{{')
+ while p > 0:
+ q = line.find('}}')
+ key = line[p+2:q]
+ if key in params:
+ line = line[0:p] + params[key] + line[q+2:]
+ else:
+ line = line[0:p] + '?' + key + '?' + line[q+2:]
+ p = line.find('{{')
+ if line:
+ print(line,file=fd)
+
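
print_template() re-scans each line after a substitution, so params values
may themselves contain {{other_key}} references (the h_release values below
rely on this). A small usage sketch:

    import sys

    params = {'name': 'kirkstone', 'bitbake_id': '1',
              'branch': 'kirkstone', 'bitbakeversion': '1.54'}
    print_template(bitbakeversion_oecore_template, params, sys.stdout)
    # Keys missing from params render as ?key?, making gaps obvious.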
+#################################
+# Generate poky.xml
+#
+
+def generate_poky():
+ fd = open('poky.xml','w')
+
+ params = {}
+ params['distro'] = 'poky'
+ print_template(prolog_template,params,fd)
+ print_str('',fd)
+
+ print_str(' <!-- Bitbake versions which correspond to the metadata release -->',fd)
+ for i,release in enumerate(current_releases):
+ params = {}
+ params['release'] = release[0]
+ params['Release'] = release[0]
+ params['release_version'] = release[1]
+ if not (params['release'] in ('HEAD',)): # 'master',
+ params['release'] = params['release'][0].lower() + params['release'][1:]
+ params['name'] = params['release']
+ params['bitbake_id'] = str(i+1)
+ params['branch'] = params['release']
+ print_template(bitbakeversion_poky_template,params,fd)
+ print_str('',fd)
+
+ print_str('',fd)
+ print_str(' <!-- Releases available -->',fd)
+ for i,release in enumerate(current_releases):
+ params = {}
+ params['release'] = release[0]
+ params['Release'] = release[0]
+ params['release_version'] = release[1]
+ if not (params['release'] in ('HEAD',)): #'master',
+ params['release'] = params['release'][0].lower() + params['release'][1:]
+ params['h_release'] = '?h={{release}}'
+ params['name'] = params['release']
+ params['ra_count'] = str(i+1)
+ params['branch'] = params['release']
+
+ if 'HEAD' == params['release']:
+ params['help_source'] = 'with the version of the Yocto Project you have cloned or downloaded to your computer'
+ params['description'] = 'Local Yocto Project'
+ params['name'] = 'local'
+ else:
+ params['help_source'] = 'using the tip of the <a href="https://git.yoctoproject.org/cgit/cgit.cgi/poky/log/{{h_release}}">Yocto Project {{Release}} branch</a>'
+ params['description'] = 'Yocto Project {{release_version}} "{{Release}}"'
+ if 'master' == params['release']:
+ params['h_release'] = ''
+ params['description'] = 'Yocto Project master'
+
+ print_template(releases_available_template,params,fd)
+ print_str('',fd)
+
+ print_str(' <!-- Default project layers for each release -->',fd)
+ rdl_count = 1
+ for i,release in enumerate(current_releases):
+ for j,layer in enumerate(default_poky_layers):
+ params = {}
+ params['layer'] = layer
+ params['release'] = release[0]
+ params['Release'] = release[0]
+ params['release_version'] = release[1]
+ if not (params['release'] in ('master','HEAD')):
+ params['release'] = params['release'][0].lower() + params['release'][1:]
+ params['release_id'] = str(i+1)
+ params['rdl_count'] = str(rdl_count)
+ params['branch'] = params['release']
+ print_template(default_layers_template,params,fd)
+ rdl_count += 1
+ print_str('',fd)
+
+ print_str(default_layers_preface,fd)
+ lv_count = 1
+ for i,layer in enumerate(default_poky_layers):
+ params = {}
+ params['layer'] = layer
+ params['layer_id'] = str(i+1)
+ params['vcs_url'] = 'git://git.yoctoproject.org/poky'
+ params['vcs_web_url'] = 'https://git.yoctoproject.org/cgit/cgit.cgi/poky'
+ params['vcs_web_tree_base_url'] = 'https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%'
+ params['vcs_web_file_base_url'] = 'https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%'
+
+ if i:
+ print_str('',fd)
+ print_template(layer_poky_template,params,fd)
+ for j,release in enumerate(current_releases):
+ params['release'] = release[0]
+ params['Release'] = release[0]
+ params['release_version'] = release[1]
+ if not (params['release'] in ('master','HEAD')):
+ params['release'] = params['release'][0].lower() + params['release'][1:]
+ params['release_id'] = str(j+1)
+ params['lv_count'] = str(lv_count)
+ params['branch'] = params['release']
+ params['commit'] = params['release']
+
+ params['dirpath'] = params['layer']
+ if params['layer'] in ('openembedded-core',): #'openembedded-core',
+ params['dirpath'] = 'meta'
+
+ if 'HEAD' == params['release']:
+ print_template(layer_version_HEAD_template,params,fd)
+ else:
+ print_template(layer_version_template,params,fd)
+ lv_count += 1
+
+ print_str(epilog_template,fd)
+ fd.close()
+
+#################################
+# Generate oe-core.xml
+#
+
+def generate_oe_core():
+ fd = open('oe-core.xml','w')
+
+ params = {}
+ params['distro'] = 'nodistro'
+ print_template(prolog_template,params,fd)
+ print_str('',fd)
+
+ print_str(' <!-- Bitbake versions which correspond to the metadata release -->',fd)
+ for i,release in enumerate(current_releases):
+ params = {}
+ params['release'] = release[0]
+ params['Release'] = release[0]
+ params['bitbakeversion'] = release[6]
+ params['release_version'] = release[1]
+ if not (params['release'] in ('HEAD',)): # 'master',
+ params['release'] = params['release'][0].lower() + params['release'][1:]
+ params['name'] = params['release']
+ params['bitbake_id'] = str(i+1)
+ params['branch'] = params['release']
+ print_template(bitbakeversion_oecore_template,params,fd)
+ print_str('',fd)
+
+ print_str(' <!-- Releases available -->',fd)
+ for i,release in enumerate(current_releases):
+ params = {}
+ params['release'] = release[0]
+ params['Release'] = release[0]
+ params['release_version'] = release[1]
+ if not (params['release'] in ('HEAD',)): #'master',
+ params['release'] = params['release'][0].lower() + params['release'][1:]
+ params['h_release'] = '?h={{release}}'
+ params['name'] = params['release']
+ params['ra_count'] = str(i+1)
+ params['branch'] = params['release']
+
+ if 'HEAD' == params['release']:
+ params['help_source'] = 'with the version of OpenEmbedded that you have cloned or downloaded to your computer'
+ params['description'] = 'Local OpenEmbedded'
+ params['name'] = 'local'
+ else:
+ params['help_source'] = 'using the tip of the <a href=\\"https://cgit.openembedded.org/openembedded-core/log/{{h_release}}\\">OpenEmbedded {{Release}}</a> branch'
+ params['description'] = 'OpenEmbedded {{Release}}'
+ if 'master' == params['release']:
+ params['h_release'] = ''
+ params['description'] = 'OpenEmbedded core master'
+ params['Release'] = params['release']
+
+ print_template(releases_available_template,params,fd)
+ print_str('',fd)
+
+ print_str(' <!-- Default layers for each release -->',fd)
+ rdl_count = 1
+ for i,release in enumerate(current_releases):
+ for j,layer in enumerate(default_oe_core_layers):
+ params = {}
+ params['layer'] = layer
+ params['release'] = release[0]
+ params['Release'] = release[0]
+ params['release_version'] = release[1]
+ if not (params['release'] in ('master','HEAD')):
+ params['release'] = params['release'][0].lower() + params['release'][1:]
+ params['release_id'] = str(i+1)
+ params['rdl_count'] = str(rdl_count)
+ params['branch'] = params['release']
+ print_template(default_layers_template,params,fd)
+ rdl_count += 1
+ print_str('',fd)
+
+ print_str('',fd)
+ print_str(' <!-- Layer for the Local release -->',fd)
+ lv_count = 1
+ for i,layer in enumerate(default_oe_core_layers):
+ params = {}
+ params['layer'] = layer
+ params['layer_id'] = str(i+1)
+ params['vcs_url'] = 'git://git.openembedded.org/openembedded-core'
+ params['vcs_web_url'] = 'https://cgit.openembedded.org/openembedded-core'
+ params['vcs_web_tree_base_url'] = 'https://cgit.openembedded.org/openembedded-core/tree/%path%?h=%branch%'
+ params['vcs_web_file_base_url'] = 'https://cgit.openembedded.org/openembedded-core/tree/%path%?h=%branch%'
+ if i:
+ print_str('',fd)
+ print_template(layer_oe_core_template,params,fd)
+
+ print_template(layer_version_oe_core_template,params,fd)
+ print_str('',fd)
+
+ print_str(epilog_template,fd)
+ fd.close()
+
+#################################
+# Help
+#
+
+def list_releases():
+ print("Release ReleaseVer BitbakeVer Support Level")
+ print("========== =========== ========== ==============================================")
+ for release in current_releases:
+ print("%10s %10s %11s %s" % (release[0],release[1],release[6],release[4]))
+
+#################################
+# main
+#
+
+def main(argv):
+ global verbose
+
+ parser = argparse.ArgumentParser(description='gen_fixtures.py: generate the Toaster fixture files from the release table')
+ parser.add_argument('--poky', '-p', action='store_const', const='poky', dest='command', help='Generate the poky.xml file')
+ parser.add_argument('--oe-core', '-o', action='store_const', const='oe_core', dest='command', help='Generate the oe-core.xml file')
+ parser.add_argument('--all', '-a', action='store_const', const='all', dest='command', help='Generate all fixture files')
+ parser.add_argument('--list', '-l', action='store_const', const='list', dest='command', help='List the release table')
+ parser.add_argument('--verbose', '-v', action='store_true', dest='verbose', help='Enable verbose debugging output')
+ args = parser.parse_args()
+
+ verbose = args.verbose
+ if 'poky' == args.command:
+ generate_poky()
+ elif 'oe_core' == args.command:
+ generate_oe_core()
+ elif 'all' == args.command:
+ generate_poky()
+ generate_oe_core()
+ elif 'list' == args.command:
+ list_releases()
+
+ else:
+ print("No command for 'gen_fixtures.py' selected")
+
+if __name__ == '__main__':
+ main(sys.argv[1:])
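
A quick usage sketch for the generator above (paths assumed: the script is
expected to sit next to the fixture files it writes, under
bitbake/lib/toaster/orm/fixtures/):

    $ ./gen_fixtures.py --list    # print the release table
    $ ./gen_fixtures.py --all     # regenerate poky.xml and oe-core.xml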
diff --git a/bitbake/lib/toaster/orm/fixtures/oe-core.xml b/bitbake/lib/toaster/orm/fixtures/oe-core.xml
index 026d948..450e7a2 100644
--- a/bitbake/lib/toaster/orm/fixtures/oe-core.xml
+++ b/bitbake/lib/toaster/orm/fixtures/oe-core.xml
@@ -8,9 +8,9 @@
<!-- Bitbake versions which correspond to the metadata release -->
<object model="orm.bitbakeversion" pk="1">
- <field type="CharField" name="name">dunfell</field>
+ <field type="CharField" name="name">kirkstone</field>
<field type="CharField" name="giturl">git://git.openembedded.org/bitbake</field>
- <field type="CharField" name="branch">1.46</field>
+ <field type="CharField" name="branch">1.54</field>
</object>
<object model="orm.bitbakeversion" pk="2">
<field type="CharField" name="name">HEAD</field>
@@ -23,18 +23,23 @@
<field type="CharField" name="branch">master</field>
</object>
<object model="orm.bitbakeversion" pk="4">
- <field type="CharField" name="name">gatesgarth</field>
+ <field type="CharField" name="name">honister</field>
<field type="CharField" name="giturl">git://git.openembedded.org/bitbake</field>
- <field type="CharField" name="branch">1.48</field>
+ <field type="CharField" name="branch">1.52</field>
+ </object>
+ <object model="orm.bitbakeversion" pk="5">
+ <field type="CharField" name="name">hardknott</field>
+ <field type="CharField" name="giturl">git://git.openembedded.org/bitbake</field>
+ <field type="CharField" name="branch">1.50</field>
</object>
<!-- Releases available -->
<object model="orm.release" pk="1">
- <field type="CharField" name="name">dunfell</field>
- <field type="CharField" name="description">Openembedded Dunfell</field>
+ <field type="CharField" name="name">kirkstone</field>
+ <field type="CharField" name="description">Openembedded Kirkstone</field>
<field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">1</field>
- <field type="CharField" name="branch_name">dunfell</field>
- <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href=\"http://cgit.openembedded.org/openembedded-core/log/?h=dunfell\">OpenEmbedded Dunfell</a> branch.</field>
+ <field type="CharField" name="branch_name">kirkstone</field>
+ <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href=\"https://cgit.openembedded.org/openembedded-core/log/?h=kirkstone\">OpenEmbedded Kirkstone</a> branch.</field>
</object>
<object model="orm.release" pk="2">
<field type="CharField" name="name">local</field>
@@ -48,14 +53,21 @@
<field type="CharField" name="description">OpenEmbedded core master</field>
<field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">3</field>
<field type="CharField" name="branch_name">master</field>
- <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href=\"http://cgit.openembedded.org/openembedded-core/log/\">OpenEmbedded master</a> branch.</field>
+ <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href=\"https://cgit.openembedded.org/openembedded-core/log/\">OpenEmbedded master</a> branch.</field>
</object>
<object model="orm.release" pk="4">
- <field type="CharField" name="name">gatesgarth</field>
- <field type="CharField" name="description">Openembedded Gatesgarth</field>
+ <field type="CharField" name="name">honister</field>
+ <field type="CharField" name="description">Openembedded Honister</field>
<field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">4</field>
- <field type="CharField" name="branch_name">gatesgarth</field>
- <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href=\"http://cgit.openembedded.org/openembedded-core/log/?h=gatesgarth\">OpenEmbedded Gatesgarth</a> branch.</field>
+ <field type="CharField" name="branch_name">honister</field>
+ <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href=\"https://cgit.openembedded.org/openembedded-core/log/?h=honister\">OpenEmbedded Honister</a> branch.</field>
+ </object>
+ <object model="orm.release" pk="5">
+ <field type="CharField" name="name">hardknott</field>
+ <field type="CharField" name="description">Openembedded Hardknott</field>
+ <field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">5</field>
+ <field type="CharField" name="branch_name">hardknott</field>
+ <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href=\"https://cgit.openembedded.org/openembedded-core/log/?h=hardknott\">OpenEmbedded Hardknott</a> branch.</field>
</object>
<!-- Default layers for each release -->
@@ -75,15 +87,19 @@
<field rel="ManyToOneRel" to="orm.release" name="release">4</field>
<field type="CharField" name="layer_name">openembedded-core</field>
</object>
+ <object model="orm.releasedefaultlayer" pk="5">
+ <field rel="ManyToOneRel" to="orm.release" name="release">5</field>
+ <field type="CharField" name="layer_name">openembedded-core</field>
+ </object>
<!-- Layer for the Local release -->
<object model="orm.layer" pk="1">
<field type="CharField" name="name">openembedded-core</field>
<field type="CharField" name="vcs_url">git://git.openembedded.org/openembedded-core</field>
- <field type="CharField" name="vcs_web_url">http://cgit.openembedded.org/openembedded-core</field>
- <field type="CharField" name="vcs_web_tree_base_url">http://cgit.openembedded.org/openembedded-core/tree/%path%?h=%branch%</field>
- <field type="CharField" name="vcs_web_file_base_url">http://cgit.openembedded.org/openembedded-core/tree/%path%?h=%branch%</field>
+ <field type="CharField" name="vcs_web_url">https://cgit.openembedded.org/openembedded-core</field>
+ <field type="CharField" name="vcs_web_tree_base_url">https://cgit.openembedded.org/openembedded-core/tree/%path%?h=%branch%</field>
+ <field type="CharField" name="vcs_web_file_base_url">https://cgit.openembedded.org/openembedded-core/tree/%path%?h=%branch%</field>
</object>
<object model="orm.layer_version" pk="1">
<field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
diff --git a/bitbake/lib/toaster/orm/fixtures/poky.xml b/bitbake/lib/toaster/orm/fixtures/poky.xml
index a468a54..20fcc01 100644
--- a/bitbake/lib/toaster/orm/fixtures/poky.xml
+++ b/bitbake/lib/toaster/orm/fixtures/poky.xml
@@ -8,9 +8,9 @@
<!-- Bitbake versions which correspond to the metadata release -->
<object model="orm.bitbakeversion" pk="1">
- <field type="CharField" name="name">dunfell</field>
+ <field type="CharField" name="name">kirkstone</field>
<field type="CharField" name="giturl">git://git.yoctoproject.org/poky</field>
- <field type="CharField" name="branch">dunfell</field>
+ <field type="CharField" name="branch">kirkstone</field>
<field type="CharField" name="dirpath">bitbake</field>
</object>
<object model="orm.bitbakeversion" pk="2">
@@ -26,20 +26,26 @@
<field type="CharField" name="dirpath">bitbake</field>
</object>
<object model="orm.bitbakeversion" pk="4">
- <field type="CharField" name="name">gatesgarth</field>
+ <field type="CharField" name="name">honister</field>
<field type="CharField" name="giturl">git://git.yoctoproject.org/poky</field>
- <field type="CharField" name="branch">gatesgarth</field>
+ <field type="CharField" name="branch">honister</field>
+ <field type="CharField" name="dirpath">bitbake</field>
+ </object>
+ <object model="orm.bitbakeversion" pk="5">
+ <field type="CharField" name="name">hardknott</field>
+ <field type="CharField" name="giturl">git://git.yoctoproject.org/poky</field>
+ <field type="CharField" name="branch">hardknott</field>
<field type="CharField" name="dirpath">bitbake</field>
</object>
<!-- Releases available -->
<object model="orm.release" pk="1">
- <field type="CharField" name="name">dunfell</field>
- <field type="CharField" name="description">Yocto Project 3.1 "Dunfell"</field>
+ <field type="CharField" name="name">kirkstone</field>
+ <field type="CharField" name="description">Yocto Project 4.0 "Kirkstone"</field>
<field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">1</field>
- <field type="CharField" name="branch_name">dunfell</field>
- <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=dunfell">Yocto Project Dunfell branch</a>.</field>
+ <field type="CharField" name="branch_name">kirkstone</field>
+ <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href="https://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=kirkstone">Yocto Project Kirkstone branch</a>.</field>
</object>
<object model="orm.release" pk="2">
<field type="CharField" name="name">local</field>
@@ -53,14 +59,21 @@
<field type="CharField" name="description">Yocto Project master</field>
<field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">3</field>
<field type="CharField" name="branch_name">master</field>
- <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/">Yocto Project Master branch</a>.</field>
+ <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href="https://git.yoctoproject.org/cgit/cgit.cgi/poky/log/">Yocto Project Master branch</a>.</field>
</object>
<object model="orm.release" pk="4">
- <field type="CharField" name="name">gatesgarth</field>
- <field type="CharField" name="description">Yocto Project 3.2 "Gatesgarth"</field>
+ <field type="CharField" name="name">honister</field>
+ <field type="CharField" name="description">Yocto Project 3.4 "Honister"</field>
<field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">4</field>
- <field type="CharField" name="branch_name">gatesgarth</field>
- <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=gatesgarth">Yocto Project Gatesgarth branch</a>.</field>
+ <field type="CharField" name="branch_name">honister</field>
+ <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href="https://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=honister">Yocto Project Honister branch</a>.</field>
+ </object>
+ <object model="orm.release" pk="5">
+ <field type="CharField" name="name">hardknott</field>
+ <field type="CharField" name="description">Yocto Project 3.3 "Hardknott"</field>
+ <field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">5</field>
+ <field type="CharField" name="branch_name">hardknott</field>
+ <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href="https://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=hardknott">Yocto Project Hardknott branch</a>.</field>
</object>
<!-- Default project layers for each release -->
@@ -112,6 +125,18 @@
<field rel="ManyToOneRel" to="orm.release" name="release">4</field>
<field type="CharField" name="layer_name">meta-yocto-bsp</field>
</object>
+ <object model="orm.releasedefaultlayer" pk="13">
+ <field rel="ManyToOneRel" to="orm.release" name="release">5</field>
+ <field type="CharField" name="layer_name">openembedded-core</field>
+ </object>
+ <object model="orm.releasedefaultlayer" pk="14">
+ <field rel="ManyToOneRel" to="orm.release" name="release">5</field>
+ <field type="CharField" name="layer_name">meta-poky</field>
+ </object>
+ <object model="orm.releasedefaultlayer" pk="15">
+ <field rel="ManyToOneRel" to="orm.release" name="release">5</field>
+ <field type="CharField" name="layer_name">meta-yocto-bsp</field>
+ </object>
<!-- Default layers provided by poky
openembedded-core
@@ -122,15 +147,15 @@
<field type="CharField" name="name">openembedded-core</field>
<field type="CharField" name="layer_index_url"></field>
<field type="CharField" name="vcs_url">git://git.yoctoproject.org/poky</field>
- <field type="CharField" name="vcs_web_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky</field>
- <field type="CharField" name="vcs_web_tree_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
- <field type="CharField" name="vcs_web_file_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
+ <field type="CharField" name="vcs_web_url">https://git.yoctoproject.org/cgit/cgit.cgi/poky</field>
+ <field type="CharField" name="vcs_web_tree_base_url">https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
+ <field type="CharField" name="vcs_web_file_base_url">https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
</object>
<object model="orm.layer_version" pk="1">
<field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">1</field>
- <field type="CharField" name="branch">dunfell</field>
+ <field type="CharField" name="branch">kirkstone</field>
<field type="CharField" name="dirpath">meta</field>
</object>
<object model="orm.layer_version" pk="2">
@@ -152,7 +177,14 @@
<field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">4</field>
- <field type="CharField" name="branch">gatesgarth</field>
+ <field type="CharField" name="branch">honister</field>
+ <field type="CharField" name="dirpath">meta</field>
+ </object>
+ <object model="orm.layer_version" pk="5">
+ <field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
+ <field type="IntegerField" name="layer_source">0</field>
+ <field rel="ManyToOneRel" to="orm.release" name="release">5</field>
+ <field type="CharField" name="branch">hardknott</field>
<field type="CharField" name="dirpath">meta</field>
</object>
@@ -160,18 +192,18 @@
<field type="CharField" name="name">meta-poky</field>
<field type="CharField" name="layer_index_url"></field>
<field type="CharField" name="vcs_url">git://git.yoctoproject.org/poky</field>
- <field type="CharField" name="vcs_web_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky</field>
- <field type="CharField" name="vcs_web_tree_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
- <field type="CharField" name="vcs_web_file_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
+ <field type="CharField" name="vcs_web_url">https://git.yoctoproject.org/cgit/cgit.cgi/poky</field>
+ <field type="CharField" name="vcs_web_tree_base_url">https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
+ <field type="CharField" name="vcs_web_file_base_url">https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
</object>
- <object model="orm.layer_version" pk="5">
+ <object model="orm.layer_version" pk="6">
<field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">1</field>
- <field type="CharField" name="branch">dunfell</field>
+ <field type="CharField" name="branch">kirkstone</field>
<field type="CharField" name="dirpath">meta-poky</field>
</object>
- <object model="orm.layer_version" pk="6">
+ <object model="orm.layer_version" pk="7">
<field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">2</field>
@@ -179,18 +211,25 @@
<field type="CharField" name="commit">HEAD</field>
<field type="CharField" name="dirpath">meta-poky</field>
</object>
- <object model="orm.layer_version" pk="7">
+ <object model="orm.layer_version" pk="8">
<field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">3</field>
<field type="CharField" name="branch">master</field>
<field type="CharField" name="dirpath">meta-poky</field>
</object>
- <object model="orm.layer_version" pk="8">
+ <object model="orm.layer_version" pk="9">
<field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">4</field>
- <field type="CharField" name="branch">gatesgarth</field>
+ <field type="CharField" name="branch">honister</field>
+ <field type="CharField" name="dirpath">meta-poky</field>
+ </object>
+ <object model="orm.layer_version" pk="10">
+ <field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
+ <field type="IntegerField" name="layer_source">0</field>
+ <field rel="ManyToOneRel" to="orm.release" name="release">5</field>
+ <field type="CharField" name="branch">hardknott</field>
<field type="CharField" name="dirpath">meta-poky</field>
</object>
@@ -198,18 +237,18 @@
<field type="CharField" name="name">meta-yocto-bsp</field>
<field type="CharField" name="layer_index_url"></field>
<field type="CharField" name="vcs_url">git://git.yoctoproject.org/poky</field>
- <field type="CharField" name="vcs_web_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky</field>
- <field type="CharField" name="vcs_web_tree_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
- <field type="CharField" name="vcs_web_file_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
+ <field type="CharField" name="vcs_web_url">https://git.yoctoproject.org/cgit/cgit.cgi/poky</field>
+ <field type="CharField" name="vcs_web_tree_base_url">https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
+ <field type="CharField" name="vcs_web_file_base_url">https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
</object>
- <object model="orm.layer_version" pk="9">
+ <object model="orm.layer_version" pk="11">
<field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">1</field>
- <field type="CharField" name="branch">dunfell</field>
+ <field type="CharField" name="branch">kirkstone</field>
<field type="CharField" name="dirpath">meta-yocto-bsp</field>
</object>
- <object model="orm.layer_version" pk="10">
+ <object model="orm.layer_version" pk="12">
<field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">2</field>
@@ -217,18 +256,25 @@
<field type="CharField" name="commit">HEAD</field>
<field type="CharField" name="dirpath">meta-yocto-bsp</field>
</object>
- <object model="orm.layer_version" pk="11">
+ <object model="orm.layer_version" pk="13">
<field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">3</field>
<field type="CharField" name="branch">master</field>
<field type="CharField" name="dirpath">meta-yocto-bsp</field>
</object>
- <object model="orm.layer_version" pk="12">
+ <object model="orm.layer_version" pk="14">
<field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">4</field>
- <field type="CharField" name="branch">gatesgarth</field>
+ <field type="CharField" name="branch">honister</field>
+ <field type="CharField" name="dirpath">meta-yocto-bsp</field>
+ </object>
+ <object model="orm.layer_version" pk="15">
+ <field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
+ <field type="IntegerField" name="layer_source">0</field>
+ <field rel="ManyToOneRel" to="orm.release" name="release">5</field>
+ <field type="CharField" name="branch">hardknott</field>
<field type="CharField" name="dirpath">meta-yocto-bsp</field>
</object>
</django-objects>
diff --git a/bitbake/lib/toaster/orm/fixtures/settings.xml b/bitbake/lib/toaster/orm/fixtures/settings.xml
index 78c0fdc..ab3ea02 100644
--- a/bitbake/lib/toaster/orm/fixtures/settings.xml
+++ b/bitbake/lib/toaster/orm/fixtures/settings.xml
@@ -19,7 +19,7 @@
<field type="CharField" name="value">${TOPDIR}/../sstate-cache</field>
</object>
<object model="orm.toastersetting" pk="6">
- <field type="CharField" name="name">DEFCONF_IMAGE_INSTALL_append</field>
+ <field type="CharField" name="name">DEFCONF_IMAGE_INSTALL:append</field>
<field type="CharField" name="value"></field>
</object>
<object model="orm.toastersetting" pk="7">
diff --git a/bitbake/lib/toaster/orm/management/commands/lsupdates.py b/bitbake/lib/toaster/orm/management/commands/lsupdates.py
index 2fbd7be..eb09755 100644
--- a/bitbake/lib/toaster/orm/management/commands/lsupdates.py
+++ b/bitbake/lib/toaster/orm/management/commands/lsupdates.py
@@ -21,7 +21,7 @@ import threading
import time
logger = logging.getLogger("toaster")
-DEFAULT_LAYERINDEX_SERVER = "http://layers.openembedded.org/layerindex/api/"
+DEFAULT_LAYERINDEX_SERVER = "https://layers.openembedded.org/layerindex/api/"
# Add path to bitbake modules for layerindexlib
# lib/toaster/orm/management/commands/lsupdates.py (abspath)
@@ -87,13 +87,13 @@ class Command(BaseCommand):
# update branches; only those that we already have names listed in the
# Releases table
- whitelist_branch_names = [rel.branch_name
- for rel in Release.objects.all()]
- if len(whitelist_branch_names) == 0:
+ allowed_branch_names = [rel.branch_name
+ for rel in Release.objects.all()]
+ if len(allowed_branch_names) == 0:
raise Exception("Failed to make list of branches to fetch")
logger.info("Fetching metadata for %s",
- " ".join(whitelist_branch_names))
+ " ".join(allowed_branch_names))
# We require a non-empty bb.data, but we can fake it with a dictionary
layerindex = layerindexlib.LayerIndex({"DUMMY" : "VALUE"})
@@ -101,8 +101,8 @@ class Command(BaseCommand):
http_progress = Spinner()
http_progress.start()
- if whitelist_branch_names:
- url_branches = ";branch=%s" % ','.join(whitelist_branch_names)
+ if allowed_branch_names:
+ url_branches = ";branch=%s" % ','.join(allowed_branch_names)
else:
url_branches = ""
layerindex.load_layerindex("%s%s" % (self.apiurl, url_branches))
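
With the default server above, the branch filter is composed by plain
string formatting; roughly (a sketch mirroring the hunk, branch names
assumed):

    branches = ["kirkstone", "honister", "hardknott", "master"]
    url = "%s%s" % ("https://layers.openembedded.org/layerindex/api/",
                    ";branch=%s" % ",".join(branches))
    # url == "https://layers.openembedded.org/layerindex/api/;branch=kirkstone,honister,hardknott,master"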
diff --git a/bitbake/lib/toaster/orm/migrations/0020_models_bigautofield.py b/bitbake/lib/toaster/orm/migrations/0020_models_bigautofield.py
new file mode 100644
index 0000000..f19b5dd
--- /dev/null
+++ b/bitbake/lib/toaster/orm/migrations/0020_models_bigautofield.py
@@ -0,0 +1,173 @@
+# Generated by Django 3.2.12 on 2022-03-06 03:28
+
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('orm', '0019_django_2_2'),
+ ]
+
+ operations = [
+ migrations.AlterField(
+ model_name='bitbakeversion',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='build',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='distro',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='helptext',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='layer',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='layer_version',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='layerversiondependency',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='logmessage',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='machine',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='package',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='package_dependency',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='package_file',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='project',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='projectlayer',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='projecttarget',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='projectvariable',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='provides',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='recipe',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='recipe_dependency',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='release',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='releasedefaultlayer',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='target',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='target_file',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='target_image_file',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='target_installed_package',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='targetkernelfile',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='targetsdkfile',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='task',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='task_dependency',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='toastersetting',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='variable',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ migrations.AlterField(
+ model_name='variablehistory',
+ name='id',
+ field=models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID'),
+ ),
+ ]
diff --git a/bitbake/lib/toaster/orm/models.py b/bitbake/lib/toaster/orm/models.py
index 7f7e922..2cb7d7e 100644
--- a/bitbake/lib/toaster/orm/models.py
+++ b/bitbake/lib/toaster/orm/models.py
@@ -58,7 +58,6 @@ if 'sqlite' in settings.DATABASES['default']['ENGINE']:
return _base_insert(self, *args, **kwargs)
QuerySet._insert = _insert
- from django.utils import six
def _create_object_from_params(self, lookup, params):
"""
Tries to create an object using passed params.
@@ -1717,9 +1716,9 @@ class CustomImageRecipe(Recipe):
def generate_recipe_file_contents(self):
"""Generate the contents for the recipe file."""
- # If we have no excluded packages we only need to _append
+ # If we have no excluded packages we only need to :append
if self.excludes_set.count() == 0:
- packages_conf = "IMAGE_INSTALL_append = \" "
+ packages_conf = "IMAGE_INSTALL:append = \" "
for pkg in self.appends_set.all():
packages_conf += pkg.name+' '
diff --git a/bitbake/lib/toaster/toastergui/templates/base.html b/bitbake/lib/toaster/toastergui/templates/base.html
index 9e19cc3..2b30549 100644
--- a/bitbake/lib/toaster/toastergui/templates/base.html
+++ b/bitbake/lib/toaster/toastergui/templates/base.html
@@ -123,7 +123,7 @@
{% endif %}
{% endif %}
<li id="navbar-docs">
- <a target="_blank" href="https://www.yoctoproject.org/docs/latest/toaster-manual/toaster-manual.html">
+ <a target="_blank" href="http://docs.yoctoproject.org/toaster-manual/index.html#toaster-user-manual">
<i class="glyphicon glyphicon-book"></i>
Documentation
</a>
diff --git a/bitbake/lib/toaster/toastergui/templates/configvars.html b/bitbake/lib/toaster/toastergui/templates/configvars.html
index 33fef93..691dace 100644
--- a/bitbake/lib/toaster/toastergui/templates/configvars.html
+++ b/bitbake/lib/toaster/toastergui/templates/configvars.html
@@ -66,7 +66,7 @@
<td class="description">
{% if variable.description %}
{{variable.description}}
- <a href="https://www.yoctoproject.org/docs/current/ref-manual/ref-manual.html#var-{{variable.variable_name|variable_parent_name}}" target="_blank">
+ <a href="http://docs.yoctoproject.org/ref-manual/variables.html#term-{{variable.variable_name|variable_parent_name}}" target="_blank">
<span class="glyphicon glyphicon-new-window get-info"></span></a>
{% endif %}
</td>
diff --git a/bitbake/lib/toaster/toastergui/templates/landing.html b/bitbake/lib/toaster/toastergui/templates/landing.html
index bfaaf6f..08b40fb 100644
--- a/bitbake/lib/toaster/toastergui/templates/landing.html
+++ b/bitbake/lib/toaster/toastergui/templates/landing.html
@@ -15,7 +15,7 @@
<p>A web interface to <a href="https://www.openembedded.org">OpenEmbedded</a> and <a href="https://www.yoctoproject.org/tools-resources/projects/bitbake">BitBake</a>, the <a href="https://www.yoctoproject.org">Yocto Project</a> build system.</p>
<p class="top-air">
- <a class="btn btn-info btn-lg" href="https://www.yoctoproject.org/docs/latest/toaster-manual/toaster-manual.html#toaster-manual-setup-and-use">
+ <a class="btn btn-info btn-lg" href="http://docs.yoctoproject.org/toaster-manual/setup-and-use.html#setting-up-and-using-toaster">
Toaster is ready to capture your command line builds
</a>
</p>
@@ -33,7 +33,7 @@
Toaster has no layer information. Without layer information, you cannot run builds. To generate layer information you can:
<ul>
<li>
- <a href="https://www.yoctoproject.org/docs/latest/toaster-manual/toaster-manual.html#layer-source">Configure a layer source</a>
+ <a href="http://docs.yoctoproject.org/toaster-manual/reference.html#layer-source">Configure a layer source</a>
</li>
<li>
<a href="{% url 'newproject' %}">Create a project</a>, then import layers
@@ -44,7 +44,7 @@
<ul class="list-unstyled lead">
<li>
- <a href="https://www.yoctoproject.org/docs/latest/toaster-manual/toaster-manual.html">
+ <a href="http://docs.yoctoproject.org/toaster-manual/index.html#toaster-user-manual">
Read the Toaster manual
</a>
</li>
diff --git a/bitbake/lib/toaster/toastergui/templates/landing_not_managed.html b/bitbake/lib/toaster/toastergui/templates/landing_not_managed.html
deleted file mode 100644
index e7200b8..0000000
--- a/bitbake/lib/toaster/toastergui/templates/landing_not_managed.html
+++ /dev/null
@@ -1,34 +0,0 @@
-{% extends "base.html" %}
-
-{% load static %}
-{% load projecttags %}
-{% load humanize %}
-
-{% block title %} Welcome to Toaster {% endblock %}
-
-{% block pagecontent %}
-
- <div class="container">
- <div class="row">
- <!-- Empty - no build module -->
- <div class="page-header top-air">
- <h1>
- This page only works with Toaster in 'Build' mode
- </h1>
- </div>
- <div class="alert alert-info lead">
- <p">
- The 'Build' mode allows you to configure and run your Yocto Project builds from Toaster.
- <ul>
- <li><a href="https://www.yoctoproject.org/docs/latest/toaster-manual/toaster-manual.html#intro-modes">
- Read about the 'Build' mode
- </a></li>
- <li><a href="/">
- View your builds
- </a></li>
- </ul>
- </p>
- </div>
- </div>
-
-{% endblock %}
diff --git a/bitbake/lib/toaster/toastergui/templates/layerdetails.html b/bitbake/lib/toaster/toastergui/templates/layerdetails.html
index 1e26e31..923ca3b 100644
--- a/bitbake/lib/toaster/toastergui/templates/layerdetails.html
+++ b/bitbake/lib/toaster/toastergui/templates/layerdetails.html
@@ -355,7 +355,7 @@
{% if layerversion.layer_source == layer_source.TYPE_LAYERINDEX %}
<dt>Layer index</dt>
<dd>
- <a href="http://layers.openembedded.org/layerindex/branch/{{layerversion.release.name}}/layer/{{layerversion.layer.name}}">Layer index {{layerversion.layer.name}}</a>
+ <a href="https://layers.openembedded.org/layerindex/branch/{{layerversion.release.name}}/layer/{{layerversion.layer.name}}">Layer index {{layerversion.layer.name}}</a>
</dd>
{% endif %}
</dl>
diff --git a/bitbake/lib/toaster/toastergui/templates/package_detail_base.html b/bitbake/lib/toaster/toastergui/templates/package_detail_base.html
index 66f8e7f..a4fcd2a 100644
--- a/bitbake/lib/toaster/toastergui/templates/package_detail_base.html
+++ b/bitbake/lib/toaster/toastergui/templates/package_detail_base.html
@@ -127,7 +127,7 @@
{% comment %}
# Removed per team meeting of 1/29/2014 until
# decision on index search algorithm
- <a href="http://layers.openembedded.org" target="_blank">
+ <a href="https://layers.openembedded.org" target="_blank">
<i class="glyphicon glyphicon-share get-info"></i>
</a>
{% endcomment %}
diff --git a/bitbake/lib/toaster/toastergui/templates/project.html b/bitbake/lib/toaster/toastergui/templates/project.html
index d8ad2c7..22239a8 100644
--- a/bitbake/lib/toaster/toastergui/templates/project.html
+++ b/bitbake/lib/toaster/toastergui/templates/project.html
@@ -139,7 +139,7 @@
<ul>
<li><a href="{% url 'projectlayers' project.id %}">Choose from the layers compatible with this project</a></li>
<li><a href="{% url 'importlayer' project.id %}">Import a layer</a></li>
- <li><a href="https://www.yoctoproject.org/docs/current/dev-manual/dev-manual.html#understanding-and-creating-layers" target="_blank">Read about layers in the documentation</a></li>
+ <li><a href="http://docs.yoctoproject.org/dev-manual/common-tasks.html#understanding-and-creating-layers" target="_blank">Read about layers in the documentation</a></li>
<li>Or type a layer name below</li>
</ul>
</div>
diff --git a/bitbake/lib/toaster/toastergui/templates/project_specific.html b/bitbake/lib/toaster/toastergui/templates/project_specific.html
index 42725c0..76d45b1 100644
--- a/bitbake/lib/toaster/toastergui/templates/project_specific.html
+++ b/bitbake/lib/toaster/toastergui/templates/project_specific.html
@@ -137,7 +137,7 @@
<ul>
<li><a href="{% url 'projectlayers' project.id %}">Choose from the layers compatible with this project</a></li>
<li><a href="{% url 'importlayer' project.id %}">Import a layer</a></li>
- <li><a href="https://www.yoctoproject.org/docs/current/dev-manual/dev-manual.html#understanding-and-creating-layers" target="_blank">Read about layers in the documentation</a></li>
+ <li><a href="http://docs.yoctoproject.org/dev-manual/common-tasks.html#understanding-and-creating-layers" target="_blank">Read about layers in the documentation</a></li>
<li>Or type a layer name below</li>
</ul>
</div>
diff --git a/bitbake/lib/toaster/toastergui/templates/projectconf.html b/bitbake/lib/toaster/toastergui/templates/projectconf.html
index bd49f1f..c306835 100644
--- a/bitbake/lib/toaster/toastergui/templates/projectconf.html
+++ b/bitbake/lib/toaster/toastergui/templates/projectconf.html
@@ -73,7 +73,7 @@
{% if image_install_append_defined %}
<dt>
- <span class="js-config-var-name js-config-var-managed-name">IMAGE_INSTALL_append</span>
+ <span class="js-config-var-name js-config-var-managed-name">IMAGE_INSTALL:append</span>
<span class="glyphicon glyphicon-question-sign get-help" title="Specifies additional packages to install into an image. If your build creates more than one image, the packages will be installed in all of them"></span>
</dt>
<dd class="variable-list">
@@ -83,7 +83,7 @@
<form id="change-image_install-form" class="form-inline" style="display:none;">
<div class="row">
<div class="col-md-4">
- <span class="help-block">To set IMAGE_INSTALL_append to more than one package, type the package names separated by a space.</span>
+ <span class="help-block">To set IMAGE_INSTALL:append to more than one package, type the package names separated by a space.</span>
</div>
</div>
<div class="form-group">
@@ -167,8 +167,8 @@
{% for fstype in vars_fstypes %}
<input type="hidden" class="js-checkbox-fstypes-list" value="{{fstype}}">
{% endfor %}
- {% for b in vars_blacklist %}
- <input type="hidden" class="js-config-blacklist-name" value="{{b}}">
+ {% for b in vars_disallowed %}
+ <input type="hidden" class="js-config-disallowed-name" value="{{b}}">
{% endfor %}
{% for b in vars_managed %}
<input type="hidden" class="js-config-managed-name" value="{{b}}">
@@ -201,12 +201,12 @@
<p>Toaster cannot set any variables that impact 1) the configuration of the build servers,
or 2) where artifacts produced by the build are stored. Such variables include: </p>
<p>
- <code><a href="https://www.yoctoproject.org/docs/current/ref-manual/ref-manual.html#var-BB_DISKMON_DIRS" target="_blank">BB_DISKMON_DIRS</a></code>
- <code><a href="https://www.yoctoproject.org/docs/current/ref-manual/ref-manual.html#var-BB_NUMBER_THREADS" target="_blank">BB_NUMBER_THREADS</a></code>
+ <code><a href="http://docs.yoctoproject.org/ref-manual/variables.html#term-BB_DISKMON_DIRS" target="_blank">BB_DISKMON_DIRS</a></code>
+ <code><a href="http://docs.yoctoproject.org/ref-manual/variables.html#term-BB_NUMBER_THREADS" target="_blank">BB_NUMBER_THREADS</a></code>
<code>CVS_PROXY_HOST</code>
<code>CVS_PROXY_PORT</code>
- <code><a href="https://www.yoctoproject.org/docs/current/ref-manual/ref-manual.html#var-PARALLEL_MAKE" target="_blank">PARALLEL_MAKE</a></code>
- <code><a href="https://www.yoctoproject.org/docs/current/ref-manual/ref-manual.html#var-TMPDIR" target="_blank">TMPDIR</a></code></p>
+ <code><a href="http://docs.yoctoproject.org/ref-manual/variables.html#term-PARALLEL_MAKE" target="_blank">PARALLEL_MAKE</a></code>
+ <code><a href="http://docs.yoctoproject.org/ref-manual/variables.html#term-TMPDIR" target="_blank">TMPDIR</a></code></p>
<p>Plus the following standard shell environment variables:</p>
<p><code>http_proxy</code> <code>ftp_proxy</code> <code>https_proxy</code> <code>all_proxy</code></p>
</div>
@@ -238,9 +238,9 @@ function validate_new_variable() {
}
}
- var blacklist_configvars = document.getElementsByClassName('js-config-blacklist-name');
- for (var i = 0, length = blacklist_configvars.length; i < length; i++) {
- if (blacklist_configvars[i].value.toUpperCase() == variable.toUpperCase()) {
+ var disallowed_configvars = document.getElementsByClassName('js-config-disallowed-name');
+ for (var i = 0, length = disallowed_configvars.length; i < length; i++) {
+ if (disallowed_configvars[i].value.toUpperCase() == variable.toUpperCase()) {
error_msg = "You cannot edit this variable in Toaster because it is set by the build servers";
}
}
@@ -771,10 +771,10 @@ $(document).ready(function() {
{% if image_install_append_defined %}
- // init IMAGE_INSTALL_append trash icon
+ // init IMAGE_INSTALL:append trash icon
setDeleteTooltip($('#delete-image_install-icon'));
- // change IMAGE_INSTALL_append variable
+ // change IMAGE_INSTALL:append variable
$('#change-image_install-icon').click(function() {
// preset the edit value
var current_val = $("span#image_install").text().trim();
@@ -814,7 +814,7 @@ $(document).ready(function() {
$('#apply-change-image_install').click(function(){
// insure these non-empty values have single space prefix
var value = " " + $('#new-image_install').val().trim();
- postEditAjaxRequest({"configvarChange" : 'IMAGE_INSTALL_append:'+value});
+ postEditAjaxRequest({"configvarChange" : 'IMAGE_INSTALL:append:'+value});
$('#image_install').text(value);
$('#image_install').removeClass('text-muted');
$("#change-image_install-form").slideUp(function () {
@@ -826,10 +826,10 @@ $(document).ready(function() {
});
});
- // delete IMAGE_INSTALL_append variable value
+ // delete IMAGE_INSTALL:append variable value
$('#delete-image_install-icon').click(function(){
$(this).tooltip('hide');
- postEditAjaxRequest({"configvarChange" : 'IMAGE_INSTALL_append:'+''});
+ postEditAjaxRequest({"configvarChange" : 'IMAGE_INSTALL:append:'+''});
$('#image_install').parent().fadeOut(1000, function(){
$('#image_install').addClass('text-muted');
$('#image_install').text('Not set');
@@ -1011,7 +1011,7 @@ $(document).ready(function() {
$(".save").attr("disabled","disabled");
// Reload page if admin-removed core managed value is manually added back in
- if (0 <= " DISTRO DL_DIR IMAGE_FSTYPES IMAGE_INSTALL_append PACKAGE_CLASSES SSTATE_DIR ".indexOf( " "+variable+" " )) {
+ if (0 <= " DISTRO DL_DIR IMAGE_FSTYPES IMAGE_INSTALL:append PACKAGE_CLASSES SSTATE_DIR ".indexOf( " "+variable+" " )) {
// delayed reload to avoid race condition with postEditAjaxRequest
do_reload=true;
}
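
The IMAGE_INSTALL_append -> IMAGE_INSTALL:append renames in this file and
below follow the BitBake 2.0 override syntax change; the conversion script
added earlier in this series automates the rewrite. The core idea, reduced
to this one variable (a minimal sketch, not the actual script):

    import re

    # Rewrite old-style operator suffixes to the new colon syntax.
    # A real conversion needs a list of known variables, since not every
    # "_append" in the wild is an override operator.
    OLD_STYLE = re.compile(r'\b(IMAGE_INSTALL)_(append|prepend|remove)\b')

    def convert(line):
        return OLD_STYLE.sub(r'\1:\2', line)

    assert convert('IMAGE_INSTALL_append = " vim"') == 'IMAGE_INSTALL:append = " vim"'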
diff --git a/bitbake/lib/toaster/toastergui/views.py b/bitbake/lib/toaster/toastergui/views.py
index 9a5e48e..a571b8c 100644
--- a/bitbake/lib/toaster/toastergui/views.py
+++ b/bitbake/lib/toaster/toastergui/views.py
@@ -1683,12 +1683,12 @@ if True:
t=request.POST['configvarDel'].strip()
pt = ProjectVariable.objects.get(pk = int(t)).delete()
- # return all project settings, filter out blacklist and elsewhere-managed variables
- vars_managed,vars_fstypes,vars_blacklist = get_project_configvars_context()
+ # return all project settings, filter out disallowed and elsewhere-managed variables
+ vars_managed,vars_fstypes,vars_disallowed = get_project_configvars_context()
configvars_query = ProjectVariable.objects.filter(project_id = pid).all()
for var in vars_managed:
configvars_query = configvars_query.exclude(name = var)
- for var in vars_blacklist:
+ for var in vars_disallowed:
configvars_query = configvars_query.exclude(name = var)
return_data = {
@@ -1708,7 +1708,7 @@ if True:
except ProjectVariable.DoesNotExist:
pass
try:
- return_data['image_install_append'] = ProjectVariable.objects.get(project = prj, name = "IMAGE_INSTALL_append").value,
+ return_data['image_install:append'] = ProjectVariable.objects.get(project = prj, name = "IMAGE_INSTALL:append").value,
except ProjectVariable.DoesNotExist:
pass
try:
@@ -1781,7 +1781,7 @@ if True:
'MACHINE', 'BBLAYERS'
}
- vars_blacklist = {
+ vars_disallowed = {
'PARALLEL_MAKE','BB_NUMBER_THREADS',
'BB_DISKMON_DIRS','BB_NUMBER_THREADS','CVS_PROXY_HOST','CVS_PROXY_PORT',
'PARALLEL_MAKE','TMPDIR',
@@ -1790,7 +1790,7 @@ if True:
vars_fstypes = Target_Image_File.SUFFIXES
- return(vars_managed,sorted(vars_fstypes),vars_blacklist)
+ return(vars_managed,sorted(vars_fstypes),vars_disallowed)
def projectconf(request, pid):
@@ -1799,12 +1799,12 @@ if True:
except Project.DoesNotExist:
return HttpResponseNotFound("<h1>Project id " + pid + " is unavailable</h1>")
- # remove blacklist and externally managed varaibles from this list
- vars_managed,vars_fstypes,vars_blacklist = get_project_configvars_context()
+ # remove disallowed and externally managed variables from this list
+ vars_managed,vars_fstypes,vars_disallowed = get_project_configvars_context()
configvars = ProjectVariable.objects.filter(project_id = pid).all()
for var in vars_managed:
configvars = configvars.exclude(name = var)
- for var in vars_blacklist:
+ for var in vars_disallowed:
configvars = configvars.exclude(name = var)
context = {
@@ -1812,7 +1812,7 @@ if True:
'configvars': configvars,
'vars_managed': vars_managed,
'vars_fstypes': vars_fstypes,
- 'vars_blacklist': vars_blacklist,
+ 'vars_disallowed': vars_disallowed,
}
try:
@@ -1839,7 +1839,7 @@ if True:
except ProjectVariable.DoesNotExist:
pass
try:
- context['image_install_append'] = ProjectVariable.objects.get(project = prj, name = "IMAGE_INSTALL_append").value
+ context['image_install:append'] = ProjectVariable.objects.get(project = prj, name = "IMAGE_INSTALL:append").value
context['image_install_append_defined'] = "1"
except ProjectVariable.DoesNotExist:
pass
diff --git a/bitbake/lib/toaster/toastermain/management/commands/buildimport.py b/bitbake/lib/toaster/toastermain/management/commands/buildimport.py
index 59da6ff..e25b55e 100644
--- a/bitbake/lib/toaster/toastermain/management/commands/buildimport.py
+++ b/bitbake/lib/toaster/toastermain/management/commands/buildimport.py
@@ -451,7 +451,7 @@ class Command(BaseCommand):
# Catch vars relevant to Toaster (in case no Toaster section)
self.update_project_vars(project,'DISTRO')
self.update_project_vars(project,'MACHINE')
- self.update_project_vars(project,'IMAGE_INSTALL_append')
+ self.update_project_vars(project,'IMAGE_INSTALL:append')
self.update_project_vars(project,'IMAGE_FSTYPES')
self.update_project_vars(project,'PACKAGE_CLASSES')
# These vars are typically only assigned by Toaster
diff --git a/bitbake/lib/toaster/toastermain/settings.py b/bitbake/lib/toaster/toastermain/settings.py
index a4b370c..609c85d 100644
--- a/bitbake/lib/toaster/toastermain/settings.py
+++ b/bitbake/lib/toaster/toastermain/settings.py
@@ -39,6 +39,9 @@ DATABASES = {
}
}
+# New in Django 3.2
+DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
+
# Needed when Using sqlite especially to add a longer timeout for waiting
# for the database lock to be released
# https://docs.djangoproject.com/en/1.6/ref/databases/#database-is-locked-errors
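
Django 3.2 made the implicit primary-key type configurable and warns for
every model that relies on the default unless the project pins one, so this
setting pairs with the 0020_models_bigautofield migration above: existing
tables and any future models end up on the same key type. The per-model
equivalent would be (an illustrative sketch, not code from this patch):

    from django.db import models

    class Example(models.Model):
        # What DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField' implies
        # for models that do not declare an explicit primary key.
        id = models.BigAutoField(primary_key=True)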
diff --git a/bitbake/toaster-requirements.txt b/bitbake/toaster-requirements.txt
index 735b614..dedd423 100644
--- a/bitbake/toaster-requirements.txt
+++ b/bitbake/toaster-requirements.txt
@@ -1,3 +1,3 @@
-Django>2.2,<2.3
+Django>3.2,<3.3
beautifulsoup4>=4.4.0
pytz
--
2.34.1
Thread overview: 29+ messages
2023-01-25 19:23 [PATCH v8 00/20] Migrate to Bitbake 2.0 Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 01/20] meta: change deprecated parse calls Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 02/20] scripts/contrib: add override conversion script Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 03/20] scripts/contrib: configure " Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 04/20] meta-isar: set default branch names Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 05/20] meta: remove non recommended syntax Anton Mikanovich
2023-01-25 19:23 ` Anton Mikanovich [this message]
2023-01-25 19:23 ` [PATCH v8 07/20] meta: update bitbake variables Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 08/20] bitbake.conf: align hash vars with openembedded Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 09/20] meta: mark network and sudo tasks Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 10/20] meta: update overrides syntax Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 11/20] sstate: update bbclass Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 12/20] bitbake.conf: declare default XZ and ZSTD options Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 13/20] Revert "devshell: Use different termination test to avoid warnings" Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 14/20] meta: align with OE-core libraries update Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 15/20] Revert "Revert "devshell: Use different termination test to avoid warnings"" Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 16/20] CI: adapt tests to syntax change Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 17/20] isar-sstate: adapt sstate maintenance script Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 18/20] doc: require zstd tool Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 19/20] RECIPE-API-CHANGELOG: add tips after bitbake version update Anton Mikanovich
2023-01-25 19:23 ` [PATCH v8 20/20] docs: update override syntax Anton Mikanovich
2023-01-25 23:43 ` [PATCH v8 00/20] Migrate to Bitbake 2.0 Roberto A. Foglietta
2023-01-26 7:29 ` Anton Mikanovich
2023-01-26 13:23 ` Roberto A. Foglietta
2023-01-26 19:59 ` Henning Schild
2023-01-27 4:09 ` Roberto A. Foglietta
2023-01-31 11:26 ` Uladzimir Bely
2023-02-01 6:17 ` Uladzimir Bely
2023-02-02 9:02 ` Florian Bezdeka