* [PATCH 0/3] bitbake upstream update and eliminate no-gpg-check option usage
@ 2018-11-07 16:09 Maxim Yu. Osipov
2018-11-07 16:09 ` [PATCH 1/3] Update bitbake from the upstream Maxim Yu. Osipov
` (2 more replies)
0 siblings, 3 replies; 18+ messages in thread
From: Maxim Yu. Osipov @ 2018-11-07 16:09 UTC (permalink / raw)
To: isar-users
Hi everybody,
See the details in the corresponding patches.
Kind regards,
Maxim.
Maxim Yu. Osipov (3):
Update bitbake from the upstream.
meta: Set LAYERSERIES_* variables
isar-bootstrap: Eliminate no-gpg-check option usage
bitbake/bin/bitbake | 2 +-
bitbake/bin/bitbake-selftest | 7 +-
bitbake/bin/toaster | 13 +-
bitbake/contrib/dump_cache.py | 85 +-
.../bitbake-user-manual-execution.xml | 2 +-
.../bitbake-user-manual-fetching.xml | 40 +-
.../bitbake-user-manual-hello.xml | 8 +-
.../bitbake-user-manual-intro.xml | 178 ++-
.../bitbake-user-manual-metadata.xml | 142 +-
.../bitbake-user-manual-ref-variables.xml | 118 +-
.../bitbake-user-manual/bitbake-user-manual.xml | 2 +-
.../figures/bb_multiconfig_files.png | 0
bitbake/lib/bb/COW.py | 2 +-
bitbake/lib/bb/__init__.py | 18 +-
bitbake/lib/bb/build.py | 8 +-
bitbake/lib/bb/cache.py | 7 +-
bitbake/lib/bb/checksum.py | 2 +
bitbake/lib/bb/codeparser.py | 4 +-
bitbake/lib/bb/cooker.py | 57 +-
bitbake/lib/bb/cookerdata.py | 5 +-
bitbake/lib/bb/daemonize.py | 25 +-
bitbake/lib/bb/data.py | 61 +-
bitbake/lib/bb/data_smart.py | 108 +-
bitbake/lib/bb/event.py | 5 +-
bitbake/lib/bb/fetch2/__init__.py | 62 +-
bitbake/lib/bb/fetch2/bzr.py | 5 +-
bitbake/lib/bb/fetch2/clearcase.py | 3 +-
bitbake/lib/bb/fetch2/cvs.py | 5 +-
bitbake/lib/bb/fetch2/git.py | 66 +-
bitbake/lib/bb/fetch2/gitsm.py | 264 ++--
bitbake/lib/bb/fetch2/hg.py | 2 +-
bitbake/lib/bb/fetch2/npm.py | 9 +-
bitbake/lib/bb/fetch2/osc.py | 5 +-
bitbake/lib/bb/fetch2/perforce.py | 8 +-
bitbake/lib/bb/fetch2/repo.py | 12 +-
bitbake/lib/bb/fetch2/svn.py | 5 +-
bitbake/lib/bb/main.py | 15 +-
bitbake/lib/bb/msg.py | 3 +
bitbake/lib/bb/parse/__init__.py | 3 +-
bitbake/lib/bb/parse/ast.py | 46 +-
bitbake/lib/bb/parse/parse_py/BBHandler.py | 3 -
bitbake/lib/bb/parse/parse_py/ConfHandler.py | 3 -
bitbake/lib/bb/runqueue.py | 278 ++--
bitbake/lib/bb/server/process.py | 27 +-
bitbake/lib/bb/siggen.py | 54 +-
bitbake/lib/bb/taskdata.py | 18 +-
bitbake/lib/bb/tests/cooker.py | 83 ++
bitbake/lib/bb/tests/data.py | 77 +-
bitbake/lib/bb/tests/fetch.py | 295 ++++-
bitbake/lib/bb/tests/parse.py | 4 +
bitbake/lib/bb/ui/buildinfohelper.py | 9 +-
bitbake/lib/bb/ui/taskexp.py | 10 +-
bitbake/lib/bb/utils.py | 60 +-
bitbake/lib/bblayers/action.py | 2 +-
bitbake/lib/bblayers/layerindex.py | 323 ++---
bitbake/lib/layerindexlib/README | 28 +
bitbake/lib/layerindexlib/__init__.py | 1363 ++++++++++++++++++++
bitbake/lib/layerindexlib/cooker.py | 344 +++++
bitbake/lib/layerindexlib/plugin.py | 60 +
bitbake/lib/layerindexlib/restapi.py | 398 ++++++
bitbake/lib/layerindexlib/tests/__init__.py | 0
bitbake/lib/layerindexlib/tests/common.py | 43 +
bitbake/lib/layerindexlib/tests/cooker.py | 123 ++
bitbake/lib/layerindexlib/tests/layerindexobj.py | 226 ++++
bitbake/lib/layerindexlib/tests/restapi.py | 184 +++
bitbake/lib/layerindexlib/tests/testdata/README | 11 +
.../tests/testdata/build/conf/bblayers.conf | 15 +
.../tests/testdata/layer1/conf/layer.conf | 17 +
.../tests/testdata/layer2/conf/layer.conf | 20 +
.../tests/testdata/layer3/conf/layer.conf | 19 +
.../tests/testdata/layer4/conf/layer.conf | 22 +
.../toaster/bldcontrol/localhostbecontroller.py | 212 ++-
.../management/commands/checksettings.py | 8 +-
.../bldcontrol/management/commands/runbuilds.py | 2 +-
bitbake/lib/toaster/orm/fixtures/oe-core.xml | 28 +-
bitbake/lib/toaster/orm/fixtures/poky.xml | 76 +-
.../toaster/orm/management/commands/lsupdates.py | 228 ++--
.../orm/migrations/0018_project_specific.py | 28 +
bitbake/lib/toaster/orm/models.py | 74 +-
bitbake/lib/toaster/toastergui/api.py | 176 ++-
.../lib/toaster/toastergui/static/js/layerBtn.js | 12 +
.../toaster/toastergui/static/js/layerdetails.js | 3 +-
.../lib/toaster/toastergui/static/js/libtoaster.js | 108 +-
.../lib/toaster/toastergui/static/js/mrbsection.js | 4 +-
.../toastergui/static/js/newcustomimage_modal.js | 7 +
.../toaster/toastergui/static/js/projecttopbar.js | 22 +
bitbake/lib/toaster/toastergui/tables.py | 12 +-
.../toastergui/templates/base_specific.html | 128 ++
.../templates/baseprojectspecificpage.html | 48 +
.../toastergui/templates/customise_btn.html | 6 +-
.../templates/generic-toastertable-page.html | 2 +-
.../toaster/toastergui/templates/importlayer.html | 4 +-
.../toastergui/templates/landing_specific.html | 50 +
.../toaster/toastergui/templates/layerdetails.html | 3 +-
.../toaster/toastergui/templates/mrb_section.html | 2 +-
.../toastergui/templates/newcustomimage.html | 4 +-
.../toaster/toastergui/templates/newproject.html | 57 +-
.../toastergui/templates/newproject_specific.html | 95 ++
.../lib/toaster/toastergui/templates/project.html | 7 +-
.../toastergui/templates/project_specific.html | 162 +++
.../templates/project_specific_topbar.html | 80 ++
.../toaster/toastergui/templates/projectconf.html | 7 +-
.../lib/toaster/toastergui/templates/recipe.html | 2 +-
.../toastergui/templates/recipe_add_btn.html | 23 +
bitbake/lib/toaster/toastergui/urls.py | 13 +
bitbake/lib/toaster/toastergui/views.py | 165 ++-
bitbake/lib/toaster/toastergui/widgets.py | 23 +-
.../toastermain/management/commands/builddelete.py | 6 +-
.../toastermain/management/commands/buildimport.py | 584 +++++++++
bitbake/toaster-requirements.txt | 2 +-
meta-isar/conf/layer.conf | 1 +
meta/conf/layer.conf | 5 +-
.../recipes-core/isar-bootstrap/isar-bootstrap.inc | 3 -
113 files changed, 7029 insertions(+), 984 deletions(-)
create mode 100644 bitbake/doc/bitbake-user-manual/figures/bb_multiconfig_files.png
create mode 100644 bitbake/lib/bb/tests/cooker.py
create mode 100644 bitbake/lib/layerindexlib/README
create mode 100644 bitbake/lib/layerindexlib/__init__.py
create mode 100644 bitbake/lib/layerindexlib/cooker.py
create mode 100644 bitbake/lib/layerindexlib/plugin.py
create mode 100644 bitbake/lib/layerindexlib/restapi.py
create mode 100644 bitbake/lib/layerindexlib/tests/__init__.py
create mode 100644 bitbake/lib/layerindexlib/tests/common.py
create mode 100644 bitbake/lib/layerindexlib/tests/cooker.py
create mode 100644 bitbake/lib/layerindexlib/tests/layerindexobj.py
create mode 100644 bitbake/lib/layerindexlib/tests/restapi.py
create mode 100644 bitbake/lib/layerindexlib/tests/testdata/README
create mode 100644 bitbake/lib/layerindexlib/tests/testdata/build/conf/bblayers.conf
create mode 100644 bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf
create mode 100644 bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf
create mode 100644 bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf
create mode 100644 bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf
create mode 100644 bitbake/lib/toaster/orm/migrations/0018_project_specific.py
create mode 100644 bitbake/lib/toaster/toastergui/templates/base_specific.html
create mode 100644 bitbake/lib/toaster/toastergui/templates/baseprojectspecificpage.html
create mode 100644 bitbake/lib/toaster/toastergui/templates/landing_specific.html
create mode 100644 bitbake/lib/toaster/toastergui/templates/newproject_specific.html
create mode 100644 bitbake/lib/toaster/toastergui/templates/project_specific.html
create mode 100644 bitbake/lib/toaster/toastergui/templates/project_specific_topbar.html
create mode 100644 bitbake/lib/toaster/toastergui/templates/recipe_add_btn.html
mode change 100755 => 100644 bitbake/lib/toaster/toastergui/views.py
create mode 100644 bitbake/lib/toaster/toastermain/management/commands/buildimport.py
--
2.11.0
* [PATCH 1/3] Update bitbake from the upstream.
2018-11-07 16:09 [PATCH 0/3] bitbake upstream update and eliminate no-gpg-check option usage Maxim Yu. Osipov
@ 2018-11-07 16:09 ` Maxim Yu. Osipov
2018-11-07 17:58 ` Henning Schild
2018-11-07 16:09 ` [PATCH 2/3] meta: Set LAYERSERIES_* variables Maxim Yu. Osipov
2018-11-07 16:09 ` [PATCH 3/3] isar-bootstrap: Eliminate no-gpg-check option usage Maxim Yu. Osipov
2 siblings, 1 reply; 18+ messages in thread
From: Maxim Yu. Osipov @ 2018-11-07 16:09 UTC (permalink / raw)
To: isar-users
Origin: https://github.com/openembedded/bitbake.git
Commit: 701f76f773a6e77258f307a4f8e2ec1a8552f6f3
Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
---
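
The import corresponds to the upstream tree at the commit named above; one
way to reproduce it (a sketch, destination path illustrative):

  $ git clone https://github.com/openembedded/bitbake.git
  $ git -C bitbake checkout 701f76f773a6e77258f307a4f8e2ec1a8552f6f3
  $ rsync -a --delete --exclude=.git bitbake/ isar/bitbake/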
bitbake/bin/bitbake | 2 +-
bitbake/bin/bitbake-selftest | 7 +-
bitbake/bin/toaster | 13 +-
bitbake/contrib/dump_cache.py | 85 +-
.../bitbake-user-manual-execution.xml | 2 +-
.../bitbake-user-manual-fetching.xml | 40 +-
.../bitbake-user-manual-hello.xml | 8 +-
.../bitbake-user-manual-intro.xml | 178 ++-
.../bitbake-user-manual-metadata.xml | 142 +-
.../bitbake-user-manual-ref-variables.xml | 118 +-
.../bitbake-user-manual/bitbake-user-manual.xml | 2 +-
.../figures/bb_multiconfig_files.png | 0
bitbake/lib/bb/COW.py | 2 +-
bitbake/lib/bb/__init__.py | 18 +-
bitbake/lib/bb/build.py | 8 +-
bitbake/lib/bb/cache.py | 7 +-
bitbake/lib/bb/checksum.py | 2 +
bitbake/lib/bb/codeparser.py | 4 +-
bitbake/lib/bb/cooker.py | 57 +-
bitbake/lib/bb/cookerdata.py | 5 +-
bitbake/lib/bb/daemonize.py | 25 +-
bitbake/lib/bb/data.py | 61 +-
bitbake/lib/bb/data_smart.py | 108 +-
bitbake/lib/bb/event.py | 5 +-
bitbake/lib/bb/fetch2/__init__.py | 62 +-
bitbake/lib/bb/fetch2/bzr.py | 5 +-
bitbake/lib/bb/fetch2/clearcase.py | 3 +-
bitbake/lib/bb/fetch2/cvs.py | 5 +-
bitbake/lib/bb/fetch2/git.py | 66 +-
bitbake/lib/bb/fetch2/gitsm.py | 264 ++--
bitbake/lib/bb/fetch2/hg.py | 2 +-
bitbake/lib/bb/fetch2/npm.py | 9 +-
bitbake/lib/bb/fetch2/osc.py | 5 +-
bitbake/lib/bb/fetch2/perforce.py | 8 +-
bitbake/lib/bb/fetch2/repo.py | 12 +-
bitbake/lib/bb/fetch2/svn.py | 5 +-
bitbake/lib/bb/main.py | 15 +-
bitbake/lib/bb/msg.py | 3 +
bitbake/lib/bb/parse/__init__.py | 3 +-
bitbake/lib/bb/parse/ast.py | 46 +-
bitbake/lib/bb/parse/parse_py/BBHandler.py | 3 -
bitbake/lib/bb/parse/parse_py/ConfHandler.py | 3 -
bitbake/lib/bb/runqueue.py | 278 ++--
bitbake/lib/bb/server/process.py | 27 +-
bitbake/lib/bb/siggen.py | 54 +-
bitbake/lib/bb/taskdata.py | 18 +-
bitbake/lib/bb/tests/cooker.py | 83 ++
bitbake/lib/bb/tests/data.py | 77 +-
bitbake/lib/bb/tests/fetch.py | 295 ++++-
bitbake/lib/bb/tests/parse.py | 4 +
bitbake/lib/bb/ui/buildinfohelper.py | 9 +-
bitbake/lib/bb/ui/taskexp.py | 10 +-
bitbake/lib/bb/utils.py | 60 +-
bitbake/lib/bblayers/action.py | 2 +-
bitbake/lib/bblayers/layerindex.py | 323 ++---
bitbake/lib/layerindexlib/README | 28 +
bitbake/lib/layerindexlib/__init__.py | 1363 ++++++++++++++++++++
bitbake/lib/layerindexlib/cooker.py | 344 +++++
bitbake/lib/layerindexlib/plugin.py | 60 +
bitbake/lib/layerindexlib/restapi.py | 398 ++++++
bitbake/lib/layerindexlib/tests/__init__.py | 0
bitbake/lib/layerindexlib/tests/common.py | 43 +
bitbake/lib/layerindexlib/tests/cooker.py | 123 ++
bitbake/lib/layerindexlib/tests/layerindexobj.py | 226 ++++
bitbake/lib/layerindexlib/tests/restapi.py | 184 +++
bitbake/lib/layerindexlib/tests/testdata/README | 11 +
.../tests/testdata/build/conf/bblayers.conf | 15 +
.../tests/testdata/layer1/conf/layer.conf | 17 +
.../tests/testdata/layer2/conf/layer.conf | 20 +
.../tests/testdata/layer3/conf/layer.conf | 19 +
.../tests/testdata/layer4/conf/layer.conf | 22 +
.../toaster/bldcontrol/localhostbecontroller.py | 212 ++-
.../management/commands/checksettings.py | 8 +-
.../bldcontrol/management/commands/runbuilds.py | 2 +-
bitbake/lib/toaster/orm/fixtures/oe-core.xml | 28 +-
bitbake/lib/toaster/orm/fixtures/poky.xml | 76 +-
.../toaster/orm/management/commands/lsupdates.py | 228 ++--
.../orm/migrations/0018_project_specific.py | 28 +
bitbake/lib/toaster/orm/models.py | 74 +-
bitbake/lib/toaster/toastergui/api.py | 176 ++-
.../lib/toaster/toastergui/static/js/layerBtn.js | 12 +
.../toaster/toastergui/static/js/layerdetails.js | 3 +-
.../lib/toaster/toastergui/static/js/libtoaster.js | 108 +-
.../lib/toaster/toastergui/static/js/mrbsection.js | 4 +-
.../toastergui/static/js/newcustomimage_modal.js | 7 +
.../toaster/toastergui/static/js/projecttopbar.js | 22 +
bitbake/lib/toaster/toastergui/tables.py | 12 +-
.../toastergui/templates/base_specific.html | 128 ++
.../templates/baseprojectspecificpage.html | 48 +
.../toastergui/templates/customise_btn.html | 6 +-
.../templates/generic-toastertable-page.html | 2 +-
.../toaster/toastergui/templates/importlayer.html | 4 +-
.../toastergui/templates/landing_specific.html | 50 +
.../toaster/toastergui/templates/layerdetails.html | 3 +-
.../toaster/toastergui/templates/mrb_section.html | 2 +-
.../toastergui/templates/newcustomimage.html | 4 +-
.../toaster/toastergui/templates/newproject.html | 57 +-
.../toastergui/templates/newproject_specific.html | 95 ++
.../lib/toaster/toastergui/templates/project.html | 7 +-
.../toastergui/templates/project_specific.html | 162 +++
.../templates/project_specific_topbar.html | 80 ++
.../toaster/toastergui/templates/projectconf.html | 7 +-
.../lib/toaster/toastergui/templates/recipe.html | 2 +-
.../toastergui/templates/recipe_add_btn.html | 23 +
bitbake/lib/toaster/toastergui/urls.py | 13 +
bitbake/lib/toaster/toastergui/views.py | 165 ++-
bitbake/lib/toaster/toastergui/widgets.py | 23 +-
.../toastermain/management/commands/builddelete.py | 6 +-
.../toastermain/management/commands/buildimport.py | 584 +++++++++
bitbake/toaster-requirements.txt | 2 +-
110 files changed, 7024 insertions(+), 980 deletions(-)
create mode 100644 bitbake/doc/bitbake-user-manual/figures/bb_multiconfig_files.png
create mode 100644 bitbake/lib/bb/tests/cooker.py
create mode 100644 bitbake/lib/layerindexlib/README
create mode 100644 bitbake/lib/layerindexlib/__init__.py
create mode 100644 bitbake/lib/layerindexlib/cooker.py
create mode 100644 bitbake/lib/layerindexlib/plugin.py
create mode 100644 bitbake/lib/layerindexlib/restapi.py
create mode 100644 bitbake/lib/layerindexlib/tests/__init__.py
create mode 100644 bitbake/lib/layerindexlib/tests/common.py
create mode 100644 bitbake/lib/layerindexlib/tests/cooker.py
create mode 100644 bitbake/lib/layerindexlib/tests/layerindexobj.py
create mode 100644 bitbake/lib/layerindexlib/tests/restapi.py
create mode 100644 bitbake/lib/layerindexlib/tests/testdata/README
create mode 100644 bitbake/lib/layerindexlib/tests/testdata/build/conf/bblayers.conf
create mode 100644 bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf
create mode 100644 bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf
create mode 100644 bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf
create mode 100644 bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf
create mode 100644 bitbake/lib/toaster/orm/migrations/0018_project_specific.py
create mode 100644 bitbake/lib/toaster/toastergui/templates/base_specific.html
create mode 100644 bitbake/lib/toaster/toastergui/templates/baseprojectspecificpage.html
create mode 100644 bitbake/lib/toaster/toastergui/templates/landing_specific.html
create mode 100644 bitbake/lib/toaster/toastergui/templates/newproject_specific.html
create mode 100644 bitbake/lib/toaster/toastergui/templates/project_specific.html
create mode 100644 bitbake/lib/toaster/toastergui/templates/project_specific_topbar.html
create mode 100644 bitbake/lib/toaster/toastergui/templates/recipe_add_btn.html
mode change 100755 => 100644 bitbake/lib/toaster/toastergui/views.py
create mode 100644 bitbake/lib/toaster/toastermain/management/commands/buildimport.py
diff --git a/bitbake/bin/bitbake b/bitbake/bin/bitbake
index 95e4109..57dec2a 100755
--- a/bitbake/bin/bitbake
+++ b/bitbake/bin/bitbake
@@ -38,7 +38,7 @@ from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
if sys.getfilesystemencoding() != "utf-8":
sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
-__version__ = "1.37.0"
+__version__ = "1.40.0"
if __name__ == "__main__":
if __version__ != bb.__version__:
diff --git a/bitbake/bin/bitbake-selftest b/bitbake/bin/bitbake-selftest
index afe1603..cfa7ac5 100755
--- a/bitbake/bin/bitbake-selftest
+++ b/bitbake/bin/bitbake-selftest
@@ -22,16 +22,21 @@ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib
import unittest
try:
import bb
+ import layerindexlib
except RuntimeError as exc:
sys.exit(str(exc))
tests = ["bb.tests.codeparser",
+ "bb.tests.cooker",
"bb.tests.cow",
"bb.tests.data",
"bb.tests.event",
"bb.tests.fetch",
"bb.tests.parse",
- "bb.tests.utils"]
+ "bb.tests.utils",
+ "layerindexlib.tests.layerindexobj",
+ "layerindexlib.tests.restapi",
+ "layerindexlib.tests.cooker"]
for t in tests:
t = '.'.join(t.split('.')[:3])
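
With the test list extended as above, a plain run of the self-tests now also
exercises the new layerindexlib suites:

  $ ./bitbake/bin/bitbake-selftest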
diff --git a/bitbake/bin/toaster b/bitbake/bin/toaster
index 4036f0a..9fffbc6 100755
--- a/bitbake/bin/toaster
+++ b/bitbake/bin/toaster
@@ -18,11 +18,12 @@
# along with this program. If not, see http://www.gnu.org/licenses/.
HELP="
-Usage: source toaster start|stop [webport=<address:port>] [noweb] [nobuild]
+Usage: source toaster start|stop [webport=<address:port>] [noweb] [nobuild] [toasterdir]
Optional arguments:
[nobuild] Setup the environment for capturing builds with toaster but disable managed builds
[noweb] Setup the environment for capturing builds with toaster but don't start the web server
[webport] Set the development server (default: localhost:8000)
+ [toasterdir] Set absolute path to be used as TOASTER_DIR (default: BUILDDIR/../)
"
custom_extention()
@@ -68,7 +69,7 @@ webserverKillAll()
if [ -f ${pidfile} ]; then
pid=`cat ${pidfile}`
while kill -0 $pid 2>/dev/null; do
- kill -SIGTERM -$pid 2>/dev/null
+ kill -SIGTERM $pid 2>/dev/null
sleep 1
done
rm ${pidfile}
@@ -91,7 +92,7 @@ webserverStartAll()
echo "Starting webserver..."
- $MANAGE runserver "$ADDR_PORT" \
+ $MANAGE runserver --noreload "$ADDR_PORT" \
</dev/null >>${BUILDDIR}/toaster_web.log 2>&1 \
& echo $! >${BUILDDIR}/.toastermain.pid
@@ -186,6 +187,7 @@ unset OE_ROOT
WEBSERVER=1
export TOASTER_BUILDSERVER=1
ADDR_PORT="localhost:8000"
+TOASTERDIR=`dirname $BUILDDIR`
unset CMD
for param in $*; do
case $param in
@@ -211,6 +213,9 @@ for param in $*; do
ADDR_PORT="localhost:$PORT"
fi
;;
+ toasterdir=*)
+ TOASTERDIR="${param#*=}"
+ ;;
--help)
echo "$HELP"
return 0
@@ -241,7 +246,7 @@ fi
# 2) the build dir (in build)
# 3) the sqlite db if that is being used.
# 4) pid's we need to clean up on exit/shutdown
-export TOASTER_DIR=`dirname $BUILDDIR`
+export TOASTER_DIR=$TOASTERDIR
export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE TOASTER_DIR"
# Determine the action. If specified by arguments, fine, if not, toggle it
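
Usage sketch for the new parameter, alongside the existing ones (the
directory is illustrative):

  $ source bitbake/bin/toaster start webport=localhost:8400 toasterdir=/srv/toaster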
diff --git a/bitbake/contrib/dump_cache.py b/bitbake/contrib/dump_cache.py
index f4d4c1b..8963ca4 100755
--- a/bitbake/contrib/dump_cache.py
+++ b/bitbake/contrib/dump_cache.py
@@ -2,7 +2,7 @@
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
-# Copyright (C) 2012 Wind River Systems, Inc.
+# Copyright (C) 2012, 2018 Wind River Systems, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -18,51 +18,68 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
-# This is used for dumping the bb_cache.dat, the output format is:
-# recipe_path PN PV PACKAGES
+# Used for dumping the bb_cache.dat
#
import os
import sys
-import warnings
+import argparse
# For importing bb.cache
sys.path.insert(0, os.path.join(os.path.abspath(os.path.dirname(sys.argv[0])), '../lib'))
from bb.cache import CoreRecipeInfo
-import pickle as pickle
+import pickle
-def main(argv=None):
- """
- Get the mapping for the target recipe.
- """
- if len(argv) != 1:
- print("Error, need one argument!", file=sys.stderr)
- return 2
+class DumpCache(object):
+ def __init__(self):
+ parser = argparse.ArgumentParser(
+ description="bb_cache.dat's dumper",
+ epilog="Use %(prog)s --help to get help")
+ parser.add_argument("-r", "--recipe",
+ help="specify the recipe, default: all recipes", action="store")
+ parser.add_argument("-m", "--members",
+ help = "specify the member, use comma as separator for multiple ones, default: all members", action="store", default="")
+ parser.add_argument("-s", "--skip",
+ help = "skip skipped recipes", action="store_true")
+ parser.add_argument("cachefile",
+ help = "specify bb_cache.dat", nargs = 1, action="store", default="")
- cachefile = argv[0]
+ self.args = parser.parse_args()
- with open(cachefile, "rb") as cachefile:
- pickled = pickle.Unpickler(cachefile)
- while cachefile:
- try:
- key = pickled.load()
- val = pickled.load()
- except Exception:
- break
- if isinstance(val, CoreRecipeInfo) and (not val.skipped):
- pn = val.pn
- # Filter out the native recipes.
- if key.startswith('virtual:native:') or pn.endswith("-native"):
- continue
+ def main(self):
+ with open(self.args.cachefile[0], "rb") as cachefile:
+ pickled = pickle.Unpickler(cachefile)
+ while True:
+ try:
+ key = pickled.load()
+ val = pickled.load()
+ except Exception:
+ break
+ if isinstance(val, CoreRecipeInfo):
+ pn = val.pn
- # 1.0 is the default version for a no PV recipe.
- if "pv" in val.__dict__:
- pv = val.pv
- else:
- pv = "1.0"
+ if self.args.recipe and self.args.recipe != pn:
+ continue
- print("%s %s %s %s" % (key, pn, pv, ' '.join(val.packages)))
+ if self.args.skip and val.skipped:
+ continue
-if __name__ == "__main__":
- sys.exit(main(sys.argv[1:]))
+ if self.args.members:
+ out = key
+ for member in self.args.members.split(','):
+ out += ": %s" % val.__dict__.get(member)
+ print("%s" % out)
+ else:
+ print("%s: %s" % (key, val.__dict__))
+ elif not self.args.recipe:
+ print("%s %s" % (key, val))
+if __name__ == "__main__":
+ try:
+ dump = DumpCache()
+ ret = dump.main()
+ except Exception as esc:
+ ret = 1
+ import traceback
+ traceback.print_exc()
+ sys.exit(ret)
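
Usage sketch for the rewritten, argparse-driven dumper (the cache file name
is illustrative; -m limits output to the named CoreRecipeInfo members, -r to
a single recipe):

  $ ./bitbake/contrib/dump_cache.py --help
  $ ./bitbake/contrib/dump_cache.py -r bash -m pn,pv,packages bb_cache.dat.<hash>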
diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.xml
index e4cc422..f1caaec 100644
--- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.xml
+++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.xml
@@ -781,7 +781,7 @@
The code in <filename>meta/lib/oe/sstatesig.py</filename> shows two examples
of this and also illustrates how you can insert your own policy into the system
if so desired.
- This file defines the two basic signature generators OpenEmbedded Core
+ This file defines the two basic signature generators OpenEmbedded-Core
uses: "OEBasic" and "OEBasicHash".
By default, there is a dummy "noop" signature handler enabled in BitBake.
This means that behavior is unchanged from previous versions.
diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml
index c721e86..29ae486 100644
--- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml
+++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml
@@ -777,6 +777,43 @@
</para>
</section>
+ <section id='repo-fetcher'>
+ <title>Repo Fetcher (<filename>repo://</filename>)</title>
+
+ <para>
+ This fetcher submodule fetches code from the
+ <filename>google-repo</filename> source control system.
+ The fetcher works by initializing and syncing sources of the
+ repository into
+ <link linkend='var-REPODIR'><filename>REPODIR</filename></link>,
+ which is usually
+ <link linkend='var-DL_DIR'><filename>DL_DIR</filename></link><filename>/repo</filename>.
+ </para>
+
+ <para>
+ This fetcher supports the following parameters:
+ <itemizedlist>
+ <listitem><para>
+ <emphasis>"protocol":</emphasis>
+ Protocol to fetch the repository manifest (default: git).
+ </para></listitem>
+ <listitem><para>
+ <emphasis>"branch":</emphasis>
+ Branch or tag of repository to get (default: master).
+ </para></listitem>
+ <listitem><para>
+ <emphasis>"manifest":</emphasis>
+ Name of the manifest file (default: <filename>default.xml</filename>).
+ </para></listitem>
+ </itemizedlist>
+ Here are some example URLs:
+ <literallayout class='monospaced'>
+ SRC_URI = "repo://REPOROOT;protocol=git;branch=some_branch;manifest=my_manifest.xml"
+ SRC_URI = "repo://REPOROOT;protocol=file;branch=some_branch;manifest=my_manifest.xml"
+ </literallayout>
+ </para>
+ </section>
+
<section id='other-fetchers'>
<title>Other Fetchers</title>
@@ -796,9 +833,6 @@
Secure Shell (<filename>ssh://</filename>)
</para></listitem>
<listitem><para>
- Repo (<filename>repo://</filename>)
- </para></listitem>
- <listitem><para>
OSC (<filename>osc://</filename>)
</para></listitem>
<listitem><para>
diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml
index f1060e5..9076f0f 100644
--- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml
+++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml
@@ -383,10 +383,10 @@
code separate from the general metadata used by BitBake.
Thus, this example creates and uses a layer called "mylayer".
<note>
- You can find additional information on layers at
- <ulink url='http://www.yoctoproject.org/docs/2.3/bitbake-user-manual/bitbake-user-manual.html#layers'></ulink>.
- </note>
- </para>
+ You can find additional information on layers in the
+ "<link linkend='layers'>Layers</link>" section.
+ </note></para>
+
<para>Minimally, you need a recipe file and a layer configuration
file in your layer.
The configuration file needs to be in the <filename>conf</filename>
diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml
index eb45809..f7d312a 100644
--- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml
+++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml
@@ -342,13 +342,14 @@
<para>
When you name an append file, you can use the
- wildcard character (%) to allow for matching recipe names.
+ "<filename>%</filename>" wildcard character to allow for matching
+ recipe names.
For example, suppose you have an append file named
as follows:
<literallayout class='monospaced'>
busybox_1.21.%.bbappend
</literallayout>
- That append file would match any <filename>busybox_1.21.x.bb</filename>
+ That append file would match any <filename>busybox_1.21.</filename><replaceable>x</replaceable><filename>.bb</filename>
version of the recipe.
So, the append file would match the following recipe names:
<literallayout class='monospaced'>
@@ -356,6 +357,14 @@
busybox_1.21.2.bb
busybox_1.21.3.bb
</literallayout>
+ <note><title>Important</title>
+ The use of the "<filename>%</filename>" character
+ is limited in that it only works directly in front of the
+ <filename>.bbappend</filename> portion of the append file's
+ name.
+ You cannot use the wildcard character in any other
+ location of the name.
+ </note>
If the <filename>busybox</filename> recipe was updated to
<filename>busybox_1.3.0.bb</filename>, the append name would not
match.
@@ -564,8 +573,12 @@
Writes the event log of the build to a bitbake event
json file. Use '' (empty string) to assign the name
automatically.
- --runall=RUNALL Run the specified task for all build targets and their
- dependencies.
+ --runall=RUNALL Run the specified task for any recipe in the taskgraph
+ of the specified target (even if it wouldn't otherwise
+ have run).
+ --runonly=RUNONLY Run only the specified task within the taskgraph of
+ the specified targets (and any task dependencies those
+ tasks may have).
</literallayout>
</para>
</section>
@@ -719,6 +732,163 @@
</literallayout>
</para>
</section>
+
+ <section id='executing-a-multiple-configuration-build'>
+ <title>Executing a Multiple Configuration Build</title>
+
+ <para>
+ BitBake is able to build multiple images or packages
+ using a single command where the different targets
+ require different configurations (multiple configuration
+ builds).
+ Each target, in this scenario, is referred to as a
+ "multiconfig".
+ </para>
+
+ <para>
+ To accomplish a multiple configuration build, you must
+ define each target's configuration separately using
+ a parallel configuration file in the build directory.
+ The location for these multiconfig configuration files
+ is specific.
+ They must reside in the current build directory in
+ a sub-directory of <filename>conf</filename> named
+ <filename>multiconfig</filename>.
+ Following is an example for two separate targets:
+ <imagedata fileref="figures/bb_multiconfig_files.png" align="center" width="4in" depth="3in" />
+ </para>
+
+ <para>
+ The reason for this required file hierarchy
+ is that the <filename>BBPATH</filename> variable
+ is not constructed until the layers are parsed.
+ Consequently, using the configuration file as a
+ pre-configuration file is not possible unless it is
+ located in the current working directory.
+ </para>
+
+ <para>
+ Minimally, each configuration file must define the
+ machine and the temporary directory BitBake uses
+ for the build.
+ Suggested practice dictates that you do not
+ overlap the temporary directories used during the
+ builds.
+ </para>
+
+ <para>
+ Aside from separate configuration files for each
+ target, you must also enable BitBake to perform multiple
+ configuration builds.
+ Enabling is accomplished by setting the
+ <link linkend='var-BBMULTICONFIG'><filename>BBMULTICONFIG</filename></link>
+ variable in the <filename>local.conf</filename>
+ configuration file.
+ As an example, suppose you had configuration files
+ for <filename>target1</filename> and
+ <filename>target2</filename> defined in the build
+ directory.
+ The following statement in the
+ <filename>local.conf</filename> file both enables
+ BitBake to perform multiple configuration builds and
+ specifies the two multiconfigs:
+ <literallayout class='monospaced'>
+ BBMULTICONFIG = "target1 target2"
+ </literallayout>
+ </para>
+
+ <para>
+ Once the target configuration files are in place and
+ BitBake has been enabled to perform multiple configuration
+ builds, use the following command form to start the
+ builds:
+ <literallayout class='monospaced'>
+ $ bitbake [multiconfig:<replaceable>multiconfigname</replaceable>:]<replaceable>target</replaceable> [[[multiconfig:<replaceable>multiconfigname</replaceable>:]<replaceable>target</replaceable>] ... ]
+ </literallayout>
+ Here is an example for two multiconfigs:
+ <filename>target1</filename> and
+ <filename>target2</filename>:
+ <literallayout class='monospaced'>
+ $ bitbake multiconfig:target1:<replaceable>target</replaceable> multiconfig:target2:<replaceable>target</replaceable>
+ </literallayout>
+ </para>
+ </section>
+
+ <section id='bb-enabling-multiple-configuration-build-dependencies'>
+ <title>Enabling Multiple Configuration Build Dependencies</title>
+
+ <para>
+ Sometimes dependencies can exist between targets
+ (multiconfigs) in a multiple configuration build.
+ For example, suppose that in order to build an image
+ for a particular architecture, the root filesystem of
+ another build for a different architecture needs to
+ exist.
+ In other words, the image for the first multiconfig depends
+ on the root filesystem of the second multiconfig.
+ This dependency is essentially that the task in the recipe
+ that builds one multiconfig is dependent on the
+ completion of the task in the recipe that builds
+ another multiconfig.
+ </para>
+
+ <para>
+ To enable dependencies in a multiple configuration
+ build, you must declare the dependencies in the recipe
+ using the following statement form:
+ <literallayout class='monospaced'>
+ <replaceable>task_or_package</replaceable>[mcdepends] = "multiconfig:<replaceable>from_multiconfig</replaceable>:<replaceable>to_multiconfig</replaceable>:<replaceable>recipe_name</replaceable>:<replaceable>task_on_which_to_depend</replaceable>"
+ </literallayout>
+ To better show how to use this statement, consider an
+ example with two multiconfigs: <filename>target1</filename>
+ and <filename>target2</filename>:
+ <literallayout class='monospaced'>
+ <replaceable>image_task</replaceable>[mcdepends] = "multiconfig:target1:target2:<replaceable>image2</replaceable>:<replaceable>rootfs_task</replaceable>"
+ </literallayout>
+ In this example, the
+ <replaceable>from_multiconfig</replaceable> is "target1" and
+ the <replaceable>to_multiconfig</replaceable> is "target2".
+ The task in the recipe that contains
+ <replaceable>image_task</replaceable> depends on the
+ completion of the <replaceable>rootfs_task</replaceable>
+ used to build out <replaceable>image2</replaceable>, which
+ is associated with the "target2" multiconfig.
+ </para>
+
+ <para>
+ Once you set up this dependency, you can build the
+ "target1" multiconfig using a BitBake command as follows:
+ <literallayout class='monospaced'>
+ $ bitbake multiconfig:target1:<replaceable>image1</replaceable>
+ </literallayout>
+ This command executes all the tasks needed to create
+ <replaceable>image1</replaceable> for the "target1"
+ multiconfig.
+ Because of the dependency, BitBake also executes through
+ the <replaceable>rootfs_task</replaceable> for the "target2"
+ multiconfig build.
+ </para>
+
+ <para>
+ Having a recipe depend on the root filesystem of another
+ build might not seem that useful.
+ Consider this change to the statement in the
+ <replaceable>image1</replaceable> recipe:
+ <literallayout class='monospaced'>
+ <replaceable>image_task</replaceable>[mcdepends] = "multiconfig:target1:target2:<replaceable>image2</replaceable>:<replaceable>image_task</replaceable>"
+ </literallayout>
+ In this case, BitBake must create
+ <replaceable>image2</replaceable> for the "target2"
+ build since the "target1" build depends on it.
+ </para>
+
+ <para>
+ Because "target1" and "target2" are enabled for multiple
+ configuration builds and have separate configuration
+ files, BitBake places the artifacts for each build in the
+ respective temporary build directories.
+ </para>
+ </section>
</section>
</section>
</chapter>
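
Read together with the BBMULTICONFIG glossary entry added further below, a
minimal setup for the two example targets might look as follows (a sketch;
per the text above only the machine and a distinct temporary directory are
strictly required, and the machine/image names are placeholders):

  # conf/local.conf
  BBMULTICONFIG = "target1 target2"

  # conf/multiconfig/target1.conf
  MACHINE = "machineA"
  TMPDIR = "${TOPDIR}/tmp-target1"

  # conf/multiconfig/target2.conf
  MACHINE = "machineB"
  TMPDIR = "${TOPDIR}/tmp-target2"

  $ bitbake multiconfig:target1:imageA multiconfig:target2:imageB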
diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml
index f0cfffe..2490f6e 100644
--- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml
+++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml
@@ -342,7 +342,7 @@
<para>
When you use this syntax, BitBake expects one or more strings.
- Surrounding spaces are removed as well.
+ Surrounding spaces and spacing are preserved.
Here is an example:
<literallayout class='monospaced'>
FOO = "123 456 789 123456 123 456 123 456"
@@ -352,8 +352,9 @@
FOO2_remove = "abc def"
</literallayout>
The variable <filename>FOO</filename> becomes
- "789 123456" and <filename>FOO2</filename> becomes
- "ghi abcdef".
+ " 789 123456 "
+ and <filename>FOO2</filename> becomes
+ " ghi abcdef ".
</para>
<para>
@@ -1929,6 +1930,38 @@
not careful.
</note>
</para></listitem>
+ <listitem><para><emphasis><filename>[number_threads]</filename>:</emphasis>
+ Limits tasks to a specific number of simultaneous threads
+ during execution.
+ This varflag is useful when your build host has a large number
+ of cores but certain tasks need to be rate-limited due to various
+ kinds of resource constraints (e.g. to avoid network throttling).
+ <filename>number_threads</filename> works similarly to the
+ <link linkend='var-BB_NUMBER_THREADS'><filename>BB_NUMBER_THREADS</filename></link>
+ variable but is task-specific.</para>
+
+ <para>Set the value globally.
+ For example, the following makes sure the
+ <filename>do_fetch</filename> task uses no more than two
+ simultaneous execution threads:
+ <literallayout class='monospaced'>
+ do_fetch[number_threads] = "2"
+ </literallayout>
+ <note><title>Warnings</title>
+ <itemizedlist>
+ <listitem><para>
+ Setting the varflag in individual recipes rather
+ than globally can result in unpredictable behavior.
+ </para></listitem>
+ <listitem><para>
+ Setting the varflag to a value greater than the
+ value used in the <filename>BB_NUMBER_THREADS</filename>
+ variable causes <filename>number_threads</filename>
+ to have no effect.
+ </para></listitem>
+ </itemizedlist>
+ </note>
+ </para></listitem>
<listitem><para><emphasis><filename>[postfuncs]</filename>:</emphasis>
List of functions to call after the completion of the task.
</para></listitem>
@@ -2652,48 +2685,97 @@
</para>
<para>
- This list is a place holder of content existed from previous work
- on the manual.
- Some or all of it probably needs integrated into the subsections
- that make up this section.
- For now, I have just provided a short glossary-like description
- for each variable.
- Ultimately, this list goes away.
+ These checksums are stored in
+ <link linkend='var-STAMP'><filename>STAMP</filename></link>.
+ You can examine the checksums using the following BitBake command:
+ <literallayout class='monospaced'>
+ $ bitbake-dumpsigs
+ </literallayout>
+ This command returns the signature data in a readable format
+ that allows you to examine the inputs used when the
+ OpenEmbedded build system generates signatures.
+ For example, using <filename>bitbake-dumpsigs</filename>
+ allows you to examine the <filename>do_compile</filename>
+ task's “sigdata” for a C application (e.g.
+ <filename>bash</filename>).
+ Running the command also reveals that the “CC” variable is part of
+ the inputs that are hashed.
+ Any changes to this variable would invalidate the stamp and
+ cause the <filename>do_compile</filename> task to run.
+ </para>
+
+ <para>
+ The following list describes related variables:
<itemizedlist>
- <listitem><para><filename>STAMP</filename>:
- The base path to create stamp files.</para></listitem>
- <listitem><para><filename>STAMPCLEAN</filename>
- Again, the base path to create stamp files but can use wildcards
- for matching a range of files for clean operations.
- </para></listitem>
- <listitem><para><filename>BB_STAMP_WHITELIST</filename>
- Lists stamp files that are looked at when the stamp policy
- is "whitelist".
- </para></listitem>
- <listitem><para><filename>BB_STAMP_POLICY</filename>
- Defines the mode for comparing timestamps of stamp files.
- </para></listitem>
- <listitem><para><filename>BB_HASHCHECK_FUNCTION</filename>
+ <listitem><para>
+ <link linkend='var-BB_HASHCHECK_FUNCTION'><filename>BB_HASHCHECK_FUNCTION</filename></link>:
Specifies the name of the function to call during
the "setscene" part of the task's execution in order
to validate the list of task hashes.
</para></listitem>
- <listitem><para><filename>BB_SETSCENE_VERIFY_FUNCTION2</filename>
+ <listitem><para>
+ <link linkend='var-BB_SETSCENE_DEPVALID'><filename>BB_SETSCENE_DEPVALID</filename></link>:
+ Specifies a function BitBake calls that determines
+ whether BitBake requires a setscene dependency to
+ be met.
+ </para></listitem>
+ <listitem><para>
+ <link linkend='var-BB_SETSCENE_VERIFY_FUNCTION2'><filename>BB_SETSCENE_VERIFY_FUNCTION2</filename></link>:
Specifies a function to call that verifies the list of
planned task execution before the main task execution
happens.
</para></listitem>
- <listitem><para><filename>BB_SETSCENE_DEPVALID</filename>
- Specifies a function BitBake calls that determines
- whether BitBake requires a setscene dependency to
- be met.
+ <listitem><para>
+ <link linkend='var-BB_STAMP_POLICY'><filename>BB_STAMP_POLICY</filename></link>:
+ Defines the mode for comparing timestamps of stamp files.
+ </para></listitem>
+ <listitem><para>
+ <link linkend='var-BB_STAMP_WHITELIST'><filename>BB_STAMP_WHITELIST</filename></link>:
+ Lists stamp files that are looked at when the stamp policy
+ is "whitelist".
</para></listitem>
- <listitem><para><filename>BB_TASKHASH</filename>
+ <listitem><para>
+ <link linkend='var-BB_TASKHASH'><filename>BB_TASKHASH</filename></link>:
Within an executing task, this variable holds the hash
of the task as returned by the currently enabled
signature generator.
</para></listitem>
+ <listitem><para>
+ <link linkend='var-STAMP'><filename>STAMP</filename></link>:
+ The base path to create stamp files.
+ </para></listitem>
+ <listitem><para>
+ <link linkend='var-STAMPCLEAN'><filename>STAMPCLEAN</filename></link>:
+ Again, the base path to create stamp files but can use wildcards
+ for matching a range of files for clean operations.
+ </para></listitem>
</itemizedlist>
</para>
</section>
+
+ <section id='wildcard-support-in-variables'>
+ <title>Wildcard Support in Variables</title>
+
+ <para>
+ Support for wildcard use in variables varies depending on the
+ context in which it is used.
+ For example, some variables and file names allow limited use of
+ wildcards through the "<filename>%</filename>" and
+ "<filename>*</filename>" characters.
+ Other variables or names support Python's
+ <ulink url='https://docs.python.org/3/library/glob.html'><filename>glob</filename></ulink>
+ syntax,
+ <ulink url='https://docs.python.org/3/library/fnmatch.html#module-fnmatch'><filename>fnmatch</filename></ulink>
+ syntax, or
+ <ulink url='https://docs.python.org/3/library/re.html#re'><filename>Regular Expression (re)</filename></ulink>
+ syntax.
+ </para>
+
+ <para>
+ For variables that have wildcard support, the
+ documentation describes which form of wildcard, its
+ use, and its limitations.
+ </para>
+ </section>
+
</chapter>
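
A concrete reading of the [number_threads] warnings above (a sketch):

  BB_NUMBER_THREADS = "8"
  do_fetch[number_threads] = "2"

At most two do_fetch tasks then run simultaneously while other tasks may
still use all eight threads; a varflag value above 8 would have no effect
here.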
diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml
index d89e123..a84b2bc 100644
--- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml
+++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml
@@ -78,7 +78,7 @@
</para>
<para>
- In OpenEmbedded Core, <filename>ASSUME_PROVIDED</filename>
+ In OpenEmbedded-Core, <filename>ASSUME_PROVIDED</filename>
mostly specifies native tools that should not be built.
An example is <filename>git-native</filename>, which
when specified allows for the Git binary from the host to
@@ -115,7 +115,8 @@
is either not set or set to "0".
</para></listitem>
<listitem><para>
- Limited support for wildcard matching against the
+ Limited support for the "<filename>*</filename>"
+ wildcard character for matching against the
beginning of host names exists.
For example, the following setting matches
<filename>git.gnu.org</filename>,
@@ -124,6 +125,20 @@
<literallayout class='monospaced'>
BB_ALLOWED_NETWORKS = "*.gnu.org"
</literallayout>
+ <note><title>Important</title>
+ <para>The use of the "<filename>*</filename>"
+ character only works at the beginning of
+ a host name and it must be isolated from
+ the remainder of the host name.
+ You cannot use the wildcard character in any
+ other location of the name or combined with
+ the front part of the name.</para>
+
+ <para>For example,
+ <filename>*.foo.bar</filename> is supported,
+ while <filename>*aa.foo.bar</filename> is not.
+ </para>
+ </note>
</para></listitem>
<listitem><para>
Mirrors not in the host list are skipped and
@@ -646,10 +661,10 @@
<glossdef>
<para>
Contains the name of the currently executing task.
- The value does not include the "do_" prefix.
+ The value includes the "do_" prefix.
For example, if the currently executing task is
<filename>do_config</filename>, the value is
- "config".
+ "do_config".
</para>
</glossdef>
</glossentry>
@@ -964,7 +979,7 @@
Allows you to extend a recipe so that it builds variants
of the software.
Some examples of these variants for recipes from the
- OpenEmbedded Core metadata are "natives" such as
+ OpenEmbedded-Core metadata are "natives" such as
<filename>quilt-native</filename>, which is a copy of
Quilt built to run on the build system; "crosses" such
as <filename>gcc-cross</filename>, which is a compiler
@@ -980,7 +995,7 @@
amount of code, it usually is as simple as adding the
variable to your recipe.
Here are two examples.
- The "native" variants are from the OpenEmbedded Core
+ The "native" variants are from the OpenEmbedded-Core
metadata:
<literallayout class='monospaced'>
BBCLASSEXTEND =+ "native nativesdk"
@@ -1082,7 +1097,19 @@
<glossentry id='var-BBFILES'><glossterm>BBFILES</glossterm>
<glossdef>
- <para>List of recipe files BitBake uses to build software.</para>
+ <para>
+ A space-separated list of recipe files BitBake uses to
+ build software.
+ </para>
+
+ <para>
+ When specifying recipe files, you can pattern match using
+ Python's
+ <ulink url='https://docs.python.org/3/library/glob.html'><filename>glob</filename></ulink>
+ syntax.
+ For details on the syntax, see the documentation by
+ following the previous link.
+ </para>
</glossdef>
</glossentry>
@@ -1166,15 +1193,19 @@
match any of the expressions.
It is as if BitBake does not see them at all.
Consequently, matching files are not parsed or otherwise
- used by BitBake.</para>
+ used by BitBake.
+ </para>
+
<para>
The values you provide are passed to Python's regular
expression compiler.
+ Consequently, the syntax follows Python's Regular
+ Expression (re) syntax.
The expressions are compared against the full paths to
the files.
For complete syntax information, see Python's
documentation at
- <ulink url='http://docs.python.org/release/2.3/lib/re-syntax.html'></ulink>.
+ <ulink url='http://docs.python.org/3/library/re.html#re'></ulink>.
</para>
<para>
@@ -1205,6 +1236,45 @@
</glossdef>
</glossentry>
+ <glossentry id='var-BBMULTICONFIG'><glossterm>BBMULTICONFIG</glossterm>
+ <info>
+ BBMULTICONFIG[doc] = "Enables BitBake to perform multiple configuration builds and lists each separate configuration (multiconfig)."
+ </info>
+ <glossdef>
+ <para role="glossdeffirst">
+<!-- <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
+ Enables BitBake to perform multiple configuration builds
+ and lists each separate configuration (multiconfig).
+ You can use this variable to cause BitBake to build
+ multiple targets where each target has a separate
+ configuration.
+ Define <filename>BBMULTICONFIG</filename> in your
+ <filename>conf/local.conf</filename> configuration file.
+ </para>
+
+ <para>
+ As an example, the following line specifies three
+ multiconfigs, each having a separate configuration file:
+ <literallayout class='monospaced'>
+ BBMULTICONFIG = "configA configB configC"
+ </literallayout>
+ Each configuration file you use must reside in the
+ build directory within a directory named
+ <filename>conf/multiconfig</filename> (e.g.
+ <replaceable>build_directory</replaceable><filename>/conf/multiconfig/configA.conf</filename>).
+ </para>
+
+ <para>
+ For information on how to use
+ <filename>BBMULTICONFIG</filename> in an environment that
+ supports building targets with multiple configurations,
+ see the
+ "<link linkend='executing-a-multiple-configuration-build'>Executing a Multiple Configuration Build</link>"
+ section.
+ </para>
+ </glossdef>
+ </glossentry>
+
<glossentry id='var-BBPATH'><glossterm>BBPATH</glossterm>
<glossdef>
<para>
@@ -1894,15 +1964,27 @@
you want to select, and you should set
<link linkend='var-PV'><filename>PV</filename></link>
accordingly for precedence.
- You can use the "<filename>%</filename>" character as a
- wildcard to match any number of characters, which can be
- useful when specifying versions that contain long revision
- numbers that could potentially change.
+ </para>
+
+ <para>
+ The <filename>PREFERRED_VERSION</filename> variable
+ supports limited wildcard use through the
+ "<filename>%</filename>" character.
+ You can use the character to match any number of
+ characters, which can be useful when specifying versions
+ that contain long revision numbers that potentially change.
Here are two examples:
<literallayout class='monospaced'>
PREFERRED_VERSION_python = "2.7.3"
PREFERRED_VERSION_linux-yocto = "4.12%"
</literallayout>
+ <note><title>Important</title>
+ The use of the "<filename>%</filename>" character
+ is limited in that it only works at the end of the
+ string.
+ You cannot use the wildcard character in any other
+ location of the string.
+ </note>
</para>
</glossdef>
</glossentry>
@@ -2089,6 +2171,16 @@
</glossdef>
</glossentry>
+ <glossentry id='var-REPODIR'><glossterm>REPODIR</glossterm>
+ <glossdef>
+ <para>
+ The directory in which a local copy of a
+ <filename>google-repo</filename> directory is stored
+ when it is synced.
+ </para>
+ </glossdef>
+ </glossentry>
+
<glossentry id='var-RPROVIDES'><glossterm>RPROVIDES</glossterm>
<glossdef>
<para>
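
The glob support now called out for BBFILES is what the common layer.conf
idiom relies on, e.g. (a sketch):

  BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"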
diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual.xml
index d23e3ef..d793265 100644
--- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual.xml
+++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual.xml
@@ -56,7 +56,7 @@
-->
<copyright>
- <year>2004-2017</year>
+ <year>2004-2018</year>
<holder>Richard Purdie</holder>
<holder>Chris Larson</holder>
<holder>and Phil Blundell</holder>
diff --git a/bitbake/doc/bitbake-user-manual/figures/bb_multiconfig_files.png b/bitbake/doc/bitbake-user-manual/figures/bb_multiconfig_files.png
new file mode 100644
index 0000000..e69de29
diff --git a/bitbake/lib/bb/COW.py b/bitbake/lib/bb/COW.py
index bec6208..7817473 100644
--- a/bitbake/lib/bb/COW.py
+++ b/bitbake/lib/bb/COW.py
@@ -150,7 +150,7 @@ class COWDictMeta(COWMeta):
yield value
if type == "items":
yield (key, value)
- raise StopIteration()
+ return
def iterkeys(cls):
return cls.iter("keys")
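
Background for this hunk (not part of the patch): under PEP 479 a
StopIteration raised inside a generator surfaces as RuntimeError (the
default behavior from Python 3.7 on), so generators must end with a plain
return instead. A minimal sketch:

  from __future__ import generator_stop  # default in Python 3.7+

  def old_style():
      yield "a"
      raise StopIteration()   # RuntimeError under PEP 479

  def new_style():
      yield "a"
      return                  # ends the generator cleanly

  assert list(new_style()) == ["a"]
  # list(old_style()) would raise RuntimeError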
diff --git a/bitbake/lib/bb/__init__.py b/bitbake/lib/bb/__init__.py
index cd2f157..4bc47c8 100644
--- a/bitbake/lib/bb/__init__.py
+++ b/bitbake/lib/bb/__init__.py
@@ -21,7 +21,7 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-__version__ = "1.37.0"
+__version__ = "1.40.0"
import sys
if sys.version_info < (3, 4, 0):
@@ -63,6 +63,10 @@ class BBLogger(Logger):
def verbose(self, msg, *args, **kwargs):
return self.log(logging.INFO - 1, msg, *args, **kwargs)
+ def verbnote(self, msg, *args, **kwargs):
+ return self.log(logging.INFO + 2, msg, *args, **kwargs)
+
+
logging.raiseExceptions = False
logging.setLoggerClass(BBLogger)
@@ -93,6 +97,18 @@ def debug(lvl, *args):
def note(*args):
mainlogger.info(''.join(args))
+#
+# A higher priority note which will show on the console but isn't a warning
+#
+# Something is happening the user should be aware of but they probably did
+# something to make it happen
+#
+def verbnote(*args):
+ mainlogger.verbnote(''.join(args))
+
+#
+# Warnings - things the user likely needs to pay attention to and fix
+#
def warn(*args):
mainlogger.warning(''.join(args))
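
The new level sits at logging.INFO + 2, between note() and warn(), so it
reaches the console without being flagged as a warning; usage mirrors the
existing helpers (a sketch):

  import bb
  bb.verbnote("The \"universe\" target is only intended for testing")
  # or on a BBLogger instance, as cooker.py does below:
  logger.verbnote("...")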
diff --git a/bitbake/lib/bb/build.py b/bitbake/lib/bb/build.py
index 4631abd..3e2a94e 100644
--- a/bitbake/lib/bb/build.py
+++ b/bitbake/lib/bb/build.py
@@ -41,8 +41,6 @@ from bb import data, event, utils
bblogger = logging.getLogger('BitBake')
logger = logging.getLogger('BitBake.Build')
-NULL = open(os.devnull, 'r+')
-
__mtime_cache = {}
def cached_mtime_noerror(f):
@@ -533,7 +531,6 @@ def _exec_task(fn, task, d, quieterr):
self.triggered = True
# Handle logfiles
- si = open('/dev/null', 'r')
try:
bb.utils.mkdirhier(os.path.dirname(logfn))
logfile = open(logfn, 'w')
@@ -547,7 +544,8 @@ def _exec_task(fn, task, d, quieterr):
ose = [os.dup(sys.stderr.fileno()), sys.stderr.fileno()]
# Replace those fds with our own
- os.dup2(si.fileno(), osi[1])
+ with open('/dev/null', 'r') as si:
+ os.dup2(si.fileno(), osi[1])
os.dup2(logfile.fileno(), oso[1])
os.dup2(logfile.fileno(), ose[1])
@@ -608,7 +606,6 @@ def _exec_task(fn, task, d, quieterr):
os.close(osi[0])
os.close(oso[0])
os.close(ose[0])
- si.close()
logfile.close()
if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
@@ -803,6 +800,7 @@ def add_tasks(tasklist, d):
if name in flags:
deptask = d.expand(flags[name])
task_deps[name][task] = deptask
+ getTask('mcdepends')
getTask('depends')
getTask('rdepends')
getTask('deptask')
diff --git a/bitbake/lib/bb/cache.py b/bitbake/lib/bb/cache.py
index 86ce0e7..258d679 100644
--- a/bitbake/lib/bb/cache.py
+++ b/bitbake/lib/bb/cache.py
@@ -37,7 +37,7 @@ import bb.utils
logger = logging.getLogger("BitBake.Cache")
-__cache_version__ = "151"
+__cache_version__ = "152"
def getCacheFile(path, filename, data_hash):
return os.path.join(path, filename + "." + data_hash)
@@ -395,7 +395,7 @@ class Cache(NoCache):
self.has_cache = True
self.cachefile = getCacheFile(self.cachedir, "bb_cache.dat", self.data_hash)
- logger.debug(1, "Using cache in '%s'", self.cachedir)
+ logger.debug(1, "Cache dir: %s", self.cachedir)
bb.utils.mkdirhier(self.cachedir)
cache_ok = True
@@ -408,6 +408,8 @@ class Cache(NoCache):
self.load_cachefile()
elif os.path.isfile(self.cachefile):
logger.info("Out of date cache found, rebuilding...")
+ else:
+ logger.debug(1, "Cache file %s not found, building..." % self.cachefile)
def load_cachefile(self):
cachesize = 0
@@ -424,6 +426,7 @@ class Cache(NoCache):
for cache_class in self.caches_array:
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
+ logger.debug(1, 'Loading cache file: %s' % cachefile)
with open(cachefile, "rb") as cachefile:
pickled = pickle.Unpickler(cachefile)
# Check cache version information
diff --git a/bitbake/lib/bb/checksum.py b/bitbake/lib/bb/checksum.py
index 8428920..4e1598f 100644
--- a/bitbake/lib/bb/checksum.py
+++ b/bitbake/lib/bb/checksum.py
@@ -97,6 +97,8 @@ class FileChecksumCache(MultiProcessCache):
def checksum_dir(pth):
# Handle directories recursively
+ if pth == "/":
+ bb.fatal("Refusing to checksum /")
dirchecksums = []
for root, dirs, files in os.walk(pth):
for name in files:
diff --git a/bitbake/lib/bb/codeparser.py b/bitbake/lib/bb/codeparser.py
index 530f44e..ddd1b97 100644
--- a/bitbake/lib/bb/codeparser.py
+++ b/bitbake/lib/bb/codeparser.py
@@ -140,7 +140,7 @@ class CodeParserCache(MultiProcessCache):
# so that an existing cache gets invalidated. Additionally you'll need
# to increment __cache_version__ in cache.py in order to ensure that old
# recipe caches don't trigger "Taskhash mismatch" errors.
- CACHE_VERSION = 9
+ CACHE_VERSION = 10
def __init__(self):
MultiProcessCache.__init__(self)
@@ -214,7 +214,7 @@ class BufferedLogger(Logger):
self.buffer = []
class PythonParser():
- getvars = (".getVar", ".appendVar", ".prependVar")
+ getvars = (".getVar", ".appendVar", ".prependVar", "oe.utils.conditional")
getvarflags = (".getVarFlag", ".appendVarFlag", ".prependVarFlag")
containsfuncs = ("bb.utils.contains", "base_contains")
containsanyfuncs = ("bb.utils.contains_any", "bb.utils.filter")
diff --git a/bitbake/lib/bb/cooker.py b/bitbake/lib/bb/cooker.py
index cd365f7..71a0eba 100644
--- a/bitbake/lib/bb/cooker.py
+++ b/bitbake/lib/bb/cooker.py
@@ -516,6 +516,8 @@ class BBCooker:
fn = runlist[0][3]
else:
envdata = self.data
+ data.expandKeys(envdata)
+ parse.ast.runAnonFuncs(envdata)
if fn:
try:
@@ -536,7 +538,6 @@ class BBCooker:
logger.plain(env.getvalue())
# emit the metadata which isnt valid shell
- data.expandKeys(envdata)
for e in sorted(envdata.keys()):
if envdata.getVarFlag(e, 'func', False) and envdata.getVarFlag(e, 'python', False):
logger.plain("\npython %s () {\n%s}\n", e, envdata.getVar(e, False))
@@ -608,7 +609,14 @@ class BBCooker:
k2 = k.split(":do_")
k = k2[0]
ktask = k2[1]
- taskdata[mc].add_provider(localdata[mc], self.recipecaches[mc], k)
+ if mc:
+ # Provider might be from another mc
+ for mcavailable in self.multiconfigs:
+ # The first element is empty
+ if mcavailable:
+ taskdata[mcavailable].add_provider(localdata[mcavailable], self.recipecaches[mcavailable], k)
+ else:
+ taskdata[mc].add_provider(localdata[mc], self.recipecaches[mc], k)
current += 1
if not ktask.startswith("do_"):
ktask = "do_%s" % ktask
@@ -619,6 +627,27 @@ class BBCooker:
runlist.append([mc, k, ktask, fn])
bb.event.fire(bb.event.TreeDataPreparationProgress(current, len(fulltargetlist)), self.data)
+ mcdeps = taskdata[mc].get_mcdepends()
+ # No need to do check providers if there are no mcdeps or not an mc build
+ if mcdeps and mc:
+ # Make sure we can provide the multiconfig dependency
+ seen = set()
+ new = True
+ while new:
+ new = False
+ for mc in self.multiconfigs:
+ for k in mcdeps:
+ if k in seen:
+ continue
+ l = k.split(':')
+ depmc = l[2]
+ if depmc not in self.multiconfigs:
+ bb.fatal("Multiconfig dependency %s depends on nonexistent mc configuration %s" % (k,depmc))
+ else:
+ logger.debug(1, "Adding providers for multiconfig dependency %s" % l[3])
+ taskdata[depmc].add_provider(localdata[depmc], self.recipecaches[depmc], l[3])
+ seen.add(k)
+ new = True
for mc in self.multiconfigs:
taskdata[mc].add_unresolved(localdata[mc], self.recipecaches[mc])
@@ -705,8 +734,8 @@ class BBCooker:
if not dotname in depend_tree["tdepends"]:
depend_tree["tdepends"][dotname] = []
for dep in rq.rqdata.runtaskentries[tid].depends:
- (depmc, depfn, deptaskname, deptaskfn) = bb.runqueue.split_tid_mcfn(dep)
- deppn = self.recipecaches[mc].pkg_fn[deptaskfn]
+ (depmc, depfn, _, deptaskfn) = bb.runqueue.split_tid_mcfn(dep)
+ deppn = self.recipecaches[depmc].pkg_fn[deptaskfn]
depend_tree["tdepends"][dotname].append("%s.%s" % (deppn, bb.runqueue.taskname_from_tid(dep)))
if taskfn not in seen_fns:
seen_fns.append(taskfn)
@@ -1170,6 +1199,7 @@ class BBCooker:
elif regex == "":
parselog.debug(1, "BBFILE_PATTERN_%s is empty" % c)
errors = False
+ continue
else:
try:
cre = re.compile(regex)
@@ -1564,7 +1594,7 @@ class BBCooker:
pkgs_to_build.append(t)
if 'universe' in pkgs_to_build:
- parselog.warning("The \"universe\" target is only intended for testing and may produce errors.")
+ parselog.verbnote("The \"universe\" target is only intended for testing and may produce errors.")
parselog.debug(1, "collating packages for \"universe\"")
pkgs_to_build.remove('universe')
for mc in self.multiconfigs:
@@ -1603,8 +1633,6 @@ class BBCooker:
if self.parser:
self.parser.shutdown(clean=not force, force=force)
- self.notifier.stop()
- self.confignotifier.stop()
def finishcommand(self):
self.state = state.initial
@@ -1633,7 +1661,10 @@ class CookerExit(bb.event.Event):
class CookerCollectFiles(object):
def __init__(self, priorities):
self.bbappends = []
- self.bbfile_config_priorities = priorities
+ # Priorities is a list of tuples, with the second element being the pattern.
+ # We need to sort the list with the longest pattern first, and so on to
+ # the shortest. This allows nested layers to be properly evaluated.
+ self.bbfile_config_priorities = sorted(priorities, key=lambda tup: tup[1], reverse=True)
def calc_bbfile_priority( self, filename, matched = None ):
for _, _, regex, pri in self.bbfile_config_priorities:
@@ -1807,21 +1838,25 @@ class CookerCollectFiles(object):
realfn, cls, mc = bb.cache.virtualfn2realfn(p)
priorities[p] = self.calc_bbfile_priority(realfn, matched)
- # Don't show the warning if the BBFILE_PATTERN did match .bbappend files
unmatched = set()
for _, _, regex, pri in self.bbfile_config_priorities:
if not regex in matched:
unmatched.add(regex)
- def findmatch(regex):
+ # Don't show the warning if the BBFILE_PATTERN did match .bbappend files
+ def find_bbappend_match(regex):
for b in self.bbappends:
(bbfile, append) = b
if regex.match(append):
+ # If the bbappend is already matched by a regex in the "matched" set, return False
+ for matched_regex in matched:
+ if matched_regex.match(append):
+ return False
return True
return False
for unmatch in unmatched.copy():
- if findmatch(unmatch):
+ if find_bbappend_match(unmatch):
unmatched.remove(unmatch)
for collection, pattern, regex, _ in self.bbfile_config_priorities:
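For context, the mcdepends entries handled above use a colon-separated
5-field format. A standalone sketch of the split performed here (target names
illustrative, not from this series):

    # do_image[mcdepends] = "mc:x86:arm:core-image-minimal:do_rootfs"
    dep = "mc:x86:arm:core-image-minimal:do_rootfs"
    prefix, frommc, depmc, pn, deptask = dep.split(':')
    # depmc ("arm") must name a configured multiconfig; pn and deptask are
    # then resolved against that configuration's taskdata, as above.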
diff --git a/bitbake/lib/bb/cookerdata.py b/bitbake/lib/bb/cookerdata.py
index fab47c7..5df66e6 100644
--- a/bitbake/lib/bb/cookerdata.py
+++ b/bitbake/lib/bb/cookerdata.py
@@ -143,7 +143,8 @@ class CookerConfiguration(object):
self.writeeventlog = False
self.server_only = False
self.limited_deps = False
- self.runall = None
+ self.runall = []
+ self.runonly = []
self.env = {}
@@ -395,6 +396,8 @@ class CookerDataBuilder(object):
if compat and not (compat & layerseries):
bb.fatal("Layer %s is not compatible with the core layer which only supports these series: %s (layer is compatible with %s)"
% (c, " ".join(layerseries), " ".join(compat)))
+ elif not compat and not data.getVar("BB_WORKERCONTEXT"):
+ bb.warn("Layer %s should set LAYERSERIES_COMPAT_%s in its conf/layer.conf file to list the core layer names it is compatible with." % (c, c))
if not data.getVar("BBPATH"):
msg = "The BBPATH variable is not set"
diff --git a/bitbake/lib/bb/daemonize.py b/bitbake/lib/bb/daemonize.py
index 8300d1d..c937675 100644
--- a/bitbake/lib/bb/daemonize.py
+++ b/bitbake/lib/bb/daemonize.py
@@ -16,6 +16,10 @@ def createDaemon(function, logfile):
background as a daemon, returning control to the caller.
"""
+ # Ensure stdout/stderr are flushed before forking to avoid duplicate output
+ sys.stdout.flush()
+ sys.stderr.flush()
+
try:
# Fork a child process so the parent can exit. This returns control to
# the command-line or shell. It also guarantees that the child will not
@@ -49,8 +53,8 @@ def createDaemon(function, logfile):
# exit() or _exit()?
# _exit is like exit(), but it doesn't call any functions registered
# with atexit (and on_exit) or any registered signal handlers. It also
- # closes any open file descriptors. Using exit() may cause all stdio
- # streams to be flushed twice and any temporary files may be unexpectedly
+ # closes any open file descriptors, but doesn't flush any buffered output.
+ # Using exit() may cause any temporary files to be unexpectedly
# removed. It's therefore recommended that child branches of a fork()
# and the parent branch(es) of a daemon use _exit().
os._exit(0)
@@ -61,17 +65,19 @@ def createDaemon(function, logfile):
# The second child.
# Replace standard fds with our own
- si = open('/dev/null', 'r')
- os.dup2(si.fileno(), sys.stdin.fileno())
+ with open('/dev/null', 'r') as si:
+ os.dup2(si.fileno(), sys.stdin.fileno())
try:
so = open(logfile, 'a+')
- se = so
os.dup2(so.fileno(), sys.stdout.fileno())
- os.dup2(se.fileno(), sys.stderr.fileno())
+ os.dup2(so.fileno(), sys.stderr.fileno())
except io.UnsupportedOperation:
sys.stdout = open(logfile, 'a+')
- sys.stderr = sys.stdout
+
+ # Have stdout and stderr be the same so log output matches chronologically
+ # and there aren't two separate buffers
+ sys.stderr = sys.stdout
try:
function()
@@ -79,4 +85,9 @@ def createDaemon(function, logfile):
traceback.print_exc()
finally:
bb.event.print_ui_queue()
+ # os._exit() doesn't flush open files like exit() does. Manually flush
+ # stdout and stderr so that any logging output will be seen, particularly
+ # exception tracebacks.
+ sys.stdout.flush()
+ sys.stderr.flush()
os._exit(0)
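The added flushes around fork() avoid a classic stdio pitfall; a standalone
illustration (not bitbake code):

    import os, sys

    sys.stdout.write("queued ")   # sits in the stdio buffer (no newline, not yet flushed)
    sys.stdout.flush()            # without this, parent and child each hold a copy
    pid = os.fork()
    if pid == 0:
        sys.stdout.flush()        # a child that flushes (or exits via exit()) would
        os._exit(0)               # otherwise emit "queued " a second time
    else:
        os.wait()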
diff --git a/bitbake/lib/bb/data.py b/bitbake/lib/bb/data.py
index 80a7879..d66d98c 100644
--- a/bitbake/lib/bb/data.py
+++ b/bitbake/lib/bb/data.py
@@ -38,6 +38,7 @@ the speed is more critical here.
# Based on functions from the base bb module, Copyright 2003 Holger Schurig
import sys, os, re
+import hashlib
if sys.argv[0][-5:] == "pydoc":
path = os.path.dirname(os.path.dirname(sys.argv[1]))
else:
@@ -283,14 +284,12 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
try:
if key[-1] == ']':
vf = key[:-1].split('[')
- value = d.getVarFlag(vf[0], vf[1], False)
- parser = d.expandWithRefs(value, key)
+ value, parser = d.getVarFlag(vf[0], vf[1], False, retparser=True)
deps |= parser.references
deps = deps | (keys & parser.execs)
return deps, value
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
vardeps = varflags.get("vardeps")
- value = d.getVarFlag(key, "_content", False)
def handle_contains(value, contains, d):
newvalue = ""
@@ -309,10 +308,19 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
return newvalue
return value + newvalue
+ def handle_remove(value, deps, removes, d):
+ for r in sorted(removes):
+ r2 = d.expandWithRefs(r, None)
+ value += "\n_remove of %s" % r
+ deps |= r2.references
+ deps = deps | (keys & r2.execs)
+ return value
+
if "vardepvalue" in varflags:
- value = varflags.get("vardepvalue")
+ value = varflags.get("vardepvalue")
elif varflags.get("func"):
if varflags.get("python"):
+ value = d.getVarFlag(key, "_content", False)
parser = bb.codeparser.PythonParser(key, logger)
if value and "\t" in value:
logger.warning("Variable %s contains tabs, please remove these (%s)" % (key, d.getVar("FILE")))
@@ -321,13 +329,15 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
deps = deps | (keys & parser.execs)
value = handle_contains(value, parser.contains, d)
else:
- parsedvar = d.expandWithRefs(value, key)
+ value, parsedvar = d.getVarFlag(key, "_content", False, retparser=True)
parser = bb.codeparser.ShellParser(key, logger)
parser.parse_shell(parsedvar.value)
deps = deps | shelldeps
deps = deps | parsedvar.references
deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
value = handle_contains(value, parsedvar.contains, d)
+ if hasattr(parsedvar, "removes"):
+ value = handle_remove(value, deps, parsedvar.removes, d)
if vardeps is None:
parser.log.flush()
if "prefuncs" in varflags:
@@ -337,10 +347,12 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
if "exports" in varflags:
deps = deps | set(varflags["exports"].split())
else:
- parser = d.expandWithRefs(value, key)
+ value, parser = d.getVarFlag(key, "_content", False, retparser=True)
deps |= parser.references
deps = deps | (keys & parser.execs)
value = handle_contains(value, parser.contains, d)
+ if hasattr(parser, "removes"):
+ value = handle_remove(value, deps, parser.removes, d)
if "vardepvalueexclude" in varflags:
exclude = varflags.get("vardepvalueexclude")
@@ -394,6 +406,43 @@ def generate_dependencies(d):
#print "For %s: %s" % (task, str(deps[task]))
return tasklist, deps, values
+def generate_dependency_hash(tasklist, gendeps, lookupcache, whitelist, fn):
+ taskdeps = {}
+ basehash = {}
+
+ for task in tasklist:
+ data = lookupcache[task]
+
+ if data is None:
+ bb.error("Task %s from %s seems to be empty?!" % (task, fn))
+ data = ''
+
+ gendeps[task] -= whitelist
+ newdeps = gendeps[task]
+ seen = set()
+ while newdeps:
+ nextdeps = newdeps
+ seen |= nextdeps
+ newdeps = set()
+ for dep in nextdeps:
+ if dep in whitelist:
+ continue
+ gendeps[dep] -= whitelist
+ newdeps |= gendeps[dep]
+ newdeps -= seen
+
+ alldeps = sorted(seen)
+ for dep in alldeps:
+ data = data + dep
+ var = lookupcache[dep]
+ if var is not None:
+ data = data + str(var)
+ k = fn + "." + task
+ basehash[k] = hashlib.md5(data.encode("utf-8")).hexdigest()
+ taskdeps[task] = alldeps
+
+ return taskdeps, basehash
+
def inherits_class(klass, d):
val = d.getVar('__inherit_cache', False) or []
needle = os.path.join('classes', '%s.bbclass' % klass)
diff --git a/bitbake/lib/bb/data_smart.py b/bitbake/lib/bb/data_smart.py
index 7b09af5..6b94fc4 100644
--- a/bitbake/lib/bb/data_smart.py
+++ b/bitbake/lib/bb/data_smart.py
@@ -42,6 +42,7 @@ __setvar_keyword__ = ["_append", "_prepend", "_remove"]
__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[^{}@\n\t :]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
+__whitespace_split__ = re.compile('(\s)')
def infer_caller_details(loginfo, parent = False, varval = True):
"""Save the caller the trouble of specifying everything."""
@@ -104,11 +105,7 @@ class VariableParse:
if self.varname and key:
if self.varname == key:
raise Exception("variable %s references itself!" % self.varname)
- if key in self.d.expand_cache:
- varparse = self.d.expand_cache[key]
- var = varparse.value
- else:
- var = self.d.getVarFlag(key, "_content")
+ var = self.d.getVarFlag(key, "_content")
self.references.add(key)
if var is not None:
return var
@@ -267,6 +264,16 @@ class VariableHistory(object):
return
self.variables[var].append(loginfo.copy())
+ def rename_variable_hist(self, oldvar, newvar):
+ if not self.dataroot._tracking:
+ return
+ if oldvar not in self.variables:
+ return
+ if newvar not in self.variables:
+ self.variables[newvar] = []
+ for i in self.variables[oldvar]:
+ self.variables[newvar].append(i.copy())
+
def variable(self, var):
remote_connector = self.dataroot.getVar('_remote_data', False)
if remote_connector:
@@ -401,9 +408,6 @@ class DataSmart(MutableMapping):
if not isinstance(s, str): # sanity check
return VariableParse(varname, self, s)
- if varname and varname in self.expand_cache:
- return self.expand_cache[varname]
-
varparse = VariableParse(varname, self)
while s.find('${') != -1:
@@ -427,9 +431,6 @@ class DataSmart(MutableMapping):
varparse.value = s
- if varname:
- self.expand_cache[varname] = varparse
-
return varparse
def expand(self, s, varname = None):
@@ -498,6 +499,7 @@ class DataSmart(MutableMapping):
def setVar(self, var, value, **loginfo):
#print("var=" + str(var) + " val=" + str(value))
+ self.expand_cache = {}
parsing=False
if 'parsing' in loginfo:
parsing=True
@@ -510,7 +512,7 @@ class DataSmart(MutableMapping):
if 'op' not in loginfo:
loginfo['op'] = "set"
- self.expand_cache = {}
+
match = __setvar_regexp__.match(var)
if match and match.group("keyword") in __setvar_keyword__:
base = match.group('base')
@@ -619,6 +621,7 @@ class DataSmart(MutableMapping):
val = self.getVar(key, 0, parsing=True)
if val is not None:
+ self.varhistory.rename_variable_hist(key, newkey)
loginfo['variable'] = newkey
loginfo['op'] = 'rename from %s' % key
loginfo['detail'] = val
@@ -660,6 +663,7 @@ class DataSmart(MutableMapping):
self.setVar(var + "_prepend", value, ignore=True, parsing=True)
def delVar(self, var, **loginfo):
+ self.expand_cache = {}
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.delVar(var)
@@ -669,7 +673,6 @@ class DataSmart(MutableMapping):
loginfo['detail'] = ""
loginfo['op'] = 'del'
self.varhistory.record(**loginfo)
- self.expand_cache = {}
self.dict[var] = {}
if var in self.overridedata:
del self.overridedata[var]
@@ -692,13 +695,13 @@ class DataSmart(MutableMapping):
override = None
def setVarFlag(self, var, flag, value, **loginfo):
+ self.expand_cache = {}
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.setVarFlag(var, flag, value)
if not res:
return
- self.expand_cache = {}
if 'op' not in loginfo:
loginfo['op'] = "set"
loginfo['flag'] = flag
@@ -719,9 +722,21 @@ class DataSmart(MutableMapping):
self.dict["__exportlist"]["_content"] = set()
self.dict["__exportlist"]["_content"].add(var)
- def getVarFlag(self, var, flag, expand=True, noweakdefault=False, parsing=False):
+ def getVarFlag(self, var, flag, expand=True, noweakdefault=False, parsing=False, retparser=False):
+ if flag == "_content":
+ cachename = var
+ else:
+ if not flag:
+ bb.warn("Calling getVarFlag with flag unset is invalid")
+ return None
+ cachename = var + "[" + flag + "]"
+
+ if expand and cachename in self.expand_cache:
+ return self.expand_cache[cachename].value
+
local_var, overridedata = self._findVar(var)
value = None
+ removes = set()
if flag == "_content" and overridedata is not None and not parsing:
match = False
active = {}
@@ -748,7 +763,11 @@ class DataSmart(MutableMapping):
match = active[a]
del active[a]
if match:
- value = self.getVar(match, False)
+ value, subparser = self.getVarFlag(match, "_content", False, retparser=True)
+ if hasattr(subparser, "removes"):
+ # We have to carry the removes from the overridden variable to apply at the
+ # end of processing
+ removes = subparser.removes
if local_var is not None and value is None:
if flag in local_var:
@@ -784,17 +803,13 @@ class DataSmart(MutableMapping):
if match:
value = r + value
- if expand and value:
- # Only getvar (flag == _content) hits the expand cache
- cachename = None
- if flag == "_content":
- cachename = var
- else:
- cachename = var + "[" + flag + "]"
- value = self.expand(value, cachename)
+ parser = None
+ if expand or retparser:
+ parser = self.expandWithRefs(value, cachename)
+ if expand:
+ value = parser.value
- if value and flag == "_content" and local_var is not None and "_remove" in local_var:
- removes = []
+ if value and flag == "_content" and local_var is not None and "_remove" in local_var and not parsing:
self.need_overrides()
for (r, o) in local_var["_remove"]:
match = True
@@ -803,26 +818,45 @@ class DataSmart(MutableMapping):
if not o2 in self.overrides:
match = False
if match:
- removes.extend(self.expand(r).split())
-
- if removes:
- filtered = filter(lambda v: v not in removes,
- value.split())
- value = " ".join(filtered)
- if expand and var in self.expand_cache:
- # We need to ensure the expand cache has the correct value
- # flag == "_content" here
- self.expand_cache[var].value = value
+ removes.add(r)
+
+ if value and flag == "_content" and not parsing:
+ if removes and parser:
+ expanded_removes = {}
+ for r in removes:
+ expanded_removes[r] = self.expand(r).split()
+
+ parser.removes = set()
+ val = ""
+ for v in __whitespace_split__.split(parser.value):
+ skip = False
+ for r in removes:
+ if v in expanded_removes[r]:
+ parser.removes.add(r)
+ skip = True
+ if skip:
+ continue
+ val = val + v
+ parser.value = val
+ if expand:
+ value = parser.value
+
+ if parser:
+ self.expand_cache[cachename] = parser
+
+ if retparser:
+ return value, parser
+
return value
def delVarFlag(self, var, flag, **loginfo):
+ self.expand_cache = {}
if '_remote_data' in self.dict:
connector = self.dict["_remote_data"]["_content"]
res = connector.delVarFlag(var, flag)
if not res:
return
- self.expand_cache = {}
local_var, _ = self._findVar(var)
if not local_var:
return
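One subtlety in the hunks above: __whitespace_split__ uses a capturing group,
so re.split() keeps the delimiters and the new remove handling can drop words
without disturbing the surrounding whitespace. Simplified standalone sketch:

    import re
    whitespace_split = re.compile(r'(\s)')

    value = "a  b\tc"
    removes = {"b"}
    val = "".join(v for v in whitespace_split.split(value) if v not in removes)
    print(repr(val))   # 'a  \tc': whitespace around the removed word survives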
diff --git a/bitbake/lib/bb/event.py b/bitbake/lib/bb/event.py
index 5d00496..5b1b094 100644
--- a/bitbake/lib/bb/event.py
+++ b/bitbake/lib/bb/event.py
@@ -141,6 +141,9 @@ def print_ui_queue():
logger = logging.getLogger("BitBake")
if not _uiready:
from bb.msg import BBLogFormatter
+ # Flush any existing buffered content
+ sys.stdout.flush()
+ sys.stderr.flush()
stdout = logging.StreamHandler(sys.stdout)
stderr = logging.StreamHandler(sys.stderr)
formatter = BBLogFormatter("%(levelname)s: %(message)s")
@@ -395,7 +398,7 @@ class RecipeEvent(Event):
Event.__init__(self)
class RecipePreFinalise(RecipeEvent):
- """ Recipe Parsing Complete but not yet finialised"""
+ """ Recipe Parsing Complete but not yet finalised"""
class RecipeTaskPreProcess(RecipeEvent):
"""
diff --git a/bitbake/lib/bb/fetch2/__init__.py b/bitbake/lib/bb/fetch2/__init__.py
index 6bd0404..2b62b41 100644
--- a/bitbake/lib/bb/fetch2/__init__.py
+++ b/bitbake/lib/bb/fetch2/__init__.py
@@ -383,7 +383,7 @@ def decodeurl(url):
path = location
else:
host = location
- path = ""
+ path = "/"
if user:
m = re.compile('(?P<user>[^:]+)(:?(?P<pswd>.*))').match(user)
if m:
@@ -452,8 +452,8 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
# Handle URL parameters
if i:
# Any specified URL parameters must match
- for k in uri_replace_decoded[loc]:
- if uri_decoded[loc][k] != uri_replace_decoded[loc][k]:
+ for k in uri_find_decoded[loc]:
+ if uri_decoded[loc][k] != uri_find_decoded[loc][k]:
return None
# Overwrite any specified replacement parameters
for k in uri_replace_decoded[loc]:
@@ -643,26 +643,25 @@ def verify_donestamp(ud, d, origud=None):
if not ud.needdonestamp or (origud and not origud.needdonestamp):
return True
- if not os.path.exists(ud.donestamp):
+ if not os.path.exists(ud.localpath):
+ # local path does not exist
+ if os.path.exists(ud.donestamp):
+ # done stamp exists, but the downloaded file does not; the done stamp
+ # must be incorrect, re-trigger the download
+ bb.utils.remove(ud.donestamp)
return False
if (not ud.method.supports_checksum(ud) or
(origud and not origud.method.supports_checksum(origud))):
- # done stamp exists, checksums not supported; assume the local file is
- # current
- return True
-
- if not os.path.exists(ud.localpath):
- # done stamp exists, but the downloaded file does not; the done stamp
- # must be incorrect, re-trigger the download
- bb.utils.remove(ud.donestamp)
- return False
+ # if the done stamp exists and checksums are not supported, assume
+ # the local file is current
+ return os.path.exists(ud.donestamp)
precomputed_checksums = {}
# Only re-use the precomputed checksums if the donestamp is newer than the
# file. Do not rely on the mtime of directories, though. If ud.localpath is
# a directory, there will probably not be any checksums anyway.
- if (os.path.isdir(ud.localpath) or
+ if os.path.exists(ud.donestamp) and (os.path.isdir(ud.localpath) or
os.path.getmtime(ud.localpath) < os.path.getmtime(ud.donestamp)):
try:
with open(ud.donestamp, "rb") as cachefile:
@@ -838,14 +837,16 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
if not cleanup:
cleanup = []
- # If PATH contains WORKDIR which contains PV which contains SRCPV we
+ # If PATH contains WORKDIR which contains PV-PR which contains SRCPV we
# can end up in circular recursion here so give the option of breaking it
# in a data store copy.
try:
d.getVar("PV")
+ d.getVar("PR")
except bb.data_smart.ExpansionError:
d = bb.data.createCopy(d)
d.setVar("PV", "fetcheravoidrecurse")
+ d.setVar("PR", "fetcheravoidrecurse")
origenv = d.getVar("BB_ORIGENV", False)
for var in exportvars:
@@ -1017,16 +1018,7 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
origud.method.build_mirror_data(origud, ld)
return origud.localpath
# Otherwise the result is a local file:// and we symlink to it
- if not os.path.exists(origud.localpath):
- if os.path.islink(origud.localpath):
- # Broken symbolic link
- os.unlink(origud.localpath)
-
- # As per above, in case two tasks end up here simultaneously.
- try:
- os.symlink(ud.localpath, origud.localpath)
- except FileExistsError:
- pass
+ ensure_symlink(ud.localpath, origud.localpath)
update_stamp(origud, ld)
return ud.localpath
@@ -1060,6 +1052,22 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
bb.utils.unlockfile(lf)
+def ensure_symlink(target, link_name):
+ if not os.path.exists(link_name):
+ if os.path.islink(link_name):
+ # Broken symbolic link
+ os.unlink(link_name)
+
+ # In case this is executing without any file locks held (as is
+ # the case for file:// URLs), two tasks may end up here at the
+ # same time, in which case we do not want the second task to
+ # fail when the link has already been created by the first task.
+ try:
+ os.symlink(target, link_name)
+ except FileExistsError:
+ pass
+
+
def try_mirrors(fetch, d, origud, mirrors, check = False):
"""
Try to use a mirrored version of the sources.
@@ -1089,7 +1097,9 @@ def trusted_network(d, url):
return True
pkgname = d.expand(d.getVar('PN', False))
- trusted_hosts = d.getVarFlag('BB_ALLOWED_NETWORKS', pkgname, False)
+ trusted_hosts = None
+ if pkgname:
+ trusted_hosts = d.getVarFlag('BB_ALLOWED_NETWORKS', pkgname, False)
if not trusted_hosts:
trusted_hosts = d.getVar('BB_ALLOWED_NETWORKS')
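With the decodeurl() change, a URL without a path now decodes with path "/"
rather than "". Quick check (requires bitbake's lib/ on sys.path):

    from bb.fetch2 import decodeurl
    scheme, host, path, user, pswd, params = decodeurl("https://example.com")
    print(path)   # '/' after this change; previously ''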
diff --git a/bitbake/lib/bb/fetch2/bzr.py b/bitbake/lib/bb/fetch2/bzr.py
index 16123f8..658502f 100644
--- a/bitbake/lib/bb/fetch2/bzr.py
+++ b/bitbake/lib/bb/fetch2/bzr.py
@@ -41,8 +41,9 @@ class Bzr(FetchMethod):
init bzr specific variable within url data
"""
# Create paths to bzr checkouts
+ bzrdir = d.getVar("BZRDIR") or (d.getVar("DL_DIR") + "/bzr")
relpath = self._strip_leading_slashes(ud.path)
- ud.pkgdir = os.path.join(d.expand('${BZRDIR}'), ud.host, relpath)
+ ud.pkgdir = os.path.join(bzrdir, ud.host, relpath)
ud.setup_revisions(d)
@@ -57,7 +58,7 @@ class Bzr(FetchMethod):
command is "fetch", "update", "revno"
"""
- basecmd = d.expand('${FETCHCMD_bzr}')
+ basecmd = d.getVar("FETCHCMD_bzr") or "/usr/bin/env bzr"
proto = ud.parm.get('protocol', 'http')
diff --git a/bitbake/lib/bb/fetch2/clearcase.py b/bitbake/lib/bb/fetch2/clearcase.py
index 36beab6..3a6573d 100644
--- a/bitbake/lib/bb/fetch2/clearcase.py
+++ b/bitbake/lib/bb/fetch2/clearcase.py
@@ -69,7 +69,6 @@ from bb.fetch2 import FetchMethod
from bb.fetch2 import FetchError
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
-from distutils import spawn
class ClearCase(FetchMethod):
"""Class to fetch urls via 'clearcase'"""
@@ -107,7 +106,7 @@ class ClearCase(FetchMethod):
else:
ud.module = ""
- ud.basecmd = d.getVar("FETCHCMD_ccrc") or spawn.find_executable("cleartool") or spawn.find_executable("rcleartool")
+ ud.basecmd = d.getVar("FETCHCMD_ccrc") or "/usr/bin/env cleartool || rcleartool"
if d.getVar("SRCREV") == "INVALID":
raise FetchError("Set a valid SRCREV for the clearcase fetcher in your recipe, e.g. SRCREV = \"/main/LATEST\" or any other label of your choice.")
diff --git a/bitbake/lib/bb/fetch2/cvs.py b/bitbake/lib/bb/fetch2/cvs.py
index 490c954..0e0a319 100644
--- a/bitbake/lib/bb/fetch2/cvs.py
+++ b/bitbake/lib/bb/fetch2/cvs.py
@@ -110,7 +110,7 @@ class Cvs(FetchMethod):
if ud.tag:
options.append("-r %s" % ud.tag)
- cvsbasecmd = d.getVar("FETCHCMD_cvs")
+ cvsbasecmd = d.getVar("FETCHCMD_cvs") or "/usr/bin/env cvs"
cvscmd = cvsbasecmd + " '-d" + cvsroot + "' co " + " ".join(options) + " " + ud.module
cvsupdatecmd = cvsbasecmd + " '-d" + cvsroot + "' update -d -P " + " ".join(options)
@@ -121,7 +121,8 @@ class Cvs(FetchMethod):
# create module directory
logger.debug(2, "Fetch: checking for module directory")
pkg = d.getVar('PN')
- pkgdir = os.path.join(d.getVar('CVSDIR'), pkg)
+ cvsdir = d.getVar("CVSDIR") or (d.getVar("DL_DIR") + "/cvs")
+ pkgdir = os.path.join(cvsdir, pkg)
moddir = os.path.join(pkgdir, localdir)
workdir = None
if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
diff --git a/bitbake/lib/bb/fetch2/git.py b/bitbake/lib/bb/fetch2/git.py
index d34ea1d..15858a6 100644
--- a/bitbake/lib/bb/fetch2/git.py
+++ b/bitbake/lib/bb/fetch2/git.py
@@ -125,6 +125,9 @@ class GitProgressHandler(bb.progress.LineFilterProgressHandler):
class Git(FetchMethod):
+ bitbake_dir = os.path.abspath(os.path.join(os.path.dirname(os.path.join(os.path.abspath(__file__))), '..', '..', '..'))
+ make_shallow_path = os.path.join(bitbake_dir, 'bin', 'git-make-shallow')
+
"""Class to fetch a module or modules from git repositories"""
def init(self, d):
pass
@@ -258,7 +261,7 @@ class Git(FetchMethod):
gitsrcname = gitsrcname + '_' + ud.revisions[name]
dl_dir = d.getVar("DL_DIR")
- gitdir = d.getVar("GITDIR") or (dl_dir + "/git2/")
+ gitdir = d.getVar("GITDIR") or (dl_dir + "/git2")
ud.clonedir = os.path.join(gitdir, gitsrcname)
ud.localfile = ud.clonedir
@@ -296,17 +299,22 @@ class Git(FetchMethod):
return ud.clonedir
def need_update(self, ud, d):
+ return self.clonedir_need_update(ud, d) or self.shallow_tarball_need_update(ud) or self.tarball_need_update(ud)
+
+ def clonedir_need_update(self, ud, d):
if not os.path.exists(ud.clonedir):
return True
for name in ud.names:
if not self._contains_ref(ud, d, name, ud.clonedir):
return True
- if ud.shallow and ud.write_shallow_tarballs and not os.path.exists(ud.fullshallow):
- return True
- if ud.write_tarballs and not os.path.exists(ud.fullmirror):
- return True
return False
+ def shallow_tarball_need_update(self, ud):
+ return ud.shallow and ud.write_shallow_tarballs and not os.path.exists(ud.fullshallow)
+
+ def tarball_need_update(self, ud):
+ return ud.write_tarballs and not os.path.exists(ud.fullmirror)
+
def try_premirror(self, ud, d):
# If we don't do this, updating an existing checkout with only premirrors
# is not possible
@@ -319,16 +327,13 @@ class Git(FetchMethod):
def download(self, ud, d):
"""Fetch url"""
- no_clone = not os.path.exists(ud.clonedir)
- need_update = no_clone or self.need_update(ud, d)
-
# A current clone is preferred to either tarball, a shallow tarball is
# preferred to an out of date clone, and a missing clone will use
# either tarball.
- if ud.shallow and os.path.exists(ud.fullshallow) and need_update:
+ if ud.shallow and os.path.exists(ud.fullshallow) and self.need_update(ud, d):
ud.localpath = ud.fullshallow
return
- elif os.path.exists(ud.fullmirror) and no_clone:
+ elif os.path.exists(ud.fullmirror) and not os.path.exists(ud.clonedir):
bb.utils.mkdirhier(ud.clonedir)
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir)
@@ -350,11 +355,12 @@ class Git(FetchMethod):
for name in ud.names:
if not self._contains_ref(ud, d, name, ud.clonedir):
needupdate = True
+ break
+
if needupdate:
- try:
- runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir)
- except bb.fetch2.FetchError:
- logger.debug(1, "No Origin")
+ output = runfetchcmd("%s remote" % ud.basecmd, d, quiet=True, workdir=ud.clonedir)
+ if "origin" in output:
+ runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir)
runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d, workdir=ud.clonedir)
fetch_cmd = "LANG=C %s fetch -f --prune --progress %s refs/*:refs/*" % (ud.basecmd, repourl)
@@ -370,6 +376,7 @@ class Git(FetchMethod):
except OSError as exc:
if exc.errno != errno.ENOENT:
raise
+
for name in ud.names:
if not self._contains_ref(ud, d, name, ud.clonedir):
raise bb.fetch2.FetchError("Unable to find revision %s in branch %s even from upstream" % (ud.revisions[name], ud.branches[name]))
@@ -446,7 +453,7 @@ class Git(FetchMethod):
shallow_branches.append(r)
# Make the repository shallow
- shallow_cmd = ['git', 'make-shallow', '-s']
+ shallow_cmd = [self.make_shallow_path, '-s']
for b in shallow_branches:
shallow_cmd.append('-r')
shallow_cmd.append(b)
@@ -469,11 +476,27 @@ class Git(FetchMethod):
if os.path.exists(destdir):
bb.utils.prunedir(destdir)
- if ud.shallow and (not os.path.exists(ud.clonedir) or self.need_update(ud, d)):
- bb.utils.mkdirhier(destdir)
- runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=destdir)
- else:
- runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
+ source_found = False
+ source_error = []
+
+ if not source_found:
+ clonedir_is_up_to_date = not self.clonedir_need_update(ud, d)
+ if clonedir_is_up_to_date:
+ runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
+ source_found = True
+ else:
+ source_error.append("clone directory not available or not up to date: " + ud.clonedir)
+
+ if not source_found:
+ if ud.shallow and os.path.exists(ud.fullshallow):
+ bb.utils.mkdirhier(destdir)
+ runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=destdir)
+ source_found = True
+ else:
+ source_error.append("shallow clone not enabled or not available: " + ud.fullshallow)
+
+ if not source_found:
+ raise bb.fetch2.UnpackError("No up to date source found: " + "; ".join(source_error), ud.url)
repourl = self._get_repo_url(ud)
runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, repourl), d, workdir=destdir)
@@ -592,7 +615,8 @@ class Git(FetchMethod):
tagregex = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX') or "(?P<pver>([0-9][\.|_]?)+)")
try:
output = self._lsremote(ud, d, "refs/tags/*")
- except bb.fetch2.FetchError or bb.fetch2.NetworkAccess:
+ except (bb.fetch2.FetchError, bb.fetch2.NetworkAccess) as e:
+ bb.note("Could not list remote: %s" % str(e))
return pupver
verstring = ""
diff --git a/bitbake/lib/bb/fetch2/gitsm.py b/bitbake/lib/bb/fetch2/gitsm.py
index 0aff100..0a982da 100644
--- a/bitbake/lib/bb/fetch2/gitsm.py
+++ b/bitbake/lib/bb/fetch2/gitsm.py
@@ -31,9 +31,12 @@ NOTE: Switching a SRC_URI from "git://" to "gitsm://" requires a clean of your r
import os
import bb
+import copy
from bb.fetch2.git import Git
from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
+from bb.fetch2 import Fetch
+from bb.fetch2 import BBFetchException
class GitSM(Git):
def supports(self, ud, d):
@@ -42,94 +45,207 @@ class GitSM(Git):
"""
return ud.type in ['gitsm']
- def uses_submodules(self, ud, d, wd):
- for name in ud.names:
- try:
- runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revisions[name]), d, quiet=True, workdir=wd)
- return True
- except bb.fetch.FetchError:
- pass
- return False
+ @staticmethod
+ def parse_gitmodules(gitmodules):
+ modules = {}
+ module = ""
+ for line in gitmodules.splitlines():
+ if line.startswith('[submodule'):
+ module = line.split('"')[1]
+ modules[module] = {}
+ elif module and line.strip().startswith('path'):
+ path = line.split('=')[1].strip()
+ modules[module]['path'] = path
+ elif module and line.strip().startswith('url'):
+ url = line.split('=')[1].strip()
+ modules[module]['url'] = url
+ return modules
- def _set_relative_paths(self, repopath):
- """
- Fix submodule paths to be relative instead of absolute,
- so that when we move the repo it doesn't break
- (In Git 1.7.10+ this is done automatically)
- """
+ def update_submodules(self, ud, d):
submodules = []
- with open(os.path.join(repopath, '.gitmodules'), 'r') as f:
- for line in f.readlines():
- if line.startswith('[submodule'):
- submodules.append(line.split('"')[1])
+ paths = {}
+ uris = {}
+ local_paths = {}
- for module in submodules:
- repo_conf = os.path.join(repopath, module, '.git')
- if os.path.exists(repo_conf):
- with open(repo_conf, 'r') as f:
- lines = f.readlines()
- newpath = ''
- for i, line in enumerate(lines):
- if line.startswith('gitdir:'):
- oldpath = line.split(': ')[-1].rstrip()
- if oldpath.startswith('/'):
- newpath = '../' * (module.count('/') + 1) + '.git/modules/' + module
- lines[i] = 'gitdir: %s\n' % newpath
- break
- if newpath:
- with open(repo_conf, 'w') as f:
- for line in lines:
- f.write(line)
-
- repo_conf2 = os.path.join(repopath, '.git', 'modules', module, 'config')
- if os.path.exists(repo_conf2):
- with open(repo_conf2, 'r') as f:
- lines = f.readlines()
- newpath = ''
- for i, line in enumerate(lines):
- if line.lstrip().startswith('worktree = '):
- oldpath = line.split(' = ')[-1].rstrip()
- if oldpath.startswith('/'):
- newpath = '../' * (module.count('/') + 3) + module
- lines[i] = '\tworktree = %s\n' % newpath
- break
- if newpath:
- with open(repo_conf2, 'w') as f:
- for line in lines:
- f.write(line)
+ for name in ud.names:
+ try:
+ gitmodules = runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revisions[name]), d, quiet=True, workdir=ud.clonedir)
+ except:
+ # No submodules to update
+ continue
+
+ for m, md in self.parse_gitmodules(gitmodules).items():
+ submodules.append(m)
+ paths[m] = md['path']
+ uris[m] = md['url']
+ if uris[m].startswith('..'):
+ newud = copy.copy(ud)
+ newud.path = os.path.realpath(os.path.join(newud.path, md['url']))
+ uris[m] = Git._get_repo_url(self, newud)
- def update_submodules(self, ud, d):
- # We have to convert bare -> full repo, do the submodule bit, then convert back
- tmpclonedir = ud.clonedir + ".tmp"
- gitdir = tmpclonedir + os.sep + ".git"
- bb.utils.remove(tmpclonedir, True)
- os.mkdir(tmpclonedir)
- os.rename(ud.clonedir, gitdir)
- runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*true/bare = false/'", d)
- runfetchcmd(ud.basecmd + " reset --hard", d, workdir=tmpclonedir)
- runfetchcmd(ud.basecmd + " checkout -f " + ud.revisions[ud.names[0]], d, workdir=tmpclonedir)
- runfetchcmd(ud.basecmd + " submodule update --init --recursive", d, workdir=tmpclonedir)
- self._set_relative_paths(tmpclonedir)
- runfetchcmd("sed " + gitdir + "/config -i -e 's/bare.*=.*false/bare = true/'", d, workdir=tmpclonedir)
- os.rename(gitdir, ud.clonedir,)
- bb.utils.remove(tmpclonedir, True)
+ for module in submodules:
+ module_hash = runfetchcmd("%s ls-tree -z -d %s %s" % (ud.basecmd, ud.revisions[name], paths[module]), d, quiet=True, workdir=ud.clonedir)
+ module_hash = module_hash.split()[2]
+
+ # Build new SRC_URI
+ proto = uris[module].split(':', 1)[0]
+ url = uris[module].replace('%s:' % proto, 'gitsm:', 1)
+ url += ';protocol=%s' % proto
+ url += ";name=%s" % module
+ url += ";bareclone=1;nocheckout=1"
+
+ ld = d.createCopy()
+ # Not necessary to set SRC_URI, since we're passing the URI to
+ # Fetch.
+ #ld.setVar('SRC_URI', url)
+ ld.setVar('SRCREV_%s' % module, module_hash)
+
+ # Workaround for issues with SRCPV/SRCREV_FORMAT errors: the errors
+ # refer to 'multiple' repositories. Only the repository
+ # in the original SRC_URI actually matters...
+ ld.setVar('SRCPV', d.getVar('SRCPV'))
+ ld.setVar('SRCREV_FORMAT', module)
+
+ newfetch = Fetch([url], ld, cache=False)
+ newfetch.download()
+ local_paths[module] = newfetch.localpath(url)
+
+ # Correct the submodule references to the local download version...
+ runfetchcmd("%(basecmd)s config submodule.%(module)s.url %(url)s" % {'basecmd': ud.basecmd, 'module': module, 'url' : local_paths[module]}, d, workdir=ud.clonedir)
+
+ symlink_path = os.path.join(ud.clonedir, 'modules', paths[module])
+ if not os.path.exists(symlink_path):
+ try:
+ os.makedirs(os.path.dirname(symlink_path), exist_ok=True)
+ except OSError:
+ pass
+ os.symlink(local_paths[module], symlink_path)
+
+ return True
+
+ def need_update(self, ud, d):
+ main_repo_needs_update = Git.need_update(self, ud, d)
+
+ # First check that the main repository has enough history fetched. If it doesn't, we don't
+ # even have the .gitmodules and gitlinks for the submodules, so we can't ask whether their
+ # histories are recent enough.
+ if main_repo_needs_update:
+ return True
+
+ # Now check that the submodule histories are new enough. The git-submodule command doesn't have
+ # any clean interface for doing this aside from just attempting the checkout (with network
+ # fetching disabled).
+ return not self.update_submodules(ud, d)
def download(self, ud, d):
Git.download(self, ud, d)
if not ud.shallow or ud.localpath != ud.fullshallow:
- submodules = self.uses_submodules(ud, d, ud.clonedir)
- if submodules:
- self.update_submodules(ud, d)
+ self.update_submodules(ud, d)
+
+ def copy_submodules(self, submodules, ud, destdir, d):
+ if ud.bareclone:
+ repo_conf = destdir
+ else:
+ repo_conf = os.path.join(destdir, '.git')
+
+ if submodules and not os.path.exists(os.path.join(repo_conf, 'modules')):
+ os.mkdir(os.path.join(repo_conf, 'modules'))
+
+ for module in submodules:
+ srcpath = os.path.join(ud.clonedir, 'modules', module)
+ modpath = os.path.join(repo_conf, 'modules', module)
+
+ if os.path.exists(srcpath):
+ if os.path.exists(os.path.join(srcpath, '.git')):
+ srcpath = os.path.join(srcpath, '.git')
+
+ target = modpath
+ if os.path.exists(modpath):
+ target = os.path.dirname(modpath)
+
+ os.makedirs(os.path.dirname(target), exist_ok=True)
+ runfetchcmd("cp -fpLR %s %s" % (srcpath, target), d)
+ elif os.path.exists(modpath):
+ # Module already exists, likely unpacked from a shallow mirror clone
+ pass
+ else:
+ # This is fatal, as we do NOT want git-submodule to hit the network
+ raise bb.fetch2.FetchError('Submodule %s does not exist in %s or %s.' % (module, srcpath, modpath))
def clone_shallow_local(self, ud, dest, d):
super(GitSM, self).clone_shallow_local(ud, dest, d)
- runfetchcmd('cp -fpPRH "%s/modules" "%s/"' % (ud.clonedir, os.path.join(dest, '.git')), d)
+ # Copy over the submodules' fetched histories too.
+ repo_conf = os.path.join(dest, '.git')
+
+ submodules = []
+ for name in ud.names:
+ try:
+ gitmodules = runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd, ud.revision), d, quiet=True, workdir=dest)
+ except:
+ # No submodules to update
+ continue
+
+ submodules = list(self.parse_gitmodules(gitmodules).keys())
+
+ self.copy_submodules(submodules, ud, dest, d)
def unpack(self, ud, destdir, d):
Git.unpack(self, ud, destdir, d)
- if self.uses_submodules(ud, d, ud.destdir):
- runfetchcmd(ud.basecmd + " checkout " + ud.revisions[ud.names[0]], d, workdir=ud.destdir)
- runfetchcmd(ud.basecmd + " submodule update --init --recursive", d, workdir=ud.destdir)
+ # Copy over the submodules' fetched histories too.
+ if ud.bareclone:
+ repo_conf = ud.destdir
+ else:
+ repo_conf = os.path.join(ud.destdir, '.git')
+
+ submodules = []
+ paths = {}
+ uris = {}
+ local_paths = {}
+ for name in ud.names:
+ try:
+ gitmodules = runfetchcmd("%s show HEAD:.gitmodules" % (ud.basecmd), d, quiet=True, workdir=ud.destdir)
+ except:
+ # No submodules to update
+ continue
+
+ for m, md in self.parse_gitmodules(gitmodules).items():
+ submodules.append(m)
+ paths[m] = md['path']
+ uris[m] = md['url']
+
+ self.copy_submodules(submodules, ud, ud.destdir, d)
+
+ submodules_queue = [(module, os.path.join(repo_conf, 'modules', module)) for module in submodules]
+ while len(submodules_queue) != 0:
+ module, modpath = submodules_queue.pop()
+
+ # add submodule children recursively
+ try:
+ gitmodules = runfetchcmd("%s show HEAD:.gitmodules" % (ud.basecmd), d, quiet=True, workdir=modpath)
+ for m, md in self.parse_gitmodules(gitmodules).items():
+ submodules_queue.append([m, os.path.join(modpath, 'modules', m)])
+ except:
+ # no children
+ pass
+
+ # Determine (from the submodule) the correct url to reference
+ try:
+ output = runfetchcmd("%(basecmd)s config remote.origin.url" % {'basecmd': ud.basecmd}, d, workdir=modpath)
+ except bb.fetch2.FetchError as e:
+ # No remote url defined in this submodule
+ continue
+
+ local_paths[module] = output
+
+ # Set up the local URL properly (like git submodule init or sync would do...)
+ runfetchcmd("%(basecmd)s config submodule.%(module)s.url %(url)s" % {'basecmd': ud.basecmd, 'module': module, 'url' : local_paths[module]}, d, workdir=ud.destdir)
+
+ # Ensure the submodule repository is NOT set to bare, since we're checking it out...
+ runfetchcmd("%s config core.bare false" % (ud.basecmd), d, quiet=True, workdir=modpath)
+
+ if submodules:
+ # Run submodule update; this sets up the directories without touching the config
+ runfetchcmd("%s submodule update --recursive --no-fetch" % (ud.basecmd), d, quiet=True, workdir=ud.destdir)
diff --git a/bitbake/lib/bb/fetch2/hg.py b/bitbake/lib/bb/fetch2/hg.py
index d0857e6..936d043 100644
--- a/bitbake/lib/bb/fetch2/hg.py
+++ b/bitbake/lib/bb/fetch2/hg.py
@@ -80,7 +80,7 @@ class Hg(FetchMethod):
ud.fullmirror = os.path.join(d.getVar("DL_DIR"), mirrortarball)
ud.mirrortarballs = [mirrortarball]
- hgdir = d.getVar("HGDIR") or (d.getVar("DL_DIR") + "/hg/")
+ hgdir = d.getVar("HGDIR") or (d.getVar("DL_DIR") + "/hg")
ud.pkgdir = os.path.join(hgdir, hgsrcname)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
ud.localfile = ud.moddir
diff --git a/bitbake/lib/bb/fetch2/npm.py b/bitbake/lib/bb/fetch2/npm.py
index b5f148c..408dfc3 100644
--- a/bitbake/lib/bb/fetch2/npm.py
+++ b/bitbake/lib/bb/fetch2/npm.py
@@ -32,7 +32,6 @@ from bb.fetch2 import runfetchcmd
from bb.fetch2 import logger
from bb.fetch2 import UnpackError
from bb.fetch2 import ParameterError
-from distutils import spawn
def subprocess_setup():
# Python installs a SIGPIPE handler by default. This is usually not what
@@ -195,9 +194,11 @@ class Npm(FetchMethod):
outputurl = pdata['dist']['tarball']
data[pkg] = {}
data[pkg]['tgz'] = os.path.basename(outputurl)
- if not outputurl in fetchedlist:
- self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False)
- fetchedlist.append(outputurl)
+ if outputurl in fetchedlist:
+ return
+
+ self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False)
+ fetchedlist.append(outputurl)
dependencies = pdata.get('dependencies', {})
optionalDependencies = pdata.get('optionalDependencies', {})
diff --git a/bitbake/lib/bb/fetch2/osc.py b/bitbake/lib/bb/fetch2/osc.py
index 2b4f7d9..6c60456 100644
--- a/bitbake/lib/bb/fetch2/osc.py
+++ b/bitbake/lib/bb/fetch2/osc.py
@@ -32,8 +32,9 @@ class Osc(FetchMethod):
ud.module = ud.parm["module"]
# Create paths to osc checkouts
+ oscdir = d.getVar("OSCDIR") or (d.getVar("DL_DIR") + "/osc")
relpath = self._strip_leading_slashes(ud.path)
- ud.pkgdir = os.path.join(d.getVar('OSCDIR'), ud.host)
+ ud.pkgdir = os.path.join(oscdir, ud.host)
ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)
if 'rev' in ud.parm:
@@ -54,7 +55,7 @@ class Osc(FetchMethod):
command is "fetch", "update", "info"
"""
- basecmd = d.expand('${FETCHCMD_osc}')
+ basecmd = d.getVar("FETCHCMD_osc") or "/usr/bin/env osc"
proto = ud.parm.get('protocol', 'ocs')
diff --git a/bitbake/lib/bb/fetch2/perforce.py b/bitbake/lib/bb/fetch2/perforce.py
index 3debad5..903a8e6 100644
--- a/bitbake/lib/bb/fetch2/perforce.py
+++ b/bitbake/lib/bb/fetch2/perforce.py
@@ -43,13 +43,9 @@ class Perforce(FetchMethod):
provided by the env, use it. If P4PORT is specified by the recipe, use
its values, which may override the settings in P4CONFIG.
"""
- ud.basecmd = d.getVar('FETCHCMD_p4')
- if not ud.basecmd:
- ud.basecmd = "/usr/bin/env p4"
+ ud.basecmd = d.getVar("FETCHCMD_p4") or "/usr/bin/env p4"
- ud.dldir = d.getVar('P4DIR')
- if not ud.dldir:
- ud.dldir = '%s/%s' % (d.getVar('DL_DIR'), 'p4')
+ ud.dldir = d.getVar("P4DIR") or (d.getVar("DL_DIR") + "/p4")
path = ud.url.split('://')[1]
path = path.split(';')[0]
diff --git a/bitbake/lib/bb/fetch2/repo.py b/bitbake/lib/bb/fetch2/repo.py
index c22d9b5..8c7e818 100644
--- a/bitbake/lib/bb/fetch2/repo.py
+++ b/bitbake/lib/bb/fetch2/repo.py
@@ -45,6 +45,8 @@ class Repo(FetchMethod):
"master".
"""
+ ud.basecmd = d.getVar("FETCHCMD_repo") or "/usr/bin/env repo"
+
ud.proto = ud.parm.get('protocol', 'git')
ud.branch = ud.parm.get('branch', 'master')
ud.manifest = ud.parm.get('manifest', 'default.xml')
@@ -60,8 +62,8 @@ class Repo(FetchMethod):
logger.debug(1, "%s already exists (or was stashed). Skipping repo init / sync.", ud.localpath)
return
+ repodir = d.getVar("REPODIR") or (d.getVar("DL_DIR") + "/repo")
gitsrcname = "%s%s" % (ud.host, ud.path.replace("/", "."))
- repodir = d.getVar("REPODIR") or os.path.join(d.getVar("DL_DIR"), "repo")
codir = os.path.join(repodir, gitsrcname, ud.manifest)
if ud.user:
@@ -72,11 +74,11 @@ class Repo(FetchMethod):
repodir = os.path.join(codir, "repo")
bb.utils.mkdirhier(repodir)
if not os.path.exists(os.path.join(repodir, ".repo")):
- bb.fetch2.check_network_access(d, "repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), ud.url)
- runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d, workdir=repodir)
+ bb.fetch2.check_network_access(d, "%s init -m %s -b %s -u %s://%s%s%s" % (ud.basecmd, ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), ud.url)
+ runfetchcmd("%s init -m %s -b %s -u %s://%s%s%s" % (ud.basecmd, ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d, workdir=repodir)
- bb.fetch2.check_network_access(d, "repo sync %s" % ud.url, ud.url)
- runfetchcmd("repo sync", d, workdir=repodir)
+ bb.fetch2.check_network_access(d, "%s sync %s" % (ud.basecmd, ud.url), ud.url)
+ runfetchcmd("%s sync" % ud.basecmd, d, workdir=repodir)
scmdata = ud.parm.get("scmdata", "")
if scmdata == "keep":
diff --git a/bitbake/lib/bb/fetch2/svn.py b/bitbake/lib/bb/fetch2/svn.py
index 3f172ee..ed70bcf 100644
--- a/bitbake/lib/bb/fetch2/svn.py
+++ b/bitbake/lib/bb/fetch2/svn.py
@@ -49,7 +49,7 @@ class Svn(FetchMethod):
if not "module" in ud.parm:
raise MissingParameterError('module', ud.url)
- ud.basecmd = d.getVar('FETCHCMD_svn')
+ ud.basecmd = d.getVar("FETCHCMD_svn") or "/usr/bin/env svn --non-interactive --trust-server-cert"
ud.module = ud.parm["module"]
@@ -59,8 +59,9 @@ class Svn(FetchMethod):
ud.path_spec = ud.parm["path_spec"]
# Create paths to svn checkouts
+ svndir = d.getVar("SVNDIR") or (d.getVar("DL_DIR") + "/svn")
relpath = self._strip_leading_slashes(ud.path)
- ud.pkgdir = os.path.join(d.expand('${SVNDIR}'), ud.host, relpath)
+ ud.pkgdir = os.path.join(svndir, ud.host, relpath)
ud.moddir = os.path.join(ud.pkgdir, ud.module)
ud.setup_revisions(d)
diff --git a/bitbake/lib/bb/main.py b/bitbake/lib/bb/main.py
index 7711b29..732a315 100755
--- a/bitbake/lib/bb/main.py
+++ b/bitbake/lib/bb/main.py
@@ -292,8 +292,12 @@ class BitBakeConfigParameters(cookerdata.ConfigParameters):
help="Writes the event log of the build to a bitbake event json file. "
"Use '' (empty string) to assign the name automatically.")
- parser.add_option("", "--runall", action="store", dest="runall",
- help="Run the specified task for all build targets and their dependencies.")
+ parser.add_option("", "--runall", action="append", dest="runall",
+ help="Run the specified task for any recipe in the taskgraph of the specified target (even if it wouldn't otherwise have run).")
+
+ parser.add_option("", "--runonly", action="append", dest="runonly",
+ help="Run only the specified task within the taskgraph of the specified targets (and any task dependencies those tasks may have).")
+
options, targets = parser.parse_args(argv)
@@ -401,9 +405,6 @@ def setup_bitbake(configParams, configuration, extrafeatures=None):
# In status only mode there are no logs and no UI
logger.addHandler(handler)
- # Clear away any spurious environment variables while we stoke up the cooker
- cleanedvars = bb.utils.clean_environment()
-
if configParams.server_only:
featureset = []
ui_module = None
@@ -419,6 +420,10 @@ def setup_bitbake(configParams, configuration, extrafeatures=None):
server_connection = None
+ # Clear away any spurious environment variables while we stoke up the cooker
+ # (done after import_extension_module() above since for example import gi triggers env var usage)
+ cleanedvars = bb.utils.clean_environment()
+
if configParams.remote_server:
# Connect to a remote XMLRPC server
server_connection = bb.server.xmlrpcclient.connectXMLRPC(configParams.remote_server, featureset,
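Example invocations for the reworked options (target name illustrative):

    bitbake --runall=do_fetch core-image-minimal    # do_fetch for every recipe in the task graph
    bitbake --runonly=do_fetch core-image-minimal   # only do_fetch tasks, plus their task dependencies

Since both options now append, they may be passed multiple times.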
diff --git a/bitbake/lib/bb/msg.py b/bitbake/lib/bb/msg.py
index f1723be..96f077e 100644
--- a/bitbake/lib/bb/msg.py
+++ b/bitbake/lib/bb/msg.py
@@ -40,6 +40,7 @@ class BBLogFormatter(logging.Formatter):
VERBOSE = logging.INFO - 1
NOTE = logging.INFO
PLAIN = logging.INFO + 1
+ VERBNOTE = logging.INFO + 2
ERROR = logging.ERROR
WARNING = logging.WARNING
CRITICAL = logging.CRITICAL
@@ -51,6 +52,7 @@ class BBLogFormatter(logging.Formatter):
VERBOSE: 'NOTE',
NOTE : 'NOTE',
PLAIN : '',
+ VERBNOTE: 'NOTE',
WARNING : 'WARNING',
ERROR : 'ERROR',
CRITICAL: 'ERROR',
@@ -66,6 +68,7 @@ class BBLogFormatter(logging.Formatter):
VERBOSE : BASECOLOR,
NOTE : BASECOLOR,
PLAIN : BASECOLOR,
+ VERBNOTE: BASECOLOR,
WARNING : YELLOW,
ERROR : RED,
CRITICAL: RED,
diff --git a/bitbake/lib/bb/parse/__init__.py b/bitbake/lib/bb/parse/__init__.py
index 2fc4002..5397d57 100644
--- a/bitbake/lib/bb/parse/__init__.py
+++ b/bitbake/lib/bb/parse/__init__.py
@@ -134,8 +134,9 @@ def resolve_file(fn, d):
if not newfn:
raise IOError(errno.ENOENT, "file %s not found in %s" % (fn, bbpath))
fn = newfn
+ else:
+ mark_dependency(d, fn)
- mark_dependency(d, fn)
if not os.path.isfile(fn):
raise IOError(errno.ENOENT, "file %s not found" % fn)
diff --git a/bitbake/lib/bb/parse/ast.py b/bitbake/lib/bb/parse/ast.py
index dba4540..9d20c32 100644
--- a/bitbake/lib/bb/parse/ast.py
+++ b/bitbake/lib/bb/parse/ast.py
@@ -335,35 +335,39 @@ def handleInherit(statements, filename, lineno, m):
classes = m.group(1)
statements.append(InheritNode(filename, lineno, classes))
-def finalize(fn, d, variant = None):
- saved_handlers = bb.event.get_handlers().copy()
-
- for var in d.getVar('__BBHANDLERS', False) or []:
- # try to add the handler
- handlerfn = d.getVarFlag(var, "filename", False)
- if not handlerfn:
- bb.fatal("Undefined event handler function '%s'" % var)
- handlerln = int(d.getVarFlag(var, "lineno", False))
- bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
-
- bb.event.fire(bb.event.RecipePreFinalise(fn), d)
-
- bb.data.expandKeys(d)
+def runAnonFuncs(d):
code = []
for funcname in d.getVar("__BBANONFUNCS", False) or []:
code.append("%s(d)" % funcname)
bb.utils.better_exec("\n".join(code), {"d": d})
- tasklist = d.getVar('__BBTASKS', False) or []
- bb.event.fire(bb.event.RecipeTaskPreProcess(fn, list(tasklist)), d)
- bb.build.add_tasks(tasklist, d)
+def finalize(fn, d, variant = None):
+ saved_handlers = bb.event.get_handlers().copy()
+ try:
+ for var in d.getVar('__BBHANDLERS', False) or []:
+ # try to add the handler
+ handlerfn = d.getVarFlag(var, "filename", False)
+ if not handlerfn:
+ bb.fatal("Undefined event handler function '%s'" % var)
+ handlerln = int(d.getVarFlag(var, "lineno", False))
+ bb.event.register(var, d.getVar(var, False), (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
+
+ bb.event.fire(bb.event.RecipePreFinalise(fn), d)
+
+ bb.data.expandKeys(d)
+ runAnonFuncs(d)
+
+ tasklist = d.getVar('__BBTASKS', False) or []
+ bb.event.fire(bb.event.RecipeTaskPreProcess(fn, list(tasklist)), d)
+ bb.build.add_tasks(tasklist, d)
- bb.parse.siggen.finalise(fn, d, variant)
+ bb.parse.siggen.finalise(fn, d, variant)
- d.setVar('BBINCLUDED', bb.parse.get_file_depends(d))
+ d.setVar('BBINCLUDED', bb.parse.get_file_depends(d))
- bb.event.fire(bb.event.RecipeParsed(fn), d)
- bb.event.set_handlers(saved_handlers)
+ bb.event.fire(bb.event.RecipeParsed(fn), d)
+ finally:
+ bb.event.set_handlers(saved_handlers)
def _create_variants(datastores, names, function, onlyfinalise):
def create_variant(name, orig_d, arg = None):
diff --git a/bitbake/lib/bb/parse/parse_py/BBHandler.py b/bitbake/lib/bb/parse/parse_py/BBHandler.py
index f89ad24..e5039e3 100644
--- a/bitbake/lib/bb/parse/parse_py/BBHandler.py
+++ b/bitbake/lib/bb/parse/parse_py/BBHandler.py
@@ -131,9 +131,6 @@ def handle(fn, d, include):
abs_fn = resolve_file(fn, d)
- if include:
- bb.parse.mark_dependency(d, abs_fn)
-
# actual loading
statements = get_statements(fn, abs_fn, base_name)
diff --git a/bitbake/lib/bb/parse/parse_py/ConfHandler.py b/bitbake/lib/bb/parse/parse_py/ConfHandler.py
index 97aa130..9d3ebe1 100644
--- a/bitbake/lib/bb/parse/parse_py/ConfHandler.py
+++ b/bitbake/lib/bb/parse/parse_py/ConfHandler.py
@@ -134,9 +134,6 @@ def handle(fn, data, include):
abs_fn = resolve_file(fn, data)
f = open(abs_fn, 'r')
- if include:
- bb.parse.mark_dependency(data, abs_fn)
-
statements = ast.StatementGroup()
lineno = 0
while True:
diff --git a/bitbake/lib/bb/runqueue.py b/bitbake/lib/bb/runqueue.py
index b7be102..9ce06c4 100644
--- a/bitbake/lib/bb/runqueue.py
+++ b/bitbake/lib/bb/runqueue.py
@@ -94,13 +94,13 @@ class RunQueueStats:
self.active = self.active - 1
self.failed = self.failed + 1
- def taskCompleted(self, number = 1):
- self.active = self.active - number
- self.completed = self.completed + number
+ def taskCompleted(self):
+ self.active = self.active - 1
+ self.completed = self.completed + 1
- def taskSkipped(self, number = 1):
- self.active = self.active + number
- self.skipped = self.skipped + number
+ def taskSkipped(self):
+ self.active = self.active + 1
+ self.skipped = self.skipped + 1
def taskActive(self):
self.active = self.active + 1
@@ -134,6 +134,7 @@ class RunQueueScheduler(object):
self.prio_map = [self.rqdata.runtaskentries.keys()]
self.buildable = []
+ self.skip_maxthread = {}
self.stamps = {}
for tid in self.rqdata.runtaskentries:
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
@@ -150,8 +151,25 @@ class RunQueueScheduler(object):
self.buildable = [x for x in self.buildable if x not in self.rq.runq_running]
if not self.buildable:
return None
+
+ # Skip tasks whose per-task thread limit (number_threads) has been reached
+ skip_buildable = {}
+ for running in self.rq.runq_running.difference(self.rq.runq_complete):
+ rtaskname = taskname_from_tid(running)
+ if rtaskname not in self.skip_maxthread:
+ self.skip_maxthread[rtaskname] = self.rq.cfgData.getVarFlag(rtaskname, "number_threads")
+ if not self.skip_maxthread[rtaskname]:
+ continue
+ if rtaskname in skip_buildable:
+ skip_buildable[rtaskname] += 1
+ else:
+ skip_buildable[rtaskname] = 1
+
if len(self.buildable) == 1:
tid = self.buildable[0]
+ taskname = taskname_from_tid(tid)
+ if taskname in skip_buildable and skip_buildable[taskname] >= int(self.skip_maxthread[taskname]):
+ return None
stamp = self.stamps[tid]
if stamp not in self.rq.build_stamps.values():
return tid
@@ -164,6 +182,9 @@ class RunQueueScheduler(object):
best = None
bestprio = None
for tid in self.buildable:
+ taskname = taskname_from_tid(tid)
+ if taskname in skip_buildable and skip_buildable[taskname] >= int(self.skip_maxthread[taskname]):
+ continue
prio = self.rev_prio_map[tid]
if bestprio is None or bestprio > prio:
stamp = self.stamps[tid]
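The skip logic above reads a per-task "number_threads" varflag from the global
configuration, capping how many instances of one task type run at once,
independently of the overall thread limit. Illustrative setting (value made up):

    do_fetch[number_threads] = "2"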
@@ -178,7 +199,7 @@ class RunQueueScheduler(object):
"""
Return the id of the task we should build next
"""
- if self.rq.stats.active < self.rq.number_tasks:
+ if self.rq.can_start_task():
return self.next_buildable_task()
def newbuildable(self, task):
@@ -581,11 +602,18 @@ class RunQueueData:
if t in taskData[mc].taskentries:
depends.add(t)
- def add_resolved_dependencies(mc, fn, tasknames, depends):
- for taskname in tasknames:
- tid = build_tid(mc, fn, taskname)
- if tid in self.runtaskentries:
- depends.add(tid)
+ def add_mc_dependencies(mc, tid):
+ mcdeps = taskData[mc].get_mcdepends()
+ for dep in mcdeps:
+ mcdependency = dep.split(':')
+ pn = mcdependency[3]
+ frommc = mcdependency[1]
+ mcdep = mcdependency[2]
+ deptask = mcdependency[4]
+ if mc == frommc:
+ fn = taskData[mcdep].build_targets[pn][0]
+ newdep = '%s:%s' % (fn,deptask)
+ taskData[mc].taskentries[tid].tdepends.append(newdep)
for mc in taskData:
for tid in taskData[mc].taskentries:
@@ -603,12 +631,16 @@ class RunQueueData:
if fn in taskData[mc].failed_fns:
continue
+ # We add multiconfig dependencies before processing internal task deps (tdepends)
+ if 'mcdepends' in task_deps and taskname in task_deps['mcdepends']:
+ add_mc_dependencies(mc, tid)
+
# Resolve task internal dependencies
#
# e.g. addtask before X after Y
for t in taskData[mc].taskentries[tid].tdepends:
- (_, depfn, deptaskname, _) = split_tid_mcfn(t)
- depends.add(build_tid(mc, depfn, deptaskname))
+ (depmc, depfn, deptaskname, _) = split_tid_mcfn(t)
+ depends.add(build_tid(depmc, depfn, deptaskname))
# Resolve 'deptask' dependencies
#
@@ -673,57 +705,106 @@ class RunQueueData:
recursiveitasks[tid].append(newdep)
self.runtaskentries[tid].depends = depends
+ # Remove all self references
+ self.runtaskentries[tid].depends.discard(tid)
#self.dump_data()
+ self.init_progress_reporter.next_stage()
+
# Resolve recursive 'recrdeptask' dependencies (Part B)
#
# e.g. do_sometask[recrdeptask] = "do_someothertask"
# (makes sure sometask runs after someothertask of all DEPENDS, RDEPENDS and intertask dependencies, recursively)
# We need to do this separately since we need all of runtaskentries[*].depends to be complete before this is processed
- self.init_progress_reporter.next_stage(len(recursivetasks))
- extradeps = {}
- for taskcounter, tid in enumerate(recursivetasks):
- extradeps[tid] = set(self.runtaskentries[tid].depends)
-
- tasknames = recursivetasks[tid]
- seendeps = set()
-
- def generate_recdeps(t):
- newdeps = set()
- (mc, fn, taskname, _) = split_tid_mcfn(t)
- add_resolved_dependencies(mc, fn, tasknames, newdeps)
- extradeps[tid].update(newdeps)
- seendeps.add(t)
- newdeps.add(t)
- for i in newdeps:
- if i not in self.runtaskentries:
- # Not all recipes might have the recrdeptask task as a task
- continue
- task = self.runtaskentries[i].task
- for n in self.runtaskentries[i].depends:
- if n not in seendeps:
- generate_recdeps(n)
- generate_recdeps(tid)
- if tid in recursiveitasks:
- for dep in recursiveitasks[tid]:
- generate_recdeps(dep)
- self.init_progress_reporter.update(taskcounter)
+ # Generating/iterating recursive lists of dependencies is painful and potentially slow
+ # Precompute recursive task dependencies here by:
+ # a) create a temp list of reverse dependencies (revdeps)
+ # b) walk up the ends of the chains (when a given task no longer has dependencies i.e. len(deps) == 0)
+ # c) combine the total list of dependencies in cumulativedeps
+ # d) optimise by pre-truncating 'task' off the items in cumulativedeps (keeps items in sets lower)
- # Remove circular references so that do_a[recrdeptask] = "do_a do_b" can work
- for tid in recursivetasks:
- extradeps[tid].difference_update(recursivetasksselfref)
+ revdeps = {}
+ deps = {}
+ cumulativedeps = {}
+ for tid in self.runtaskentries:
+ deps[tid] = set(self.runtaskentries[tid].depends)
+ revdeps[tid] = set()
+ cumulativedeps[tid] = set()
+ # Generate a temp list of reverse dependencies
for tid in self.runtaskentries:
- task = self.runtaskentries[tid].task
- # Add in extra dependencies
- if tid in extradeps:
- self.runtaskentries[tid].depends = extradeps[tid]
- # Remove all self references
- if tid in self.runtaskentries[tid].depends:
- logger.debug(2, "Task %s contains self reference!", tid)
- self.runtaskentries[tid].depends.remove(tid)
+ for dep in self.runtaskentries[tid].depends:
+ revdeps[dep].add(tid)
+ # Find the dependency chain endpoints
+ endpoints = set()
+ for tid in self.runtaskentries:
+ if len(deps[tid]) == 0:
+ endpoints.add(tid)
+ # Iterate the chains collating dependencies
+ while endpoints:
+ next = set()
+ for tid in endpoints:
+ for dep in revdeps[tid]:
+ cumulativedeps[dep].add(fn_from_tid(tid))
+ cumulativedeps[dep].update(cumulativedeps[tid])
+ if tid in deps[dep]:
+ deps[dep].remove(tid)
+ if len(deps[dep]) == 0:
+ next.add(dep)
+ endpoints = next
+ #for tid in deps:
+ # if len(deps[tid]) != 0:
+ # bb.warn("Sanity test failure, dependencies left for %s (%s)" % (tid, deps[tid]))
+
+ # Loop here since recrdeptasks can depend upon other recrdeptasks and we have to
+ # resolve these recursively until we aren't adding any further extra dependencies
+ extradeps = True
+ while extradeps:
+ extradeps = 0
+ for tid in recursivetasks:
+ tasknames = recursivetasks[tid]
+
+ totaldeps = set(self.runtaskentries[tid].depends)
+ if tid in recursiveitasks:
+ totaldeps.update(recursiveitasks[tid])
+ for dep in recursiveitasks[tid]:
+ if dep not in self.runtaskentries:
+ continue
+ totaldeps.update(self.runtaskentries[dep].depends)
+
+ deps = set()
+ for dep in totaldeps:
+ if dep in cumulativedeps:
+ deps.update(cumulativedeps[dep])
+
+ for t in deps:
+ for taskname in tasknames:
+ newtid = t + ":" + taskname
+ if newtid == tid:
+ continue
+ if newtid in self.runtaskentries and newtid not in self.runtaskentries[tid].depends:
+ extradeps += 1
+ self.runtaskentries[tid].depends.add(newtid)
+
+ # Handle recursive tasks which depend upon other recursive tasks
+ deps = set()
+ for dep in self.runtaskentries[tid].depends.intersection(recursivetasks):
+ deps.update(self.runtaskentries[dep].depends.difference(self.runtaskentries[tid].depends))
+ for newtid in deps:
+ for taskname in tasknames:
+ if not newtid.endswith(":" + taskname):
+ continue
+ if newtid in self.runtaskentries:
+ extradeps += 1
+ self.runtaskentries[tid].depends.add(newtid)
+
+ bb.debug(1, "Added %s recursive dependencies in this loop" % extradeps)
+
+ # Remove recrdeptask circular references so that do_a[recrdeptask] = "do_a do_b" can work
+ for tid in recursivetasksselfref:
+ self.runtaskentries[tid].depends.difference_update(recursivetasksselfref)
self.init_progress_reporter.next_stage()
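
The rewritten recrdeptask handling above replaces the old per-task
recursive walk with a single bottom-up pass: build reverse edges, start
from tasks with no dependencies, and push accumulated dependency sets up
the chains. A self-contained sketch of that accumulation, using an
illustrative four-task graph rather than runqueue data:

    # Tasks and their direct dependencies (illustrative graph).
    deps = {"a": set(), "b": {"a"}, "c": {"b"}, "d": {"a", "b"}}

    revdeps = {t: set() for t in deps}
    cumulative = {t: set() for t in deps}
    for t, ds in deps.items():
        for dep in ds:
            revdeps[dep].add(t)

    remaining = {t: set(ds) for t, ds in deps.items()}
    # Endpoints are tasks with no remaining dependencies.
    endpoints = {t for t, ds in remaining.items() if not ds}
    while endpoints:
        nxt = set()
        for t in endpoints:
            for parent in revdeps[t]:
                cumulative[parent].add(t)
                cumulative[parent].update(cumulative[t])
                remaining[parent].discard(t)
                if not remaining[parent]:
                    nxt.add(parent)
        endpoints = nxt

    assert cumulative["c"] == {"a", "b"}
    assert cumulative["d"] == {"a", "b"}
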
@@ -798,30 +879,57 @@ class RunQueueData:
#
# Once all active tasks are marked, prune the ones we don't need.
- delcount = 0
+ delcount = {}
for tid in list(self.runtaskentries.keys()):
if tid not in runq_build:
+ delcount[tid] = self.runtaskentries[tid]
del self.runtaskentries[tid]
- delcount += 1
- self.init_progress_reporter.next_stage()
+ # Handle --runall
+ if self.cooker.configuration.runall:
+ # re-run the mark_active and then drop unused tasks from new list
+ runq_build = {}
+
+ for task in self.cooker.configuration.runall:
+ runall_tids = set()
+ for tid in list(self.runtaskentries):
+ wanttid = fn_from_tid(tid) + ":do_%s" % task
+ if wanttid in delcount:
+ self.runtaskentries[wanttid] = delcount[wanttid]
+ if wanttid in self.runtaskentries:
+ runall_tids.add(wanttid)
+
+ for tid in list(runall_tids):
+ mark_active(tid,1)
- if self.cooker.configuration.runall is not None:
- runall = "do_%s" % self.cooker.configuration.runall
- runall_tids = { k: v for k, v in self.runtaskentries.items() if taskname_from_tid(k) == runall }
+ for tid in list(self.runtaskentries.keys()):
+ if tid not in runq_build:
+ delcount[tid] = self.runtaskentries[tid]
+ del self.runtaskentries[tid]
+ if len(self.runtaskentries) == 0:
+ bb.msg.fatal("RunQueue", "Could not find any tasks with the tasknames %s to run within the recipes of the taskgraphs of the targets %s" % (str(self.cooker.configuration.runall), str(self.targets)))
+
+ self.init_progress_reporter.next_stage()
+
+ # Handle runonly
+ if self.cooker.configuration.runonly:
# re-run the mark_active and then drop unused tasks from new list
runq_build = {}
- for tid in list(runall_tids):
- mark_active(tid,1)
+
+ for task in self.cooker.configuration.runonly:
+ runonly_tids = { k: v for k, v in self.runtaskentries.items() if taskname_from_tid(k) == "do_%s" % task }
+
+ for tid in list(runonly_tids):
+ mark_active(tid,1)
for tid in list(self.runtaskentries.keys()):
if tid not in runq_build:
+ delcount[tid] = self.runtaskentries[tid]
del self.runtaskentries[tid]
- delcount += 1
if len(self.runtaskentries) == 0:
- bb.msg.fatal("RunQueue", "No remaining tasks to run for build target %s with runall %s" % (target, runall))
+ bb.msg.fatal("RunQueue", "Could not find any tasks with the tasknames %s to run within the taskgraphs of the targets %s" % (str(self.cooker.configuration.runonly), str(self.targets)))
#
# Step D - Sanity checks and computation
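
Note the asymmetry the new code introduces: --runall first restores
matching tasks that were pruned earlier (they are kept in delcount) and
then re-marks, so the named task runs for every recipe left in the graph,
while --runonly only seeds from tasks already present. A rough sketch of
the seed selection, with illustrative tids rather than runqueue
internals:

    entries = {"recipeA.bb:do_build", "recipeB.bb:do_build"}
    delcount = {"recipeA.bb:do_fetch": None, "recipeB.bb:do_fetch": None}

    def fn_of(tid):
        return tid.rsplit(":", 1)[0]

    # --runall=fetch: resurrect do_fetch for recipes still in the graph.
    runall_seeds = set()
    for tid in list(entries):
        want = fn_of(tid) + ":do_fetch"
        if want in delcount:
            entries.add(want)      # restored from delcount
        if want in entries:
            runall_seeds.add(want)

    # --runonly=fetch: consider only tasks already in the graph.
    runonly_seeds = {t for t in entries if t.endswith(":do_fetch")}

    assert runall_seeds == {"recipeA.bb:do_fetch", "recipeB.bb:do_fetch"}
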
@@ -834,7 +942,7 @@ class RunQueueData:
else:
bb.msg.fatal("RunQueue", "No active tasks and not in --continue mode?! Please report this bug.")
- logger.verbose("Pruned %s inactive tasks, %s left", delcount, len(self.runtaskentries))
+ logger.verbose("Pruned %s inactive tasks, %s left", len(delcount), len(self.runtaskentries))
logger.verbose("Assign Weightings")
@@ -962,7 +1070,7 @@ class RunQueueData:
msg += "\n%s has unique rprovides:\n %s" % (provfn, "\n ".join(rprovide_results[provfn] - commonrprovs))
if self.warn_multi_bb:
- logger.warning(msg)
+ logger.verbnote(msg)
else:
logger.error(msg)
@@ -970,7 +1078,7 @@ class RunQueueData:
# Create a whitelist usable by the stamp checks
self.stampfnwhitelist = {}
- for mc in self.taskData:
+ for mc in self.taskData:
self.stampfnwhitelist[mc] = []
for entry in self.stampwhitelist.split():
if entry not in self.taskData[mc].build_targets:
@@ -1002,7 +1110,7 @@ class RunQueueData:
bb.debug(1, "Task %s is marked nostamp, cannot invalidate this task" % taskname)
else:
logger.verbose("Invalidate task %s, %s", taskname, fn)
- bb.parse.siggen.invalidate_task(taskname, self.dataCaches[mc], fn)
+ bb.parse.siggen.invalidate_task(taskname, self.dataCaches[mc], taskfn)
self.init_progress_reporter.next_stage()
@@ -1646,6 +1754,10 @@ class RunQueueExecute:
valid = bb.utils.better_eval(call, locs)
return valid
+ def can_start_task(self):
+ can_start = self.stats.active < self.number_tasks
+ return can_start
+
class RunQueueExecuteDummy(RunQueueExecute):
def __init__(self, rq):
self.rq = rq
@@ -1719,13 +1831,14 @@ class RunQueueExecuteTasks(RunQueueExecute):
bb.build.del_stamp(taskname, self.rqdata.dataCaches[mc], taskfn)
self.rq.scenequeue_covered.remove(tid)
- toremove = covered_remove
+ toremove = covered_remove | self.rq.scenequeue_notcovered
for task in toremove:
logger.debug(1, 'Not skipping task %s due to setsceneverify', task)
while toremove:
covered_remove = []
for task in toremove:
- removecoveredtask(task)
+ if task in self.rq.scenequeue_covered:
+ removecoveredtask(task)
for deptask in self.rqdata.runtaskentries[task].depends:
if deptask not in self.rq.scenequeue_covered:
continue
@@ -1795,14 +1908,13 @@ class RunQueueExecuteTasks(RunQueueExecute):
continue
if revdep in self.runq_buildable:
continue
- alldeps = 1
+ alldeps = True
for dep in self.rqdata.runtaskentries[revdep].depends:
if dep not in self.runq_complete:
- alldeps = 0
- if alldeps == 1:
+ alldeps = False
+ break
+ if alldeps:
self.setbuildable(revdep)
- fn = fn_from_tid(revdep)
- taskname = taskname_from_tid(revdep)
logger.debug(1, "Marking task %s as buildable", revdep)
def task_complete(self, task):
@@ -1826,8 +1938,8 @@ class RunQueueExecuteTasks(RunQueueExecute):
self.setbuildable(task)
bb.event.fire(runQueueTaskSkipped(task, self.stats, self.rq, reason), self.cfgData)
self.task_completeoutright(task)
- self.stats.taskCompleted()
self.stats.taskSkipped()
+ self.stats.taskCompleted()
def execute(self):
"""
@@ -1937,7 +2049,7 @@ class RunQueueExecuteTasks(RunQueueExecute):
self.build_stamps2.append(self.build_stamps[task])
self.runq_running.add(task)
self.stats.taskActive()
- if self.stats.active < self.number_tasks:
+ if self.can_start_task():
return True
if self.stats.active > 0:
@@ -1992,6 +2104,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
# If we don't have any setscene functions, skip this step
if len(self.rqdata.runq_setscene_tids) == 0:
rq.scenequeue_covered = set()
+ rq.scenequeue_notcovered = set()
rq.state = runQueueRunInit
return
@@ -2207,10 +2320,15 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
sq_hash.append(self.rqdata.runtaskentries[tid].hash)
sq_taskname.append(taskname)
sq_task.append(tid)
+
+ self.cooker.data.setVar("BB_SETSCENE_STAMPCURRENT_COUNT", len(stamppresent))
+
call = self.rq.hashvalidate + "(sq_fn, sq_task, sq_hash, sq_hashfn, d)"
locs = { "sq_fn" : sq_fn, "sq_task" : sq_taskname, "sq_hash" : sq_hash, "sq_hashfn" : sq_hashfn, "d" : self.cooker.data }
valid = bb.utils.better_eval(call, locs)
+ self.cooker.data.delVar("BB_SETSCENE_STAMPCURRENT_COUNT")
+
valid_new = stamppresent
for v in valid:
valid_new.append(sq_task[v])
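
BB_SETSCENE_STAMPCURRENT_COUNT only exists for the duration of the
hashvalidate call and tells the hook how many setscene tasks were already
satisfied by on-disk stamps. A minimal sketch of a hook with the
signature used above, wired in via BB_HASHCHECK_FUNCTION (the accept-all
policy is hypothetical; a real hook would query a shared-state mirror):

    def my_hashvalidate(sq_fn, sq_task, sq_hash, sq_hashfn, d):
        # Count of setscene tasks already covered by stamps; the
        # runqueue sets this just for this call (see the hunk above).
        stamped = int(d.getVar("BB_SETSCENE_STAMPCURRENT_COUNT") or 0)
        # Return the indices of entries whose hashes are considered
        # valid; here, hypothetically, all of them.
        return list(range(len(sq_task)))
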
@@ -2272,8 +2390,8 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
def task_failoutright(self, task):
self.runq_running.add(task)
self.runq_buildable.add(task)
- self.stats.taskCompleted()
self.stats.taskSkipped()
+ self.stats.taskCompleted()
self.scenequeue_notcovered.add(task)
self.scenequeue_updatecounters(task, True)
@@ -2281,8 +2399,8 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
self.runq_running.add(task)
self.runq_buildable.add(task)
self.task_completeoutright(task)
- self.stats.taskCompleted()
self.stats.taskSkipped()
+ self.stats.taskCompleted()
def execute(self):
"""
@@ -2292,7 +2410,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
self.rq.read_workers()
task = None
- if self.stats.active < self.number_tasks:
+ if self.can_start_task():
# Find the next setscene to run
for nexttask in self.rqdata.runq_setscene_tids:
if nexttask in self.runq_buildable and nexttask not in self.runq_running and self.stamps[nexttask] not in self.build_stamps.values():
@@ -2351,7 +2469,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
self.build_stamps2.append(self.build_stamps[task])
self.runq_running.add(task)
self.stats.taskActive()
- if self.stats.active < self.number_tasks:
+ if self.can_start_task():
return True
if self.stats.active > 0:
diff --git a/bitbake/lib/bb/server/process.py b/bitbake/lib/bb/server/process.py
index 3d31355..38b923f 100644
--- a/bitbake/lib/bb/server/process.py
+++ b/bitbake/lib/bb/server/process.py
@@ -223,6 +223,8 @@ class ProcessServer(multiprocessing.Process):
try:
self.cooker.shutdown(True)
+ self.cooker.notifier.stop()
+ self.cooker.confignotifier.stop()
except:
pass
@@ -375,11 +377,12 @@ class BitBakeServer(object):
if os.path.exists(sockname):
os.unlink(sockname)
+ # Place the log in the build directory alongside the lock file
+ logfile = os.path.join(os.path.dirname(self.bitbake_lock.name), "bitbake-cookerdaemon.log")
+
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
# AF_UNIX has path length issues so chdir here as a workaround
cwd = os.getcwd()
- logfile = os.path.join(cwd, "bitbake-cookerdaemon.log")
-
try:
os.chdir(os.path.dirname(sockname))
self.sock.bind(os.path.basename(sockname))
@@ -392,11 +395,16 @@ class BitBakeServer(object):
bb.daemonize.createDaemon(self._startServer, logfile)
self.sock.close()
self.bitbake_lock.close()
+ os.close(self.readypipein)
ready = ConnectionReader(self.readypipe)
r = ready.poll(30)
if r:
- r = ready.get()
+ try:
+ r = ready.get()
+ except EOFError:
+ # Trap the child exiting/closing the pipe and error out
+ r = None
if not r or r != "ready":
ready.close()
bb.error("Unable to start bitbake server")
@@ -422,21 +430,16 @@ class BitBakeServer(object):
bb.error("Server log for this session (%s):\n%s" % (logfile, "".join(lines)))
raise SystemExit(1)
ready.close()
- os.close(self.readypipein)
def _startServer(self):
print(self.start_log_format % (os.getpid(), datetime.datetime.now().strftime(self.start_log_datetime_format)))
server = ProcessServer(self.bitbake_lock, self.sock, self.sockname)
self.configuration.setServerRegIdleCallback(server.register_idle_function)
+ os.close(self.readypipe)
writer = ConnectionWriter(self.readypipein)
- try:
- self.cooker = bb.cooker.BBCooker(self.configuration, self.featureset)
- writer.send("ready")
- except:
- writer.send("fail")
- raise
- finally:
- os.close(self.readypipein)
+ self.cooker = bb.cooker.BBCooker(self.configuration, self.featureset)
+ writer.send("ready")
+ writer.close()
server.cooker = self.cooker
server.server_timeout = self.configuration.server_timeout
server.xmlrpcinterface = self.configuration.xmlrpcinterface
diff --git a/bitbake/lib/bb/siggen.py b/bitbake/lib/bb/siggen.py
index 5ef82d7..03c824e 100644
--- a/bitbake/lib/bb/siggen.py
+++ b/bitbake/lib/bb/siggen.py
@@ -110,42 +110,13 @@ class SignatureGeneratorBasic(SignatureGenerator):
ignore_mismatch = ((d.getVar("BB_HASH_IGNORE_MISMATCH") or '') == '1')
tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d)
- taskdeps = {}
- basehash = {}
+ taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, self.basewhitelist, fn)
for task in tasklist:
- data = lookupcache[task]
-
- if data is None:
- bb.error("Task %s from %s seems to be empty?!" % (task, fn))
- data = ''
-
- gendeps[task] -= self.basewhitelist
- newdeps = gendeps[task]
- seen = set()
- while newdeps:
- nextdeps = newdeps
- seen |= nextdeps
- newdeps = set()
- for dep in nextdeps:
- if dep in self.basewhitelist:
- continue
- gendeps[dep] -= self.basewhitelist
- newdeps |= gendeps[dep]
- newdeps -= seen
-
- alldeps = sorted(seen)
- for dep in alldeps:
- data = data + dep
- var = lookupcache[dep]
- if var is not None:
- data = data + str(var)
- datahash = hashlib.md5(data.encode("utf-8")).hexdigest()
k = fn + "." + task
- if not ignore_mismatch and k in self.basehash and self.basehash[k] != datahash:
- bb.error("When reparsing %s, the basehash value changed from %s to %s. The metadata is not deterministic and this needs to be fixed." % (k, self.basehash[k], datahash))
- self.basehash[k] = datahash
- taskdeps[task] = alldeps
+ if not ignore_mismatch and k in self.basehash and self.basehash[k] != basehash[k]:
+ bb.error("When reparsing %s, the basehash value changed from %s to %s. The metadata is not deterministic and this needs to be fixed." % (k, self.basehash[k], basehash[k]))
+ self.basehash[k] = basehash[k]
self.taskdeps[fn] = taskdeps
self.gendeps[fn] = gendeps
@@ -193,15 +164,24 @@ class SignatureGeneratorBasic(SignatureGenerator):
return taint
def get_taskhash(self, fn, task, deps, dataCache):
+
+ mc = ''
+ if fn.startswith('multiconfig:'):
+ mc = fn.split(':')[1]
k = fn + "." + task
+
data = dataCache.basetaskhash[k]
self.basehash[k] = data
self.runtaskdeps[k] = []
self.file_checksum_values[k] = []
recipename = dataCache.pkg_fn[fn]
-
for dep in sorted(deps, key=clean_basepath):
- depname = dataCache.pkg_fn[self.pkgnameextract.search(dep).group('fn')]
+ pkgname = self.pkgnameextract.search(dep).group('fn')
+ if mc:
+ depmc = pkgname.split(':')[1]
+ if mc != depmc:
+ continue
+ depname = dataCache.pkg_fn[pkgname]
if not self.rundep_check(fn, recipename, task, dep, depname, dataCache):
continue
if dep not in self.taskhash:
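
get_taskhash() now derives the multiconfig name from the
"multiconfig:<mc>:<fn>" prefix and ignores dependencies that belong to a
different multiconfig, so task hashes no longer mix data across
configurations. The prefix handling, as a standalone sketch:

    def mc_of(fn):
        # "multiconfig:mc1:/path/recipe.bb" -> "mc1"; plain paths -> ""
        return fn.split(':')[1] if fn.startswith('multiconfig:') else ''

    assert mc_of('multiconfig:mc1:/path/recipe.bb') == 'mc1'
    assert mc_of('/path/recipe.bb') == ''
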
@@ -347,7 +327,7 @@ class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
def stampcleanmask(self, stampbase, fn, taskname, extrainfo):
return self.stampfile(stampbase, fn, taskname, extrainfo, clean=True)
-
+
def invalidate_task(self, task, d, fn):
bb.note("Tainting hash to force rebuild of task %s, %s" % (fn, task))
bb.build.write_taint(task, d, fn)
@@ -636,7 +616,7 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
if collapsed:
output.extend(recout)
else:
- # If a dependent hash changed, might as well print the line above and then defer to the changes in
+ # If a dependent hash changed, might as well print the line above and then defer to the changes in
# that hash since in all likelihood, they're the same changes this task also saw.
output = [output[-1]] + recout
diff --git a/bitbake/lib/bb/taskdata.py b/bitbake/lib/bb/taskdata.py
index 0ea6c0b..94e822c 100644
--- a/bitbake/lib/bb/taskdata.py
+++ b/bitbake/lib/bb/taskdata.py
@@ -70,6 +70,8 @@ class TaskData:
self.skiplist = skiplist
+ self.mcdepends = []
+
def add_tasks(self, fn, dataCache):
"""
Add tasks for a given fn to the database
@@ -88,6 +90,13 @@ class TaskData:
self.add_extra_deps(fn, dataCache)
+ def add_mcdepends(task):
+ for dep in task_deps['mcdepends'][task].split():
+ if len(dep.split(':')) != 5:
+ bb.msg.fatal("TaskData", "Error for %s:%s[%s], multiconfig dependency %s does not contain exactly four ':' characters.\n Task '%s' should be specified in the form 'multiconfig:fromMC:toMC:packagename:task'" % (fn, task, 'mcdepends', dep, 'mcdepends'))
+ if dep not in self.mcdepends:
+ self.mcdepends.append(dep)
+
# Common code for dep_name/depends = 'depends'/idepends and 'rdepends'/irdepends
def handle_deps(task, dep_name, depends, seen):
if dep_name in task_deps and task in task_deps[dep_name]:
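
For reference, the five colon-separated fields enforced above map onto
the dependency as follows (the package and multiconfig names are
illustrative):

    # In a recipe:
    #   do_build[mcdepends] = "multiconfig:mc1:mc2:somepkg:do_populate_sysroot"
    dep = "multiconfig:mc1:mc2:somepkg:do_populate_sysroot"
    prefix, frommc, tomc, pn, deptask = dep.split(':')
    assert prefix == "multiconfig" and len(dep.split(':')) == 5
    # runqueue's add_mc_dependencies() later resolves pn in the "tomc"
    # configuration and appends "<fn>:<deptask>" to the task's tdepends.
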
@@ -110,16 +119,20 @@ class TaskData:
parentids = []
for dep in task_deps['parents'][task]:
if dep not in task_deps['tasks']:
- bb.debug(2, "Not adding dependeny of %s on %s since %s does not exist" % (task, dep, dep))
+ bb.debug(2, "Not adding dependency of %s on %s since %s does not exist" % (task, dep, dep))
continue
parentid = "%s:%s" % (fn, dep)
parentids.append(parentid)
self.taskentries[tid].tdepends.extend(parentids)
+
# Touch all intertask dependencies
handle_deps(task, 'depends', self.taskentries[tid].idepends, self.seen_build_target)
handle_deps(task, 'rdepends', self.taskentries[tid].irdepends, self.seen_run_target)
+ if 'mcdepends' in task_deps and task in task_deps['mcdepends']:
+ add_mcdepends(task)
+
# Work out build dependencies
if not fn in self.depids:
dependids = set()
@@ -537,6 +550,9 @@ class TaskData:
provmap[name] = provider[0]
return provmap
+ def get_mcdepends(self):
+ return self.mcdepends
+
def dump_data(self):
"""
Dump some debug information on the internal data structures
diff --git a/bitbake/lib/bb/tests/cooker.py b/bitbake/lib/bb/tests/cooker.py
new file mode 100644
index 0000000..2b44236
--- /dev/null
+++ b/bitbake/lib/bb/tests/cooker.py
@@ -0,0 +1,83 @@
+# ex:ts=4:sw=4:sts=4:et
+# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
+#
+# BitBake Tests for cooker.py
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+
+import unittest
+import tempfile
+import os
+import bb, bb.cooker
+import re
+import logging
+
+# Cooker tests
+class CookerTest(unittest.TestCase):
+ def setUp(self):
+ # At least one variable needs to be set
+ self.d = bb.data.init()
+ topdir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "testdata/cooker")
+ self.d.setVar('TOPDIR', topdir)
+
+ def test_CookerCollectFiles_sublayers(self):
+ '''Test that a sublayer of an existing layer does not trigger
+ No bb files matched ...'''
+
+ def append_collection(topdir, path, d):
+ collection = path.split('/')[-1]
+ pattern = "^" + topdir + "/" + path + "/"
+ regex = re.compile(pattern)
+ priority = 5
+
+ d.setVar('BBFILE_COLLECTIONS', (d.getVar('BBFILE_COLLECTIONS') or "") + " " + collection)
+ d.setVar('BBFILE_PATTERN_%s' % (collection), pattern)
+ d.setVar('BBFILE_PRIORITY_%s' % (collection), priority)
+
+ return (collection, pattern, regex, priority)
+
+ topdir = self.d.getVar("TOPDIR")
+
+ # Priorities: list of (collection, pattern, regex, priority)
+ bbfile_config_priorities = []
+ # Order is important for this test; shortest to longest is the typical failure case
+ bbfile_config_priorities.append( append_collection(topdir, 'first', self.d) )
+ bbfile_config_priorities.append( append_collection(topdir, 'second', self.d) )
+ bbfile_config_priorities.append( append_collection(topdir, 'second/third', self.d) )
+
+ pkgfns = [ topdir + '/first/recipes/sample1_1.0.bb',
+ topdir + '/second/recipes/sample2_1.0.bb',
+ topdir + '/second/third/recipes/sample3_1.0.bb' ]
+
+ class LogHandler(logging.Handler):
+ def __init__(self):
+ logging.Handler.__init__(self)
+ self.logdata = []
+
+ def emit(self, record):
+ self.logdata.append(record.getMessage())
+
+ # Move cooker to use my special logging
+ logger = bb.cooker.logger
+ log_handler = LogHandler()
+ logger.addHandler(log_handler)
+ collection = bb.cooker.CookerCollectFiles(bbfile_config_priorities)
+ collection.collection_priorities(pkgfns, self.d)
+ logger.removeHandler(log_handler)
+
+ # Should be empty (no generated messages)
+ expected = []
+
+ self.assertEqual(log_handler.logdata, expected)
diff --git a/bitbake/lib/bb/tests/data.py b/bitbake/lib/bb/tests/data.py
index a4a9dd3..db3e201 100644
--- a/bitbake/lib/bb/tests/data.py
+++ b/bitbake/lib/bb/tests/data.py
@@ -281,7 +281,7 @@ class TestConcatOverride(unittest.TestCase):
def test_remove(self):
self.d.setVar("TEST", "${VAL} ${BAR}")
self.d.setVar("TEST_remove", "val")
- self.assertEqual(self.d.getVar("TEST"), "bar")
+ self.assertEqual(self.d.getVar("TEST"), " bar")
def test_remove_cleared(self):
self.d.setVar("TEST", "${VAL} ${BAR}")
@@ -300,7 +300,7 @@ class TestConcatOverride(unittest.TestCase):
self.d.setVar("TEST", "${VAL} ${BAR}")
self.d.setVar("TEST_remove", "val")
self.d.setVar("TEST_TEST", "${TEST} ${TEST}")
- self.assertEqual(self.d.getVar("TEST_TEST"), "bar bar")
+ self.assertEqual(self.d.getVar("TEST_TEST"), " bar bar")
def test_empty_remove(self):
self.d.setVar("TEST", "")
@@ -311,13 +311,25 @@ class TestConcatOverride(unittest.TestCase):
self.d.setVar("BAR", "Z")
self.d.setVar("TEST", "${BAR}/X Y")
self.d.setVar("TEST_remove", "${BAR}/X")
- self.assertEqual(self.d.getVar("TEST"), "Y")
+ self.assertEqual(self.d.getVar("TEST"), " Y")
def test_remove_expansion_items(self):
self.d.setVar("TEST", "A B C D")
self.d.setVar("BAR", "B D")
self.d.setVar("TEST_remove", "${BAR}")
- self.assertEqual(self.d.getVar("TEST"), "A C")
+ self.assertEqual(self.d.getVar("TEST"), "A C ")
+
+ def test_remove_preserve_whitespace(self):
+ # When the removal isn't active, the original value should be preserved
+ self.d.setVar("TEST", " A B")
+ self.d.setVar("TEST_remove", "C")
+ self.assertEqual(self.d.getVar("TEST"), " A B")
+
+ def test_remove_preserve_whitespace2(self):
+ # When the removal is active preserve the whitespace
+ self.d.setVar("TEST", " A B")
+ self.d.setVar("TEST_remove", "B")
+ self.assertEqual(self.d.getVar("TEST"), " A ")
class TestOverrides(unittest.TestCase):
def setUp(self):
@@ -374,6 +386,15 @@ class TestOverrides(unittest.TestCase):
self.d.setVar("OVERRIDES", "foo:bar:some_val")
self.assertEqual(self.d.getVar("TEST"), "testvalue3")
+ def test_remove_with_override(self):
+ self.d.setVar("TEST_bar", "testvalue2")
+ self.d.setVar("TEST_some_val", "testvalue3 testvalue5")
+ self.d.setVar("TEST_some_val_remove", "testvalue3")
+ self.d.setVar("TEST_foo", "testvalue4")
+ self.d.setVar("OVERRIDES", "foo:bar:some_val")
+ self.assertEqual(self.d.getVar("TEST"), " testvalue5")
+
+
class TestKeyExpansion(unittest.TestCase):
def setUp(self):
self.d = bb.data.init()
@@ -443,6 +464,54 @@ class Contains(unittest.TestCase):
self.assertFalse(bb.utils.contains_any("SOMEFLAG", "x y z", True, False, self.d))
+class TaskHash(unittest.TestCase):
+ def test_taskhashes(self):
+ def gettask_bashhash(taskname, d):
+ tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d)
+ taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, set(), "somefile")
+ bb.warn(str(lookupcache))
+ return basehash["somefile." + taskname]
+
+ d = bb.data.init()
+ d.setVar("__BBTASKS", ["mytask"])
+ d.setVar("__exportlist", [])
+ d.setVar("mytask", "${MYCOMMAND}")
+ d.setVar("MYCOMMAND", "${VAR}; foo; bar; exit 0")
+ d.setVar("VAR", "val")
+ orighash = gettask_bashhash("mytask", d)
+
+ # Changing a variable should change the hash
+ d.setVar("VAR", "val2")
+ nexthash = gettask_bashhash("mytask", d)
+ self.assertNotEqual(orighash, nexthash)
+
+ d.setVar("VAR", "val")
+ # Adding an inactive removal shouldn't change the hash
+ d.setVar("BAR", "notbar")
+ d.setVar("MYCOMMAND_remove", "${BAR}")
+ nexthash = gettask_bashhash("mytask", d)
+ self.assertEqual(orighash, nexthash)
+
+ # Adding an active removal should change the hash
+ d.setVar("BAR", "bar;")
+ nexthash = gettask_bashhash("mytask", d)
+ self.assertNotEqual(orighash, nexthash)
+
+ # Setup an inactive contains()
+ d.setVar("VAR", "${@bb.utils.contains('VAR2', 'A', 'val', '', d)}")
+ orighash = gettask_bashhash("mytask", d)
+
+ # Activate the contains() and the hash should change
+ d.setVar("VAR2", "A")
+ nexthash = gettask_bashhash("mytask", d)
+ self.assertNotEqual(orighash, nexthash)
+
+ # The contains should be inactive but even though VAR2 has a
+ # different value the hash should match the original
+ d.setVar("VAR2", "B")
+ nexthash = gettask_bashhash("mytask", d)
+ self.assertEqual(orighash, nexthash)
+
class Serialize(unittest.TestCase):
def test_serialize(self):
diff --git a/bitbake/lib/bb/tests/fetch.py b/bitbake/lib/bb/tests/fetch.py
index 11698f2..17909ec 100644
--- a/bitbake/lib/bb/tests/fetch.py
+++ b/bitbake/lib/bb/tests/fetch.py
@@ -20,6 +20,7 @@
#
import unittest
+import hashlib
import tempfile
import subprocess
import collections
@@ -401,6 +402,12 @@ class MirrorUriTest(FetcherTest):
: "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "git://somewhere.org/somedir/MIRRORNAME;protocol=http")
: "git://somewhere.org/somedir/git.invalid.infradead.org.foo.mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
+ ("http://somewhere.org/somedir1/somedir2/somefile_1.2.3.tar.gz", "http://.*/.*", "http://somewhere2.org")
+ : "http://somewhere2.org/somefile_1.2.3.tar.gz",
+ ("http://somewhere.org/somedir1/somedir2/somefile_1.2.3.tar.gz", "http://.*/.*", "http://somewhere2.org/")
+ : "http://somewhere2.org/somefile_1.2.3.tar.gz",
+ ("git://someserver.org/bitbake;tag=1234567890123456789012345678901234567890;branch=master", "git://someserver.org/bitbake;branch=master", "git://git.openembedded.org/bitbake;protocol=http")
+ : "git://git.openembedded.org/bitbake;tag=1234567890123456789012345678901234567890;branch=master;protocol=http",
#Renaming files doesn't work
#("http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere2.org/somedir3/somefile_2.3.4.tar.gz") : "http://somewhere2.org/somedir3/somefile_2.3.4.tar.gz"
@@ -456,6 +463,124 @@ class MirrorUriTest(FetcherTest):
'https://BBBB/B/B/B/bitbake/bitbake-1.0.tar.gz',
'http://AAAA/A/A/A/B/B/bitbake/bitbake-1.0.tar.gz'])
+
+class GitDownloadDirectoryNamingTest(FetcherTest):
+ def setUp(self):
+ super(GitDownloadDirectoryNamingTest, self).setUp()
+ self.recipe_url = "git://git.openembedded.org/bitbake"
+ self.recipe_dir = "git.openembedded.org.bitbake"
+ self.mirror_url = "git://github.com/openembedded/bitbake.git"
+ self.mirror_dir = "github.com.openembedded.bitbake.git"
+
+ self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40')
+
+ def setup_mirror_rewrite(self):
+ self.d.setVar("PREMIRRORS", self.recipe_url + " " + self.mirror_url + " \n")
+
+ @skipIfNoNetwork()
+ def test_that_directory_is_named_after_recipe_url_when_no_mirroring_is_used(self):
+ self.setup_mirror_rewrite()
+ fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
+
+ fetcher.download()
+
+ dir = os.listdir(self.dldir + "/git2")
+ self.assertIn(self.recipe_dir, dir)
+
+ @skipIfNoNetwork()
+ def test_that_directory_exists_for_mirrored_url_and_recipe_url_when_mirroring_is_used(self):
+ self.setup_mirror_rewrite()
+ fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
+
+ fetcher.download()
+
+ dir = os.listdir(self.dldir + "/git2")
+ self.assertIn(self.mirror_dir, dir)
+ self.assertIn(self.recipe_dir, dir)
+
+ @skipIfNoNetwork()
+ def test_that_recipe_directory_and_mirrored_directory_exists_when_mirroring_is_used_and_the_mirrored_directory_already_exists(self):
+ self.setup_mirror_rewrite()
+ fetcher = bb.fetch.Fetch([self.mirror_url], self.d)
+ fetcher.download()
+ fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
+
+ fetcher.download()
+
+ dir = os.listdir(self.dldir + "/git2")
+ self.assertIn(self.mirror_dir, dir)
+ self.assertIn(self.recipe_dir, dir)
+
+
+class TarballNamingTest(FetcherTest):
+ def setUp(self):
+ super(TarballNamingTest, self).setUp()
+ self.recipe_url = "git://git.openembedded.org/bitbake"
+ self.recipe_tarball = "git2_git.openembedded.org.bitbake.tar.gz"
+ self.mirror_url = "git://github.com/openembedded/bitbake.git"
+ self.mirror_tarball = "git2_github.com.openembedded.bitbake.git.tar.gz"
+
+ self.d.setVar('BB_GENERATE_MIRROR_TARBALLS', '1')
+ self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40')
+
+ def setup_mirror_rewrite(self):
+ self.d.setVar("PREMIRRORS", self.recipe_url + " " + self.mirror_url + " \n")
+
+ @skipIfNoNetwork()
+ def test_that_the_recipe_tarball_is_created_when_no_mirroring_is_used(self):
+ fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
+
+ fetcher.download()
+
+ dir = os.listdir(self.dldir)
+ self.assertIn(self.recipe_tarball, dir)
+
+ @skipIfNoNetwork()
+ def test_that_the_mirror_tarball_is_created_when_mirroring_is_used(self):
+ self.setup_mirror_rewrite()
+ fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
+
+ fetcher.download()
+
+ dir = os.listdir(self.dldir)
+ self.assertIn(self.mirror_tarball, dir)
+
+
+class GitShallowTarballNamingTest(FetcherTest):
+ def setUp(self):
+ super(GitShallowTarballNamingTest, self).setUp()
+ self.recipe_url = "git://git.openembedded.org/bitbake"
+ self.recipe_tarball = "gitshallow_git.openembedded.org.bitbake_82ea737-1_master.tar.gz"
+ self.mirror_url = "git://github.com/openembedded/bitbake.git"
+ self.mirror_tarball = "gitshallow_github.com.openembedded.bitbake.git_82ea737-1_master.tar.gz"
+
+ self.d.setVar('BB_GIT_SHALLOW', '1')
+ self.d.setVar('BB_GENERATE_SHALLOW_TARBALLS', '1')
+ self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40')
+
+ def setup_mirror_rewrite(self):
+ self.d.setVar("PREMIRRORS", self.recipe_url + " " + self.mirror_url + " \n")
+
+ @skipIfNoNetwork()
+ def test_that_the_tarball_is_named_after_recipe_url_when_no_mirroring_is_used(self):
+ fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
+
+ fetcher.download()
+
+ dir = os.listdir(self.dldir)
+ self.assertIn(self.recipe_tarball, dir)
+
+ @skipIfNoNetwork()
+ def test_that_the_mirror_tarball_is_created_when_mirroring_is_used(self):
+ self.setup_mirror_rewrite()
+ fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
+
+ fetcher.download()
+
+ dir = os.listdir(self.dldir)
+ self.assertIn(self.mirror_tarball, dir)
+
+
class FetcherLocalTest(FetcherTest):
def setUp(self):
def touch(fn):
@@ -522,6 +647,109 @@ class FetcherLocalTest(FetcherTest):
with self.assertRaises(bb.fetch2.UnpackError):
self.fetchUnpack(['file://a;subdir=/bin/sh'])
+class FetcherNoNetworkTest(FetcherTest):
+ def setUp(self):
+ super().setUp()
+ # all test cases are based on not having network
+ self.d.setVar("BB_NO_NETWORK", "1")
+
+ def test_missing(self):
+ string = "this is a test file\n".encode("utf-8")
+ self.d.setVarFlag("SRC_URI", "md5sum", hashlib.md5(string).hexdigest())
+ self.d.setVarFlag("SRC_URI", "sha256sum", hashlib.sha256(string).hexdigest())
+
+ self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+ self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+ fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/test-file.tar.gz"], self.d)
+ with self.assertRaises(bb.fetch2.NetworkAccess):
+ fetcher.download()
+
+ def test_valid_missing_donestamp(self):
+ # create the file in the download directory with correct hash
+ string = "this is a test file\n".encode("utf-8")
+ with open(os.path.join(self.dldir, "test-file.tar.gz"), "wb") as f:
+ f.write(string)
+
+ self.d.setVarFlag("SRC_URI", "md5sum", hashlib.md5(string).hexdigest())
+ self.d.setVarFlag("SRC_URI", "sha256sum", hashlib.sha256(string).hexdigest())
+
+ self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+ self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+ fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/test-file.tar.gz"], self.d)
+ fetcher.download()
+ self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+
+ def test_invalid_missing_donestamp(self):
+ # create an invalid file in the download directory with incorrect hash
+ string = "this is a test file\n".encode("utf-8")
+ with open(os.path.join(self.dldir, "test-file.tar.gz"), "wb"):
+ pass
+
+ self.d.setVarFlag("SRC_URI", "md5sum", hashlib.md5(string).hexdigest())
+ self.d.setVarFlag("SRC_URI", "sha256sum", hashlib.sha256(string).hexdigest())
+
+ self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+ self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+ fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/test-file.tar.gz"], self.d)
+ with self.assertRaises(bb.fetch2.NetworkAccess):
+ fetcher.download()
+ # the existing file should not exist or should have been moved to "bad-checksum"
+ self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+
+ def test_nochecksums_missing(self):
+ self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+ self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+ # ssh fetch does not support checksums
+ fetcher = bb.fetch.Fetch(["ssh://invalid@invalid.yoctoproject.org/test-file.tar.gz"], self.d)
+ # attempts to download with missing donestamp
+ with self.assertRaises(bb.fetch2.NetworkAccess):
+ fetcher.download()
+
+ def test_nochecksums_missing_donestamp(self):
+ # create a file in the download directory
+ with open(os.path.join(self.dldir, "test-file.tar.gz"), "wb"):
+ pass
+
+ self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+ self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+ # ssh fetch does not support checksums
+ fetcher = bb.fetch.Fetch(["ssh://invalid@invalid.yoctoproject.org/test-file.tar.gz"], self.d)
+ # attempts to download with missing donestamp
+ with self.assertRaises(bb.fetch2.NetworkAccess):
+ fetcher.download()
+
+ def test_nochecksums_has_donestamp(self):
+ # create a file in the download directory with the donestamp
+ with open(os.path.join(self.dldir, "test-file.tar.gz"), "wb"):
+ pass
+ with open(os.path.join(self.dldir, "test-file.tar.gz.done"), "wb"):
+ pass
+
+ self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+ self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+ # ssh fetch does not support checksums
+ fetcher = bb.fetch.Fetch(["ssh://invalid@invalid.yoctoproject.org/test-file.tar.gz"], self.d)
+ # should not fetch
+ fetcher.download()
+ # both files should still exist
+ self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+ self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+
+ def test_nochecksums_missing_has_donestamp(self):
+ # create a file in the download directory with the donestamp
+ with open(os.path.join(self.dldir, "test-file.tar.gz.done"), "wb"):
+ pass
+
+ self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+ self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+ # ssh fetch does not support checksums
+ fetcher = bb.fetch.Fetch(["ssh://invalid@invalid.yoctoproject.org/test-file.tar.gz"], self.d)
+ with self.assertRaises(bb.fetch2.NetworkAccess):
+ fetcher.download()
+ # both files should still exist
+ self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
+ self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
+
class FetcherNetworkTest(FetcherTest):
@skipIfNoNetwork()
def test_fetch(self):
@@ -641,27 +869,27 @@ class FetcherNetworkTest(FetcherTest):
self.assertRaises(bb.fetch.ParameterError, self.gitfetcher, url, url)
@skipIfNoNetwork()
- def test_gitfetch_premirror(self):
- url1 = "git://git.openembedded.org/bitbake"
- url2 = "git://someserver.org/bitbake"
+ def test_gitfetch_finds_local_tarball_for_mirrored_url_when_previous_downloaded_by_the_recipe_url(self):
+ recipeurl = "git://git.openembedded.org/bitbake"
+ mirrorurl = "git://someserver.org/bitbake"
self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
- self.gitfetcher(url1, url2)
+ self.gitfetcher(recipeurl, mirrorurl)
@skipIfNoNetwork()
- def test_gitfetch_premirror2(self):
- url1 = url2 = "git://someserver.org/bitbake"
+ def test_gitfetch_finds_local_tarball_when_previous_downloaded_from_a_premirror(self):
+ recipeurl = "git://someserver.org/bitbake"
self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
- self.gitfetcher(url1, url2)
+ self.gitfetcher(recipeurl, recipeurl)
@skipIfNoNetwork()
- def test_gitfetch_premirror3(self):
+ def test_gitfetch_finds_local_repository_when_premirror_rewrites_the_recipe_url(self):
realurl = "git://git.openembedded.org/bitbake"
- dummyurl = "git://someserver.org/bitbake"
+ recipeurl = "git://someserver.org/bitbake"
self.sourcedir = self.unpackdir.replace("unpacked", "sourcemirror.git")
os.chdir(self.tempdir)
bb.process.run("git clone %s %s 2> /dev/null" % (realurl, self.sourcedir), shell=True)
- self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file \n" % (dummyurl, self.sourcedir))
- self.gitfetcher(dummyurl, dummyurl)
+ self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file \n" % (recipeurl, self.sourcedir))
+ self.gitfetcher(recipeurl, recipeurl)
@skipIfNoNetwork()
def test_git_submodule(self):
@@ -728,7 +956,7 @@ class URLHandle(unittest.TestCase):
# decodeurl and we need to handle them
decodedata = datatable.copy()
decodedata.update({
- "http://somesite.net;someparam=1": ('http', 'somesite.net', '', '', '', {'someparam': '1'}),
+ "http://somesite.net;someparam=1": ('http', 'somesite.net', '/', '', '', {'someparam': '1'}),
})
def test_decodeurl(self):
@@ -757,12 +985,12 @@ class FetchLatestVersionTest(FetcherTest):
("dtc", "git://git.qemu.org/dtc.git", "65cc4d2748a2c2e6f27f1cf39e07a5dbabd80ebf", "")
: "1.4.0",
# combination version pattern
- ("sysprof", "git://git.gnome.org/sysprof", "cd44ee6644c3641507fb53b8a2a69137f2971219", "")
+ ("sysprof", "git://gitlab.gnome.org/GNOME/sysprof.git;protocol=https", "cd44ee6644c3641507fb53b8a2a69137f2971219", "")
: "1.2.0",
("u-boot-mkimage", "git://git.denx.de/u-boot.git;branch=master;protocol=git", "62c175fbb8a0f9a926c88294ea9f7e88eb898f6c", "")
: "2014.01",
# version pattern "yyyymmdd"
- ("mobile-broadband-provider-info", "git://git.gnome.org/mobile-broadband-provider-info", "4ed19e11c2975105b71b956440acdb25d46a347d", "")
+ ("mobile-broadband-provider-info", "git://gitlab.gnome.org/GNOME/mobile-broadband-provider-info.git;protocol=https", "4ed19e11c2975105b71b956440acdb25d46a347d", "")
: "20120614",
# packages with a valid UPSTREAM_CHECK_GITTAGREGEX
("xf86-video-omap", "git://anongit.freedesktop.org/xorg/driver/xf86-video-omap", "ae0394e687f1a77e966cf72f895da91840dffb8f", "(?P<pver>(\d+\.(\d\.?)*))")
@@ -809,7 +1037,7 @@ class FetchLatestVersionTest(FetcherTest):
ud = bb.fetch2.FetchData(k[1], self.d)
pupver= ud.method.latest_versionstring(ud, self.d)
verstring = pupver[0]
- self.assertTrue(verstring, msg="Could not find upstream version")
+ self.assertTrue(verstring, msg="Could not find upstream version for %s" % k[0])
r = bb.utils.vercmp_string(v, verstring)
self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
@@ -822,7 +1050,7 @@ class FetchLatestVersionTest(FetcherTest):
ud = bb.fetch2.FetchData(k[1], self.d)
pupver = ud.method.latest_versionstring(ud, self.d)
verstring = pupver[0]
- self.assertTrue(verstring, msg="Could not find upstream version")
+ self.assertTrue(verstring, msg="Could not find upstream version for %s" % k[0])
r = bb.utils.vercmp_string(v, verstring)
self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
@@ -874,9 +1102,6 @@ class FetchCheckStatusTest(FetcherTest):
class GitMakeShallowTest(FetcherTest):
- bitbake_dir = os.path.join(os.path.dirname(os.path.join(os.path.abspath(__file__))), '..', '..', '..')
- make_shallow_path = os.path.join(bitbake_dir, 'bin', 'git-make-shallow')
-
def setUp(self):
FetcherTest.setUp(self)
self.gitdir = os.path.join(self.tempdir, 'gitshallow')
@@ -905,7 +1130,7 @@ class GitMakeShallowTest(FetcherTest):
def make_shallow(self, args=None):
if args is None:
args = ['HEAD']
- return bb.process.run([self.make_shallow_path] + args, cwd=self.gitdir)
+ return bb.process.run([bb.fetch2.git.Git.make_shallow_path] + args, cwd=self.gitdir)
def add_empty_file(self, path, msg=None):
if msg is None:
@@ -1237,6 +1462,9 @@ class GitShallowTest(FetcherTest):
smdir = os.path.join(self.tempdir, 'gitsubmodule')
bb.utils.mkdirhier(smdir)
self.git('init', cwd=smdir)
+ # Make this look like it was cloned from a remote...
+ self.git('config --add remote.origin.url "%s"' % smdir, cwd=smdir)
+ self.git('config --add remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"', cwd=smdir)
self.add_empty_file('asub', cwd=smdir)
self.git('submodule init', cwd=self.srcdir)
@@ -1470,3 +1698,30 @@ class GitShallowTest(FetcherTest):
self.assertNotEqual(orig_revs, revs)
self.assertRefs(['master', 'origin/master'])
self.assertRevCount(orig_revs - 1758)
+
+ def test_that_unpack_throws_an_error_when_the_git_clone_nor_shallow_tarball_exist(self):
+ self.add_empty_file('a')
+ fetcher, ud = self.fetch()
+ bb.utils.remove(self.gitdir, recurse=True)
+ bb.utils.remove(self.dldir, recurse=True)
+
+ with self.assertRaises(bb.fetch2.UnpackError) as context:
+ fetcher.unpack(self.d.getVar('WORKDIR'))
+
+ self.assertTrue("No up to date source found" in context.exception.msg)
+ self.assertTrue("clone directory not available or not up to date" in context.exception.msg)
+ self.assertTrue("shallow clone not enabled or not available" in context.exception.msg)
+
+ @skipIfNoNetwork()
+ def test_that_unpack_does_work_when_using_git_shallow_tarball_but_tarball_is_not_available(self):
+ self.d.setVar('SRCREV', 'e5939ff608b95cdd4d0ab0e1935781ab9a276ac0')
+ self.d.setVar('BB_GIT_SHALLOW', '1')
+ self.d.setVar('BB_GENERATE_SHALLOW_TARBALLS', '1')
+ fetcher = bb.fetch.Fetch(["git://git.yoctoproject.org/fstests"], self.d)
+ fetcher.download()
+
+ bb.utils.remove(self.dldir + "/*.tar.gz")
+ fetcher.unpack(self.unpackdir)
+
+ dir = os.listdir(self.unpackdir + "/git/")
+ self.assertIn("fstests.doap", dir)
diff --git a/bitbake/lib/bb/tests/parse.py b/bitbake/lib/bb/tests/parse.py
index 8f16ba4..1bc4740 100644
--- a/bitbake/lib/bb/tests/parse.py
+++ b/bitbake/lib/bb/tests/parse.py
@@ -44,9 +44,13 @@ C = "3"
"""
def setUp(self):
+ self.origdir = os.getcwd()
self.d = bb.data.init()
bb.parse.siggen = bb.siggen.init(self.d)
+ def tearDown(self):
+ os.chdir(self.origdir)
+
def parsehelper(self, content, suffix = ".bb"):
f = tempfile.NamedTemporaryFile(suffix = suffix)
diff --git a/bitbake/lib/bb/ui/buildinfohelper.py b/bitbake/lib/bb/ui/buildinfohelper.py
index 524a5b0..31323d2 100644
--- a/bitbake/lib/bb/ui/buildinfohelper.py
+++ b/bitbake/lib/bb/ui/buildinfohelper.py
@@ -1603,14 +1603,14 @@ class BuildInfoHelper(object):
mockevent.lineno = -1
self.store_log_event(mockevent)
- def store_log_event(self, event):
+ def store_log_event(self, event, cli_backlog=True):
self._ensure_build()
if event.levelno < formatter.WARNING:
return
# early return for CLI builds
- if self.brbe is None:
+ if cli_backlog and self.brbe is None:
if not 'backlog' in self.internal_state:
self.internal_state['backlog'] = []
self.internal_state['backlog'].append(event)
@@ -1622,7 +1622,7 @@ class BuildInfoHelper(object):
tempevent = self.internal_state['backlog'].pop()
logger.debug(1, "buildinfohelper: Saving stored event %s "
% tempevent)
- self.store_log_event(tempevent)
+ self.store_log_event(tempevent, cli_backlog)
else:
logger.info("buildinfohelper: All events saved")
del self.internal_state['backlog']
@@ -1987,7 +1987,8 @@ class BuildInfoHelper(object):
if 'backlog' in self.internal_state:
# we save missed events in the database for the current build
tempevent = self.internal_state['backlog'].pop()
- self.store_log_event(tempevent)
+ # Do not skip command line build events
+ self.store_log_event(tempevent, False)
if not connection.features.autocommits_when_autocommit_is_off:
transaction.set_autocommit(True)
diff --git a/bitbake/lib/bb/ui/taskexp.py b/bitbake/lib/bb/ui/taskexp.py
index 0e8e9d4..8305d70 100644
--- a/bitbake/lib/bb/ui/taskexp.py
+++ b/bitbake/lib/bb/ui/taskexp.py
@@ -103,9 +103,16 @@ class DepExplorer(Gtk.Window):
self.pkg_treeview.get_selection().connect("changed", self.on_cursor_changed)
column = Gtk.TreeViewColumn("Package", Gtk.CellRendererText(), text=COL_PKG_NAME)
self.pkg_treeview.append_column(column)
- pane.add1(scrolled)
scrolled.add(self.pkg_treeview)
+ self.search_entry = Gtk.SearchEntry.new()
+ self.pkg_treeview.set_search_entry(self.search_entry)
+
+ left_panel = Gtk.VPaned()
+ left_panel.add(self.search_entry)
+ left_panel.add(scrolled)
+ pane.add1(left_panel)
+
box = Gtk.VBox(homogeneous=True, spacing=4)
# Task Depends
@@ -129,6 +136,7 @@ class DepExplorer(Gtk.Window):
pane.add2(box)
self.show_all()
+ self.search_entry.grab_focus()
def on_package_activated(self, treeview, path, column, data_col):
model = treeview.get_model()
diff --git a/bitbake/lib/bb/utils.py b/bitbake/lib/bb/utils.py
index c540b49..73b6cb4 100644
--- a/bitbake/lib/bb/utils.py
+++ b/bitbake/lib/bb/utils.py
@@ -187,7 +187,7 @@ def explode_deps(s):
#r[-1] += ' ' + ' '.join(j)
return r
-def explode_dep_versions2(s):
+def explode_dep_versions2(s, *, sort=True):
"""
Take an RDEPENDS style string of format:
"DEPEND1 (optional version) DEPEND2 (optional version) ..."
@@ -250,7 +250,8 @@ def explode_dep_versions2(s):
if not (i in r and r[i]):
r[lastdep] = []
- r = collections.OrderedDict(sorted(r.items(), key=lambda x: x[0]))
+ if sort:
+ r = collections.OrderedDict(sorted(r.items(), key=lambda x: x[0]))
return r
def explode_dep_versions(s):
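
The new keyword-only sort parameter lets callers that need to preserve
the order of an RDEPENDS-style string opt out of the default alphabetical
sorting. Illustrative usage, assuming bitbake's lib/ directory is on
sys.path:

    import bb.utils

    r = bb.utils.explode_dep_versions2("b (>= 2.0) a")
    assert list(r) == ["a", "b"]          # sorted by default

    r = bb.utils.explode_dep_versions2("b (>= 2.0) a", sort=False)
    assert list(r) == ["b", "a"]          # input order preserved
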
@@ -496,7 +497,11 @@ def lockfile(name, shared=False, retry=True, block=False):
if statinfo.st_ino == statinfo2.st_ino:
return lf
lf.close()
- except Exception:
+ except OSError as e:
+ if e.errno == errno.EACCES:
+ logger.error("Unable to acquire lock '%s', %s",
+ e.strerror, name)
+ sys.exit(1)
try:
lf.close()
except Exception:
@@ -523,12 +528,17 @@ def md5_file(filename):
"""
Return the hex string representation of the MD5 checksum of filename.
"""
- import hashlib
- m = hashlib.md5()
+ import hashlib, mmap
with open(filename, "rb") as f:
- for line in f:
- m.update(line)
+ m = hashlib.md5()
+ try:
+ with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
+ for chunk in iter(lambda: mm.read(8192), b''):
+ m.update(chunk)
+ except ValueError:
+ # You can't mmap() an empty file so silence this exception
+ pass
return m.hexdigest()
def sha256_file(filename):
@@ -806,8 +816,8 @@ def movefile(src, dest, newmtime = None, sstat = None):
return None # failure
try:
if didcopy:
- os.lchown(dest, sstat[stat.ST_UID], sstat[stat.ST_GID])
- os.chmod(dest, stat.S_IMODE(sstat[stat.ST_MODE])) # Sticky is reset on chown
+ os.lchown(destpath, sstat[stat.ST_UID], sstat[stat.ST_GID])
+ os.chmod(destpath, stat.S_IMODE(sstat[stat.ST_MODE])) # Sticky is reset on chown
os.unlink(src)
except Exception as e:
print("movefile: Failed to chown/chmod/unlink", dest, e)
@@ -900,6 +910,23 @@ def copyfile(src, dest, newmtime = None, sstat = None):
newmtime = sstat[stat.ST_MTIME]
return newmtime
+def break_hardlinks(src, sstat = None):
+ """
+ Ensures src is the only hardlink to this file. Other hardlinks,
+ if any, are not affected (other than in their st_nlink value, of
+ course). Returns true on success and false on failure.
+
+ """
+ try:
+ if not sstat:
+ sstat = os.lstat(src)
+ except Exception as e:
+ logger.warning("break_hardlinks: stat of %s failed (%s)" % (src, e))
+ return False
+ if sstat[stat.ST_NLINK] == 1:
+ return True
+ return copyfile(src, src, sstat=sstat)
+
def which(path, item, direction = 0, history = False, executable=False):
"""
Locate `item` in the list of paths `path` (colon separated string like $PATH).
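
break_hardlinks() is a helper for copy-on-write style edits: calling it
before modifying a file in place ensures other links to the same inode
keep the old content. A hedged usage sketch (the path is illustrative):

    import bb.utils

    path = "tmp/deploy/licenses/COPYING"
    if bb.utils.break_hardlinks(path):
        with open(path, "a") as f:
            f.write("# local amendment\n")
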
@@ -1284,7 +1311,7 @@ def edit_metadata_file(meta_file, variables, varfunc):
return updated
-def edit_bblayers_conf(bblayers_conf, add, remove):
+def edit_bblayers_conf(bblayers_conf, add, remove, edit_cb=None):
"""Edit bblayers.conf, adding and/or removing layers
Parameters:
bblayers_conf: path to bblayers.conf file to edit
@@ -1292,6 +1319,8 @@ def edit_bblayers_conf(bblayers_conf, add, remove):
list to add nothing
remove: layer path (or list of layer paths) to remove; None or
empty list to remove nothing
+ edit_cb: optional callback function that will be called
+ once per existing entry after adds/removes have been processed.
Returns a tuple:
notadded: list of layers specified to be added but weren't
(because they were already in the list)
@@ -1355,6 +1384,17 @@ def edit_bblayers_conf(bblayers_conf, add, remove):
bblayers.append(addlayer)
del addlayers[:]
+ if edit_cb:
+ newlist = []
+ for layer in bblayers:
+ res = edit_cb(layer, canonicalise_path(layer))
+ if res != layer:
+ newlist.append(res)
+ updated = True
+ else:
+ newlist.append(layer)
+ bblayers = newlist
+
if updated:
if op == '+=' and not bblayers:
bblayers = None
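
The edit_cb hook receives each existing entry together with its
canonicalised path and may return a replacement string; returning the
entry unchanged keeps it as-is. A sketch that relocates layers under a
new root (the paths are illustrative):

    import bb.utils

    def relocate(layer, canonical_path):
        if canonical_path.startswith("/old/checkout/"):
            return "/new/checkout/" + canonical_path[len("/old/checkout/"):]
        return layer

    bb.utils.edit_bblayers_conf("conf/bblayers.conf", None, None,
                                edit_cb=relocate)
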
diff --git a/bitbake/lib/bblayers/action.py b/bitbake/lib/bblayers/action.py
index aa575d1..a3f658f 100644
--- a/bitbake/lib/bblayers/action.py
+++ b/bitbake/lib/bblayers/action.py
@@ -45,7 +45,7 @@ class ActionPlugin(LayerPlugin):
notadded, _ = bb.utils.edit_bblayers_conf(bblayers_conf, layerdirs, None)
if not (args.force or notadded):
try:
- self.tinfoil.parseRecipes()
+ self.tinfoil.run_command('parseConfiguration')
except bb.tinfoil.TinfoilUIException:
# Restore the back up copy of bblayers.conf
shutil.copy2(backup, bblayers_conf)
diff --git a/bitbake/lib/bblayers/layerindex.py b/bitbake/lib/bblayers/layerindex.py
index 9af385d..9f02a9d 100644
--- a/bitbake/lib/bblayers/layerindex.py
+++ b/bitbake/lib/bblayers/layerindex.py
@@ -1,10 +1,9 @@
+import layerindexlib
+
import argparse
-import http.client
-import json
import logging
import os
import subprocess
-import urllib.parse
from bblayers.action import ActionPlugin
@@ -21,110 +20,6 @@ class LayerIndexPlugin(ActionPlugin):
This class inherits ActionPlugin to get do_add_layer.
"""
- def get_json_data(self, apiurl):
- proxy_settings = os.environ.get("http_proxy", None)
- conn = None
- _parsedurl = urllib.parse.urlparse(apiurl)
- path = _parsedurl.path
- query = _parsedurl.query
-
- def parse_url(url):
- parsedurl = urllib.parse.urlparse(url)
- if parsedurl.netloc[0] == '[':
- host, port = parsedurl.netloc[1:].split(']', 1)
- if ':' in port:
- port = port.rsplit(':', 1)[1]
- else:
- port = None
- else:
- if parsedurl.netloc.count(':') == 1:
- (host, port) = parsedurl.netloc.split(":")
- else:
- host = parsedurl.netloc
- port = None
- return (host, 80 if port is None else int(port))
-
- if proxy_settings is None:
- host, port = parse_url(apiurl)
- conn = http.client.HTTPConnection(host, port)
- conn.request("GET", path + "?" + query)
- else:
- host, port = parse_url(proxy_settings)
- conn = http.client.HTTPConnection(host, port)
- conn.request("GET", apiurl)
-
- r = conn.getresponse()
- if r.status != 200:
- raise Exception("Failed to read " + path + ": %d %s" % (r.status, r.reason))
- return json.loads(r.read().decode())
-
- def get_layer_deps(self, layername, layeritems, layerbranches, layerdependencies, branchnum, selfname=False):
- def layeritems_info_id(items_name, layeritems):
- litems_id = None
- for li in layeritems:
- if li['name'] == items_name:
- litems_id = li['id']
- break
- return litems_id
-
- def layerbranches_info(items_id, layerbranches):
- lbranch = {}
- for lb in layerbranches:
- if lb['layer'] == items_id and lb['branch'] == branchnum:
- lbranch['id'] = lb['id']
- lbranch['vcs_subdir'] = lb['vcs_subdir']
- break
- return lbranch
-
- def layerdependencies_info(lb_id, layerdependencies):
- ld_deps = []
- for ld in layerdependencies:
- if ld['layerbranch'] == lb_id and not ld['dependency'] in ld_deps:
- ld_deps.append(ld['dependency'])
- if not ld_deps:
- logger.error("The dependency of layerDependencies is not found.")
- return ld_deps
-
- def layeritems_info_name_subdir(items_id, layeritems):
- litems = {}
- for li in layeritems:
- if li['id'] == items_id:
- litems['vcs_url'] = li['vcs_url']
- litems['name'] = li['name']
- break
- return litems
-
- if selfname:
- selfid = layeritems_info_id(layername, layeritems)
- lbinfo = layerbranches_info(selfid, layerbranches)
- if lbinfo:
- selfsubdir = lbinfo['vcs_subdir']
- else:
- logger.error("%s is not found in the specified branch" % layername)
- return
- selfurl = layeritems_info_name_subdir(selfid, layeritems)['vcs_url']
- if selfurl:
- return selfurl, selfsubdir
- else:
- logger.error("Cannot get layer %s git repo and subdir" % layername)
- return
- ldict = {}
- itemsid = layeritems_info_id(layername, layeritems)
- if not itemsid:
- return layername, None
- lbid = layerbranches_info(itemsid, layerbranches)
- if lbid:
- lbid = layerbranches_info(itemsid, layerbranches)['id']
- else:
- logger.error("%s is not found in the specified branch" % layername)
- return None, None
- for dependency in layerdependencies_info(lbid, layerdependencies):
- lname = layeritems_info_name_subdir(dependency, layeritems)['name']
- lurl = layeritems_info_name_subdir(dependency, layeritems)['vcs_url']
- lsubdir = layerbranches_info(dependency, layerbranches)['vcs_subdir']
- ldict[lname] = lurl, lsubdir
- return None, ldict
-
def get_fetch_layer(self, fetchdir, url, subdir, fetch_layer):
layername = self.get_layer_name(url)
if os.path.splitext(layername)[1] == '.git':
@@ -136,95 +31,124 @@ class LayerIndexPlugin(ActionPlugin):
result = subprocess.call('git clone %s %s' % (url, repodir), shell = True)
if result:
logger.error("Failed to download %s" % url)
- return None, None
+ return None, None, None
else:
- return layername, layerdir
+ return subdir, layername, layerdir
else:
logger.plain("Repository %s needs to be fetched" % url)
- return layername, layerdir
+ return subdir, layername, layerdir
elif os.path.exists(layerdir):
- return layername, layerdir
+ return subdir, layername, layerdir
else:
logger.error("%s is not in %s" % (url, subdir))
- return None, None
+ return None, None, None
def do_layerindex_fetch(self, args):
"""Fetches a layer from a layer index along with its dependent layers, and adds them to conf/bblayers.conf.
"""
- apiurl = self.tinfoil.config_data.getVar('BBLAYERS_LAYERINDEX_URL')
- if not apiurl:
- logger.error("Cannot get BBLAYERS_LAYERINDEX_URL")
- return 1
+
+ def _construct_url(baseurls, branches):
+ urls = []
+ for baseurl in baseurls:
+ if baseurl[-1] != '/':
+ baseurl += '/'
+
+ if not baseurl.startswith('cooker'):
+ baseurl += "api/"
+
+ if branches:
+ baseurl += ";branch=%s" % ','.join(branches)
+
+ urls.append(baseurl)
+
+ return urls
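+            # Worked example (a sketch): with BBLAYERS_LAYERINDEX_URL set to
+            # 'http://layers.openembedded.org/layerindex/' and branches
+            # ['master'], this returns:
+            #   ['http://layers.openembedded.org/layerindex/api/;branch=master']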
+
+
+ # Set the default...
+ if args.branch:
+ branches = [args.branch]
else:
- if apiurl[-1] != '/':
- apiurl += '/'
- apiurl += "api/"
- apilinks = self.get_json_data(apiurl)
- branches = self.get_json_data(apilinks['branches'])
-
- branchnum = 0
- for branch in branches:
- if branch['name'] == args.branch:
- branchnum = branch['id']
- break
- if branchnum == 0:
- validbranches = ', '.join([branch['name'] for branch in branches])
- logger.error('Invalid layer branch name "%s". Valid branches: %s' % (args.branch, validbranches))
- return 1
+ branches = (self.tinfoil.config_data.getVar('LAYERSERIES_CORENAMES') or 'master').split()
+ logger.debug(1, 'Trying branches: %s' % branches)
ignore_layers = []
- for collection in self.tinfoil.config_data.getVar('BBFILE_COLLECTIONS').split():
- lname = self.tinfoil.config_data.getVar('BBLAYERS_LAYERINDEX_NAME_%s' % collection)
- if lname:
- ignore_layers.append(lname)
-
if args.ignore:
ignore_layers.extend(args.ignore.split(','))
- layeritems = self.get_json_data(apilinks['layerItems'])
- layerbranches = self.get_json_data(apilinks['layerBranches'])
- layerdependencies = self.get_json_data(apilinks['layerDependencies'])
- invaluenames = []
- repourls = {}
- printlayers = []
-
- def query_dependencies(layers, layeritems, layerbranches, layerdependencies, branchnum):
- depslayer = []
- for layername in layers:
- invaluename, layerdict = self.get_layer_deps(layername, layeritems, layerbranches, layerdependencies, branchnum)
- if layerdict:
- repourls[layername] = self.get_layer_deps(layername, layeritems, layerbranches, layerdependencies, branchnum, selfname=True)
- for layer in layerdict:
- if not layer in ignore_layers:
- depslayer.append(layer)
- printlayers.append((layername, layer, layerdict[layer][0], layerdict[layer][1]))
- if not layer in ignore_layers and not layer in repourls:
- repourls[layer] = (layerdict[layer][0], layerdict[layer][1])
- if invaluename and not invaluename in invaluenames:
- invaluenames.append(invaluename)
- return depslayer
-
- depslayers = query_dependencies(args.layername, layeritems, layerbranches, layerdependencies, branchnum)
- while depslayers:
- depslayer = query_dependencies(depslayers, layeritems, layerbranches, layerdependencies, branchnum)
- depslayers = depslayer
- if invaluenames:
- for invaluename in invaluenames:
- logger.error('Layer "%s" not found in layer index' % invaluename)
- return 1
- logger.plain("%s %s %s %s" % ("Layer".ljust(19), "Required by".ljust(19), "Git repository".ljust(54), "Subdirectory"))
- logger.plain('=' * 115)
- for layername in args.layername:
- layerurl = repourls[layername]
- logger.plain("%s %s %s %s" % (layername.ljust(20), '-'.ljust(20), layerurl[0].ljust(55), layerurl[1]))
- printedlayers = []
- for layer, dependency, gitrepo, subdirectory in printlayers:
- if dependency in printedlayers:
- continue
- logger.plain("%s %s %s %s" % (dependency.ljust(20), layer.ljust(20), gitrepo.ljust(55), subdirectory))
- printedlayers.append(dependency)
-
- if repourls:
+ # Load the cooker DB
+ cookerIndex = layerindexlib.LayerIndex(self.tinfoil.config_data)
+ cookerIndex.load_layerindex('cooker://', load='layerDependencies')
+
+ # Fast path, check if we already have what has been requested!
+ (dependencies, invalidnames) = cookerIndex.find_dependencies(names=args.layername, ignores=ignore_layers)
+ if not args.show_only and not invalidnames:
+ logger.plain("You already have the requested layer(s): %s" % args.layername)
+ return 0
+
+ # The information to show is already in the cookerIndex
+ if invalidnames:
+            # General URL to use to access the layer index
+            # While there is one right now, we expect users may enter several
+ apiurl = self.tinfoil.config_data.getVar('BBLAYERS_LAYERINDEX_URL').split()
+ if not apiurl:
+ logger.error("Cannot get BBLAYERS_LAYERINDEX_URL")
+ return 1
+
+ remoteIndex = layerindexlib.LayerIndex(self.tinfoil.config_data)
+
+ for remoteurl in _construct_url(apiurl, branches):
+ logger.plain("Loading %s..." % remoteurl)
+ remoteIndex.load_layerindex(remoteurl)
+
+ if remoteIndex.is_empty():
+ logger.error("Remote layer index %s is empty for branches %s" % (apiurl, branches))
+ return 1
+
+ lIndex = cookerIndex + remoteIndex
+
+ (dependencies, invalidnames) = lIndex.find_dependencies(names=args.layername, ignores=ignore_layers)
+
+ if invalidnames:
+                for invalidname in invalidnames:
+                    logger.error('Layer "%s" not found in layer index' % invalidname)
+ return 1
+
+ logger.plain("%s %s %s" % ("Layer".ljust(49), "Git repository (branch)".ljust(54), "Subdirectory"))
+ logger.plain('=' * 125)
+
+ for deplayerbranch in dependencies:
+ layerBranch = dependencies[deplayerbranch][0]
+
+ # TODO: Determine display behavior
+ # This is the local content, uncomment to hide local
+ # layers from the display.
+ #if layerBranch.index.config['TYPE'] == 'cooker':
+ # continue
+
+ layerDeps = dependencies[deplayerbranch][1:]
+
+ requiredby = []
+ recommendedby = []
+ for dep in layerDeps:
+ if dep.required:
+ requiredby.append(dep.layer.name)
+ else:
+ recommendedby.append(dep.layer.name)
+
+ logger.plain('%s %s %s' % (("%s:%s:%s" %
+ (layerBranch.index.config['DESCRIPTION'],
+ layerBranch.branch.name,
+ layerBranch.layer.name)).ljust(50),
+ ("%s (%s)" % (layerBranch.layer.vcs_url,
+ layerBranch.actual_branch)).ljust(55),
+ layerBranch.vcs_subdir
+ ))
+ if requiredby:
+ logger.plain(' required by: %s' % ' '.join(requiredby))
+ if recommendedby:
+ logger.plain(' recommended by: %s' % ' '.join(recommendedby))
+
+ if dependencies:
fetchdir = self.tinfoil.config_data.getVar('BBLAYERS_FETCH_DIR')
if not fetchdir:
logger.error("Cannot get BBLAYERS_FETCH_DIR")
@@ -232,26 +156,39 @@ class LayerIndexPlugin(ActionPlugin):
if not os.path.exists(fetchdir):
os.makedirs(fetchdir)
addlayers = []
- for repourl, subdir in repourls.values():
- name, layerdir = self.get_fetch_layer(fetchdir, repourl, subdir, not args.show_only)
+
+ for deplayerbranch in dependencies:
+ layerBranch = dependencies[deplayerbranch][0]
+
+ if layerBranch.index.config['TYPE'] == 'cooker':
+ # Anything loaded via cooker is already local, skip it
+ continue
+
+ subdir, name, layerdir = self.get_fetch_layer(fetchdir,
+ layerBranch.layer.vcs_url,
+ layerBranch.vcs_subdir,
+ not args.show_only)
if not name:
# Error already shown
return 1
addlayers.append((subdir, name, layerdir))
if not args.show_only:
- for subdir, name, layerdir in set(addlayers):
+ localargs = argparse.Namespace()
+ localargs.layerdir = []
+ localargs.force = args.force
+ for subdir, name, layerdir in addlayers:
if os.path.exists(layerdir):
if subdir:
- logger.plain("Adding layer \"%s\" to conf/bblayers.conf" % subdir)
+ logger.plain("Adding layer \"%s\" (%s) to conf/bblayers.conf" % (subdir, layerdir))
else:
- logger.plain("Adding layer \"%s\" to conf/bblayers.conf" % name)
- localargs = argparse.Namespace()
- localargs.layerdir = layerdir
- localargs.force = args.force
- self.do_add_layer(localargs)
+ logger.plain("Adding layer \"%s\" (%s) to conf/bblayers.conf" % (name, layerdir))
+ localargs.layerdir.append(layerdir)
else:
break
+ if localargs.layerdir:
+ self.do_add_layer(localargs)
+
def do_layerindex_show_depends(self, args):
"""Find layer dependencies from layer index.
"""
@@ -260,12 +197,12 @@ class LayerIndexPlugin(ActionPlugin):
self.do_layerindex_fetch(args)
def register_commands(self, sp):
- parser_layerindex_fetch = self.add_command(sp, 'layerindex-fetch', self.do_layerindex_fetch)
+ parser_layerindex_fetch = self.add_command(sp, 'layerindex-fetch', self.do_layerindex_fetch, parserecipes=False)
parser_layerindex_fetch.add_argument('-n', '--show-only', help='show dependencies and do nothing else', action='store_true')
- parser_layerindex_fetch.add_argument('-b', '--branch', help='branch name to fetch (default %(default)s)', default='master')
+ parser_layerindex_fetch.add_argument('-b', '--branch', help='branch name to fetch')
parser_layerindex_fetch.add_argument('-i', '--ignore', help='assume the specified layers do not need to be fetched/added (separate multiple layers with commas, no spaces)', metavar='LAYER')
parser_layerindex_fetch.add_argument('layername', nargs='+', help='layer to fetch')
- parser_layerindex_show_depends = self.add_command(sp, 'layerindex-show-depends', self.do_layerindex_show_depends)
- parser_layerindex_show_depends.add_argument('-b', '--branch', help='branch name to fetch (default %(default)s)', default='master')
+ parser_layerindex_show_depends = self.add_command(sp, 'layerindex-show-depends', self.do_layerindex_show_depends, parserecipes=False)
+ parser_layerindex_show_depends.add_argument('-b', '--branch', help='branch name to fetch')
parser_layerindex_show_depends.add_argument('layername', nargs='+', help='layer to query')
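
The reworked do_layerindex_fetch above composes the local cooker index with
one or more remote indexes and resolves dependencies against the combined
view, with local entries taking priority. A minimal sketch of the underlying
layerindexlib flow, assuming a tinfoil datastore 'd' is available (the layer
name 'meta-python' is purely illustrative):

    import layerindexlib

    cooker_index = layerindexlib.LayerIndex(d)
    cooker_index.load_layerindex('cooker://', load='layerDependencies')

    remote_index = layerindexlib.LayerIndex(d)
    remote_index.load_layerindex(
        'http://layers.openembedded.org/layerindex/api/;branch=master')

    # Indexes added first win when the same collection appears in both.
    combined = cooker_index + remote_index
    dependencies, invalid = combined.find_dependencies(names=['meta-python'])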
diff --git a/bitbake/lib/layerindexlib/README b/bitbake/lib/layerindexlib/README
new file mode 100644
index 0000000..5d927af
--- /dev/null
+++ b/bitbake/lib/layerindexlib/README
@@ -0,0 +1,28 @@
+The layerindexlib module is designed to permit programs to work directly
+with layer index information. (See layers.openembedded.org...)
+
+The layerindexlib module includes a plugin interface that is used to extend
+the basic functionality. There are two primary plugins available: restapi
+and cooker.
+
+The restapi plugin works with a web based REST Api compatible with the
+layerindex-web project, as well as the ability to store and retrieve
+the information for one or more files on the disk.
+
+The cooker plugin works by reading the information from the current build
+project and processing it as if it were a layer index.
+
+
+TODO:
+
+__init__.py:
+Implement local on-disk caching (using the rest api store/load)
+Implement layer index style query operations on a combined index
+
+common.py:
+Stop network access if BB_NO_NETWORK or allowed hosts is restricted
+
+cooker.py:
+Cooker - Implement recipe parsing
+
+
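
As an illustration of the store/retrieve capability the README describes (a
sketch only; the cache path is hypothetical, and whether file:// URLs
round-trip depends on the restapi plugin shipped in this series):

    import layerindexlib

    index = layerindexlib.LayerIndex(d)
    index.load_layerindex(
        'http://layers.openembedded.org/layerindex/api/;branch=master')

    # Save a local copy of the first loaded index, then reload it later
    # without touching the network.
    index.store_layerindex('file:///tmp/oe-index;branch=master',
                           index.indexes[0])

    cached = layerindexlib.LayerIndex(d)
    cached.load_layerindex('file:///tmp/oe-index;branch=master')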
diff --git a/bitbake/lib/layerindexlib/__init__.py b/bitbake/lib/layerindexlib/__init__.py
new file mode 100644
index 0000000..cb79cb3
--- /dev/null
+++ b/bitbake/lib/layerindexlib/__init__.py
@@ -0,0 +1,1363 @@
+# Copyright (C) 2016-2018 Wind River Systems, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+# See the GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+import datetime
+
+import logging
+import os
+
+from collections import OrderedDict
+from layerindexlib.plugin import LayerIndexPluginUrlError
+
+logger = logging.getLogger('BitBake.layerindexlib')
+
+# Exceptions
+
+class LayerIndexException(Exception):
+ '''LayerIndex Generic Exception'''
+ def __init__(self, message):
+ self.msg = message
+ Exception.__init__(self, message)
+
+ def __str__(self):
+ return self.msg
+
+class LayerIndexUrlError(LayerIndexException):
+ '''Exception raised when unable to access a URL for some reason'''
+ def __init__(self, url, message=""):
+ if message:
+ msg = "Unable to access layerindex url %s: %s" % (url, message)
+ else:
+ msg = "Unable to access layerindex url %s" % url
+ self.url = url
+ LayerIndexException.__init__(self, msg)
+
+class LayerIndexFetchError(LayerIndexException):
+ '''General layerindex fetcher exception when something fails'''
+ def __init__(self, url, message=""):
+ if message:
+ msg = "Unable to fetch layerindex url %s: %s" % (url, message)
+ else:
+ msg = "Unable to fetch layerindex url %s" % url
+ self.url = url
+ LayerIndexException.__init__(self, msg)
+
+
+# Interface to the overall layerindex system
+# the layer may contain one or more individual indexes
+class LayerIndex():
+ def __init__(self, d):
+ if not d:
+ raise LayerIndexException("Must be initialized with bb.data.")
+
+ self.data = d
+
+ # List of LayerIndexObj
+ self.indexes = []
+
+ self.plugins = []
+
+ import bb.utils
+ bb.utils.load_plugins(logger, self.plugins, os.path.dirname(__file__))
+ for plugin in self.plugins:
+ if hasattr(plugin, 'init'):
+ plugin.init(self)
+
+ def __add__(self, other):
+ newIndex = LayerIndex(self.data)
+
+ if self.__class__ != newIndex.__class__ or \
+ other.__class__ != newIndex.__class__:
+            raise TypeError("Can not add different types.")
+
+ for indexEnt in self.indexes:
+ newIndex.indexes.append(indexEnt)
+
+ for indexEnt in other.indexes:
+ newIndex.indexes.append(indexEnt)
+
+ return newIndex
+
+ def _parse_params(self, params):
+ '''Take a parameter list, return a dictionary of parameters.
+
+ Expected to be called from the data of urllib.parse.urlparse(url).params
+
+ If there are two conflicting parameters, last in wins...
+ '''
+
+ param_dict = {}
+ for param in params.split(';'):
+ if not param:
+ continue
+ item = param.split('=', 1)
+ logger.debug(1, item)
+ param_dict[item[0]] = item[1]
+
+ return param_dict
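+        # e.g. (sketch): _parse_params('branch=master,morty;desc=OE%20Index')
+        #   -> {'branch': 'master,morty', 'desc': 'OE%20Index'}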
+
+ def _fetch_url(self, url, username=None, password=None, debuglevel=0):
+ '''Fetch data from a specific URL.
+
+ Fetch something from a specific URL. This is specifically designed to
+ fetch data from a layerindex-web instance, but may be useful for other
+ raw fetch actions.
+
+        It is not designed to be used to fetch recipe sources or similar; the
+        regular fetcher class should be used for that.
+
+ It is the responsibility of the caller to check BB_NO_NETWORK and related
+ BB_ALLOWED_NETWORKS.
+ '''
+
+ if not url:
+ raise LayerIndexUrlError(url, "empty url")
+
+ import urllib
+ from urllib.request import urlopen, Request
+ from urllib.parse import urlparse
+
+ up = urlparse(url)
+
+ if username:
+ logger.debug(1, "Configuring authentication for %s..." % url)
+ password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
+ password_mgr.add_password(None, "%s://%s" % (up.scheme, up.netloc), username, password)
+ handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
+ opener = urllib.request.build_opener(handler, urllib.request.HTTPSHandler(debuglevel=debuglevel))
+ else:
+ opener = urllib.request.build_opener(urllib.request.HTTPSHandler(debuglevel=debuglevel))
+
+ urllib.request.install_opener(opener)
+
+ logger.debug(1, "Fetching %s (%s)..." % (url, ["without authentication", "with authentication"][bool(username)]))
+
+ try:
+ res = urlopen(Request(url, headers={'User-Agent': 'Mozilla/5.0 (bitbake/lib/layerindex)'}, unverifiable=True))
+ except urllib.error.HTTPError as e:
+ logger.debug(1, "HTTP Error: %s: %s" % (e.code, e.reason))
+ logger.debug(1, " Requested: %s" % (url))
+ logger.debug(1, " Actual: %s" % (e.geturl()))
+
+ if e.code == 404:
+ logger.debug(1, "Request not found.")
+ raise LayerIndexFetchError(url, e)
+ else:
+ logger.debug(1, "Headers:\n%s" % (e.headers))
+ raise LayerIndexFetchError(url, e)
+ except OSError as e:
+ error = 0
+ reason = ""
+
+ # Process base OSError first...
+ if hasattr(e, 'errno'):
+ error = e.errno
+ reason = e.strerror
+
+ # Process gaierror (socket error) subclass if available.
+ if hasattr(e, 'reason') and hasattr(e.reason, 'errno') and hasattr(e.reason, 'strerror'):
+ error = e.reason.errno
+ reason = e.reason.strerror
+ if error == -2:
+ raise LayerIndexFetchError(url, "%s: %s" % (e, reason))
+
+ if error and error != 0:
+ raise LayerIndexFetchError(url, "Unexpected exception: [Error %s] %s" % (error, reason))
+ else:
+ raise LayerIndexFetchError(url, "Unable to fetch OSError exception: %s" % e)
+
+ finally:
+ logger.debug(1, "...fetching %s (%s), done." % (url, ["without authentication", "with authentication"][bool(username)]))
+
+ return res
+
+
+ def load_layerindex(self, indexURI, load=['layerDependencies', 'recipes', 'machines', 'distros'], reload=False):
+ '''Load the layerindex.
+
+ indexURI - An index to load. (Use multiple calls to load multiple indexes)
+
+ reload - If reload is True, then any previously loaded indexes will be forgotten.
+
+ load - List of elements to load. Default loads all items.
+             Note: plugins may ignore this.
+
+The format of the indexURI:
+
+ <url>;branch=<branch>;cache=<cache>;desc=<description>
+
+    Note: the 'branch' parameter, if set, can select multiple branches by using
+    commas, such as 'branch=master,morty,pyro'. However, many operations only look
+    at the -first- branch specified!
+
+    The cache value may be undefined; in this case a network failure will
+    result in an error. Otherwise the system will look for a file of the cache
+    name and load that instead.
+
+ For example:
+
+ http://layers.openembedded.org/layerindex/api/;branch=master;desc=OpenEmbedded%20Layer%20Index
+ cooker://
+'''
+ if reload:
+ self.indexes = []
+
+ logger.debug(1, 'Loading: %s' % indexURI)
+
+ if not self.plugins:
+ raise LayerIndexException("No LayerIndex Plugins available")
+
+ for plugin in self.plugins:
+ # Check if the plugin was initialized
+ logger.debug(1, 'Trying %s' % plugin.__class__)
+ if not hasattr(plugin, 'type') or not plugin.type:
+ continue
+ try:
+ # TODO: Implement 'cache', for when the network is not available
+ indexEnt = plugin.load_index(indexURI, load)
+ break
+ except LayerIndexPluginUrlError as e:
+ logger.debug(1, "%s doesn't support %s" % (plugin.type, e.url))
+ except NotImplementedError:
+ pass
+ else:
+ logger.debug(1, "No plugins support %s" % indexURI)
+ raise LayerIndexException("No plugins support %s" % indexURI)
+
+ # Mark CONFIG data as something we've added...
+ indexEnt.config['local'] = []
+ indexEnt.config['local'].append('config')
+
+ # No longer permit changes..
+ indexEnt.lockData()
+
+ self.indexes.append(indexEnt)
+
+ def store_layerindex(self, indexURI, index=None):
+ '''Store one layerindex
+
+Typically this will be used to create a local cache file of a remote index.
+
+ file://<path>;branch=<branch>
+
+We can write out in either the restapi or django formats. The split option
+will write out the individual elements split by layer and related components.
+'''
+ if not index:
+ logger.warning('No index to write, nothing to do.')
+ return
+
+ if not self.plugins:
+ raise LayerIndexException("No LayerIndex Plugins available")
+
+ for plugin in self.plugins:
+ # Check if the plugin was initialized
+ logger.debug(1, 'Trying %s' % plugin.__class__)
+ if not hasattr(plugin, 'type') or not plugin.type:
+ continue
+ try:
+ plugin.store_index(indexURI, index)
+ break
+ except LayerIndexPluginUrlError as e:
+ logger.debug(1, "%s doesn't support %s" % (plugin.type, e.url))
+ except NotImplementedError:
+ logger.debug(1, "Store not implemented in %s" % plugin.type)
+ pass
+ else:
+ logger.debug(1, "No plugins support %s" % url)
+ raise LayerIndexException("No plugins support %s" % url)
+
+
+ def is_empty(self):
+        '''Return True if the index has no usable data, otherwise False.
+
+We check the index entries to see if they have a branch set, as well as
+layerBranches set. If not, they are effectively blank.'''
+
+ found = False
+ for index in self.indexes:
+ if index.__bool__():
+ found = True
+ break
+ return not found
+
+
+ def find_vcs_url(self, vcs_url, branch=None):
+ '''Return the first layerBranch with the given vcs_url
+
+ If a branch has not been specified, we will iterate over the branches in
+ the default configuration until the first vcs_url/branch match.'''
+
+ for index in self.indexes:
+ logger.debug(1, ' searching %s' % index.config['DESCRIPTION'])
+ layerBranch = index.find_vcs_url(vcs_url, [branch])
+ if layerBranch:
+ return layerBranch
+ return None
+
+ def find_collection(self, collection, version=None, branch=None):
+ '''Return the first layerBranch with the given collection name
+
+ If a branch has not been specified, we will iterate over the branches in
+ the default configuration until the first collection/branch match.'''
+
+ logger.debug(1, 'find_collection: %s (%s) %s' % (collection, version, branch))
+
+ if branch:
+ branches = [branch]
+ else:
+ branches = None
+
+ for index in self.indexes:
+ logger.debug(1, ' searching %s' % index.config['DESCRIPTION'])
+ layerBranch = index.find_collection(collection, version, branches)
+ if layerBranch:
+ return layerBranch
+ else:
+ logger.debug(1, 'Collection %s (%s) not found for branch (%s)' % (collection, version, branch))
+ return None
+
+ def find_layerbranch(self, name, branch=None):
+ '''Return the layerBranch item for a given name and branch
+
+ If a branch has not been specified, we will iterate over the branches in
+ the default configuration until the first name/branch match.'''
+
+ if branch:
+ branches = [branch]
+ else:
+ branches = None
+
+ for index in self.indexes:
+ layerBranch = index.find_layerbranch(name, branches)
+ if layerBranch:
+ return layerBranch
+ return None
+
+ def find_dependencies(self, names=None, layerbranches=None, ignores=None):
+ '''Return a tuple of all dependencies and valid items for the list of (layer) names
+
+ The dependency scanning happens depth-first. The returned
+ dependencies should be in the best order to define bblayers.
+
+ names - list of layer names (searching layerItems)
+
+ layerbranches - list of layerbranches to resolve dependencies
+
+ ignores - list of layer names to ignore
+
+ return: (dependencies, invalid)
+
+ dependencies[LayerItem.name] = [ LayerBranch, LayerDependency1, LayerDependency2, ... ]
+ invalid = [ LayerItem.name1, LayerItem.name2, ... ]
+ '''
+
+ invalid = []
+
+ # Convert name/branch to layerbranches
+ if layerbranches is None:
+ layerbranches = []
+
+ for name in names:
+ if ignores and name in ignores:
+ continue
+
+ for index in self.indexes:
+ layerbranch = index.find_layerbranch(name)
+ if not layerbranch:
+ # Not in this index, hopefully it's in another...
+ continue
+ layerbranches.append(layerbranch)
+ break
+ else:
+ invalid.append(name)
+
+
+ def _resolve_dependencies(layerbranches, ignores, dependencies, invalid):
+ for layerbranch in layerbranches:
+ if ignores and layerbranch.layer.name in ignores:
+ continue
+
+ # Get a list of dependencies and then recursively process them
+ for layerdependency in layerbranch.index.layerDependencies_layerBranchId[layerbranch.id]:
+ deplayerbranch = layerdependency.dependency_layerBranch
+
+ if ignores and deplayerbranch.layer.name in ignores:
+ continue
+
+                    # This little block is why we can't re-use the LayerIndexObj version:
+                    # we must be able to satisfy each dependency across layer indexes and
+                    # use the layer index order for priority. (The 'r' prefix below stands for replacement.)
+
+ # If this is the primary index, we can fast path and skip this
+ if deplayerbranch.index != self.indexes[0]:
+ # Is there an entry in a prior index for this collection/version?
+ rdeplayerbranch = self.find_collection(
+ collection=deplayerbranch.collection,
+ version=deplayerbranch.version
+ )
+ if rdeplayerbranch != deplayerbranch:
+ logger.debug(1, 'Replaced %s:%s:%s with %s:%s:%s' % \
+ (deplayerbranch.index.config['DESCRIPTION'],
+ deplayerbranch.branch.name,
+ deplayerbranch.layer.name,
+ rdeplayerbranch.index.config['DESCRIPTION'],
+ rdeplayerbranch.branch.name,
+ rdeplayerbranch.layer.name))
+ deplayerbranch = rdeplayerbranch
+
+ # New dependency, we need to resolve it now... depth-first
+ if deplayerbranch.layer.name not in dependencies:
+ (dependencies, invalid) = _resolve_dependencies([deplayerbranch], ignores, dependencies, invalid)
+
+ if deplayerbranch.layer.name not in dependencies:
+ dependencies[deplayerbranch.layer.name] = [deplayerbranch, layerdependency]
+ else:
+ if layerdependency not in dependencies[deplayerbranch.layer.name]:
+ dependencies[deplayerbranch.layer.name].append(layerdependency)
+
+ return (dependencies, invalid)
+
+ # OK, resolve this one...
+ dependencies = OrderedDict()
+ (dependencies, invalid) = _resolve_dependencies(layerbranches, ignores, dependencies, invalid)
+
+ for layerbranch in layerbranches:
+ if layerbranch.layer.name not in dependencies:
+ dependencies[layerbranch.layer.name] = [layerbranch]
+
+ return (dependencies, invalid)
+
+
+ def list_obj(self, object):
+        '''Print, via the plain logger, information about the given object type
+
+This function is used to implement debugging and provide the user info.
+'''
+ for lix in self.indexes:
+ if not hasattr(lix, object):
+ continue
+
+ logger.plain ('')
+ logger.plain ('Index: %s' % lix.config['DESCRIPTION'])
+
+ output = []
+
+ if object == 'branches':
+ logger.plain ('%s %s %s' % ('{:26}'.format('branch'), '{:34}'.format('description'), '{:22}'.format('bitbake branch')))
+ logger.plain ('{:-^80}'.format(""))
+ for branchid in lix.branches:
+ output.append('%s %s %s' % (
+ '{:26}'.format(lix.branches[branchid].name),
+ '{:34}'.format(lix.branches[branchid].short_description),
+ '{:22}'.format(lix.branches[branchid].bitbake_branch)
+ ))
+ for line in sorted(output):
+ logger.plain (line)
+
+ continue
+
+ if object == 'layerItems':
+ logger.plain ('%s %s' % ('{:26}'.format('layer'), '{:34}'.format('description')))
+ logger.plain ('{:-^80}'.format(""))
+ for layerid in lix.layerItems:
+ output.append('%s %s' % (
+ '{:26}'.format(lix.layerItems[layerid].name),
+ '{:34}'.format(lix.layerItems[layerid].summary)
+ ))
+ for line in sorted(output):
+ logger.plain (line)
+
+ continue
+
+ if object == 'layerBranches':
+ logger.plain ('%s %s %s' % ('{:26}'.format('layer'), '{:34}'.format('description'), '{:19}'.format('collection:version')))
+ logger.plain ('{:-^80}'.format(""))
+ for layerbranchid in lix.layerBranches:
+ output.append('%s %s %s' % (
+ '{:26}'.format(lix.layerBranches[layerbranchid].layer.name),
+ '{:34}'.format(lix.layerBranches[layerbranchid].layer.summary),
+ '{:19}'.format("%s:%s" %
+ (lix.layerBranches[layerbranchid].collection,
+ lix.layerBranches[layerbranchid].version)
+ )
+ ))
+ for line in sorted(output):
+ logger.plain (line)
+
+ continue
+
+ if object == 'layerDependencies':
+ logger.plain ('%s %s %s %s' % ('{:19}'.format('branch'), '{:26}'.format('layer'), '{:11}'.format('dependency'), '{:26}'.format('layer')))
+ logger.plain ('{:-^80}'.format(""))
+ for layerDependency in lix.layerDependencies:
+ if not lix.layerDependencies[layerDependency].dependency_layerBranch:
+ continue
+
+ output.append('%s %s %s %s' % (
+ '{:19}'.format(lix.layerDependencies[layerDependency].layerbranch.branch.name),
+ '{:26}'.format(lix.layerDependencies[layerDependency].layerbranch.layer.name),
+ '{:11}'.format('requires' if lix.layerDependencies[layerDependency].required else 'recommends'),
+ '{:26}'.format(lix.layerDependencies[layerDependency].dependency_layerBranch.layer.name)
+ ))
+ for line in sorted(output):
+ logger.plain (line)
+
+ continue
+
+ if object == 'recipes':
+                logger.plain ('%s %s %s' % ('{:30}'.format('recipe'), '{:30}'.format('version'), 'layer'))
+ logger.plain ('{:-^80}'.format(""))
+ output = []
+ for recipe in lix.recipes:
+ output.append('%s %s %s' % (
+ '{:30}'.format(lix.recipes[recipe].pn),
+ '{:30}'.format(lix.recipes[recipe].pv),
+ lix.recipes[recipe].layer.name
+ ))
+ for line in sorted(output):
+ logger.plain (line)
+
+ continue
+
+ if object == 'machines':
+ logger.plain ('%s %s %s' % ('{:24}'.format('machine'), '{:34}'.format('description'), '{:19}'.format('layer')))
+ logger.plain ('{:-^80}'.format(""))
+ for machine in lix.machines:
+ output.append('%s %s %s' % (
+ '{:24}'.format(lix.machines[machine].name),
+ '{:34}'.format(lix.machines[machine].description)[:34],
+ '{:19}'.format(lix.machines[machine].layerbranch.layer.name)
+ ))
+ for line in sorted(output):
+ logger.plain (line)
+
+ continue
+
+ if object == 'distros':
+ logger.plain ('%s %s %s' % ('{:24}'.format('distro'), '{:34}'.format('description'), '{:19}'.format('layer')))
+ logger.plain ('{:-^80}'.format(""))
+ for distro in lix.distros:
+ output.append('%s %s %s' % (
+ '{:24}'.format(lix.distros[distro].name),
+ '{:34}'.format(lix.distros[distro].description)[:34],
+ '{:19}'.format(lix.distros[distro].layerbranch.layer.name)
+ ))
+ for line in sorted(output):
+ logger.plain (line)
+
+ continue
+
+ logger.plain ('')
+
+
+# This class holds a single layer index instance
+# The LayerIndexObj is made up of dictionary of elements, such as:
+# index['config'] - configuration data for this index
+# index['branches'] - dictionary of Branch objects, by id number
+# index['layerItems'] - dictionary of layerItem objects, by id number
+# ...etc... (See: http://layers.openembedded.org/layerindex/api/)
+#
+# The class needs to manage the 'index' entries and allow easily adding
+# of new items, as well as simply loading of the items.
+class LayerIndexObj():
+ def __init__(self):
+ super().__setattr__('_index', {})
+ super().__setattr__('_lock', False)
+
+ def __bool__(self):
+ '''False if the index is effectively empty
+
+ We check the index to see if it has a branch set, as well as
+ layerbranches set. If not, it is effectively blank.'''
+
+ if not bool(self._index):
+ return False
+
+ try:
+ if self.branches and self.layerBranches:
+ return True
+ except AttributeError:
+ pass
+
+ return False
+
+ def __getattr__(self, name):
+ if name.startswith('_'):
+ return super().__getattribute__(name)
+
+ if name not in self._index:
+ raise AttributeError('%s not in index datastore' % name)
+
+ return self._index[name]
+
+ def __setattr__(self, name, value):
+ if self.isLocked():
+ raise TypeError("Can not set attribute '%s': index is locked" % name)
+
+ if name.startswith('_'):
+ super().__setattr__(name, value)
+ return
+
+ self._index[name] = value
+
+ def __delattr__(self, name):
+ if self.isLocked():
+ raise TypeError("Can not delete attribute '%s': index is locked" % name)
+
+ if name.startswith('_'):
+            super().__delattr__(name)
+            return
+
+ self._index.pop(name)
+
+ def lockData(self):
+ '''Lock data object (make it readonly)'''
+ super().__setattr__("_lock", True)
+
+ def unlockData(self):
+        '''Unlock data object (make it read/write)'''
+ super().__setattr__("_lock", False)
+
+        # When the data is unlocked, we have to clear the caches, as
+        # modification is allowed! (The caches only exist once the
+        # corresponding property has been accessed while locked.)
+        for cache in ('_layerBranches_layerId_branchId',
+                      '_layerDependencies_layerBranchId',
+                      '_layerBranches_vcsUrl'):
+            if hasattr(self, cache):
+                delattr(self, cache)
+
+ def isLocked(self):
+ '''Is this object locked (readonly)?'''
+ return self._lock
+
+ def add_element(self, indexname, objs):
+ '''Add a layer index object to index.<indexname>'''
+ if indexname not in self._index:
+ self._index[indexname] = {}
+
+ for obj in objs:
+ if obj.id in self._index[indexname]:
+ if self._index[indexname][obj.id] == obj:
+ continue
+                raise LayerIndexException('Conflict adding object %s(%s) to index' % (indexname, obj.id))
+ self._index[indexname][obj.id] = obj
+
+ def add_raw_element(self, indexname, objtype, rawobjs):
+ '''Convert a raw layer index data item to a layer index item object and add to the index'''
+ objs = []
+ for entry in rawobjs:
+ objs.append(objtype(self, entry))
+ self.add_element(indexname, objs)
+
+ # Quick lookup table for searching layerId and branchID combos
+ @property
+ def layerBranches_layerId_branchId(self):
+ def createCache(self):
+ cache = {}
+ for layerbranchid in self.layerBranches:
+ layerbranch = self.layerBranches[layerbranchid]
+ cache["%s:%s" % (layerbranch.layer_id, layerbranch.branch_id)] = layerbranch
+ return cache
+
+ if self.isLocked():
+ cache = getattr(self, '_layerBranches_layerId_branchId', None)
+ else:
+ cache = None
+
+ if not cache:
+ cache = createCache(self)
+
+ if self.isLocked():
+ super().__setattr__('_layerBranches_layerId_branchId', cache)
+
+ return cache
+
+ # Quick lookup table for finding all dependencies of a layerBranch
+ @property
+ def layerDependencies_layerBranchId(self):
+ def createCache(self):
+ cache = {}
+ # This ensures empty lists for all branchids
+ for layerbranchid in self.layerBranches:
+ cache[layerbranchid] = []
+
+ for layerdependencyid in self.layerDependencies:
+ layerdependency = self.layerDependencies[layerdependencyid]
+ cache[layerdependency.layerbranch_id].append(layerdependency)
+ return cache
+
+ if self.isLocked():
+ cache = getattr(self, '_layerDependencies_layerBranchId', None)
+ else:
+ cache = None
+
+ if not cache:
+ cache = createCache(self)
+
+ if self.isLocked():
+ super().__setattr__('_layerDependencies_layerBranchId', cache)
+
+ return cache
+
+ # Quick lookup table for finding all instances of a vcs_url
+ @property
+ def layerBranches_vcsUrl(self):
+ def createCache(self):
+ cache = {}
+ for layerbranchid in self.layerBranches:
+ layerbranch = self.layerBranches[layerbranchid]
+ if layerbranch.layer.vcs_url not in cache:
+ cache[layerbranch.layer.vcs_url] = [layerbranch]
+ else:
+ cache[layerbranch.layer.vcs_url].append(layerbranch)
+ return cache
+
+ if self.isLocked():
+ cache = getattr(self, '_layerBranches_vcsUrl', None)
+ else:
+ cache = None
+
+ if not cache:
+ cache = createCache(self)
+
+ if self.isLocked():
+ super().__setattr__('_layerBranches_vcsUrl', cache)
+
+ return cache
+
+
+ def find_vcs_url(self, vcs_url, branches=None):
+        '''Return the first layerBranch with the given vcs_url
+
+ If a list of branches has not been specified, we will iterate on
+ all branches until the first vcs_url is found.'''
+
+ if not self.__bool__():
+ return None
+
+        for layerbranch in self.layerBranches_vcsUrl.get(vcs_url, []):
+            if branches and layerbranch.branch.name not in branches:
+                continue
+
+            return layerbranch
+
+ return None
+
+
+ def find_collection(self, collection, version=None, branches=None):
+ '''Return the first layerBranch with the given collection name
+
+ If a list of branches has not been specified, we will iterate on
+ all branches until the first collection is found.'''
+
+ if not self.__bool__():
+ return None
+
+ for layerbranchid in self.layerBranches:
+ layerbranch = self.layerBranches[layerbranchid]
+ if branches and layerbranch.branch.name not in branches:
+ continue
+
+ if layerbranch.collection == collection and \
+ (version is None or version == layerbranch.version):
+ return layerbranch
+
+ return None
+
+
+ def find_layerbranch(self, name, branches=None):
+ '''Return the first layerbranch whose layer name matches
+
+ If a list of branches has not been specified, we will iterate on
+ all branches until the first layer with that name is found.'''
+
+ if not self.__bool__():
+ return None
+
+ for layerbranchid in self.layerBranches:
+ layerbranch = self.layerBranches[layerbranchid]
+ if branches and layerbranch.branch.name not in branches:
+ continue
+
+ if layerbranch.layer.name == name:
+ return layerbranch
+
+ return None
+
+    def find_dependencies(self, names=None, branches=None, layerbranches=None, ignores=None):
+ '''Return a tuple of all dependencies and valid items for the list of (layer) names
+
+ The dependency scanning happens depth-first. The returned
+ dependencies should be in the best order to define bblayers.
+
+ names - list of layer names (searching layerItems)
+        branches - when specified (with names) only this list of branches is evaluated
+
+        layerbranches - list of layerbranches to resolve dependencies
+
+ ignores - list of layer names to ignore
+
+ return: (dependencies, invalid)
+
+ dependencies[LayerItem.name] = [ LayerBranch, LayerDependency1, LayerDependency2, ... ]
+ invalid = [ LayerItem.name1, LayerItem.name2, ... ]'''
+
+ invalid = []
+
+ # Convert name/branch to layerBranches
+ if layerbranches is None:
+ layerbranches = []
+
+ for name in names:
+ if ignores and name in ignores:
+ continue
+
+ layerbranch = self.find_layerbranch(name, branches)
+ if not layerbranch:
+ invalid.append(name)
+ else:
+ layerbranches.append(layerbranch)
+
+ for layerbranch in layerbranches:
+ if layerbranch.index != self:
+ raise LayerIndexException("Can not resolve dependencies across indexes with this class function!")
+
+ def _resolve_dependencies(layerbranches, ignores, dependencies, invalid):
+ for layerbranch in layerbranches:
+                if ignores and layerbranch.layer.name in ignores:
+ continue
+
+                for layerdependency in layerbranch.index.layerDependencies_layerBranchId[layerbranch.id]:
+                    deplayerbranch = layerdependency.dependency_layerBranch
+
+ if ignores and deplayerbranch.layer.name in ignores:
+ continue
+
+ # New dependency, we need to resolve it now... depth-first
+ if deplayerbranch.layer.name not in dependencies:
+ (dependencies, invalid) = _resolve_dependencies([deplayerbranch], ignores, dependencies, invalid)
+
+ if deplayerbranch.layer.name not in dependencies:
+ dependencies[deplayerbranch.layer.name] = [deplayerbranch, layerdependency]
+ else:
+ if layerdependency not in dependencies[deplayerbranch.layer.name]:
+ dependencies[deplayerbranch.layer.name].append(layerdependency)
+
+ return (dependencies, invalid)
+
+ # OK, resolve this one...
+ dependencies = OrderedDict()
+ (dependencies, invalid) = _resolve_dependencies(layerbranches, ignores, dependencies, invalid)
+
+        # If this item is not already in the list, add it
+ for layerbranch in layerbranches:
+ if layerbranch.layer.name not in dependencies:
+ dependencies[layerbranch.layer.name] = [layerbranch]
+
+ return (dependencies, invalid)
+
+
+# Define a basic LayerIndexItemObj. This object forms the basis for all other
+# objects. The raw Layer Index data is stored in the _data element, but we
+# do not want users to access data directly. So wrap this and protect it
+# from direct manipulation.
+#
+# It is up to the instantiators of the objects to fill them out, and once done
+# lock the objects to prevent further accidental manipulation.
+#
+# Using the getattr, setattr and properties we can access and manipulate
+# the data within the data element.
+class LayerIndexItemObj():
+ def __init__(self, index, data=None, lock=False):
+ if data is None:
+ data = {}
+
+ if type(data) != type(dict()):
+ raise TypeError('data (%s) is not a dict' % type(data))
+
+ super().__setattr__('_lock', lock)
+ super().__setattr__('index', index)
+ super().__setattr__('_data', data)
+
+ def __eq__(self, other):
+ if self.__class__ != other.__class__:
+ return False
+        return self._data == other._data
+
+ def __bool__(self):
+ return bool(self._data)
+
+ def __getattr__(self, name):
+ # These are internal to THIS class, and not part of data
+ if name == "index" or name.startswith('_'):
+ return super().__getattribute__(name)
+
+ if name not in self._data:
+ raise AttributeError('%s not in datastore' % name)
+
+ return self._data[name]
+
+ def _setattr(self, name, value, prop=True):
+ '''__setattr__ like function, but with control over property object behavior'''
+ if self.isLocked():
+ raise TypeError("Can not set attribute '%s': Object data is locked" % name)
+
+ if name.startswith('_'):
+ super().__setattr__(name, value)
+ return
+
+ # Since __setattr__ runs before properties, we need to check if
+ # there is a setter property and then execute it
+ # ... or return self._data[name]
+ propertyobj = getattr(self.__class__, name, None)
+ if prop and isinstance(propertyobj, property):
+ if propertyobj.fset:
+ propertyobj.fset(self, value)
+ else:
+ raise AttributeError('Attribute %s is readonly, and may not be set' % name)
+ else:
+ self._data[name] = value
+
+ def __setattr__(self, name, value):
+ self._setattr(name, value, prop=True)
+
+ def _delattr(self, name, prop=True):
+ # Since __delattr__ runs before properties, we need to check if
+ # there is a deleter property and then execute it
+ # ... or we pop it ourselves..
+ propertyobj = getattr(self.__class__, name, None)
+ if prop and isinstance(propertyobj, property):
+ if propertyobj.fdel:
+ propertyobj.fdel(self)
+ else:
+ raise AttributeError('Attribute %s is readonly, and may not be deleted' % name)
+ else:
+ self._data.pop(name)
+
+ def __delattr__(self, name):
+ self._delattr(name, prop=True)
+
+ def lockData(self):
+ '''Lock data object (make it readonly)'''
+ super().__setattr__("_lock", True)
+
+ def unlockData(self):
+        '''Unlock data object (make it read/write)'''
+ super().__setattr__("_lock", False)
+
+ def isLocked(self):
+ '''Is this object locked (readonly)?'''
+ return self._lock
+
+# Branch object
+class Branch(LayerIndexItemObj):
+ def define_data(self, id, name, bitbake_branch,
+ short_description=None, sort_priority=1,
+ updates_enabled=True, updated=None,
+ update_environment=None):
+ self.id = id
+ self.name = name
+ self.bitbake_branch = bitbake_branch
+ self.short_description = short_description or name
+ self.sort_priority = sort_priority
+ self.updates_enabled = updates_enabled
+ self.updated = updated or datetime.datetime.today().isoformat()
+ self.update_environment = update_environment
+
+ @property
+ def name(self):
+ return self.__getattr__('name')
+
+ @name.setter
+ def name(self, value):
+ self._data['name'] = value
+
+ if self.bitbake_branch == value:
+ self.bitbake_branch = ""
+
+ @name.deleter
+ def name(self):
+ self._delattr('name', prop=False)
+
+ @property
+ def bitbake_branch(self):
+ try:
+ return self.__getattr__('bitbake_branch')
+ except AttributeError:
+ return self.name
+
+ @bitbake_branch.setter
+ def bitbake_branch(self, value):
+ if self.name == value:
+ self._data['bitbake_branch'] = ""
+ else:
+ self._data['bitbake_branch'] = value
+
+ @bitbake_branch.deleter
+ def bitbake_branch(self):
+ self._delattr('bitbake_branch', prop=False)
+
+
+class LayerItem(LayerIndexItemObj):
+ def define_data(self, id, name, status='P',
+ layer_type='A', summary=None,
+ description=None,
+ vcs_url=None, vcs_web_url=None,
+ vcs_web_tree_base_url=None,
+ vcs_web_file_base_url=None,
+ usage_url=None,
+ mailing_list_url=None,
+ index_preference=1,
+ classic=False,
+ updated=None):
+ self.id = id
+ self.name = name
+ self.status = status
+ self.layer_type = layer_type
+ self.summary = summary or name
+ self.description = description or summary or name
+ self.vcs_url = vcs_url
+ self.vcs_web_url = vcs_web_url
+ self.vcs_web_tree_base_url = vcs_web_tree_base_url
+ self.vcs_web_file_base_url = vcs_web_file_base_url
+ self.index_preference = index_preference
+ self.classic = classic
+ self.updated = updated or datetime.datetime.today().isoformat()
+
+
+class LayerBranch(LayerIndexItemObj):
+ def define_data(self, id, collection, version, layer, branch,
+ vcs_subdir="", vcs_last_fetch=None,
+ vcs_last_rev=None, vcs_last_commit=None,
+ actual_branch="",
+ updated=None):
+ self.id = id
+ self.collection = collection
+ self.version = version
+ if isinstance(layer, LayerItem):
+ self.layer = layer
+ else:
+ self.layer_id = layer
+
+ if isinstance(branch, Branch):
+ self.branch = branch
+ else:
+ self.branch_id = branch
+
+ self.vcs_subdir = vcs_subdir
+ self.vcs_last_fetch = vcs_last_fetch
+ self.vcs_last_rev = vcs_last_rev
+ self.vcs_last_commit = vcs_last_commit
+ self.actual_branch = actual_branch
+ self.updated = updated or datetime.datetime.today().isoformat()
+
+    # This is a little odd: the _data attribute is 'layer', but it's really
+    # referring to the layer id. So let's adjust this to make it useful.
+ @property
+ def layer_id(self):
+ return self.__getattr__('layer')
+
+ @layer_id.setter
+ def layer_id(self, value):
+ self._setattr('layer', value, prop=False)
+
+ @layer_id.deleter
+ def layer_id(self):
+ self._delattr('layer', prop=False)
+
+ @property
+ def layer(self):
+ try:
+ return self.index.layerItems[self.layer_id]
+ except KeyError:
+ raise AttributeError('Unable to find layerItems in index to map layer_id %s' % self.layer_id)
+ except IndexError:
+ raise AttributeError('Unable to find layer_id %s in index layerItems' % self.layer_id)
+
+ @layer.setter
+ def layer(self, value):
+ if not isinstance(value, LayerItem):
+ raise TypeError('value is not a LayerItem')
+ if self.index != value.index:
+ raise AttributeError('Object and value do not share the same index and thus key set.')
+ self.layer_id = value.id
+
+ @layer.deleter
+ def layer(self):
+ del self.layer_id
+
+ @property
+ def branch_id(self):
+ return self.__getattr__('branch')
+
+ @branch_id.setter
+ def branch_id(self, value):
+ self._setattr('branch', value, prop=False)
+
+ @branch_id.deleter
+ def branch_id(self):
+ self._delattr('branch', prop=False)
+
+ @property
+ def branch(self):
+ try:
+ logger.debug(1, "Get branch object from branches[%s]" % (self.branch_id))
+ return self.index.branches[self.branch_id]
+ except KeyError:
+ raise AttributeError('Unable to find branches in index to map branch_id %s' % self.branch_id)
+ except IndexError:
+ raise AttributeError('Unable to find branch_id %s in index branches' % self.branch_id)
+
+ @branch.setter
+ def branch(self, value):
+        if not isinstance(value, Branch):
+            raise TypeError('value is not a Branch')
+ if self.index != value.index:
+ raise AttributeError('Object and value do not share the same index and thus key set.')
+ self.branch_id = value.id
+
+ @branch.deleter
+ def branch(self):
+ del self.branch_id
+
+ @property
+ def actual_branch(self):
+ if self.__getattr__('actual_branch'):
+ return self.__getattr__('actual_branch')
+ else:
+ return self.branch.name
+
+ @actual_branch.setter
+ def actual_branch(self, value):
+ logger.debug(1, "Set actual_branch to %s .. name is %s" % (value, self.branch.name))
+ if value != self.branch.name:
+ self._setattr('actual_branch', value, prop=False)
+ else:
+ self._setattr('actual_branch', '', prop=False)
+
+ @actual_branch.deleter
+ def actual_branch(self):
+ self._delattr('actual_branch', prop=False)
+
+# Extend LayerIndexItemObj with common LayerBranch manipulations
+# All of the remaining LayerIndex objects refer to layerbranch, and it is
+# up to the user to follow that back through the LayerBranch object into
+# the layer object to get various attributes. So add an intermediate set
+# of attributes that can easily get us the layerbranch as well as layer.
+
+class LayerIndexItemObj_LayerBranch(LayerIndexItemObj):
+ @property
+ def layerbranch_id(self):
+ return self.__getattr__('layerbranch')
+
+ @layerbranch_id.setter
+ def layerbranch_id(self, value):
+ self._setattr('layerbranch', value, prop=False)
+
+ @layerbranch_id.deleter
+ def layerbranch_id(self):
+ self._delattr('layerbranch', prop=False)
+
+ @property
+ def layerbranch(self):
+ try:
+ return self.index.layerBranches[self.layerbranch_id]
+ except KeyError:
+ raise AttributeError('Unable to find layerBranches in index to map layerbranch_id %s' % self.layerbranch_id)
+ except IndexError:
+ raise AttributeError('Unable to find layerbranch_id %s in index branches' % self.layerbranch_id)
+
+ @layerbranch.setter
+ def layerbranch(self, value):
+ if not isinstance(value, LayerBranch):
+ raise TypeError('value (%s) is not a layerBranch' % type(value))
+ if self.index != value.index:
+ raise AttributeError('Object and value do not share the same index and thus key set.')
+ self.layerbranch_id = value.id
+
+ @layerbranch.deleter
+ def layerbranch(self):
+ del self.layerbranch_id
+
+ @property
+ def layer_id(self):
+ return self.layerbranch.layer_id
+
+ # Doesn't make sense to set or delete layer_id
+
+ @property
+ def layer(self):
+ return self.layerbranch.layer
+
+ # Doesn't make sense to set or delete layer
+
+
+class LayerDependency(LayerIndexItemObj_LayerBranch):
+ def define_data(self, id, layerbranch, dependency, required=True):
+ self.id = id
+ if isinstance(layerbranch, LayerBranch):
+ self.layerbranch = layerbranch
+ else:
+ self.layerbranch_id = layerbranch
+        if isinstance(dependency, LayerItem):
+ self.dependency = dependency
+ else:
+ self.dependency_id = dependency
+ self.required = required
+
+ @property
+ def dependency_id(self):
+ return self.__getattr__('dependency')
+
+ @dependency_id.setter
+ def dependency_id(self, value):
+ self._setattr('dependency', value, prop=False)
+
+ @dependency_id.deleter
+ def dependency_id(self):
+ self._delattr('dependency', prop=False)
+
+ @property
+ def dependency(self):
+ try:
+ return self.index.layerItems[self.dependency_id]
+ except KeyError:
+ raise AttributeError('Unable to find layerItems in index to map layerbranch_id %s' % self.dependency_id)
+ except IndexError:
+ raise AttributeError('Unable to find dependency_id %s in index layerItems' % self.dependency_id)
+
+ @dependency.setter
+ def dependency(self, value):
+        if not isinstance(value, LayerItem):
+            raise TypeError('value (%s) is not a LayerItem' % type(value))
+ if self.index != value.index:
+ raise AttributeError('Object and value do not share the same index and thus key set.')
+ self.dependency_id = value.id
+
+ @dependency.deleter
+ def dependency(self):
+ self._delattr('dependency', prop=False)
+
+ @property
+ def dependency_layerBranch(self):
+ layerid = self.dependency_id
+ branchid = self.layerbranch.branch_id
+
+ try:
+ return self.index.layerBranches_layerId_branchId["%s:%s" % (layerid, branchid)]
+ except IndexError:
+ # layerBranches_layerId_branchId -- but not layerId:branchId
+ raise AttributeError('Unable to find layerId:branchId %s:%s in index layerBranches_layerId_branchId' % (layerid, branchid))
+ except KeyError:
+ raise AttributeError('Unable to find layerId:branchId %s:%s in layerItems and layerBranches' % (layerid, branchid))
+
+ # dependency_layerBranch doesn't make sense to set or del
+
+
+class Recipe(LayerIndexItemObj_LayerBranch):
+ def define_data(self, id,
+ filename, filepath, pn, pv, layerbranch,
+ summary="", description="", section="", license="",
+ homepage="", bugtracker="", provides="", bbclassextend="",
+ inherits="", blacklisted="", updated=None):
+ self.id = id
+ self.filename = filename
+ self.filepath = filepath
+ self.pn = pn
+ self.pv = pv
+ self.summary = summary
+ self.description = description
+ self.section = section
+ self.license = license
+ self.homepage = homepage
+ self.bugtracker = bugtracker
+ self.provides = provides
+ self.bbclassextend = bbclassextend
+ self.inherits = inherits
+ self.updated = updated or datetime.datetime.today().isoformat()
+ self.blacklisted = blacklisted
+ if isinstance(layerbranch, LayerBranch):
+ self.layerbranch = layerbranch
+ else:
+ self.layerbranch_id = layerbranch
+
+ @property
+ def fullpath(self):
+ return os.path.join(self.filepath, self.filename)
+
+ # Set would need to understand how to split it
+ # del would we del both parts?
+
+ @property
+ def inherits(self):
+ if 'inherits' not in self._data:
+ # Older indexes may not have this, so emulate it
+ if '-image-' in self.pn:
+ return 'image'
+ return self.__getattr__('inherits')
+
+ @inherits.setter
+ def inherits(self, value):
+ return self._setattr('inherits', value, prop=False)
+
+ @inherits.deleter
+ def inherits(self):
+ return self._delattr('inherits', prop=False)
+
+
+class Machine(LayerIndexItemObj_LayerBranch):
+ def define_data(self, id,
+ name, description, layerbranch,
+ updated=None):
+ self.id = id
+ self.name = name
+ self.description = description
+ if isinstance(layerbranch, LayerBranch):
+ self.layerbranch = layerbranch
+ else:
+ self.layerbranch_id = layerbranch
+ self.updated = updated or datetime.datetime.today().isoformat()
+
+class Distro(LayerIndexItemObj_LayerBranch):
+ def define_data(self, id,
+ name, description, layerbranch,
+ updated=None):
+ self.id = id
+ self.name = name
+ self.description = description
+ if isinstance(layerbranch, LayerBranch):
+ self.layerbranch = layerbranch
+ else:
+ self.layerbranch_id = layerbranch
+ self.updated = updated or datetime.datetime.today().isoformat()
+
+# When performing certain actions, we may need to sort the data.
+# This will allow us to keep it consistent from run to run.
+def sort_entry(item):
+ newitem = item
+ try:
+ if type(newitem) == type(dict()):
+ newitem = OrderedDict(sorted(newitem.items(), key=lambda t: t[0]))
+ for index in newitem:
+ newitem[index] = sort_entry(newitem[index])
+ elif type(newitem) == type(list()):
+ newitem.sort(key=lambda obj: obj['id'])
+ for index, _ in enumerate(newitem):
+ newitem[index] = sort_entry(newitem[index])
+    except Exception:
+        logger.error('Sort failed for item %s' % type(item))
+
+ return newitem
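
To illustrate the LayerIndexObj/LayerIndexItemObj pattern defined above
(attribute access backed by a _data dictionary, with lockData() making an
object read-only), here is a small sketch using only classes from this file;
the names 'master' and 'meta-example' are illustrative:

    import layerindexlib

    index = layerindexlib.LayerIndexObj()
    index.branches = {}
    index.layerItems = {}

    branch = layerindexlib.Branch(index, None)
    branch.define_data(id=1, name='master', bitbake_branch='master')
    index.branches[1] = branch

    layer = layerindexlib.LayerItem(index, None)
    layer.define_data(id=1, name='meta-example',
                      vcs_url='git://example.com/meta-example')
    index.layerItems[1] = layer

    branch.lockData()
    # branch.name = 'pyro' would now raise TypeError: the object is locked.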
diff --git a/bitbake/lib/layerindexlib/cooker.py b/bitbake/lib/layerindexlib/cooker.py
new file mode 100644
index 0000000..848f0e2
--- /dev/null
+++ b/bitbake/lib/layerindexlib/cooker.py
@@ -0,0 +1,344 @@
+# Copyright (C) 2016-2018 Wind River Systems, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+# See the GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+import logging
+import json
+import os
+
+from collections import OrderedDict, defaultdict
+
+from urllib.parse import unquote, urlparse
+
+import layerindexlib
+
+import layerindexlib.plugin
+
+logger = logging.getLogger('BitBake.layerindexlib.cooker')
+
+import bb.utils
+
+def plugin_init(plugins):
+ return CookerPlugin()
+
+class CookerPlugin(layerindexlib.plugin.IndexPlugin):
+ def __init__(self):
+ self.type = "cooker"
+
+ self.server_connection = None
+ self.ui_module = None
+ self.server = None
+
+ def _run_command(self, command, path, default=None):
+ try:
+ result, _ = bb.process.run(command, cwd=path)
+ result = result.strip()
+ except bb.process.ExecutionError:
+ result = default
+ return result
+
+ def _handle_git_remote(self, remote):
+ if "://" not in remote:
+ if ':' in remote:
+ # This is assumed to be ssh
+ remote = "ssh://" + remote
+ else:
+ # This is assumed to be a file path
+ remote = "file://" + remote
+ return remote
+
+ def _get_bitbake_info(self):
+ """Return a tuple of bitbake information"""
+
+        # Our path SHOULD be .../bitbake/lib/layerindexlib/cooker.py
+        bb_path = os.path.dirname(__file__) # .../bitbake/lib/layerindexlib/cooker.py
+        bb_path = os.path.dirname(bb_path) # .../bitbake/lib/layerindexlib
+ bb_path = os.path.dirname(bb_path) # .../bitbake/lib
+ bb_path = os.path.dirname(bb_path) # .../bitbake
+ bb_path = self._run_command('git rev-parse --show-toplevel', os.path.dirname(__file__), default=bb_path)
+ bb_branch = self._run_command('git rev-parse --abbrev-ref HEAD', bb_path, default="<unknown>")
+ bb_rev = self._run_command('git rev-parse HEAD', bb_path, default="<unknown>")
+        for remotes in self._run_command('git remote -v', bb_path, default="").split("\n"):
+            if not remotes:
+                continue
+            remote = remotes.split("\t")[1].split(" ")[0]
+ if "(fetch)" == remotes.split("\t")[1].split(" ")[1]:
+ bb_remote = self._handle_git_remote(remote)
+ break
+ else:
+ bb_remote = self._handle_git_remote(bb_path)
+
+ return (bb_remote, bb_branch, bb_rev, bb_path)
+
+ def _load_bblayers(self, branches=None):
+ """Load the BBLAYERS and related collection information"""
+
+ d = self.layerindex.data
+
+ if not branches:
+            raise layerindexlib.LayerIndexException("No branches specified for _load_bblayers!")
+
+ index = layerindexlib.LayerIndexObj()
+
+ branchId = 0
+ index.branches = {}
+
+ layerItemId = 0
+ index.layerItems = {}
+
+ layerBranchId = 0
+ index.layerBranches = {}
+
+ bblayers = d.getVar('BBLAYERS').split()
+
+ if not bblayers:
+ # It's blank! Nothing to process...
+ return index
+
+ collections = d.getVar('BBFILE_COLLECTIONS')
+ layerconfs = d.varhistory.get_variable_items_files('BBFILE_COLLECTIONS', d)
+ bbfile_collections = {layer: os.path.dirname(os.path.dirname(path)) for layer, path in layerconfs.items()}
+
+ (_, bb_branch, _, _) = self._get_bitbake_info()
+
+ for branch in branches:
+ branchId += 1
+ index.branches[branchId] = layerindexlib.Branch(index, None)
+ index.branches[branchId].define_data(branchId, branch, bb_branch)
+
+ for entry in collections.split():
+ layerpath = entry
+ if entry in bbfile_collections:
+ layerpath = bbfile_collections[entry]
+
+ layername = d.getVar('BBLAYERS_LAYERINDEX_NAME_%s' % entry) or os.path.basename(layerpath)
+ layerversion = d.getVar('LAYERVERSION_%s' % entry) or ""
+ layerurl = self._handle_git_remote(layerpath)
+
+ layersubdir = ""
+ layerrev = "<unknown>"
+ layerbranch = "<unknown>"
+
+ if os.path.isdir(layerpath):
+ layerbasepath = self._run_command('git rev-parse --show-toplevel', layerpath, default=layerpath)
+ if os.path.abspath(layerpath) != os.path.abspath(layerbasepath):
+ layersubdir = os.path.abspath(layerpath)[len(layerbasepath) + 1:]
+
+ layerbranch = self._run_command('git rev-parse --abbrev-ref HEAD', layerpath, default="<unknown>")
+ layerrev = self._run_command('git rev-parse HEAD', layerpath, default="<unknown>")
+
+ for remotes in self._run_command('git remote -v', layerpath, default="").split("\n"):
+ if not remotes:
+ layerurl = self._handle_git_remote(layerpath)
+ else:
+ remote = remotes.split("\t")[1].split(" ")[0]
+ if "(fetch)" == remotes.split("\t")[1].split(" ")[1]:
+ layerurl = self._handle_git_remote(remote)
+ break
+
+ layerItemId += 1
+ index.layerItems[layerItemId] = layerindexlib.LayerItem(index, None)
+ index.layerItems[layerItemId].define_data(layerItemId, layername, description=layerpath, vcs_url=layerurl)
+
+ for branchId in index.branches:
+ layerBranchId += 1
+ index.layerBranches[layerBranchId] = layerindexlib.LayerBranch(index, None)
+ index.layerBranches[layerBranchId].define_data(layerBranchId, entry, layerversion, layerItemId, branchId,
+ vcs_subdir=layersubdir, vcs_last_rev=layerrev, actual_branch=layerbranch)
+
+ return index
+
+
+ def load_index(self, url, load):
+ """
+ Fetches layer information from a build configuration.
+
+ The return value is a LayerIndexObj containing layer,
+ branch, dependency, recipe, machine and distro information.
+
+ The url scheme must be 'cooker'; the url path
+ is ignored.
+ """
+
+ up = urlparse(url)
+
+ if up.scheme != 'cooker':
+ raise layerindexlib.plugin.LayerIndexPluginUrlError(self.type, url)
+
+ d = self.layerindex.data
+
+ params = self.layerindex._parse_params(up.params)
+
+ # The only reason to pass branches is to emulate them...
+ if 'branch' in params:
+ branches = params['branch'].split(',')
+ else:
+ branches = ['HEAD']
+
+ logger.debug(1, "Loading cooker data branches %s" % branches)
+
+ index = self._load_bblayers(branches=branches)
+
+ index.config = {}
+ index.config['TYPE'] = self.type
+ index.config['URL'] = url
+
+ if 'desc' in params:
+ index.config['DESCRIPTION'] = unquote(params['desc'])
+ else:
+ index.config['DESCRIPTION'] = 'local'
+
+ if 'cache' in params:
+ index.config['CACHE'] = params['cache']
+
+ index.config['BRANCH'] = branches
+
+ # ("layerDependencies", layerindexlib.LayerDependency)
+ layerDependencyId = 0
+ if "layerDependencies" in load:
+ index.layerDependencies = {}
+ for layerBranchId in index.layerBranches:
+ branchName = index.layerBranches[layerBranchId].branch.name
+ collection = index.layerBranches[layerBranchId].collection
+
+ def add_dependency(layerDependencyId, index, deps, required):
+ try:
+ depDict = bb.utils.explode_dep_versions2(deps)
+ except bb.utils.VersionStringException as vse:
+ bb.fatal('Error parsing LAYERDEPENDS_%s: %s' % (collection, str(vse)))
+
+ for dep, oplist in list(depDict.items()):
+ # Search the index we are building ourselves, not all loaded indexes...
+ depLayerBranch = index.find_collection(dep, branches=[branchName])
+ if not depLayerBranch:
+ # Missing dependency?!
+ logger.error('Missing dependency %s (%s)' % (dep, branchName))
+ continue
+
+ # We assume that the oplist matches...
+ layerDependencyId += 1
+ layerDependency = layerindexlib.LayerDependency(index, None)
+ layerDependency.define_data(id=layerDependencyId,
+ required=required, layerbranch=layerBranchId,
+ dependency=depLayerBranch.layer_id)
+
+ logger.debug(1, '%s requires %s' % (layerDependency.layer.name, layerDependency.dependency.name))
+ index.add_element("layerDependencies", [layerDependency])
+
+ return layerDependencyId
+
+ deps = d.getVar("LAYERDEPENDS_%s" % collection)
+ if deps:
+ layerDependencyId = add_dependency(layerDependencyId, index, deps, True)
+
+ deps = d.getVar("LAYERRECOMMENDS_%s" % collection)
+ if deps:
+ layerDependencyId = add_dependency(layerDependencyId, index, deps, False)
+
+ # Need to load recipes here (requires cooker access)
+ recipeId = 0
+ ## TODO: NOT IMPLEMENTED
+ # The code following this is an example of what needs to be
+ # implemented. However, it does not work as-is.
+ if False and 'recipes' in load:
+ index.recipes = {}
+
+ ret = self.ui_module.main(self.server_connection.connection, self.server_connection.events, config_params)
+
+ all_versions = self._run_command('allProviders')
+
+ all_versions_list = defaultdict(list, all_versions)
+ for pn in all_versions_list:
+ for ((pe, pv, pr), fpath) in all_versions_list[pn]:
+ realfn = bb.cache.virtualfn2realfn(fpath)
+
+ filepath = os.path.dirname(realfn[0])
+ filename = os.path.basename(realfn[0])
+
+ # This is all HORRIBLY slow, and likely unnecessary
+ #dscon = self._run_command('parseRecipeFile', fpath, False, [])
+ #connector = myDataStoreConnector(self, dscon.dsindex)
+ #recipe_data = bb.data.init()
+ #recipe_data.setVar('_remote_data', connector)
+
+ #summary = recipe_data.getVar('SUMMARY')
+ #description = recipe_data.getVar('DESCRIPTION')
+ #section = recipe_data.getVar('SECTION')
+ #license = recipe_data.getVar('LICENSE')
+ #homepage = recipe_data.getVar('HOMEPAGE')
+ #bugtracker = recipe_data.getVar('BUGTRACKER')
+ #provides = recipe_data.getVar('PROVIDES')
+
+ layer = bb.utils.get_file_layer(realfn[0], self.config_data)
+
+ depBranchId = collection_layerbranch[layer]
+
+ recipeId += 1
+ recipe = layerindexlib.Recipe(index, None)
+ recipe.define_data(id=recipeId,
+ filename=filename, filepath=filepath,
+ pn=pn, pv=pv,
+ summary=pn, description=pn, section='?',
+ license='?', homepage='?', bugtracker='?',
+ provides='?', bbclassextend='?', inherits='?',
+ blacklisted='?', layerbranch=depBranchId)
+
+ index = addElement("recipes", [recipe], index)
+
+ # ("machines", layerindexlib.Machine)
+ machineId = 0
+ if 'machines' in load:
+ index.machines = {}
+
+ for layerBranchId in index.layerBranches:
+ # load_bblayers uses the description to cache the actual path...
+ machine_path = index.layerBranches[layerBranchId].layer.description
+ machine_path = os.path.join(machine_path, 'conf/machine')
+ if os.path.isdir(machine_path):
+ for (dirpath, _, filenames) in os.walk(machine_path):
+ # Ignore subdirs...
+ if not dirpath.endswith('conf/machine'):
+ continue
+ for fname in filenames:
+ if fname.endswith('.conf'):
+ machineId += 1
+ machine = layerindexlib.Machine(index, None)
+ machine.define_data(id=machineId, name=fname[:-5],
+ description=fname[:-5],
+ layerbranch=index.layerBranches[layerBranchId])
+
+ index.add_element("machines", [machine])
+
+ # ("distros", layerindexlib.Distro)
+ distroId = 0
+ if 'distros' in load:
+ index.distros = {}
+
+ for layerBranchId in index.layerBranches:
+ # load_bblayers uses the description to cache the actual path...
+ distro_path = index.layerBranches[layerBranchId].layer.description
+ distro_path = os.path.join(distro_path, 'conf/distro')
+ if os.path.isdir(distro_path):
+ for (dirpath, _, filenames) in os.walk(distro_path):
+ # Ignore subdirs...
+ if not dirpath.endswith('conf/distro'):
+ continue
+ for fname in filenames:
+ if fname.endswith('.conf'):
+ distroId += 1
+ distro = layerindexlib.Distro(index, None)
+ distro.define_data(id=distroId, name=fname[:-5],
+ description=fname[:-5],
+ layerbranch=index.layerBranches[layerBranchId])
+
+ index.add_element("distros", [distro])
+
+ return index
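
For orientation, the plugin above is not called directly; it is reached
through the common LayerIndex front end using a 'cooker://' URL, as the
tests later in this series do. A minimal sketch, assuming `d` is a cooker
datastore with bblayers.conf and the layer.conf files already handled:

    import layerindexlib

    index = layerindexlib.LayerIndex(d)
    # The path part of the URL is ignored; 'branch=...' can be used to
    # emulate index branches (the default is 'HEAD').
    index.load_layerindex('cooker://', load=['layerDependencies'])

    # Depth-first dependency resolution across the emulated index.
    dependencies, invalidnames = index.find_dependencies(names=['meta-python'])
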
diff --git a/bitbake/lib/layerindexlib/plugin.py b/bitbake/lib/layerindexlib/plugin.py
new file mode 100644
index 0000000..92a2e97
--- /dev/null
+++ b/bitbake/lib/layerindexlib/plugin.py
@@ -0,0 +1,60 @@
+# Copyright (C) 2016-2018 Wind River Systems, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+# See the GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+# This file contains:
+# LayerIndex exceptions
+# Plugin base class
+# Utility Functions for working on layerindex data
+
+import argparse
+import logging
+import os
+import bb.msg
+
+logger = logging.getLogger('BitBake.layerindexlib.plugin')
+
+class LayerIndexPluginException(Exception):
+ """LayerIndex Generic Exception"""
+ def __init__(self, message):
+ self.msg = message
+ Exception.__init__(self, message)
+
+ def __str__(self):
+ return self.msg
+
+class LayerIndexPluginUrlError(LayerIndexPluginException):
+ """Exception raised when a plugin does not support a given URL type"""
+ def __init__(self, plugin, url):
+ msg = "%s does not support url %s" % (plugin, url)
+ self.plugin = plugin
+ self.url = url
+ LayerIndexPluginException.__init__(self, msg)
+
+class IndexPlugin():
+ def __init__(self):
+ self.type = None
+
+ def init(self, layerindex):
+ self.layerindex = layerindex
+
+ def plugin_type(self):
+ return self.type
+
+ def load_index(self, url, load):
+ raise NotImplementedError('load_index is not implemented')
+
+ def store_index(self, url, index):
+ raise NotImplementedError('store_index is not implemented')
+
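The base class above is deliberately minimal: a concrete plugin supplies a
module-level plugin_init(), sets self.type (which doubles as the URL
scheme), and overrides load_index and, if it supports writing, store_index.
A sketch of a read-only plugin; the 'example' scheme name is invented
purely for illustration:

    import layerindexlib
    import layerindexlib.plugin

    def plugin_init(plugins):
        return ExamplePlugin()

    class ExamplePlugin(layerindexlib.plugin.IndexPlugin):
        def __init__(self):
            self.type = "example"

        def load_index(self, url, load):
            # Reject URL schemes we do not understand, as the cooker
            # and restapi plugins do.
            if not url.startswith(self.type + '://'):
                raise layerindexlib.plugin.LayerIndexPluginUrlError(self.type, url)
            index = layerindexlib.LayerIndexObj()
            index.config = {'TYPE': self.type, 'URL': url}
            return index
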
diff --git a/bitbake/lib/layerindexlib/restapi.py b/bitbake/lib/layerindexlib/restapi.py
new file mode 100644
index 0000000..d08eb20
--- /dev/null
+++ b/bitbake/lib/layerindexlib/restapi.py
@@ -0,0 +1,400 @@
+# Copyright (C) 2016-2018 Wind River Systems, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+# See the GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+import logging
+import json
+import os
+import bb
+from urllib.parse import unquote
+from urllib.parse import urlparse
+
+import layerindexlib
+import layerindexlib.plugin
+
+logger = logging.getLogger('BitBake.layerindexlib.restapi')
+
+def plugin_init(plugins):
+ return RestApiPlugin()
+
+class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
+ def __init__(self):
+ self.type = "restapi"
+
+ def load_index(self, url, load):
+ """
+ Fetches layer information from a local or remote layer index.
+
+ The return value is a LayerIndexObj.
+
+ url is the url to the rest api of the layer index, such as:
+ http://layers.openembedded.org/layerindex/api/
+
+ Or a local file...
+ """
+
+ up = urlparse(url)
+
+ if up.scheme == 'file':
+ return self.load_index_file(up, url, load)
+
+ if up.scheme == 'http' or up.scheme == 'https':
+ return self.load_index_web(up, url, load)
+
+ raise layerindexlib.plugin.LayerIndexPluginUrlError(self.type, url)
+
+
+ def load_index_file(self, up, url, load):
+ """
+ Fetches layer information from a local file or directory.
+
+ The return value is a LayerIndexObj.
+
+ up is the parsed url to the local file or directory.
+ """
+ if not os.path.exists(up.path):
+ raise FileNotFoundError(up.path)
+
+ index = layerindexlib.LayerIndexObj()
+
+ index.config = {}
+ index.config['TYPE'] = self.type
+ index.config['URL'] = url
+
+ params = self.layerindex._parse_params(up.params)
+
+ if 'desc' in params:
+ index.config['DESCRIPTION'] = unquote(params['desc'])
+ else:
+ index.config['DESCRIPTION'] = up.path
+
+ if 'cache' in params:
+ index.config['CACHE'] = params['cache']
+
+ if 'branch' in params:
+ branches = params['branch'].split(',')
+ index.config['BRANCH'] = branches
+ else:
+ branches = ['*']
+
+
+ def load_cache(path, index, branches=[]):
+ logger.debug(1, 'Loading json file %s' % path)
+ with open(path, 'rt', encoding='utf-8') as f:
+ pindex = json.load(f)
+
+ # Filter the branches on loaded files...
+ newpBranch = []
+ for branch in branches:
+ if branch != '*':
+ if 'branches' in pindex:
+ for br in pindex['branches']:
+ if br['name'] == branch:
+ newpBranch.append(br)
+ else:
+ if 'branches' in pindex:
+ for br in pindex['branches']:
+ newpBranch.append(br)
+
+ if newpBranch:
+ index.add_raw_element('branches', layerindexlib.Branch, newpBranch)
+ else:
+ logger.debug(1, 'No matching branches (%s) in index file(s)' % branches)
+ # No matching branches.. return nothing...
+ return
+
+ for (lName, lType) in [("layerItems", layerindexlib.LayerItem),
+ ("layerBranches", layerindexlib.LayerBranch),
+ ("layerDependencies", layerindexlib.LayerDependency),
+ ("recipes", layerindexlib.Recipe),
+ ("machines", layerindexlib.Machine),
+ ("distros", layerindexlib.Distro)]:
+ if lName in pindex:
+ index.add_raw_element(lName, lType, pindex[lName])
+
+
+ if not os.path.isdir(up.path):
+ load_cache(up.path, index, branches)
+ return index
+
+ logger.debug(1, 'Loading from dir %s...' % (up.path))
+ for (dirpath, _, filenames) in os.walk(up.path):
+ for filename in filenames:
+ if not filename.endswith('.json'):
+ continue
+ fpath = os.path.join(dirpath, filename)
+ load_cache(fpath, index, branches)
+
+ return index
+
+
+ def load_index_web(self, up, url, load):
+ """
+ Fetches layer information from a remote layer index.
+
+ The return value is a LayerIndexObj.
+
+ up is the parsed url to the rest api of the layer index, such as:
+ http://layers.openembedded.org/layerindex/api/
+ """
+
+ def _get_json_response(apiurl=None, username=None, password=None, retry=True):
+ assert apiurl is not None
+
+ logger.debug(1, "fetching %s" % apiurl)
+
+ up = urlparse(apiurl)
+
+ username=up.username
+ password=up.password
+
+ # Strip username/password and params
+ if up.port:
+ up_stripped = up._replace(params="", netloc="%s:%s" % (up.hostname, up.port))
+ else:
+ up_stripped = up._replace(params="", netloc=up.hostname)
+
+ res = self.layerindex._fetch_url(up_stripped.geturl(), username=username, password=password)
+
+ try:
+ parsed = json.loads(res.read().decode('utf-8'))
+ except ConnectionResetError:
+ if retry:
+ logger.debug(1, "%s: Connection reset by peer. Retrying..." % url)
+ parsed = _get_json_response(apiurl=up_stripped.geturl(), username=username, password=password, retry=False)
+ logger.debug(1, "%s: retry successful." % url)
+ else:
+ raise layerindexlib.LayerIndexFetchError('%s: Connection reset by peer. Is there a firewall blocking your connection?' % apiurl)
+
+ return parsed
+
+ index = layerindexlib.LayerIndexObj()
+
+ index.config = {}
+ index.config['TYPE'] = self.type
+ index.config['URL'] = url
+
+ params = self.layerindex._parse_params(up.params)
+
+ if 'desc' in params:
+ index.config['DESCRIPTION'] = unquote(params['desc'])
+ else:
+ index.config['DESCRIPTION'] = up.hostname
+
+ if 'cache' in params:
+ index.config['CACHE'] = params['cache']
+
+ if 'branch' in params:
+ branches = params['branch'].split(',')
+ index.config['BRANCH'] = branches
+ else:
+ branches = ['*']
+
+ try:
+ index.apilinks = _get_json_response(apiurl=url, username=up.username, password=up.password)
+ except Exception as e:
+ raise layerindexlib.LayerIndexFetchError(url, e)
+
+ # Local raw index set...
+ pindex = {}
+
+ # Load all the requested branches at the same time;
+ # a special branch of '*' means load all branches
+ filter = ""
+ if "*" not in branches:
+ filter = "?filter=name:%s" % "OR".join(branches)
+
+ logger.debug(1, "Loading %s from %s" % (branches, index.apilinks['branches']))
+
+ # The link won't include username/password, so pull it from the original url
+ pindex['branches'] = _get_json_response(index.apilinks['branches'] + filter,
+ username=up.username, password=up.password)
+ if not pindex['branches']:
+ logger.debug(1, "No valid branches (%s) found at url %s." % (branches, url))
+ return index
+ index.add_raw_element("branches", layerindexlib.Branch, pindex['branches'])
+
+ # Load all of the layerItems (these cannot easily be filtered)
+ logger.debug(1, "Loading %s from %s" % ('layerItems', index.apilinks['layerItems']))
+
+
+ # The link won't include username/password, so pull it from the original url
+ pindex['layerItems'] = _get_json_response(index.apilinks['layerItems'],
+ username=up.username, password=up.password)
+ if not pindex['layerItems']:
+ logger.debug(1, "No layers were found at url %s." % (url))
+ return index
+ index.add_raw_element("layerItems", layerindexlib.LayerItem, pindex['layerItems'])
+
+
+ # From this point on load the contents for each branch. Otherwise we
+ # could run into a timeout.
+ for branch in index.branches:
+ filter = "?filter=branch__name:%s" % index.branches[branch].name
+
+ logger.debug(1, "Loading %s from %s" % ('layerBranches', index.apilinks['layerBranches']))
+
+ # The link won't include username/password, so pull it from the original url
+ pindex['layerBranches'] = _get_json_response(index.apilinks['layerBranches'] + filter,
+ username=up.username, password=up.password)
+ if not pindex['layerBranches']:
+ logger.debug(1, "No valid layer branches (%s) found at url %s." % (branches or "*", url))
+ return index
+ index.add_raw_element("layerBranches", layerindexlib.LayerBranch, pindex['layerBranches'])
+
+
+ # Load the rest, they all have a similar format
+ # Note: the layer index has a few more items, we can add them if necessary
+ # in the future.
+ filter = "?filter=layerbranch__branch__name:%s" % index.branches[branch].name
+ for (lName, lType) in [("layerDependencies", layerindexlib.LayerDependency),
+ ("recipes", layerindexlib.Recipe),
+ ("machines", layerindexlib.Machine),
+ ("distros", layerindexlib.Distro)]:
+ if lName not in load:
+ continue
+ logger.debug(1, "Loading %s from %s" % (lName, index.apilinks[lName]))
+
+ # The link won't include username/password, so pull it from the original url
+ pindex[lName] = _get_json_response(index.apilinks[lName] + filter,
+ username=up.username, password=up.password)
+ index.add_raw_element(lName, lType, pindex[lName])
+
+ return index
+
+ def store_index(self, url, index):
+ """
+ Store layer information into a local file/dir.
+
+ There is no return value; the index is written out
+ as one or more JSON files.
+
+ url is a file:// url to a directory or file. If the path is a
+ directory, the index is split into one file per layerBranch.
+ If the path is a file (whether or not it exists), the entire
+ index is dumped into that one file.
+ """
+
+ up = urlparse(url)
+
+ if up.scheme != 'file':
+ raise layerindexlib.plugin.LayerIndexPluginUrlError(self.type, url)
+
+ logger.debug(1, "Storing to %s..." % up.path)
+
+ try:
+ layerbranches = index.layerBranches
+ except KeyError:
+ logger.error('No layerBranches to write.')
+ return
+
+
+ def filter_item(layerbranchid, objects):
+ filtered = []
+ for obj in getattr(index, objects, {}):
+ try:
+ if getattr(index, objects)[obj].layerbranch_id == layerbranchid:
+ filtered.append(getattr(index, objects)[obj]._data)
+ except AttributeError:
+ logger.debug(1, 'No obj.layerbranch_id: %s' % objects)
+ # No simple filter method, just include it...
+ try:
+ filtered.append(getattr(index, objects)[obj]._data)
+ except AttributeError:
+ logger.debug(1, 'No obj._data: %s %s' % (objects, type(obj)))
+ filtered.append(obj)
+ return filtered
+
+
+ # Write out to a single file.
+ # Filter out unnecessary items, then sort as we write for determinism
+ if not os.path.isdir(up.path):
+ pindex = {}
+
+ pindex['branches'] = []
+ pindex['layerItems'] = []
+ pindex['layerBranches'] = []
+
+ for layerbranchid in layerbranches:
+ if layerbranches[layerbranchid].branch._data not in pindex['branches']:
+ pindex['branches'].append(layerbranches[layerbranchid].branch._data)
+
+ if layerbranches[layerbranchid].layer._data not in pindex['layerItems']:
+ pindex['layerItems'].append(layerbranches[layerbranchid].layer._data)
+
+ if layerbranches[layerbranchid]._data not in pindex['layerBranches']:
+ pindex['layerBranches'].append(layerbranches[layerbranchid]._data)
+
+ for entry in index._index:
+ # Skip local items, apilinks and items already processed
+ if entry in index.config['local'] or \
+ entry == 'apilinks' or \
+ entry == 'branches' or \
+ entry == 'layerBranches' or \
+ entry == 'layerItems':
+ continue
+ if entry not in pindex:
+ pindex[entry] = []
+ pindex[entry].extend(filter_item(layerbranchid, entry))
+
+ bb.debug(1, 'Writing index to %s' % up.path)
+ with open(up.path, 'wt') as f:
+ json.dump(layerindexlib.sort_entry(pindex), f, indent=4)
+ return
+
+
+ # Write out to a directory one file per layerBranch
+ # Prepare all layer related items, to create a minimal file.
+ # We have to sort the entries as we write so they are deterministic
+ for layerbranchid in layerbranches:
+ pindex = {}
+
+ for entry in index._index:
+ # Skip local items, apilinks and items already processed
+ if entry in index.config['local'] or \
+ entry == 'apilinks' or \
+ entry == 'branches' or \
+ entry == 'layerBranches' or \
+ entry == 'layerItems':
+ continue
+ pindex[entry] = filter_item(layerbranchid, entry)
+
+ # Add the layer we're processing as the first one...
+ pindex['branches'] = [layerbranches[layerbranchid].branch._data]
+ pindex['layerItems'] = [layerbranches[layerbranchid].layer._data]
+ pindex['layerBranches'] = [layerbranches[layerbranchid]._data]
+
+ # We also need to include the layerbranch for any dependencies...
+ for layerdep in pindex['layerDependencies']:
+ layerdependency = layerindexlib.LayerDependency(index, layerdep)
+
+ layeritem = layerdependency.dependency
+ layerbranch = layerdependency.dependency_layerBranch
+
+ # We need to avoid duplicates...
+ if layeritem._data not in pindex['layerItems']:
+ pindex['layerItems'].append(layeritem._data)
+
+ if layerbranch._data not in pindex['layerBranches']:
+ pindex['layerBranches'].append(layerbranch._data)
+
+ # apply mirroring adjustments here....
+
+ fname = index.config['DESCRIPTION'] + '__' + pindex['branches'][0]['name'] + '__' + pindex['layerItems'][0]['name']
+ fname = fname.translate(str.maketrans('/ ', '__'))
+ fpath = os.path.join(up.path, fname)
+
+ bb.debug(1, 'Writing index to %s' % (fpath + '.json'))
+ with open(fpath + '.json', 'wt') as f:
+ json.dump(layerindexlib.sort_entry(pindex), f, indent=4)
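
Together, load_index_web and store_index give the round trip the tests
later in this series depend on: fetch one branch from a public index over
HTTP, dump it to local JSON, then read it back through the file:// path of
load_index. A condensed sketch using the same URL and branch as the tests,
with `d` an initialized datastore:

    import layerindexlib

    index = layerindexlib.LayerIndex(d)
    index.load_layerindex(
        'http://layers.openembedded.org/layerindex/api/;branch=sumo',
        load=['layerDependencies'])

    # A file path (rather than a directory) yields a single JSON dump;
    # a directory would be split into one file per layerBranch.
    index.store_layerindex('file:///tmp/index.json', index.indexes[0])
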
diff --git a/bitbake/lib/layerindexlib/tests/__init__.py b/bitbake/lib/layerindexlib/tests/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/bitbake/lib/layerindexlib/tests/common.py b/bitbake/lib/layerindexlib/tests/common.py
new file mode 100644
index 0000000..22a5458
--- /dev/null
+++ b/bitbake/lib/layerindexlib/tests/common.py
@@ -0,0 +1,43 @@
+# Copyright (C) 2017-2018 Wind River Systems, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+# See the GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+import unittest
+import tempfile
+import os
+import bb
+
+import logging
+
+class LayersTest(unittest.TestCase):
+
+ def setUp(self):
+ self.origdir = os.getcwd()
+ self.d = bb.data.init()
+ # At least one variable needs to be set
+ self.d.setVar('DL_DIR', os.getcwd())
+
+ if os.environ.get("BB_SKIP_NETTESTS") == "yes":
+ self.d.setVar('BB_NO_NETWORK', '1')
+
+ self.tempdir = tempfile.mkdtemp()
+ self.logger = logging.getLogger("BitBake")
+
+ def tearDown(self):
+ os.chdir(self.origdir)
+ if os.environ.get("BB_TMPDIR_NOCLEAN") == "yes":
+ print("Not cleaning up %s. Please remove manually." % self.tempdir)
+ else:
+ bb.utils.prunedir(self.tempdir)
+
diff --git a/bitbake/lib/layerindexlib/tests/cooker.py b/bitbake/lib/layerindexlib/tests/cooker.py
new file mode 100644
index 0000000..fdbf091
--- /dev/null
+++ b/bitbake/lib/layerindexlib/tests/cooker.py
@@ -0,0 +1,123 @@
+# Copyright (C) 2018 Wind River Systems, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+# See the GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+import unittest
+import tempfile
+import os
+import bb
+
+import layerindexlib
+from layerindexlib.tests.common import LayersTest
+
+import logging
+
+class LayerIndexCookerTest(LayersTest):
+
+ def setUp(self):
+ LayersTest.setUp(self)
+
+ # Note this is NOT a comprehensive test of cooker, as we can't easily
+ # configure the test data. But we can emulate the basics of the layer.conf
+ # files, so that is what we will do.
+
+ new_topdir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "testdata")
+ new_bbpath = os.path.join(new_topdir, "build")
+
+ self.d.setVar('TOPDIR', new_topdir)
+ self.d.setVar('BBPATH', new_bbpath)
+
+ self.d = bb.parse.handle("%s/conf/bblayers.conf" % new_bbpath, self.d, True)
+ for layer in self.d.getVar('BBLAYERS').split():
+ self.d = bb.parse.handle("%s/conf/layer.conf" % layer, self.d, True)
+
+ self.layerindex = layerindexlib.LayerIndex(self.d)
+ self.layerindex.load_layerindex('cooker://', load=['layerDependencies'])
+
+ def test_layerindex_is_empty(self):
+ self.assertFalse(self.layerindex.is_empty(), msg="Layerindex is empty")
+
+ def test_dependency_resolution(self):
+ # Verify depth first searching...
+ (dependencies, invalidnames) = self.layerindex.find_dependencies(names=['meta-python'])
+
+ first = True
+ for deplayerbranch in dependencies:
+ layerBranch = dependencies[deplayerbranch][0]
+ layerDeps = dependencies[deplayerbranch][1:]
+
+ if not first:
+ continue
+
+ first = False
+
+ # Top of the deps should be openembedded-core, since everything depends on it.
+ self.assertEqual(layerBranch.layer.name, "openembedded-core", msg='Top dependency not openembedded-core')
+
+ # meta-python should cause an openembedded-core dependency, if not assert!
+ for dep in layerDeps:
+ if dep.layer.name == 'meta-python':
+ break
+ else:
+ self.assertTrue(False, msg='meta-python was not found')
+
+ # Only check the first element...
+ break
+ else:
+ if first:
+ # Empty list, this is bad.
+ self.assertTrue(False, msg='Empty list of dependencies')
+
+ # Last dep should be the requested item
+ layerBranch = dependencies[deplayerbranch][0]
+ self.assertEqual(layerBranch.layer.name, "meta-python", msg='Last dependency not meta-python')
+
+ def test_find_collection(self):
+ def _check(collection, expected):
+ self.logger.debug(1, "Looking for collection %s..." % collection)
+ result = self.layerindex.find_collection(collection)
+ if expected:
+ self.assertIsNotNone(result, msg="Did not find %s when it should be there" % collection)
+ else:
+ self.assertIsNone(result, msg="Found %s when it shouldn't be there" % collection)
+
+ tests = [ ('core', True),
+ ('openembedded-core', False),
+ ('networking-layer', True),
+ ('meta-python', True),
+ ('openembedded-layer', True),
+ ('notpresent', False) ]
+
+ for collection,result in tests:
+ _check(collection, result)
+
+ def test_find_layerbranch(self):
+ def _check(name, expected):
+ self.logger.debug(1, "Looking for layerbranch %s..." % name)
+ result = self.layerindex.find_layerbranch(name)
+ if expected:
+ self.assertIsNotNone(result, msg="Did not find %s when it should be there" % name)
+ else:
+ self.assertIsNone(result, msg="Found %s when it shouldn't be there" % name)
+
+ tests = [ ('openembedded-core', True),
+ ('core', False),
+ ('networking-layer', True),
+ ('meta-python', True),
+ ('openembedded-layer', True),
+ ('notpresent', False) ]
+
+ for collection,result in tests:
+ _check(collection, result)
+
diff --git a/bitbake/lib/layerindexlib/tests/layerindexobj.py b/bitbake/lib/layerindexlib/tests/layerindexobj.py
new file mode 100644
index 0000000..e2fbb95
--- /dev/null
+++ b/bitbake/lib/layerindexlib/tests/layerindexobj.py
@@ -0,0 +1,226 @@
+# Copyright (C) 2017-2018 Wind River Systems, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+# See the GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+import unittest
+import tempfile
+import os
+import bb
+
+from layerindexlib.tests.common import LayersTest
+
+import logging
+
+class LayerIndexObjectsTest(LayersTest):
+ def setUp(self):
+ from layerindexlib import LayerIndexObj, Branch, LayerItem, LayerBranch, LayerDependency, Recipe, Machine, Distro
+
+ LayersTest.setUp(self)
+
+ self.index = LayerIndexObj()
+
+ branchId = 0
+ layerItemId = 0
+ layerBranchId = 0
+ layerDependencyId = 0
+ recipeId = 0
+ machineId = 0
+ distroId = 0
+
+ self.index.branches = {}
+ self.index.layerItems = {}
+ self.index.layerBranches = {}
+ self.index.layerDependencies = {}
+ self.index.recipes = {}
+ self.index.machines = {}
+ self.index.distros = {}
+
+ branchId += 1
+ self.index.branches[branchId] = Branch(self.index)
+ self.index.branches[branchId].define_data(branchId,
+ 'test_branch', 'bb_test_branch')
+ self.index.branches[branchId].lockData()
+
+ layerItemId +=1
+ self.index.layerItems[layerItemId] = LayerItem(self.index)
+ self.index.layerItems[layerItemId].define_data(layerItemId,
+ 'test_layerItem', vcs_url='git://git_test_url/test_layerItem')
+ self.index.layerItems[layerItemId].lockData()
+
+ layerBranchId +=1
+ self.index.layerBranches[layerBranchId] = LayerBranch(self.index)
+ self.index.layerBranches[layerBranchId].define_data(layerBranchId,
+ 'test_collection', '99', layerItemId,
+ branchId)
+
+ recipeId += 1
+ self.index.recipes[recipeId] = Recipe(self.index)
+ self.index.recipes[recipeId].define_data(recipeId, 'test_git.bb',
+ 'recipes-test', 'test', 'git',
+ layerBranchId)
+
+ machineId += 1
+ self.index.machines[machineId] = Machine(self.index)
+ self.index.machines[machineId].define_data(machineId,
+ 'test_machine', 'test_machine',
+ layerBranchId)
+
+ distroId += 1
+ self.index.distros[distroId] = Distro(self.index)
+ self.index.distros[distroId].define_data(distroId,
+ 'test_distro', 'test_distro',
+ layerBranchId)
+
+ layerItemId +=1
+ self.index.layerItems[layerItemId] = LayerItem(self.index)
+ self.index.layerItems[layerItemId].define_data(layerItemId, 'test_layerItem 2',
+ vcs_url='git://git_test_url/test_layerItem')
+
+ layerBranchId +=1
+ self.index.layerBranches[layerBranchId] = LayerBranch(self.index)
+ self.index.layerBranches[layerBranchId].define_data(layerBranchId,
+ 'test_collection_2', '72', layerItemId,
+ branchId, actual_branch='some_other_branch')
+
+ layerDependencyId += 1
+ self.index.layerDependencies[layerDependencyId] = LayerDependency(self.index)
+ self.index.layerDependencies[layerDependencyId].define_data(layerDependencyId,
+ layerBranchId, 1)
+
+ layerDependencyId += 1
+ self.index.layerDependencies[layerDependencyId] = LayerDependency(self.index)
+ self.index.layerDependencies[layerDependencyId].define_data(layerDependencyId,
+ layerBranchId, 1, required=False)
+
+ def test_branch(self):
+ branch = self.index.branches[1]
+ self.assertEqual(branch.id, 1)
+ self.assertEqual(branch.name, 'test_branch')
+ self.assertEqual(branch.short_description, 'test_branch')
+ self.assertEqual(branch.bitbake_branch, 'bb_test_branch')
+
+ def test_layerItem(self):
+ layerItem = self.index.layerItems[1]
+ self.assertEqual(layerItem.id, 1)
+ self.assertEqual(layerItem.name, 'test_layerItem')
+ self.assertEqual(layerItem.summary, 'test_layerItem')
+ self.assertEqual(layerItem.description, 'test_layerItem')
+ self.assertEqual(layerItem.vcs_url, 'git://git_test_url/test_layerItem')
+ self.assertEqual(layerItem.vcs_web_url, None)
+ self.assertIsNone(layerItem.vcs_web_tree_base_url)
+ self.assertIsNone(layerItem.vcs_web_file_base_url)
+ self.assertIsNotNone(layerItem.updated)
+
+ layerItem = self.index.layerItems[2]
+ self.assertEqual(layerItem.id, 2)
+ self.assertEqual(layerItem.name, 'test_layerItem 2')
+ self.assertEqual(layerItem.summary, 'test_layerItem 2')
+ self.assertEqual(layerItem.description, 'test_layerItem 2')
+ self.assertEqual(layerItem.vcs_url, 'git://git_test_url/test_layerItem')
+ self.assertIsNone(layerItem.vcs_web_url)
+ self.assertIsNone(layerItem.vcs_web_tree_base_url)
+ self.assertIsNone(layerItem.vcs_web_file_base_url)
+ self.assertIsNotNone(layerItem.updated)
+
+ def test_layerBranch(self):
+ layerBranch = self.index.layerBranches[1]
+ self.assertEqual(layerBranch.id, 1)
+ self.assertEqual(layerBranch.collection, 'test_collection')
+ self.assertEqual(layerBranch.version, '99')
+ self.assertEqual(layerBranch.vcs_subdir, '')
+ self.assertEqual(layerBranch.actual_branch, 'test_branch')
+ self.assertIsNotNone(layerBranch.updated)
+ self.assertEqual(layerBranch.layer_id, 1)
+ self.assertEqual(layerBranch.branch_id, 1)
+ self.assertEqual(layerBranch.layer, self.index.layerItems[1])
+ self.assertEqual(layerBranch.branch, self.index.branches[1])
+
+ layerBranch = self.index.layerBranches[2]
+ self.assertEqual(layerBranch.id, 2)
+ self.assertEqual(layerBranch.collection, 'test_collection_2')
+ self.assertEqual(layerBranch.version, '72')
+ self.assertEqual(layerBranch.vcs_subdir, '')
+ self.assertEqual(layerBranch.actual_branch, 'some_other_branch')
+ self.assertIsNotNone(layerBranch.updated)
+ self.assertEqual(layerBranch.layer_id, 2)
+ self.assertEqual(layerBranch.branch_id, 1)
+ self.assertEqual(layerBranch.layer, self.index.layerItems[2])
+ self.assertEqual(layerBranch.branch, self.index.branches[1])
+
+ def test_layerDependency(self):
+ layerDependency = self.index.layerDependencies[1]
+ self.assertEqual(layerDependency.id, 1)
+ self.assertEqual(layerDependency.layerbranch_id, 2)
+ self.assertEqual(layerDependency.layerbranch, self.index.layerBranches[2])
+ self.assertEqual(layerDependency.layer_id, 2)
+ self.assertEqual(layerDependency.layer, self.index.layerItems[2])
+ self.assertTrue(layerDependency.required)
+ self.assertEqual(layerDependency.dependency_id, 1)
+ self.assertEqual(layerDependency.dependency, self.index.layerItems[1])
+ self.assertEqual(layerDependency.dependency_layerBranch, self.index.layerBranches[1])
+
+ layerDependency = self.index.layerDependencies[2]
+ self.assertEqual(layerDependency.id, 2)
+ self.assertEqual(layerDependency.layerbranch_id, 2)
+ self.assertEqual(layerDependency.layerbranch, self.index.layerBranches[2])
+ self.assertEqual(layerDependency.layer_id, 2)
+ self.assertEqual(layerDependency.layer, self.index.layerItems[2])
+ self.assertFalse(layerDependency.required)
+ self.assertEqual(layerDependency.dependency_id, 1)
+ self.assertEqual(layerDependency.dependency, self.index.layerItems[1])
+ self.assertEqual(layerDependency.dependency_layerBranch, self.index.layerBranches[1])
+
+ def test_recipe(self):
+ recipe = self.index.recipes[1]
+ self.assertEqual(recipe.id, 1)
+ self.assertEqual(recipe.layerbranch_id, 1)
+ self.assertEqual(recipe.layerbranch, self.index.layerBranches[1])
+ self.assertEqual(recipe.layer_id, 1)
+ self.assertEqual(recipe.layer, self.index.layerItems[1])
+ self.assertEqual(recipe.filename, 'test_git.bb')
+ self.assertEqual(recipe.filepath, 'recipes-test')
+ self.assertEqual(recipe.fullpath, 'recipes-test/test_git.bb')
+ self.assertEqual(recipe.summary, "")
+ self.assertEqual(recipe.description, "")
+ self.assertEqual(recipe.section, "")
+ self.assertEqual(recipe.pn, 'test')
+ self.assertEqual(recipe.pv, 'git')
+ self.assertEqual(recipe.license, "")
+ self.assertEqual(recipe.homepage, "")
+ self.assertEqual(recipe.bugtracker, "")
+ self.assertEqual(recipe.provides, "")
+ self.assertIsNotNone(recipe.updated)
+ self.assertEqual(recipe.inherits, "")
+
+ def test_machine(self):
+ machine = self.index.machines[1]
+ self.assertEqual(machine.id, 1)
+ self.assertEqual(machine.layerbranch_id, 1)
+ self.assertEqual(machine.layerbranch, self.index.layerBranches[1])
+ self.assertEqual(machine.layer_id, 1)
+ self.assertEqual(machine.layer, self.index.layerItems[1])
+ self.assertEqual(machine.name, 'test_machine')
+ self.assertEqual(machine.description, 'test_machine')
+ self.assertIsNotNone(machine.updated)
+
+ def test_distro(self):
+ distro = self.index.distros[1]
+ self.assertEqual(distro.id, 1)
+ self.assertEqual(distro.layerbranch_id, 1)
+ self.assertEqual(distro.layerbranch, self.index.layerBranches[1])
+ self.assertEqual(distro.layer_id, 1)
+ self.assertEqual(distro.layer, self.index.layerItems[1])
+ self.assertEqual(distro.name, 'test_distro')
+ self.assertEqual(distro.description, 'test_distro')
+ self.assertIsNotNone(distro.updated)
diff --git a/bitbake/lib/layerindexlib/tests/restapi.py b/bitbake/lib/layerindexlib/tests/restapi.py
new file mode 100644
index 0000000..5876695
--- /dev/null
+++ b/bitbake/lib/layerindexlib/tests/restapi.py
@@ -0,0 +1,184 @@
+# Copyright (C) 2017-2018 Wind River Systems, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+# See the GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+import unittest
+import tempfile
+import os
+import bb
+
+import layerindexlib
+from layerindexlib.tests.common import LayersTest
+
+import logging
+
+def skipIfNoNetwork():
+ if os.environ.get("BB_SKIP_NETTESTS") == "yes":
+ return unittest.skip("Network tests being skipped")
+ return lambda f: f
+
+class LayerIndexWebRestApiTest(LayersTest):
+
+ @skipIfNoNetwork()
+ def setUp(self):
+ self.assertFalse(os.environ.get("BB_SKIP_NETTESTS") == "yes", msg="BB_SKIP_NETTESTS set, but we tried to test anyway")
+ LayersTest.setUp(self)
+ self.layerindex = layerindexlib.LayerIndex(self.d)
+ self.layerindex.load_layerindex('http://layers.openembedded.org/layerindex/api/;branch=sumo', load=['layerDependencies'])
+
+ @skipIfNoNetwork()
+ def test_layerindex_is_empty(self):
+ self.assertFalse(self.layerindex.is_empty(), msg="Layerindex is empty")
+
+ @skipIfNoNetwork()
+ def test_layerindex_store_file(self):
+ self.layerindex.store_layerindex('file://%s/file.json' % self.tempdir, self.layerindex.indexes[0])
+
+ self.assertTrue(os.path.isfile('%s/file.json' % self.tempdir), msg="Temporary file was not created by store_layerindex")
+
+ reload = layerindexlib.LayerIndex(self.d)
+ reload.load_layerindex('file://%s/file.json' % self.tempdir)
+
+ self.assertFalse(reload.is_empty(), msg="Layerindex is empty")
+
+ # Calculate layerItems in original index that should NOT be in reload
+ layerItemNames = []
+ for itemId in self.layerindex.indexes[0].layerItems:
+ layerItemNames.append(self.layerindex.indexes[0].layerItems[itemId].name)
+
+ for layerBranchId in self.layerindex.indexes[0].layerBranches:
+ layerItemNames.remove(self.layerindex.indexes[0].layerBranches[layerBranchId].layer.name)
+
+ for itemId in reload.indexes[0].layerItems:
+ self.assertFalse(reload.indexes[0].layerItems[itemId].name in layerItemNames, msg="Item reloaded when it shouldn't have been")
+
+ # Compare the original to what we wrote...
+ for type in self.layerindex.indexes[0]._index:
+ if type == 'apilinks' or \
+ type == 'layerItems' or \
+ type in self.layerindex.indexes[0].config['local']:
+ continue
+ for id in getattr(self.layerindex.indexes[0], type):
+ self.logger.debug(1, "type %s" % (type))
+
+ self.assertTrue(id in getattr(reload.indexes[0], type), msg="Id number not in reloaded index")
+
+ self.logger.debug(1, "%s ? %s" % (getattr(self.layerindex.indexes[0], type)[id], getattr(reload.indexes[0], type)[id]))
+
+ self.assertEqual(getattr(self.layerindex.indexes[0], type)[id], getattr(reload.indexes[0], type)[id], msg="Reloaded contents different")
+
+ @skipIfNoNetwork()
+ def test_layerindex_store_split(self):
+ self.layerindex.store_layerindex('file://%s' % self.tempdir, self.layerindex.indexes[0])
+
+ reload = layerindexlib.LayerIndex(self.d)
+ reload.load_layerindex('file://%s' % self.tempdir)
+
+ self.assertFalse(reload.is_empty(), msg="Layer index is empty")
+
+ for type in self.layerindex.indexes[0]._index:
+ if type == 'apilinks' or \
+ type == 'layerItems' or \
+ type in self.layerindex.indexes[0].config['local']:
+ continue
+ for id in getattr(self.layerindex.indexes[0] ,type):
+ self.logger.debug(1, "type %s" % (type))
+
+ self.assertTrue(id in getattr(reload.indexes[0], type), msg="Id number missing from reloaded data")
+
+ self.logger.debug(1, "%s ? %s" % (getattr(self.layerindex.indexes[0] ,type)[id], getattr(reload.indexes[0], type)[id]))
+
+ self.assertEqual(getattr(self.layerindex.indexes[0] ,type)[id], getattr(reload.indexes[0], type)[id], msg="reloaded data does not match original")
+
+ @skipIfNoNetwork()
+ def test_dependency_resolution(self):
+ # Verify depth first searching...
+ (dependencies, invalidnames) = self.layerindex.find_dependencies(names=['meta-python'])
+
+ first = True
+ for deplayerbranch in dependencies:
+ layerBranch = dependencies[deplayerbranch][0]
+ layerDeps = dependencies[deplayerbranch][1:]
+
+ if not first:
+ continue
+
+ first = False
+
+ # Top of the deps should be openembedded-core, since everything depends on it.
+ self.assertEqual(layerBranch.layer.name, "openembedded-core", msg='OpenEmbedded-Core is not the first dependency')
+
+ # meta-python should cause an openembedded-core dependency, if not assert!
+ for dep in layerDeps:
+ if dep.layer.name == 'meta-python':
+ break
+ else:
+ self.logger.debug(1, "meta-python was not found")
+ self.assertTrue(False, msg="meta-python was not found")
+
+ # Only check the first element...
+ break
+ else:
+ # Empty list, this is bad.
+ self.logger.debug(1, "Empty list of dependencies")
+ self.assertFalse(first, msg="Empty list of dependencies")
+
+ # Last dep should be the requested item
+ layerBranch = dependencies[deplayerbranch][0]
+ self.assertEqual(layerBranch.layer.name, "meta-python", msg="Last dependency not meta-python")
+
+ @skipIfNoNetwork()
+ def test_find_collection(self):
+ def _check(collection, expected):
+ self.logger.debug(1, "Looking for collection %s..." % collection)
+ result = self.layerindex.find_collection(collection)
+ if expected:
+ self.assertIsNotNone(result, msg="Did not find %s when it should be there" % collection)
+ else:
+ self.assertIsNone(result, msg="Found %s when it shouldn't be there" % collection)
+
+ tests = [ ('core', True),
+ ('openembedded-core', False),
+ ('networking-layer', True),
+ ('meta-python', True),
+ ('openembedded-layer', True),
+ ('notpresent', False) ]
+
+ for collection,result in tests:
+ _check(collection, result)
+
+ @skipIfNoNetwork()
+ def test_find_layerbranch(self):
+ def _check(name, expected):
+ self.logger.debug(1, "Looking for layerbranch %s..." % name)
+
+ for index in self.layerindex.indexes:
+ for layerbranchid in index.layerBranches:
+ self.logger.debug(1, "Present: %s" % index.layerBranches[layerbranchid].layer.name)
+ result = self.layerindex.find_layerbranch(name)
+ if expected:
+ self.assertIsNotNone(result, msg="Did not find %s when it should be there" % name)
+ else:
+ self.assertIsNone(result, msg="Found %s when it shouldn't be there" % name)
+
+ tests = [ ('openembedded-core', True),
+ ('core', False),
+ ('meta-networking', True),
+ ('meta-python', True),
+ ('meta-oe', True),
+ ('notpresent', False) ]
+
+ for collection,result in tests:
+ _check(collection, result)
+
diff --git a/bitbake/lib/layerindexlib/tests/testdata/README b/bitbake/lib/layerindexlib/tests/testdata/README
new file mode 100644
index 0000000..36ab40b
--- /dev/null
+++ b/bitbake/lib/layerindexlib/tests/testdata/README
@@ -0,0 +1,11 @@
+This test data is used to verify the 'cooker' module of the layerindex.
+
+The test data consists of a faux project's bblayers.conf with four layers defined.
+
+layer1 - openembedded-core
+layer2 - networking-layer
+layer3 - meta-python
+layer4 - openembedded-layer (meta-oe)
+
+Since we do not have a fully populated cooker, we use this data to test
+basic index generation, not any deep recipe-based contents.
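
The cooker tests earlier in this series drive this tree by pointing the
datastore at it and handling each configuration file in turn, roughly (a
condensed restatement of their setUp; the '...' path stands in for this
directory):

    new_topdir = '.../layerindexlib/tests/testdata'
    d.setVar('TOPDIR', new_topdir)
    d.setVar('BBPATH', os.path.join(new_topdir, 'build'))
    d = bb.parse.handle('%s/conf/bblayers.conf' % d.getVar('BBPATH'), d, True)
    for layer in d.getVar('BBLAYERS').split():
        d = bb.parse.handle('%s/conf/layer.conf' % layer, d, True)
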
diff --git a/bitbake/lib/layerindexlib/tests/testdata/build/conf/bblayers.conf b/bitbake/lib/layerindexlib/tests/testdata/build/conf/bblayers.conf
new file mode 100644
index 0000000..40429b2
--- /dev/null
+++ b/bitbake/lib/layerindexlib/tests/testdata/build/conf/bblayers.conf
@@ -0,0 +1,15 @@
+LAYERSERIES_CORENAMES = "sumo"
+
+# LCONF_VERSION is increased each time build/conf/bblayers.conf
+# changes incompatibly
+LCONF_VERSION = "7"
+
+BBPATH = "${TOPDIR}"
+BBFILES ?= ""
+
+BBLAYERS ?= " \
+ ${TOPDIR}/layer1 \
+ ${TOPDIR}/layer2 \
+ ${TOPDIR}/layer3 \
+ ${TOPDIR}/layer4 \
+ "
diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf
new file mode 100644
index 0000000..966d531
--- /dev/null
+++ b/bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf
@@ -0,0 +1,17 @@
+# We have a conf and classes directory, add to BBPATH
+BBPATH .= ":${LAYERDIR}"
+# We have recipes-* directories, add to BBFILES
+BBFILES += "${LAYERDIR}/recipes-*/*/*.bb"
+
+BBFILE_COLLECTIONS += "core"
+BBFILE_PATTERN_core = "^${LAYERDIR}/"
+BBFILE_PRIORITY_core = "5"
+
+LAYERSERIES_CORENAMES = "sumo"
+
+# This should only be incremented on significant changes that will
+# cause compatibility issues with other layers
+LAYERVERSION_core = "11"
+LAYERSERIES_COMPAT_core = "sumo"
+
+BBLAYERS_LAYERINDEX_NAME_core = "openembedded-core"
diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf
new file mode 100644
index 0000000..7569d1c
--- /dev/null
+++ b/bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf
@@ -0,0 +1,20 @@
+# We have a conf and classes directory, add to BBPATH
+BBPATH .= ":${LAYERDIR}"
+
+# We have a packages directory, add to BBFILES
+BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
+ ${LAYERDIR}/recipes-*/*/*.bbappend"
+
+BBFILE_COLLECTIONS += "networking-layer"
+BBFILE_PATTERN_networking-layer := "^${LAYERDIR}/"
+BBFILE_PRIORITY_networking-layer = "5"
+
+# This should only be incremented on significant changes that will
+# cause compatibility issues with other layers
+LAYERVERSION_networking-layer = "1"
+
+LAYERDEPENDS_networking-layer = "core"
+LAYERDEPENDS_networking-layer += "openembedded-layer"
+LAYERDEPENDS_networking-layer += "meta-python"
+
+LAYERSERIES_COMPAT_networking-layer = "sumo"
diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf
new file mode 100644
index 0000000..7089071
--- /dev/null
+++ b/bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf
@@ -0,0 +1,19 @@
+# We might have a conf and classes directory, append to BBPATH
+BBPATH .= ":${LAYERDIR}"
+
+# We have recipes directories, add to BBFILES
+BBFILES += "${LAYERDIR}/recipes*/*/*.bb ${LAYERDIR}/recipes*/*/*.bbappend"
+
+BBFILE_COLLECTIONS += "meta-python"
+BBFILE_PATTERN_meta-python := "^${LAYERDIR}/"
+BBFILE_PRIORITY_meta-python = "7"
+
+# This should only be incremented on significant changes that will
+# cause compatibility issues with other layers
+LAYERVERSION_meta-python = "1"
+
+LAYERDEPENDS_meta-python = "core openembedded-layer"
+
+LAYERSERIES_COMPAT_meta-python = "sumo"
+
+LICENSE_PATH += "${LAYERDIR}/licenses"
diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf
new file mode 100644
index 0000000..6649ee0
--- /dev/null
+++ b/bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf
@@ -0,0 +1,22 @@
+# We have a conf and classes directory, append to BBPATH
+BBPATH .= ":${LAYERDIR}"
+
+# We have a recipes directory, add to BBFILES
+BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"
+
+BBFILE_COLLECTIONS += "openembedded-layer"
+BBFILE_PATTERN_openembedded-layer := "^${LAYERDIR}/"
+
+# Define the priority for recipes (.bb files) from this layer,
+# choosing carefully how this layer interacts with all of the
+# other layers.
+
+BBFILE_PRIORITY_openembedded-layer = "6"
+
+# This should only be incremented on significant changes that will
+# cause compatibility issues with other layers
+LAYERVERSION_openembedded-layer = "1"
+
+LAYERDEPENDS_openembedded-layer = "core"
+
+LAYERSERIES_COMPAT_openembedded-layer = "sumo"
diff --git a/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py b/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py
index 4c17562..9490635 100644
--- a/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py
+++ b/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py
@@ -27,8 +27,9 @@ import shutil
import time
from django.db import transaction
from django.db.models import Q
-from bldcontrol.models import BuildEnvironment, BRLayer, BRVariable, BRTarget, BRBitbake
-from orm.models import CustomImageRecipe, Layer, Layer_Version, ProjectLayer, ToasterSetting
+from bldcontrol.models import BuildEnvironment, BuildRequest, BRLayer, BRVariable, BRTarget, BRBitbake, Build
+from orm.models import CustomImageRecipe, Layer, Layer_Version, Project, ProjectLayer, ToasterSetting
+from orm.models import signal_runbuilds
import subprocess
from toastermain import settings
@@ -38,6 +39,8 @@ from bldcontrol.bbcontroller import BuildEnvironmentController, ShellCmdExceptio
import logging
logger = logging.getLogger("toaster")
+install_dir = os.environ.get('TOASTER_DIR')
+
from pprint import pprint, pformat
class LocalhostBEController(BuildEnvironmentController):
@@ -87,10 +90,10 @@ class LocalhostBEController(BuildEnvironmentController):
#logger.debug("localhostbecontroller: using HEAD checkout in %s" % local_checkout_path)
return local_checkout_path
-
- def setCloneStatus(self,bitbake,status,total,current):
+ def setCloneStatus(self,bitbake,status,total,current,repo_name):
bitbake.req.build.repos_cloned=current
bitbake.req.build.repos_to_clone=total
+ bitbake.req.build.progress_item=repo_name
bitbake.req.build.save()
def setLayers(self, bitbake, layers, targets):
@@ -100,6 +103,7 @@ class LocalhostBEController(BuildEnvironmentController):
layerlist = []
nongitlayerlist = []
+ layer_index = 0
git_env = os.environ.copy()
# (note: add custom environment settings here)
@@ -113,7 +117,7 @@ class LocalhostBEController(BuildEnvironmentController):
if bitbake.giturl and bitbake.commit:
gitrepos[(bitbake.giturl, bitbake.commit)] = []
gitrepos[(bitbake.giturl, bitbake.commit)].append(
- ("bitbake", bitbake.dirpath))
+ ("bitbake", bitbake.dirpath, 0))
for layer in layers:
# We don't need to git clone the layer for the CustomImageRecipe
@@ -124,12 +128,13 @@ class LocalhostBEController(BuildEnvironmentController):
# If we have local layers then we don't need clone them
# For local layers giturl will be empty
if not layer.giturl:
- nongitlayerlist.append(layer.layer_version.layer.local_source_dir)
+ nongitlayerlist.append( "%03d:%s" % (layer_index,layer.local_source_dir) )
continue
if not (layer.giturl, layer.commit) in gitrepos:
gitrepos[(layer.giturl, layer.commit)] = []
- gitrepos[(layer.giturl, layer.commit)].append( (layer.name, layer.dirpath) )
+ gitrepos[(layer.giturl, layer.commit)].append( (layer.name,layer.dirpath,layer_index) )
+ layer_index += 1
logger.debug("localhostbecontroller, our git repos are %s" % pformat(gitrepos))
@@ -159,9 +164,9 @@ class LocalhostBEController(BuildEnvironmentController):
# 3. checkout the repositories
clone_count=0
clone_total=len(gitrepos.keys())
- self.setCloneStatus(bitbake,'Started',clone_total,clone_count)
+ self.setCloneStatus(bitbake,'Started',clone_total,clone_count,'')
for giturl, commit in gitrepos.keys():
- self.setCloneStatus(bitbake,'progress',clone_total,clone_count)
+ self.setCloneStatus(bitbake,'progress',clone_total,clone_count,gitrepos[(giturl, commit)][0][0])
clone_count += 1
localdirname = os.path.join(self.be.sourcedir, self.getGitCloneDirectory(giturl, commit))
@@ -172,8 +177,11 @@ class LocalhostBEController(BuildEnvironmentController):
try:
localremotes = self._shellcmd("git remote -v",
localdirname,env=git_env)
- if not giturl in localremotes and commit != 'HEAD':
- raise BuildSetupException("Existing git repository at %s, but with different remotes ('%s', expected '%s'). Toaster will not continue out of fear of damaging something." % (localdirname, ", ".join(localremotes.split("\n")), giturl))
+ # NOTE: this nice-to-have check breaks when using git remapping to get past a firewall
+ # Re-enable later with .gitconfig remapping checks
+ #if not giturl in localremotes and commit != 'HEAD':
+ # raise BuildSetupException("Existing git repository at %s, but with different remotes ('%s', expected '%s'). Toaster will not continue out of fear of damaging something." % (localdirname, ", ".join(localremotes.split("\n")), giturl))
+ pass
except ShellCmdException:
# our localdirname might not be a git repository
#- that's fine
@@ -192,7 +200,7 @@ class LocalhostBEController(BuildEnvironmentController):
if commit != "HEAD":
logger.debug("localhostbecontroller: checking out commit %s to %s " % (commit, localdirname))
ref = commit if re.match('^[a-fA-F0-9]+$', commit) else 'origin/%s' % commit
- self._shellcmd('git fetch --all && git reset --hard "%s"' % ref, localdirname,env=git_env)
+ self._shellcmd('git fetch && git reset --hard "%s"' % ref, localdirname,env=git_env)
# take the localdirname as poky dir if we can find the oe-init-build-env
if self.pokydirname is None and os.path.exists(os.path.join(localdirname, "oe-init-build-env")):
@@ -205,21 +213,33 @@ class LocalhostBEController(BuildEnvironmentController):
self._shellcmd("git clone -b \"%s\" \"%s\" \"%s\" " % (bitbake.commit, bitbake.giturl, os.path.join(self.pokydirname, 'bitbake')),env=git_env)
# verify our repositories
- for name, dirpath in gitrepos[(giturl, commit)]:
+ for name, dirpath, index in gitrepos[(giturl, commit)]:
localdirpath = os.path.join(localdirname, dirpath)
- logger.debug("localhostbecontroller: localdirpath expected '%s'" % localdirpath)
+ logger.debug("localhostbecontroller: localdirpath expects '%s'" % localdirpath)
if not os.path.exists(localdirpath):
raise BuildSetupException("Cannot find layer git path '%s' in checked out repository '%s:%s'. Aborting." % (localdirpath, giturl, commit))
if name != "bitbake":
- layerlist.append(localdirpath.rstrip("/"))
+ layerlist.append("%03d:%s" % (index,localdirpath.rstrip("/")))
- self.setCloneStatus(bitbake,'complete',clone_total,clone_count)
+ self.setCloneStatus(bitbake,'complete',clone_total,clone_count,'')
logger.debug("localhostbecontroller: current layer list %s " % pformat(layerlist))
- if self.pokydirname is None and os.path.exists(os.path.join(self.be.sourcedir, "oe-init-build-env")):
- logger.debug("localhostbecontroller: selected poky dir name %s" % self.be.sourcedir)
- self.pokydirname = self.be.sourcedir
+ # Resolve self.pokydirname if it is not set yet; the else clause below
+ # covers the scenario where all layers are local
+ if self.pokydirname is None:
+ if os.path.exists(os.path.join(self.be.sourcedir, "oe-init-build-env")):
+ logger.debug("localhostbecontroller: selected poky dir name %s" % self.be.sourcedir)
+ self.pokydirname = self.be.sourcedir
+ else:
+ # Alternatively, scan local layers for relative "oe-init-build-env" location
+ for layer in layers:
+ if os.path.exists(os.path.join(layer.layer_version.layer.local_source_dir,"..","oe-init-build-env")):
+ logger.debug("localhostbecontroller, setting pokydirname to %s" % (layer.layer_version.layer.local_source_dir))
+ self.pokydirname = os.path.join(layer.layer_version.layer.local_source_dir,"..")
+ break
+ else:
+ logger.error("pokydirname is not set, you will run into trouble!")
# 5. create custom layer and add custom recipes to it
for target in targets:
@@ -232,7 +252,7 @@ class LocalhostBEController(BuildEnvironmentController):
customrecipe, layers)
if os.path.isdir(custom_layer_path):
- layerlist.append(custom_layer_path)
+ layerlist.append("%03d:%s" % (layer_index,custom_layer_path))
except CustomImageRecipe.DoesNotExist:
continue # not a custom recipe, skip
@@ -240,7 +260,11 @@ class LocalhostBEController(BuildEnvironmentController):
layerlist.extend(nongitlayerlist)
logger.debug("\n\nset layers gives this list %s" % pformat(layerlist))
self.islayerset = True
- return layerlist
+
+ # restore the order of layer list for bblayers.conf
+ layerlist.sort()
+ sorted_layerlist = [l[4:] for l in layerlist]
+ return sorted_layerlist
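
For reference, the ordering trick used above: every layer path is tagged with a zero-padded "%03d:" prefix ("000:", "001:", ...), so a plain lexical sort restores the original layer order across git, local and custom layers, and the 4-character prefix is then stripped for bblayers.conf. A minimal standalone sketch of the idea (paths are illustrative):

    layerlist = ["002:/src/meta-b", "000:/src/poky/meta", "001:/src/meta-a"]
    layerlist.sort()
    ordered = [l[4:] for l in layerlist]  # drop "NNN:" (3 digits + colon)
    # ordered == ["/src/poky/meta", "/src/meta-a", "/src/meta-b"]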
def setup_custom_image_recipe(self, customrecipe, layers):
""" Set up toaster-custom-images layer and recipe files """
@@ -310,41 +334,141 @@ class LocalhostBEController(BuildEnvironmentController):
def triggerBuild(self, bitbake, layers, variables, targets, brbe):
layers = self.setLayers(bitbake, layers, targets)
+ is_merged_attr = bitbake.req.project.merged_attr
+
+ git_env = os.environ.copy()
+ # (note: add custom environment settings here)
+ # ensure that the project init/build uses the selected bitbake, and not Toaster's;
+ # pop() with a default so one missing key does not leave the others set
+ git_env.pop('TEMPLATECONF', None)
+ git_env.pop('BBBASEDIR', None)
+ git_env.pop('BUILDDIR', None)
# init build environment from the clone
- builddir = '%s-toaster-%d' % (self.be.builddir, bitbake.req.project.id)
+ if bitbake.req.project.builddir:
+ builddir = bitbake.req.project.builddir
+ else:
+ builddir = '%s-toaster-%d' % (self.be.builddir, bitbake.req.project.id)
oe_init = os.path.join(self.pokydirname, 'oe-init-build-env')
# init build environment
try:
custom_script = ToasterSetting.objects.get(name="CUSTOM_BUILD_INIT_SCRIPT").value
custom_script = custom_script.replace("%BUILDDIR%" ,builddir)
- self._shellcmd("bash -c 'source %s'" % (custom_script))
+ self._shellcmd("bash -c 'source %s'" % (custom_script),env=git_env)
except ToasterSetting.DoesNotExist:
self._shellcmd("bash -c 'source %s %s'" % (oe_init, builddir),
- self.be.sourcedir)
+ self.be.sourcedir,env=git_env)
# update bblayers.conf
- bblconfpath = os.path.join(builddir, "conf/toaster-bblayers.conf")
- with open(bblconfpath, 'w') as bblayers:
- bblayers.write('# line added by toaster build control\n'
- 'BBLAYERS = "%s"' % ' '.join(layers))
-
- # write configuration file
- confpath = os.path.join(builddir, 'conf/toaster.conf')
- with open(confpath, 'w') as conf:
- for var in variables:
- conf.write('%s="%s"\n' % (var.name, var.value))
- conf.write('INHERIT+="toaster buildhistory"')
+ if not is_merged_attr:
+ bblconfpath = os.path.join(builddir, "conf/toaster-bblayers.conf")
+ with open(bblconfpath, 'w') as bblayers:
+ bblayers.write('# line added by toaster build control\n'
+ 'BBLAYERS = "%s"' % ' '.join(layers))
+
+ # write configuration file
+ confpath = os.path.join(builddir, 'conf/toaster.conf')
+ with open(confpath, 'w') as conf:
+ for var in variables:
+ conf.write('%s="%s"\n' % (var.name, var.value))
+ conf.write('INHERIT+="toaster buildhistory"')
+ else:
+ # Append the Toaster-specific values directly to the bblayers.conf
+ bblconfpath = os.path.join(builddir, "conf/bblayers.conf")
+ bblconfpath_save = os.path.join(builddir, "conf/bblayers.conf.save")
+ shutil.copyfile(bblconfpath, bblconfpath_save)
+ with open(bblconfpath) as bblayers:
+ content = bblayers.readlines()
+ do_write = True
+ was_toaster = False
+ with open(bblconfpath,'w') as bblayers:
+ for line in content:
+ if 'TOASTER_CONFIG_PROLOG' in line:
+ do_write = False
+ was_toaster = True
+ elif 'TOASTER_CONFIG_EPILOG' in line:
+ do_write = True
+ elif do_write:
+ bblayers.write(line)
+ if not was_toaster:
+ bblayers.write('\n')
+ bblayers.write('#=== TOASTER_CONFIG_PROLOG ===\n')
+ bblayers.write('BBLAYERS = "\\\n')
+ for layer in layers:
+ bblayers.write(' %s \\\n' % layer)
+ bblayers.write(' "\n')
+ bblayers.write('#=== TOASTER_CONFIG_EPILOG ===\n')
+ # Append the Toaster-specific values directly to the local.conf
+ bbconfpath = os.path.join(builddir, "conf/local.conf")
+ bbconfpath_save = os.path.join(builddir, "conf/local.conf.save")
+ shutil.copyfile(bbconfpath, bbconfpath_save)
+ with open(bbconfpath) as f:
+ content = f.readlines()
+ do_write = True
+ was_toaster = False
+ with open(bbconfpath,'w') as conf:
+ for line in content:
+ if 'TOASTER_CONFIG_PROLOG' in line:
+ do_write = False
+ was_toaster = True
+ elif 'TOASTER_CONFIG_EPILOG' in line:
+ do_write = True
+ elif do_write:
+ conf.write(line)
+ if not was_toaster:
+ conf.write('\n')
+ conf.write('#=== TOASTER_CONFIG_PROLOG ===\n')
+ for var in variables:
+ if (not var.name.startswith("INTERNAL_")) and (not var.name == "BBLAYERS"):
+ conf.write('%s="%s"\n' % (var.name, var.value))
+ conf.write('#=== TOASTER_CONFIG_EPILOG ===\n')
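
The merged_attr branch above performs the same marker-based rewrite on both conf files: everything between the existing TOASTER_CONFIG_PROLOG/EPILOG markers is dropped while copying, then a fresh Toaster-managed block is appended. A hedged sketch of that rewrite in isolation (the helper name is illustrative):

    def replace_toaster_block(path, new_lines):
        # keep everything outside the old '#=== TOASTER_CONFIG_* ===' markers
        with open(path) as f:
            content = f.readlines()
        keep, do_write = [], True
        for line in content:
            if 'TOASTER_CONFIG_PROLOG' in line:
                do_write = False
            elif 'TOASTER_CONFIG_EPILOG' in line:
                do_write = True
            elif do_write:
                keep.append(line)
        with open(path, 'w') as f:
            f.writelines(keep)
            f.write('#=== TOASTER_CONFIG_PROLOG ===\n')
            f.writelines(new_lines)
            f.write('#=== TOASTER_CONFIG_EPILOG ===\n')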
+
+ # If 'target' is just the project preparation target, then we are done
+ for target in targets:
+ if "_PROJECT_PREPARE_" == target.target:
+ logger.debug('localhostbecontroller: Project has been prepared. Done.')
+ # Update the Build Request and release the build environment
+ bitbake.req.state = BuildRequest.REQ_COMPLETED
+ bitbake.req.save()
+ self.be.lock = BuildEnvironment.LOCK_FREE
+ self.be.save()
+ # Close the project build and progress bar
+ bitbake.req.build.outcome = Build.SUCCEEDED
+ bitbake.req.build.save()
+ # Update the project status
+ bitbake.req.project.set_variable(Project.PROJECT_SPECIFIC_STATUS,Project.PROJECT_SPECIFIC_CLONING_SUCCESS)
+ signal_runbuilds()
+ return
# clean the Toaster to build environment
env_clean = 'unset BBPATH;' # clean BBPATH for <= YP-2.4.0
- # run bitbake server from the clone
+ # run bitbake server from the clone if available
+ # otherwise pick it from the PATH
bitbake = os.path.join(self.pokydirname, 'bitbake', 'bin', 'bitbake')
+ if not os.path.exists(bitbake):
+ logger.info("Bitbake not available under %s, will try to use it from PATH" %
+ self.pokydirname)
+ for path in os.environ["PATH"].split(os.pathsep):
+ if os.path.exists(os.path.join(path, 'bitbake')):
+ bitbake = os.path.join(path, 'bitbake')
+ break
+ else:
+ logger.error("Looks like Bitbake is not available, please fix your environment")
+
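
The manual PATH walk above is essentially shutil.which(); a hedged equivalent sketch (pokydirname is illustrative here, and like the patch it only logs an error and continues when nothing is found):

    import logging
    import os
    import shutil

    logger = logging.getLogger("toaster")

    bitbake = os.path.join(pokydirname, 'bitbake', 'bin', 'bitbake')
    if not os.path.exists(bitbake):
        found = shutil.which('bitbake')  # same as walking os.environ['PATH'] by hand
        if found:
            bitbake = found
        else:
            logger.error("Looks like Bitbake is not available, please fix your environment")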
toasterlayers = os.path.join(builddir,"conf/toaster-bblayers.conf")
- self._shellcmd('%s bash -c \"source %s %s; BITBAKE_UI="knotty" %s --read %s --read %s '
- '--server-only -B 0.0.0.0:0\"' % (env_clean, oe_init,
- builddir, bitbake, confpath, toasterlayers), self.be.sourcedir)
+ if not is_merged_attr:
+ self._shellcmd('%s bash -c \"source %s %s; BITBAKE_UI="knotty" %s --read %s --read %s '
+ '--server-only -B 0.0.0.0:0\"' % (env_clean, oe_init,
+ builddir, bitbake, confpath, toasterlayers), self.be.sourcedir)
+ else:
+ self._shellcmd('%s bash -c \"source %s %s; BITBAKE_UI="knotty" %s '
+ '--server-only -B 0.0.0.0:0\"' % (env_clean, oe_init,
+ builddir, bitbake), self.be.sourcedir)
# read port number from bitbake.lock
self.be.bbport = -1
@@ -390,12 +514,20 @@ class LocalhostBEController(BuildEnvironmentController):
log = os.path.join(builddir, 'toaster_ui.log')
local_bitbake = os.path.join(os.path.dirname(os.getenv('BBBASEDIR')),
'bitbake')
- self._shellcmd(['%s bash -c \"(TOASTER_BRBE="%s" BBSERVER="0.0.0.0:%s" '
+ if not is_merged_attr:
+ self._shellcmd(['%s bash -c \"(TOASTER_BRBE="%s" BBSERVER="0.0.0.0:%s" '
'%s %s -u toasterui --read %s --read %s --token="" >>%s 2>&1;'
'BITBAKE_UI="knotty" BBSERVER=0.0.0.0:%s %s -m)&\"' \
% (env_clean, brbe, self.be.bbport, local_bitbake, bbtargets, confpath, toasterlayers, log,
self.be.bbport, bitbake,)],
builddir, nowait=True)
+ else:
+ self._shellcmd(['%s bash -c \"(TOASTER_BRBE="%s" BBSERVER="0.0.0.0:%s" '
+ '%s %s -u toasterui --token="" >>%s 2>&1;'
+ 'BITBAKE_UI="knotty" BBSERVER=0.0.0.0:%s %s -m)&\"' \
+ % (env_clean, brbe, self.be.bbport, local_bitbake, bbtargets, log,
+ self.be.bbport, bitbake,)],
+ builddir, nowait=True)
logger.debug('localhostbecontroller: Build launched, exiting. '
'Follow build logs at %s' % log)
diff --git a/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py b/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py
index 582114a..14298d9 100644
--- a/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py
+++ b/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py
@@ -74,8 +74,9 @@ class Command(BaseCommand):
print("Loading default settings")
call_command("loaddata", "settings")
template_conf = os.environ.get("TEMPLATECONF", "")
+ custom_xml_only = os.environ.get("CUSTOM_XML_ONLY")
- if ToasterSetting.objects.filter(name='CUSTOM_XML_ONLY').count() > 0:
+ if ToasterSetting.objects.filter(name='CUSTOM_XML_ONLY').count() > 0 or (custom_xml_only is not None):
# only use the custom settings
pass
elif "poky" in template_conf:
@@ -107,7 +108,10 @@ class Command(BaseCommand):
action="ignore",
message="^.*No fixture named.*$")
print("Importing custom settings if present")
- call_command("loaddata", "custom")
+ try:
+ call_command("loaddata", "custom")
+ except Exception:
+ print("NOTE: optional fixture 'custom' not found")
# we run lsupdates after config update
print("\nFetching information from the layer index, "
diff --git a/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py b/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py
index 791e53e..6a55dd4 100644
--- a/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py
+++ b/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py
@@ -49,7 +49,7 @@ class Command(BaseCommand):
# we could not find a BEC; postpone the BR
br.state = BuildRequest.REQ_QUEUED
br.save()
- logger.debug("runbuilds: No build env")
+ logger.debug("runbuilds: No build env (%s)" % e)
return
logger.info("runbuilds: starting build %s, environment %s" %
diff --git a/bitbake/lib/toaster/orm/fixtures/oe-core.xml b/bitbake/lib/toaster/orm/fixtures/oe-core.xml
index 00720c3..fec93ab 100644
--- a/bitbake/lib/toaster/orm/fixtures/oe-core.xml
+++ b/bitbake/lib/toaster/orm/fixtures/oe-core.xml
@@ -8,9 +8,9 @@
<!-- Bitbake versions which correspond to the metadata release -->
<object model="orm.bitbakeversion" pk="1">
- <field type="CharField" name="name">rocko</field>
+ <field type="CharField" name="name">sumo</field>
<field type="CharField" name="giturl">git://git.openembedded.org/bitbake</field>
- <field type="CharField" name="branch">1.36</field>
+ <field type="CharField" name="branch">1.38</field>
</object>
<object model="orm.bitbakeversion" pk="2">
<field type="CharField" name="name">HEAD</field>
@@ -22,14 +22,19 @@
<field type="CharField" name="giturl">git://git.openembedded.org/bitbake</field>
<field type="CharField" name="branch">master</field>
</object>
+ <object model="orm.bitbakeversion" pk="4">
+ <field type="CharField" name="name">thud</field>
+ <field type="CharField" name="giturl">git://git.openembedded.org/bitbake</field>
+ <field type="CharField" name="branch">1.40</field>
+ </object>
<!-- Releases available -->
<object model="orm.release" pk="1">
- <field type="CharField" name="name">rocko</field>
- <field type="CharField" name="description">Openembedded Rocko</field>
+ <field type="CharField" name="name">sumo</field>
+ <field type="CharField" name="description">Openembedded Sumo</field>
<field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">1</field>
- <field type="CharField" name="branch_name">rocko</field>
- <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href=\"http://cgit.openembedded.org/openembedded-core/log/?h=rocko\">OpenEmbedded Rocko</a> branch.</field>
+ <field type="CharField" name="branch_name">sumo</field>
+ <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href=\"http://cgit.openembedded.org/openembedded-core/log/?h=sumo\">OpenEmbedded Sumo</a> branch.</field>
</object>
<object model="orm.release" pk="2">
<field type="CharField" name="name">local</field>
@@ -45,6 +50,13 @@
<field type="CharField" name="branch_name">master</field>
<field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href=\"http://cgit.openembedded.org/openembedded-core/log/\">OpenEmbedded master</a> branch.</field>
</object>
+ <object model="orm.release" pk="4">
+ <field type="CharField" name="name">thud</field>
+ <field type="CharField" name="description">Openembedded Rocko</field>
+ <field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">1</field>
+ <field type="CharField" name="branch_name">thud</field>
+ <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href=\"http://cgit.openembedded.org/openembedded-core/log/?h=thud\">OpenEmbedded Thud</a> branch.</field>
+ </object>
<!-- Default layers for each release -->
<object model="orm.releasedefaultlayer" pk="1">
@@ -59,6 +71,10 @@
<field rel="ManyToOneRel" to="orm.release" name="release">3</field>
<field type="CharField" name="layer_name">openembedded-core</field>
</object>
+ <object model="orm.releasedefaultlayer" pk="4">
+ <field rel="ManyToOneRel" to="orm.release" name="release">4</field>
+ <field type="CharField" name="layer_name">openembedded-core</field>
+ </object>
<!-- Layer for the Local release -->
diff --git a/bitbake/lib/toaster/orm/fixtures/poky.xml b/bitbake/lib/toaster/orm/fixtures/poky.xml
index 2f39d77..fb9a771 100644
--- a/bitbake/lib/toaster/orm/fixtures/poky.xml
+++ b/bitbake/lib/toaster/orm/fixtures/poky.xml
@@ -8,9 +8,9 @@
<!-- Bitbake versions which correspond to the metadata release -->
<object model="orm.bitbakeversion" pk="1">
- <field type="CharField" name="name">rocko</field>
+ <field type="CharField" name="name">sumo</field>
<field type="CharField" name="giturl">git://git.yoctoproject.org/poky</field>
- <field type="CharField" name="branch">rocko</field>
+ <field type="CharField" name="branch">sumo</field>
<field type="CharField" name="dirpath">bitbake</field>
</object>
<object model="orm.bitbakeversion" pk="2">
@@ -25,15 +25,21 @@
<field type="CharField" name="branch">master</field>
<field type="CharField" name="dirpath">bitbake</field>
</object>
+ <object model="orm.bitbakeversion" pk="4">
+ <field type="CharField" name="name">thud</field>
+ <field type="CharField" name="giturl">git://git.yoctoproject.org/poky</field>
+ <field type="CharField" name="branch">thud</field>
+ <field type="CharField" name="dirpath">bitbake</field>
+ </object>
<!-- Releases available -->
<object model="orm.release" pk="1">
- <field type="CharField" name="name">rocko</field>
- <field type="CharField" name="description">Yocto Project 2.4 "Rocko"</field>
+ <field type="CharField" name="name">sumo</field>
+ <field type="CharField" name="description">Yocto Project 2.5 "Sumo"</field>
<field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">1</field>
- <field type="CharField" name="branch_name">rocko</field>
- <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=rocko">Yocto Project Rocko branch</a>.</field>
+ <field type="CharField" name="branch_name">sumo</field>
+ <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=sumo">Yocto Project Sumo branch</a>.</field>
</object>
<object model="orm.release" pk="2">
<field type="CharField" name="name">local</field>
@@ -49,6 +55,13 @@
<field type="CharField" name="branch_name">master</field>
<field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/">Yocto Project Master branch</a>.</field>
</object>
+ <object model="orm.release" pk="4">
+ <field type="CharField" name="name">rocko</field>
+ <field type="CharField" name="description">Yocto Project 2.6 "Thud"</field>
+ <field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">1</field>
+ <field type="CharField" name="branch_name">thud</field>
+ <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=thud">Yocto Project Thud branch</a>.</field>
+ </object>
<!-- Default project layers for each release -->
<object model="orm.releasedefaultlayer" pk="1">
@@ -87,6 +100,18 @@
<field rel="ManyToOneRel" to="orm.release" name="release">3</field>
<field type="CharField" name="layer_name">meta-yocto-bsp</field>
</object>
+ <object model="orm.releasedefaultlayer" pk="10">
+ <field rel="ManyToOneRel" to="orm.release" name="release">4</field>
+ <field type="CharField" name="layer_name">openembedded-core</field>
+ </object>
+ <object model="orm.releasedefaultlayer" pk="11">
+ <field rel="ManyToOneRel" to="orm.release" name="release">4</field>
+ <field type="CharField" name="layer_name">meta-poky</field>
+ </object>
+ <object model="orm.releasedefaultlayer" pk="12">
+ <field rel="ManyToOneRel" to="orm.release" name="release">4</field>
+ <field type="CharField" name="layer_name">meta-yocto-bsp</field>
+ </object>
<!-- Default layers provided by poky
openembedded-core
@@ -105,7 +130,7 @@
<field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">1</field>
- <field type="CharField" name="branch">rocko</field>
+ <field type="CharField" name="branch">sumo</field>
<field type="CharField" name="dirpath">meta</field>
</object>
<object model="orm.layer_version" pk="2">
@@ -123,6 +148,13 @@
<field type="CharField" name="branch">master</field>
<field type="CharField" name="dirpath">meta</field>
</object>
+ <object model="orm.layer_version" pk="4">
+ <field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
+ <field type="IntegerField" name="layer_source">0</field>
+ <field rel="ManyToOneRel" to="orm.release" name="release">4</field>
+ <field type="CharField" name="branch">rocko</field>
+ <field type="CharField" name="dirpath">meta</field>
+ </object>
<object model="orm.layer" pk="2">
<field type="CharField" name="name">meta-poky</field>
@@ -132,14 +164,14 @@
<field type="CharField" name="vcs_web_tree_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
<field type="CharField" name="vcs_web_file_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
</object>
- <object model="orm.layer_version" pk="4">
+ <object model="orm.layer_version" pk="5">
<field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">1</field>
- <field type="CharField" name="branch">rocko</field>
+ <field type="CharField" name="branch">sumo</field>
<field type="CharField" name="dirpath">meta-poky</field>
</object>
- <object model="orm.layer_version" pk="5">
+ <object model="orm.layer_version" pk="6">
<field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">2</field>
@@ -147,13 +179,20 @@
<field type="CharField" name="commit">HEAD</field>
<field type="CharField" name="dirpath">meta-poky</field>
</object>
- <object model="orm.layer_version" pk="6">
+ <object model="orm.layer_version" pk="7">
<field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">3</field>
<field type="CharField" name="branch">master</field>
<field type="CharField" name="dirpath">meta-poky</field>
</object>
+ <object model="orm.layer_version" pk="8">
+ <field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
+ <field type="IntegerField" name="layer_source">0</field>
+ <field rel="ManyToOneRel" to="orm.release" name="release">4</field>
+ <field type="CharField" name="branch">rocko</field>
+ <field type="CharField" name="dirpath">meta-poky</field>
+ </object>
<object model="orm.layer" pk="3">
<field type="CharField" name="name">meta-yocto-bsp</field>
@@ -163,14 +202,14 @@
<field type="CharField" name="vcs_web_tree_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
<field type="CharField" name="vcs_web_file_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
</object>
- <object model="orm.layer_version" pk="7">
+ <object model="orm.layer_version" pk="9">
<field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">1</field>
- <field type="CharField" name="branch">rocko</field>
+ <field type="CharField" name="branch">sumo</field>
<field type="CharField" name="dirpath">meta-yocto-bsp</field>
</object>
- <object model="orm.layer_version" pk="8">
+ <object model="orm.layer_version" pk="10">
<field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">2</field>
@@ -178,11 +217,18 @@
<field type="CharField" name="commit">HEAD</field>
<field type="CharField" name="dirpath">meta-yocto-bsp</field>
</object>
- <object model="orm.layer_version" pk="9">
+ <object model="orm.layer_version" pk="11">
<field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">3</field>
<field type="CharField" name="branch">master</field>
<field type="CharField" name="dirpath">meta-yocto-bsp</field>
</object>
+ <object model="orm.layer_version" pk="12">
+ <field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
+ <field type="IntegerField" name="layer_source">0</field>
+ <field rel="ManyToOneRel" to="orm.release" name="release">4</field>
+ <field type="CharField" name="branch">rocko</field>
+ <field type="CharField" name="dirpath">meta-yocto-bsp</field>
+ </object>
</django-objects>
diff --git a/bitbake/lib/toaster/orm/management/commands/lsupdates.py b/bitbake/lib/toaster/orm/management/commands/lsupdates.py
index efc6b3a..66114ff 100644
--- a/bitbake/lib/toaster/orm/management/commands/lsupdates.py
+++ b/bitbake/lib/toaster/orm/management/commands/lsupdates.py
@@ -29,7 +29,6 @@ from orm.models import ToasterSetting
import os
import sys
-import json
import logging
import threading
import time
@@ -37,6 +36,18 @@ logger = logging.getLogger("toaster")
DEFAULT_LAYERINDEX_SERVER = "http://layers.openembedded.org/layerindex/api/"
+# Add path to bitbake modules for layerindexlib
+# lib/toaster/orm/management/commands/lsupdates.py (abspath)
+# lib/toaster/orm/management/commands (dirname)
+# lib/toaster/orm/management (dirname)
+# lib/toaster/orm (dirname)
+# lib/toaster/ (dirname)
+# lib/ (dirname)
+path = os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))))
+sys.path.insert(0, path)
+
+import layerindexlib
+
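
The chain of five dirname() calls climbs from lib/toaster/orm/management/commands/lsupdates.py up to lib/, which is what makes the layerindexlib import work. An equivalent, arguably clearer spelling of the same computation:

    import os
    import sys

    path = os.path.abspath(__file__)
    for _ in range(5):  # commands -> management -> orm -> toaster -> lib
        path = os.path.dirname(path)
    sys.path.insert(0, path)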
class Spinner(threading.Thread):
""" A simple progress spinner to indicate download/parsing is happening"""
@@ -86,45 +97,6 @@ class Command(BaseCommand):
self.apiurl = ToasterSetting.objects.get(name = 'CUSTOM_LAYERINDEX_SERVER').value
assert self.apiurl is not None
- try:
- from urllib.request import urlopen, URLError
- from urllib.parse import urlparse
- except ImportError:
- from urllib2 import urlopen, URLError
- from urlparse import urlparse
-
- proxy_settings = os.environ.get("http_proxy", None)
-
- def _get_json_response(apiurl=None):
- if None == apiurl:
- apiurl=self.apiurl
- http_progress = Spinner()
- http_progress.start()
-
- _parsedurl = urlparse(apiurl)
- path = _parsedurl.path
-
- # logger.debug("Fetching %s", apiurl)
- try:
- res = urlopen(apiurl)
- except URLError as e:
- raise Exception("Failed to read %s: %s" % (path, e.reason))
-
- parsed = json.loads(res.read().decode('utf-8'))
-
- http_progress.stop()
- return parsed
-
- # verify we can get the basic api
- try:
- apilinks = _get_json_response()
- except Exception as e:
- import traceback
- if proxy_settings is not None:
- logger.info("EE: Using proxy %s" % proxy_settings)
- logger.warning("EE: could not connect to %s, skipping update:"
- "%s\n%s" % (self.apiurl, e, traceback.format_exc()))
- return
# update branches; only those that we already have names listed in the
# Releases table
@@ -133,112 +105,118 @@ class Command(BaseCommand):
if len(whitelist_branch_names) == 0:
raise Exception("Failed to make list of branches to fetch")
- logger.info("Fetching metadata releases for %s",
+ logger.info("Fetching metadata for %s",
" ".join(whitelist_branch_names))
- branches_info = _get_json_response(apilinks['branches'] +
- "?filter=name:%s"
- % "OR".join(whitelist_branch_names))
+ # We require a non-empty bb.data, but we can fake it with a dictionary
+ layerindex = layerindexlib.LayerIndex({"DUMMY" : "VALUE"})
+
+ http_progress = Spinner()
+ http_progress.start()
+
+ if whitelist_branch_names:
+ url_branches = ";branch=%s" % ','.join(whitelist_branch_names)
+ else:
+ url_branches = ""
+ layerindex.load_layerindex("%s%s" % (self.apiurl, url_branches))
+
+ http_progress.stop()
+
+ # We know we're only processing one entry, so we reference it here
+ # (this is cheating...)
+ index = layerindex.indexes[0]
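
This is the new pattern for the rest of the command: layerindexlib is fed the API URL (plus an optional ';branch=a,b' restriction) and exposes each loaded index with branches/layerItems/layerBranches/recipes/... dictionaries keyed by id. A hedged usage sketch against the default server:

    import layerindexlib

    layerindex = layerindexlib.LayerIndex({"DUMMY": "VALUE"})  # fake bb.data, as above
    layerindex.load_layerindex(
        "http://layers.openembedded.org/layerindex/api/;branch=sumo")
    index = layerindex.indexes[0]
    for branch_id in index.branches:
        print(index.branches[branch_id].name)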
# Map the layer index branches to toaster releases
li_branch_id_to_toaster_release = {}
- total = len(branches_info)
- for i, branch in enumerate(branches_info):
- li_branch_id_to_toaster_release[branch['id']] = \
- Release.objects.get(name=branch['name'])
+ logger.info("Processing releases")
+
+ total = len(index.branches)
+ for i, id in enumerate(index.branches):
+ li_branch_id_to_toaster_release[id] = \
+ Release.objects.get(name=index.branches[id].name)
self.mini_progress("Releases", i, total)
# keep a track of the layerindex (li) id mappings so that
# layer_versions can be created for these layers later on
li_layer_id_to_toaster_layer_id = {}
- logger.info("Fetching layers")
-
- layers_info = _get_json_response(apilinks['layerItems'])
+ logger.info("Processing layers")
- total = len(layers_info)
- for i, li in enumerate(layers_info):
+ total = len(index.layerItems)
+ for i, id in enumerate(index.layerItems):
try:
- l, created = Layer.objects.get_or_create(name=li['name'])
- l.up_date = li['updated']
- l.summary = li['summary']
- l.description = li['description']
+ l, created = Layer.objects.get_or_create(name=index.layerItems[id].name)
+ l.up_date = index.layerItems[id].updated
+ l.summary = index.layerItems[id].summary
+ l.description = index.layerItems[id].description
if created:
# predefined layers in the fixtures (for example poky.xml)
# always preempt the Layer Index for these values
- l.vcs_url = li['vcs_url']
- l.vcs_web_url = li['vcs_web_url']
- l.vcs_web_tree_base_url = li['vcs_web_tree_base_url']
- l.vcs_web_file_base_url = li['vcs_web_file_base_url']
+ l.vcs_url = index.layerItems[id].vcs_url
+ l.vcs_web_url = index.layerItems[id].vcs_web_url
+ l.vcs_web_tree_base_url = index.layerItems[id].vcs_web_tree_base_url
+ l.vcs_web_file_base_url = index.layerItems[id].vcs_web_file_base_url
l.save()
except Layer.MultipleObjectsReturned:
logger.info("Skipped %s as we found multiple layers and "
"don't know which to update" %
- li['name'])
+ index.layerItems[id].name)
- li_layer_id_to_toaster_layer_id[li['id']] = l.pk
+ li_layer_id_to_toaster_layer_id[id] = l.pk
self.mini_progress("layers", i, total)
# update layer_versions
- logger.info("Fetching layer versions")
- layerbranches_info = _get_json_response(
- apilinks['layerBranches'] + "?filter=branch__name:%s" %
- "OR".join(whitelist_branch_names))
+ logger.info("Processing layer versions")
# Map Layer index layer_branch object id to
# layer_version toaster object id
li_layer_branch_id_to_toaster_lv_id = {}
- total = len(layerbranches_info)
- for i, lbi in enumerate(layerbranches_info):
+ total = len(index.layerBranches)
+ for i, id in enumerate(index.layerBranches):
# release as defined by toaster map to layerindex branch
- release = li_branch_id_to_toaster_release[lbi['branch']]
+ release = li_branch_id_to_toaster_release[index.layerBranches[id].branch_id]
try:
lv, created = Layer_Version.objects.get_or_create(
layer=Layer.objects.get(
- pk=li_layer_id_to_toaster_layer_id[lbi['layer']]),
+ pk=li_layer_id_to_toaster_layer_id[index.layerBranches[id].layer_id]),
release=release
)
except KeyError:
logger.warning(
"No such layerindex layer referenced by layerbranch %d" %
- lbi['layer'])
+ index.layerBranches[id].layer_id)
continue
if created:
- lv.release = li_branch_id_to_toaster_release[lbi['branch']]
- lv.up_date = lbi['updated']
- lv.commit = lbi['actual_branch']
- lv.dirpath = lbi['vcs_subdir']
+ lv.release = li_branch_id_to_toaster_release[index.layerBranches[id].branch_id]
+ lv.up_date = index.layerBranches[id].updated
+ lv.commit = index.layerBranches[id].actual_branch
+ lv.dirpath = index.layerBranches[id].vcs_subdir
lv.save()
- li_layer_branch_id_to_toaster_lv_id[lbi['id']] =\
+ li_layer_branch_id_to_toaster_lv_id[index.layerBranches[id].id] =\
lv.pk
self.mini_progress("layer versions", i, total)
- logger.info("Fetching layer version dependencies")
- # update layer dependencies
- layerdependencies_info = _get_json_response(
- apilinks['layerDependencies'] +
- "?filter=layerbranch__branch__name:%s" %
- "OR".join(whitelist_branch_names))
+ logger.info("Processing layer version dependencies")
dependlist = {}
- for ldi in layerdependencies_info:
+ for id in index.layerDependencies:
try:
lv = Layer_Version.objects.get(
- pk=li_layer_branch_id_to_toaster_lv_id[ldi['layerbranch']])
+ pk=li_layer_branch_id_to_toaster_lv_id[index.layerDependencies[id].layerbranch_id])
except Layer_Version.DoesNotExist as e:
continue
if lv not in dependlist:
dependlist[lv] = []
try:
- layer_id = li_layer_id_to_toaster_layer_id[ldi['dependency']]
+ layer_id = li_layer_id_to_toaster_layer_id[index.layerDependencies[id].dependency_id]
dependlist[lv].append(
Layer_Version.objects.get(layer__pk=layer_id,
@@ -247,7 +225,7 @@ class Command(BaseCommand):
except Layer_Version.DoesNotExist:
logger.warning("Cannot find layer version (ls:%s),"
"up_id:%s lv:%s" %
- (self, ldi['dependency'], lv))
+ (self, index.layerDependencies[id].dependency_id, lv))
total = len(dependlist)
for i, lv in enumerate(dependlist):
@@ -258,73 +236,61 @@ class Command(BaseCommand):
self.mini_progress("Layer version dependencies", i, total)
# update Distros
- logger.info("Fetching distro information")
- distros_info = _get_json_response(
- apilinks['distros'] + "?filter=layerbranch__branch__name:%s" %
- "OR".join(whitelist_branch_names))
+ logger.info("Processing distro information")
- total = len(distros_info)
- for i, di in enumerate(distros_info):
+ total = len(index.distros)
+ for i, id in enumerate(index.distros):
distro, created = Distro.objects.get_or_create(
- name=di['name'],
+ name=index.distros[id].name,
layer_version=Layer_Version.objects.get(
- pk=li_layer_branch_id_to_toaster_lv_id[di['layerbranch']]))
- distro.up_date = di['updated']
- distro.name = di['name']
- distro.description = di['description']
+ pk=li_layer_branch_id_to_toaster_lv_id[index.distros[id].layerbranch_id]))
+ distro.up_date = index.distros[id].updated
+ distro.name = index.distros[id].name
+ distro.description = index.distros[id].description
distro.save()
self.mini_progress("distros", i, total)
# update machines
- logger.info("Fetching machine information")
- machines_info = _get_json_response(
- apilinks['machines'] + "?filter=layerbranch__branch__name:%s" %
- "OR".join(whitelist_branch_names))
+ logger.info("Processing machine information")
- total = len(machines_info)
- for i, mi in enumerate(machines_info):
+ total = len(index.machines)
+ for i, id in enumerate(index.machines):
mo, created = Machine.objects.get_or_create(
- name=mi['name'],
+ name=index.machines[id].name,
layer_version=Layer_Version.objects.get(
- pk=li_layer_branch_id_to_toaster_lv_id[mi['layerbranch']]))
- mo.up_date = mi['updated']
- mo.name = mi['name']
- mo.description = mi['description']
+ pk=li_layer_branch_id_to_toaster_lv_id[index.machines[id].layerbranch_id]))
+ mo.up_date = index.machines[id].updated
+ mo.name = index.machines[id].name
+ mo.description = index.machines[id].description
mo.save()
self.mini_progress("machines", i, total)
# update recipes; paginate by layer version / layer branch
- logger.info("Fetching recipe information")
- recipes_info = _get_json_response(
- apilinks['recipes'] + "?filter=layerbranch__branch__name:%s" %
- "OR".join(whitelist_branch_names))
+ logger.info("Processing recipe information")
- total = len(recipes_info)
- for i, ri in enumerate(recipes_info):
+ total = len(index.recipes)
+ for i, id in enumerate(index.recipes):
try:
- lv_id = li_layer_branch_id_to_toaster_lv_id[ri['layerbranch']]
+ lv_id = li_layer_branch_id_to_toaster_lv_id[index.recipes[id].layerbranch_id]
lv = Layer_Version.objects.get(pk=lv_id)
ro, created = Recipe.objects.get_or_create(
layer_version=lv,
- name=ri['pn']
+ name=index.recipes[id].pn
)
ro.layer_version = lv
- ro.up_date = ri['updated']
- ro.name = ri['pn']
- ro.version = ri['pv']
- ro.summary = ri['summary']
- ro.description = ri['description']
- ro.section = ri['section']
- ro.license = ri['license']
- ro.homepage = ri['homepage']
- ro.bugtracker = ri['bugtracker']
- ro.file_path = ri['filepath'] + "/" + ri['filename']
- if 'inherits' in ri:
- ro.is_image = 'image' in ri['inherits'].split()
- else: # workaround for old style layer index
- ro.is_image = "-image-" in ri['pn']
+ ro.up_date = index.recipes[id].updated
+ ro.name = index.recipes[id].pn
+ ro.version = index.recipes[id].pv
+ ro.summary = index.recipes[id].summary
+ ro.description = index.recipes[id].description
+ ro.section = index.recipes[id].section
+ ro.license = index.recipes[id].license
+ ro.homepage = index.recipes[id].homepage
+ ro.bugtracker = index.recipes[id].bugtracker
+ ro.file_path = index.recipes[id].fullpath
+ ro.is_image = 'image' in index.recipes[id].inherits.split()
ro.save()
except Exception as e:
logger.warning("Failed saving recipe %s", e)
diff --git a/bitbake/lib/toaster/orm/migrations/0018_project_specific.py b/bitbake/lib/toaster/orm/migrations/0018_project_specific.py
new file mode 100644
index 0000000..084ecad
--- /dev/null
+++ b/bitbake/lib/toaster/orm/migrations/0018_project_specific.py
@@ -0,0 +1,28 @@
+# -*- coding: utf-8 -*-
+from __future__ import unicode_literals
+
+from django.db import migrations, models
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('orm', '0017_distro_clone'),
+ ]
+
+ operations = [
+ migrations.AddField(
+ model_name='Project',
+ name='builddir',
+ field=models.TextField(),
+ ),
+ migrations.AddField(
+ model_name='Project',
+ name='merged_attr',
+ field=models.BooleanField(default=False)
+ ),
+ migrations.AddField(
+ model_name='Build',
+ name='progress_item',
+ field=models.CharField(max_length=40)
+ ),
+ ]
diff --git a/bitbake/lib/toaster/orm/models.py b/bitbake/lib/toaster/orm/models.py
index 3a7dff8..7720290 100644
--- a/bitbake/lib/toaster/orm/models.py
+++ b/bitbake/lib/toaster/orm/models.py
@@ -121,8 +121,15 @@ class ToasterSetting(models.Model):
class ProjectManager(models.Manager):
- def create_project(self, name, release):
- if release is not None:
+ def create_project(self, name, release, existing_project=None):
+ if existing_project and (release is not None):
+ prj = existing_project
+ prj.bitbake_version = release.bitbake_version
+ prj.release = release
+ # Delete the previous ProjectLayer mappings
+ for pl in ProjectLayer.objects.filter(project=prj):
+ pl.delete()
+ elif release is not None:
prj = self.model(name=name,
bitbake_version=release.bitbake_version,
release=release)
@@ -130,15 +137,14 @@ class ProjectManager(models.Manager):
prj = self.model(name=name,
bitbake_version=None,
release=None)
-
prj.save()
for defaultconf in ToasterSetting.objects.filter(
name__startswith="DEFCONF_"):
name = defaultconf.name[8:]
- ProjectVariable.objects.create(project=prj,
- name=name,
- value=defaultconf.value)
+ pv,create = ProjectVariable.objects.get_or_create(project=prj,name=name)
+ pv.value = defaultconf.value
+ pv.save()
if release is None:
return prj
@@ -197,6 +203,11 @@ class Project(models.Model):
user_id = models.IntegerField(null=True)
objects = ProjectManager()
+ # build directory override (e.g. imported)
+ builddir = models.TextField()
+ # merge the Toaster configure attributes directly into the standard conf files
+ merged_attr = models.BooleanField(default=False)
+
# set to True for the project which is the default container
# for builds initiated by the command line etc.
is_default= models.BooleanField(default=False)
@@ -305,6 +316,15 @@ class Project(models.Model):
return layer_versions
+ def get_default_image_recipe(self):
+ try:
+ return self.projectvariable_set.get(name="DEFAULT_IMAGE").value
+ except (ProjectVariable.DoesNotExist,IndexError):
+ return None
+
+ def get_is_new(self):
+ return self.get_variable(Project.PROJECT_SPECIFIC_ISNEW)
+
def get_available_machines(self):
""" Returns QuerySet of all Machines which are provided by the
Layers currently added to the Project """
@@ -353,6 +373,32 @@ class Project(models.Model):
return queryset
+ # Project Specific status management
+ PROJECT_SPECIFIC_STATUS = 'INTERNAL_PROJECT_SPECIFIC_STATUS'
+ PROJECT_SPECIFIC_CALLBACK = 'INTERNAL_PROJECT_SPECIFIC_CALLBACK'
+ PROJECT_SPECIFIC_ISNEW = 'INTERNAL_PROJECT_SPECIFIC_ISNEW'
+ PROJECT_SPECIFIC_DEFAULTIMAGE = 'PROJECT_SPECIFIC_DEFAULTIMAGE'
+ PROJECT_SPECIFIC_NONE = ''
+ PROJECT_SPECIFIC_NEW = '1'
+ PROJECT_SPECIFIC_EDIT = '2'
+ PROJECT_SPECIFIC_CLONING = '3'
+ PROJECT_SPECIFIC_CLONING_SUCCESS = '4'
+ PROJECT_SPECIFIC_CLONING_FAIL = '5'
+
+ def get_variable(self,variable,default_value = ''):
+ try:
+ return self.projectvariable_set.get(name=variable).value
+ except (ProjectVariable.DoesNotExist,IndexError):
+ return default_value
+
+ def set_variable(self,variable,value):
+ pv,create = ProjectVariable.objects.get_or_create(project = self, name = variable)
+ pv.value = value
+ pv.save()
+
+ def get_default_image(self):
+ return self.get_variable(Project.PROJECT_SPECIFIC_DEFAULTIMAGE)
+
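
get_variable()/set_variable() turn ProjectVariable into a small per-project key/value store, with the PROJECT_SPECIFIC_* constants above as the well-known keys for the new project-specific flow. A hedged usage sketch (a project instance is assumed):

    # flag a project as freshly created, then read the flag back
    project.set_variable(Project.PROJECT_SPECIFIC_ISNEW, Project.PROJECT_SPECIFIC_NEW)
    if project.get_variable(Project.PROJECT_SPECIFIC_ISNEW) == Project.PROJECT_SPECIFIC_NEW:
        print("project %s is still in the 'new' state" % project.name)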
def schedule_build(self):
from bldcontrol.models import BuildRequest, BRTarget, BRLayer
@@ -459,6 +505,9 @@ class Build(models.Model):
# number of repos cloned so far for this build (default off)
repos_cloned = models.IntegerField(default=1)
+ # Hint on current progress item
+ progress_item = models.CharField(max_length=40)
+
@staticmethod
def get_recent(project=None):
"""
@@ -1663,6 +1712,9 @@ class CustomImageRecipe(Recipe):
path_schema_two = self.base_recipe.file_path
+ path_schema_three = "%s/%s" % (self.base_recipe.layer_version.layer.local_source_dir,
+ self.base_recipe.file_path)
+
if os.path.exists(path_schema_one):
return path_schema_one
@@ -1670,6 +1722,10 @@ class CustomImageRecipe(Recipe):
if os.path.exists(path_schema_two):
return path_schema_two
+ # Or a local path if all layers are local
+ if os.path.exists(path_schema_three):
+ return path_schema_three
+
return None
def generate_recipe_file_contents(self):
@@ -1694,8 +1750,8 @@ class CustomImageRecipe(Recipe):
if base_recipe_path:
base_recipe = open(base_recipe_path, 'r').read()
else:
- raise IOError("Based on recipe file not found: %s" %
- base_recipe_path)
+ # Pass back None to trigger error message to user
+ return None
# Add a special case for when the recipe we have based a custom image
# recipe on requires another recipe.
@@ -1821,7 +1877,7 @@ class Distro(models.Model):
description = models.CharField(max_length=255)
def get_vcs_distro_file_link_url(self):
- path = self.name+'.conf'
+ path = 'conf/distro/%s.conf' % self.name
return self.layer_version.get_vcs_file_link_url(path)
def __unicode__(self):
diff --git a/bitbake/lib/toaster/toastergui/api.py b/bitbake/lib/toaster/toastergui/api.py
index ab6ba69..564d595 100644
--- a/bitbake/lib/toaster/toastergui/api.py
+++ b/bitbake/lib/toaster/toastergui/api.py
@@ -22,7 +22,9 @@ import os
import re
import logging
import json
+import subprocess
from collections import Counter
+from shutil import copyfile
from orm.models import Project, ProjectTarget, Build, Layer_Version
from orm.models import LayerVersionDependency, LayerSource, ProjectLayer
@@ -38,6 +40,18 @@ from django.core.urlresolvers import reverse
from django.db.models import Q, F
from django.db import Error
from toastergui.templatetags.projecttags import filtered_filesizeformat
+from django.utils import timezone
+import pytz
+
+# development/debugging support
+verbose = 2
+def _log(msg):
+ if 1 == verbose:
+ print(msg)
+ elif 2 == verbose:
+ f1=open('/tmp/toaster.log', 'a')
+ f1.write("|" + msg + "|\n" )
+ f1.close()
logger = logging.getLogger("toaster")
@@ -137,6 +151,130 @@ class XhrBuildRequest(View):
return response
+class XhrProjectUpdate(View):
+
+ def get(self, request, *args, **kwargs):
+ return HttpResponse()
+
+ def post(self, request, *args, **kwargs):
+ """
+ Project Update
+
+ Entry point: /xhr_projectupdate/<project_id>
+ Method: POST
+
+ Args:
+ pid: pid of project to update
+
+ Returns:
+ {"error": "ok"}
+ or
+ {"error": <error message>}
+ """
+
+ project = Project.objects.get(pk=kwargs['pid'])
+ logger.debug("ProjectUpdateCallback:project.pk=%d,project.builddir=%s" % (project.pk,project.builddir))
+
+ if 'do_update' in request.POST:
+
+ # Extract any default image recipe
+ if 'default_image' in request.POST:
+ project.set_variable(Project.PROJECT_SPECIFIC_DEFAULTIMAGE,str(request.POST['default_image']))
+ else:
+ project.set_variable(Project.PROJECT_SPECIFIC_DEFAULTIMAGE,'')
+
+ logger.debug("ProjectUpdateCallback:Chain to the build request")
+
+ # Chain to the build request
+ xhrBuildRequest = XhrBuildRequest()
+ return xhrBuildRequest.post(request, *args, **kwargs)
+
+ logger.warning("ERROR:XhrProjectUpdate")
+ response = HttpResponse()
+ response.status_code = 500
+ return response
+
+class XhrSetDefaultImageUrl(View):
+
+ def get(self, request, *args, **kwargs):
+ return HttpResponse()
+
+ def post(self, request, *args, **kwargs):
+ """
+ Project Update
+
+ Entry point: /xhr_setdefaultimage/<project_id>
+ Method: POST
+
+ Args:
+ pid: pid of project to update default image
+
+ Returns:
+ {"error": "ok"}
+ or
+ {"error": <error message>}
+ """
+
+ project = Project.objects.get(pk=kwargs['pid'])
+ logger.debug("XhrSetDefaultImageUrl:project.pk=%d" % (project.pk))
+
+ # set any default image recipe
+ if 'targets' in request.POST:
+ default_target = str(request.POST['targets'])
+ project.set_variable(Project.PROJECT_SPECIFIC_DEFAULTIMAGE,default_target)
+ logger.debug("XhrSetDefaultImageUrl,project.pk=%d,project.builddir=%s" % (project.pk,project.builddir))
+ return error_response('ok')
+
+ logger.warning("ERROR:XhrSetDefaultImageUrl")
+ response = HttpResponse()
+ response.status_code = 500
+ return response
+
+
+#
+# Layer Management
+#
+# Rules for 'local_source_dir' layers
+# * Layers must have a unique name in the Layers table
+# * A 'local_source_dir' layer is supposed to be shared
+# by all projects that use it, so that it can have the
+# same logical name
+# * Each project that uses a layer will have its own
+# LayerVersion and Project Layer for it
+# * During the Paroject delete process, when the last
+# LayerVersion for a 'local_source_dir' layer is deleted
+# then the Layer record is deleted to remove orphans
+#
+
+def scan_layer_content(layer,layer_version):
+ # if this is a local layer directory, we can immediately scan its content
+ if layer.local_source_dir:
+ try:
+ # recipes-*/*/*.bb
+ cmd = '%s %s' % ('ls', os.path.join(layer.local_source_dir,'recipes-*/*/*.bb'))
+ recipes_list = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read()
+ recipes_list = recipes_list.decode("utf-8").strip()
+ if recipes_list and 'No such' not in recipes_list:
+ for recipe in recipes_list.split('\n'):
+ recipe_path = recipe[recipe.rfind('recipes-'):]
+ recipe_name = recipe[recipe.rfind('/')+1:].replace('.bb','')
+ recipe_ver = recipe_name.rfind('_')
+ if recipe_ver > 0:
+ recipe_name = recipe_name[0:recipe_ver]
+ if recipe_name:
+ ro, created = Recipe.objects.get_or_create(
+ layer_version=layer_version,
+ name=recipe_name
+ )
+ if created:
+ ro.file_path = recipe_path
+ ro.summary = 'Recipe %s from layer %s' % (recipe_name,layer.name)
+ ro.description = ro.summary
+ ro.save()
+
+ except Exception as e:
+ logger.warning("ERROR:scan_layer_content: %s" % e)
+
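
scan_layer_content() shells out to 'ls' for the recipes-*/*/*.bb pattern; the same scan can be written with the glob module, avoiding the subprocess and the 'No such' string check. A hedged alternative sketch (the helper name is illustrative):

    import glob
    import os

    def list_layer_recipes(local_source_dir):
        # yields (relative_path, name) for every recipes-*/*/*.bb in the layer
        for recipe in glob.glob(os.path.join(local_source_dir, 'recipes-*/*/*.bb')):
            recipe_path = recipe[recipe.rfind('recipes-'):]
            name = os.path.basename(recipe)[:-len('.bb')]
            if name.rfind('_') > 0:
                name = name[:name.rfind('_')]  # drop any _version suffix
            if name:
                yield recipe_path, name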
class XhrLayer(View):
""" Delete, Get, Add and Update Layer information
@@ -265,6 +403,7 @@ class XhrLayer(View):
(csv)]
"""
+
try:
project = Project.objects.get(pk=kwargs['pid'])
@@ -285,7 +424,13 @@ class XhrLayer(View):
if layer_data['name'] in existing_layers:
return JsonResponse({"error": "layer-name-exists"})
- layer = Layer.objects.create(name=layer_data['name'])
+ if ('local_source_dir' in layer_data):
+ # Local layer can be shared across projects. They have no 'release'
+ # and are not included in get_all_compatible_layer_versions() above
+ layer,created = Layer.objects.get_or_create(name=layer_data['name'])
+ _log("Local Layer created=%s" % created)
+ else:
+ layer = Layer.objects.create(name=layer_data['name'])
layer_version = Layer_Version.objects.create(
layer=layer,
@@ -293,7 +438,7 @@ class XhrLayer(View):
layer_source=LayerSource.TYPE_IMPORTED)
# Local layer
- if ('local_source_dir' in layer_data) and layer.local_source_dir:
+ if ('local_source_dir' in layer_data): ### and layer.local_source_dir:
layer.local_source_dir = layer_data['local_source_dir']
# git layer
elif 'vcs_url' in layer_data:
@@ -325,6 +470,9 @@ class XhrLayer(View):
'layerdetailurl':
layer_dep.get_detailspage_url(project.pk)})
+ # Scan the layer's content and update components
+ scan_layer_content(layer,layer_version)
+
except Layer_Version.DoesNotExist:
return error_response("layer-dep-not-found")
except Project.DoesNotExist:
@@ -529,7 +677,13 @@ class XhrCustomRecipe(View):
recipe_path = os.path.join(layerpath, "recipes", "%s.bb" %
recipe.name)
with open(recipe_path, "w") as recipef:
- recipef.write(recipe.generate_recipe_file_contents())
+ content = recipe.generate_recipe_file_contents()
+ if not content:
+ # Delete this incomplete image recipe object
+ recipe.delete()
+ return error_response("recipe-parent-not-exist")
+ else:
+ recipef.write(recipe.generate_recipe_file_contents())
return JsonResponse(
{"error": "ok",
@@ -1014,8 +1168,24 @@ class XhrProject(View):
state=BuildRequest.REQ_INPROGRESS):
XhrBuildRequest.cancel_build(br)
+ # gather potential orphaned local layers attached to this project
+ project_local_layer_list = []
+ for pl in ProjectLayer.objects.filter(project=project):
+ if pl.layercommit.layer_source == LayerSource.TYPE_IMPORTED:
+ project_local_layer_list.append(pl.layercommit.layer)
+
+ # deep delete the project and its dependencies
project.delete()
+ # delete any local layers now orphaned
+ _log("LAYER_ORPHAN_CHECK:Check for orphaned layers")
+ for layer in project_local_layer_list:
+ layer_refs = Layer_Version.objects.filter(layer=layer)
+ _log("LAYER_ORPHAN_CHECK:Ref Count for '%s' = %d" % (layer.name,len(layer_refs)))
+ if 0 == len(layer_refs):
+ _log("LAYER_ORPHAN_CHECK:DELETE orpahned '%s'" % (layer.name))
+ Layer.objects.filter(pk=layer.id).delete()
+
except Project.DoesNotExist:
return error_response("Project %s does not exist" %
kwargs['project_id'])
diff --git a/bitbake/lib/toaster/toastergui/static/js/layerBtn.js b/bitbake/lib/toaster/toastergui/static/js/layerBtn.js
index 9f9eda1..a5a6563 100644
--- a/bitbake/lib/toaster/toastergui/static/js/layerBtn.js
+++ b/bitbake/lib/toaster/toastergui/static/js/layerBtn.js
@@ -67,6 +67,18 @@ function layerBtnsInit() {
});
});
+ $("td .set-default-recipe-btn").unbind('click');
+ $("td .set-default-recipe-btn").click(function(e){
+ e.preventDefault();
+ var recipe = $(this).data('recipe-name');
+
+ libtoaster.setDefaultImage(null, recipe,
+ function(){
+ /* Success */
+ window.location.replace(libtoaster.ctx.projectSpecificPageUrl);
+ });
+ });
+
$(".customise-btn").unbind('click');
$(".customise-btn").click(function(e){
diff --git a/bitbake/lib/toaster/toastergui/static/js/layerdetails.js b/bitbake/lib/toaster/toastergui/static/js/layerdetails.js
index 9ead393..933b65b 100644
--- a/bitbake/lib/toaster/toastergui/static/js/layerdetails.js
+++ b/bitbake/lib/toaster/toastergui/static/js/layerdetails.js
@@ -359,7 +359,8 @@ function layerDetailsPageInit (ctx) {
if ($(this).is("dt")) {
var dd = $(this).next("dd");
if (!dd.children("form:visible")|| !dd.find(".current-value").html()){
- if (ctx.layerVersion.layer_source == ctx.layerSourceTypes.TYPE_IMPORTED){
+ if (ctx.layerVersion.layer_source == ctx.layerSourceTypes.TYPE_IMPORTED ||
+ ctx.layerVersion.layer_source == ctx.layerSourceTypes.TYPE_LOCAL) {
/* There's no current value and the layer is editable
* so show the "Not set" and hide the delete icon
*/
diff --git a/bitbake/lib/toaster/toastergui/static/js/libtoaster.js b/bitbake/lib/toaster/toastergui/static/js/libtoaster.js
index 6f9b5d0..f2c45c8 100644
--- a/bitbake/lib/toaster/toastergui/static/js/libtoaster.js
+++ b/bitbake/lib/toaster/toastergui/static/js/libtoaster.js
@@ -275,7 +275,8 @@ var libtoaster = (function () {
function _addRmLayer(layerObj, add, doneCb){
if (layerObj.xhrLayerUrl === undefined){
- throw("xhrLayerUrl is undefined")
+ alert("ERROR: missing xhrLayerUrl object. Please file a bug.");
+ return;
}
if (add === true) {
@@ -465,6 +466,108 @@ var libtoaster = (function () {
$.cookie('toaster-notification', JSON.stringify(data), { path: '/'});
}
+ /* _updateProject:
+ * url: xhrProjectUpdateUrl or null for current project
+ * onsuccess: callback for successful execution
+ * onfail: callback for failed execution
+ */
+ function _updateProject (url, targets, default_image, onsuccess, onfail) {
+
+ if (!url)
+ url = libtoaster.ctx.xhrProjectUpdateUrl;
+
+ /* Flatten the array of targets into a space separated list */
+ if (targets instanceof Array){
+ targets = targets.reduce(function(prevV, nextV){
+ return prevV + ' ' + nextV;
+ });
+ }
+
+ $.ajax( {
+ type: "POST",
+ url: url,
+ data: { 'do_update' : 'True' , 'targets' : targets , 'default_image' : default_image , },
+ headers: { 'X-CSRFToken' : $.cookie('csrftoken')},
+ success: function (_data) {
+ if (_data.error !== "ok") {
+ console.warn(_data.error);
+ } else {
+ if (onsuccess !== undefined) onsuccess(_data);
+ }
+ },
+ error: function (_data) {
+ console.warn("Call failed");
+ console.warn(_data);
+ if (onfail) onfail(_data);
+ } });
+ }
+
+ /* _cancelProject:
+ * url: xhrProjectUpdateUrl or null for current project
+ * onsuccess: callback for successful execution
+ * onfail: callback for failed execution
+ */
+ function _cancelProject (url, onsuccess, onfail) {
+
+ if (!url)
+ url = libtoaster.ctx.xhrProjectCancelUrl;
+
+ $.ajax( {
+ type: "POST",
+ url: url,
+ data: { 'do_cancel' : 'True' },
+ headers: { 'X-CSRFToken' : $.cookie('csrftoken')},
+ success: function (_data) {
+ if (_data.error !== "ok") {
+ console.warn(_data.error);
+ } else {
+ if (onsuccess !== undefined) onsuccess(_data);
+ }
+ },
+ error: function (_data) {
+ console.warn("Call failed");
+ console.warn(_data);
+ if (onfail) onfail(_data);
+ } });
+ }
+
+ /* _setDefaultImage:
+ * url: xhrSetDefaultImageUrl or null for current project
+ * targets: an array or space separated list of targets to set as default
+ * onsuccess: callback for successful execution
+ * onfail: callback for failed execution
+ */
+ function _setDefaultImage (url, targets, onsuccess, onfail) {
+
+ if (!url)
+ url = libtoaster.ctx.xhrSetDefaultImageUrl;
+
+ /* Flatten the array of targets into a space separated list */
+ if (targets instanceof Array){
+ targets = targets.reduce(function(prevV, nextV){
+ return prevV + ' ' + nextV;
+ });
+ }
+
+ $.ajax( {
+ type: "POST",
+ url: url,
+ data: { 'targets' : targets },
+ headers: { 'X-CSRFToken' : $.cookie('csrftoken')},
+ success: function (_data) {
+ if (_data.error !== "ok") {
+ console.warn(_data.error);
+ } else {
+ if (onsuccess !== undefined) onsuccess(_data);
+ }
+ },
+ error: function (_data) {
+ console.warn("Call failed");
+ console.warn(_data);
+ if (onfail) onfail(_data);
+ } });
+ }
+
return {
enableAjaxLoadingTimer: _enableAjaxLoadingTimer,
disableAjaxLoadingTimer: _disableAjaxLoadingTimer,
@@ -485,6 +588,9 @@ var libtoaster = (function () {
createCustomRecipe: _createCustomRecipe,
makeProjectNameValidation: _makeProjectNameValidation,
setNotification: _setNotification,
+ updateProject : _updateProject,
+ cancelProject : _cancelProject,
+ setDefaultImage : _setDefaultImage,
};
})();
diff --git a/bitbake/lib/toaster/toastergui/static/js/mrbsection.js b/bitbake/lib/toaster/toastergui/static/js/mrbsection.js
index c0c5fa9..f07ccf8 100644
--- a/bitbake/lib/toaster/toastergui/static/js/mrbsection.js
+++ b/bitbake/lib/toaster/toastergui/static/js/mrbsection.js
@@ -86,7 +86,7 @@ function mrbSectionInit(ctx){
if (buildFinished(build)) {
// a build finished: reload the whole page so that the build
// shows up in the builds table
- window.location.reload();
+ window.location.reload(true);
}
else if (stateChanged(build)) {
// update the whole template
@@ -110,6 +110,8 @@ function mrbSectionInit(ctx){
// update the clone progress text
selector = '#repos-cloned-percentage-' + build.id;
$(selector).html(build.repos_cloned_percentage);
+ selector = '#repos-cloned-progressitem-' + build.id;
+ $(selector).html('('+build.progress_item+')');
// update the recipe progress bar
selector = '#repos-cloned-percentage-bar-' + build.id;
diff --git a/bitbake/lib/toaster/toastergui/static/js/newcustomimage_modal.js b/bitbake/lib/toaster/toastergui/static/js/newcustomimage_modal.js
index dace8e3..e55fffc 100644
--- a/bitbake/lib/toaster/toastergui/static/js/newcustomimage_modal.js
+++ b/bitbake/lib/toaster/toastergui/static/js/newcustomimage_modal.js
@@ -25,6 +25,8 @@ function newCustomImageModalInit(){
var duplicateNameMsg = "An image with this name already exists. Image names must be unique.";
var duplicateImageInProjectMsg = "An image with this name already exists in this project."
var invalidBaseRecipeIdMsg = "Please select an image to customise.";
+ var missingParentRecipe = "The parent recipe file was not found. Cancel this action, build any target (like 'quilt-native') to force all new layers to be cloned, and try again";
+ var unknownError = "Unexpected error: ";
// set button to "submit" state and enable text entry so user can
// enter the custom recipe name
@@ -62,6 +64,7 @@ function newCustomImageModalInit(){
if (nameInput.val().length > 0) {
libtoaster.createCustomRecipe(nameInput.val(), baseRecipeId,
function(ret) {
+ showSubmitState();
if (ret.error !== "ok") {
console.warn(ret.error);
if (ret.error === "invalid-name") {
@@ -73,6 +76,10 @@ function newCustomImageModalInit(){
} else if (ret.error === "image-already-exists") {
showNameError(duplicateImageInProjectMsg);
return;
+ } else if (ret.error === "recipe-parent-not-exist") {
+ showNameError(missingParentRecipe);
+ } else {
+ showNameError(unknownError + ret.error);
}
} else {
imgCustomModal.modal('hide');
diff --git a/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js b/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js
index 69220aa..3f9e186 100644
--- a/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js
+++ b/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js
@@ -14,6 +14,9 @@ function projectTopBarInit(ctx) {
var newBuildTargetBuildBtn = $("#build-button");
var selectedTarget;
+ var updateProjectBtn = $("#update-project-button");
+ var cancelProjectBtn = $("#cancel-project-button");
+
/* Project name change functionality */
projectNameFormToggle.click(function(e){
e.preventDefault();
@@ -89,6 +92,25 @@ function projectTopBarInit(ctx) {
}, null);
});
+ updateProjectBtn.click(function (e) {
+ e.preventDefault();
+
+ selectedTarget = { name: "_PROJECT_PREPARE_" };
+
+ /* Save current default build image, fire off the build */
+ libtoaster.updateProject(null, selectedTarget.name, newBuildTargetInput.val().trim(),
+ function(){
+ window.location.replace(libtoaster.ctx.projectSpecificPageUrl);
+ }, null);
+ });
+
+ cancelProjectBtn.click(function (e) {
+ e.preventDefault();
+
+ /* redirect to 'done/canceled' landing page */
+ window.location.replace(libtoaster.ctx.landingSpecificCancelURL);
+ });
+
/* Call makeProjectNameValidation function */
libtoaster.makeProjectNameValidation($("#project-name-change-input"),
$("#hint-error-project-name"), $("#validate-project-name"),
diff --git a/bitbake/lib/toaster/toastergui/tables.py b/bitbake/lib/toaster/toastergui/tables.py
index dca2fa2..9ff756b 100644
--- a/bitbake/lib/toaster/toastergui/tables.py
+++ b/bitbake/lib/toaster/toastergui/tables.py
@@ -35,6 +35,8 @@ from toastergui.tablefilter import TableFilterActionToggle
from toastergui.tablefilter import TableFilterActionDateRange
from toastergui.tablefilter import TableFilterActionDay
+import os
+
class ProjectFilters(object):
@staticmethod
def in_project(project_layers):
@@ -339,6 +341,8 @@ class RecipesTable(ToasterTable):
'filter_name' : "in_current_project",
'static_data_name' : "add-del-layers",
'static_data_template' : '{% include "recipe_btn.html" %}'}
+ if '1' == os.environ.get('TOASTER_PROJECTSPECIFIC'):
+ build_col['static_data_template'] = '{% include "recipe_add_btn.html" %}'
def get_context_data(self, **kwargs):
project = Project.objects.get(pk=kwargs['pid'])
@@ -1611,14 +1615,12 @@ class DistrosTable(ToasterTable):
hidden=True,
field_name="layer_version__get_vcs_reference")
- wrtemplate_file_template = '''<code>conf/machine/{{data.name}}.conf</code>
- <a href="{{data.get_vcs_machine_file_link_url}}" target="_blank"><span class="glyphicon glyphicon-new-window"></i></a>'''
-
+ distro_file_template = '''<code>conf/distro/{{data.name}}.conf</code>
+ {% if 'None' not in data.get_vcs_distro_file_link_url %}<a href="{{data.get_vcs_distro_file_link_url}}" target="_blank"><span class="glyphicon glyphicon-new-window"></span></a>{% endif %}'''
self.add_column(title="Distro file",
hidden=True,
static_data_name="templatefile",
- static_data_template=wrtemplate_file_template)
-
+ static_data_template=distro_file_template)
self.add_column(title="Select",
help_text="Sets the selected distro to the project",
diff --git a/bitbake/lib/toaster/toastergui/templates/base_specific.html b/bitbake/lib/toaster/toastergui/templates/base_specific.html
new file mode 100644
index 0000000..e377cad
--- /dev/null
+++ b/bitbake/lib/toaster/toastergui/templates/base_specific.html
@@ -0,0 +1,128 @@
+<!DOCTYPE html>
+{% load static %}
+{% load projecttags %}
+{% load project_url_tag %}
+<html lang="en">
+ <head>
+ <title>
+ {% block title %} Toaster {% endblock %}
+ </title>
+ <link rel="stylesheet" href="{% static 'css/bootstrap.min.css' %}" type="text/css"/>
+ <!--link rel="stylesheet" href="{% static 'css/bootstrap-theme.css' %}" type="text/css"/-->
+ <link rel="stylesheet" href="{% static 'css/font-awesome.min.css' %}" type='text/css'/>
+ <link rel="stylesheet" href="{% static 'css/default.css' %}" type='text/css'/>
+
+ <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+ <meta http-equiv="Content-Type" content="text/html;charset=UTF-8" />
+ <script src="{% static 'js/jquery-2.0.3.min.js' %}">
+ </script>
+ <script src="{% static 'js/jquery.cookie.js' %}">
+ </script>
+ <script src="{% static 'js/bootstrap.min.js' %}">
+ </script>
+ <script src="{% static 'js/typeahead.jquery.js' %}">
+ </script>
+ <script src="{% static 'js/jsrender.min.js' %}">
+ </script>
+ <script src="{% static 'js/highlight.pack.js' %}">
+ </script>
+ <script src="{% static 'js/libtoaster.js' %}">
+ </script>
+ {% if DEBUG %}
+ <script>
+ libtoaster.debug = true;
+ </script>
+ {% endif %}
+ <script>
+ /* Set JsRender delimiters (mrb_section.html) different than Django's */
+ $.views.settings.delimiters("<%", "%>");
+
+ /* This table allows Django substitutions to be passed to libtoaster.js */
+ libtoaster.ctx = {
+ jsUrl : "{% static 'js/' %}",
+ htmlUrl : "{% static 'html/' %}",
+ projectsUrl : "{% url 'all-projects' %}",
+ projectsTypeAheadUrl: {% url 'xhr_projectstypeahead' as prjurl%}{{prjurl|json}},
+ {% if project.id %}
+ landingSpecificURL : "{% url 'landing_specific' project.id %}",
+ landingSpecificCancelURL : "{% url 'landing_specific_cancel' project.id %}",
+ projectId : {{project.id}},
+ projectPageUrl : {% url 'project' project.id as purl %}{{purl|json}},
+ projectSpecificPageUrl : {% url 'project_specific' project.id as purl %}{{purl|json}},
+ xhrProjectUrl : {% url 'xhr_project' project.id as pxurl %}{{pxurl|json}},
+ projectName : {{project.name|json}},
+ recipesTypeAheadUrl: {% url 'xhr_recipestypeahead' project.id as paturl%}{{paturl|json}},
+ layersTypeAheadUrl: {% url 'xhr_layerstypeahead' project.id as paturl%}{{paturl|json}},
+ machinesTypeAheadUrl: {% url 'xhr_machinestypeahead' project.id as paturl%}{{paturl|json}},
+ distrosTypeAheadUrl: {% url 'xhr_distrostypeahead' project.id as paturl%}{{paturl|json}},
+ projectBuildsUrl: {% url 'projectbuilds' project.id as pburl %}{{pburl|json}},
+ xhrCustomRecipeUrl : "{% url 'xhr_customrecipe' %}",
+ projectId : {{project.id}},
+ xhrBuildRequestUrl: "{% url 'xhr_buildrequest' project.id %}",
+ mostRecentBuildsUrl: "{% url 'most_recent_builds' %}?project_id={{project.id}}",
+ xhrProjectUpdateUrl: "{% url 'xhr_projectupdate' project.id %}",
+ xhrProjectCancelUrl: "{% url 'landing_specific_cancel' project.id %}",
+ xhrSetDefaultImageUrl: "{% url 'xhr_setdefaultimage' project.id %}",
+ {% else %}
+ mostRecentBuildsUrl: "{% url 'most_recent_builds' %}",
+ projectId : undefined,
+ projectPageUrl : undefined,
+ projectName : undefined,
+ {% endif %}
+ };
+ </script>
+ {% block extraheadcontent %}
+ {% endblock %}
+ </head>
+
+ <body>
+
+ {% csrf_token %}
+ <div id="loading-notification" class="alert alert-warning lead text-center" style="display:none">
+ Loading <i class="fa-pulse icon-spinner"></i>
+ </div>
+
+ <div id="change-notification" class="alert alert-info alert-dismissible change-notification" style="display:none">
+ <button type="button" class="close" id="hide-alert" data-toggle="alert">×</button>
+ <span id="change-notification-msg"></span>
+ </div>
+
+ <nav class="navbar navbar-default navbar-fixed-top">
+ <div class="container-fluid">
+ <div class="navbar-header">
+ <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#global-nav" aria-expanded="false">
+ <span class="sr-only">Toggle navigation</span>
+ <span class="icon-bar"></span>
+ <span class="icon-bar"></span>
+ <span class="icon-bar"></span>
+ </button>
+ <div class="toaster-navbar-brand">
+ {% if project_specific %}
+ <img class="logo" src="{% static 'img/logo.png' %}" class="" alt="Yocto Project logo"/>
+ Toaster
+ {% else %}
+ <a href="/">
+ </a>
+ <a href="/">
+ <img class="logo" src="{% static 'img/logo.png' %}" class="" alt="Yocto Project logo"/>
+ </a>
+ <a class="brand" href="/">Toaster</a>
+ {% endif %}
+ {% if DEBUG %}
+ <span class="glyphicon glyphicon-info-sign" title="<strong>Toaster version information</strong>" data-content="<dl><dt>Git branch</dt><dd>{{TOASTER_BRANCH}}</dd><dt>Git revision</dt><dd>{{TOASTER_REVISION}}</dd></dl>"></i>
+ {% endif %}
+ </div>
+ </div>
+ <div class="collapse navbar-collapse" id="global-nav">
+ <ul class="nav navbar-nav">
+ <h3> Project Configuration Page </h3>
+ </ul>
+ </div>
+ </div>
+ </nav>
+
+ <div class="container-fluid">
+ {% block pagecontent %}
+ {% endblock %}
+ </div>
+ </body>
+</html>
diff --git a/bitbake/lib/toaster/toastergui/templates/baseprojectspecificpage.html b/bitbake/lib/toaster/toastergui/templates/baseprojectspecificpage.html
new file mode 100644
index 0000000..d0b588d
--- /dev/null
+++ b/bitbake/lib/toaster/toastergui/templates/baseprojectspecificpage.html
@@ -0,0 +1,48 @@
+{% extends "base_specific.html" %}
+
+{% load projecttags %}
+{% load humanize %}
+
+{% block title %} {{title}} - {{project.name}} - Toaster {% endblock %}
+
+{% block pagecontent %}
+
+<div class="row">
+ {% include "project_specific_topbar.html" %}
+ <script type="text/javascript">
+$(document).ready(function(){
+ $("#config-nav .nav li a").each(function(){
+ if (window.location.pathname === $(this).attr('href'))
+ $(this).parent().addClass('active');
+ else
+ $(this).parent().removeClass('active');
+ });
+
+ $("#topbar-configuration-tab").addClass("active")
+ });
+ </script>
+
+ <!-- only on config pages -->
+ <div id="config-nav" class="col-md-2">
+ <ul class="nav nav-pills nav-stacked">
+ <li><a class="nav-parent" href="{% url 'project' project.id %}">Configuration</a></li>
+ <li class="nav-header">Compatible metadata</li>
+ <li><a href="{% url 'projectcustomimages' project.id %}">Custom images</a></li>
+ <li><a href="{% url 'projectimagerecipes' project.id %}">Image recipes</a></li>
+ <li><a href="{% url 'projectsoftwarerecipes' project.id %}">Software recipes</a></li>
+ <li><a href="{% url 'projectmachines' project.id %}">Machines</a></li>
+ <li><a href="{% url 'projectlayers' project.id %}">Layers</a></li>
+ <li><a href="{% url 'projectdistros' project.id %}">Distros</a></li>
+ <li class="nav-header">Extra configuration</li>
+ <li><a href="{% url 'projectconf' project.id %}">BitBake variables</a></li>
+
+ <li class="nav-header">Actions</li>
+ </ul>
+ </div>
+ <div class="col-md-10">
+ {% block projectinfomain %}{% endblock %}
+ </div>
+
+</div>
+{% endblock %}
+
diff --git a/bitbake/lib/toaster/toastergui/templates/customise_btn.html b/bitbake/lib/toaster/toastergui/templates/customise_btn.html
index 38c258a..ce46240 100644
--- a/bitbake/lib/toaster/toastergui/templates/customise_btn.html
+++ b/bitbake/lib/toaster/toastergui/templates/customise_btn.html
@@ -5,7 +5,11 @@
>
Customise
</button>
-<button class="btn btn-default btn-block layer-add-{{data.layer_version.pk}} layerbtn" data-layer='{ "id": {{data.layer_version.pk}}, "name": "{{data.layer_version.layer.name}}", "layerdetailurl": "{%url 'layerdetails' extra.pid data.layer_version.pk%}"}' data-directive="add"
+<button class="btn btn-default btn-block layer-add-{{data.layer_version.pk}} layerbtn"
+ data-layer='{ "id": {{data.layer_version.pk}}, "name": "{{data.layer_version.layer.name}}",
+ "layerdetailurl": "{%url 'layerdetails' extra.pid data.layer_version.pk%}",
+ "xhrLayerUrl": "{% url "xhr_layer" extra.pid data.layer_version.pk %}"}'
+ data-directive="add"
{% if data.layer_version.pk in extra.current_layers %}
style="display:none;"
{% endif %}
diff --git a/bitbake/lib/toaster/toastergui/templates/generic-toastertable-page.html b/bitbake/lib/toaster/toastergui/templates/generic-toastertable-page.html
index b3eabe1..99fbb38 100644
--- a/bitbake/lib/toaster/toastergui/templates/generic-toastertable-page.html
+++ b/bitbake/lib/toaster/toastergui/templates/generic-toastertable-page.html
@@ -1,4 +1,4 @@
-{% extends "baseprojectpage.html" %}
+{% extends project_specific|yesno:"baseprojectspecificpage.html,baseprojectpage.html" %}
{% load projecttags %}
{% load humanize %}
{% load static %}
diff --git a/bitbake/lib/toaster/toastergui/templates/importlayer.html b/bitbake/lib/toaster/toastergui/templates/importlayer.html
index 97d52c7..e0c987e 100644
--- a/bitbake/lib/toaster/toastergui/templates/importlayer.html
+++ b/bitbake/lib/toaster/toastergui/templates/importlayer.html
@@ -1,4 +1,4 @@
-{% extends "base.html" %}
+{% extends project_specific|yesno:"baseprojectspecificpage.html,base.html" %}
{% load projecttags %}
{% load humanize %}
{% load static %}
@@ -6,7 +6,7 @@
{% block pagecontent %}
<div class="row">
- {% include "projecttopbar.html" %}
+ {% include project_specific|yesno:"project_specific_topbar.html,projecttopbar.html" %}
{% if project and project.release %}
<script src="{% static 'js/layerDepsModal.js' %}"></script>
<script src="{% static 'js/importlayer.js' %}"></script>
diff --git a/bitbake/lib/toaster/toastergui/templates/landing_specific.html b/bitbake/lib/toaster/toastergui/templates/landing_specific.html
new file mode 100644
index 0000000..e289c7d
--- /dev/null
+++ b/bitbake/lib/toaster/toastergui/templates/landing_specific.html
@@ -0,0 +1,50 @@
+{% extends "base_specific.html" %}
+
+{% load static %}
+{% load projecttags %}
+{% load humanize %}
+
+{% block title %} Welcome to Toaster {% endblock %}
+
+{% block pagecontent %}
+
+ <div class="container">
+ <div class="row">
+ <!-- Empty - no build module -->
+ <div class="page-header top-air">
+ <h1>
+ Configuration {% if status == "cancel" %}Canceled{% else %}Completed{% endif %}! You can now close this window.
+ </h1>
+ </div>
+ <div class="alert alert-info lead">
+ <p>
+ Your project configuration {% if status == "cancel" %}changes have been canceled{% else %}has been completed{% endif %}.
+ <br>
+ <br>
+ <ul>
+ <li>
+ The Toaster instance for project configuration has been shut down
+ </li>
+ <li>
+ You can start Toaster independently for advanced project management and analysis:
+ <pre><code>
+ Set up bitbake environment:
+ $ cd {{install_dir}}
+ $ . oe-init-build-env [toaster_server]
+
+ Option 1: Start a local Toaster server, open local browser to "localhost:8000"
+ $ . toaster start webport=8000
+
+ Option 2: Start a shared Toaster server, open any browser to "[host_ip]:8000"
+ $ . toaster start webport=0.0.0.0:8000
+
+ To stop the Toaster server:
+ $ . toaster stop
+ </code></pre>
+ </li>
+ </ul>
+ </p>
+ </div>
+ </div>
+ </div>
+
+{% endblock %}
diff --git a/bitbake/lib/toaster/toastergui/templates/layerdetails.html b/bitbake/lib/toaster/toastergui/templates/layerdetails.html
index e0069db..1e26e31 100644
--- a/bitbake/lib/toaster/toastergui/templates/layerdetails.html
+++ b/bitbake/lib/toaster/toastergui/templates/layerdetails.html
@@ -1,4 +1,4 @@
-{% extends "base.html" %}
+{% extends project_specific|yesno:"baseprojectspecificpage.html,base.html" %}
{% load projecttags %}
{% load humanize %}
{% load static %}
@@ -310,6 +310,7 @@
{% endwith %}
{% endwith %}
</div>
+
</div> <!-- end tab content -->
</div> <!-- end tabable -->
diff --git a/bitbake/lib/toaster/toastergui/templates/mrb_section.html b/bitbake/lib/toaster/toastergui/templates/mrb_section.html
index c5b9fe9..98d9fac 100644
--- a/bitbake/lib/toaster/toastergui/templates/mrb_section.html
+++ b/bitbake/lib/toaster/toastergui/templates/mrb_section.html
@@ -119,7 +119,7 @@
title="Toaster is cloning the repos required for your build">
</span>
- Cloning <span id="repos-cloned-percentage-<%:id%>"><%:repos_cloned_percentage%></span>% complete
+ Cloning <span id="repos-cloned-percentage-<%:id%>"><%:repos_cloned_percentage%></span>% complete <span id="repos-cloned-progressitem-<%:id%>">(<%:progress_item%>)</span>
<%include tmpl='#cancel-template'/%>
</div>
diff --git a/bitbake/lib/toaster/toastergui/templates/newcustomimage.html b/bitbake/lib/toaster/toastergui/templates/newcustomimage.html
index 980179a..0766e5e 100644
--- a/bitbake/lib/toaster/toastergui/templates/newcustomimage.html
+++ b/bitbake/lib/toaster/toastergui/templates/newcustomimage.html
@@ -1,4 +1,4 @@
-{% extends "base.html" %}
+{% extends project_specific|yesno:"baseprojectspecificpage.html,base.html" %}
{% load projecttags %}
{% load humanize %}
{% load static %}
@@ -8,7 +8,7 @@
<div class="row">
- {% include "projecttopbar.html" %}
+ {% include project_specific|yesno:"project_specific_topbar.html,projecttopbar.html" %}
<div class="col-md-12">
{% url table_name project.id as xhr_table_url %}
diff --git a/bitbake/lib/toaster/toastergui/templates/newproject.html b/bitbake/lib/toaster/toastergui/templates/newproject.html
index acb614e..7e1ebb3 100644
--- a/bitbake/lib/toaster/toastergui/templates/newproject.html
+++ b/bitbake/lib/toaster/toastergui/templates/newproject.html
@@ -20,23 +20,19 @@
<input type="text" class="form-control" required id="new-project-name" name="projectname">
</div>
<p class="help-block text-danger" style="display: none;" id="hint-error-project-name">A project with this name exists. Project names must be unique.</p>
-<!--
- <fieldset>
- <label class="project-form">Project type</label>
- <label class="project-form radio"><input type="radio" name="ptype" value="analysis" checked/> Analysis Project</label>
+ <label class="project-form">Project type:</label>
{% if releases.count > 0 %}
- <label class="project-form radio"><input type="radio" name="ptype" value="build" checked /> Build Project</label>
+ <label class="project-form radio" style="padding-left: 35px;"><input id='type-new' type="radio" name="ptype" value="new"/> New project</label>
{% endif %}
- </fieldset> -->
- <input type="hidden" name="ptype" value="build" />
+ <label class="project-form radio" style="padding-left: 35px;"><input id='type-import' type="radio" name="ptype" value="import"/> Import command line project</label>
{% if releases.count > 0 %}
- <div class="release form-group">
+ <div class="release form-group">
{% if releases.count > 1 %}
<label class="control-label">
Release
- <span class="glyphicon glyphicon-question-sign get-help" title="The version of the build system you want to use"></span>
+ <span class="glyphicon glyphicon-question-sign get-help" title="The version of the build system you want to use for this project"></span>
</label>
<select name="projectversion" id="projectversion" class="form-control">
{% for release in releases %}
@@ -54,33 +50,31 @@
<span class="help-block">{{release.helptext|safe}}</span>
</div>
{% endfor %}
+ </div>
+ </div>
{% else %}
<input type="hidden" name="projectversion" value="{{releases.0.id}}"/>
{% endif %}
- </div>
- </div>
- </fieldset>
+
+ <input type="checkbox" class="checkbox-mergeattr" name="mergeattr" value="mergeattr"> Merged Toaster settings (Command line user compatibility)
+ <span class="glyphicon glyphicon-question-sign get-help" title="Place the Toaster settings into the standard 'local.conf' and 'bblayers.conf' instead of 'toaster_bblayers.conf' and 'toaster.conf'"></span>
+
+ </div>
{% endif %}
+
+ <div class="build-import form-group" id="import-project">
+ <label class="control-label">Import existing project directory
+ <span class="glyphicon glyphicon-question-sign get-help" title="Enter a path to an existing build directory, import the existing settings, and create a Toaster Project for it."></span>
+ </label>
+ <input style="width: 33%;"type="text" class="form-control" required id="import-project-dir" name="importdir">
+ </div>
+
<div class="top-air">
<input type="submit" id="create-project-button" class="btn btn-primary btn-lg" value="Create project"/>
<span class="help-inline" style="vertical-align:middle;">To create a project, you need to enter a project name</span>
</div>
</form>
- <!--
- <div class="col-md-5 well">
- <span class="help-block">
- <h4>Toaster project types</h4>
- <p>With a <strong>build project</strong> you configure and run your builds from Toaster.</p>
- <p>With an <strong>analysis project</strong>, the builds are configured and run by another tool
- (something like Buildbot or Jenkins), and the project only collects the information about the
- builds (packages, recipes, dependencies, logs, etc). </p>
- <p>You can read more on <a href="#">how to set up an analysis project</a>
- in the Toaster manual.</p>
- <h4>Release</h4>
- <p>If you create a <strong>build project</strong>, you will need to select a <strong>release</strong>,
- which is the version of the build system you want to use to run your builds.</p>
- </div> -->
</div>
</div>
@@ -89,6 +83,7 @@
// hide the new project button
$("#new-project-button").hide();
$('.btn-primary').attr('disabled', 'disabled');
+ $('#type-new').attr('checked', 'checked');
// enable submit button when all required fields are populated
$("input#new-project-name").on('input', function() {
@@ -118,20 +113,24 @@
$(".btn-primary"));
-/* // Hide the project release when you select an analysis project
+ // Hide the project release fields when you select an import project
function projectType() {
- if ($("input[type='radio']:checked").val() == 'build') {
+ if ($("input[type='radio']:checked").val() == 'new') {
+ $('.build-import').fadeOut();
$('.release').fadeIn();
+ $('#import-project-dir').removeAttr('required');
}
else {
$('.release').fadeOut();
+ $('.build-import').fadeIn();
+ $('#import-project-dir').attr('required', 'required');
}
}
projectType();
$('input:radio').change(function(){
projectType();
- }); */
+ });
});
</script>
diff --git a/bitbake/lib/toaster/toastergui/templates/newproject_specific.html b/bitbake/lib/toaster/toastergui/templates/newproject_specific.html
new file mode 100644
index 0000000..cfa77f2
--- /dev/null
+++ b/bitbake/lib/toaster/toastergui/templates/newproject_specific.html
@@ -0,0 +1,95 @@
+{% extends "base.html" %}
+{% load projecttags %}
+{% load humanize %}
+
+{% block title %} Create a new project - Toaster {% endblock %}
+
+{% block pagecontent %}
+<div class="row">
+ <div class="col-md-12">
+ <div class="page-header">
+ <h1>Create a new project</h1>
+ </div>
+ {% if alert %}
+ <div class="alert alert-danger" role="alert">{{alert}}</div>
+ {% endif %}
+
+ <form method="POST" action="{%url "newproject_specific" project_pk %}">{% csrf_token %}
+ <div class="form-group" id="validate-project-name">
+ <label class="control-label">Project name <span class="text-muted">(required)</span></label>
+ <input type="text" class="form-control" required id="new-project-name" name="display_projectname" value="{{projectname}}" disabled>
+ </div>
+ <p class="help-block text-danger" style="display: none;" id="hint-error-project-name">A project with this name exists. Project names must be unique.</p>
+ <input type="hidden" name="ptype" value="build" />
+ <input type="hidden" name="projectname" value="{{projectname}}" />
+
+ {% if releases.count > 0 %}
+ <div class="release form-group">
+ {% if releases.count > 1 %}
+ <label class="control-label">
+ Release
+ <span class="glyphicon glyphicon-question-sign get-help" title="The version of the build system you want to use"></span>
+ </label>
+ <select name="projectversion" id="projectversion" class="form-control">
+ {% for release in releases %}
+ <option value="{{release.id}}"
+ {%if defaultbranch == release.name %}
+ selected
+ {%endif%}
+ >{{release.description}}</option>
+ {% endfor %}
+ </select>
+ <div class="row">
+ <div class="col-md-4">
+ {% for release in releases %}
+ <div class="helptext" id="description-{{release.id}}" style="display: none">
+ <span class="help-block">{{release.helptext|safe}}</span>
+ </div>
+ {% endfor %}
+ {% else %}
+ <input type="hidden" name="projectversion" value="{{releases.0.id}}"/>
+ {% endif %}
+ </div>
+ </div>
+ {% endif %}
+ <div class="top-air">
+ <input type="submit" id="create-project-button" class="btn btn-primary btn-lg" value="Create project"/>
+ <span class="help-inline" style="vertical-align:middle;">To create a project, you need to specify the release</span>
+ </div>
+
+ </form>
+ </div>
+ </div>
+
+ <script type="text/javascript">
+ $(document).ready(function () {
+ // hide the new project button, name is preset
+ $("#new-project-button").hide();
+
+ // enable submit button when all required fields are populated
+ $("input#new-project-name").on('input', function() {
+ if ($("input#new-project-name").val().length > 0 ){
+ $('.btn-primary').removeAttr('disabled');
+ $(".help-inline").css('visibility','hidden');
+ }
+ else {
+ $('.btn-primary').attr('disabled', 'disabled');
+ $(".help-inline").css('visibility','visible');
+ }
+ });
+
+ // show relevant help text for the selected release
+ var selected_release = $('select').val();
+ $("#description-" + selected_release).show();
+
+ $('select').change(function(){
+ var new_release = $('select').val();
+ $(".helptext").hide();
+ $('#description-' + new_release).fadeIn();
+ });
+
+ });
+ </script>
+
+{% endblock %}
diff --git a/bitbake/lib/toaster/toastergui/templates/project.html b/bitbake/lib/toaster/toastergui/templates/project.html
index 11603d1..fa41e3c 100644
--- a/bitbake/lib/toaster/toastergui/templates/project.html
+++ b/bitbake/lib/toaster/toastergui/templates/project.html
@@ -1,4 +1,4 @@
-{% extends "baseprojectpage.html" %}
+{% extends project_specific|yesno:"baseprojectspecificpage.html,baseprojectpage.html" %}
{% load projecttags %}
{% load humanize %}
@@ -18,7 +18,7 @@
try {
projectPageInit(ctx);
} catch (e) {
- document.write("Sorry, An error has occurred loading this page");
+ document.write("Sorry, An error has occurred loading this page (project):"+e);
console.warn(e);
}
});
@@ -93,6 +93,7 @@
</form>
</div>
+ {% if not project_specific %}
<div class="well well-transparent">
<h3>Most built recipes</h3>
@@ -105,6 +106,7 @@
</ul>
<button class="btn btn-primary" id="freq-build-btn" disabled="disabled">Build selected recipes</button>
</div>
+ {% endif %}
<div class="well well-transparent">
<h3>Project release</h3>
@@ -157,5 +159,6 @@
<ul class="list-unstyled lead" id="layers-in-project-list">
</ul>
</div>
+
</div>
{% endblock %}
diff --git a/bitbake/lib/toaster/toastergui/templates/project_specific.html b/bitbake/lib/toaster/toastergui/templates/project_specific.html
new file mode 100644
index 0000000..f625d18
--- /dev/null
+++ b/bitbake/lib/toaster/toastergui/templates/project_specific.html
@@ -0,0 +1,162 @@
+{% extends "baseprojectspecificpage.html" %}
+
+{% load projecttags %}
+{% load humanize %}
+{% load static %}
+
+{% block title %} Configuration - {{project.name}} - Toaster {% endblock %}
+{% block projectinfomain %}
+
+<script src="{% static 'js/layerDepsModal.js' %}"></script>
+<script src="{% static 'js/projectpage.js' %}"></script>
+<script>
+ $(document).ready(function (){
+ var ctx = {
+ testReleaseChangeUrl: "{% url 'xhr_testreleasechange' project.id %}",
+ };
+
+ try {
+ projectPageInit(ctx);
+ } catch (e) {
+ document.write("Sorry, An error has occurred loading this page");
+ console.warn(e);
+ }
+ });
+</script>
+
+<div id="delete-project-modal" class="modal fade" tabindex="-1" role="dialog" data-backdrop="static" data-keyboard="false">
+ <div class="modal-dialog">
+ <div class="modal-content">
+ <div class="modal-header">
+ <h4>Are you sure you want to delete this project?</h4>
+ </div>
+ <div class="modal-body">
+ <p>Deleting the <strong class="project-name"></strong> project
+ will:</p>
+ <ul>
+ <li>Cancel its builds currently in progress</li>
+ <li>Remove its configuration information</li>
+ <li>Remove its imported layers</li>
+ <li>Remove its custom images</li>
+ <li>Remove all its build information</li>
+ </ul>
+ </div>
+ <div class="modal-footer">
+ <button type="button" class="btn btn-primary" id="delete-project-confirmed">
+ <span data-role="submit-state">Delete project</span>
+ <span data-role="loading-state" style="display:none">
+ <span class="fa-pulse">
+ <i class="fa-pulse icon-spinner"></i>
+ </span>
+ Deleting project...
+ </span>
+ </button>
+ <button type="button" class="btn btn-link" data-dismiss="modal">Cancel</button>
+ </div>
+ </div><!-- /.modal-content -->
+ </div><!-- /.modal-dialog -->
+</div>
+
+
+<div class="row" id="project-page" style="display:none">
+ <div class="col-md-6">
+ <div class="well well-transparent" id="machine-section">
+ <h3>Machine</h3>
+
+ <p class="lead"><span id="project-machine-name"></span> <span class="glyphicon glyphicon-edit" id="change-machine-toggle"></span></p>
+
+ <form id="select-machine-form" style="display:none;" class="form-inline">
+ <span class="help-block">Machine suggestions come from the list of layers added to your project. If you don't see the machine you are looking for, <a href="{% url 'projectmachines' project.id %}">check the full list of machines</a></span>
+ <div class="form-group" id="machine-input-form">
+ <input class="form-control" id="machine-change-input" autocomplete="off" value="" data-provide="typeahead" data-minlength="1" data-autocomplete="off" type="text">
+ </div>
+ <button id="machine-change-btn" class="btn btn-default" type="button">Save</button>
+ <a href="#" id="cancel-machine-change" class="btn btn-link">Cancel</a>
+ <span class="help-block text-danger" id="invalid-machine-name-help" style="display:none">A valid machine name cannot include spaces.</span>
+ <p class="form-link"><a href="{% url 'projectmachines' project.id %}">View compatible machines</a></p>
+ </form>
+ </div>
+
+ <div class="well well-transparent" id="distro-section">
+ <h3>Distro</h3>
+
+ <p class="lead"><span id="project-distro-name"></span> <span class="glyphicon glyphicon-edit" id="change-distro-toggle"></span></p>
+
+ <form id="select-distro-form" style="display:none;" class="form-inline">
+ <span class="help-block">Distro suggestions come from the Layer Index</a></span>
+ <div class="form-group">
+ <input class="form-control" id="distro-change-input" autocomplete="off" value="" data-provide="typeahead" data-minlength="1" data-autocomplete="off" type="text">
+ </div>
+ <button id="distro-change-btn" class="btn btn-default" type="button">Save</button>
+ <a href="#" id="cancel-distro-change" class="btn btn-link">Cancel</a>
+ <p class="form-link"><a href="{% url 'projectdistros' project.id %}">View compatible distros</a></p>
+ </form>
+ </div>
+
+ <div class="well well-transparent">
+ <h3>Most built recipes</h3>
+
+ <div class="alert alert-info" style="display:none" id="no-most-built">
+ <h4>You haven't built any recipes yet</h4>
+ <p class="form-link"><a href="{% url 'projectimagerecipes' project.id %}">Choose a recipe to build</a></p>
+ </div>
+
+ <ul class="list-unstyled lead" id="freq-build-list">
+ </ul>
+ <button class="btn btn-primary" id="freq-build-btn" disabled="disabled">Build selected recipes</button>
+ </div>
+
+ <div class="well well-transparent">
+ <h3>Project release</h3>
+
+ <p class="lead"><span id="project-release-title"></span>
+
+ <!-- Comment out the ability to change the project release, until we decide what to do with this functionality -->
+
+ <!--i title="" data-original-title="" id="release-change-toggle" class="icon-pencil"></i-->
+ </p>
+
+ <!-- Comment out the ability to change the project release, until we decide what to do with this functionality -->
+
+ <!--form class="form-inline" id="change-release-form" style="display:none;">
+ <select></select>
+ <button class="btn" style="margin-left:5px;" id="change-release-btn">Change</button> <a href="#" id="cancel-release-change" class="btn btn-link">Cancel</a>
+ </form-->
+ </div>
+ </div>
+
+ <div class="col-md-6">
+ <div class="well well-transparent" id="layer-container">
+ <h3>Layers <span class="counter">(<span id="project-layers-count"></span>)</span>
+ <span title="OpenEmbedded organises recipes and machines into thematic groups called <strong>layers</strong>. Click on a layer name to see the recipes and machines it includes." class="glyphicon glyphicon-question-sign get-help"></span>
+ </h3>
+
+ <div class="alert alert-warning" id="no-layers-in-project" style="display:none">
+ <h4>This project has no layers</h4>
+ In order to build this project you need to add some layers first. For that you can:
+ <ul>
+ <li><a href="{% url 'projectlayers' project.id %}">Choose from the layers compatible with this project</a></li>
+ <li><a href="{% url 'importlayer' project.id %}">Import a layer</a></li>
+ <li><a href="http://www.yoctoproject.org/docs/current/dev-manual/dev-manual.html#understanding-and-creating-layers" target="_blank">Read about layers in the documentation</a></li>
+ <li>Or type a layer name below</li>
+ </ul>
+ </div>
+
+ <form class="form-inline">
+ <div class="form-group">
+ <input id="layer-add-input" class="form-control" autocomplete="off" placeholder="Type a layer name" data-minlength="1" data-autocomplete="off" data-provide="typeahead" data-source="" type="text">
+ </div>
+ <button id="add-layer-btn" class="btn btn-default" disabled>Add layer</button>
+ <p class="form-link">
+ <a href="{% url 'projectlayers' project.id %}" id="view-compatible-layers">View compatible layers</a>
+ <span class="text-muted">|</span>
+ <a href="{% url 'importlayer' project.id %}">Import layer</a>
+ </p>
+ </form>
+
+ <ul class="list-unstyled lead" id="layers-in-project-list">
+ </ul>
+ </div>
+
+</div>
+{% endblock %}
diff --git a/bitbake/lib/toaster/toastergui/templates/project_specific_topbar.html b/bitbake/lib/toaster/toastergui/templates/project_specific_topbar.html
new file mode 100644
index 0000000..622787c
--- /dev/null
+++ b/bitbake/lib/toaster/toastergui/templates/project_specific_topbar.html
@@ -0,0 +1,80 @@
+{% load static %}
+<script src="{% static 'js/projecttopbar.js' %}"></script>
+<script>
+ $(document).ready(function () {
+ var ctx = {
+ numProjectLayers : {{project.get_project_layer_versions.count}},
+ machine : "{{project.get_current_machine_name|default_if_none:""}}",
+ }
+
+ try {
+ projectTopBarInit(ctx);
+ } catch (e) {
+ document.write("Sorry, An error has occurred loading this page (pstb):"+e);
+ console.warn(e);
+ }
+ });
+</script>
+
+<div class="col-md-12">
+ <div class="alert alert-success alert-dismissible change-notification" id="project-created-notification" style="display:none">
+ <button type="button" class="close" data-dismiss="alert">×</button>
+ <p>Your project <strong>{{project.name}}</strong> has been created. You can now <a class="alert-link" href="{% url 'projectmachines' project.id %}">select your target machine</a> and <a class="alert-link" href="{% url 'projectimagerecipes' project.id %}">choose image recipes</a> to build.</p>
+ </div>
+ <!-- project name -->
+ <div class="page-header">
+ <h1 id="project-name-container">
+ <span class="project-name">{{project.name}}</span>
+ {% if project.is_default %}
+ <span class="glyphicon glyphicon-question-sign get-help" title="This project shows information about the builds you start from the command line while Toaster is running"></span>
+ {% endif %}
+ </h1>
+ <form id="project-name-change-form" class="form-inline" style="display: none;">
+ <div class="form-group">
+ <input class="form-control input-lg" type="text" id="project-name-change-input" autocomplete="off" value="{{project.name}}">
+ </div>
+ <button id="project-name-change-btn" class="btn btn-default btn-lg" type="button">Save</button>
+ <a href="#" id="project-name-change-cancel" class="btn btn-lg btn-link">Cancel</a>
+ </form>
+ </div>
+
+ {% with mrb_type='project' %}
+ {% include "mrb_section.html" %}
+ {% endwith %}
+
+ {% if not project.is_default %}
+ <div id="project-topbar">
+ <ul class="nav nav-tabs">
+ <li id="topbar-configuration-tab">
+ <a href="{% url 'project_specific' project.id %}">
+ Configuration
+ </a>
+ </li>
+ <li>
+ <a href="{% url 'importlayer' project.id %}">
+ Import layer
+ </a>
+ </li>
+ <li>
+ <a href="{% url 'newcustomimage' project.id %}">
+ New custom image
+ </a>
+ </li>
+ <li class="pull-right">
+ <form class="form-inline">
+ <div class="form-group">
+ <span class="glyphicon glyphicon-question-sign get-help" data-placement="left" title="Type the name of one or more recipes you want to build, separated by a space. You can also specify a task by appending a colon and a task name to the recipe name, like so: <code>busybox:clean</code>"></span>
+ <input id="build-input" type="text" class="form-control input-lg" placeholder="Select the default image recipe" autocomplete="off" disabled value="{{project.get_default_image}}">
+ </div>
+ {% if project.get_is_new %}
+ <button id="update-project-button" class="btn btn-primary btn-lg" data-project-id="{{project.id}}">Prepare Project</button>
+ {% else %}
+ <button id="cancel-project-button" class="btn info btn-lg" data-project-id="{{project.id}}">Cancel</button>
+ <button id="update-project-button" class="btn btn-primary btn-lg" data-project-id="{{project.id}}">Update</button>
+ {% endif %}
+ </form>
+ </li>
+ </ul>
+ </div>
+ {% endif %}
+</div>
diff --git a/bitbake/lib/toaster/toastergui/templates/projectconf.html b/bitbake/lib/toaster/toastergui/templates/projectconf.html
index 933c588..fb20b26 100644
--- a/bitbake/lib/toaster/toastergui/templates/projectconf.html
+++ b/bitbake/lib/toaster/toastergui/templates/projectconf.html
@@ -1,4 +1,4 @@
-{% extends "baseprojectpage.html" %}
+{% extends project_specific|yesno:"baseprojectspecificpage.html,baseprojectpage.html" %}
{% load projecttags %}
{% load humanize %}
@@ -438,8 +438,11 @@ function onEditPageUpdate(data) {
var_context='m';
}
}
+ if (configvars_sorted[i][0].startsWith("INTERNAL_")) {
+ var_context='m';
+ }
if (var_context == undefined) {
- orightml += '<dt><span id="config_var_entry_'+configvars_sorted[i][2]+'" class="js-config-var-name"></span><span class="glyphicon glyphicon-trash js-icon-trash-config_var" id="config_var_trash_'+configvars_sorted[i][2]+'" x-data="'+configvars_sorted[i][2]+'"></span> </dt>'
+ orightml += '<dt><span id="config_var_entry_'+configvars_sorted[i][2]+'" class="js-config-var-name"></span><span class="glyphicon glyphicon-trash js-icon-trash-config_var" id="config_var_trash_'+configvars_sorted[i][2]+'" x-data="'+configvars_sorted[i][2]+'"></span> </dt>'
orightml += '<dd class="variable-list">'
orightml += ' <span class="lead" id="config_var_value_'+configvars_sorted[i][2]+'"></span>'
orightml += ' <span class="glyphicon glyphicon-edit js-icon-pencil-config_var" x-data="'+configvars_sorted[i][2]+'"></span>'
diff --git a/bitbake/lib/toaster/toastergui/templates/recipe.html b/bitbake/lib/toaster/toastergui/templates/recipe.html
index bf2cd71..3f76e65 100644
--- a/bitbake/lib/toaster/toastergui/templates/recipe.html
+++ b/bitbake/lib/toaster/toastergui/templates/recipe.html
@@ -176,7 +176,7 @@
<td>{{task.get_executed_display}}</td>
<td>{{task.get_outcome_display}}
- {% if task.outcome = task.OUTCOME_FAILED %}
+ {% if task.outcome == task.OUTCOME_FAILED %}
<a href="{% url 'build_artifact' build.pk "tasklogfile" task.pk %}">
<span class="glyphicon glyphicon-download-alt
get-help" title="Download task log
diff --git a/bitbake/lib/toaster/toastergui/templates/recipe_add_btn.html b/bitbake/lib/toaster/toastergui/templates/recipe_add_btn.html
new file mode 100644
index 0000000..06c4645
--- /dev/null
+++ b/bitbake/lib/toaster/toastergui/templates/recipe_add_btn.html
@@ -0,0 +1,23 @@
+<a data-recipe-name="{{data.name}}" class="btn btn-default btn-block layer-exists-{{data.layer_version.pk}} set-default-recipe-btn" style="margin-top: 5px;
+ {% if data.layer_version.pk not in extra.current_layers %}
+ display:none;
+ {% endif %}"
+ >
+ Set recipe
+</a>
+<a class="btn btn-default btn-block layerbtn layer-add-{{data.layer_version.pk}}"
+ data-layer='{
+ "id": {{data.layer_version.pk}},
+ "name": "{{data.layer_version.layer.name}}",
+ "layerdetailurl": "{%url "layerdetails" extra.pid data.layer_version.pk%}",
+ "xhrLayerUrl": "{% url "xhr_layer" extra.pid data.layer_version.pk %}"
+ }' data-directive="add"
+ {% if data.layer_version.pk in extra.current_layers %}
+ style="display:none;"
+ {% endif %}
+>
+ <span class="glyphicon glyphicon-plus"></span>
+ Add layer
+ <span class="glyphicon glyphicon-question-sign get-help" title="To set this
+ recipe you must first add the {{data.layer_version.layer.name}} layer to your project"></span>
+</a>
diff --git a/bitbake/lib/toaster/toastergui/urls.py b/bitbake/lib/toaster/toastergui/urls.py
index e07b0ef..dc03e30 100644
--- a/bitbake/lib/toaster/toastergui/urls.py
+++ b/bitbake/lib/toaster/toastergui/urls.py
@@ -116,6 +116,11 @@ urlpatterns = [
tables.ProjectBuildsTable.as_view(template_name="projectbuilds-toastertable.html"),
name='projectbuilds'),
+ url(r'^newproject_specific/(?P<pid>\d+)/$', views.newproject_specific, name='newproject_specific'),
+ url(r'^project_specific/(?P<pid>\d+)/$', views.project_specific, name='project_specific'),
+ url(r'^landing_specific/(?P<pid>\d+)/$', views.landing_specific, name='landing_specific'),
+ url(r'^landing_specific_cancel/(?P<pid>\d+)/$', views.landing_specific_cancel, name='landing_specific_cancel'),
+
# the import layer is a project-specific functionality;
url(r'^project/(?P<pid>\d+)/importlayer$', views.importlayer, name='importlayer'),
@@ -233,6 +238,14 @@ urlpatterns = [
api.XhrBuildRequest.as_view(),
name='xhr_buildrequest'),
+ url(r'^xhr_projectupdate/project/(?P<pid>\d+)$',
+ api.XhrProjectUpdate.as_view(),
+ name='xhr_projectupdate'),
+
+ url(r'^xhr_setdefaultimage/project/(?P<pid>\d+)$',
+ api.XhrSetDefaultImageUrl.as_view(),
+ name='xhr_setdefaultimage'),
+
url(r'xhr_project/(?P<project_id>\d+)$',
api.XhrProject.as_view(),
name='xhr_project'),
diff --git a/bitbake/lib/toaster/toastergui/views.py b/bitbake/lib/toaster/toastergui/views.py
old mode 100755
new mode 100644
index 34ed2b2..c712b06
--- a/bitbake/lib/toaster/toastergui/views.py
+++ b/bitbake/lib/toaster/toastergui/views.py
@@ -25,6 +25,7 @@ import re
from django.db.models import F, Q, Sum
from django.db import IntegrityError
from django.shortcuts import render, redirect, get_object_or_404
+from django.utils.http import urlencode
from orm.models import Build, Target, Task, Layer, Layer_Version, Recipe
from orm.models import LogMessage, Variable, Package_Dependency, Package
from orm.models import Task_Dependency, Package_File
@@ -51,6 +52,7 @@ logger = logging.getLogger("toaster")
# Project creation and managed build enable
project_enable = ('1' == os.environ.get('TOASTER_BUILDSERVER'))
+is_project_specific = ('1' == os.environ.get('TOASTER_PROJECTSPECIFIC'))
class MimeTypeFinder(object):
# setting this to False enables additional non-standard mimetypes
@@ -70,6 +72,7 @@ class MimeTypeFinder(object):
# single point to add global values into the context before rendering
def toaster_render(request, page, context):
context['project_enable'] = project_enable
+ context['project_specific'] = is_project_specific
return render(request, page, context)
@@ -1395,6 +1398,86 @@ if True:
mandatory_fields = ['projectname', 'ptype']
try:
ptype = request.POST.get('ptype')
+ if ptype == "import":
+ mandatory_fields.append('importdir')
+ else:
+ mandatory_fields.append('projectversion')
+ # make sure we have values for all mandatory_fields
+ missing = [field for field in mandatory_fields if len(request.POST.get(field, '')) == 0]
+ if missing:
+ # set alert for missing fields
+ raise BadParameterException("Fields missing: %s" % ", ".join(missing))
+
+ if not request.user.is_authenticated():
+ user = authenticate(username = request.POST.get('username', '_anonuser'), password = 'nopass')
+ if user is None:
+ user = User.objects.create_user(username = request.POST.get('username', '_anonuser'), email = request.POST.get('email', ''), password = "nopass")
+
+ user = authenticate(username = user.username, password = 'nopass')
+ login(request, user)
+
+ # save the project
+ if ptype == "import":
+ if not os.path.isdir('%s/conf' % request.POST['importdir']):
+ raise BadParameterException("Bad path or missing 'conf' directory (%s)" % request.POST['importdir'])
+ from django.core import management
+ management.call_command('buildimport', '--command=import', '--name=%s' % request.POST['projectname'], '--path=%s' % request.POST['importdir'], interactive=False)
+ prj = Project.objects.get(name = request.POST['projectname'])
+ prj.merged_attr = True
+ prj.save()
+ else:
+ release = Release.objects.get(pk = request.POST.get('projectversion', None ))
+ prj = Project.objects.create_project(name = request.POST['projectname'], release = release)
+ prj.user_id = request.user.pk
+ if 'mergeattr' == request.POST.get('mergeattr', ''):
+ prj.merged_attr = True
+ prj.save()
+
+ return redirect(reverse(project, args=(prj.pk,)) + "?notify=new-project")
+
+ except (IntegrityError, BadParameterException) as e:
+ # fill in page with previously submitted values
+ for field in mandatory_fields:
+ context[field] = request.POST.get(field, "-- missing")
+ if isinstance(e, IntegrityError) and "username" in str(e):
+ context['alert'] = "Your chosen username is already used"
+ else:
+ context['alert'] = str(e)
+ return toaster_render(request, template, context)
+
+ raise Exception("Invalid HTTP method for this page")
+
+ # new project
+ def newproject_specific(request, pid):
+ if not project_enable:
+ return redirect( landing )
+
+ project = Project.objects.get(pk=pid)
+ template = "newproject_specific.html"
+ context = {
+ 'email': request.user.email if request.user.is_authenticated() else '',
+ 'username': request.user.username if request.user.is_authenticated() else '',
+ 'releases': Release.objects.order_by("description"),
+ 'projectname': project.name,
+ 'project_pk': project.pk,
+ }
+
+ # WORKAROUND: if we already know release, redirect 'newproject_specific' to 'project_specific'
+ if '1' == project.get_variable('INTERNAL_PROJECT_SPECIFIC_SKIPRELEASE'):
+ return redirect(reverse(project_specific, args=(project.pk,)))
+
+ try:
+ context['defaultbranch'] = ToasterSetting.objects.get(name = "DEFAULT_RELEASE").value
+ except ToasterSetting.DoesNotExist:
+ pass
+
+ if request.method == "GET":
+ # render new project page
+ return toaster_render(request, template, context)
+ elif request.method == "POST":
+ mandatory_fields = ['projectname', 'ptype']
+ try:
+ ptype = request.POST.get('ptype')
if ptype == "build":
mandatory_fields.append('projectversion')
# make sure we have values for all mandatory_fields
@@ -1417,10 +1500,10 @@ if True:
else:
release = Release.objects.get(pk = request.POST.get('projectversion', None ))
- prj = Project.objects.create_project(name = request.POST['projectname'], release = release)
+ prj = Project.objects.create_project(name = request.POST['projectname'], release = release, existing_project = project)
prj.user_id = request.user.pk
prj.save()
- return redirect(reverse(project, args=(prj.pk,)) + "?notify=new-project")
+ return redirect(reverse(project_specific, args=(prj.pk,)) + "?notify=new-project")
except (IntegrityError, BadParameterException) as e:
# fill in page with previously submitted values
@@ -1437,9 +1520,87 @@ if True:
# Shows the edit project page
def project(request, pid):
project = Project.objects.get(pk=pid)
+
+ if '1' == os.environ.get('TOASTER_PROJECTSPECIFIC'):
+ if request.GET:
+ #Example:request.GET=<QueryDict: {'setMachine': ['qemuarm']}>
+ params = urlencode(request.GET).replace('%5B%27','').replace('%27%5D','')
+ return redirect("%s?%s" % (reverse(project_specific, args=(project.pk,)),params))
+ else:
+ return redirect(reverse(project_specific, args=(project.pk,)))
context = {"project": project}
return toaster_render(request, "project.html", context)
+ # Shows the edit project-specific page
+ def project_specific(request, pid):
+ project = Project.objects.get(pk=pid)
+
+ # Are we refreshing from a successful project specific update clone?
+ if Project.PROJECT_SPECIFIC_CLONING_SUCCESS == project.get_variable(Project.PROJECT_SPECIFIC_STATUS):
+ return redirect(reverse(landing_specific,args=(project.pk,)))
+
+ context = {
+ "project": project,
+ "is_new" : project.get_variable(Project.PROJECT_SPECIFIC_ISNEW),
+ "default_image_recipe" : project.get_variable(Project.PROJECT_SPECIFIC_DEFAULTIMAGE),
+ "mru" : Build.objects.all().filter(project=project,outcome=Build.IN_PROGRESS),
+ }
+ if project.build_set.filter(outcome=Build.IN_PROGRESS).count() > 0:
+ context['build_in_progress_none_completed'] = True
+ else:
+ context['build_in_progress_none_completed'] = False
+ return toaster_render(request, "project.html", context)
+
+ # perform the final actions for the project specific page
+ def project_specific_finalize(cmnd, pid):
+ project = Project.objects.get(pk=pid)
+ callback = project.get_variable(Project.PROJECT_SPECIFIC_CALLBACK)
+ if "update" == cmnd:
+ # Delete all '_PROJECT_PREPARE_' builds
+ for b in Build.objects.all().filter(project=project):
+ delete_build = False
+ for t in b.target_set.all():
+ if '_PROJECT_PREPARE_' == t.target:
+ delete_build = True
+ if delete_build:
+ from django.core import management
+ management.call_command('builddelete', str(b.id), interactive=False)
+ # perform callback at this last moment if defined, in case Toaster gets shutdown next
+ default_target = project.get_variable(Project.PROJECT_SPECIFIC_DEFAULTIMAGE)
+ if callback:
+ callback = callback.replace("<IMAGE>",default_target)
+ if "cancel" == cmnd:
+ if callback:
+ callback = callback.replace("<IMAGE>","none")
+ callback = callback.replace("--update","--cancel")
+ # perform callback at this last moment if defined, in case this Toaster gets shutdown next
+ ret = ''
+ if callback:
+ ret = os.system('bash -c "%s"' % callback)
+ project.set_variable(Project.PROJECT_SPECIFIC_CALLBACK,'')
+ # Delete the temp project specific variables
+ project.set_variable(Project.PROJECT_SPECIFIC_ISNEW,'')
+ project.set_variable(Project.PROJECT_SPECIFIC_STATUS,Project.PROJECT_SPECIFIC_NONE)
+ # WORKAROUND: Release this workaround flag
+ project.set_variable('INTERNAL_PROJECT_SPECIFIC_SKIPRELEASE','')
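+ # Illustrative callback value (editor's sketch, not defined by this patch):
+ #   PROJECT_SPECIFIC_CALLBACK = "finalize-project.sh --update --image <IMAGE>"
+ # On "update" the "<IMAGE>" token is replaced with the default image recipe;
+ # on "cancel" it becomes "none" and "--update" is rewritten to "--cancel".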
+
+ # Shows the final landing page for project specific update
+ def landing_specific(request, pid):
+ project_specific_finalize("update", pid)
+ context = {
+ "install_dir": os.environ['TOASTER_DIR'],
+ }
+ return toaster_render(request, "landing_specific.html", context)
+
+ # Shows the related landing-specific page
+ def landing_specific_cancel(request, pid):
+ project_specific_finalize("cancel", pid)
+ context = {
+ "install_dir": os.environ['TOASTER_DIR'],
+ "status": "cancel",
+ }
+ return toaster_render(request, "landing_specific.html", context)
+
def jsunittests(request):
""" Provides a page for the js unit tests """
bbv = BitbakeVersion.objects.filter(branch="master").first()
diff --git a/bitbake/lib/toaster/toastergui/widgets.py b/bitbake/lib/toaster/toastergui/widgets.py
index a1792d9..db5c3aa 100644
--- a/bitbake/lib/toaster/toastergui/widgets.py
+++ b/bitbake/lib/toaster/toastergui/widgets.py
@@ -89,6 +89,10 @@ class ToasterTable(TemplateView):
# global variables
context['project_enable'] = ('1' == os.environ.get('TOASTER_BUILDSERVER'))
+ try:
+ context['project_specific'] = ('1' == os.environ.get('TOASTER_PROJECTSPECIFIC'))
+ except:
+ context['project_specific'] = ''
return context
@@ -511,13 +515,20 @@ class MostRecentBuildsView(View):
buildrequest_id = build_obj.buildrequest.pk
build['buildrequest_id'] = buildrequest_id
- build['recipes_parsed_percentage'] = \
- int((build_obj.recipes_parsed /
- build_obj.recipes_to_parse) * 100)
+ if build_obj.recipes_to_parse > 0:
+ build['recipes_parsed_percentage'] = \
+ int((build_obj.recipes_parsed /
+ build_obj.recipes_to_parse) * 100)
+ else:
+ build['recipes_parsed_percentage'] = 0
+ if build_obj.repos_to_clone > 0:
+ build['repos_cloned_percentage'] = \
+ int((build_obj.repos_cloned /
+ build_obj.repos_to_clone) * 100)
+ else:
+ build['repos_cloned_percentage'] = 0
- build['repos_cloned_percentage'] = \
- int((build_obj.repos_cloned /
- build_obj.repos_to_clone) * 100)
+ build['progress_item'] = build_obj.progress_item
tasks_complete_percentage = 0
if build_obj.outcome in (Build.SUCCEEDED, Build.FAILED):
diff --git a/bitbake/lib/toaster/toastermain/management/commands/builddelete.py b/bitbake/lib/toaster/toastermain/management/commands/builddelete.py
index 0bef8d4..bf69a8f 100644
--- a/bitbake/lib/toaster/toastermain/management/commands/builddelete.py
+++ b/bitbake/lib/toaster/toastermain/management/commands/builddelete.py
@@ -10,8 +10,12 @@ class Command(BaseCommand):
args = '<buildID1 buildID2 .....>'
help = "Deletes selected build(s)"
+ def add_arguments(self, parser):
+ parser.add_argument('buildids', metavar='N', type=int, nargs='+',
+ help="Build ID's to delete")
+
def handle(self, *args, **options):
- for bid in args:
+ for bid in options['buildids']:
try:
b = Build.objects.get(pk = bid)
except ObjectDoesNotExist:
diff --git a/bitbake/lib/toaster/toastermain/management/commands/buildimport.py b/bitbake/lib/toaster/toastermain/management/commands/buildimport.py
new file mode 100644
index 0000000..2d57ab5
--- /dev/null
+++ b/bitbake/lib/toaster/toastermain/management/commands/buildimport.py
@@ -0,0 +1,584 @@
+#
+# ex:ts=4:sw=4:sts=4:et
+# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
+#
+# BitBake Toaster Implementation
+#
+# Copyright (C) 2018 Wind River Systems
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+# buildimport: import a project for project specific configuration
+#
+# Usage:
+# (a) Set up Toaster environment
+#
+# (b) Call buildimport
+# $ /path/to/bitbake/lib/toaster/manage.py buildimport \
+# --name=$PROJECTNAME \
+# --path=$BUILD_DIRECTORY \
+# --callback="$CALLBACK_SCRIPT" \
+# --command="configure|reconfigure|import"
+#
+# (c) Return is "|Default_image=%s|Project_id=%d"
+#
+# (d) Open Toaster to this project using for example:
+# $ xdg-open http://localhost:$toaster_port/toastergui/project_specific/$project_id
+#
+# (e) To delete a project:
+# $ /path/to/bitbake/lib/toaster/manage.py buildimport \
+# --name=$PROJECTNAME --delete-project
+#
+
+
+# ../bitbake/lib/toaster/manage.py buildimport --name=test --path=`pwd` --callback="" --command=import
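+# A calling script can parse the "(c)" return string, e.g. (illustrative):
+#   out="|Default_image=core-image-minimal|Project_id=1"
+#   project_id="${out##*Project_id=}"   # -> "1"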
+
+from django.core.management.base import BaseCommand, CommandError
+from django.core.exceptions import ObjectDoesNotExist
+from orm.models import ProjectManager, Project, Release, ProjectVariable
+from orm.models import Layer, Layer_Version, LayerSource, ProjectLayer
+from toastergui.api import scan_layer_content
+from django.db import OperationalError
+
+import os
+import re
+import os.path
+import subprocess
+import shutil
+
+# Toaster variable section delimiters
+TOASTER_PROLOG = '#=== TOASTER_CONFIG_PROLOG ==='
+TOASTER_EPILOG = '#=== TOASTER_CONFIG_EPILOG ==='
+
+# quick development/debugging support
+verbose = 2
+def _log(msg):
+ if 1 == verbose:
+ print(msg)
+ elif 2 == verbose:
+ f1=open('/tmp/toaster.log', 'a')
+ f1.write("|" + msg + "|\n" )
+ f1.close()
+
+
+__config_regexp__ = re.compile( r"""
+ ^
+ (?P<exp>export\s+)?
+ (?P<var>[a-zA-Z0-9\-_+.${}/~]+?)
+ (\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
+
+ \s* (
+ (?P<colon>:=) |
+ (?P<lazyques>\?\?=) |
+ (?P<ques>\?=) |
+ (?P<append>\+=) |
+ (?P<prepend>=\+) |
+ (?P<predot>=\.) |
+ (?P<postdot>\.=) |
+ =
+ ) \s*
+
+ (?!'[^']*'[^']*'$)
+ (?!\"[^\"]*\"[^\"]*\"$)
+ (?P<apo>['\"])
+ (?P<value>.*)
+ (?P=apo)
+ $
+ """, re.X)
+
+class Command(BaseCommand):
+ args = "<name> <path> <release>"
+ help = "Import a command line build directory"
+ vars = {}
+ toaster_vars = {}
+
+ def add_arguments(self, parser):
+ parser.add_argument(
+ '--name', dest='name', required=True,
+ help='name of the project',
+ )
+ parser.add_argument(
+ '--path', dest='path', required=True,
+ help='path to the project',
+ )
+ parser.add_argument(
+ '--release', dest='release', required=False,
+ help='release for the project',
+ )
+ parser.add_argument(
+ '--callback', dest='callback', required=False,
+ help='callback for project config update',
+ )
+ parser.add_argument(
+ '--delete-project', dest='delete_project', required=False,
+ help='delete this project from the database',
+ )
+ parser.add_argument(
+ '--command', dest='command', required=False,
+ help='command (configure,reconfigure,import)',
+ )
+
+ # Extract the bb variables from a conf file
+ def scan_conf(self,fn):
+ vars = self.vars
+ toaster_vars = self.toaster_vars
+
+ #_log("scan_conf:%s" % fn)
+ if not os.path.isfile(fn):
+ return
+ f = open(fn, 'r')
+
+ #statements = ast.StatementGroup()
+ lineno = 0
+ is_toaster_section = False
+ while True:
+ lineno = lineno + 1
+ s = f.readline()
+ if not s:
+ break
+ w = s.strip()
+ # skip empty lines
+ if not w:
+ continue
+ # evaluate Toaster sections
+ if w.startswith(TOASTER_PROLOG):
+ is_toaster_section = True
+ continue
+ if w.startswith(TOASTER_EPILOG):
+ is_toaster_section = False
+ continue
+ s = s.rstrip()
+ while s[-1] == '\\':
+ s2 = f.readline().strip()
+ lineno = lineno + 1
+ if (not s2 or s2[0] != "#") and s[0] == "#":
+ _log("There is a confusing multiline, partially commented expression on line %s of file %s (%s).\nPlease clarify whether this is all a comment or should be parsed." % (lineno, fn, s))
+ s = s[:-1] + s2
+ # skip comments
+ if s[0] == '#':
+ continue
+ # process the line for just assignments
+ m = __config_regexp__.match(s)
+ if m:
+ groupd = m.groupdict()
+ var = groupd['var']
+ value = groupd['value']
+
+ if groupd['lazyques']:
+ if not var in vars:
+ vars[var] = value
+ continue
+ if groupd['ques']:
+ if not var in vars:
+ vars[var] = value
+ continue
+ # preset an empty value for the remaining operators
+ if not var in vars:
+ vars[var] = ''
+ if groupd['append']:
+ vars[var] += value
+ elif groupd['prepend']:
+ vars[var] = "%s%s" % (value,vars[var])
+ elif groupd['predot']:
+ vars[var] = "%s %s" % (value,vars[var])
+ elif groupd['postdot']:
+ vars[var] = "%s %s" % (vars[var],value)
+ else:
+ vars[var] = "%s" % (value)
+ # capture vars in a Toaster section
+ if is_toaster_section:
+ toaster_vars[var] = vars[var]
+
+ # DONE WITH PARSING
+ f.close()
+ self.vars = vars
+ self.toaster_vars = toaster_vars
+
+ # Update the scanned project variables
+ def update_project_vars(self,project,name):
+ pv, create = ProjectVariable.objects.get_or_create(project = project, name = name)
+ if (not name in self.vars.keys()) or (not self.vars[name]):
+ self.vars[name] = pv.value
+ else:
+ if pv.value != self.vars[name]:
+ pv.value = self.vars[name]
+ pv.save()
+
+ # Find the git version of the installation
+ def find_layer_dir_version(self,path):
+ # e.g. a '* rocko ...' line from 'git branch -av' marks the current branch
+
+ install_version = ''
+ cwd = os.getcwd()
+ os.chdir(path)
+ p = subprocess.Popen(['git', 'branch', '-av'], stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE)
+ out, err = p.communicate()
+ out = out.decode("utf-8")
+ for branch in out.split('\n'):
+ if ('*' == branch[0:1]) and ('no branch' not in branch):
+ install_version = re.sub(' .*','',branch[2:])
+ break
+ if 'remotes/m/master' in branch:
+ install_version = re.sub('.*base/','',branch)
+ break
+ os.chdir(cwd)
+ return install_version
+
+ # Compute table of the installation's registered layer versions (branch or commit)
+ def find_layer_dir_versions(self,INSTALL_URL_PREFIX):
+ lv_dict = {}
+ layer_versions = Layer_Version.objects.all()
+ for lv in layer_versions:
+ layer = Layer.objects.filter(pk=lv.layer.pk)[0]
+ if layer.vcs_url:
+ url_short = layer.vcs_url.replace(INSTALL_URL_PREFIX,'')
+ else:
+ url_short = ''
+ # register the core, branch, and the version variations
+ lv_dict["%s,%s,%s" % (url_short,lv.dirpath,'')] = (lv.id,layer.name)
+ lv_dict["%s,%s,%s" % (url_short,lv.dirpath,lv.branch)] = (lv.id,layer.name)
+ lv_dict["%s,%s,%s" % (url_short,lv.dirpath,lv.commit)] = (lv.id,layer.name)
+ #_log(" (%s,%s,%s|%s) = (%s,%s)" % (url_short,lv.dirpath,lv.branch,lv.commit,lv.id,layer.name))
+ return lv_dict
+
+ # Apply table of all layer versions
+ def extract_bblayers(self):
+ # set up the constants
+ bblayer_str = self.vars['BBLAYERS']
+ TOASTER_DIR = os.environ.get('TOASTER_DIR')
+ INSTALL_CLONE_PREFIX = os.path.dirname(TOASTER_DIR) + "/"
+ TOASTER_CLONE_PREFIX = TOASTER_DIR + "/_toaster_clones/"
+ INSTALL_URL_PREFIX = ''
+ layers = Layer.objects.filter(name='openembedded-core')
+ for layer in layers:
+ if layer.vcs_url:
+ INSTALL_URL_PREFIX = layer.vcs_url
+ break
+ INSTALL_URL_PREFIX = INSTALL_URL_PREFIX.replace("/poky","/")
+ INSTALL_VERSION_DIR = TOASTER_DIR
+ INSTALL_URL_POSTFIX = INSTALL_URL_PREFIX.replace(':','_')
+ INSTALL_URL_POSTFIX = INSTALL_URL_POSTFIX.replace('/','_')
+ INSTALL_URL_POSTFIX = "%s_%s" % (TOASTER_CLONE_PREFIX,INSTALL_URL_POSTFIX)
+
+ # get the set of available layer:layer_versions
+ lv_dict = self.find_layer_dir_versions(INSTALL_URL_PREFIX)
+
+ # compute the layer matches
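+ # Worked example (hypothetical paths): with INSTALL_CLONE_PREFIX set
+ # to '/work/', a BBLAYERS entry '/work/poky/meta-yocto' reduces to
+ # 'poky/meta-yocto' and is captured as
+ # (line='poky', sub_path='meta-yocto', version=<git branch>, ...)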
+ layers_list = []
+ for line in bblayer_str.split(' '):
+ if not line:
+ continue
+ if line.endswith('/local'):
+ continue
+
+ # isolate the repo
+ layer_path = line
+ line = line.replace(INSTALL_URL_POSTFIX,'').replace(INSTALL_CLONE_PREFIX,'').replace('/layers/','/').replace('/poky/','/')
+
+ # isolate the sub-path
+ path_index = line.rfind('/')
+ if path_index > 0:
+ sub_path = line[path_index+1:]
+ line = line[0:path_index]
+ else:
+ sub_path = ''
+
+ # isolate the version
+ if TOASTER_CLONE_PREFIX in layer_path:
+ is_toaster_clone = True
+ # extract version from name syntax
+ version_index = line.find('_')
+ if version_index > 0:
+ version = line[version_index+1:]
+ line = line[0:version_index]
+ else:
+ version = ''
+ _log("TOASTER_CLONE(%s/%s), version=%s" % (line,sub_path,version))
+ else:
+ is_toaster_clone = False
+ # version is from the installation
+ version = self.find_layer_dir_version(layer_path)
+ _log("LOCAL_CLONE(%s/%s), version=%s" % (line,sub_path,version))
+
+ # capture the layer information into layers_list
+ layers_list.append( (line,sub_path,version,layer_path,is_toaster_clone) )
+ return layers_list,lv_dict
+
+ # Determine the release for an imported build from its layer versions
+ def find_import_release(self,layers_list,lv_dict,default_release):
+ # e.g. lv_dict key 'poky,meta,rocko' => (4, 'openembedded-core')
+ release = default_release
+ for line,path,version,layer_path,is_toaster_clone in layers_list:
+ key = "%s,%s,%s" % (line,path,version)
+ if key in lv_dict:
+ lv_id = lv_dict[key]
+ if 'openembedded-core' == lv_id[1]:
+ _log("Find_import_release(%s):version=%s,Toaster=%s" % (lv_id[1],version,is_toaster_clone))
+ # only versions in Toaster managed layers are accepted
+ if not is_toaster_clone:
+ break
+ try:
+ release = Release.objects.get(name=version)
+ except:
+ pass
+ break
+ _log("Find_import_release:RELEASE=%s" % release.name)
+ return release
+
+ # Apply the found conf layers
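+ # Layers are matched against lv_dict first by the exact key
+ # 'line,path,version' (e.g. 'poky,meta,rocko') and then by the
+ # version-agnostic key 'line,path,'; anything still unmatched is
+ # registered as a new local layer.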
+ def apply_conf_bblayers(self,layers_list,lv_dict,project,release=None):
+ for line,path,version,layer_path,is_toaster_clone in layers_list:
+ # If a release was passed in, promote it over the scanned version
+ if release:
+ version = release
+ # try to match the key to a layer_version
+ key = "%s,%s,%s" % (line,path,version)
+ key_short = "%s,%s,%s" % (line,path,'')
+ lv_id = ''
+ if key in lv_dict:
+ lv_id = lv_dict[key]
+ lv = Layer_Version.objects.get(pk=int(lv_id[0]))
+ pl,created = ProjectLayer.objects.get_or_create(project=project,
+ layercommit=lv)
+ pl.optional=False
+ pl.save()
+ _log(" %s => %s;%s" % (key,lv_id[0],lv_id[1]))
+ elif key_short in lv_dict:
+ lv_id = lv_dict[key_short]
+ lv = Layer_Version.objects.get(pk=int(lv_id[0]))
+ pl,created = ProjectLayer.objects.get_or_create(project=project,
+ layercommit=lv)
+ pl.optional=False
+ pl.save()
+ _log(" %s ?> %s" % (key,lv_dict[key_short]))
+ else:
+ _log("%s <= %s" % (key,layer_path))
+ found = False
+ # does local layer already exist in this project?
+ try:
+ for pl in ProjectLayer.objects.filter(project=project):
+ if pl.layercommit.layer.local_source_dir == layer_path:
+ found = True
+ _log(" Project Local Layer found!")
+ except Exception as e:
+ _log("ERROR: Local Layer '%s'" % e)
+ pass
+
+ if not found:
+ # Does Layer name+path already exist?
+ try:
+ layer_name_base = os.path.basename(layer_path)
+ _log("Layer_lookup: try '%s','%s'" % (layer_name_base,layer_path))
+ layer = Layer.objects.get(name=layer_name_base,local_source_dir = layer_path)
+ # Found! Attach layer_version and ProjectLayer
+ layer_version = Layer_Version.objects.create(
+ layer=layer,
+ project=project,
+ layer_source=LayerSource.TYPE_IMPORTED)
+ layer_version.save()
+ pl,created = ProjectLayer.objects.get_or_create(project=project,
+ layercommit=layer_version)
+ pl.optional=False
+ pl.save()
+ found = True
+ # add layer contents to this layer version
+ scan_layer_content(layer,layer_version)
+ _log(" Parent Local Layer found in db!")
+ except Exception as e:
+ _log("Layer_exists_test_failed: Local Layer '%s'" % e)
+ pass
+
+ if not found:
+ # Ensure that the layer path exists, in case of a user typo
+ if not os.path.isdir(layer_path):
+ _log("ERROR: Layer path '%s' not found" % layer_path)
+ continue
+ # Add layer to db and attach project to it
+ layer_name_base = os.path.basename(layer_path)
+ # generate a unique layer name
+ layer_name_matches = {}
+ for layer in Layer.objects.filter(name__contains=layer_name_base):
+ layer_name_matches[layer.name] = '1'
+ layer_name_idx = 0
+ layer_name_test = layer_name_base
+ while layer_name_test in layer_name_matches.keys():
+ layer_name_idx += 1
+ layer_name_test = "%s_%d" % (layer_name_base,layer_name_idx)
+ # create the layer and layer_version objects
+ layer = Layer.objects.create(name=layer_name_test)
+ layer.local_source_dir = layer_path
+ layer_version = Layer_Version.objects.create(
+ layer=layer,
+ project=project,
+ layer_source=LayerSource.TYPE_IMPORTED)
+ layer.save()
+ layer_version.save()
+ pl,created = ProjectLayer.objects.get_or_create(project=project,
+ layercommit=layer_version)
+ pl.optional=False
+ pl.save()
+ # register the layer's content
+ _log(" Local Layer Add content")
+ scan_layer_content(layer,layer_version)
+ _log(" Local Layer Added '%s'!" % layer_name_test)
+
+ # Scan the project's conf files (if any)
+ def scan_conf_variables(self,project_path):
+ # scan the project's settings, add any new layers or variables
+ if os.path.isfile("%s/conf/local.conf" % project_path):
+ self.scan_conf("%s/conf/local.conf" % project_path)
+ self.scan_conf("%s/conf/bblayers.conf" % project_path)
+ # Import then disable old style Toaster conf files (before 'merged_attr')
+ old_toaster_local = "%s/conf/toaster.conf" % project_path
+ if os.path.isfile(old_toaster_local):
+ self.scan_conf(old_toaster_local)
+ shutil.move(old_toaster_local, old_toaster_local+"_old")
+ old_toaster_layer = "%s/conf/toaster-bblayers.conf" % project_path
+ if os.path.isfile(old_toaster_layer):
+ self.scan_conf(old_toaster_layer)
+ shutil.move(old_toaster_layer, old_toaster_layer+"_old")
+
+ # Scan the found conf variables (if any)
+ def apply_conf_variables(self,project,layers_list,lv_dict,release=None):
+ if self.vars:
+ # Catch vars relevant to Toaster (in case no Toaster section)
+ self.update_project_vars(project,'DISTRO')
+ self.update_project_vars(project,'MACHINE')
+ self.update_project_vars(project,'IMAGE_INSTALL_append')
+ self.update_project_vars(project,'IMAGE_FSTYPES')
+ self.update_project_vars(project,'PACKAGE_CLASSES')
+ # These vars are typically only assigned by Toaster
+ #self.update_project_vars(project,'DL_DIR')
+ #self.update_project_vars(project,'SSTATE_DIR')
+
+ # Assert found Toaster vars
+ for var in self.toaster_vars.keys():
+ pv, create = ProjectVariable.objects.get_or_create(project = project, name = var)
+ pv.value = self.toaster_vars[var]
+ _log("* Add/update Toaster var '%s' = '%s'" % (pv.name,pv.value))
+ pv.save()
+
+ # Assert found BBLAYERS
+ if 0 < verbose:
+ for pl in ProjectLayer.objects.filter(project=project):
+ release_name = 'None' if not pl.layercommit.release else pl.layercommit.release.name
+ print(" BEFORE:ProjectLayer=%s,%s,%s,%s" % (pl.layercommit.layer.name,release_name,pl.layercommit.branch,pl.layercommit.commit))
+ self.apply_conf_bblayers(layers_list,lv_dict,project,release)
+ if 0 < verbose:
+ for pl in ProjectLayer.objects.filter(project=project):
+ release_name = 'None' if not pl.layercommit.release else pl.layercommit.release.name
+ print(" AFTER :ProjectLayer=%s,%s,%s,%s" % (pl.layercommit.layer.name,release_name,pl.layercommit.branch,pl.layercommit.commit))
+
+
+ def handle(self, *args, **options):
+ project_name = options['name']
+ project_path = options['path']
+ project_callback = options['callback'] if options['callback'] else ''
+ release_name = options['release'] if options['release'] else ''
+
+ #
+ # Delete project
+ #
+
+ if options['delete_project']:
+ try:
+ print("Project '%s' delete from Toaster database" % (project_name))
+ project = Project.objects.get(name=project_name)
+ # TODO: deep project delete
+ project.delete()
+ print("Project '%s' Deleted" % (project_name))
+ return
+ except Exception as e:
+ print("Project '%s' not found, not deleted (%s)" % (project_name,e))
+ return
+
+ #
+ # Create/Update/Import project
+ #
+
+ # See if project (by name) exists
+ project = None
+ try:
+ # Project already exists
+ project = Project.objects.get(name=project_name)
+ except Exception as e:
+ pass
+
+ # Find the installation's default release
+ default_release = Release.objects.get(id=1)
+
+ # SANITY: if 'reconfig' but project does not exist (deleted externally), switch to 'import'
+ if ("reconfigure" == options['command']) and (None == project):
+ options['command'] = 'import'
+
+ # 'Configure':
+ if "configure" == options['command']:
+ # Note: ignore any existing conf files
+ # create project, SANITY: reuse any project of same name
+ project = Project.objects.create_project(project_name,default_release,project)
+
+ # 'Re-configure':
+ if "reconfigure" == options['command']:
+ # Scan the directory's conf files
+ self.scan_conf_variables(project_path)
+ # Scan the layer list
+ layers_list,lv_dict = self.extract_bblayers()
+ # Apply any new layers or variables
+ self.apply_conf_variables(project,layers_list,lv_dict)
+
+ # 'Import':
+ if "import" == options['command']:
+ # Scan the directory's conf files
+ self.scan_conf_variables(project_path)
+ # Remove these Toaster controlled variables
+ for var in ('DL_DIR','SSTATE_DIR'):
+ self.vars.pop(var, None)
+ self.toaster_vars.pop(var, None)
+ # Scan the layer list
+ layers_list,lv_dict = self.extract_bblayers()
+ # Find the directory's release, and promote to default_release if local paths
+ release = self.find_import_release(layers_list,lv_dict,default_release)
+ # create project, SANITY: reuse any project of same name
+ project = Project.objects.create_project(project_name,release,project)
+ # Apply any new layers or variables
+ self.apply_conf_variables(project,layers_list,lv_dict,release)
+ # WORKAROUND: since we now derive the release, redirect 'newproject_specific' to 'project_specific'
+ project.set_variable('INTERNAL_PROJECT_SPECIFIC_SKIPRELEASE','1')
+
+ # Set up the project's metadata
+ project.builddir = project_path
+ project.merged_attr = True
+ project.set_variable(Project.PROJECT_SPECIFIC_CALLBACK,project_callback)
+ project.set_variable(Project.PROJECT_SPECIFIC_STATUS,Project.PROJECT_SPECIFIC_EDIT)
+ if ("configure" == options['command']) or ("import" == options['command']):
+ # preset the mode and default image recipe
+ project.set_variable(Project.PROJECT_SPECIFIC_ISNEW,Project.PROJECT_SPECIFIC_NEW)
+ project.set_variable(Project.PROJECT_SPECIFIC_DEFAULTIMAGE,"core-image-minimal")
+ # Projects whose conf files already carry Toaster-managed variables are not 'new'
+ if len(self.toaster_vars):
+ project.set_variable(Project.PROJECT_SPECIFIC_ISNEW,Project.PROJECT_SPECIFIC_NONE)
+
+ # Save the updated Project
+ project.save()
+
+ _log("Buildimport:project='%s' at '%d'" % (project_name,project.id))
+
+ if ('DEFAULT_IMAGE' in self.vars) and (self.vars['DEFAULT_IMAGE']):
+ print("|Default_image=%s|Project_id=%d" % (self.vars['DEFAULT_IMAGE'],project.id))
+ else:
+ print("|Project_id=%d" % (project.id))
+
diff --git a/bitbake/toaster-requirements.txt b/bitbake/toaster-requirements.txt
index c0ec368..a682b08 100644
--- a/bitbake/toaster-requirements.txt
+++ b/bitbake/toaster-requirements.txt
@@ -1,3 +1,3 @@
-Django>1.8,<1.11.9
+Django>1.8,<1.12
beautifulsoup4>=4.4.0
pytz
--
2.11.0
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 2/3] meta: Set LAYERSERIES_* variables
2018-11-07 16:09 [PATCH 0/3] bitbake upstream update and eliminate no-gpg-check option usage Maxim Yu. Osipov
2018-11-07 16:09 ` [PATCH 1/3] Update bitbake from the upstream Maxim Yu. Osipov
@ 2018-11-07 16:09 ` Maxim Yu. Osipov
2018-11-07 16:20 ` Jan Kiszka
2018-11-07 16:09 ` [PATCH 3/3] isar-bootstrap: Eliminate no-gpg-check option usage Maxim Yu. Osipov
2 siblings, 1 reply; 18+ messages in thread
From: Maxim Yu. Osipov @ 2018-11-07 16:09 UTC (permalink / raw)
To: isar-users
Fix warnings after update to the latest bitbake.
Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
---
meta-isar/conf/layer.conf | 1 +
meta/conf/layer.conf | 5 ++++-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/meta-isar/conf/layer.conf b/meta-isar/conf/layer.conf
index 4aa1cf1..b4b90c1 100644
--- a/meta-isar/conf/layer.conf
+++ b/meta-isar/conf/layer.conf
@@ -14,6 +14,7 @@ BBFILE_PRIORITY_isar = "5"
# This should only be incremented on significant changes that will
# cause compatibility issues with other layers
LAYERVERSION_isar = "3"
+LAYERSERIES_COMPAT_isar = "sumo"
LAYERDIR_isar = "${LAYERDIR}"
diff --git a/meta/conf/layer.conf b/meta/conf/layer.conf
index ab6ae8e..6fefca6 100644
--- a/meta/conf/layer.conf
+++ b/meta/conf/layer.conf
@@ -11,8 +11,11 @@ BBFILE_COLLECTIONS += "core"
BBFILE_PATTERN_core = "^${LAYERDIR}/"
BBFILE_PRIORITY_core = "5"
+LAYERSERIES_CORENAMES = "sumo"
+
# This should only be incremented on significant changes that will
# cause compatibility issues with other layers
-LAYERVERSION_core = "1"
+LAYERVERSION_core = "11"
+LAYERSERIES_COMPAT_core = "sumo"
LAYERDIR_core = "${LAYERDIR}"
--
2.11.0
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 3/3] isar-bootstrap: Eliminate no-gpg-check option usage
2018-11-07 16:09 [PATCH 0/3] bitbake upstream update and eliminate no-gpg-check option usage Maxim Yu. Osipov
2018-11-07 16:09 ` [PATCH 1/3] Update bitbake from the upstream Maxim Yu. Osipov
2018-11-07 16:09 ` [PATCH 2/3] meta: Set LAYERSERIES_* variables Maxim Yu. Osipov
@ 2018-11-07 16:09 ` Maxim Yu. Osipov
2018-11-07 17:38 ` Henning Schild
` (3 more replies)
2 siblings, 4 replies; 18+ messages in thread
From: Maxim Yu. Osipov @ 2018-11-07 16:09 UTC (permalink / raw)
To: isar-users
Marking the repo as trusted eliminates the need for this option.
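For illustration (path and suite hypothetical), an apt source marked
trusted looks like:
  deb [trusted=yes] file:///base-apt/debian stretch main
The [trusted=yes] option makes apt accept packages from that source
without GPG signature checks.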
Suggested-by: Henning Schild <henning.schild@siemens.com>
Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
---
meta/recipes-core/isar-bootstrap/isar-bootstrap.inc | 3 ---
1 file changed, 3 deletions(-)
diff --git a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
index cc1791c..592d042 100644
--- a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
+++ b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
@@ -178,9 +178,6 @@ isar_bootstrap() {
shift
done
debootstrap_args="--verbose --variant=minbase --include=locales "
- if [ "${ISAR_USE_CACHED_BASE_REPO}" = "1" ]; then
- debootstrap_args="$debootstrap_args --no-check-gpg"
- fi
E="${@bb.utils.export_proxies(d)}"
sudo -E flock "${ISAR_BOOTSTRAP_LOCK}" -c "\
set -e
--
2.11.0
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 2/3] meta: Set LAYERSERIES_* variables
2018-11-07 16:09 ` [PATCH 2/3] meta: Set LAYERSERIES_* variables Maxim Yu. Osipov
@ 2018-11-07 16:20 ` Jan Kiszka
2018-11-07 16:39 ` Maxim Yu. Osipov
0 siblings, 1 reply; 18+ messages in thread
From: Jan Kiszka @ 2018-11-07 16:20 UTC (permalink / raw)
To: Maxim Yu. Osipov, isar-users
On 07.11.18 17:09, Maxim Yu. Osipov wrote:
> Fix warnings after update to the latest bitbake.
>
> Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
> ---
> meta-isar/conf/layer.conf | 1 +
> meta/conf/layer.conf | 5 ++++-
> 2 files changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/meta-isar/conf/layer.conf b/meta-isar/conf/layer.conf
> index 4aa1cf1..b4b90c1 100644
> --- a/meta-isar/conf/layer.conf
> +++ b/meta-isar/conf/layer.conf
> @@ -14,6 +14,7 @@ BBFILE_PRIORITY_isar = "5"
> # This should only be incremented on significant changes that will
> # cause compatibility issues with other layers
> LAYERVERSION_isar = "3"
> +LAYERSERIES_COMPAT_isar = "sumo"
>
> LAYERDIR_isar = "${LAYERDIR}"
>
> diff --git a/meta/conf/layer.conf b/meta/conf/layer.conf
> index ab6ae8e..6fefca6 100644
> --- a/meta/conf/layer.conf
> +++ b/meta/conf/layer.conf
> @@ -11,8 +11,11 @@ BBFILE_COLLECTIONS += "core"
> BBFILE_PATTERN_core = "^${LAYERDIR}/"
> BBFILE_PRIORITY_core = "5"
>
> +LAYERSERIES_CORENAMES = "sumo"
> +
> # This should only be incremented on significant changes that will
> # cause compatibility issues with other layers
> -LAYERVERSION_core = "1"
> +LAYERVERSION_core = "11"
> +LAYERSERIES_COMPAT_core = "sumo"
>
> LAYERDIR_core = "${LAYERDIR}"
>
That looks... weird. Does bitbake now actually have Yocto release names encoded???
Jan
--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 2/3] meta: Set LAYERSERIES_* variables
2018-11-07 16:20 ` Jan Kiszka
@ 2018-11-07 16:39 ` Maxim Yu. Osipov
2018-11-07 16:41 ` Jan Kiszka
0 siblings, 1 reply; 18+ messages in thread
From: Maxim Yu. Osipov @ 2018-11-07 16:39 UTC (permalink / raw)
To: Jan Kiszka, isar-users
On 11/7/18 7:20 PM, Jan Kiszka wrote:
> On 07.11.18 17:09, Maxim Yu. Osipov wrote:
>> Fix warnings after update to the latest bitbake.
>>
>> Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
>> ---
>> meta-isar/conf/layer.conf | 1 +
>> meta/conf/layer.conf | 5 ++++-
>> 2 files changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/meta-isar/conf/layer.conf b/meta-isar/conf/layer.conf
>> index 4aa1cf1..b4b90c1 100644
>> --- a/meta-isar/conf/layer.conf
>> +++ b/meta-isar/conf/layer.conf
>> @@ -14,6 +14,7 @@ BBFILE_PRIORITY_isar = "5"
>> # This should only be incremented on significant changes that will
>> # cause compatibility issues with other layers
>> LAYERVERSION_isar = "3"
>> +LAYERSERIES_COMPAT_isar = "sumo"
>> LAYERDIR_isar = "${LAYERDIR}"
>> diff --git a/meta/conf/layer.conf b/meta/conf/layer.conf
>> index ab6ae8e..6fefca6 100644
>> --- a/meta/conf/layer.conf
>> +++ b/meta/conf/layer.conf
>> @@ -11,8 +11,11 @@ BBFILE_COLLECTIONS += "core"
>> BBFILE_PATTERN_core = "^${LAYERDIR}/"
>> BBFILE_PRIORITY_core = "5"
>> +LAYERSERIES_CORENAMES = "sumo"
>> +
>> # This should only be incremented on significant changes that will
>> # cause compatibility issues with other layers
>> -LAYERVERSION_core = "1"
>> +LAYERVERSION_core = "11"
>> +LAYERSERIES_COMPAT_core = "sumo"
>> LAYERDIR_core = "${LAYERDIR}"
>>
>
> That looks... weird. Does bitbake now actually have Yocto release names
> encoded???
Absolutely agree, but the warning comes from the latest bitbake:
lib/bb/cookerdata.py: bb.warn("Layer %s should set
LAYERSERIES_COMPAT_%s in its conf/layer.conf file to list the core layer
names it is compatible with." % (c, c))
Maxim.
--
Maxim Osipov
ilbers GmbH
Maria-Merian-Str. 8
85521 Ottobrunn
Germany
+49 (151) 6517 6917
mosipov@ilbers.de
http://ilbers.de/
Commercial register Munich, HRB 214197
General Manager: Baurzhan Ismagulov
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 2/3] meta: Set LAYERSERIES_* variables
2018-11-07 16:39 ` Maxim Yu. Osipov
@ 2018-11-07 16:41 ` Jan Kiszka
2018-11-07 17:24 ` Maxim Yu. Osipov
0 siblings, 1 reply; 18+ messages in thread
From: Jan Kiszka @ 2018-11-07 16:41 UTC (permalink / raw)
To: Maxim Yu. Osipov, isar-users
On 07.11.18 17:39, Maxim Yu. Osipov wrote:
> On 11/7/18 7:20 PM, Jan Kiszka wrote:
>> On 07.11.18 17:09, Maxim Yu. Osipov wrote:
>>> Fix warnings after update to the latest bitbake.
>>>
>>> Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
>>> ---
>>> meta-isar/conf/layer.conf | 1 +
>>> meta/conf/layer.conf | 5 ++++-
>>> 2 files changed, 5 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/meta-isar/conf/layer.conf b/meta-isar/conf/layer.conf
>>> index 4aa1cf1..b4b90c1 100644
>>> --- a/meta-isar/conf/layer.conf
>>> +++ b/meta-isar/conf/layer.conf
>>> @@ -14,6 +14,7 @@ BBFILE_PRIORITY_isar = "5"
>>> # This should only be incremented on significant changes that will
>>> # cause compatibility issues with other layers
>>> LAYERVERSION_isar = "3"
>>> +LAYERSERIES_COMPAT_isar = "sumo"
>>> LAYERDIR_isar = "${LAYERDIR}"
>>> diff --git a/meta/conf/layer.conf b/meta/conf/layer.conf
>>> index ab6ae8e..6fefca6 100644
>>> --- a/meta/conf/layer.conf
>>> +++ b/meta/conf/layer.conf
>>> @@ -11,8 +11,11 @@ BBFILE_COLLECTIONS += "core"
>>> BBFILE_PATTERN_core = "^${LAYERDIR}/"
>>> BBFILE_PRIORITY_core = "5"
>>> +LAYERSERIES_CORENAMES = "sumo"
>>> +
>>> # This should only be incremented on significant changes that will
>>> # cause compatibility issues with other layers
>>> -LAYERVERSION_core = "1"
>>> +LAYERVERSION_core = "11"
>>> +LAYERSERIES_COMPAT_core = "sumo"
>>> LAYERDIR_core = "${LAYERDIR}"
>>>
>>
>> That looks... weird. Does bitbake now actually have Yocto release names
>> encoded???
>
> Absolutely agree, but warning comes from the latest bitbake:
> lib/bb/cookerdata.py: bb.warn("Layer %s should set
> LAYERSERIES_COMPAT_%s in its conf/layer.conf file to list the core layer names
> it is compatible with." % (c, c))
>
OK, but does the value have to come from a restricted set? Or can we use that to
declare layers compatible with specific Isar releases?
Jan
--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 2/3] meta: Set LAYERSERIES_* variables
2018-11-07 16:41 ` Jan Kiszka
@ 2018-11-07 17:24 ` Maxim Yu. Osipov
2018-11-07 17:26 ` Jan Kiszka
0 siblings, 1 reply; 18+ messages in thread
From: Maxim Yu. Osipov @ 2018-11-07 17:24 UTC (permalink / raw)
To: Jan Kiszka, isar-users
On 11/7/18 7:41 PM, Jan Kiszka wrote:
> On 07.11.18 17:39, Maxim Yu. Osipov wrote:
>> On 11/7/18 7:20 PM, Jan Kiszka wrote:
>>> On 07.11.18 17:09, Maxim Yu. Osipov wrote:
>>>> Fix warnings after update to the latest bitbake.
>>>>
>>>> Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
>>>> ---
>>>> meta-isar/conf/layer.conf | 1 +
>>>> meta/conf/layer.conf | 5 ++++-
>>>> 2 files changed, 5 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/meta-isar/conf/layer.conf b/meta-isar/conf/layer.conf
>>>> index 4aa1cf1..b4b90c1 100644
>>>> --- a/meta-isar/conf/layer.conf
>>>> +++ b/meta-isar/conf/layer.conf
>>>> @@ -14,6 +14,7 @@ BBFILE_PRIORITY_isar = "5"
>>>> # This should only be incremented on significant changes that will
>>>> # cause compatibility issues with other layers
>>>> LAYERVERSION_isar = "3"
>>>> +LAYERSERIES_COMPAT_isar = "sumo"
>>>> LAYERDIR_isar = "${LAYERDIR}"
>>>> diff --git a/meta/conf/layer.conf b/meta/conf/layer.conf
>>>> index ab6ae8e..6fefca6 100644
>>>> --- a/meta/conf/layer.conf
>>>> +++ b/meta/conf/layer.conf
>>>> @@ -11,8 +11,11 @@ BBFILE_COLLECTIONS += "core"
>>>> BBFILE_PATTERN_core = "^${LAYERDIR}/"
>>>> BBFILE_PRIORITY_core = "5"
>>>> +LAYERSERIES_CORENAMES = "sumo"
>>>> +
>>>> # This should only be incremented on significant changes that will
>>>> # cause compatibility issues with other layers
>>>> -LAYERVERSION_core = "1"
>>>> +LAYERVERSION_core = "11"
>>>> +LAYERSERIES_COMPAT_core = "sumo"
>>>> LAYERDIR_core = "${LAYERDIR}"
>>>>
>>>
>>> That looks... weird. Does bitbake now actually have Yocto release
>>> names encoded???
>>
>> Absolutely agree, but the warning comes from the latest bitbake:
>> lib/bb/cookerdata.py: bb.warn("Layer %s should set
>> LAYERSERIES_COMPAT_%s in its conf/layer.conf file to list the core
>> layer names it is compatible with." % (c, c))
>>
>
> OK, but does the value have to come from a restricted set? Or can we use
> that to declare layers compatible with specific Isar releases?
Of course the value could be our own (the set above was limited only by
my imagination :)), e.g. we may set LAYERSERIES_CORENAMES = "isartor"...
Maxim.
--
Maxim Osipov
ilbers GmbH
Maria-Merian-Str. 8
85521 Ottobrunn
Germany
+49 (151) 6517 6917
mosipov@ilbers.de
http://ilbers.de/
Commercial register Munich, HRB 214197
General Manager: Baurzhan Ismagulov
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 2/3] meta: Set LAYERSERIES_* variables
2018-11-07 17:24 ` Maxim Yu. Osipov
@ 2018-11-07 17:26 ` Jan Kiszka
0 siblings, 0 replies; 18+ messages in thread
From: Jan Kiszka @ 2018-11-07 17:26 UTC (permalink / raw)
To: Maxim Yu. Osipov, isar-users
On 07.11.18 18:24, Maxim Yu. Osipov wrote:
> On 11/7/18 7:41 PM, Jan Kiszka wrote:
>> On 07.11.18 17:39, Maxim Yu. Osipov wrote:
>>> On 11/7/18 7:20 PM, Jan Kiszka wrote:
>>>> On 07.11.18 17:09, Maxim Yu. Osipov wrote:
>>>>> Fix warnings after update to the latest bitbake.
>>>>>
>>>>> Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
>>>>> ---
>>>>> meta-isar/conf/layer.conf | 1 +
>>>>> meta/conf/layer.conf | 5 ++++-
>>>>> 2 files changed, 5 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/meta-isar/conf/layer.conf b/meta-isar/conf/layer.conf
>>>>> index 4aa1cf1..b4b90c1 100644
>>>>> --- a/meta-isar/conf/layer.conf
>>>>> +++ b/meta-isar/conf/layer.conf
>>>>> @@ -14,6 +14,7 @@ BBFILE_PRIORITY_isar = "5"
>>>>> # This should only be incremented on significant changes that will
>>>>> # cause compatibility issues with other layers
>>>>> LAYERVERSION_isar = "3"
>>>>> +LAYERSERIES_COMPAT_isar = "sumo"
>>>>> LAYERDIR_isar = "${LAYERDIR}"
>>>>> diff --git a/meta/conf/layer.conf b/meta/conf/layer.conf
>>>>> index ab6ae8e..6fefca6 100644
>>>>> --- a/meta/conf/layer.conf
>>>>> +++ b/meta/conf/layer.conf
>>>>> @@ -11,8 +11,11 @@ BBFILE_COLLECTIONS += "core"
>>>>> BBFILE_PATTERN_core = "^${LAYERDIR}/"
>>>>> BBFILE_PRIORITY_core = "5"
>>>>> +LAYERSERIES_CORENAMES = "sumo"
>>>>> +
>>>>> # This should only be incremented on significant changes that will
>>>>> # cause compatibility issues with other layers
>>>>> -LAYERVERSION_core = "1"
>>>>> +LAYERVERSION_core = "11"
>>>>> +LAYERSERIES_COMPAT_core = "sumo"
>>>>> LAYERDIR_core = "${LAYERDIR}"
>>>>>
>>>>
>>>> That looks... weird. Does bitbake now actually have Yocto release names
>>>> encoded???
>>>
>>> Absolutely agree, but the warning comes from the latest bitbake:
>>> lib/bb/cookerdata.py: bb.warn("Layer %s should set
>>> LAYERSERIES_COMPAT_%s in its conf/layer.conf file to list the core layer
>>> names it is compatible with." % (c, c))
>>>
>>
>> OK, but does the value have to come from a restricted set? Or can we use that
>> to declare layers compatible with specific Isar releases?
>
> Of course the value could be our own (the set above was limited only by
> my imagination :)), e.g. we may set LAYERSERIES_CORENAMES = "isartor"...
>
Ah, ok. As we have no release names, I would suggest version strings.
How is this variable evaluated (beyond testing that it's set)?
Jan
--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] isar-bootstrap: Eliminate no-gpg-check option usage
2018-11-07 16:09 ` [PATCH 3/3] isar-bootstrap: Eliminate no-gpg-check option usage Maxim Yu. Osipov
@ 2018-11-07 17:38 ` Henning Schild
2018-11-08 7:57 ` Maxim Yu. Osipov
2018-11-12 9:30 ` Maxim Yu. Osipov
` (2 subsequent siblings)
3 siblings, 1 reply; 18+ messages in thread
From: Henning Schild @ 2018-11-07 17:38 UTC (permalink / raw)
To: Maxim Yu. Osipov; +Cc: isar-users
In case reviews hold the bitbake parts back, this should not be part of
the series.
Henning
Am Wed, 7 Nov 2018 17:09:55 +0100
schrieb "Maxim Yu. Osipov" <mosipov@ilbers.de>:
> Marking the repo as trusted eliminates the need for this option.
>
> Suggested-by: Henning Schild <henning.schild@siemens.com>
> Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
> ---
> meta/recipes-core/isar-bootstrap/isar-bootstrap.inc | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
> b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc index
> cc1791c..592d042 100644 ---
> a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc +++
> b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc @@ -178,9
> +178,6 @@ isar_bootstrap() { shift
> done
> debootstrap_args="--verbose --variant=minbase --include=locales "
> - if [ "${ISAR_USE_CACHED_BASE_REPO}" = "1" ]; then
> - debootstrap_args="$debootstrap_args --no-check-gpg"
> - fi
> E="${@bb.utils.export_proxies(d)}"
> sudo -E flock "${ISAR_BOOTSTRAP_LOCK}" -c "\
> set -e
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 1/3] Update bitbake from the upstream.
2018-11-07 16:09 ` [PATCH 1/3] Update bitbake from the upstream Maxim Yu. Osipov
@ 2018-11-07 17:58 ` Henning Schild
2018-11-08 9:08 ` Maxim Yu. Osipov
0 siblings, 1 reply; 18+ messages in thread
From: Henning Schild @ 2018-11-07 17:58 UTC (permalink / raw)
To: Maxim Yu. Osipov; +Cc: isar-users
Am Wed, 7 Nov 2018 17:09:53 +0100
schrieb "Maxim Yu. Osipov" <mosipov@ilbers.de>:
> Origin: https://github.com/openembedded/bitbake.git
> Commit: 701f76f773a6e77258f307a4f8e2ec1a8552f6f3
Please include the complete "git show" header here, or at least the
name of the patch. Just to be extra sure we find that again, should the
hash change ...
This is one commit behind the last release, and the only diff is a
user-manual change. I think that is ok, but why did you not go for the
release?
Henning
> Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
> ---
> bitbake/bin/bitbake | 2 +-
> bitbake/bin/bitbake-selftest | 7 +-
> bitbake/bin/toaster | 13 +-
> bitbake/contrib/dump_cache.py | 85 +-
> .../bitbake-user-manual-execution.xml | 2 +-
> .../bitbake-user-manual-fetching.xml | 40 +-
> .../bitbake-user-manual-hello.xml | 8 +-
> .../bitbake-user-manual-intro.xml | 178 ++-
> .../bitbake-user-manual-metadata.xml | 142 +-
> .../bitbake-user-manual-ref-variables.xml | 118 +-
> .../bitbake-user-manual/bitbake-user-manual.xml | 2 +-
> .../figures/bb_multiconfig_files.png | 0
> bitbake/lib/bb/COW.py | 2 +-
> bitbake/lib/bb/__init__.py | 18 +-
> bitbake/lib/bb/build.py | 8 +-
> bitbake/lib/bb/cache.py | 7 +-
> bitbake/lib/bb/checksum.py | 2 +
> bitbake/lib/bb/codeparser.py | 4 +-
> bitbake/lib/bb/cooker.py | 57 +-
> bitbake/lib/bb/cookerdata.py | 5 +-
> bitbake/lib/bb/daemonize.py | 25 +-
> bitbake/lib/bb/data.py | 61 +-
> bitbake/lib/bb/data_smart.py | 108 +-
> bitbake/lib/bb/event.py | 5 +-
> bitbake/lib/bb/fetch2/__init__.py | 62 +-
> bitbake/lib/bb/fetch2/bzr.py | 5 +-
> bitbake/lib/bb/fetch2/clearcase.py | 3 +-
> bitbake/lib/bb/fetch2/cvs.py | 5 +-
> bitbake/lib/bb/fetch2/git.py | 66 +-
> bitbake/lib/bb/fetch2/gitsm.py | 264 ++--
> bitbake/lib/bb/fetch2/hg.py | 2 +-
> bitbake/lib/bb/fetch2/npm.py | 9 +-
> bitbake/lib/bb/fetch2/osc.py | 5 +-
> bitbake/lib/bb/fetch2/perforce.py | 8 +-
> bitbake/lib/bb/fetch2/repo.py | 12 +-
> bitbake/lib/bb/fetch2/svn.py | 5 +-
> bitbake/lib/bb/main.py | 15 +-
> bitbake/lib/bb/msg.py | 3 +
> bitbake/lib/bb/parse/__init__.py | 3 +-
> bitbake/lib/bb/parse/ast.py | 46 +-
> bitbake/lib/bb/parse/parse_py/BBHandler.py | 3 -
> bitbake/lib/bb/parse/parse_py/ConfHandler.py | 3 -
> bitbake/lib/bb/runqueue.py | 278 ++--
> bitbake/lib/bb/server/process.py | 27 +-
> bitbake/lib/bb/siggen.py | 54 +-
> bitbake/lib/bb/taskdata.py | 18 +-
> bitbake/lib/bb/tests/cooker.py | 83 ++
> bitbake/lib/bb/tests/data.py | 77 +-
> bitbake/lib/bb/tests/fetch.py | 295 ++++-
> bitbake/lib/bb/tests/parse.py | 4 +
> bitbake/lib/bb/ui/buildinfohelper.py | 9 +-
> bitbake/lib/bb/ui/taskexp.py | 10 +-
> bitbake/lib/bb/utils.py | 60 +-
> bitbake/lib/bblayers/action.py | 2 +-
> bitbake/lib/bblayers/layerindex.py | 323 ++---
> bitbake/lib/layerindexlib/README | 28 +
> bitbake/lib/layerindexlib/__init__.py | 1363
> ++++++++++++++++++++
> bitbake/lib/layerindexlib/cooker.py | 344 +++++
> bitbake/lib/layerindexlib/plugin.py | 60 +
> bitbake/lib/layerindexlib/restapi.py | 398 ++++++
> bitbake/lib/layerindexlib/tests/__init__.py | 0
> bitbake/lib/layerindexlib/tests/common.py | 43 +
> bitbake/lib/layerindexlib/tests/cooker.py | 123 ++
> bitbake/lib/layerindexlib/tests/layerindexobj.py | 226 ++++
> bitbake/lib/layerindexlib/tests/restapi.py | 184 +++
> bitbake/lib/layerindexlib/tests/testdata/README | 11
> + .../tests/testdata/build/conf/bblayers.conf | 15
> + .../tests/testdata/layer1/conf/layer.conf | 17
> + .../tests/testdata/layer2/conf/layer.conf | 20
> + .../tests/testdata/layer3/conf/layer.conf | 19
> + .../tests/testdata/layer4/conf/layer.conf | 22
> + .../toaster/bldcontrol/localhostbecontroller.py | 212
> ++- .../management/commands/checksettings.py | 8
> +- .../bldcontrol/management/commands/runbuilds.py | 2 +-
> bitbake/lib/toaster/orm/fixtures/oe-core.xml | 28 +-
> bitbake/lib/toaster/orm/fixtures/poky.xml | 76
> +- .../toaster/orm/management/commands/lsupdates.py | 228
> ++-- .../orm/migrations/0018_project_specific.py | 28 +
> bitbake/lib/toaster/orm/models.py | 74 +-
> bitbake/lib/toaster/toastergui/api.py | 176
> ++- .../lib/toaster/toastergui/static/js/layerBtn.js | 12
> + .../toaster/toastergui/static/js/layerdetails.js | 3
> +- .../lib/toaster/toastergui/static/js/libtoaster.js | 108
> +- .../lib/toaster/toastergui/static/js/mrbsection.js | 4
> +- .../toastergui/static/js/newcustomimage_modal.js | 7
> + .../toaster/toastergui/static/js/projecttopbar.js | 22 +
> bitbake/lib/toaster/toastergui/tables.py | 12
> +- .../toastergui/templates/base_specific.html | 128
> ++ .../templates/baseprojectspecificpage.html | 48
> + .../toastergui/templates/customise_btn.html | 6
> +- .../templates/generic-toastertable-page.html | 2
> +- .../toaster/toastergui/templates/importlayer.html | 4
> +- .../toastergui/templates/landing_specific.html | 50
> + .../toaster/toastergui/templates/layerdetails.html | 3
> +- .../toaster/toastergui/templates/mrb_section.html | 2
> +- .../toastergui/templates/newcustomimage.html | 4
> +- .../toaster/toastergui/templates/newproject.html | 57
> +- .../toastergui/templates/newproject_specific.html | 95
> ++ .../lib/toaster/toastergui/templates/project.html | 7
> +- .../toastergui/templates/project_specific.html | 162
> +++ .../templates/project_specific_topbar.html | 80
> ++ .../toaster/toastergui/templates/projectconf.html | 7
> +- .../lib/toaster/toastergui/templates/recipe.html | 2
> +- .../toastergui/templates/recipe_add_btn.html | 23 +
> bitbake/lib/toaster/toastergui/urls.py | 13 +
> bitbake/lib/toaster/toastergui/views.py | 165 ++-
> bitbake/lib/toaster/toastergui/widgets.py | 23
> +- .../toastermain/management/commands/builddelete.py | 6
> +- .../toastermain/management/commands/buildimport.py | 584
> +++++++++ bitbake/toaster-requirements.txt | 2
> +- 110 files changed, 7024 insertions(+), 980 deletions(-) create
> mode 100644
> bitbake/doc/bitbake-user-manual/figures/bb_multiconfig_files.png
> create mode 100644 bitbake/lib/bb/tests/cooker.py create mode 100644
> bitbake/lib/layerindexlib/README create mode 100644
> bitbake/lib/layerindexlib/__init__.py create mode 100644
> bitbake/lib/layerindexlib/cooker.py create mode 100644
> bitbake/lib/layerindexlib/plugin.py create mode 100644
> bitbake/lib/layerindexlib/restapi.py create mode 100644
> bitbake/lib/layerindexlib/tests/__init__.py create mode 100644
> bitbake/lib/layerindexlib/tests/common.py create mode 100644
> bitbake/lib/layerindexlib/tests/cooker.py create mode 100644
> bitbake/lib/layerindexlib/tests/layerindexobj.py create mode 100644
> bitbake/lib/layerindexlib/tests/restapi.py create mode 100644
> bitbake/lib/layerindexlib/tests/testdata/README create mode 100644
> bitbake/lib/layerindexlib/tests/testdata/build/conf/bblayers.conf
> create mode 100644
> bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf
> create mode 100644
> bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf
> create mode 100644
> bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf
> create mode 100644
> bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf
> create mode 100644
> bitbake/lib/toaster/orm/migrations/0018_project_specific.py create
> mode 100644
> bitbake/lib/toaster/toastergui/templates/base_specific.html create
> mode 100644
> bitbake/lib/toaster/toastergui/templates/baseprojectspecificpage.html
> create mode 100644
> bitbake/lib/toaster/toastergui/templates/landing_specific.html create
> mode 100644
> bitbake/lib/toaster/toastergui/templates/newproject_specific.html
> create mode 100644
> bitbake/lib/toaster/toastergui/templates/project_specific.html create
> mode 100644
> bitbake/lib/toaster/toastergui/templates/project_specific_topbar.html
> create mode 100644
> bitbake/lib/toaster/toastergui/templates/recipe_add_btn.html mode
> change 100755 => 100644 bitbake/lib/toaster/toastergui/views.py
> create mode 100644
> bitbake/lib/toaster/toastermain/management/commands/buildimport.py
>
> diff --git a/bitbake/bin/bitbake b/bitbake/bin/bitbake
> index 95e4109..57dec2a 100755
> --- a/bitbake/bin/bitbake
> +++ b/bitbake/bin/bitbake
> @@ -38,7 +38,7 @@ from bb.main import bitbake_main,
> BitBakeConfigParameters, BBMainException if
> sys.getfilesystemencoding() != "utf-8": sys.exit("Please use a locale
> setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython
> can't change the filesystem locale after loading so we need a UTF-8
> when Python starts or things won't work.") -__version__ = "1.37.0"
> +__version__ = "1.40.0"
> if __name__ == "__main__":
> if __version__ != bb.__version__:
> diff --git a/bitbake/bin/bitbake-selftest
> b/bitbake/bin/bitbake-selftest index afe1603..cfa7ac5 100755
> --- a/bitbake/bin/bitbake-selftest
> +++ b/bitbake/bin/bitbake-selftest
> @@ -22,16 +22,21 @@ sys.path.insert(0,
> os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib import
> unittest try:
> import bb
> + import layerindexlib
> except RuntimeError as exc:
> sys.exit(str(exc))
>
> tests = ["bb.tests.codeparser",
> + "bb.tests.cooker",
> "bb.tests.cow",
> "bb.tests.data",
> "bb.tests.event",
> "bb.tests.fetch",
> "bb.tests.parse",
> - "bb.tests.utils"]
> + "bb.tests.utils",
> + "layerindexlib.tests.layerindexobj",
> + "layerindexlib.tests.restapi",
> + "layerindexlib.tests.cooker"]
>
> for t in tests:
> t = '.'.join(t.split('.')[:3])
> diff --git a/bitbake/bin/toaster b/bitbake/bin/toaster
> index 4036f0a..9fffbc6 100755
> --- a/bitbake/bin/toaster
> +++ b/bitbake/bin/toaster
> @@ -18,11 +18,12 @@
> # along with this program. If not, see http://www.gnu.org/licenses/.
>
> HELP="
> -Usage: source toaster start|stop [webport=<address:port>] [noweb]
> [nobuild] +Usage: source toaster start|stop [webport=<address:port>]
> [noweb] [nobuild] [toasterdir] Optional arguments:
> [nobuild] Setup the environment for capturing builds with
> toaster but disable managed builds [noweb] Setup the environment for
> capturing builds with toaster but don't start the web server
> [webport] Set the development server (default: localhost:8000)
> + [toasterdir] Set absolute path to be used as TOASTER_DIR
> (default: BUILDDIR/../) "
>
> custom_extention()
> @@ -68,7 +69,7 @@ webserverKillAll()
> if [ -f ${pidfile} ]; then
> pid=`cat ${pidfile}`
> while kill -0 $pid 2>/dev/null; do
> - kill -SIGTERM -$pid 2>/dev/null
> + kill -SIGTERM $pid 2>/dev/null
> sleep 1
> done
> rm ${pidfile}
> @@ -91,7 +92,7 @@ webserverStartAll()
>
> echo "Starting webserver..."
>
> - $MANAGE runserver "$ADDR_PORT" \
> + $MANAGE runserver --noreload "$ADDR_PORT" \
> </dev/null >>${BUILDDIR}/toaster_web.log 2>&1 \
> & echo $! >${BUILDDIR}/.toastermain.pid
>
> @@ -186,6 +187,7 @@ unset OE_ROOT
> WEBSERVER=1
> export TOASTER_BUILDSERVER=1
> ADDR_PORT="localhost:8000"
> +TOASTERDIR=`dirname $BUILDDIR`
> unset CMD
> for param in $*; do
> case $param in
> @@ -211,6 +213,9 @@ for param in $*; do
> ADDR_PORT="localhost:$PORT"
> fi
> ;;
> + toasterdir=*)
> + TOASTERDIR="${param#*=}"
> + ;;
> --help)
> echo "$HELP"
> return 0
> @@ -241,7 +246,7 @@ fi
> # 2) the build dir (in build)
> # 3) the sqlite db if that is being used.
> # 4) pid's we need to clean up on exit/shutdown
> -export TOASTER_DIR=`dirname $BUILDDIR`
> +export TOASTER_DIR=$TOASTERDIR
> export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE TOASTER_DIR"
>
> # Determine the action. If specified by arguments, fine, if not,
> toggle it diff --git a/bitbake/contrib/dump_cache.py
> b/bitbake/contrib/dump_cache.py index f4d4c1b..8963ca4 100755
> --- a/bitbake/contrib/dump_cache.py
> +++ b/bitbake/contrib/dump_cache.py
> @@ -2,7 +2,7 @@
> # ex:ts=4:sw=4:sts=4:et
> # -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
> #
> -# Copyright (C) 2012 Wind River Systems, Inc.
> +# Copyright (C) 2012, 2018 Wind River Systems, Inc.
> #
> # This program is free software; you can redistribute it and/or
> modify # it under the terms of the GNU General Public License version
> 2 as @@ -18,51 +18,68 @@
> # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
>
> #
> -# This is used for dumping the bb_cache.dat, the output format is:
> -# recipe_path PN PV PACKAGES
> +# Used for dumping the bb_cache.dat
> #
> import os
> import sys
> -import warnings
> +import argparse
>
> # For importing bb.cache
> sys.path.insert(0,
> os.path.join(os.path.abspath(os.path.dirname(sys.argv[0])),
> '../lib')) from bb.cache import CoreRecipeInfo
> -import pickle as pickle
> +import pickle
>
> -def main(argv=None):
> - """
> - Get the mapping for the target recipe.
> - """
> - if len(argv) != 1:
> - print("Error, need one argument!", file=sys.stderr)
> - return 2
> +class DumpCache(object):
> + def __init__(self):
> + parser = argparse.ArgumentParser(
> + description="bb_cache.dat's dumper",
> + epilog="Use %(prog)s --help to get help")
> + parser.add_argument("-r", "--recipe",
> + help="specify the recipe, default: all recipes",
> action="store")
> + parser.add_argument("-m", "--members",
> + help = "specify the member, use comma as separator for
> multiple ones, default: all members", action="store", default="")
> + parser.add_argument("-s", "--skip",
> + help = "skip skipped recipes", action="store_true")
> + parser.add_argument("cachefile",
> + help = "specify bb_cache.dat", nargs = 1,
> action="store", default="")
> - cachefile = argv[0]
> + self.args = parser.parse_args()
>
> - with open(cachefile, "rb") as cachefile:
> - pickled = pickle.Unpickler(cachefile)
> - while cachefile:
> - try:
> - key = pickled.load()
> - val = pickled.load()
> - except Exception:
> - break
> - if isinstance(val, CoreRecipeInfo) and (not val.skipped):
> - pn = val.pn
> - # Filter out the native recipes.
> - if key.startswith('virtual:native:') or
> pn.endswith("-native"):
> - continue
> + def main(self):
> + with open(self.args.cachefile[0], "rb") as cachefile:
> + pickled = pickle.Unpickler(cachefile)
> + while True:
> + try:
> + key = pickled.load()
> + val = pickled.load()
> + except Exception:
> + break
> + if isinstance(val, CoreRecipeInfo):
> + pn = val.pn
>
> - # 1.0 is the default version for a no PV recipe.
> - if "pv" in val.__dict__:
> - pv = val.pv
> - else:
> - pv = "1.0"
> + if self.args.recipe and self.args.recipe != pn:
> + continue
>
> - print("%s %s %s %s" % (key, pn, pv, '
> '.join(val.packages)))
> + if self.args.skip and val.skipped:
> + continue
>
> -if __name__ == "__main__":
> - sys.exit(main(sys.argv[1:]))
> + if self.args.members:
> + out = key
> + for member in self.args.members.split(','):
> + out += ": %s" % val.__dict__.get(member)
> + print("%s" % out)
> + else:
> + print("%s: %s" % (key, val.__dict__))
> + elif not self.args.recipe:
> + print("%s %s" % (key, val))
>
> +if __name__ == "__main__":
> + try:
> + dump = DumpCache()
> + ret = dump.main()
> + except Exception as esc:
> + ret = 1
> + import traceback
> + traceback.print_exc()
> + sys.exit(ret)
> diff --git
> a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.xml
> b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.xml
> index e4cc422..f1caaec 100644 ---
> a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.xml
> +++
> b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-execution.xml
> @@ -781,7 +781,7 @@ The code in
> <filename>meta/lib/oe/sstatesig.py</filename> shows two examples of
> this and also illustrates how you can insert your own policy into the
> system if so desired.
> - This file defines the two basic signature generators
> OpenEmbedded Core
> + This file defines the two basic signature generators
> OpenEmbedded-Core uses: "OEBasic" and "OEBasicHash".
> By default, there is a dummy "noop" signature handler
> enabled in BitBake. This means that behavior is unchanged from
> previous versions. diff --git
> a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml
> b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml
> index c721e86..29ae486 100644 ---
> a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml
> +++
> b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml @@
> -777,6 +777,43 @@ </para> </section>
> + <section id='repo-fetcher'>
> + <title>Repo Fetcher
> (<filename>repo://</filename>)</title> +
> + <para>
> + This fetcher submodule fetches code from
> + <filename>google-repo</filename> source control
> system.
> + The fetcher works by initiating and syncing sources
> of the
> + repository into
> + <link
> linkend='var-REPODIR'><filename>REPODIR</filename></link>,
> + which is usually
> + <link
> linkend='var-DL_DIR'><filename>DL_DIR</filename></link><filename>/repo</filename>.
> + </para>
> +
> + <para>
> + This fetcher supports the following parameters:
> + <itemizedlist>
> + <listitem><para>
> + <emphasis>"protocol":</emphasis>
> + Protocol to fetch the repository manifest
> (default: git).
> + </para></listitem>
> + <listitem><para>
> + <emphasis>"branch":</emphasis>
> + Branch or tag of repository to get (default:
> master).
> + </para></listitem>
> + <listitem><para>
> + <emphasis>"manifest":</emphasis>
> + Name of the manifest file (default:
> <filename>default.xml</filename>).
> + </para></listitem>
> + </itemizedlist>
> + Here are some example URLs:
> + <literallayout class='monospaced'>
> + SRC_URI =
> "repo://REPOROOT;protocol=git;branch=some_branch;manifest=my_manifest.xml"
> + SRC_URI =
> "repo://REPOROOT;protocol=file;branch=some_branch;manifest=my_manifest.xml"
> + </literallayout>
> + </para>
> + </section>
> +
> <section id='other-fetchers'>
> <title>Other Fetchers</title>
>
> @@ -796,9 +833,6 @@
> Secure Shell (<filename>ssh://</filename>)
> </para></listitem>
> <listitem><para>
> - Repo (<filename>repo://</filename>)
> - </para></listitem>
> - <listitem><para>
> OSC (<filename>osc://</filename>)
> </para></listitem>
> <listitem><para>
> diff --git
> a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml
> b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml index
> f1060e5..9076f0f 100644 ---
> a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml +++
> b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml @@
> -383,10 +383,10 @@ code separate from the general metadata used by
> BitBake. Thus, this example creates and uses a layer called
> "mylayer". <note>
> - You can find additional information on layers at
> - <ulink
> url='http://www.yoctoproject.org/docs/2.3/bitbake-user-manual/bitbake-user-manual.html#layers'></ulink>.
> - </note>
> - </para>
> + You can find additional information on layers in
> the
> + "<link linkend='layers'>Layers</link>" section.
> + </note></para>
> +
> <para>Minimally, you need a recipe file and a layer
> configuration file in your layer.
> The configuration file needs to be in the
> <filename>conf</filename> diff --git
> a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml
> b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml index
> eb45809..f7d312a 100644 ---
> a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml +++
> b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml @@
> -342,13 +342,14 @@ <para>
> When you name an append file, you can use the
> - wildcard character (%) to allow for matching recipe
> names.
> + "<filename>%</filename>" wildcard character to allow
> for matching
> + recipe names.
> For example, suppose you have an append file named
> as follows:
> <literallayout class='monospaced'>
> busybox_1.21.%.bbappend
> </literallayout>
> - That append file would match any
> <filename>busybox_1.21.x.bb</filename>
> + That append file would match any
> <filename>busybox_1.21.</filename><replaceable>x</replaceable><filename>.bb</filename>
> version of the recipe. So, the append file would match the following
> recipe names: <literallayout class='monospaced'>
> @@ -356,6 +357,14 @@
> busybox_1.21.2.bb
> busybox_1.21.3.bb
> </literallayout>
> + <note><title>Important</title>
> + The use of the "<filename>%</filename>" character
> + is limited in that it only works directly in
> front of the
> + <filename>.bbappend</filename> portion of the
> append file's
> + name.
> + You cannot use the wildcard character in any
> other
> + location of the name.
> + </note>
> If the <filename>busybox</filename> recipe was
> updated to <filename>busybox_1.3.0.bb</filename>, the append name
> would not match.
> @@ -564,8 +573,12 @@
> Writes the event log of the build to a
> bitbake event json file. Use '' (empty string) to assign the name
> automatically.
> - --runall=RUNALL Run the specified task for all build
> targets and their
> - dependencies.
> + --runall=RUNALL Run the specified task for any recipe
> in the taskgraph
> + of the specified target (even if it
> wouldn't otherwise
> + have run).
> + --runonly=RUNONLY Run only the specified task within the
> taskgraph of
> + the specified targets (and any task
> dependencies those
> + tasks may have).
> </literallayout>
> </para>
> </section>
> @@ -719,6 +732,163 @@
> </literallayout>
> </para>
> </section>
> +
> + <section id='executing-a-multiple-configuration-build'>
> + <title>Executing a Multiple Configuration
> Build</title> +
> + <para>
> + BitBake is able to build multiple images or
> packages
> + using a single command where the different
> targets
> + require different configurations (multiple
> configuration
> + builds).
> + Each target, in this scenario, is referred to as
> a
> + "multiconfig".
> + </para>
> +
> + <para>
> + To accomplish a multiple configuration build,
> you must
> + define each target's configuration separately
> using
> + a parallel configuration file in the build
> directory.
> + The location for these multiconfig configuration
> files
> + is specific.
> + They must reside in the current build directory
> in
> + a sub-directory of <filename>conf</filename>
> named
> + <filename>multiconfig</filename>.
> + Following is an example for two separate targets:
> + <imagedata
> fileref="figures/bb_multiconfig_files.png" align="center" width="4in"
> depth="3in" />
> + </para>
> +
> + <para>
> + The reason for this required file hierarchy
> + is because the <filename>BBPATH</filename>
> variable
> + is not constructed until the layers are parsed.
> + Consequently, using the configuration file as a
> + pre-configuration file is not possible unless it
> is
> + located in the current working directory.
> + </para>
> +
> + <para>
> + Minimally, each configuration file must define
> the
> + machine and the temporary directory BitBake uses
> + for the build.
> + Suggested practice dictates that you do not
> + overlap the temporary directories used during the
> + builds.
> + </para>
> +
> + <para>
> + Aside from separate configuration files for each
> + target, you must also enable BitBake to perform
> multiple
> + configuration builds.
> + Enabling is accomplished by setting the
> + <link
> linkend='var-BBMULTICONFIG'><filename>BBMULTICONFIG</filename></link>
> + variable in the <filename>local.conf</filename>
> + configuration file.
> + As an example, suppose you had configuration
> files
> + for <filename>target1</filename> and
> + <filename>target2</filename> defined in the build
> + directory.
> + The following statement in the
> + <filename>local.conf</filename> file both enables
> + BitBake to perform multiple configuration builds
> and
> + specifies the two multiconfigs:
> + <literallayout class='monospaced'>
> + BBMULTICONFIG = "target1 target2"
> + </literallayout>
> + </para>
> +
> + <para>
> + Once the target configuration files are in place
> and
> + BitBake has been enabled to perform multiple
> configuration
> + builds, use the following command form to start
> the
> + builds:
> + <literallayout class='monospaced'>
> + $ bitbake
> [multiconfig:<replaceable>multiconfigname</replaceable>:]<replaceable>target</replaceable>
> [[[multiconfig:<replaceable>multiconfigname</replaceable>:]<replaceable>target</replaceable>] ... ]
> + </literallayout>
> + Here is an example for two multiconfigs:
> + <filename>target1</filename> and
> + <filename>target2</filename>:
> + <literallayout class='monospaced'>
> + $ bitbake multiconfig:target1:<replaceable>target</replaceable>
> multiconfig:target2:<replaceable>target</replaceable>
> + </literallayout>
> + </para>
> + </section>
> +
> + <section id='bb-enabling-multiple-configuration-build-dependencies'>
> + <title>Enabling Multiple Configuration Build Dependencies</title>
> +
> + <para>
> + Sometimes dependencies can exist between targets
> + (multiconfigs) in a multiple configuration build.
> + For example, suppose that in order to build an
> image
> + for a particular architecture, the root
> filesystem of
> + another build for a different architecture needs
> to
> + exist.
> + In other words, the image for the first
> multiconfig depends
> + on the root filesystem of the second multiconfig.
> + This dependency is essentially that the task in
> the recipe
> + that builds one multiconfig is dependent on the
> + completion of the task in the recipe that builds
> + another multiconfig.
> + </para>
> +
> + <para>
> + To enable dependencies in a multiple
> configuration
> + build, you must declare the dependencies in the
> recipe
> + using the following statement form:
> + <literallayout class='monospaced'>
> + <replaceable>task_or_package</replaceable>[mcdepends] = "multiconfig:<replaceable>from_multiconfig</replaceable>:<replaceable>to_multiconfig</replaceable>:<replaceable>recipe_name</replaceable>:<replaceable>task_on_which_to_depend</replaceable>"
> + </literallayout>
> + To better show how to use this statement,
> consider an
> + example with two multiconfigs:
> <filename>target1</filename>
> + and <filename>target2</filename>:
> + <literallayout class='monospaced'>
> + <replaceable>image_task</replaceable>[mcdepends] = "multiconfig:target1:target2:<replaceable>image2</replaceable>:<replaceable>rootfs_task</replaceable>"
> + </literallayout>
> + In this example, the
> + <replaceable>from_multiconfig</replaceable> is
> "target1" and
> + the <replaceable>to_multiconfig</replaceable> is
> "target2".
> + Thus, the image whose recipe contains
> + <replaceable>image_task</replaceable> depends on the
> + completion of the <replaceable>rootfs_task</replaceable>
> + used to build out <replaceable>image2</replaceable>, which
> + is associated with the "target2" multiconfig.
> + </para>
> +
> + <para>
> + Once you set up this dependency, you can build the
> + "target1" multiconfig using a BitBake command as
> follows:
> + <literallayout class='monospaced'>
> + $ bitbake multiconfig:target1:<replaceable>image1</replaceable>
> + </literallayout>
> + This command executes all the tasks needed to
> create
> + <replaceable>image1</replaceable> for the
> "target1"
> + multiconfig.
> + Because of the dependency, BitBake also executes
> through
> + the <replaceable>rootfs_task</replaceable> for
> the "target2"
> + multiconfig build.
> + </para>
> +
> + <para>
> + Having a recipe depend on the root filesystem of
> another
> + build might not seem that useful.
> + Consider this change to the statement in the
> + <replaceable>image1</replaceable> recipe:
> + <literallayout class='monospaced'>
> + <replaceable>image_task</replaceable>[mcdepends] = "multiconfig:target1:target2:<replaceable>image2</replaceable>:<replaceable>image_task</replaceable>"
> + </literallayout>
> + In this case, BitBake must create
> + <replaceable>image2</replaceable> for the
> "target2"
> + build since the "target1" build depends on it.
> + </para>
> +
> + <para>
> + Because "target1" and "target2" are enabled for
> multiple
> + configuration builds and have separate
> configuration
> + files, BitBake places the artifacts for each
> build in the
> + respective temporary build directories.
> + </para>
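> +
> + <para>
> + Pulling the pieces together, a complete (purely
> + illustrative) dependency line for the "target1" image
> + recipe might look like the following, where
> + <filename>do_image</filename>,
> + <filename>core-image-minimal</filename>, and
> + <filename>do_image_complete</filename> are example
> + names rather than names BitBake requires:
> + <literallayout class='monospaced'>
> + do_image[mcdepends] = "multiconfig:target1:target2:core-image-minimal:do_image_complete"
> + </literallayout>
> + </para>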
> + </section>
> </section>
> </section>
> </chapter>
> diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml
> index f0cfffe..2490f6e 100644
> --- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml
> +++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml
> @@ -342,7 +342,7 @@
> <para>
> When you use this syntax, BitBake expects one or more strings.
> - Surrounding spaces are removed as well.
> + Surrounding spaces and spacing are preserved.
> Here is an example:
> <literallayout class='monospaced'>
> FOO = "123 456 789 123456 123 456 123 456"
> @@ -352,8 +352,9 @@
> FOO2_remove = "abc def"
> </literallayout>
> The variable <filename>FOO</filename> becomes
> - "789 123456" and <filename>FOO2</filename> becomes
> - "ghi abcdef".
> + " 789 123456 "
> + and <filename>FOO2</filename> becomes
> + " ghi abcdef ".
> </para>
>
> <para>
> @@ -1929,6 +1930,38 @@
> not careful.
> </note>
> </para></listitem>
> +
> <listitem><para><emphasis><filename>[number_threads]</filename>:</emphasis>
> + Limits tasks to a specific number of
> simultaneous threads
> + during execution.
> + This varflag is useful when your build host has
> a large number
> + of cores but certain tasks need to be
> rate-limited due to various
> + kinds of resource constraints (e.g. to avoid
> network throttling).
> + <filename>number_threads</filename> works
> similarly to the
> + <link
> linkend='var-BB_NUMBER_THREADS'><filename>BB_NUMBER_THREADS</filename></link>
> + variable but is task-specific.</para>
> +
> + <para>Set the value globally.
> + For example, the following makes sure the
> + <filename>do_fetch</filename> task uses no more
> than two
> + simultaneous execution threads:
> + <literallayout class='monospaced'>
> + do_fetch[number_threads] = "2"
> + </literallayout>
> + <note><title>Warnings</title>
> + <itemizedlist>
> + <listitem><para>
> + Setting the varflag in individual
> recipes rather
> + than globally can result in
> unpredictable behavior.
> + </para></listitem>
> + <listitem><para>
> + Setting the varflag to a value
> greater than the
> + value used in the
> <filename>BB_NUMBER_THREADS</filename>
> + variable causes
> <filename>number_threads</filename>
> + to have no effect.
> + </para></listitem>
> + </itemizedlist>
> + </note>
> + </para></listitem>
> <listitem><para><emphasis><filename>[postfuncs]</filename>:</emphasis>
> List of functions to call after the completion
> of the task. </para></listitem>
> @@ -2652,48 +2685,97 @@
> </para>
>
> <para>
> - This list is a place holder of content existed from
> previous work
> - on the manual.
> - Some or all of it probably needs integrated into the
> subsections
> - that make up this section.
> - For now, I have just provided a short glossary-like
> description
> - for each variable.
> - Ultimately, this list goes away.
> + These checksums are stored in
> + <link
> linkend='var-STAMP'><filename>STAMP</filename></link>.
> + You can examine the checksums using the following
> BitBake command:
> + <literallayout class='monospaced'>
> + $ bitbake-dumpsigs
> + </literallayout>
> + This command returns the signature data in a readable
> format
> + that allows you to examine the inputs used when the
> + OpenEmbedded build system generates signatures.
> + For example, using <filename>bitbake-dumpsigs</filename>
> + allows you to examine the <filename>do_compile</filename>
> + task's “sigdata” for a C application (e.g.
> + <filename>bash</filename>).
> + Running the command also reveals that the “CC” variable
> is part of
> + the inputs that are hashed.
> + Any changes to this variable would invalidate the stamp
> and
> + cause the <filename>do_compile</filename> task to run.
> + </para>
> +
> + <para>
> + The following list describes related variables:
> <itemizedlist>
> - <listitem><para><filename>STAMP</filename>:
> - The base path to create stamp
> files.</para></listitem>
> - <listitem><para><filename>STAMPCLEAN</filename>
> - Again, the base path to create stamp files but
> can use wildcards
> - for matching a range of files for clean
> operations.
> - </para></listitem>
> -
> <listitem><para><filename>BB_STAMP_WHITELIST</filename>
> - Lists stamp files that are looked at when the
> stamp policy
> - is "whitelist".
> - </para></listitem>
> - <listitem><para><filename>BB_STAMP_POLICY</filename>
> - Defines the mode for comparing timestamps of
> stamp files.
> - </para></listitem>
> -
> <listitem><para><filename>BB_HASHCHECK_FUNCTION</filename>
> + <listitem><para>
> + <link
> linkend='var-BB_HASHCHECK_FUNCTION'><filename>BB_HASHCHECK_FUNCTION</filename></link>:
> Specifies the name of the function to call during the "setscene" part
> of the task's execution in order to validate the list of task hashes.
> </para></listitem>
> -
> <listitem><para><filename>BB_SETSCENE_VERIFY_FUNCTION2</filename>
> + <listitem><para>
> + <link
> linkend='var-BB_SETSCENE_DEPVALID'><filename>BB_SETSCENE_DEPVALID</filename></link>:
> + Specifies a function BitBake calls that
> determines
> + whether BitBake requires a setscene dependency to
> + be met.
> + </para></listitem>
> + <listitem><para>
> + <link
> linkend='var-BB_SETSCENE_VERIFY_FUNCTION2'><filename>BB_SETSCENE_VERIFY_FUNCTION2</filename></link>:
> Specifies a function to call that verifies the list of planned task
> execution before the main task execution happens.
> </para></listitem>
> -
> <listitem><para><filename>BB_SETSCENE_DEPVALID</filename>
> - Specifies a function BitBake calls that
> determines
> - whether BitBake requires a setscene dependency to
> - be met.
> + <listitem><para>
> + <link
> linkend='var-BB_STAMP_POLICY'><filename>BB_STAMP_POLICY</filename></link>:
> + Defines the mode for comparing timestamps of
> stamp files.
> + </para></listitem>
> + <listitem><para>
> + <link
> linkend='var-BB_STAMP_WHITELIST'><filename>BB_STAMP_WHITELIST</filename></link>:
> + Lists stamp files that are looked at when the
> stamp policy
> + is "whitelist".
> </para></listitem>
> - <listitem><para><filename>BB_TASKHASH</filename>
> + <listitem><para>
> + <link
> linkend='var-BB_TASKHASH'><filename>BB_TASKHASH</filename></link>:
> Within an executing task, this variable holds the hash of the task as
> returned by the currently enabled signature generator.
> </para></listitem>
> + <listitem><para>
> + <link
> linkend='var-STAMP'><filename>STAMP</filename></link>:
> + The base path to create stamp files.
> + </para></listitem>
> + <listitem><para>
> + <link
> linkend='var-STAMPCLEAN'><filename>STAMPCLEAN</filename></link>:
> + Again, the base path to create stamp files but
> can use wildcards
> + for matching a range of files for clean
> operations.
> + </para></listitem>
> </itemizedlist>
> </para>
> </section>
> +
> + <section id='wildcard-support-in-variables'>
> + <title>Wildcard Support in Variables</title>
> +
> + <para>
> + Support for wildcard use in variables varies depending
> on the
> + context in which it is used.
> + For example, some variables and file names allow limited
> use of
> + wildcards through the "<filename>%</filename>" and
> + "<filename>*</filename>" characters.
> + Other variables or names support Python's
> + <ulink
> url='https://docs.python.org/3/library/glob.html'><filename>glob</filename></ulink>
> + syntax,
> + <ulink
> url='https://docs.python.org/3/library/fnmatch.html#module-fnmatch'><filename>fnmatch</filename></ulink>
> + syntax, or
> + <ulink
> url='https://docs.python.org/3/library/re.html#re'><filename>Regular
> Expression (re)</filename></ulink>
> + syntax.
> + </para>
> +
> + <para>
> + For variables that have wildcard support, the
> + documentation describes which form of wildcard, its
> + use, and its limitations.
> + </para>
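> +
> + <para>
> + As a quick illustration (the layer paths are
> + hypothetical), compare the three forms side by side:
> + <literallayout class='monospaced'>
> + BBFILES += "${LAYERDIR}/recipes-*/*/*.bb" # glob
> + BBMASK += "meta-foo/recipes-bar/" # Python re
> + PREFERRED_VERSION_linux-yocto = "4.12%" # "%" wildcard
> + </literallayout>
> + </para>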
> + </section>
> +
> </chapter>
> diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml
> index d89e123..a84b2bc 100644
> --- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml
> +++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml
> @@ -78,7 +78,7 @@
> </para>
> <para>
> - In OpenEmbedded Core, <filename>ASSUME_PROVIDED</filename>
> + In OpenEmbedded-Core, <filename>ASSUME_PROVIDED</filename>
> mostly specifies native tools that should not be built.
> An example is <filename>git-native</filename>, which when
> specified allows for the Git binary from the host to
> @@ -115,7 +115,8 @@
> is either not set or set to "0".
> </para></listitem>
> <listitem><para>
> - Limited support for wildcard matching against the
> + Limited support for the "<filename>*</filename>"
> + wildcard character for matching against the
> beginning of host names exists.
> For example, the following setting
> matches <filename>git.gnu.org</filename>,
> @@ -124,6 +125,20 @@
> <literallayout class='monospaced'>
> BB_ALLOWED_NETWORKS = "*.gnu.org"
> </literallayout>
> + <note><title>Important</title>
> + <para>The use of the
> "<filename>*</filename>"
> + character only works at the
> beginning of
> + a host name and it must be isolated
> from
> + the remainder of the host name.
> + You cannot use the wildcard
> character in any
> + other location of the name or
> combined with
> + the front part of the name.</para>
> +
> + <para>For example,
> + <filename>*.foo.bar</filename> is
> supported,
> + while
> <filename>*aa.foo.bar</filename> is not.
> + </para>
> + </note>
> </para></listitem>
> <listitem><para>
> Mirrors not in the host list are skipped and
> @@ -646,10 +661,10 @@
> <glossdef>
> <para>
> Contains the name of the currently executing
> task.
> - The value does not include the "do_" prefix.
> + The value includes the "do_" prefix.
> For example, if the currently executing task is
> <filename>do_config</filename>, the value is
> - "config".
> + "do_config".
> </para>
> </glossdef>
> </glossentry>
> @@ -964,7 +979,7 @@
> Allows you to extend a recipe so that it builds
> variants of the software.
> Some examples of these variants for recipes from
> the
> - OpenEmbedded Core metadata are "natives" such as
> + OpenEmbedded-Core metadata are "natives" such as
> <filename>quilt-native</filename>, which is a
> copy of Quilt built to run on the build system; "crosses" such
> as <filename>gcc-cross</filename>, which is a compiler
> @@ -980,7 +995,7 @@
> amount of code, it usually is as simple as
> adding the variable to your recipe.
> Here are two examples.
> - The "native" variants are from the OpenEmbedded
> Core
> + The "native" variants are from the
> OpenEmbedded-Core metadata:
> <literallayout class='monospaced'>
> BBCLASSEXTEND =+ "native nativesdk"
> @@ -1082,7 +1097,19 @@
>
> <glossentry id='var-BBFILES'><glossterm>BBFILES</glossterm>
> <glossdef>
> - <para>List of recipe files BitBake uses to build
> software.</para>
> + <para>
> + A space-separated list of recipe files BitBake
> uses to
> + build software.
> + </para>
> +
> + <para>
> + When specifying recipe files, you can pattern
> match using
> + Python's
> + <ulink
> url='https://docs.python.org/3/library/glob.html'><filename>glob</filename></ulink>
> + syntax.
> + For details on the syntax, see the documentation
> by
> + following the previous link.
> + </para>
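> +
> + <para>
> + For example, the following (an illustrative layer
> + layout, not a required one) collects every recipe
> + and append file two directory levels below the
> + layer's "recipes-*" directories:
> + <literallayout class='monospaced'>
> + BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"
> + </literallayout>
> + </para>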
> </glossdef>
> </glossentry>
>
> @@ -1166,15 +1193,19 @@
> match any of the expressions.
> It is as if BitBake does not see them at all.
> Consequently, matching files are not parsed or
> otherwise
> - used by BitBake.</para>
> + used by BitBake.
> + </para>
> +
> <para>
> The values you provide are passed to Python's
> regular expression compiler.
> + Consequently, the syntax follows Python's Regular
> + Expression (re) syntax.
> The expressions are compared against the full
> paths to the files.
> For complete syntax information, see Python's
> documentation at
> - <ulink url='http://docs.python.org/release/2.3/lib/re-syntax.html'></ulink>.
> + <ulink url='http://docs.python.org/3/library/re.html#re'></ulink>.
> </para>
>
> <para>
> @@ -1205,6 +1236,45 @@
> </glossdef>
> </glossentry>
>
> + <glossentry
> id='var-BBMULTICONFIG'><glossterm>BBMULTICONFIG</glossterm>
> + <info>
> + BBMULTICONFIG[doc] = "Enables BitBake to perform multiple configuration builds and lists each separate configuration (multiconfig)."
> + </info>
> + <glossdef>
> + <para role="glossdeffirst">
> +<!-- <para role="glossdeffirst"><imagedata
> fileref="figures/define-generic.png" /> -->
> + Enables BitBake to perform multiple
> configuration builds
> + and lists each separate configuration
> (multiconfig).
> + You can use this variable to cause BitBake to
> build
> + multiple targets where each target has a separate
> + configuration.
> + Define <filename>BBMULTICONFIG</filename> in your
> + <filename>conf/local.conf</filename>
> configuration file.
> + </para>
> +
> + <para>
> + As an example, the following line specifies three
> + multiconfigs, each having a separate
> configuration file:
> + <literallayout class='monospaced'>
> + BBMULTICONFIG = "configA configB configC"
> + </literallayout>
> + Each configuration file you use must reside in
> the
> + build directory within a directory named
> + <filename>conf/multiconfig</filename> (e.g.
> +
> <replaceable>build_directory</replaceable><filename>/conf/multiconfig/configA.conf</filename>).
> + </para>
> +
> + <para>
> + For information on how to use
> + <filename>BBMULTICONFIG</filename> in an
> environment that
> + supports building targets with multiple
> configurations,
> + see the
> + "<link
> linkend='executing-a-multiple-configuration-build'>Executing a
> Multiple Configuration Build</link>"
> + section.
> + </para>
> + </glossdef>
> + </glossentry>
> +
> <glossentry id='var-BBPATH'><glossterm>BBPATH</glossterm>
> <glossdef>
> <para>
> @@ -1894,15 +1964,27 @@
> you want to select, and you should set
> <link
> linkend='var-PV'><filename>PV</filename></link> accordingly for
> precedence.
> - You can use the "<filename>%</filename>"
> character as a
> - wildcard to match any number of characters,
> which can be
> - useful when specifying versions that contain
> long revision
> - numbers that could potentially change.
> + </para>
> +
> + <para>
> + The <filename>PREFERRED_VERSION</filename>
> variable
> + supports limited wildcard use through the
> + "<filename>%</filename>" character.
> + You can use the character to match any number of
> + characters, which can be useful when specifying
> versions
> + that contain long revision numbers that
> potentially change. Here are two examples:
> <literallayout class='monospaced'>
> PREFERRED_VERSION_python = "2.7.3"
> PREFERRED_VERSION_linux-yocto = "4.12%"
> </literallayout>
> + <note><title>Important</title>
> + The use of the "<filename>%</filename>"
> character
> + is limited in that it only works at the end
> of the
> + string.
> + You cannot use the wildcard character in any
> other
> + location of the string.
> + </note>
> </para>
> </glossdef>
> </glossentry>
> @@ -2089,6 +2171,16 @@
> </glossdef>
> </glossentry>
>
> + <glossentry id='var-REPODIR'><glossterm>REPODIR</glossterm>
> + <glossdef>
> + <para>
> + The directory in which a local copy of a
> + <filename>google-repo</filename> directory is
> stored
> + when it is synced.
> + </para>
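> +
> + <para>
> + For example (assuming the repo fetcher falls back to
> + a <filename>DL_DIR</filename>-based default in the
> + same way the bzr and cvs fetchers changed in this
> + series do):
> + <literallayout class='monospaced'>
> + REPODIR ?= "${DL_DIR}/repo"
> + </literallayout>
> + </para>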
> + </glossdef>
> + </glossentry>
> +
> <glossentry
> id='var-RPROVIDES'><glossterm>RPROVIDES</glossterm> <glossdef>
> <para>
> diff --git a/bitbake/doc/bitbake-user-manual/bitbake-user-manual.xml b/bitbake/doc/bitbake-user-manual/bitbake-user-manual.xml
> index d23e3ef..d793265 100644
> --- a/bitbake/doc/bitbake-user-manual/bitbake-user-manual.xml
> +++ b/bitbake/doc/bitbake-user-manual/bitbake-user-manual.xml
> @@ -56,7 +56,7 @@
> -->
>
> <copyright>
> - <year>2004-2017</year>
> + <year>2004-2018</year>
> <holder>Richard Purdie</holder>
> <holder>Chris Larson</holder>
> <holder>and Phil Blundell</holder>
> diff --git a/bitbake/doc/bitbake-user-manual/figures/bb_multiconfig_files.png b/bitbake/doc/bitbake-user-manual/figures/bb_multiconfig_files.png
> new file mode 100644
> index 0000000..e69de29
> diff --git a/bitbake/lib/bb/COW.py b/bitbake/lib/bb/COW.py
> index bec6208..7817473 100644
> --- a/bitbake/lib/bb/COW.py
> +++ b/bitbake/lib/bb/COW.py
> @@ -150,7 +150,7 @@ class COWDictMeta(COWMeta):
> yield value
> if type == "items":
> yield (key, value)
> - raise StopIteration()
> + return
>
> def iterkeys(cls):
> return cls.iter("keys")
> diff --git a/bitbake/lib/bb/__init__.py b/bitbake/lib/bb/__init__.py
> index cd2f157..4bc47c8 100644
> --- a/bitbake/lib/bb/__init__.py
> +++ b/bitbake/lib/bb/__init__.py
> @@ -21,7 +21,7 @@
> # with this program; if not, write to the Free Software Foundation, Inc.,
> # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
>
> -__version__ = "1.37.0"
> +__version__ = "1.40.0"
>
> import sys
> if sys.version_info < (3, 4, 0):
> @@ -63,6 +63,10 @@ class BBLogger(Logger):
> def verbose(self, msg, *args, **kwargs):
> return self.log(logging.INFO - 1, msg, *args, **kwargs)
>
> + def verbnote(self, msg, *args, **kwargs):
> + return self.log(logging.INFO + 2, msg, *args, **kwargs)
> +
> +
> logging.raiseExceptions = False
> logging.setLoggerClass(BBLogger)
>
> @@ -93,6 +97,18 @@ def debug(lvl, *args):
> def note(*args):
> mainlogger.info(''.join(args))
>
> +#
> +# A higher priority note which will show on the console but isn't a warning
> +#
> +# Something is happening that the user should be aware of, but they
> +# probably did something to make it happen
> +#
> +def verbnote(*args):
> + mainlogger.verbnote(''.join(args))
> +
> +#
> +# Warnings - things the user likely needs to pay attention to and fix
> +#
> def warn(*args):
> mainlogger.warning(''.join(args))
>
> diff --git a/bitbake/lib/bb/build.py b/bitbake/lib/bb/build.py
> index 4631abd..3e2a94e 100644
> --- a/bitbake/lib/bb/build.py
> +++ b/bitbake/lib/bb/build.py
> @@ -41,8 +41,6 @@ from bb import data, event, utils
> bblogger = logging.getLogger('BitBake')
> logger = logging.getLogger('BitBake.Build')
>
> -NULL = open(os.devnull, 'r+')
> -
> __mtime_cache = {}
>
> def cached_mtime_noerror(f):
> @@ -533,7 +531,6 @@ def _exec_task(fn, task, d, quieterr):
> self.triggered = True
>
> # Handle logfiles
> - si = open('/dev/null', 'r')
> try:
> bb.utils.mkdirhier(os.path.dirname(logfn))
> logfile = open(logfn, 'w')
> @@ -547,7 +544,8 @@ def _exec_task(fn, task, d, quieterr):
> ose = [os.dup(sys.stderr.fileno()), sys.stderr.fileno()]
>
> # Replace those fds with our own
> - os.dup2(si.fileno(), osi[1])
> + with open('/dev/null', 'r') as si:
> + os.dup2(si.fileno(), osi[1])
> os.dup2(logfile.fileno(), oso[1])
> os.dup2(logfile.fileno(), ose[1])
>
> @@ -608,7 +606,6 @@ def _exec_task(fn, task, d, quieterr):
> os.close(osi[0])
> os.close(oso[0])
> os.close(ose[0])
> - si.close()
>
> logfile.close()
> if os.path.exists(logfn) and os.path.getsize(logfn) == 0:
> @@ -803,6 +800,7 @@ def add_tasks(tasklist, d):
> if name in flags:
> deptask = d.expand(flags[name])
> task_deps[name][task] = deptask
> + getTask('mcdepends')
> getTask('depends')
> getTask('rdepends')
> getTask('deptask')
> diff --git a/bitbake/lib/bb/cache.py b/bitbake/lib/bb/cache.py
> index 86ce0e7..258d679 100644
> --- a/bitbake/lib/bb/cache.py
> +++ b/bitbake/lib/bb/cache.py
> @@ -37,7 +37,7 @@ import bb.utils
>
> logger = logging.getLogger("BitBake.Cache")
>
> -__cache_version__ = "151"
> +__cache_version__ = "152"
>
> def getCacheFile(path, filename, data_hash):
> return os.path.join(path, filename + "." + data_hash)
> @@ -395,7 +395,7 @@ class Cache(NoCache):
> self.has_cache = True
> self.cachefile = getCacheFile(self.cachedir, "bb_cache.dat",
> self.data_hash)
> - logger.debug(1, "Using cache in '%s'", self.cachedir)
> + logger.debug(1, "Cache dir: %s", self.cachedir)
> bb.utils.mkdirhier(self.cachedir)
>
> cache_ok = True
> @@ -408,6 +408,8 @@ class Cache(NoCache):
> self.load_cachefile()
> elif os.path.isfile(self.cachefile):
> logger.info("Out of date cache found, rebuilding...")
> + else:
> + logger.debug(1, "Cache file %s not found, building..." %
> self.cachefile)
> def load_cachefile(self):
> cachesize = 0
> @@ -424,6 +426,7 @@ class Cache(NoCache):
>
> for cache_class in self.caches_array:
> cachefile = getCacheFile(self.cachedir,
> cache_class.cachefile, self.data_hash)
> + logger.debug(1, 'Loading cache file: %s' % cachefile)
> with open(cachefile, "rb") as cachefile:
> pickled = pickle.Unpickler(cachefile)
> # Check cache version information
> diff --git a/bitbake/lib/bb/checksum.py b/bitbake/lib/bb/checksum.py
> index 8428920..4e1598f 100644
> --- a/bitbake/lib/bb/checksum.py
> +++ b/bitbake/lib/bb/checksum.py
> @@ -97,6 +97,8 @@ class FileChecksumCache(MultiProcessCache):
>
> def checksum_dir(pth):
> # Handle directories recursively
> + if pth == "/":
> + bb.fatal("Refusing to checksum /")
> dirchecksums = []
> for root, dirs, files in os.walk(pth):
> for name in files:
> diff --git a/bitbake/lib/bb/codeparser.py b/bitbake/lib/bb/codeparser.py
> index 530f44e..ddd1b97 100644
> --- a/bitbake/lib/bb/codeparser.py
> +++ b/bitbake/lib/bb/codeparser.py
> @@ -140,7 +140,7 @@ class CodeParserCache(MultiProcessCache):
> # so that an existing cache gets invalidated. Additionally you'll need
> # to increment __cache_version__ in cache.py in order to ensure that old
> # recipe caches don't trigger "Taskhash mismatch" errors.
> - CACHE_VERSION = 9
> + CACHE_VERSION = 10
>
> def __init__(self):
> MultiProcessCache.__init__(self)
> @@ -214,7 +214,7 @@ class BufferedLogger(Logger):
> self.buffer = []
>
> class PythonParser():
> - getvars = (".getVar", ".appendVar", ".prependVar")
> + getvars = (".getVar", ".appendVar", ".prependVar", "oe.utils.conditional")
> getvarflags = (".getVarFlag", ".appendVarFlag", ".prependVarFlag")
> containsfuncs = ("bb.utils.contains", "base_contains")
> containsanyfuncs = ("bb.utils.contains_any", "bb.utils.filter")
> diff --git a/bitbake/lib/bb/cooker.py b/bitbake/lib/bb/cooker.py
> index cd365f7..71a0eba 100644
> --- a/bitbake/lib/bb/cooker.py
> +++ b/bitbake/lib/bb/cooker.py
> @@ -516,6 +516,8 @@ class BBCooker:
> fn = runlist[0][3]
> else:
> envdata = self.data
> + data.expandKeys(envdata)
> + parse.ast.runAnonFuncs(envdata)
>
> if fn:
> try:
> @@ -536,7 +538,6 @@ class BBCooker:
> logger.plain(env.getvalue())
>
> # emit the metadata which isnt valid shell
> - data.expandKeys(envdata)
> for e in sorted(envdata.keys()):
> if envdata.getVarFlag(e, 'func', False) and envdata.getVarFlag(e, 'python', False):
> logger.plain("\npython %s ()\n{\n%s}\n", e, envdata.getVar(e, False))
> @@ -608,7 +609,14 @@ class BBCooker:
> k2 = k.split(":do_")
> k = k2[0]
> ktask = k2[1]
> - taskdata[mc].add_provider(localdata[mc], self.recipecaches[mc], k)
> + if mc:
> + # Provider might be from another mc
> + for mcavailable in self.multiconfigs:
> + # The first element is empty
> + if mcavailable:
> + taskdata[mcavailable].add_provider(localdata[mcavailable], self.recipecaches[mcavailable], k)
> + else:
> + taskdata[mc].add_provider(localdata[mc], self.recipecaches[mc], k)
> current += 1
> if not ktask.startswith("do_"):
> ktask = "do_%s" % ktask
> @@ -619,6 +627,27 @@ class BBCooker:
> runlist.append([mc, k, ktask, fn])
> bb.event.fire(bb.event.TreeDataPreparationProgress(current,
> len(fulltargetlist)), self.data)
> + mcdeps = taskdata[mc].get_mcdepends()
> + # No need to check providers if there are no mcdeps or not an mc build
> + if mcdeps and mc:
> + # Make sure we can provide the multiconfig dependency
> + seen = set()
> + new = True
> + while new:
> + new = False
> + for mc in self.multiconfigs:
> + for k in mcdeps:
> + if k in seen:
> + continue
> + l = k.split(':')
> + depmc = l[2]
> + if depmc not in self.multiconfigs:
> + bb.fatal("Multiconfig dependency %s depends on nonexistent mc configuration %s" % (k,depmc))
> + else:
> + logger.debug(1, "Adding providers for multiconfig dependency %s" % l[3])
> + taskdata[depmc].add_provider(localdata[depmc], self.recipecaches[depmc], l[3])
> + seen.add(k)
> + new = True
> for mc in self.multiconfigs:
> taskdata[mc].add_unresolved(localdata[mc],
> self.recipecaches[mc])
> @@ -705,8 +734,8 @@ class BBCooker:
> if not dotname in depend_tree["tdepends"]:
> depend_tree["tdepends"][dotname] = []
> for dep in rq.rqdata.runtaskentries[tid].depends:
> - (depmc, depfn, deptaskname, deptaskfn) = bb.runqueue.split_tid_mcfn(dep)
> - deppn = self.recipecaches[mc].pkg_fn[deptaskfn]
> + (depmc, depfn, _, deptaskfn) = bb.runqueue.split_tid_mcfn(dep)
> + deppn = self.recipecaches[depmc].pkg_fn[deptaskfn]
> depend_tree["tdepends"][dotname].append("%s.%s" % (deppn, bb.runqueue.taskname_from_tid(dep)))
> if taskfn not in seen_fns:
> seen_fns.append(taskfn)
> @@ -1170,6 +1199,7 @@ class BBCooker:
> elif regex == "":
> parselog.debug(1, "BBFILE_PATTERN_%s is empty" % c)
> errors = False
> + continue
> else:
> try:
> cre = re.compile(regex)
> @@ -1564,7 +1594,7 @@ class BBCooker:
> pkgs_to_build.append(t)
>
> if 'universe' in pkgs_to_build:
> - parselog.warning("The \"universe\" target is only intended for testing and may produce errors.")
> + parselog.verbnote("The \"universe\" target is only intended for testing and may produce errors.")
> parselog.debug(1, "collating packages for \"universe\"")
> pkgs_to_build.remove('universe')
> for mc in self.multiconfigs:
> @@ -1603,8 +1633,6 @@ class BBCooker:
>
> if self.parser:
> self.parser.shutdown(clean=not force, force=force)
> - self.notifier.stop()
> - self.confignotifier.stop()
>
> def finishcommand(self):
> self.state = state.initial
> @@ -1633,7 +1661,10 @@ class CookerExit(bb.event.Event):
> class CookerCollectFiles(object):
> def __init__(self, priorities):
> self.bbappends = []
> - self.bbfile_config_priorities = priorities
> + # Priorities is a list of tuples, with the second element as the pattern.
> + # We need to sort the list with the longest pattern first, and so on to
> + # the shortest. This allows nested layers to be properly evaluated.
> + self.bbfile_config_priorities = sorted(priorities, key=lambda tup: tup[1], reverse=True)
> def calc_bbfile_priority( self, filename, matched = None ):
> for _, _, regex, pri in self.bbfile_config_priorities:
> @@ -1807,21 +1838,25 @@ class CookerCollectFiles(object):
> realfn, cls, mc = bb.cache.virtualfn2realfn(p)
> priorities[p] = self.calc_bbfile_priority(realfn,
> matched)
> - # Don't show the warning if the BBFILE_PATTERN did match .bbappend files
> unmatched = set()
> for _, _, regex, pri in self.bbfile_config_priorities:
> if not regex in matched:
> unmatched.add(regex)
>
> - def findmatch(regex):
> + # Don't show the warning if the BBFILE_PATTERN did
> match .bbappend files
> + def find_bbappend_match(regex):
> for b in self.bbappends:
> (bbfile, append) = b
> if regex.match(append):
> + # If the bbappend is already matched by the "matched" set, return False
> + for matched_regex in matched:
> + if matched_regex.match(append):
> + return False
> return True
> return False
>
> for unmatch in unmatched.copy():
> - if findmatch(unmatch):
> + if find_bbappend_match(unmatch):
> unmatched.remove(unmatch)
>
> for collection, pattern, regex, _ in
> self.bbfile_config_priorities: diff --git
> a/bitbake/lib/bb/cookerdata.py b/bitbake/lib/bb/cookerdata.py index
> fab47c7..5df66e6 100644 --- a/bitbake/lib/bb/cookerdata.py
> +++ b/bitbake/lib/bb/cookerdata.py
> @@ -143,7 +143,8 @@ class CookerConfiguration(object):
> self.writeeventlog = False
> self.server_only = False
> self.limited_deps = False
> - self.runall = None
> + self.runall = []
> + self.runonly = []
>
> self.env = {}
>
> @@ -395,6 +396,8 @@ class CookerDataBuilder(object):
> if compat and not (compat & layerseries):
> bb.fatal("Layer %s is not compatible with the core layer which only supports these series: %s (layer is compatible with %s)" % (c, " ".join(layerseries), " ".join(compat)))
> + elif not compat and not data.getVar("BB_WORKERCONTEXT"):
> + bb.warn("Layer %s should set LAYERSERIES_COMPAT_%s in its conf/layer.conf file to list the core layer names it is compatible with." % (c, c))
> if not data.getVar("BBPATH"):
> msg = "The BBPATH variable is not set"
> diff --git a/bitbake/lib/bb/daemonize.py b/bitbake/lib/bb/daemonize.py
> index 8300d1d..c937675 100644
> --- a/bitbake/lib/bb/daemonize.py
> +++ b/bitbake/lib/bb/daemonize.py
> @@ -16,6 +16,10 @@ def createDaemon(function, logfile):
> background as a daemon, returning control to the caller.
> """
>
> + # Ensure stdout/stderr are flushed before forking to avoid duplicate output
> + sys.stdout.flush()
> + sys.stderr.flush()
> +
> try:
> # Fork a child process so the parent can exit. This returns control to
> # the command-line or shell. It also guarantees that the child will not
> @@ -49,8 +53,8 @@ def createDaemon(function, logfile):
> # exit() or _exit()?
> # _exit is like exit(), but it doesn't call any functions registered
> # with atexit (and on_exit) or any registered signal handlers. It also
> - # closes any open file descriptors. Using exit() may cause all stdio
> - # streams to be flushed twice and any temporary files may be unexpectedly
> + # closes any open file descriptors, but doesn't flush any buffered output.
> + # Using exit() may cause any temporary files to be unexpectedly
> # removed. It's therefore recommended that child branches of a fork()
> # and the parent branch(es) of a daemon use _exit().
> os._exit(0)
> @@ -61,17 +65,19 @@ def createDaemon(function, logfile):
> # The second child.
>
> # Replace standard fds with our own
> - si = open('/dev/null', 'r')
> - os.dup2(si.fileno(), sys.stdin.fileno())
> + with open('/dev/null', 'r') as si:
> + os.dup2(si.fileno(), sys.stdin.fileno())
>
> try:
> so = open(logfile, 'a+')
> - se = so
> os.dup2(so.fileno(), sys.stdout.fileno())
> - os.dup2(se.fileno(), sys.stderr.fileno())
> + os.dup2(so.fileno(), sys.stderr.fileno())
> except io.UnsupportedOperation:
> sys.stdout = open(logfile, 'a+')
> - sys.stderr = sys.stdout
> +
> + # Have stdout and stderr be the same so log output matches chronologically
> + # and there aren't two separate buffers
> + sys.stderr = sys.stdout
>
> try:
> function()
> @@ -79,4 +85,9 @@ def createDaemon(function, logfile):
> traceback.print_exc()
> finally:
> bb.event.print_ui_queue()
> + # os._exit() doesn't flush open files like sys.exit() does. Manually flush
> + # stdout and stderr so that any logging output will be seen, particularly
> + # exception tracebacks.
> + sys.stdout.flush()
> + sys.stderr.flush()
> os._exit(0)
> diff --git a/bitbake/lib/bb/data.py b/bitbake/lib/bb/data.py
> index 80a7879..d66d98c 100644
> --- a/bitbake/lib/bb/data.py
> +++ b/bitbake/lib/bb/data.py
> @@ -38,6 +38,7 @@ the speed is more critical here.
> # Based on functions from the base bb module, Copyright 2003 Holger
> Schurig
> import sys, os, re
> +import hashlib
> if sys.argv[0][-5:] == "pydoc":
> path = os.path.dirname(os.path.dirname(sys.argv[1]))
> else:
> @@ -283,14 +284,12 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
> try:
> if key[-1] == ']':
> vf = key[:-1].split('[')
> - value = d.getVarFlag(vf[0], vf[1], False)
> - parser = d.expandWithRefs(value, key)
> + value, parser = d.getVarFlag(vf[0], vf[1], False, retparser=True)
> deps |= parser.references
> deps = deps | (keys & parser.execs)
> return deps, value
> varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
> vardeps = varflags.get("vardeps")
> - value = d.getVarFlag(key, "_content", False)
>
> def handle_contains(value, contains, d):
> newvalue = ""
> @@ -309,10 +308,19 @@ def build_dependencies(key, keys, shelldeps,
> varflagsexcl, d): return newvalue
> return value + newvalue
>
> + def handle_remove(value, deps, removes, d):
> + for r in sorted(removes):
> + r2 = d.expandWithRefs(r, None)
> + value += "\n_remove of %s" % r
> + deps |= r2.references
> + deps = deps | (keys & r2.execs)
> + return value
> +
> if "vardepvalue" in varflags:
> - value = varflags.get("vardepvalue")
> + value = varflags.get("vardepvalue")
> elif varflags.get("func"):
> if varflags.get("python"):
> + value = d.getVarFlag(key, "_content", False)
> parser = bb.codeparser.PythonParser(key, logger)
> if value and "\t" in value:
> logger.warning("Variable %s contains tabs, please remove these (%s)" % (key, d.getVar("FILE")))
> @@ -321,13 +329,15 @@ build_dependencies(key, keys, shelldeps, varflagsexcl, d):
> deps = deps | (keys & parser.execs)
> value = handle_contains(value, parser.contains, d)
> else:
> - parsedvar = d.expandWithRefs(value, key)
> + value, parsedvar = d.getVarFlag(key, "_content", False, retparser=True)
> parser = bb.codeparser.ShellParser(key, logger)
> parser.parse_shell(parsedvar.value)
> deps = deps | shelldeps
> deps = deps | parsedvar.references
> deps = deps | (keys & parser.execs) | (keys & parsedvar.execs)
> value = handle_contains(value, parsedvar.contains, d)
> + if hasattr(parsedvar, "removes"):
> + value = handle_remove(value, deps, parsedvar.removes, d)
> if vardeps is None:
> parser.log.flush()
> if "prefuncs" in varflags:
> @@ -337,10 +347,12 @@ def build_dependencies(key, keys, shelldeps, varflagsexcl, d):
> if "exports" in varflags:
> deps = deps | set(varflags["exports"].split())
> else:
> - parser = d.expandWithRefs(value, key)
> + value, parser = d.getVarFlag(key, "_content", False, retparser=True)
> deps |= parser.references
> deps = deps | (keys & parser.execs)
> value = handle_contains(value, parser.contains, d)
> + if hasattr(parser, "removes"):
> + value = handle_remove(value, deps, parser.removes, d)
>
> if "vardepvalueexclude" in varflags:
> exclude = varflags.get("vardepvalueexclude")
> @@ -394,6 +406,43 @@ def generate_dependencies(d):
> #print "For %s: %s" % (task, str(deps[task]))
> return tasklist, deps, values
>
> +def generate_dependency_hash(tasklist, gendeps, lookupcache, whitelist, fn):
> + taskdeps = {}
> + basehash = {}
> +
> + for task in tasklist:
> + data = lookupcache[task]
> +
> + if data is None:
> + bb.error("Task %s from %s seems to be empty?!" % (task,
> fn))
> + data = ''
> +
> + gendeps[task] -= whitelist
> + newdeps = gendeps[task]
> + seen = set()
> + while newdeps:
> + nextdeps = newdeps
> + seen |= nextdeps
> + newdeps = set()
> + for dep in nextdeps:
> + if dep in whitelist:
> + continue
> + gendeps[dep] -= whitelist
> + newdeps |= gendeps[dep]
> + newdeps -= seen
> +
> + alldeps = sorted(seen)
> + for dep in alldeps:
> + data = data + dep
> + var = lookupcache[dep]
> + if var is not None:
> + data = data + str(var)
> + k = fn + "." + task
> + basehash[k] = hashlib.md5(data.encode("utf-8")).hexdigest()
> + taskdeps[task] = alldeps
> +
> + return taskdeps, basehash
> +
> def inherits_class(klass, d):
> val = d.getVar('__inherit_cache', False) or []
> needle = os.path.join('classes', '%s.bbclass' % klass)
> diff --git a/bitbake/lib/bb/data_smart.py
> b/bitbake/lib/bb/data_smart.py index 7b09af5..6b94fc4 100644
> --- a/bitbake/lib/bb/data_smart.py
> +++ b/bitbake/lib/bb/data_smart.py
> @@ -42,6 +42,7 @@ __setvar_keyword__ = ["_append", "_prepend", "_remove"]
> __setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>[^A-Z]*))?$')
> __expand_var_regexp__ = re.compile(r"\${[^{}@\n\t :]+}")
> __expand_python_regexp__ = re.compile(r"\${@.+?}")
> +__whitespace_split__ = re.compile('(\s)')
> def infer_caller_details(loginfo, parent = False, varval = True):
> """Save the caller the trouble of specifying everything."""
> @@ -104,11 +105,7 @@ class VariableParse:
> if self.varname and key:
> if self.varname == key:
> raise Exception("variable %s references itself!"
> % self.varname)
> - if key in self.d.expand_cache:
> - varparse = self.d.expand_cache[key]
> - var = varparse.value
> - else:
> - var = self.d.getVarFlag(key, "_content")
> + var = self.d.getVarFlag(key, "_content")
> self.references.add(key)
> if var is not None:
> return var
> @@ -267,6 +264,16 @@ class VariableHistory(object):
> return
> self.variables[var].append(loginfo.copy())
>
> + def rename_variable_hist(self, oldvar, newvar):
> + if not self.dataroot._tracking:
> + return
> + if oldvar not in self.variables:
> + return
> + if newvar not in self.variables:
> + self.variables[newvar] = []
> + for i in self.variables[oldvar]:
> + self.variables[newvar].append(i.copy())
> +
> def variable(self, var):
> remote_connector = self.dataroot.getVar('_remote_data', False)
> if remote_connector:
> @@ -401,9 +408,6 @@ class DataSmart(MutableMapping):
> if not isinstance(s, str): # sanity check
> return VariableParse(varname, self, s)
>
> - if varname and varname in self.expand_cache:
> - return self.expand_cache[varname]
> -
> varparse = VariableParse(varname, self)
>
> while s.find('${') != -1:
> @@ -427,9 +431,6 @@ class DataSmart(MutableMapping):
>
> varparse.value = s
>
> - if varname:
> - self.expand_cache[varname] = varparse
> -
> return varparse
>
> def expand(self, s, varname = None):
> @@ -498,6 +499,7 @@ class DataSmart(MutableMapping):
>
> def setVar(self, var, value, **loginfo):
> #print("var=" + str(var) + " val=" + str(value))
> + self.expand_cache = {}
> parsing=False
> if 'parsing' in loginfo:
> parsing=True
> @@ -510,7 +512,7 @@ class DataSmart(MutableMapping):
>
> if 'op' not in loginfo:
> loginfo['op'] = "set"
> - self.expand_cache = {}
> +
> match = __setvar_regexp__.match(var)
> if match and match.group("keyword") in __setvar_keyword__:
> base = match.group('base')
> @@ -619,6 +621,7 @@ class DataSmart(MutableMapping):
>
> val = self.getVar(key, 0, parsing=True)
> if val is not None:
> + self.varhistory.rename_variable_hist(key, newkey)
> loginfo['variable'] = newkey
> loginfo['op'] = 'rename from %s' % key
> loginfo['detail'] = val
> @@ -660,6 +663,7 @@ class DataSmart(MutableMapping):
> self.setVar(var + "_prepend", value, ignore=True,
> parsing=True)
> def delVar(self, var, **loginfo):
> + self.expand_cache = {}
> if '_remote_data' in self.dict:
> connector = self.dict["_remote_data"]["_content"]
> res = connector.delVar(var)
> @@ -669,7 +673,6 @@ class DataSmart(MutableMapping):
> loginfo['detail'] = ""
> loginfo['op'] = 'del'
> self.varhistory.record(**loginfo)
> - self.expand_cache = {}
> self.dict[var] = {}
> if var in self.overridedata:
> del self.overridedata[var]
> @@ -692,13 +695,13 @@ class DataSmart(MutableMapping):
> override = None
>
> def setVarFlag(self, var, flag, value, **loginfo):
> + self.expand_cache = {}
> if '_remote_data' in self.dict:
> connector = self.dict["_remote_data"]["_content"]
> res = connector.setVarFlag(var, flag, value)
> if not res:
> return
>
> - self.expand_cache = {}
> if 'op' not in loginfo:
> loginfo['op'] = "set"
> loginfo['flag'] = flag
> @@ -719,9 +722,21 @@ class DataSmart(MutableMapping):
> self.dict["__exportlist"]["_content"] = set()
> self.dict["__exportlist"]["_content"].add(var)
>
> - def getVarFlag(self, var, flag, expand=True, noweakdefault=False, parsing=False):
> + def getVarFlag(self, var, flag, expand=True, noweakdefault=False, parsing=False, retparser=False):
> + if flag == "_content":
> + cachename = var
> + else:
> + if not flag:
> + bb.warn("Calling getVarFlag with flag unset is invalid")
> + return None
> + cachename = var + "[" + flag + "]"
> +
> + if expand and cachename in self.expand_cache:
> + return self.expand_cache[cachename].value
> +
> local_var, overridedata = self._findVar(var)
> value = None
> + removes = set()
> if flag == "_content" and overridedata is not None and not parsing:
> match = False
> active = {}
> @@ -748,7 +763,11 @@ class DataSmart(MutableMapping):
> match = active[a]
> del active[a]
> if match:
> - value = self.getVar(match, False)
> + value, subparser = self.getVarFlag(match, "_content", False, retparser=True)
> + if hasattr(subparser, "removes"):
> + # We have to carry the removes from the overridden variable
> + # to apply at the end of processing
> + removes = subparser.removes
>
> if local_var is not None and value is None:
> if flag in local_var:
> @@ -784,17 +803,13 @@ class DataSmart(MutableMapping):
> if match:
> value = r + value
>
> - if expand and value:
> - # Only getvar (flag == _content) hits the expand cache
> - cachename = None
> - if flag == "_content":
> - cachename = var
> - else:
> - cachename = var + "[" + flag + "]"
> - value = self.expand(value, cachename)
> + parser = None
> + if expand or retparser:
> + parser = self.expandWithRefs(value, cachename)
> + if expand:
> + value = parser.value
>
> - if value and flag == "_content" and local_var is not None and "_remove" in local_var:
> - removes = []
> + if value and flag == "_content" and local_var is not None and "_remove" in local_var and not parsing:
> self.need_overrides()
> for (r, o) in local_var["_remove"]:
> match = True
> @@ -803,26 +818,45 @@ class DataSmart(MutableMapping):
> if not o2 in self.overrides:
> match = False
> if match:
> - removes.extend(self.expand(r).split())
> -
> - if removes:
> - filtered = filter(lambda v: v not in removes,
> - value.split())
> - value = " ".join(filtered)
> - if expand and var in self.expand_cache:
> - # We need to ensure the expand cache has the correct value
> - # flag == "_content" here
> - self.expand_cache[var].value = value
> + removes.add(r)
> +
> + if value and flag == "_content" and not parsing:
> + if removes and parser:
> + expanded_removes = {}
> + for r in removes:
> + expanded_removes[r] = self.expand(r).split()
> +
> + parser.removes = set()
> + val = ""
> + for v in __whitespace_split__.split(parser.value):
> + skip = False
> + for r in removes:
> + if v in expanded_removes[r]:
> + parser.removes.add(r)
> + skip = True
> + if skip:
> + continue
> + val = val + v
> + parser.value = val
> + if expand:
> + value = parser.value
> +
> + if parser:
> + self.expand_cache[cachename] = parser
> +
> + if retparser:
> + return value, parser
> +
> return value
>
> def delVarFlag(self, var, flag, **loginfo):
> + self.expand_cache = {}
> if '_remote_data' in self.dict:
> connector = self.dict["_remote_data"]["_content"]
> res = connector.delVarFlag(var, flag)
> if not res:
> return
>
> - self.expand_cache = {}
> local_var, _ = self._findVar(var)
> if not local_var:
> return
> diff --git a/bitbake/lib/bb/event.py b/bitbake/lib/bb/event.py
> index 5d00496..5b1b094 100644
> --- a/bitbake/lib/bb/event.py
> +++ b/bitbake/lib/bb/event.py
> @@ -141,6 +141,9 @@ def print_ui_queue():
> logger = logging.getLogger("BitBake")
> if not _uiready:
> from bb.msg import BBLogFormatter
> + # Flush any existing buffered content
> + sys.stdout.flush()
> + sys.stderr.flush()
> stdout = logging.StreamHandler(sys.stdout)
> stderr = logging.StreamHandler(sys.stderr)
> formatter = BBLogFormatter("%(levelname)s: %(message)s")
> @@ -395,7 +398,7 @@ class RecipeEvent(Event):
> Event.__init__(self)
>
> class RecipePreFinalise(RecipeEvent):
> - """ Recipe Parsing Complete but not yet finialised"""
> + """ Recipe Parsing Complete but not yet finalised"""
>
> class RecipeTaskPreProcess(RecipeEvent):
> """
> diff --git a/bitbake/lib/bb/fetch2/__init__.py b/bitbake/lib/bb/fetch2/__init__.py
> index 6bd0404..2b62b41 100644
> --- a/bitbake/lib/bb/fetch2/__init__.py
> +++ b/bitbake/lib/bb/fetch2/__init__.py
> @@ -383,7 +383,7 @@ def decodeurl(url):
> path = location
> else:
> host = location
> - path = ""
> + path = "/"
> if user:
> m = re.compile('(?P<user>[^:]+)(:?(?P<pswd>.*))').match(user)
> if m:
> @@ -452,8 +452,8 @@ def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
> # Handle URL parameters
> if i:
> # Any specified URL parameters must match
> - for k in uri_replace_decoded[loc]:
> - if uri_decoded[loc][k] != uri_replace_decoded[loc][k]:
> + for k in uri_find_decoded[loc]:
> + if uri_decoded[loc][k] != uri_find_decoded[loc][k]:
> return None
> # Overwrite any specified replacement parameters
> for k in uri_replace_decoded[loc]:
> @@ -643,26 +643,25 @@ def verify_donestamp(ud, d, origud=None):
> if not ud.needdonestamp or (origud and not origud.needdonestamp):
> return True
>
> - if not os.path.exists(ud.donestamp):
> + if not os.path.exists(ud.localpath):
> + # local path does not exist
> + if os.path.exists(ud.donestamp):
> + # done stamp exists, but the downloaded file does not; the done stamp
> + # must be incorrect, re-trigger the download
> + bb.utils.remove(ud.donestamp)
> return False
>
> if (not ud.method.supports_checksum(ud) or
> (origud and not origud.method.supports_checksum(origud))):
> - # done stamp exists, checksums not supported; assume the local file is
> - # current
> - return True
> -
> - if not os.path.exists(ud.localpath):
> - # done stamp exists, but the downloaded file does not; the done stamp
> - # must be incorrect, re-trigger the download
> - bb.utils.remove(ud.donestamp)
> - return False
> + # If the done stamp exists and checksums are not supported, assume
> + # the local file is current
> + return os.path.exists(ud.donestamp)
>
> precomputed_checksums = {}
> # Only re-use the precomputed checksums if the donestamp is newer than
> # the file. Do not rely on the mtime of directories, though. If
> # ud.localpath is a directory, there will probably not be any checksums
> # anyway.
> - if (os.path.isdir(ud.localpath) or
> + if os.path.exists(ud.donestamp) and (os.path.isdir(ud.localpath) or
> os.path.getmtime(ud.localpath) < os.path.getmtime(ud.donestamp)):
> try:
> with open(ud.donestamp, "rb") as cachefile:
> @@ -838,14 +837,16 @@ def runfetchcmd(cmd, d, quiet=False, cleanup=None, log=None, workdir=None):
> if not cleanup:
> cleanup = []
> 
> - # If PATH contains WORKDIR which contains PV which contains SRCPV we
> + # If PATH contains WORKDIR which contains PV-PR which contains SRCPV we
> # can end up in circular recursion here so give the option of breaking it
> # in a data store copy.
> try:
> d.getVar("PV")
> + d.getVar("PR")
> except bb.data_smart.ExpansionError:
> d = bb.data.createCopy(d)
> d.setVar("PV", "fetcheravoidrecurse")
> + d.setVar("PR", "fetcheravoidrecurse")
>
> origenv = d.getVar("BB_ORIGENV", False)
> for var in exportvars:
> @@ -1017,16 +1018,7 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
> origud.method.build_mirror_data(origud, ld)
> return origud.localpath
> # Otherwise the result is a local file:// and we symlink to it
> - if not os.path.exists(origud.localpath):
> - if os.path.islink(origud.localpath):
> - # Broken symbolic link
> - os.unlink(origud.localpath)
> -
> - # As per above, in case two tasks end up here simultaneously.
> - try:
> - os.symlink(ud.localpath, origud.localpath)
> - except FileExistsError:
> - pass
> + ensure_symlink(ud.localpath, origud.localpath)
> update_stamp(origud, ld)
> return ud.localpath
>
> @@ -1060,6 +1052,22 @@ def try_mirror_url(fetch, origud, ud, ld, check = False):
> bb.utils.unlockfile(lf)
>
>
> +def ensure_symlink(target, link_name):
> + if not os.path.exists(link_name):
> + if os.path.islink(link_name):
> + # Broken symbolic link
> + os.unlink(link_name)
> +
> + # In case this is executing without any file locks held (as is
> + # the case for file:// URLs), two tasks may end up here at the
> + # same time, in which case we do not want the second task to
> + # fail when the link has already been created by the first task.
> + try:
> + os.symlink(target, link_name)
> + except FileExistsError:
> + pass
> +
> +
> def try_mirrors(fetch, d, origud, mirrors, check = False):
> """
> Try to use a mirrored version of the sources.
> @@ -1089,7 +1097,9 @@ def trusted_network(d, url):
> return True
>
> pkgname = d.expand(d.getVar('PN', False))
> - trusted_hosts = d.getVarFlag('BB_ALLOWED_NETWORKS', pkgname, False)
> + trusted_hosts = None
> + if pkgname:
> + trusted_hosts = d.getVarFlag('BB_ALLOWED_NETWORKS', pkgname, False)
> if not trusted_hosts:
> trusted_hosts = d.getVar('BB_ALLOWED_NETWORKS')
> diff --git a/bitbake/lib/bb/fetch2/bzr.py b/bitbake/lib/bb/fetch2/bzr.py
> index 16123f8..658502f 100644
> --- a/bitbake/lib/bb/fetch2/bzr.py
> +++ b/bitbake/lib/bb/fetch2/bzr.py
> @@ -41,8 +41,9 @@ class Bzr(FetchMethod):
> init bzr specific variable within url data
> """
> # Create paths to bzr checkouts
> + bzrdir = d.getVar("BZRDIR") or (d.getVar("DL_DIR") + "/bzr")
> relpath = self._strip_leading_slashes(ud.path)
> - ud.pkgdir = os.path.join(d.expand('${BZRDIR}'), ud.host, relpath)
> + ud.pkgdir = os.path.join(bzrdir, ud.host, relpath)
>
> ud.setup_revisions(d)
>
> @@ -57,7 +58,7 @@ class Bzr(FetchMethod):
> command is "fetch", "update", "revno"
> """
>
> - basecmd = d.expand('${FETCHCMD_bzr}')
> + basecmd = d.getVar("FETCHCMD_bzr") or "/usr/bin/env bzr"
>
> proto = ud.parm.get('protocol', 'http')
>
> diff --git a/bitbake/lib/bb/fetch2/clearcase.py b/bitbake/lib/bb/fetch2/clearcase.py
> index 36beab6..3a6573d 100644
> --- a/bitbake/lib/bb/fetch2/clearcase.py
> +++ b/bitbake/lib/bb/fetch2/clearcase.py
> @@ -69,7 +69,6 @@ from bb.fetch2 import FetchMethod
> from bb.fetch2 import FetchError
> from bb.fetch2 import runfetchcmd
> from bb.fetch2 import logger
> -from distutils import spawn
>
> class ClearCase(FetchMethod):
> """Class to fetch urls via 'clearcase'"""
> @@ -107,7 +106,7 @@ class ClearCase(FetchMethod):
> else:
> ud.module = ""
>
> - ud.basecmd = d.getVar("FETCHCMD_ccrc") or spawn.find_executable("cleartool") or spawn.find_executable("rcleartool")
> + ud.basecmd = d.getVar("FETCHCMD_ccrc") or "/usr/bin/env cleartool || rcleartool"
> if d.getVar("SRCREV") == "INVALID":
> raise FetchError("Set a valid SRCREV for the clearcase fetcher in your recipe, e.g. SRCREV = \"/main/LATEST\" or any other label of your choice.")
> diff --git a/bitbake/lib/bb/fetch2/cvs.py b/bitbake/lib/bb/fetch2/cvs.py
> index 490c954..0e0a319 100644
> --- a/bitbake/lib/bb/fetch2/cvs.py
> +++ b/bitbake/lib/bb/fetch2/cvs.py
> @@ -110,7 +110,7 @@ class Cvs(FetchMethod):
> if ud.tag:
> options.append("-r %s" % ud.tag)
>
> - cvsbasecmd = d.getVar("FETCHCMD_cvs")
> + cvsbasecmd = d.getVar("FETCHCMD_cvs") or "/usr/bin/env cvs"
> cvscmd = cvsbasecmd + " '-d" + cvsroot + "' co " + " ".join(options) + " " + ud.module
> cvsupdatecmd = cvsbasecmd + " '-d" + cvsroot + "' update -d -P " + " ".join(options)
> @@ -121,7 +121,8 @@ class Cvs(FetchMethod):
> # create module directory
> logger.debug(2, "Fetch: checking for module directory")
> pkg = d.getVar('PN')
> - pkgdir = os.path.join(d.getVar('CVSDIR'), pkg)
> + cvsdir = d.getVar("CVSDIR") or (d.getVar("DL_DIR") + "/cvs")
> + pkgdir = os.path.join(cvsdir, pkg)
> moddir = os.path.join(pkgdir, localdir)
> workdir = None
> if os.access(os.path.join(moddir, 'CVS'), os.R_OK):
> diff --git a/bitbake/lib/bb/fetch2/git.py b/bitbake/lib/bb/fetch2/git.py
> index d34ea1d..15858a6 100644
> --- a/bitbake/lib/bb/fetch2/git.py
> +++ b/bitbake/lib/bb/fetch2/git.py
> @@ -125,6 +125,9 @@ class
> GitProgressHandler(bb.progress.LineFilterProgressHandler):
>
> class Git(FetchMethod):
> + bitbake_dir = os.path.abspath(os.path.join(os.path.dirname(os.path.join(os.path.abspath(__file__))), '..', '..', '..'))
> + make_shallow_path = os.path.join(bitbake_dir, 'bin', 'git-make-shallow')
> +
> """Class to fetch a module or modules from git repositories"""
> def init(self, d):
> pass
> @@ -258,7 +261,7 @@ class Git(FetchMethod):
> gitsrcname = gitsrcname + '_' + ud.revisions[name]
>
> dl_dir = d.getVar("DL_DIR")
> - gitdir = d.getVar("GITDIR") or (dl_dir + "/git2/")
> + gitdir = d.getVar("GITDIR") or (dl_dir + "/git2")
> ud.clonedir = os.path.join(gitdir, gitsrcname)
> ud.localfile = ud.clonedir
>
> @@ -296,17 +299,22 @@ class Git(FetchMethod):
> return ud.clonedir
>
> def need_update(self, ud, d):
> + return self.clonedir_need_update(ud, d) or self.shallow_tarball_need_update(ud) or self.tarball_need_update(ud)
> +
> + def clonedir_need_update(self, ud, d):
> if not os.path.exists(ud.clonedir):
> return True
> for name in ud.names:
> if not self._contains_ref(ud, d, name, ud.clonedir):
> return True
> - if ud.shallow and ud.write_shallow_tarballs and not os.path.exists(ud.fullshallow):
> - return True
> - if ud.write_tarballs and not os.path.exists(ud.fullmirror):
> - return True
> return False
>
> + def shallow_tarball_need_update(self, ud):
> + return ud.shallow and ud.write_shallow_tarballs and not os.path.exists(ud.fullshallow)
> +
> + def tarball_need_update(self, ud):
> + return ud.write_tarballs and not os.path.exists(ud.fullmirror)
> +
> def try_premirror(self, ud, d):
> # If we don't do this, updating an existing checkout with only premirrors
> # is not possible
> @@ -319,16 +327,13 @@ class Git(FetchMethod):
> def download(self, ud, d):
> """Fetch url"""
>
> - no_clone = not os.path.exists(ud.clonedir)
> - need_update = no_clone or self.need_update(ud, d)
> -
> # A current clone is preferred to either tarball, a shallow tarball is
> # preferred to an out of date clone, and a missing clone will use
> # either tarball.
> - if ud.shallow and os.path.exists(ud.fullshallow) and need_update:
> + if ud.shallow and os.path.exists(ud.fullshallow) and self.need_update(ud, d):
> ud.localpath = ud.fullshallow
> return
> - elif os.path.exists(ud.fullmirror) and no_clone:
> + elif os.path.exists(ud.fullmirror) and not os.path.exists(ud.clonedir):
> bb.utils.mkdirhier(ud.clonedir)
> runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir)
> @@ -350,11 +355,12 @@ class Git(FetchMethod):
> for name in ud.names:
> if not self._contains_ref(ud, d, name, ud.clonedir):
> needupdate = True
> + break
> +
>          if needupdate:
> -            try:
> -                runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir)
> -            except bb.fetch2.FetchError:
> -                logger.debug(1, "No Origin")
> +            output = runfetchcmd("%s remote" % ud.basecmd, d, quiet=True, workdir=ud.clonedir)
> +            if "origin" in output:
> +                runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir)
>              runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, repourl), d, workdir=ud.clonedir)
>              fetch_cmd = "LANG=C %s fetch -f --prune --progress %s refs/*:refs/*" % (ud.basecmd, repourl)
> @@ -370,6 +376,7 @@ class Git(FetchMethod):
>          except OSError as exc:
>              if exc.errno != errno.ENOENT:
>                  raise
> +
>          for name in ud.names:
>              if not self._contains_ref(ud, d, name, ud.clonedir):
>                  raise bb.fetch2.FetchError("Unable to find revision %s in branch %s even from upstream" % (ud.revisions[name], ud.branches[name]))
> @@ -446,7 +453,7 @@ class Git(FetchMethod):
>                  shallow_branches.append(r)
> # Make the repository shallow
> - shallow_cmd = ['git', 'make-shallow', '-s']
> + shallow_cmd = [self.make_shallow_path, '-s']
> for b in shallow_branches:
> shallow_cmd.append('-r')
> shallow_cmd.append(b)
> @@ -469,11 +476,27 @@ class Git(FetchMethod):
> if os.path.exists(destdir):
> bb.utils.prunedir(destdir)
>
> -        if ud.shallow and (not os.path.exists(ud.clonedir) or self.need_update(ud, d)):
> -            bb.utils.mkdirhier(destdir)
> -            runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=destdir)
> -        else:
> -            runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
> +        source_found = False
> +        source_error = []
> +
> +        if not source_found:
> +            clonedir_is_up_to_date = not self.clonedir_need_update(ud, d)
> +            if clonedir_is_up_to_date:
> +                runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
> +                source_found = True
> +            else:
> +                source_error.append("clone directory not available or not up to date: " + ud.clonedir)
> +
> +        if not source_found:
> +            if ud.shallow and os.path.exists(ud.fullshallow):
> +                bb.utils.mkdirhier(destdir)
> +                runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=destdir)
> +                source_found = True
> +            else:
> +                source_error.append("shallow clone not enabled or not available: " + ud.fullshallow)
> +
> +        if not source_found:
> +            raise bb.fetch2.UnpackError("No up to date source found: " + "; ".join(source_error), ud.url)
>          repourl = self._get_repo_url(ud)
>          runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, repourl), d, workdir=destdir)
> @@ -592,7 +615,8 @@ class Git(FetchMethod):
>          tagregex = re.compile(d.getVar('UPSTREAM_CHECK_GITTAGREGEX') or "(?P<pver>([0-9][\.|_]?)+)")
>          try:
>              output = self._lsremote(ud, d, "refs/tags/*")
> -        except bb.fetch2.FetchError or bb.fetch2.NetworkAccess:
> +        except (bb.fetch2.FetchError, bb.fetch2.NetworkAccess) as e:
> +            bb.note("Could not list remote: %s" % str(e))
>              return pupver
>
> verstring = ""
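For review purposes, the unpack() rework above is easiest to read as a
priority list of sources with collected error strings. A minimal standalone
sketch of that selection logic, with up_to_date/shallow_available standing in
for the real checks:

    def pick_source(up_to_date, shallow_available):
        errors = []
        if up_to_date:
            return "clonedir"            # preferred: a current clone
        errors.append("clone directory not available or not up to date")
        if shallow_available:
            return "shallow tarball"     # fallback: the shallow mirror tarball
        errors.append("shallow clone not enabled or not available")
        raise RuntimeError("No up to date source found: " + "; ".join(errors))

The net effect is that a failed unpack now reports every source it tried
instead of silently cloning from a stale directory.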
> diff --git a/bitbake/lib/bb/fetch2/gitsm.py b/bitbake/lib/bb/fetch2/gitsm.py
> index 0aff100..0a982da 100644
> --- a/bitbake/lib/bb/fetch2/gitsm.py
> +++ b/bitbake/lib/bb/fetch2/gitsm.py
> @@ -31,9 +31,12 @@ NOTE: Switching a SRC_URI from "git://" to "gitsm://" requires a clean of your r
> import os
> import bb
> +import copy
> from bb.fetch2.git import Git
> from bb.fetch2 import runfetchcmd
> from bb.fetch2 import logger
> +from bb.fetch2 import Fetch
> +from bb.fetch2 import BBFetchException
>
> class GitSM(Git):
> def supports(self, ud, d):
> @@ -42,94 +45,207 @@ class GitSM(Git):
> """
> return ud.type in ['gitsm']
>
> - def uses_submodules(self, ud, d, wd):
> - for name in ud.names:
> - try:
> - runfetchcmd("%s show %s:.gitmodules" % (ud.basecmd,
> ud.revisions[name]), d, quiet=True, workdir=wd)
> - return True
> - except bb.fetch.FetchError:
> - pass
> - return False
> + @staticmethod
> + def parse_gitmodules(gitmodules):
> + modules = {}
> + module = ""
> + for line in gitmodules.splitlines():
> + if line.startswith('[submodule'):
> + module = line.split('"')[1]
> + modules[module] = {}
> + elif module and line.strip().startswith('path'):
> + path = line.split('=')[1].strip()
> + modules[module]['path'] = path
> + elif module and line.strip().startswith('url'):
> + url = line.split('=')[1].strip()
> + modules[module]['url'] = url
> + return modules
>
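As a quick sanity check of the parser above, feeding it a typical .gitmodules
payload (module name and URL purely illustrative) would give:

    sample = '[submodule "libfoo"]\n\tpath = ext/libfoo\n\turl = https://example.com/libfoo.git\n'
    GitSM.parse_gitmodules(sample)
    # -> {'libfoo': {'path': 'ext/libfoo', 'url': 'https://example.com/libfoo.git'}}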
> - def _set_relative_paths(self, repopath):
> - """
> - Fix submodule paths to be relative instead of absolute,
> - so that when we move the repo it doesn't break
> - (In Git 1.7.10+ this is done automatically)
> - """
> + def update_submodules(self, ud, d):
> submodules = []
> - with open(os.path.join(repopath, '.gitmodules'), 'r') as f:
> - for line in f.readlines():
> - if line.startswith('[submodule'):
> - submodules.append(line.split('"')[1])
> + paths = {}
> + uris = {}
> + local_paths = {}
>
> - for module in submodules:
> - repo_conf = os.path.join(repopath, module, '.git')
> - if os.path.exists(repo_conf):
> - with open(repo_conf, 'r') as f:
> - lines = f.readlines()
> - newpath = ''
> - for i, line in enumerate(lines):
> - if line.startswith('gitdir:'):
> - oldpath = line.split(': ')[-1].rstrip()
> - if oldpath.startswith('/'):
> - newpath = '../' * (module.count('/') +
> 1) + '.git/modules/' + module
> - lines[i] = 'gitdir: %s\n' % newpath
> - break
> - if newpath:
> - with open(repo_conf, 'w') as f:
> - for line in lines:
> - f.write(line)
> -
> - repo_conf2 = os.path.join(repopath, '.git', 'modules',
> module, 'config')
> - if os.path.exists(repo_conf2):
> - with open(repo_conf2, 'r') as f:
> - lines = f.readlines()
> - newpath = ''
> - for i, line in enumerate(lines):
> - if line.lstrip().startswith('worktree = '):
> - oldpath = line.split(' = ')[-1].rstrip()
> - if oldpath.startswith('/'):
> - newpath = '../' * (module.count('/') +
> 3) + module
> - lines[i] = '\tworktree = %s\n' % newpath
> - break
> - if newpath:
> - with open(repo_conf2, 'w') as f:
> - for line in lines:
> - f.write(line)
> + for name in ud.names:
> + try:
> + gitmodules = runfetchcmd("%s show %s:.gitmodules" %
> (ud.basecmd, ud.revisions[name]), d, quiet=True, workdir=ud.clonedir)
> + except:
> + # No submodules to update
> + continue
> +
> + for m, md in self.parse_gitmodules(gitmodules).items():
> + submodules.append(m)
> + paths[m] = md['path']
> + uris[m] = md['url']
> + if uris[m].startswith('..'):
> + newud = copy.copy(ud)
> + newud.path =
> os.path.realpath(os.path.join(newud.path, md['url']))
> + uris[m] = Git._get_repo_url(self, newud)
>
> - def update_submodules(self, ud, d):
> - # We have to convert bare -> full repo, do the submodule
> bit, then convert back
> - tmpclonedir = ud.clonedir + ".tmp"
> - gitdir = tmpclonedir + os.sep + ".git"
> - bb.utils.remove(tmpclonedir, True)
> - os.mkdir(tmpclonedir)
> - os.rename(ud.clonedir, gitdir)
> - runfetchcmd("sed " + gitdir + "/config -i -e
> 's/bare.*=.*true/bare = false/'", d)
> - runfetchcmd(ud.basecmd + " reset --hard", d,
> workdir=tmpclonedir)
> - runfetchcmd(ud.basecmd + " checkout -f " +
> ud.revisions[ud.names[0]], d, workdir=tmpclonedir)
> - runfetchcmd(ud.basecmd + " submodule update --init
> --recursive", d, workdir=tmpclonedir)
> - self._set_relative_paths(tmpclonedir)
> - runfetchcmd("sed " + gitdir + "/config -i -e
> 's/bare.*=.*false/bare = true/'", d, workdir=tmpclonedir)
> - os.rename(gitdir, ud.clonedir,)
> - bb.utils.remove(tmpclonedir, True)
> + for module in submodules:
> + module_hash = runfetchcmd("%s ls-tree -z -d %s %s" %
> (ud.basecmd, ud.revisions[name], paths[module]), d, quiet=True,
> workdir=ud.clonedir)
> + module_hash = module_hash.split()[2]
> +
> + # Build new SRC_URI
> + proto = uris[module].split(':', 1)[0]
> + url = uris[module].replace('%s:' % proto, 'gitsm:', 1)
> + url += ';protocol=%s' % proto
> + url += ";name=%s" % module
> + url += ";bareclone=1;nocheckout=1"
> +
> + ld = d.createCopy()
> +            # Not necessary to set SRC_URI, since we're passing the URI to Fetch.
> +            #ld.setVar('SRC_URI', url)
> +            ld.setVar('SRCREV_%s' % module, module_hash)
> +
> +            # Workaround for issues with SRCPV/SRCREV_FORMAT errors; the errors
> +            # refer to 'multiple' repositories. Only the repository in the
> +            # original SRC_URI actually matters...
> +            ld.setVar('SRCPV', d.getVar('SRCPV'))
> +            ld.setVar('SRCREV_FORMAT', module)
> +
> + newfetch = Fetch([url], ld, cache=False)
> + newfetch.download()
> + local_paths[module] = newfetch.localpath(url)
> +
> + # Correct the submodule references to the local download
> version...
> + runfetchcmd("%(basecmd)s config submodule.%(module)s.url
> %(url)s" % {'basecmd': ud.basecmd, 'module': module, 'url' :
> local_paths[module]}, d, workdir=ud.clonedir) +
> + symlink_path = os.path.join(ud.clonedir, 'modules',
> paths[module])
> + if not os.path.exists(symlink_path):
> + try:
> + os.makedirs(os.path.dirname(symlink_path),
> exist_ok=True)
> + except OSError:
> + pass
> + os.symlink(local_paths[module], symlink_path)
> +
> + return True
> +
> + def need_update(self, ud, d):
> + main_repo_needs_update = Git.need_update(self, ud, d)
> +
> +        # First check that the main repository has enough history fetched.
> +        # If it doesn't, then we don't even have the .gitmodules and gitlinks
> +        # for the submodules to attempt asking whether the submodules'
> +        # histories are recent enough.
> +        if main_repo_needs_update:
> +            return True
> +
> +        # Now check that the submodule histories are new enough. The
> +        # git-submodule command doesn't have any clean interface for doing
> +        # this aside from just attempting the checkout (with network
> +        # fetching disabled).
> +        return not self.update_submodules(ud, d)
>
> def download(self, ud, d):
> Git.download(self, ud, d)
>
> if not ud.shallow or ud.localpath != ud.fullshallow:
> - submodules = self.uses_submodules(ud, d, ud.clonedir)
> - if submodules:
> - self.update_submodules(ud, d)
> + self.update_submodules(ud, d)
> +
> + def copy_submodules(self, submodules, ud, destdir, d):
> + if ud.bareclone:
> + repo_conf = destdir
> + else:
> + repo_conf = os.path.join(destdir, '.git')
> +
> + if submodules and not os.path.exists(os.path.join(repo_conf,
> 'modules')):
> + os.mkdir(os.path.join(repo_conf, 'modules'))
> +
> + for module in submodules:
> + srcpath = os.path.join(ud.clonedir, 'modules', module)
> + modpath = os.path.join(repo_conf, 'modules', module)
> +
> + if os.path.exists(srcpath):
> + if os.path.exists(os.path.join(srcpath, '.git')):
> + srcpath = os.path.join(srcpath, '.git')
> +
> + target = modpath
> + if os.path.exists(modpath):
> + target = os.path.dirname(modpath)
> +
> + os.makedirs(os.path.dirname(target), exist_ok=True)
> + runfetchcmd("cp -fpLR %s %s" % (srcpath, target), d)
> +            elif os.path.exists(modpath):
> +                # Module already exists, likely unpacked from a shallow mirror clone
> +                pass
> +            else:
> +                # This is fatal, as we do NOT want git-submodule to hit the network
> +                raise bb.fetch2.FetchError('Submodule %s does not exist in %s or %s.' % (module, srcpath, modpath))
>
>      def clone_shallow_local(self, ud, dest, d):
> super(GitSM, self).clone_shallow_local(ud, dest, d)
>
> - runfetchcmd('cp -fpPRH "%s/modules" "%s/"' % (ud.clonedir,
> os.path.join(dest, '.git')), d)
> + # Copy over the submodules' fetched histories too.
> + repo_conf = os.path.join(dest, '.git')
> +
> + submodules = []
> + for name in ud.names:
> + try:
> + gitmodules = runfetchcmd("%s show %s:.gitmodules" %
> (ud.basecmd, ud.revision), d, quiet=True, workdir=dest)
> + except:
> + # No submodules to update
> + continue
> +
> +            submodules = list(self.parse_gitmodules(gitmodules).keys())
> +
> +        self.copy_submodules(submodules, ud, dest, d)
>
> def unpack(self, ud, destdir, d):
> Git.unpack(self, ud, destdir, d)
>
> - if self.uses_submodules(ud, d, ud.destdir):
> - runfetchcmd(ud.basecmd + " checkout " +
> ud.revisions[ud.names[0]], d, workdir=ud.destdir)
> - runfetchcmd(ud.basecmd + " submodule update --init
> --recursive", d, workdir=ud.destdir)
> + # Copy over the submodules' fetched histories too.
> + if ud.bareclone:
> + repo_conf = ud.destdir
> + else:
> + repo_conf = os.path.join(ud.destdir, '.git')
> +
> + submodules = []
> + paths = {}
> + uris = {}
> + local_paths = {}
> + for name in ud.names:
> + try:
> + gitmodules = runfetchcmd("%s show HEAD:.gitmodules"
> % (ud.basecmd), d, quiet=True, workdir=ud.destdir)
> + except:
> + # No submodules to update
> + continue
> +
> + for m, md in self.parse_gitmodules(gitmodules).items():
> + submodules.append(m)
> + paths[m] = md['path']
> + uris[m] = md['url']
> +
> + self.copy_submodules(submodules, ud, ud.destdir, d)
> +
> + submodules_queue = [(module, os.path.join(repo_conf,
> 'modules', module)) for module in submodules]
> + while len(submodules_queue) != 0:
> + module, modpath = submodules_queue.pop()
> +
> + # add submodule children recursively
> + try:
> + gitmodules = runfetchcmd("%s show HEAD:.gitmodules"
> % (ud.basecmd), d, quiet=True, workdir=modpath)
> + for m, md in
> self.parse_gitmodules(gitmodules).items():
> + submodules_queue.append([m,
> os.path.join(modpath, 'modules', m)])
> + except:
> + # no children
> + pass
> +
> + # Determine (from the submodule) the correct url to
> reference
> + try:
> + output = runfetchcmd("%(basecmd)s config
> remote.origin.url" % {'basecmd': ud.basecmd}, d, workdir=modpath)
> + except bb.fetch2.FetchError as e:
> + # No remote url defined in this submodule
> + continue
> +
> + local_paths[module] = output
> +
> +            # Setup the local URL properly (like git submodule init or sync would do...)
> +            runfetchcmd("%(basecmd)s config submodule.%(module)s.url %(url)s" % {'basecmd': ud.basecmd, 'module': module, 'url' : local_paths[module]}, d, workdir=ud.destdir)
> +
> +            # Ensure the submodule repository is NOT set to bare, since we're checking it out...
> +            runfetchcmd("%s config core.bare false" % (ud.basecmd), d, quiet=True, workdir=modpath)
> +
> +        if submodules:
> +            # Run submodule update, this sets up the directories -- without touching the config
> +            runfetchcmd("%s submodule update --recursive --no-fetch" % (ud.basecmd), d, quiet=True, workdir=ud.destdir)
> diff --git a/bitbake/lib/bb/fetch2/hg.py b/bitbake/lib/bb/fetch2/hg.py
> index d0857e6..936d043 100644
> --- a/bitbake/lib/bb/fetch2/hg.py
> +++ b/bitbake/lib/bb/fetch2/hg.py
> @@ -80,7 +80,7 @@ class Hg(FetchMethod):
> ud.fullmirror = os.path.join(d.getVar("DL_DIR"),
> mirrortarball) ud.mirrortarballs = [mirrortarball]
>
> - hgdir = d.getVar("HGDIR") or (d.getVar("DL_DIR") + "/hg/")
> + hgdir = d.getVar("HGDIR") or (d.getVar("DL_DIR") + "/hg")
> ud.pkgdir = os.path.join(hgdir, hgsrcname)
> ud.moddir = os.path.join(ud.pkgdir, ud.module)
> ud.localfile = ud.moddir
> diff --git a/bitbake/lib/bb/fetch2/npm.py b/bitbake/lib/bb/fetch2/npm.py
> index b5f148c..408dfc3 100644
> --- a/bitbake/lib/bb/fetch2/npm.py
> +++ b/bitbake/lib/bb/fetch2/npm.py
> @@ -32,7 +32,6 @@ from bb.fetch2 import runfetchcmd
> from bb.fetch2 import logger
> from bb.fetch2 import UnpackError
> from bb.fetch2 import ParameterError
> -from distutils import spawn
>
> def subprocess_setup():
>      # Python installs a SIGPIPE handler by default. This is usually not what
> @@ -195,9 +194,11 @@ class Npm(FetchMethod):
> outputurl = pdata['dist']['tarball']
> data[pkg] = {}
> data[pkg]['tgz'] = os.path.basename(outputurl)
> -        if not outputurl in fetchedlist:
> -            self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False)
> -            fetchedlist.append(outputurl)
> +        if outputurl in fetchedlist:
> +            return
> +
> +        self._runwget(ud, d, "%s --directory-prefix=%s %s" % (self.basecmd, ud.prefixdir, outputurl), False)
> +        fetchedlist.append(outputurl)
>
> dependencies = pdata.get('dependencies', {})
> optionalDependencies = pdata.get('optionalDependencies', {})
> diff --git a/bitbake/lib/bb/fetch2/osc.py b/bitbake/lib/bb/fetch2/osc.py
> index 2b4f7d9..6c60456 100644
> --- a/bitbake/lib/bb/fetch2/osc.py
> +++ b/bitbake/lib/bb/fetch2/osc.py
> @@ -32,8 +32,9 @@ class Osc(FetchMethod):
> ud.module = ud.parm["module"]
>
> # Create paths to osc checkouts
> + oscdir = d.getVar("OSCDIR") or (d.getVar("DL_DIR") + "/osc")
> relpath = self._strip_leading_slashes(ud.path)
> - ud.pkgdir = os.path.join(d.getVar('OSCDIR'), ud.host)
> + ud.pkgdir = os.path.join(oscdir, ud.host)
> ud.moddir = os.path.join(ud.pkgdir, relpath, ud.module)
>
> if 'rev' in ud.parm:
> @@ -54,7 +55,7 @@ class Osc(FetchMethod):
> command is "fetch", "update", "info"
> """
>
> - basecmd = d.expand('${FETCHCMD_osc}')
> + basecmd = d.getVar("FETCHCMD_osc") or "/usr/bin/env osc"
>
> proto = ud.parm.get('protocol', 'ocs')
>
> diff --git a/bitbake/lib/bb/fetch2/perforce.py b/bitbake/lib/bb/fetch2/perforce.py
> index 3debad5..903a8e6 100644
> --- a/bitbake/lib/bb/fetch2/perforce.py
> +++ b/bitbake/lib/bb/fetch2/perforce.py
> @@ -43,13 +43,9 @@ class Perforce(FetchMethod):
> provided by the env, use it. If P4PORT is specified by the
> recipe, use its values, which may override the settings in P4CONFIG.
> """
> - ud.basecmd = d.getVar('FETCHCMD_p4')
> - if not ud.basecmd:
> - ud.basecmd = "/usr/bin/env p4"
> + ud.basecmd = d.getVar("FETCHCMD_p4") or "/usr/bin/env p4"
>
> - ud.dldir = d.getVar('P4DIR')
> - if not ud.dldir:
> - ud.dldir = '%s/%s' % (d.getVar('DL_DIR'), 'p4')
> + ud.dldir = d.getVar("P4DIR") or (d.getVar("DL_DIR") + "/p4")
>
> path = ud.url.split('://')[1]
> path = path.split(';')[0]
> diff --git a/bitbake/lib/bb/fetch2/repo.py b/bitbake/lib/bb/fetch2/repo.py
> index c22d9b5..8c7e818 100644
> --- a/bitbake/lib/bb/fetch2/repo.py
> +++ b/bitbake/lib/bb/fetch2/repo.py
> @@ -45,6 +45,8 @@ class Repo(FetchMethod):
> "master".
> """
>
> + ud.basecmd = d.getVar("FETCHCMD_repo") or "/usr/bin/env repo"
> +
> ud.proto = ud.parm.get('protocol', 'git')
> ud.branch = ud.parm.get('branch', 'master')
> ud.manifest = ud.parm.get('manifest', 'default.xml')
> @@ -60,8 +62,8 @@ class Repo(FetchMethod):
> logger.debug(1, "%s already exists (or was stashed).
> Skipping repo init / sync.", ud.localpath) return
>
> + repodir = d.getVar("REPODIR") or (d.getVar("DL_DIR") +
> "/repo") gitsrcname = "%s%s" % (ud.host, ud.path.replace("/", "."))
> - repodir = d.getVar("REPODIR") or
> os.path.join(d.getVar("DL_DIR"), "repo") codir =
> os.path.join(repodir, gitsrcname, ud.manifest)
> if ud.user:
> @@ -72,11 +74,11 @@ class Repo(FetchMethod):
> repodir = os.path.join(codir, "repo")
> bb.utils.mkdirhier(repodir)
> if not os.path.exists(os.path.join(repodir, ".repo")):
> - bb.fetch2.check_network_access(d, "repo init -m %s -b %s
> -u %s://%s%s%s" % (ud.manifest, ud.branch, ud.proto, username,
> ud.host, ud.path), ud.url)
> - runfetchcmd("repo init -m %s -b %s -u %s://%s%s%s" %
> (ud.manifest, ud.branch, ud.proto, username, ud.host, ud.path), d,
> workdir=repodir)
> + bb.fetch2.check_network_access(d, "%s init -m %s -b %s
> -u %s://%s%s%s" % (ud.basecmd, ud.manifest, ud.branch, ud.proto,
> username, ud.host, ud.path), ud.url)
> + runfetchcmd("%s init -m %s -b %s -u %s://%s%s%s" %
> (ud.basecmd, ud.manifest, ud.branch, ud.proto, username, ud.host,
> ud.path), d, workdir=repodir)
> - bb.fetch2.check_network_access(d, "repo sync %s" % ud.url,
> ud.url)
> - runfetchcmd("repo sync", d, workdir=repodir)
> + bb.fetch2.check_network_access(d, "%s sync %s" %
> (ud.basecmd, ud.url), ud.url)
> + runfetchcmd("%s sync" % ud.basecmd, d, workdir=repodir)
>
> scmdata = ud.parm.get("scmdata", "")
> if scmdata == "keep":
> diff --git a/bitbake/lib/bb/fetch2/svn.py b/bitbake/lib/bb/fetch2/svn.py
> index 3f172ee..ed70bcf 100644
> --- a/bitbake/lib/bb/fetch2/svn.py
> +++ b/bitbake/lib/bb/fetch2/svn.py
> @@ -49,7 +49,7 @@ class Svn(FetchMethod):
> if not "module" in ud.parm:
> raise MissingParameterError('module', ud.url)
>
> -        ud.basecmd = d.getVar('FETCHCMD_svn')
> +        ud.basecmd = d.getVar("FETCHCMD_svn") or "/usr/bin/env svn --non-interactive --trust-server-cert"
> ud.module = ud.parm["module"]
>
> @@ -59,8 +59,9 @@ class Svn(FetchMethod):
> ud.path_spec = ud.parm["path_spec"]
>
> # Create paths to svn checkouts
> + svndir = d.getVar("SVNDIR") or (d.getVar("DL_DIR") + "/svn")
> relpath = self._strip_leading_slashes(ud.path)
> - ud.pkgdir = os.path.join(d.expand('${SVNDIR}'), ud.host,
> relpath)
> + ud.pkgdir = os.path.join(svndir, ud.host, relpath)
> ud.moddir = os.path.join(ud.pkgdir, ud.module)
>
> ud.setup_revisions(d)
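The cvs/osc/p4/repo/svn hunks above all converge on the same idiom, so it is
worth spelling out once. A standalone illustration (a plain dict stands in
for the datastore; paths are made up):

    d = {"DL_DIR": "/build/downloads"}
    svndir = d.get("SVNDIR") or (d["DL_DIR"] + "/svn")
    basecmd = d.get("FETCHCMD_svn") or "/usr/bin/env svn --non-interactive --trust-server-cert"
    assert svndir == "/build/downloads/svn"

The point of the change: bitbake.conf no longer has to define FETCHCMD_* or
the per-SCM download directories for these fetchers to work.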
> diff --git a/bitbake/lib/bb/main.py b/bitbake/lib/bb/main.py
> index 7711b29..732a315 100755
> --- a/bitbake/lib/bb/main.py
> +++ b/bitbake/lib/bb/main.py
> @@ -292,8 +292,12 @@ class BitBakeConfigParameters(cookerdata.ConfigParameters):
>                            help="Writes the event log of the build to a bitbake event json file. "
>                                 "Use '' (empty string) to assign the name automatically.")
> -        parser.add_option("", "--runall", action="store", dest="runall",
> -                          help="Run the specified task for all build targets and their dependencies.")
> +        parser.add_option("", "--runall", action="append", dest="runall",
> +                          help="Run the specified task for any recipe in the taskgraph of the specified target (even if it wouldn't otherwise have run).")
> +
> +        parser.add_option("", "--runonly", action="append", dest="runonly",
> +                          help="Run only the specified task within the taskgraph of the specified targets (and any task dependencies those tasks may have).")
> +
> options, targets = parser.parse_args(argv)
>
> @@ -401,9 +405,6 @@ def setup_bitbake(configParams, configuration, extrafeatures=None):
>          # In status only mode there are no logs and no UI
>          logger.addHandler(handler)
>
> - # Clear away any spurious environment variables while we stoke
> up the cooker
> - cleanedvars = bb.utils.clean_environment()
> -
> if configParams.server_only:
> featureset = []
> ui_module = None
> @@ -419,6 +420,10 @@ def setup_bitbake(configParams, configuration,
> extrafeatures=None):
> server_connection = None
>
> +    # Clear away any spurious environment variables while we stoke up the cooker
> +    # (done after import_extension_module() above since for example import gi triggers env var usage)
> +    cleanedvars = bb.utils.clean_environment()
> +
> if configParams.remote_server:
> # Connect to a remote XMLRPC server
>          server_connection = bb.server.xmlrpcclient.connectXMLRPC(configParams.remote_server, featureset,
> diff --git a/bitbake/lib/bb/msg.py b/bitbake/lib/bb/msg.py
> index f1723be..96f077e 100644
> --- a/bitbake/lib/bb/msg.py
> +++ b/bitbake/lib/bb/msg.py
> @@ -40,6 +40,7 @@ class BBLogFormatter(logging.Formatter):
> VERBOSE = logging.INFO - 1
> NOTE = logging.INFO
> PLAIN = logging.INFO + 1
> + VERBNOTE = logging.INFO + 2
> ERROR = logging.ERROR
> WARNING = logging.WARNING
> CRITICAL = logging.CRITICAL
> @@ -51,6 +52,7 @@ class BBLogFormatter(logging.Formatter):
> VERBOSE: 'NOTE',
> NOTE : 'NOTE',
> PLAIN : '',
> + VERBNOTE: 'NOTE',
> WARNING : 'WARNING',
> ERROR : 'ERROR',
> CRITICAL: 'ERROR',
> @@ -66,6 +68,7 @@ class BBLogFormatter(logging.Formatter):
> VERBOSE : BASECOLOR,
> NOTE : BASECOLOR,
> PLAIN : BASECOLOR,
> + VERBNOTE: BASECOLOR,
> WARNING : YELLOW,
> ERROR : RED,
> CRITICAL: RED,
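To see what a dedicated VERBNOTE level buys, here is roughly how a level
between NOTE and WARNING behaves with stock Python logging (a standalone
sketch, not the BBLogFormatter wiring itself):

    import logging
    VERBNOTE = logging.INFO + 2   # matches the definition above
    logging.addLevelName(VERBNOTE, "NOTE")
    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("demo")
    log.log(VERBNOTE, "rendered as a NOTE, yet distinguishable from plain INFO")

Callers such as the runqueue's logger.verbnote() further down can then emit
notes that read like NOTEs without being promoted to warnings.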
> diff --git a/bitbake/lib/bb/parse/__init__.py
> b/bitbake/lib/bb/parse/__init__.py index 2fc4002..5397d57 100644
> --- a/bitbake/lib/bb/parse/__init__.py
> +++ b/bitbake/lib/bb/parse/__init__.py
> @@ -134,8 +134,9 @@ def resolve_file(fn, d):
> if not newfn:
> raise IOError(errno.ENOENT, "file %s not found in %s" %
> (fn, bbpath)) fn = newfn
> + else:
> + mark_dependency(d, fn)
>
> - mark_dependency(d, fn)
> if not os.path.isfile(fn):
> raise IOError(errno.ENOENT, "file %s not found" % fn)
>
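The resolve_file() change and the two handler changes below belong together:
previously mark_dependency() was called both here and in BBHandler/ConfHandler
for included files. As far as these hunks show, the resolver now records the
dependency itself only for files it did not have to search for. A minimal
model of that flow (deps standing in for the datastore's dependency list,
search for bb.utils.which over BBPATH):

    import os

    def resolve(fn, deps, search):
        if not os.path.isabs(fn):
            newfn = search(fn)
            if not newfn:
                raise IOError("file %s not found" % fn)
            fn = newfn
        else:
            deps.append(fn)   # mark_dependency() now happens only here
        return fn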
> diff --git a/bitbake/lib/bb/parse/ast.py b/bitbake/lib/bb/parse/ast.py
> index dba4540..9d20c32 100644
> --- a/bitbake/lib/bb/parse/ast.py
> +++ b/bitbake/lib/bb/parse/ast.py
> @@ -335,35 +335,39 @@ def handleInherit(statements, filename, lineno, m):
>      classes = m.group(1)
>      statements.append(InheritNode(filename, lineno, classes))
>
> -def finalize(fn, d, variant = None):
> - saved_handlers = bb.event.get_handlers().copy()
> -
> - for var in d.getVar('__BBHANDLERS', False) or []:
> - # try to add the handler
> - handlerfn = d.getVarFlag(var, "filename", False)
> - if not handlerfn:
> - bb.fatal("Undefined event handler function '%s'" % var)
> - handlerln = int(d.getVarFlag(var, "lineno", False))
> - bb.event.register(var, d.getVar(var, False),
> (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
> -
> - bb.event.fire(bb.event.RecipePreFinalise(fn), d)
> -
> - bb.data.expandKeys(d)
> +def runAnonFuncs(d):
> code = []
> for funcname in d.getVar("__BBANONFUNCS", False) or []:
> code.append("%s(d)" % funcname)
> bb.utils.better_exec("\n".join(code), {"d": d})
>
> - tasklist = d.getVar('__BBTASKS', False) or []
> - bb.event.fire(bb.event.RecipeTaskPreProcess(fn, list(tasklist)),
> d)
> - bb.build.add_tasks(tasklist, d)
> +def finalize(fn, d, variant = None):
> + saved_handlers = bb.event.get_handlers().copy()
> + try:
> + for var in d.getVar('__BBHANDLERS', False) or []:
> + # try to add the handler
> + handlerfn = d.getVarFlag(var, "filename", False)
> + if not handlerfn:
> + bb.fatal("Undefined event handler function '%s'" %
> var)
> + handlerln = int(d.getVarFlag(var, "lineno", False))
> + bb.event.register(var, d.getVar(var, False),
> (d.getVarFlag(var, "eventmask") or "").split(), handlerfn, handlerln)
> +
> + bb.event.fire(bb.event.RecipePreFinalise(fn), d)
> +
> + bb.data.expandKeys(d)
> + runAnonFuncs(d)
> +
> + tasklist = d.getVar('__BBTASKS', False) or []
> + bb.event.fire(bb.event.RecipeTaskPreProcess(fn,
> list(tasklist)), d)
> + bb.build.add_tasks(tasklist, d)
>
> - bb.parse.siggen.finalise(fn, d, variant)
> + bb.parse.siggen.finalise(fn, d, variant)
>
> - d.setVar('BBINCLUDED', bb.parse.get_file_depends(d))
> + d.setVar('BBINCLUDED', bb.parse.get_file_depends(d))
>
> - bb.event.fire(bb.event.RecipeParsed(fn), d)
> - bb.event.set_handlers(saved_handlers)
> + bb.event.fire(bb.event.RecipeParsed(fn), d)
> + finally:
> + bb.event.set_handlers(saved_handlers)
>
> def _create_variants(datastores, names, function, onlyfinalise):
> def create_variant(name, orig_d, arg = None):
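The finalize() restructuring above is, at its core, the classic
save/try/finally pattern for global state. Reduced to its shape (a sketch,
with a plain dict standing in for the event-handler registry):

    def finalize_with_restore(handlers, body):
        saved = dict(handlers)
        try:
            body()            # may bb.fatal()/raise at any point
        finally:
            handlers.clear()
            handlers.update(saved)

Previously a failing anonymous function or event handler could leave the
recipe's handlers registered globally; now they are restored on every exit
path.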
> diff --git a/bitbake/lib/bb/parse/parse_py/BBHandler.py b/bitbake/lib/bb/parse/parse_py/BBHandler.py
> index f89ad24..e5039e3 100644
> --- a/bitbake/lib/bb/parse/parse_py/BBHandler.py
> +++ b/bitbake/lib/bb/parse/parse_py/BBHandler.py
> @@ -131,9 +131,6 @@ def handle(fn, d, include):
>
> abs_fn = resolve_file(fn, d)
>
> - if include:
> - bb.parse.mark_dependency(d, abs_fn)
> -
> # actual loading
> statements = get_statements(fn, abs_fn, base_name)
>
> diff --git a/bitbake/lib/bb/parse/parse_py/ConfHandler.py b/bitbake/lib/bb/parse/parse_py/ConfHandler.py
> index 97aa130..9d3ebe1 100644
> --- a/bitbake/lib/bb/parse/parse_py/ConfHandler.py
> +++ b/bitbake/lib/bb/parse/parse_py/ConfHandler.py
> @@ -134,9 +134,6 @@ def handle(fn, data, include):
> abs_fn = resolve_file(fn, data)
> f = open(abs_fn, 'r')
>
> - if include:
> - bb.parse.mark_dependency(data, abs_fn)
> -
> statements = ast.StatementGroup()
> lineno = 0
> while True:
> diff --git a/bitbake/lib/bb/runqueue.py b/bitbake/lib/bb/runqueue.py
> index b7be102..9ce06c4 100644
> --- a/bitbake/lib/bb/runqueue.py
> +++ b/bitbake/lib/bb/runqueue.py
> @@ -94,13 +94,13 @@ class RunQueueStats:
> self.active = self.active - 1
> self.failed = self.failed + 1
>
> - def taskCompleted(self, number = 1):
> - self.active = self.active - number
> - self.completed = self.completed + number
> + def taskCompleted(self):
> + self.active = self.active - 1
> + self.completed = self.completed + 1
>
> - def taskSkipped(self, number = 1):
> - self.active = self.active + number
> - self.skipped = self.skipped + number
> + def taskSkipped(self):
> + self.active = self.active + 1
> + self.skipped = self.skipped + 1
>
> def taskActive(self):
> self.active = self.active + 1
> @@ -134,6 +134,7 @@ class RunQueueScheduler(object):
> self.prio_map = [self.rqdata.runtaskentries.keys()]
>
> self.buildable = []
> + self.skip_maxthread = {}
> self.stamps = {}
> for tid in self.rqdata.runtaskentries:
> (mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
> @@ -150,8 +151,25 @@ class RunQueueScheduler(object):
>          self.buildable = [x for x in self.buildable if x not in self.rq.runq_running]
>          if not self.buildable:
>              return None
> +
> +        # Filter out tasks that have a max number of threads that have been exceeded
> +        skip_buildable = {}
> +        for running in self.rq.runq_running.difference(self.rq.runq_complete):
> +            rtaskname = taskname_from_tid(running)
> +            if rtaskname not in self.skip_maxthread:
> +                self.skip_maxthread[rtaskname] = self.rq.cfgData.getVarFlag(rtaskname, "number_threads")
> +            if not self.skip_maxthread[rtaskname]:
> +                continue
> +            if rtaskname in skip_buildable:
> +                skip_buildable[rtaskname] += 1
> +            else:
> +                skip_buildable[rtaskname] = 1
> +
> if len(self.buildable) == 1:
> tid = self.buildable[0]
> + taskname = taskname_from_tid(tid)
> + if taskname in skip_buildable and
> skip_buildable[taskname] >= int(self.skip_maxthread[taskname]):
> + return None
> stamp = self.stamps[tid]
> if stamp not in self.rq.build_stamps.values():
> return tid
> @@ -164,6 +182,9 @@ class RunQueueScheduler(object):
> best = None
> bestprio = None
> for tid in self.buildable:
> + taskname = taskname_from_tid(tid)
> + if taskname in skip_buildable and
> skip_buildable[taskname] >= int(self.skip_maxthread[taskname]):
> + continue
> prio = self.rev_prio_map[tid]
> if bestprio is None or bestprio > prio:
> stamp = self.stamps[tid]
> @@ -178,7 +199,7 @@ class RunQueueScheduler(object):
> """
> Return the id of the task we should build next
> """
> - if self.rq.stats.active < self.rq.number_tasks:
> + if self.rq.can_start_task():
> return self.next_buildable_task()
>
> def newbuildable(self, task):
> @@ -581,11 +602,18 @@ class RunQueueData:
> if t in taskData[mc].taskentries:
> depends.add(t)
>
> -        def add_resolved_dependencies(mc, fn, tasknames, depends):
> -            for taskname in tasknames:
> -                tid = build_tid(mc, fn, taskname)
> -                if tid in self.runtaskentries:
> -                    depends.add(tid)
> +        def add_mc_dependencies(mc, tid):
> +            mcdeps = taskData[mc].get_mcdepends()
> +            for dep in mcdeps:
> +                mcdependency = dep.split(':')
> +                pn = mcdependency[3]
> +                frommc = mcdependency[1]
> +                mcdep = mcdependency[2]
> +                deptask = mcdependency[4]
> +                if mc == frommc:
> +                    fn = taskData[mcdep].build_targets[pn][0]
> +                    newdep = '%s:%s' % (fn,deptask)
> +                    taskData[mc].taskentries[tid].tdepends.append(newdep)
> for mc in taskData:
> for tid in taskData[mc].taskentries:
> @@ -603,12 +631,16 @@ class RunQueueData:
> if fn in taskData[mc].failed_fns:
> continue
>
> +                # We add multiconfig dependencies before processing internal task deps (tdepends)
> +                if 'mcdepends' in task_deps and taskname in task_deps['mcdepends']:
> +                    add_mc_dependencies(mc, tid)
> +
> # Resolve task internal dependencies
> #
> # e.g. addtask before X after Y
> for t in taskData[mc].taskentries[tid].tdepends:
> - (_, depfn, deptaskname, _) = split_tid_mcfn(t)
> - depends.add(build_tid(mc, depfn, deptaskname))
> + (depmc, depfn, deptaskname, _) =
> split_tid_mcfn(t)
> + depends.add(build_tid(depmc, depfn, deptaskname))
>
> # Resolve 'deptask' dependencies
> #
> @@ -673,57 +705,106 @@ class RunQueueData:
> recursiveitasks[tid].append(newdep)
>
> self.runtaskentries[tid].depends = depends
> + # Remove all self references
> + self.runtaskentries[tid].depends.discard(tid)
>
> #self.dump_data()
>
> + self.init_progress_reporter.next_stage()
> +
> # Resolve recursive 'recrdeptask' dependencies (Part B)
> #
> # e.g. do_sometask[recrdeptask] = "do_someothertask"
>          # (makes sure sometask runs after someothertask of all DEPENDS, RDEPENDS and intertask dependencies, recursively)
>          # We need to do this separately since we need all of runtaskentries[*].depends to be complete before this is processed
> - self.init_progress_reporter.next_stage(len(recursivetasks))
> - extradeps = {}
> - for taskcounter, tid in enumerate(recursivetasks):
> - extradeps[tid] = set(self.runtaskentries[tid].depends)
> -
> - tasknames = recursivetasks[tid]
> - seendeps = set()
> -
> - def generate_recdeps(t):
> - newdeps = set()
> - (mc, fn, taskname, _) = split_tid_mcfn(t)
> - add_resolved_dependencies(mc, fn, tasknames, newdeps)
> - extradeps[tid].update(newdeps)
> - seendeps.add(t)
> - newdeps.add(t)
> - for i in newdeps:
> - if i not in self.runtaskentries:
> - # Not all recipes might have the recrdeptask
> task as a task
> - continue
> - task = self.runtaskentries[i].task
> - for n in self.runtaskentries[i].depends:
> - if n not in seendeps:
> - generate_recdeps(n)
> - generate_recdeps(tid)
>
> - if tid in recursiveitasks:
> - for dep in recursiveitasks[tid]:
> - generate_recdeps(dep)
> - self.init_progress_reporter.update(taskcounter)
> +        # Generating/iterating recursive lists of dependencies is painful and potentially slow
> +        # Precompute recursive task dependencies here by:
> +        #     a) create a temp list of reverse dependencies (revdeps)
> +        #     b) walk up the ends of the chains (when a given task no longer has dependencies i.e. len(deps) == 0)
> +        #     c) combine the total list of dependencies in cumulativedeps
> +        #     d) optimise by pre-truncating 'task' off the items in cumulativedeps (keeps items in sets lower)
> +
> -        # Remove circular references so that do_a[recrdeptask] = "do_a do_b" can work
> -        for tid in recursivetasks:
> -            extradeps[tid].difference_update(recursivetasksselfref)
>
> + revdeps = {}
> + deps = {}
> + cumulativedeps = {}
> + for tid in self.runtaskentries:
> + deps[tid] = set(self.runtaskentries[tid].depends)
> + revdeps[tid] = set()
> + cumulativedeps[tid] = set()
> + # Generate a temp list of reverse dependencies
> for tid in self.runtaskentries:
> - task = self.runtaskentries[tid].task
> - # Add in extra dependencies
> - if tid in extradeps:
> - self.runtaskentries[tid].depends = extradeps[tid]
> - # Remove all self references
> - if tid in self.runtaskentries[tid].depends:
> - logger.debug(2, "Task %s contains self reference!",
> tid)
> - self.runtaskentries[tid].depends.remove(tid)
> + for dep in self.runtaskentries[tid].depends:
> + revdeps[dep].add(tid)
> + # Find the dependency chain endpoints
> + endpoints = set()
> + for tid in self.runtaskentries:
> + if len(deps[tid]) == 0:
> + endpoints.add(tid)
> + # Iterate the chains collating dependencies
> + while endpoints:
> + next = set()
> + for tid in endpoints:
> + for dep in revdeps[tid]:
> + cumulativedeps[dep].add(fn_from_tid(tid))
> + cumulativedeps[dep].update(cumulativedeps[tid])
> + if tid in deps[dep]:
> + deps[dep].remove(tid)
> + if len(deps[dep]) == 0:
> + next.add(dep)
> + endpoints = next
> + #for tid in deps:
> + # if len(deps[tid]) != 0:
> + # bb.warn("Sanity test failure, dependencies left for
> %s (%s)" % (tid, deps[tid])) +
> +        # Loop here since recrdeptasks can depend upon other recrdeptasks and we have to
> +        # resolve these recursively until we aren't adding any further extra dependencies
> + extradeps = True
> + while extradeps:
> + extradeps = 0
> + for tid in recursivetasks:
> + tasknames = recursivetasks[tid]
> +
> + totaldeps = set(self.runtaskentries[tid].depends)
> + if tid in recursiveitasks:
> + totaldeps.update(recursiveitasks[tid])
> + for dep in recursiveitasks[tid]:
> + if dep not in self.runtaskentries:
> + continue
> +
> totaldeps.update(self.runtaskentries[dep].depends) +
> + deps = set()
> + for dep in totaldeps:
> + if dep in cumulativedeps:
> + deps.update(cumulativedeps[dep])
> +
> + for t in deps:
> + for taskname in tasknames:
> + newtid = t + ":" + taskname
> + if newtid == tid:
> + continue
> + if newtid in self.runtaskentries and newtid
> not in self.runtaskentries[tid].depends:
> + extradeps += 1
> +
> self.runtaskentries[tid].depends.add(newtid) +
> + # Handle recursive tasks which depend upon other
> recursive tasks
> + deps = set()
> + for dep in
> self.runtaskentries[tid].depends.intersection(recursivetasks):
> +
> deps.update(self.runtaskentries[dep].depends.difference(self.runtaskentries[tid].depends))
> + for newtid in deps:
> + for taskname in tasknames:
> + if not newtid.endswith(":" + taskname):
> + continue
> + if newtid in self.runtaskentries:
> + extradeps += 1
> +
> self.runtaskentries[tid].depends.add(newtid) +
> + bb.debug(1, "Added %s recursive dependencies in this
> loop" % extradeps) +
> +        # Remove recrdeptask circular references so that do_a[recrdeptask] = "do_a do_b" can work
> +        for tid in recursivetasksselfref:
> +            self.runtaskentries[tid].depends.difference_update(recursivetasksselfref)
> self.init_progress_reporter.next_stage()
>
> @@ -798,30 +879,57 @@ class RunQueueData:
> #
> # Once all active tasks are marked, prune the ones we don't
> need.
> - delcount = 0
> + delcount = {}
> for tid in list(self.runtaskentries.keys()):
> if tid not in runq_build:
> + delcount[tid] = self.runtaskentries[tid]
> del self.runtaskentries[tid]
> - delcount += 1
>
> - self.init_progress_reporter.next_stage()
> + # Handle --runall
> + if self.cooker.configuration.runall:
> + # re-run the mark_active and then drop unused tasks from
> new list
> + runq_build = {}
> +
> + for task in self.cooker.configuration.runall:
> + runall_tids = set()
> + for tid in list(self.runtaskentries):
> + wanttid = fn_from_tid(tid) + ":do_%s" % task
> + if wanttid in delcount:
> + self.runtaskentries[wanttid] =
> delcount[wanttid]
> + if wanttid in self.runtaskentries:
> + runall_tids.add(wanttid)
> +
> + for tid in list(runall_tids):
> + mark_active(tid,1)
>
> - if self.cooker.configuration.runall is not None:
> - runall = "do_%s" % self.cooker.configuration.runall
> - runall_tids = { k: v for k, v in
> self.runtaskentries.items() if taskname_from_tid(k) == runall }
> + for tid in list(self.runtaskentries.keys()):
> + if tid not in runq_build:
> + delcount[tid] = self.runtaskentries[tid]
> + del self.runtaskentries[tid]
>
> +            if len(self.runtaskentries) == 0:
> +                bb.msg.fatal("RunQueue", "Could not find any tasks with the tasknames %s to run within the recipes of the taskgraphs of the targets %s" % (str(self.cooker.configuration.runall), str(self.targets)))
> +
> + self.init_progress_reporter.next_stage()
> +
> + # Handle runonly
> + if self.cooker.configuration.runonly:
>              # re-run the mark_active and then drop unused tasks from new list
>              runq_build = {}
> -            for tid in list(runall_tids):
> -                mark_active(tid,1)
> +
> +            for task in self.cooker.configuration.runonly:
> +                runonly_tids = { k: v for k, v in self.runtaskentries.items() if taskname_from_tid(k) == "do_%s" % task }
> +
> + for tid in list(runonly_tids):
> + mark_active(tid,1)
>
> for tid in list(self.runtaskentries.keys()):
> if tid not in runq_build:
> + delcount[tid] = self.runtaskentries[tid]
> del self.runtaskentries[tid]
> - delcount += 1
>
> if len(self.runtaskentries) == 0:
> - bb.msg.fatal("RunQueue", "No remaining tasks to run
> for build target %s with runall %s" % (target, runall))
> + bb.msg.fatal("RunQueue", "Could not find any tasks
> with the tasknames %s to run within the taskgraphs of the targets %s"
> % (str(self.cooker.configuration.runonly), str(self.targets))) #
> # Step D - Sanity checks and computation
> @@ -834,7 +942,7 @@ class RunQueueData:
> else:
> bb.msg.fatal("RunQueue", "No active tasks and not in
> --continue mode?! Please report this bug.")
> - logger.verbose("Pruned %s inactive tasks, %s left",
> delcount, len(self.runtaskentries))
> + logger.verbose("Pruned %s inactive tasks, %s left",
> len(delcount), len(self.runtaskentries))
> logger.verbose("Assign Weightings")
>
> @@ -962,7 +1070,7 @@ class RunQueueData:
> msg += "\n%s has unique rprovides:\n %s" %
> (provfn, "\n ".join(rprovide_results[provfn] - commonrprovs))
> if self.warn_multi_bb:
> - logger.warning(msg)
> + logger.verbnote(msg)
> else:
> logger.error(msg)
>
> @@ -970,7 +1078,7 @@ class RunQueueData:
>
> # Create a whitelist usable by the stamp checks
> self.stampfnwhitelist = {}
> - for mc in self.taskData:
> + for mc in self.taskData:
> self.stampfnwhitelist[mc] = []
> for entry in self.stampwhitelist.split():
> if entry not in self.taskData[mc].build_targets:
> @@ -1002,7 +1110,7 @@ class RunQueueData:
> bb.debug(1, "Task %s is marked nostamp, cannot
> invalidate this task" % taskname) else:
> logger.verbose("Invalidate task %s, %s", taskname,
> fn)
> - bb.parse.siggen.invalidate_task(taskname,
> self.dataCaches[mc], fn)
> + bb.parse.siggen.invalidate_task(taskname,
> self.dataCaches[mc], taskfn)
> self.init_progress_reporter.next_stage()
>
> @@ -1646,6 +1754,10 @@ class RunQueueExecute:
> valid = bb.utils.better_eval(call, locs)
> return valid
>
> + def can_start_task(self):
> + can_start = self.stats.active < self.number_tasks
> + return can_start
> +
> class RunQueueExecuteDummy(RunQueueExecute):
> def __init__(self, rq):
> self.rq = rq
> @@ -1719,13 +1831,14 @@ class RunQueueExecuteTasks(RunQueueExecute):
>              bb.build.del_stamp(taskname, self.rqdata.dataCaches[mc], taskfn)
>              self.rq.scenequeue_covered.remove(tid)
>
> -        toremove = covered_remove
> +        toremove = covered_remove | self.rq.scenequeue_notcovered
>          for task in toremove:
>              logger.debug(1, 'Not skipping task %s due to setsceneverify', task)
>          while toremove:
>              covered_remove = []
>              for task in toremove:
> -                removecoveredtask(task)
> +                if task in self.rq.scenequeue_covered:
> +                    removecoveredtask(task)
>                  for deptask in self.rqdata.runtaskentries[task].depends:
>                      if deptask not in self.rq.scenequeue_covered:
>                          continue
> @@ -1795,14 +1908,13 @@ class RunQueueExecuteTasks(RunQueueExecute):
> continue
> if revdep in self.runq_buildable:
> continue
> - alldeps = 1
> + alldeps = True
> for dep in self.rqdata.runtaskentries[revdep].depends:
> if dep not in self.runq_complete:
> - alldeps = 0
> - if alldeps == 1:
> + alldeps = False
> + break
> + if alldeps:
> self.setbuildable(revdep)
> - fn = fn_from_tid(revdep)
> - taskname = taskname_from_tid(revdep)
> logger.debug(1, "Marking task %s as buildable",
> revdep)
> def task_complete(self, task):
> @@ -1826,8 +1938,8 @@ class RunQueueExecuteTasks(RunQueueExecute):
> self.setbuildable(task)
> bb.event.fire(runQueueTaskSkipped(task, self.stats, self.rq,
> reason), self.cfgData) self.task_completeoutright(task)
> - self.stats.taskCompleted()
> self.stats.taskSkipped()
> + self.stats.taskCompleted()
>
> def execute(self):
> """
> @@ -1937,7 +2049,7 @@ class RunQueueExecuteTasks(RunQueueExecute):
> self.build_stamps2.append(self.build_stamps[task])
> self.runq_running.add(task)
> self.stats.taskActive()
> - if self.stats.active < self.number_tasks:
> + if self.can_start_task():
> return True
>
> if self.stats.active > 0:
> @@ -1992,6 +2104,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
>          # If we don't have any setscene functions, skip this step
>          if len(self.rqdata.runq_setscene_tids) == 0:
>              rq.scenequeue_covered = set()
> +            rq.scenequeue_notcovered = set()
>              rq.state = runQueueRunInit
>              return
>
> @@ -2207,10 +2320,15 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
>                  sq_hash.append(self.rqdata.runtaskentries[tid].hash)
>                  sq_taskname.append(taskname)
>                  sq_task.append(tid)
> +
> +        self.cooker.data.setVar("BB_SETSCENE_STAMPCURRENT_COUNT", len(stamppresent))
> +
>          call = self.rq.hashvalidate + "(sq_fn, sq_task, sq_hash, sq_hashfn, d)"
>          locs = { "sq_fn" : sq_fn, "sq_task" : sq_taskname, "sq_hash" : sq_hash, "sq_hashfn" : sq_hashfn, "d" : self.cooker.data }
>          valid = bb.utils.better_eval(call, locs)
> +        self.cooker.data.delVar("BB_SETSCENE_STAMPCURRENT_COUNT")
> +
> valid_new = stamppresent
> for v in valid:
> valid_new.append(sq_task[v])
> @@ -2272,8 +2390,8 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
>      def task_failoutright(self, task):
>          self.runq_running.add(task)
>          self.runq_buildable.add(task)
> - self.stats.taskCompleted()
> self.stats.taskSkipped()
> + self.stats.taskCompleted()
> self.scenequeue_notcovered.add(task)
> self.scenequeue_updatecounters(task, True)
>
> @@ -2281,8 +2399,8 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
>          self.runq_running.add(task)
>          self.runq_buildable.add(task)
> self.task_completeoutright(task)
> - self.stats.taskCompleted()
> self.stats.taskSkipped()
> + self.stats.taskCompleted()
>
> def execute(self):
> """
> @@ -2292,7 +2410,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
>          self.rq.read_workers()
>
> task = None
> - if self.stats.active < self.number_tasks:
> + if self.can_start_task():
> # Find the next setscene to run
> for nexttask in self.rqdata.runq_setscene_tids:
>                  if nexttask in self.runq_buildable and nexttask not in self.runq_running and self.stamps[nexttask] not in self.build_stamps.values():
> @@ -2351,7 +2469,7 @@ class RunQueueExecuteScenequeue(RunQueueExecute):
>              self.build_stamps2.append(self.build_stamps[task])
>              self.runq_running.add(task)
>              self.stats.taskActive()
> - if self.stats.active < self.number_tasks:
> + if self.can_start_task():
> return True
>
> if self.stats.active > 0:
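Two user-visible knobs fall out of this runqueue work. Assuming the varflag
is read from the configuration data exactly as the scheduler hunk above
shows, a local.conf line like

    do_fetch[number_threads] = "2"

would cap how many do_fetch instances run concurrently, and the reworked
options now accept repetition and bare task names, e.g.:

    bitbake --runall=fetch core-image-minimal
    bitbake --runonly=unpack --runonly=patch core-image-minimal

(core-image-minimal is just a placeholder target here.)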
> diff --git a/bitbake/lib/bb/server/process.py b/bitbake/lib/bb/server/process.py
> index 3d31355..38b923f 100644
> --- a/bitbake/lib/bb/server/process.py
> +++ b/bitbake/lib/bb/server/process.py
> @@ -223,6 +223,8 @@ class ProcessServer(multiprocessing.Process):
>
> try:
> self.cooker.shutdown(True)
> + self.cooker.notifier.stop()
> + self.cooker.confignotifier.stop()
> except:
> pass
>
> @@ -375,11 +377,12 @@ class BitBakeServer(object):
> if os.path.exists(sockname):
> os.unlink(sockname)
>
> +        # Place the log in the build directory alongside the lock file
> +        logfile = os.path.join(os.path.dirname(self.bitbake_lock.name), "bitbake-cookerdaemon.log")
> +
>          self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
>          # AF_UNIX has path length issues so chdir here to workaround
>          cwd = os.getcwd()
> -        logfile = os.path.join(cwd, "bitbake-cookerdaemon.log")
> -
>          try:
> os.chdir(os.path.dirname(sockname))
> self.sock.bind(os.path.basename(sockname))
> @@ -392,11 +395,16 @@ class BitBakeServer(object):
> bb.daemonize.createDaemon(self._startServer, logfile)
> self.sock.close()
> self.bitbake_lock.close()
> + os.close(self.readypipein)
>
> ready = ConnectionReader(self.readypipe)
> r = ready.poll(30)
> if r:
> - r = ready.get()
> +            try:
> +                r = ready.get()
> +            except EOFError:
> +                # Trap the child exiting/closing the pipe and error out
> +                r = None
> if not r or r != "ready":
> ready.close()
> bb.error("Unable to start bitbake server")
> @@ -422,21 +430,16 @@ class BitBakeServer(object):
> bb.error("Server log for this session
> (%s):\n%s" % (logfile, "".join(lines))) raise SystemExit(1)
> ready.close()
> - os.close(self.readypipein)
>
> def _startServer(self):
> print(self.start_log_format % (os.getpid(),
> datetime.datetime.now().strftime(self.start_log_datetime_format)))
> server = ProcessServer(self.bitbake_lock, self.sock, self.sockname)
> self.configuration.setServerRegIdleCallback(server.register_idle_function)
> + os.close(self.readypipe)
> writer = ConnectionWriter(self.readypipein)
> -        try:
> -            self.cooker = bb.cooker.BBCooker(self.configuration, self.featureset)
> -            writer.send("ready")
> -        except:
> -            writer.send("fail")
> -            raise
> -        finally:
> -            os.close(self.readypipein)
> +        self.cooker = bb.cooker.BBCooker(self.configuration, self.featureset)
> + writer.send("ready")
> + writer.close()
> server.cooker = self.cooker
> server.server_timeout = self.configuration.server_timeout
> server.xmlrpcinterface = self.configuration.xmlrpcinterface
> diff --git a/bitbake/lib/bb/siggen.py b/bitbake/lib/bb/siggen.py
> index 5ef82d7..03c824e 100644
> --- a/bitbake/lib/bb/siggen.py
> +++ b/bitbake/lib/bb/siggen.py
> @@ -110,42 +110,13 @@ class SignatureGeneratorBasic(SignatureGenerator):
>          ignore_mismatch = ((d.getVar("BB_HASH_IGNORE_MISMATCH") or '') == '1')
>          tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d)
> - taskdeps = {}
> - basehash = {}
> +        taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, self.basewhitelist, fn)
>          for task in tasklist:
> - data = lookupcache[task]
> -
> - if data is None:
> - bb.error("Task %s from %s seems to be empty?!" %
> (task, fn))
> - data = ''
> -
> - gendeps[task] -= self.basewhitelist
> - newdeps = gendeps[task]
> - seen = set()
> - while newdeps:
> - nextdeps = newdeps
> - seen |= nextdeps
> - newdeps = set()
> - for dep in nextdeps:
> - if dep in self.basewhitelist:
> - continue
> - gendeps[dep] -= self.basewhitelist
> - newdeps |= gendeps[dep]
> - newdeps -= seen
> -
> - alldeps = sorted(seen)
> - for dep in alldeps:
> - data = data + dep
> - var = lookupcache[dep]
> - if var is not None:
> - data = data + str(var)
> - datahash = hashlib.md5(data.encode("utf-8")).hexdigest()
> k = fn + "." + task
> - if not ignore_mismatch and k in self.basehash and
> self.basehash[k] != datahash:
> - bb.error("When reparsing %s, the basehash value
> changed from %s to %s. The metadata is not deterministic and this
> needs to be fixed." % (k, self.basehash[k], datahash))
> - self.basehash[k] = datahash
> - taskdeps[task] = alldeps
> + if not ignore_mismatch and k in self.basehash and
> self.basehash[k] != basehash[k]:
> + bb.error("When reparsing %s, the basehash value
> changed from %s to %s. The metadata is not deterministic and this
> needs to be fixed." % (k, self.basehash[k], basehash[k]))
> + self.basehash[k] = basehash[k]
>
> self.taskdeps[fn] = taskdeps
> self.gendeps[fn] = gendeps
> @@ -193,15 +164,24 @@ class SignatureGeneratorBasic(SignatureGenerator):
>          return taint
>
> def get_taskhash(self, fn, task, deps, dataCache):
> +
> + mc = ''
> + if fn.startswith('multiconfig:'):
> + mc = fn.split(':')[1]
> k = fn + "." + task
> +
> data = dataCache.basetaskhash[k]
> self.basehash[k] = data
> self.runtaskdeps[k] = []
> self.file_checksum_values[k] = []
> recipename = dataCache.pkg_fn[fn]
> -
> for dep in sorted(deps, key=clean_basepath):
> - depname =
> dataCache.pkg_fn[self.pkgnameextract.search(dep).group('fn')]
> + pkgname = self.pkgnameextract.search(dep).group('fn')
> + if mc:
> + depmc = pkgname.split(':')[1]
> + if mc != depmc:
> + continue
> + depname = dataCache.pkg_fn[pkgname]
> if not self.rundep_check(fn, recipename, task, dep,
> depname, dataCache): continue
> if dep not in self.taskhash:
> @@ -347,7 +327,7 @@ class SignatureGeneratorBasicHash(SignatureGeneratorBasic):
>      def stampcleanmask(self, stampbase, fn, taskname, extrainfo):
>          return self.stampfile(stampbase, fn, taskname, extrainfo, clean=True)
> -
> +
> def invalidate_task(self, task, d, fn):
> bb.note("Tainting hash to force rebuild of task %s, %s" %
> (fn, task)) bb.build.write_taint(task, d, fn)
> @@ -636,7 +616,7 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
>                  if collapsed:
>                      output.extend(recout)
>                  else:
> -                    # If a dependent hash changed, might as well print the line above and then defer to the changes in
> +                    # If a dependent hash changed, might as well print the line above and then defer to the changes in
>                      # that hash since in all likelyhood, they're the same changes this task also saw.
>                      output = [output[-1]] + recout
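One subtle bit in the get_taskhash() hunk above: the multiconfig name is
recovered purely by string convention. A standalone check of that convention
(recipe path invented for the example):

    fn = "multiconfig:musl:/srv/meta/recipes-core/foo/foo_1.0.bb"
    mc = fn.split(':')[1] if fn.startswith('multiconfig:') else ''
    assert mc == "musl"

Dependencies whose fn carries a different mc prefix are then skipped, so
task hashes no longer mix across multiconfigs.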
> diff --git a/bitbake/lib/bb/taskdata.py b/bitbake/lib/bb/taskdata.py
> index 0ea6c0b..94e822c 100644
> --- a/bitbake/lib/bb/taskdata.py
> +++ b/bitbake/lib/bb/taskdata.py
> @@ -70,6 +70,8 @@ class TaskData:
>
> self.skiplist = skiplist
>
> + self.mcdepends = []
> +
> def add_tasks(self, fn, dataCache):
> """
> Add tasks for a given fn to the database
> @@ -88,6 +90,13 @@ class TaskData:
>
> self.add_extra_deps(fn, dataCache)
>
> + def add_mcdepends(task):
> + for dep in task_deps['mcdepends'][task].split():
> + if len(dep.split(':')) != 5:
> + bb.msg.fatal("TaskData", "Error for %s:%s[%s],
> multiconfig dependency %s does not contain exactly four ':'
> characters.\n Task '%s' should be specified in the form
> 'multiconfig:fromMC:toMC:packagename:task'" % (fn, task, 'mcdepends',
> dep, 'mcdepends'))
> + if dep not in self.mcdepends:
> + self.mcdepends.append(dep)
> +
> # Common code for dep_name/depends = 'depends'/idepends and
> 'rdepends'/irdepends def handle_deps(task, dep_name, depends, seen):
> if dep_name in task_deps and task in task_deps[dep_name]:
> @@ -110,16 +119,20 @@ class TaskData:
> parentids = []
> for dep in task_deps['parents'][task]:
> if dep not in task_deps['tasks']:
> - bb.debug(2, "Not adding dependeny of %s on %s
> since %s does not exist" % (task, dep, dep))
> + bb.debug(2, "Not adding dependency of %s on %s
> since %s does not exist" % (task, dep, dep)) continue
> parentid = "%s:%s" % (fn, dep)
> parentids.append(parentid)
> self.taskentries[tid].tdepends.extend(parentids)
>
> +
> # Touch all intertask dependencies
> handle_deps(task, 'depends',
> self.taskentries[tid].idepends, self.seen_build_target)
> handle_deps(task, 'rdepends', self.taskentries[tid].irdepends,
> self.seen_run_target)
> + if 'mcdepends' in task_deps and task in
> task_deps['mcdepends']:
> + add_mcdepends(task)
> +
> # Work out build dependencies
> if not fn in self.depids:
> dependids = set()
> @@ -537,6 +550,9 @@ class TaskData:
> provmap[name] = provider[0]
> return provmap
>
> + def get_mcdepends(self):
> + return self.mcdepends
> +
> def dump_data(self):
> """
> Dump some debug information on the internal data structures
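For readers unfamiliar with the mcdepends syntax that add_mcdepends()
validates: in recipe metadata it is a five-field, colon-separated varflag,
for example (recipe and target names invented):

    do_image[mcdepends] = "multiconfig:x86:arm:core-image-minimal:do_rootfs"

i.e. multiconfig:fromMC:toMC:packagename:task, which is exactly the "four ':'
characters" the fatal error above insists on.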
> diff --git a/bitbake/lib/bb/tests/cooker.py b/bitbake/lib/bb/tests/cooker.py
> new file mode 100644
> index 0000000..2b44236
> --- /dev/null
> +++ b/bitbake/lib/bb/tests/cooker.py
> @@ -0,0 +1,83 @@
> +# ex:ts=4:sw=4:sts=4:et
> +# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
> +#
> +# BitBake Tests for cooker.py
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License version 2 as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License along
> +# with this program; if not, write to the Free Software Foundation, Inc.,
> +# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
> +#
> +
> +import unittest
> +import tempfile
> +import os
> +import bb, bb.cooker
> +import re
> +import logging
> +
> +# Cooker tests
> +class CookerTest(unittest.TestCase):
> + def setUp(self):
> + # At least one variable needs to be set
> + self.d = bb.data.init()
> + topdir =
> os.path.join(os.path.dirname(os.path.realpath(__file__)),
> "testdata/cooker")
> + self.d.setVar('TOPDIR', topdir)
> +
> + def test_CookerCollectFiles_sublayers(self):
> + '''Test that a sublayer of an existing layer does not trigger
> + No bb files matched ...'''
> +
> + def append_collection(topdir, path, d):
> + collection = path.split('/')[-1]
> + pattern = "^" + topdir + "/" + path + "/"
> + regex = re.compile(pattern)
> + priority = 5
> +
> + d.setVar('BBFILE_COLLECTIONS',
> (d.getVar('BBFILE_COLLECTIONS') or "") + " " + collection)
> + d.setVar('BBFILE_PATTERN_%s' % (collection), pattern)
> + d.setVar('BBFILE_PRIORITY_%s' % (collection), priority)
> +
> + return (collection, pattern, regex, priority)
> +
> + topdir = self.d.getVar("TOPDIR")
> +
> + # Priorities: list of (collection, pattern, regex, priority)
> + bbfile_config_priorities = []
> + # Order is important for this test, shortest to longest is
> typical failure case
> + bbfile_config_priorities.append( append_collection(topdir,
> 'first', self.d) )
> + bbfile_config_priorities.append( append_collection(topdir,
> 'second', self.d) )
> + bbfile_config_priorities.append( append_collection(topdir,
> 'second/third', self.d) ) +
> + pkgfns = [ topdir + '/first/recipes/sample1_1.0.bb',
> + topdir + '/second/recipes/sample2_1.0.bb',
> + topdir + '/second/third/recipes/sample3_1.0.bb' ]
> +
> + class LogHandler(logging.Handler):
> + def __init__(self):
> + logging.Handler.__init__(self)
> + self.logdata = []
> +
> + def emit(self, record):
> + self.logdata.append(record.getMessage())
> +
> + # Move cooker to use my special logging
> + logger = bb.cooker.logger
> + log_handler = LogHandler()
> + logger.addHandler(log_handler)
> + collection =
> bb.cooker.CookerCollectFiles(bbfile_config_priorities)
> + collection.collection_priorities(pkgfns, self.d)
> + logger.removeHandler(log_handler)
> +
> + # Should be empty (no generated messages)
> + expected = []
> +
> + self.assertEqual(log_handler.logdata, expected)
> diff --git a/bitbake/lib/bb/tests/data.py b/bitbake/lib/bb/tests/data.py
> index a4a9dd3..db3e201 100644
> --- a/bitbake/lib/bb/tests/data.py
> +++ b/bitbake/lib/bb/tests/data.py
> @@ -281,7 +281,7 @@ class TestConcatOverride(unittest.TestCase):
> def test_remove(self):
> self.d.setVar("TEST", "${VAL} ${BAR}")
> self.d.setVar("TEST_remove", "val")
> - self.assertEqual(self.d.getVar("TEST"), "bar")
> + self.assertEqual(self.d.getVar("TEST"), " bar")
>
> def test_remove_cleared(self):
> self.d.setVar("TEST", "${VAL} ${BAR}")
> @@ -300,7 +300,7 @@ class TestConcatOverride(unittest.TestCase):
> self.d.setVar("TEST", "${VAL} ${BAR}")
> self.d.setVar("TEST_remove", "val")
> self.d.setVar("TEST_TEST", "${TEST} ${TEST}")
> - self.assertEqual(self.d.getVar("TEST_TEST"), "bar bar")
> + self.assertEqual(self.d.getVar("TEST_TEST"), " bar bar")
>
> def test_empty_remove(self):
> self.d.setVar("TEST", "")
> @@ -311,13 +311,25 @@ class TestConcatOverride(unittest.TestCase):
> self.d.setVar("BAR", "Z")
> self.d.setVar("TEST", "${BAR}/X Y")
> self.d.setVar("TEST_remove", "${BAR}/X")
> - self.assertEqual(self.d.getVar("TEST"), "Y")
> + self.assertEqual(self.d.getVar("TEST"), " Y")
>
> def test_remove_expansion_items(self):
> self.d.setVar("TEST", "A B C D")
> self.d.setVar("BAR", "B D")
> self.d.setVar("TEST_remove", "${BAR}")
> - self.assertEqual(self.d.getVar("TEST"), "A C")
> + self.assertEqual(self.d.getVar("TEST"), "A C ")
> +
> + def test_remove_preserve_whitespace(self):
> + # When the removal isn't active, the original value should be preserved
> + self.d.setVar("TEST", " A B")
> + self.d.setVar("TEST_remove", "C")
> + self.assertEqual(self.d.getVar("TEST"), " A B")
> +
> + def test_remove_preserve_whitespace2(self):
> + # When the removal is active preserve the whitespace
> + self.d.setVar("TEST", " A B")
> + self.d.setVar("TEST_remove", "B")
> + self.assertEqual(self.d.getVar("TEST"), " A ")
>
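
The assertion changes above, plus the two new tests, all encode one semantic
change: an active _remove now blanks the removed token in place (so the
surrounding whitespace survives), while an inactive _remove leaves the value
untouched. A rough pure-Python model of that behaviour (my reading of the
hunks, not the actual data_smart implementation):

    import re

    def apply_remove(value, remove_tokens):
        # Blank each removed token in place; separators are kept as-is.
        chunks = re.split(r'(\s+)', value)
        return ''.join('' if c in remove_tokens else c for c in chunks)

    assert apply_remove("val bar", {"val"}) == " bar"
    assert apply_remove(" A B", {"B"}) == " A "
    assert apply_remove(" A B", {"C"}) == " A B"  # inactive: unchanged
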
> class TestOverrides(unittest.TestCase):
> def setUp(self):
> @@ -374,6 +386,15 @@ class TestOverrides(unittest.TestCase):
> self.d.setVar("OVERRIDES", "foo:bar:some_val")
> self.assertEqual(self.d.getVar("TEST"), "testvalue3")
>
> + def test_remove_with_override(self):
> + self.d.setVar("TEST_bar", "testvalue2")
> + self.d.setVar("TEST_some_val", "testvalue3 testvalue5")
> + self.d.setVar("TEST_some_val_remove", "testvalue3")
> + self.d.setVar("TEST_foo", "testvalue4")
> + self.d.setVar("OVERRIDES", "foo:bar:some_val")
> + self.assertEqual(self.d.getVar("TEST"), " testvalue5")
> +
> +
> class TestKeyExpansion(unittest.TestCase):
> def setUp(self):
> self.d = bb.data.init()
> @@ -443,6 +464,54 @@ class Contains(unittest.TestCase):
> self.assertFalse(bb.utils.contains_any("SOMEFLAG", "x y z",
> True, False, self.d))
>
> +class TaskHash(unittest.TestCase):
> + def test_taskhashes(self):
> + def gettask_bashhash(taskname, d):
> + tasklist, gendeps, lookupcache = bb.data.generate_dependencies(d)
> + taskdeps, basehash = bb.data.generate_dependency_hash(tasklist, gendeps, lookupcache, set(), "somefile")
> + bb.warn(str(lookupcache))
> + return basehash["somefile." + taskname]
> +
> + d = bb.data.init()
> + d.setVar("__BBTASKS", ["mytask"])
> + d.setVar("__exportlist", [])
> + d.setVar("mytask", "${MYCOMMAND}")
> + d.setVar("MYCOMMAND", "${VAR}; foo; bar; exit 0")
> + d.setVar("VAR", "val")
> + orighash = gettask_bashhash("mytask", d)
> +
> + # Changing a variable should change the hash
> + d.setVar("VAR", "val2")
> + nexthash = gettask_bashhash("mytask", d)
> + self.assertNotEqual(orighash, nexthash)
> +
> + d.setVar("VAR", "val")
> + # Adding an inactive removal shouldn't change the hash
> + d.setVar("BAR", "notbar")
> + d.setVar("MYCOMMAND_remove", "${BAR}")
> + nexthash = gettask_bashhash("mytask", d)
> + self.assertEqual(orighash, nexthash)
> +
> + # Adding an active removal should change the hash
> + d.setVar("BAR", "bar;")
> + nexthash = gettask_bashhash("mytask", d)
> + self.assertNotEqual(orighash, nexthash)
> +
> + # Setup an inactive contains()
> + d.setVar("VAR", "${@bb.utils.contains('VAR2', 'A', 'val', '', d)}")
> + orighash = gettask_bashhash("mytask", d)
> +
> + # Activate the contains() and the hash should change
> + d.setVar("VAR2", "A")
> + nexthash = gettask_bashhash("mytask", d)
> + self.assertNotEqual(orighash, nexthash)
> +
> + # The contains should be inactive but even though VAR2 has a
> + # different value the hash should match the original
> + d.setVar("VAR2", "B")
> + nexthash = gettask_bashhash("mytask", d)
> + self.assertEqual(orighash, nexthash)
> +
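
The last two assertions are the subtle part: the hash tracks the expanded
value, so any VAR2 that leaves the contains() inactive must hash identically
to VAR2 being unset. A toy model of that property (hashlib only, not the
real siggen):

    import hashlib

    def toy_hash(expanded):
        return hashlib.sha256(expanded.encode("utf-8")).hexdigest()

    def expand(var2):
        # contains('VAR2', 'A', 'val', '', d) expands to 'val' iff 'A' in VAR2
        val = "val" if "A" in var2.split() else ""
        return "%s; foo; bar; exit 0" % val

    assert toy_hash(expand("")) == toy_hash(expand("B"))  # both inactive
    assert toy_hash(expand("")) != toy_hash(expand("A"))  # activation changes it
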
> class Serialize(unittest.TestCase):
>
> def test_serialize(self):
> diff --git a/bitbake/lib/bb/tests/fetch.py b/bitbake/lib/bb/tests/fetch.py
> index 11698f2..17909ec 100644
> --- a/bitbake/lib/bb/tests/fetch.py
> +++ b/bitbake/lib/bb/tests/fetch.py
> @@ -20,6 +20,7 @@
> #
>
> import unittest
> +import hashlib
> import tempfile
> import subprocess
> import collections
> @@ -401,6 +402,12 @@ class MirrorUriTest(FetcherTest):
> : "git://somewhere.org/somedir/mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
> ("git://git.invalid.infradead.org/foo/mtd-utils.git;tag=1234567890123456789012345678901234567890", "git://.*/.*", "git://somewhere.org/somedir/MIRRORNAME;protocol=http")
> : "git://somewhere.org/somedir/git.invalid.infradead.org.foo.mtd-utils.git;tag=1234567890123456789012345678901234567890;protocol=http",
> + ("http://somewhere.org/somedir1/somedir2/somefile_1.2.3.tar.gz", "http://.*/.*", "http://somewhere2.org")
> + : "http://somewhere2.org/somefile_1.2.3.tar.gz",
> + ("http://somewhere.org/somedir1/somedir2/somefile_1.2.3.tar.gz", "http://.*/.*", "http://somewhere2.org/")
> + : "http://somewhere2.org/somefile_1.2.3.tar.gz",
> + ("git://someserver.org/bitbake;tag=1234567890123456789012345678901234567890;branch=master", "git://someserver.org/bitbake;branch=master", "git://git.openembedded.org/bitbake;protocol=http")
> + : "git://git.openembedded.org/bitbake;tag=1234567890123456789012345678901234567890;branch=master;protocol=http",
> #Renaming files doesn't work
> #("http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere.org/somedir1/somefile_1.2.3.tar.gz", "http://somewhere2.org/somedir3/somefile_2.3.4.tar.gz") : "http://somewhere2.org/somedir3/somefile_2.3.4.tar.gz"
> @@ -456,6 +463,124 @@ class MirrorUriTest(FetcherTest):
> 'https://BBBB/B/B/B/bitbake/bitbake-1.0.tar.gz',
> 'http://AAAA/A/A/A/B/B/bitbake/bitbake-1.0.tar.gz'])
> +
> +class GitDownloadDirectoryNamingTest(FetcherTest):
> + def setUp(self):
> + super(GitDownloadDirectoryNamingTest, self).setUp()
> + self.recipe_url = "git://git.openembedded.org/bitbake"
> + self.recipe_dir = "git.openembedded.org.bitbake"
> + self.mirror_url = "git://github.com/openembedded/bitbake.git"
> + self.mirror_dir = "github.com.openembedded.bitbake.git"
> +
> + self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40')
> +
> + def setup_mirror_rewrite(self):
> + self.d.setVar("PREMIRRORS", self.recipe_url + " " + self.mirror_url + " \n")
> +
> + @skipIfNoNetwork()
> + def test_that_directory_is_named_after_recipe_url_when_no_mirroring_is_used(self):
> + self.setup_mirror_rewrite()
> + fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
> +
> + fetcher.download()
> +
> + dir = os.listdir(self.dldir + "/git2")
> + self.assertIn(self.recipe_dir, dir)
> +
> + @skipIfNoNetwork()
> + def test_that_directory_exists_for_mirrored_url_and_recipe_url_when_mirroring_is_used(self):
> + self.setup_mirror_rewrite()
> + fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
> +
> + fetcher.download()
> +
> + dir = os.listdir(self.dldir + "/git2")
> + self.assertIn(self.mirror_dir, dir)
> + self.assertIn(self.recipe_dir, dir)
> +
> + @skipIfNoNetwork()
> + def test_that_recipe_directory_and_mirrored_directory_exists_when_mirroring_is_used_and_the_mirrored_directory_already_exists(self):
> + self.setup_mirror_rewrite()
> + fetcher = bb.fetch.Fetch([self.mirror_url], self.d)
> + fetcher.download()
> + fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
> +
> + fetcher.download()
> +
> + dir = os.listdir(self.dldir + "/git2")
> + self.assertIn(self.mirror_dir, dir)
> + self.assertIn(self.recipe_dir, dir)
> +
> +
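
The directory names asserted in this class follow the fetcher's naming rule
for git2/: hostname plus path, with slashes folded to dots. A quick
standalone illustration of the rule (simplified; the real gitsrcname in
bb/fetch2/git.py also folds in user and port when present):

    from urllib.parse import urlparse

    def git_download_dir(url):
        parts = urlparse(url)
        return parts.netloc + parts.path.replace('/', '.')

    assert git_download_dir("git://git.openembedded.org/bitbake") == \
        "git.openembedded.org.bitbake"
    assert git_download_dir("git://github.com/openembedded/bitbake.git") == \
        "github.com.openembedded.bitbake.git"
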
> +class TarballNamingTest(FetcherTest):
> + def setUp(self):
> + super(TarballNamingTest, self).setUp()
> + self.recipe_url = "git://git.openembedded.org/bitbake"
> + self.recipe_tarball = "git2_git.openembedded.org.bitbake.tar.gz"
> + self.mirror_url = "git://github.com/openembedded/bitbake.git"
> + self.mirror_tarball = "git2_github.com.openembedded.bitbake.git.tar.gz"
> +
> + self.d.setVar('BB_GENERATE_MIRROR_TARBALLS', '1')
> + self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40')
> +
> + def setup_mirror_rewrite(self):
> + self.d.setVar("PREMIRRORS", self.recipe_url + " " + self.mirror_url + " \n")
> +
> + @skipIfNoNetwork()
> + def test_that_the_recipe_tarball_is_created_when_no_mirroring_is_used(self):
> + fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
> +
> + fetcher.download()
> +
> + dir = os.listdir(self.dldir)
> + self.assertIn(self.recipe_tarball, dir)
> +
> + @skipIfNoNetwork()
> + def test_that_the_mirror_tarball_is_created_when_mirroring_is_used(self):
> + self.setup_mirror_rewrite()
> + fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
> +
> + fetcher.download()
> +
> + dir = os.listdir(self.dldir)
> + self.assertIn(self.mirror_tarball, dir)
> +
> +
> +class GitShallowTarballNamingTest(FetcherTest):
> + def setUp(self):
> + super(GitShallowTarballNamingTest, self).setUp()
> + self.recipe_url = "git://git.openembedded.org/bitbake"
> + self.recipe_tarball = "gitshallow_git.openembedded.org.bitbake_82ea737-1_master.tar.gz"
> + self.mirror_url = "git://github.com/openembedded/bitbake.git"
> + self.mirror_tarball = "gitshallow_github.com.openembedded.bitbake.git_82ea737-1_master.tar.gz"
> +
> + self.d.setVar('BB_GIT_SHALLOW', '1')
> + self.d.setVar('BB_GENERATE_SHALLOW_TARBALLS', '1')
> + self.d.setVar('SRCREV', '82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40')
> +
> + def setup_mirror_rewrite(self):
> + self.d.setVar("PREMIRRORS", self.recipe_url + " " + self.mirror_url + " \n")
> +
> + @skipIfNoNetwork()
> + def test_that_the_tarball_is_named_after_recipe_url_when_no_mirroring_is_used(self):
> + fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
> +
> + fetcher.download()
> +
> + dir = os.listdir(self.dldir)
> + self.assertIn(self.recipe_tarball, dir)
> +
> + @skipIfNoNetwork()
> + def test_that_the_mirror_tarball_is_created_when_mirroring_is_used(self):
> + self.setup_mirror_rewrite()
> + fetcher = bb.fetch.Fetch([self.recipe_url], self.d)
> +
> + fetcher.download()
> +
> + dir = os.listdir(self.dldir)
> + self.assertIn(self.mirror_tarball, dir)
> +
> +
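
The expected tarball names in the last two classes decompose the same way:
"git2_" plus the download directory name plus ".tar.gz" for full mirror
tarballs, and "gitshallow_" plus the name plus "_<abbreviated rev>-1_<branch>.tar.gz"
for shallow ones. A sketch of the shallow form, derived purely from the
literals in the tests (not a call into bb.fetch2):

    def shallow_tarball_name(src_dir, srcrev, branch="master"):
        # e.g. gitshallow_git.openembedded.org.bitbake_82ea737-1_master.tar.gz
        return "gitshallow_%s_%s-1_%s.tar.gz" % (src_dir, srcrev[:7], branch)

    assert shallow_tarball_name(
        "git.openembedded.org.bitbake",
        "82ea737a0b42a8b53e11c9cde141e9e9c0bd8c40") == \
        "gitshallow_git.openembedded.org.bitbake_82ea737-1_master.tar.gz"
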
> class FetcherLocalTest(FetcherTest):
> def setUp(self):
> def touch(fn):
> @@ -522,6 +647,109 @@ class FetcherLocalTest(FetcherTest):
> with self.assertRaises(bb.fetch2.UnpackError):
> self.fetchUnpack(['file://a;subdir=/bin/sh'])
>
> +class FetcherNoNetworkTest(FetcherTest):
> + def setUp(self):
> + super().setUp()
> + # all test cases are based on not having network
> + self.d.setVar("BB_NO_NETWORK", "1")
> +
> + def test_missing(self):
> + string = "this is a test file\n".encode("utf-8")
> + self.d.setVarFlag("SRC_URI", "md5sum", hashlib.md5(string).hexdigest())
> + self.d.setVarFlag("SRC_URI", "sha256sum", hashlib.sha256(string).hexdigest())
> +
> + self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
> + self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
> + fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/test-file.tar.gz"], self.d)
> + with self.assertRaises(bb.fetch2.NetworkAccess):
> + fetcher.download()
> +
> + def test_valid_missing_donestamp(self):
> + # create the file in the download directory with correct hash
> + string = "this is a test file\n".encode("utf-8")
> + with open(os.path.join(self.dldir, "test-file.tar.gz"), "wb") as f:
> + f.write(string)
> +
> + self.d.setVarFlag("SRC_URI", "md5sum", hashlib.md5(string).hexdigest())
> + self.d.setVarFlag("SRC_URI", "sha256sum", hashlib.sha256(string).hexdigest())
> +
> + self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
> + self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
> + fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/test-file.tar.gz"], self.d)
> + fetcher.download()
> + self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
> +
> + def test_invalid_missing_donestamp(self):
> + # create an invalid file in the download directory with incorrect hash
> + string = "this is a test file\n".encode("utf-8")
> + with open(os.path.join(self.dldir, "test-file.tar.gz"), "wb"):
> + pass
> +
> + self.d.setVarFlag("SRC_URI", "md5sum", hashlib.md5(string).hexdigest())
> + self.d.setVarFlag("SRC_URI", "sha256sum", hashlib.sha256(string).hexdigest())
> +
> + self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
> + self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
> + fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/test-file.tar.gz"], self.d)
> + with self.assertRaises(bb.fetch2.NetworkAccess):
> + fetcher.download()
> + # the existing file should not exist or should have been moved to "bad-checksum"
> + self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
> +
> + def test_nochecksums_missing(self):
> + self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
> + self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
> + # ssh fetch does not support checksums
> + fetcher = bb.fetch.Fetch(["ssh://invalid@invalid.yoctoproject.org/test-file.tar.gz"], self.d)
> + # attempts to download with missing donestamp
> + with self.assertRaises(bb.fetch2.NetworkAccess):
> + fetcher.download()
> +
> + def test_nochecksums_missing_donestamp(self):
> + # create a file in the download directory
> + with open(os.path.join(self.dldir, "test-file.tar.gz"), "wb"):
> + pass
> +
> + self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
> + self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
> + # ssh fetch does not support checksums
> + fetcher = bb.fetch.Fetch(["ssh://invalid@invalid.yoctoproject.org/test-file.tar.gz"], self.d)
> + # attempts to download with missing donestamp
> + with self.assertRaises(bb.fetch2.NetworkAccess):
> + fetcher.download()
> +
> + def test_nochecksums_has_donestamp(self):
> + # create a file in the download directory with the donestamp
> + with open(os.path.join(self.dldir, "test-file.tar.gz"), "wb"):
> + pass
> + with open(os.path.join(self.dldir, "test-file.tar.gz.done"), "wb"):
> + pass
> +
> + self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
> + self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
> + # ssh fetch does not support checksums
> + fetcher = bb.fetch.Fetch(["ssh://invalid@invalid.yoctoproject.org/test-file.tar.gz"], self.d)
> + # should not fetch
> + fetcher.download()
> + # both files should still exist
> + self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
> + self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
> +
> + def test_nochecksums_missing_has_donestamp(self):
> + # create only the donestamp in the download directory
> + with open(os.path.join(self.dldir, "test-file.tar.gz.done"), "wb"):
> + pass
> +
> + self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
> + self.assertTrue(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
> + # ssh fetch does not support checksums
> + fetcher = bb.fetch.Fetch(["ssh://invalid@invalid.yoctoproject.org/test-file.tar.gz"], self.d)
> + with self.assertRaises(bb.fetch2.NetworkAccess):
> + fetcher.download()
> + # neither file should exist; the stale donestamp is removed
> + self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz")))
> + self.assertFalse(os.path.exists(os.path.join(self.dldir, "test-file.tar.gz.done")))
> +
> class FetcherNetworkTest(FetcherTest):
> @skipIfNoNetwork()
> def test_fetch(self):
> @@ -641,27 +869,27 @@ class FetcherNetworkTest(FetcherTest):
> self.assertRaises(bb.fetch.ParameterError, self.gitfetcher, url, url)
> @skipIfNoNetwork()
> - def test_gitfetch_premirror(self):
> - url1 = "git://git.openembedded.org/bitbake"
> - url2 = "git://someserver.org/bitbake"
> + def test_gitfetch_finds_local_tarball_for_mirrored_url_when_previous_downloaded_by_the_recipe_url(self):
> + recipeurl = "git://git.openembedded.org/bitbake"
> + mirrorurl = "git://someserver.org/bitbake"
> self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
> - self.gitfetcher(url1, url2)
> + self.gitfetcher(recipeurl, mirrorurl)
>
> @skipIfNoNetwork()
> - def test_gitfetch_premirror2(self):
> - url1 = url2 = "git://someserver.org/bitbake"
> + def test_gitfetch_finds_local_tarball_when_previous_downloaded_from_a_premirror(self):
> + recipeurl = "git://someserver.org/bitbake"
> self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
> - self.gitfetcher(url1, url2)
> + self.gitfetcher(recipeurl, recipeurl)
>
> @skipIfNoNetwork()
> - def test_gitfetch_premirror3(self):
> + def test_gitfetch_finds_local_repository_when_premirror_rewrites_the_recipe_url(self):
> realurl = "git://git.openembedded.org/bitbake"
> - dummyurl = "git://someserver.org/bitbake"
> + recipeurl = "git://someserver.org/bitbake"
> self.sourcedir = self.unpackdir.replace("unpacked", "sourcemirror.git")
> os.chdir(self.tempdir)
> bb.process.run("git clone %s %s 2> /dev/null" % (realurl, self.sourcedir), shell=True)
> - self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file \n" % (dummyurl, self.sourcedir))
> - self.gitfetcher(dummyurl, dummyurl)
> + self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file \n" % (recipeurl, self.sourcedir))
> + self.gitfetcher(recipeurl, recipeurl)
>
> @skipIfNoNetwork()
> def test_git_submodule(self):
> @@ -728,7 +956,7 @@ class URLHandle(unittest.TestCase):
> # decodeurl and we need to handle them
> decodedata = datatable.copy()
> decodedata.update({
> - "http://somesite.net;someparam=1": ('http', 'somesite.net', '', '', '', {'someparam': '1'}),
> + "http://somesite.net;someparam=1": ('http', 'somesite.net', '/', '', '', {'someparam': '1'}),
> })
>
> def test_decodeurl(self):
> @@ -757,12 +985,12 @@ class FetchLatestVersionTest(FetcherTest):
> ("dtc", "git://git.qemu.org/dtc.git", "65cc4d2748a2c2e6f27f1cf39e07a5dbabd80ebf", "")
> : "1.4.0",
> # combination version pattern
> - ("sysprof", "git://git.gnome.org/sysprof", "cd44ee6644c3641507fb53b8a2a69137f2971219", "")
> + ("sysprof", "git://gitlab.gnome.org/GNOME/sysprof.git;protocol=https", "cd44ee6644c3641507fb53b8a2a69137f2971219", "")
> : "1.2.0",
> ("u-boot-mkimage", "git://git.denx.de/u-boot.git;branch=master;protocol=git", "62c175fbb8a0f9a926c88294ea9f7e88eb898f6c", "")
> : "2014.01",
> # version pattern "yyyymmdd"
> - ("mobile-broadband-provider-info", "git://git.gnome.org/mobile-broadband-provider-info", "4ed19e11c2975105b71b956440acdb25d46a347d", "")
> + ("mobile-broadband-provider-info", "git://gitlab.gnome.org/GNOME/mobile-broadband-provider-info.git;protocol=https", "4ed19e11c2975105b71b956440acdb25d46a347d", "")
> : "20120614",
> # packages with a valid UPSTREAM_CHECK_GITTAGREGEX
> ("xf86-video-omap", "git://anongit.freedesktop.org/xorg/driver/xf86-video-omap", "ae0394e687f1a77e966cf72f895da91840dffb8f", "(?P<pver>(\d+\.(\d\.?)*))")
> @@ -809,7 +1037,7 @@ class FetchLatestVersionTest(FetcherTest):
> ud = bb.fetch2.FetchData(k[1], self.d)
> pupver= ud.method.latest_versionstring(ud, self.d)
> verstring = pupver[0]
> - self.assertTrue(verstring, msg="Could not find upstream version")
> + self.assertTrue(verstring, msg="Could not find upstream version for %s" % k[0])
> r = bb.utils.vercmp_string(v, verstring)
> self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
> @@ -822,7 +1050,7 @@ class FetchLatestVersionTest(FetcherTest):
> ud = bb.fetch2.FetchData(k[1], self.d)
> pupver = ud.method.latest_versionstring(ud, self.d)
> verstring = pupver[0]
> - self.assertTrue(verstring, msg="Could not find upstream version")
> + self.assertTrue(verstring, msg="Could not find upstream version for %s" % k[0])
> r = bb.utils.vercmp_string(v, verstring)
> self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
> @@ -874,9 +1102,6 @@ class FetchCheckStatusTest(FetcherTest):
>
>
> class GitMakeShallowTest(FetcherTest):
> - bitbake_dir = os.path.join(os.path.dirname(os.path.join(os.path.abspath(__file__))), '..', '..', '..')
> - make_shallow_path = os.path.join(bitbake_dir, 'bin', 'git-make-shallow')
> -
> def setUp(self):
> FetcherTest.setUp(self)
> self.gitdir = os.path.join(self.tempdir, 'gitshallow')
> @@ -905,7 +1130,7 @@ class GitMakeShallowTest(FetcherTest):
> def make_shallow(self, args=None):
> if args is None:
> args = ['HEAD']
> - return bb.process.run([self.make_shallow_path] + args, cwd=self.gitdir)
> + return bb.process.run([bb.fetch2.git.Git.make_shallow_path] + args, cwd=self.gitdir)
> def add_empty_file(self, path, msg=None):
> if msg is None:
> @@ -1237,6 +1462,9 @@ class GitShallowTest(FetcherTest):
> smdir = os.path.join(self.tempdir, 'gitsubmodule')
> bb.utils.mkdirhier(smdir)
> self.git('init', cwd=smdir)
> + # Make this look like it was cloned from a remote...
> + self.git('config --add remote.origin.url "%s"' % smdir, cwd=smdir)
> + self.git('config --add remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"', cwd=smdir)
> self.add_empty_file('asub', cwd=smdir)
> self.git('submodule init', cwd=self.srcdir)
> @@ -1470,3 +1698,30 @@ class GitShallowTest(FetcherTest):
> self.assertNotEqual(orig_revs, revs)
> self.assertRefs(['master', 'origin/master'])
> self.assertRevCount(orig_revs - 1758)
> +
> + def test_that_unpack_throws_an_error_when_the_git_clone_nor_shallow_tarball_exist(self):
> + self.add_empty_file('a')
> + fetcher, ud = self.fetch()
> + bb.utils.remove(self.gitdir, recurse=True)
> + bb.utils.remove(self.dldir, recurse=True)
> +
> + with self.assertRaises(bb.fetch2.UnpackError) as context:
> + fetcher.unpack(self.d.getVar('WORKDIR'))
> +
> + self.assertTrue("No up to date source found" in context.exception.msg)
> + self.assertTrue("clone directory not available or not up to date" in context.exception.msg)
> + self.assertTrue("shallow clone not enabled or not available" in context.exception.msg)
> +
> + @skipIfNoNetwork()
> + def test_that_unpack_does_work_when_using_git_shallow_tarball_but_tarball_is_not_available(self):
> + self.d.setVar('SRCREV', 'e5939ff608b95cdd4d0ab0e1935781ab9a276ac0')
> + self.d.setVar('BB_GIT_SHALLOW', '1')
> + self.d.setVar('BB_GENERATE_SHALLOW_TARBALLS', '1')
> + fetcher = bb.fetch.Fetch(["git://git.yoctoproject.org/fstests"], self.d)
> + fetcher.download()
> +
> + bb.utils.remove(self.dldir + "/*.tar.gz")
> + fetcher.unpack(self.unpackdir)
> +
> + dir = os.listdir(self.unpackdir + "/git/")
> + self.assertIn("fstests.doap", dir)
> diff --git a/bitbake/lib/bb/tests/parse.py b/bitbake/lib/bb/tests/parse.py
> index 8f16ba4..1bc4740 100644
> --- a/bitbake/lib/bb/tests/parse.py
> +++ b/bitbake/lib/bb/tests/parse.py
> @@ -44,9 +44,13 @@ C = "3"
> """
>
> def setUp(self):
> + self.origdir = os.getcwd()
> self.d = bb.data.init()
> bb.parse.siggen = bb.siggen.init(self.d)
>
> + def tearDown(self):
> + os.chdir(self.origdir)
> +
> def parsehelper(self, content, suffix = ".bb"):
>
> f = tempfile.NamedTemporaryFile(suffix = suffix)
> diff --git a/bitbake/lib/bb/ui/buildinfohelper.py b/bitbake/lib/bb/ui/buildinfohelper.py
> index 524a5b0..31323d2 100644
> --- a/bitbake/lib/bb/ui/buildinfohelper.py
> +++ b/bitbake/lib/bb/ui/buildinfohelper.py
> @@ -1603,14 +1603,14 @@ class BuildInfoHelper(object):
> mockevent.lineno = -1
> self.store_log_event(mockevent)
>
> - def store_log_event(self, event):
> + def store_log_event(self, event,cli_backlog=True):
> self._ensure_build()
>
> if event.levelno < formatter.WARNING:
> return
>
> # early return for CLI builds
> - if self.brbe is None:
> + if cli_backlog and self.brbe is None:
> if not 'backlog' in self.internal_state:
> self.internal_state['backlog'] = []
> self.internal_state['backlog'].append(event)
> @@ -1622,7 +1622,7 @@ class BuildInfoHelper(object):
> tempevent = self.internal_state['backlog'].pop()
> logger.debug(1, "buildinfohelper: Saving stored event %s " % tempevent)
> - self.store_log_event(tempevent)
> + self.store_log_event(tempevent,cli_backlog)
> else:
> logger.info("buildinfohelper: All events saved")
> del self.internal_state['backlog']
> @@ -1987,7 +1987,8 @@ class BuildInfoHelper(object):
> if 'backlog' in self.internal_state:
> # we save missed events in the database for the current build
> tempevent = self.internal_state['backlog'].pop()
> - self.store_log_event(tempevent)
> + # Do not skip command line build events
> + self.store_log_event(tempevent,False)
>
> if not connection.features.autocommits_when_autocommit_is_off:
> transaction.set_autocommit(True)
> diff --git a/bitbake/lib/bb/ui/taskexp.py b/bitbake/lib/bb/ui/taskexp.py
> index 0e8e9d4..8305d70 100644
> --- a/bitbake/lib/bb/ui/taskexp.py
> +++ b/bitbake/lib/bb/ui/taskexp.py
> @@ -103,9 +103,16 @@ class DepExplorer(Gtk.Window):
> self.pkg_treeview.get_selection().connect("changed", self.on_cursor_changed)
> column = Gtk.TreeViewColumn("Package", Gtk.CellRendererText(), text=COL_PKG_NAME)
> self.pkg_treeview.append_column(column)
> - pane.add1(scrolled)
> scrolled.add(self.pkg_treeview)
>
> + self.search_entry = Gtk.SearchEntry.new()
> + self.pkg_treeview.set_search_entry(self.search_entry)
> +
> + left_panel = Gtk.VPaned()
> + left_panel.add(self.search_entry)
> + left_panel.add(scrolled)
> + pane.add1(left_panel)
> +
> box = Gtk.VBox(homogeneous=True, spacing=4)
>
> # Task Depends
> @@ -129,6 +136,7 @@ class DepExplorer(Gtk.Window):
> pane.add2(box)
>
> self.show_all()
> + self.search_entry.grab_focus()
>
> def on_package_activated(self, treeview, path, column, data_col):
> model = treeview.get_model()
> diff --git a/bitbake/lib/bb/utils.py b/bitbake/lib/bb/utils.py
> index c540b49..73b6cb4 100644
> --- a/bitbake/lib/bb/utils.py
> +++ b/bitbake/lib/bb/utils.py
> @@ -187,7 +187,7 @@ def explode_deps(s):
> #r[-1] += ' ' + ' '.join(j)
> return r
>
> -def explode_dep_versions2(s):
> +def explode_dep_versions2(s, *, sort=True):
> """
> Take an RDEPENDS style string of format:
> "DEPEND1 (optional version) DEPEND2 (optional version) ..."
> @@ -250,7 +250,8 @@ def explode_dep_versions2(s):
> if not (i in r and r[i]):
> r[lastdep] = []
>
> - r = collections.OrderedDict(sorted(r.items(), key=lambda x: x[0]))
> + if sort:
> + r = collections.OrderedDict(sorted(r.items(), key=lambda x: x[0]))
> return r
>
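
The new keyword-only sort parameter matters to callers that need the original
RDEPENDS ordering preserved (the OrderedDict then keeps insertion order).
Hypothetical usage, assuming a bitbake checkout on sys.path:

    import bb.utils

    deps = bb.utils.explode_dep_versions2("b-pkg a-pkg (>= 1.0)", sort=False)
    assert list(deps) == ["b-pkg", "a-pkg"]   # insertion order kept
    deps = bb.utils.explode_dep_versions2("b-pkg a-pkg (>= 1.0)")
    assert list(deps) == ["a-pkg", "b-pkg"]   # default: sorted
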
> def explode_dep_versions(s):
> @@ -496,7 +497,11 @@ def lockfile(name, shared=False, retry=True, block=False):
> if statinfo.st_ino == statinfo2.st_ino:
> return lf
> lf.close()
> - except Exception:
> + except OSError as e:
> + if e.errno == errno.EACCES:
> + logger.error("Unable to acquire lock '%s', %s",
> + e.strerror, name)
> + sys.exit(1)
> try:
> lf.close()
> except Exception:
> @@ -523,12 +528,17 @@ def md5_file(filename):
> """
> Return the hex string representation of the MD5 checksum of filename.
> """
> - import hashlib
> - m = hashlib.md5()
> + import hashlib, mmap
>
> with open(filename, "rb") as f:
> - for line in f:
> - m.update(line)
> + m = hashlib.md5()
> + try:
> + with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
> + for chunk in iter(lambda: mm.read(8192), b''):
> + m.update(chunk)
> + except ValueError:
> + # You can't mmap() an empty file so silence this exception
> + pass
> return m.hexdigest()
>
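
The mmap() rewrite avoids the old line-oriented loop, which could buffer an
arbitrarily long "line" when hashing binary files. The same pattern works
for any hash; a standalone version with the empty-file caveat handled the
same way:

    import hashlib
    import mmap

    def hash_file(path, algo="sha256", chunk=8192):
        h = hashlib.new(algo)
        with open(path, "rb") as f:
            try:
                with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
                    for block in iter(lambda: mm.read(chunk), b''):
                        h.update(block)
            except ValueError:
                pass  # mmap() refuses empty files; the hash of b'' is fine
        return h.hexdigest()
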
> def sha256_file(filename):
> @@ -806,8 +816,8 @@ def movefile(src, dest, newmtime = None, sstat = None):
> return None # failure
> try:
> if didcopy:
> - os.lchown(dest, sstat[stat.ST_UID], sstat[stat.ST_GID])
> - os.chmod(dest, stat.S_IMODE(sstat[stat.ST_MODE])) # Sticky is reset on chown
> + os.lchown(destpath, sstat[stat.ST_UID], sstat[stat.ST_GID])
> + os.chmod(destpath, stat.S_IMODE(sstat[stat.ST_MODE])) # Sticky is reset on chown
> os.unlink(src)
> except Exception as e:
> print("movefile: Failed to chown/chmod/unlink", dest, e)
> @@ -900,6 +910,23 @@ def copyfile(src, dest, newmtime = None, sstat = None):
> newmtime = sstat[stat.ST_MTIME]
> return newmtime
>
> +def break_hardlinks(src, sstat = None):
> + """
> + Ensures src is the only hardlink to this file. Other hardlinks,
> + if any, are not affected (other than in their st_nlink value, of
> + course). Returns true on success and false on failure.
> +
> + """
> + try:
> + if not sstat:
> + sstat = os.lstat(src)
> + except Exception as e:
> + logger.warning("break_hardlinks: stat of %s failed (%s)" % (src, e))
> + return False
> + if sstat[stat.ST_NLINK] == 1:
> + return True
> + return copyfile(src, src, sstat=sstat)
> +
> def which(path, item, direction = 0, history = False, executable=False):
> """
> Locate `item` in the list of paths `path` (colon separated string like $PATH).
> @@ -1284,7 +1311,7 @@ def edit_metadata_file(meta_file, variables, varfunc):
> return updated
>
>
> -def edit_bblayers_conf(bblayers_conf, add, remove):
> +def edit_bblayers_conf(bblayers_conf, add, remove, edit_cb=None):
> """Edit bblayers.conf, adding and/or removing layers
> Parameters:
> bblayers_conf: path to bblayers.conf file to edit
> @@ -1292,6 +1319,8 @@ def edit_bblayers_conf(bblayers_conf, add, remove):
> list to add nothing
> remove: layer path (or list of layer paths) to remove; None or
> empty list to remove nothing
> + edit_cb: optional callback function that will be called after
> + processing adds/removes once per existing entry.
> Returns a tuple:
> notadded: list of layers specified to be added but weren't
> (because they were already in the list)
> @@ -1355,6 +1384,17 @@ def edit_bblayers_conf(bblayers_conf, add, remove):
> bblayers.append(addlayer)
> del addlayers[:]
>
> + if edit_cb:
> + newlist = []
> + for layer in bblayers:
> + res = edit_cb(layer, canonicalise_path(layer))
> + if res != layer:
> + newlist.append(res)
> + updated = True
> + else:
> + newlist.append(layer)
> + bblayers = newlist
> +
> if updated:
> if op == '+=' and not bblayers:
> bblayers = None
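
One plausible use of the new edit_cb hook, per the docstring above (it is
called once per surviving entry after adds/removes, returning the possibly
rewritten path); the rewrite rule itself is made up for illustration:

    import bb.utils

    def rewrite(layerpath, canonical_path):
        # e.g. migrate entries from an old checkout location (hypothetical)
        return layerpath.replace('/old/checkout', '/new/checkout')

    notadded, notremoved = bb.utils.edit_bblayers_conf(
        'conf/bblayers.conf', None, None, edit_cb=rewrite)
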
> diff --git a/bitbake/lib/bblayers/action.py b/bitbake/lib/bblayers/action.py
> index aa575d1..a3f658f 100644
> --- a/bitbake/lib/bblayers/action.py
> +++ b/bitbake/lib/bblayers/action.py
> @@ -45,7 +45,7 @@ class ActionPlugin(LayerPlugin):
> notadded, _ = bb.utils.edit_bblayers_conf(bblayers_conf,
> layerdirs, None) if not (args.force or notadded):
> try:
> - self.tinfoil.parseRecipes()
> + self.tinfoil.run_command('parseConfiguration')
> except bb.tinfoil.TinfoilUIException:
> # Restore the back up copy of bblayers.conf
> shutil.copy2(backup, bblayers_conf)
> diff --git a/bitbake/lib/bblayers/layerindex.py b/bitbake/lib/bblayers/layerindex.py
> index 9af385d..9f02a9d 100644
> --- a/bitbake/lib/bblayers/layerindex.py
> +++ b/bitbake/lib/bblayers/layerindex.py
> @@ -1,10 +1,9 @@
> +import layerindexlib
> +
> import argparse
> -import http.client
> -import json
> import logging
> import os
> import subprocess
> -import urllib.parse
>
> from bblayers.action import ActionPlugin
>
> @@ -21,110 +20,6 @@ class LayerIndexPlugin(ActionPlugin):
> This class inherits ActionPlugin to get do_add_layer.
> """
>
> - def get_json_data(self, apiurl):
> - proxy_settings = os.environ.get("http_proxy", None)
> - conn = None
> - _parsedurl = urllib.parse.urlparse(apiurl)
> - path = _parsedurl.path
> - query = _parsedurl.query
> -
> - def parse_url(url):
> - parsedurl = urllib.parse.urlparse(url)
> - if parsedurl.netloc[0] == '[':
> - host, port = parsedurl.netloc[1:].split(']', 1)
> - if ':' in port:
> - port = port.rsplit(':', 1)[1]
> - else:
> - port = None
> - else:
> - if parsedurl.netloc.count(':') == 1:
> - (host, port) = parsedurl.netloc.split(":")
> - else:
> - host = parsedurl.netloc
> - port = None
> - return (host, 80 if port is None else int(port))
> -
> - if proxy_settings is None:
> - host, port = parse_url(apiurl)
> - conn = http.client.HTTPConnection(host, port)
> - conn.request("GET", path + "?" + query)
> - else:
> - host, port = parse_url(proxy_settings)
> - conn = http.client.HTTPConnection(host, port)
> - conn.request("GET", apiurl)
> -
> - r = conn.getresponse()
> - if r.status != 200:
> - raise Exception("Failed to read " + path + ": %d %s" % (r.status, r.reason))
> - return json.loads(r.read().decode())
> -
> - def get_layer_deps(self, layername, layeritems, layerbranches, layerdependencies, branchnum, selfname=False):
> - def layeritems_info_id(items_name, layeritems):
> - litems_id = None
> - for li in layeritems:
> - if li['name'] == items_name:
> - litems_id = li['id']
> - break
> - return litems_id
> -
> - def layerbranches_info(items_id, layerbranches):
> - lbranch = {}
> - for lb in layerbranches:
> - if lb['layer'] == items_id and lb['branch'] == branchnum:
> - lbranch['id'] = lb['id']
> - lbranch['vcs_subdir'] = lb['vcs_subdir']
> - break
> - return lbranch
> -
> - def layerdependencies_info(lb_id, layerdependencies):
> - ld_deps = []
> - for ld in layerdependencies:
> - if ld['layerbranch'] == lb_id and not ld['dependency'] in ld_deps:
> - ld_deps.append(ld['dependency'])
> - if not ld_deps:
> - logger.error("The dependency of layerDependencies is not found.")
> - return ld_deps
> -
> - def layeritems_info_name_subdir(items_id, layeritems):
> - litems = {}
> - for li in layeritems:
> - if li['id'] == items_id:
> - litems['vcs_url'] = li['vcs_url']
> - litems['name'] = li['name']
> - break
> - return litems
> -
> - if selfname:
> - selfid = layeritems_info_id(layername, layeritems)
> - lbinfo = layerbranches_info(selfid, layerbranches)
> - if lbinfo:
> - selfsubdir = lbinfo['vcs_subdir']
> - else:
> - logger.error("%s is not found in the specified branch" % layername)
> - return
> - selfurl = layeritems_info_name_subdir(selfid, layeritems)['vcs_url']
> - if selfurl:
> - return selfurl, selfsubdir
> - else:
> - logger.error("Cannot get layer %s git repo and subdir" % layername)
> - return
> - ldict = {}
> - itemsid = layeritems_info_id(layername, layeritems)
> - if not itemsid:
> - return layername, None
> - lbid = layerbranches_info(itemsid, layerbranches)
> - if lbid:
> - lbid = layerbranches_info(itemsid, layerbranches)['id']
> - else:
> - logger.error("%s is not found in the specified branch" % layername)
> - return None, None
> - for dependency in layerdependencies_info(lbid, layerdependencies):
> - lname = layeritems_info_name_subdir(dependency, layeritems)['name']
> - lurl = layeritems_info_name_subdir(dependency, layeritems)['vcs_url']
> - lsubdir = layerbranches_info(dependency, layerbranches)['vcs_subdir']
> - ldict[lname] = lurl, lsubdir
> - return None, ldict
> -
> def get_fetch_layer(self, fetchdir, url, subdir, fetch_layer):
> layername = self.get_layer_name(url)
> if os.path.splitext(layername)[1] == '.git':
> @@ -136,95 +31,124 @@ class LayerIndexPlugin(ActionPlugin):
> result = subprocess.call('git clone %s %s' % (url, repodir), shell = True)
> if result:
> logger.error("Failed to download %s" % url)
> - return None, None
> + return None, None, None
> else:
> - return layername, layerdir
> + return subdir, layername, layerdir
> else:
> logger.plain("Repository %s needs to be fetched" % url)
> - return layername, layerdir
> + return subdir, layername, layerdir
> elif os.path.exists(layerdir):
> - return layername, layerdir
> + return subdir, layername, layerdir
> else:
> logger.error("%s is not in %s" % (url, subdir))
> - return None, None
> + return None, None, None
>
> def do_layerindex_fetch(self, args):
> """Fetches a layer from a layer index along with its dependent layers, and adds them to conf/bblayers.conf.
> """
> - apiurl = self.tinfoil.config_data.getVar('BBLAYERS_LAYERINDEX_URL')
> - if not apiurl:
> - logger.error("Cannot get BBLAYERS_LAYERINDEX_URL")
> - return 1
> +
> + def _construct_url(baseurls, branches):
> + urls = []
> + for baseurl in baseurls:
> + if baseurl[-1] != '/':
> + baseurl += '/'
> +
> + if not baseurl.startswith('cooker'):
> + baseurl += "api/"
> +
> + if branches:
> + baseurl += ";branch=%s" % ','.join(branches)
> +
> + urls.append(baseurl)
> +
> + return urls
> +
> +
> + # Set the default...
> + if args.branch:
> + branches = [args.branch]
> else:
> - if apiurl[-1] != '/':
> - apiurl += '/'
> - apiurl += "api/"
> - apilinks = self.get_json_data(apiurl)
> - branches = self.get_json_data(apilinks['branches'])
> -
> - branchnum = 0
> - for branch in branches:
> - if branch['name'] == args.branch:
> - branchnum = branch['id']
> - break
> - if branchnum == 0:
> - validbranches = ', '.join([branch['name'] for branch in branches])
> - logger.error('Invalid layer branch name "%s". Valid branches: %s' % (args.branch, validbranches))
> - return 1
> + branches = (self.tinfoil.config_data.getVar('LAYERSERIES_CORENAMES') or 'master').split()
> + logger.debug(1, 'Trying branches: %s' % branches)
>
> ignore_layers = []
> - for collection in self.tinfoil.config_data.getVar('BBFILE_COLLECTIONS').split():
> - lname = self.tinfoil.config_data.getVar('BBLAYERS_LAYERINDEX_NAME_%s' % collection)
> - if lname:
> - ignore_layers.append(lname)
> -
> if args.ignore:
> ignore_layers.extend(args.ignore.split(','))
>
> - layeritems = self.get_json_data(apilinks['layerItems'])
> - layerbranches = self.get_json_data(apilinks['layerBranches'])
> - layerdependencies = self.get_json_data(apilinks['layerDependencies'])
> - invaluenames = []
> - repourls = {}
> - printlayers = []
> -
> - def query_dependencies(layers, layeritems, layerbranches, layerdependencies, branchnum):
> - depslayer = []
> - for layername in layers:
> - invaluename, layerdict = self.get_layer_deps(layername, layeritems, layerbranches, layerdependencies, branchnum)
> - if layerdict:
> - repourls[layername] = self.get_layer_deps(layername, layeritems, layerbranches, layerdependencies, branchnum, selfname=True)
> - for layer in layerdict:
> - if not layer in ignore_layers:
> - depslayer.append(layer)
> - printlayers.append((layername, layer, layerdict[layer][0], layerdict[layer][1]))
> - if not layer in ignore_layers and not layer in repourls:
> - repourls[layer] = (layerdict[layer][0], layerdict[layer][1])
> - if invaluename and not invaluename in invaluenames:
> - invaluenames.append(invaluename)
> - return depslayer
> -
> - depslayers = query_dependencies(args.layername, layeritems, layerbranches, layerdependencies, branchnum)
> - while depslayers:
> - depslayer = query_dependencies(depslayers, layeritems, layerbranches, layerdependencies, branchnum)
> - depslayers = depslayer
> - if invaluenames:
> - for invaluename in invaluenames:
> - logger.error('Layer "%s" not found in layer index' % invaluename)
> - return 1
> - logger.plain("%s %s %s %s" % ("Layer".ljust(19), "Required by".ljust(19), "Git repository".ljust(54), "Subdirectory"))
> - logger.plain('=' * 115)
> - for layername in args.layername:
> - layerurl = repourls[layername]
> - logger.plain("%s %s %s %s" % (layername.ljust(20), '-'.ljust(20), layerurl[0].ljust(55), layerurl[1]))
> - printedlayers = []
> - for layer, dependency, gitrepo, subdirectory in printlayers:
> - if dependency in printedlayers:
> - continue
> - logger.plain("%s %s %s %s" % (dependency.ljust(20), layer.ljust(20), gitrepo.ljust(55), subdirectory))
> - printedlayers.append(dependency)
> -
> - if repourls:
> + # Load the cooker DB
> + cookerIndex = layerindexlib.LayerIndex(self.tinfoil.config_data)
> + cookerIndex.load_layerindex('cooker://', load='layerDependencies')
> +
> + # Fast path, check if we already have what has been requested!
> + (dependencies, invalidnames) = cookerIndex.find_dependencies(names=args.layername, ignores=ignore_layers)
> + if not args.show_only and not invalidnames:
> + logger.plain("You already have the requested layer(s): %s" % args.layername)
> + return 0
> +
> + # The information to show is already in the cookerIndex
> + if invalidnames:
> + # General URL to use to access the layer index
> + # While there is ONE right now, we expect users could enter several
> + apiurl = self.tinfoil.config_data.getVar('BBLAYERS_LAYERINDEX_URL').split()
> + if not apiurl:
> + logger.error("Cannot get BBLAYERS_LAYERINDEX_URL")
> + return 1
> +
> + remoteIndex = layerindexlib.LayerIndex(self.tinfoil.config_data)
> +
> + for remoteurl in _construct_url(apiurl, branches):
> + logger.plain("Loading %s..." % remoteurl)
> + remoteIndex.load_layerindex(remoteurl)
> +
> + if remoteIndex.is_empty():
> + logger.error("Remote layer index %s is empty for branches %s" % (apiurl, branches))
> + return 1
> +
> + lIndex = cookerIndex + remoteIndex
> +
> + (dependencies, invalidnames) = lIndex.find_dependencies(names=args.layername, ignores=ignore_layers)
> +
> + if invalidnames:
> + for invaluename in invalidnames:
> + logger.error('Layer "%s" not found in layer index' % invaluename)
> + return 1
> +
> + logger.plain("%s %s %s" % ("Layer".ljust(49), "Git repository (branch)".ljust(54), "Subdirectory"))
> + logger.plain('=' * 125)
> +
> + for deplayerbranch in dependencies:
> + layerBranch = dependencies[deplayerbranch][0]
> +
> + # TODO: Determine display behavior
> + # This is the local content, uncomment to hide local
> + # layers from the display.
> + #if layerBranch.index.config['TYPE'] == 'cooker':
> + # continue
> +
> + layerDeps = dependencies[deplayerbranch][1:]
> +
> + requiredby = []
> + recommendedby = []
> + for dep in layerDeps:
> + if dep.required:
> + requiredby.append(dep.layer.name)
> + else:
> + recommendedby.append(dep.layer.name)
> +
> + logger.plain('%s %s %s' % (("%s:%s:%s" %
> + (layerBranch.index.config['DESCRIPTION'],
> + layerBranch.branch.name,
> + layerBranch.layer.name)).ljust(50),
> + ("%s (%s)" % (layerBranch.layer.vcs_url,
> + layerBranch.actual_branch)).ljust(55),
> + layerBranch.vcs_subdir
> + ))
> + if requiredby:
> + logger.plain(' required by: %s' % ' '.join(requiredby))
> + if recommendedby:
> + logger.plain(' recommended by: %s' % ' '.join(recommendedby))
> +
> + if dependencies:
> fetchdir = self.tinfoil.config_data.getVar('BBLAYERS_FETCH_DIR')
> if not fetchdir:
> logger.error("Cannot get BBLAYERS_FETCH_DIR")
> @@ -232,26 +156,39 @@ class LayerIndexPlugin(ActionPlugin):
> if not os.path.exists(fetchdir):
> os.makedirs(fetchdir)
> addlayers = []
> - for repourl, subdir in repourls.values():
> - name, layerdir = self.get_fetch_layer(fetchdir, repourl, subdir, not args.show_only)
> +
> + for deplayerbranch in dependencies:
> + layerBranch = dependencies[deplayerbranch][0]
> +
> + if layerBranch.index.config['TYPE'] == 'cooker':
> + # Anything loaded via cooker is already local, skip it
> + continue
> +
> + subdir, name, layerdir = self.get_fetch_layer(fetchdir,
> + layerBranch.layer.vcs_url,
> + layerBranch.vcs_subdir,
> + not args.show_only)
> if not name:
> # Error already shown
> return 1
> addlayers.append((subdir, name, layerdir))
> if not args.show_only:
> - for subdir, name, layerdir in set(addlayers):
> + localargs = argparse.Namespace()
> + localargs.layerdir = []
> + localargs.force = args.force
> + for subdir, name, layerdir in addlayers:
> if os.path.exists(layerdir):
> if subdir:
> - logger.plain("Adding layer \"%s\" to conf/bblayers.conf" % subdir)
> + logger.plain("Adding layer \"%s\" (%s) to conf/bblayers.conf" % (subdir, layerdir))
> else:
> - logger.plain("Adding layer \"%s\" to conf/bblayers.conf" % name)
> - localargs = argparse.Namespace()
> - localargs.layerdir = layerdir
> - localargs.force = args.force
> - self.do_add_layer(localargs)
> + logger.plain("Adding layer \"%s\" (%s) to conf/bblayers.conf" % (name, layerdir))
> + localargs.layerdir.append(layerdir)
> + localargs.layerdir.append(layerdir)
> else:
> break
>
> + if localargs.layerdir:
> + self.do_add_layer(localargs)
> +
> def do_layerindex_show_depends(self, args):
> """Find layer dependencies from layer index.
> """
> @@ -260,12 +197,12 @@ class LayerIndexPlugin(ActionPlugin):
> self.do_layerindex_fetch(args)
>
> def register_commands(self, sp):
> - parser_layerindex_fetch = self.add_command(sp, 'layerindex-fetch', self.do_layerindex_fetch)
> + parser_layerindex_fetch = self.add_command(sp, 'layerindex-fetch', self.do_layerindex_fetch, parserecipes=False)
> parser_layerindex_fetch.add_argument('-n', '--show-only', help='show dependencies and do nothing else', action='store_true')
> - parser_layerindex_fetch.add_argument('-b', '--branch', help='branch name to fetch (default %(default)s)', default='master')
> + parser_layerindex_fetch.add_argument('-b', '--branch', help='branch name to fetch')
> parser_layerindex_fetch.add_argument('-i', '--ignore', help='assume the specified layers do not need to be fetched/added (separate multiple layers with commas, no spaces)', metavar='LAYER')
> parser_layerindex_fetch.add_argument('layername', nargs='+', help='layer to fetch')
> - parser_layerindex_show_depends = self.add_command(sp, 'layerindex-show-depends', self.do_layerindex_show_depends)
> + parser_layerindex_show_depends = self.add_command(sp, 'layerindex-show-depends', self.do_layerindex_show_depends, parserecipes=False)
> - parser_layerindex_show_depends.add_argument('-b', '--branch', help='branch name to fetch (default %(default)s)', default='master')
> + parser_layerindex_show_depends.add_argument('-b', '--branch', help='branch name to fetch')
> parser_layerindex_show_depends.add_argument('layername', nargs='+', help='layer to query')
> diff --git a/bitbake/lib/layerindexlib/README b/bitbake/lib/layerindexlib/README
> new file mode 100644
> index 0000000..5d927af
> --- /dev/null
> +++ b/bitbake/lib/layerindexlib/README
> @@ -0,0 +1,28 @@
> +The layerindexlib module is designed to permit programs to work directly
> +with layer index information. (See layers.openembedded.org...)
> +
> +The layerindexlib module includes a plugin interface that is used to extend
> +the basic functionality. There are two primary plugins available: restapi
> +and cooker.
> +
> +The restapi plugin works with a web based REST API compatible with the
> +layerindex-web project, and can also store and retrieve the information
> +for one or more files on disk.
> +
> +The cooker plugin works by reading the information from the current build
> +project and processing it as if it were a layer index.
> +
> +TODO:
> +
> +__init__.py:
> +Implement local on-disk caching (using the rest api store/load)
> +Implement layer index style query operations on a combined index
> +
> +common.py:
> +Stop network access if BB_NO_NETWORK or allowed hosts is restricted
> +
> +cooker.py:
> +Cooker - Implement recipe parsing
> +
> +
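
Reading ahead into __init__.py below, the minimal consumer flow looks like
this (a sketch based on the method signatures in this patch; 'meta-python'
is just an example layer name, and d is a datastore such as
tinfoil.config_data):

    import layerindexlib

    index = layerindexlib.LayerIndex(d)
    index.load_layerindex('cooker://', load='layerDependencies')
    if not index.is_empty():
        deps, invalid = index.find_dependencies(names=['meta-python'],
                                                ignores=[])
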
> diff --git a/bitbake/lib/layerindexlib/__init__.py b/bitbake/lib/layerindexlib/__init__.py
> new file mode 100644
> index 0000000..cb79cb3
> --- /dev/null
> +++ b/bitbake/lib/layerindexlib/__init__.py
> @@ -0,0 +1,1363 @@
> +# Copyright (C) 2016-2018 Wind River Systems, Inc.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License version 2 as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> +# See the GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write to the Free Software
> +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
> +
> +import datetime
> +
> +import logging
> +import imp
> +
> +from collections import OrderedDict
> +from layerindexlib.plugin import LayerIndexPluginUrlError
> +
> +logger = logging.getLogger('BitBake.layerindexlib')
> +
> +# Exceptions
> +
> +class LayerIndexException(Exception):
> + '''LayerIndex Generic Exception'''
> + def __init__(self, message):
> + self.msg = message
> + Exception.__init__(self, message)
> +
> + def __str__(self):
> + return self.msg
> +
> +class LayerIndexUrlError(LayerIndexException):
> + '''Exception raised when unable to access a URL for some
> reason'''
> + def __init__(self, url, message=""):
> + if message:
> + msg = "Unable to access layerindex url %s: %s" % (url, message)
> + else:
> + msg = "Unable to access layerindex url %s" % url
> + self.url = url
> + LayerIndexException.__init__(self, msg)
> +
> +class LayerIndexFetchError(LayerIndexException):
> + '''General layerindex fetcher exception when something fails'''
> + def __init__(self, url, message=""):
> + if message:
> + msg = "Unable to fetch layerindex url %s: %s" % (url, message)
> + else:
> + msg = "Unable to fetch layerindex url %s" % url
> + self.url = url
> + LayerIndexException.__init__(self, msg)
> +
> +
> +# Interface to the overall layerindex system
> +# the layer may contain one or more individual indexes
> +class LayerIndex():
> + def __init__(self, d):
> + if not d:
> + raise LayerIndexException("Must be initialized with bb.data.")
> +
> + self.data = d
> +
> + # List of LayerIndexObj
> + self.indexes = []
> +
> + self.plugins = []
> +
> + import bb.utils
> + bb.utils.load_plugins(logger, self.plugins, os.path.dirname(__file__))
> + for plugin in self.plugins:
> + if hasattr(plugin, 'init'):
> + plugin.init(self)
> +
> + def __add__(self, other):
> + newIndex = LayerIndex(self.data)
> +
> + if self.__class__ != newIndex.__class__ or \
> + other.__class__ != newIndex.__class__:
> + raise TypeException("Can not add different types.")
> +
> + for indexEnt in self.indexes:
> + newIndex.indexes.append(indexEnt)
> +
> + for indexEnt in other.indexes:
> + newIndex.indexes.append(indexEnt)
> +
> + return newIndex
> +
> + def _parse_params(self, params):
> + '''Take a parameter list, return a dictionary of parameters.
> +
> + Expected to be called from the data of urllib.parse.urlparse(url).params
> +
> + If there are two conflicting parameters, last in wins...
> + '''
> +
> + param_dict = {}
> + for param in params.split(';'):
> + if not param:
> + continue
> + item = param.split('=', 1)
> + logger.debug(1, item)
> + param_dict[item[0]] = item[1]
> +
> + return param_dict
> +
> + def _fetch_url(self, url, username=None, password=None, debuglevel=0):
> + '''Fetch data from a specific URL.
> +
> + Fetch something from a specific URL. This is specifically designed to
> + fetch data from a layerindex-web instance, but may be useful for other
> + raw fetch actions.
> +
> + It is not designed to be used to fetch recipe sources or similar. The
> + regular fetcher class should be used for that.
> +
> + It is the responsibility of the caller to check BB_NO_NETWORK and related
> + BB_ALLOWED_NETWORKS.
> + '''
> +
> + if not url:
> + raise LayerIndexUrlError(url, "empty url")
> +
> + import urllib
> + from urllib.request import urlopen, Request
> + from urllib.parse import urlparse
> +
> + up = urlparse(url)
> +
> + if username:
> + logger.debug(1, "Configuring authentication for %s..." % url)
> + password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
> + password_mgr.add_password(None, "%s://%s" % (up.scheme, up.netloc), username, password)
> + handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
> + opener = urllib.request.build_opener(handler, urllib.request.HTTPSHandler(debuglevel=debuglevel))
> + else:
> + opener = urllib.request.build_opener(urllib.request.HTTPSHandler(debuglevel=debuglevel))
> +
> + urllib.request.install_opener(opener)
> +
> + logger.debug(1, "Fetching %s (%s)..." % (url, ["without authentication", "with authentication"][bool(username)]))
> +
> + try:
> + res = urlopen(Request(url, headers={'User-Agent': 'Mozilla/5.0 (bitbake/lib/layerindex)'}, unverifiable=True))
> + except urllib.error.HTTPError as e:
> + logger.debug(1, "HTTP Error: %s: %s" % (e.code, e.reason))
> + logger.debug(1, " Requested: %s" % (url))
> + logger.debug(1, " Actual: %s" % (e.geturl()))
> +
> + if e.code == 404:
> + logger.debug(1, "Request not found.")
> + raise LayerIndexFetchError(url, e)
> + else:
> + logger.debug(1, "Headers:\n%s" % (e.headers))
> + raise LayerIndexFetchError(url, e)
> + except OSError as e:
> + error = 0
> + reason = ""
> +
> + # Process base OSError first...
> + if hasattr(e, 'errno'):
> + error = e.errno
> + reason = e.strerror
> +
> + # Process gaierror (socket error) subclass if available.
> + if hasattr(e, 'reason') and hasattr(e.reason, 'errno') and hasattr(e.reason, 'strerror'):
> + error = e.reason.errno
> + reason = e.reason.strerror
> + if error == -2:
> + raise LayerIndexFetchError(url, "%s: %s" % (e, reason))
> +
> + if error and error != 0:
> + raise LayerIndexFetchError(url, "Unexpected exception: [Error %s] %s" % (error, reason))
> + else:
> + raise LayerIndexFetchError(url, "Unable to fetch OSError exception: %s" % e)
> +
> + finally:
> + logger.debug(1, "...fetching %s (%s), done." % (url, ["without authentication", "with authentication"][bool(username)]))
> +
> + return res
> +
> +
> + def load_layerindex(self, indexURI, load=['layerDependencies', 'recipes', 'machines', 'distros'], reload=False):
> + '''Load the layerindex.
> +
> + indexURI - An index to load. (Use multiple calls to load multiple indexes)
> +
> + reload - If reload is True, then any previously loaded indexes will be forgotten.
> +
> + load - List of elements to load. Default loads all items.
> + Note: plugins may ignore this.
> +
> +The format of the indexURI:
> +
> + <url>;branch=<branch>;cache=<cache>;desc=<description>
> +
> + Note: the 'branch' parameter, if set, can select multiple branches by using
> + commas, such as 'branch=master,morty,pyro'. However, many operations only look
> + at the -first- branch specified!
> +
> + The cache value may be undefined; in this case a network failure will
> + result in an error, otherwise the system will look for a file of the cache
> + name and load that instead.
> +
> + For example:
> +
> + http://layers.openembedded.org/layerindex/api/;branch=master;desc=OpenEmbedded%20Layer%20Index
> + cooker://
> +'''
> + if reload:
> + self.indexes = []
> +
> + logger.debug(1, 'Loading: %s' % indexURI)
> +
> + if not self.plugins:
> + raise LayerIndexException("No LayerIndex Plugins available")
> +
> + for plugin in self.plugins:
> + # Check if the plugin was initialized
> + logger.debug(1, 'Trying %s' % plugin.__class__)
> + if not hasattr(plugin, 'type') or not plugin.type:
> + continue
> + try:
> + # TODO: Implement 'cache', for when the network is not available
> + indexEnt = plugin.load_index(indexURI, load)
> + break
> + except LayerIndexPluginUrlError as e:
> + logger.debug(1, "%s doesn't support %s" % (plugin.type, e.url))
> + except NotImplementedError:
> + pass
> + else:
> + logger.debug(1, "No plugins support %s" % indexURI)
> + raise LayerIndexException("No plugins support %s" % indexURI)
> +
> + # Mark CONFIG data as something we've added...
> + indexEnt.config['local'] = []
> + indexEnt.config['local'].append('config')
> +
> + # No longer permit changes..
> + indexEnt.lockData()
> +
> + self.indexes.append(indexEnt)
> +
> + def store_layerindex(self, indexURI, index=None):
> + '''Store one layerindex
> +
> +Typically this will be used to create a local cache file of a remote index.
> +
> + file://<path>;branch=<branch>
> +
> +We can write out in either the restapi or django formats. The split option
> +will write out the individual elements split by layer and related components.
> +'''
> + if not index:
> + logger.warning('No index to write, nothing to do.')
> + return
> +
> + if not self.plugins:
> + raise LayerIndexException("No LayerIndex Plugins available")
> +
> + for plugin in self.plugins:
> + # Check if the plugin was initialized
> + logger.debug(1, 'Trying %s' % plugin.__class__)
> + if not hasattr(plugin, 'type') or not plugin.type:
> + continue
> + try:
> + plugin.store_index(indexURI, index)
> + break
> + except LayerIndexPluginUrlError as e:
> + logger.debug(1, "%s doesn't support %s" % (plugin.type, e.url))
> + except NotImplementedError:
> + logger.debug(1, "Store not implemented in %s" % plugin.type)
> + pass
> + else:
> + logger.debug(1, "No plugins support %s" % url)
> + raise LayerIndexException("No plugins support %s" % url)
> +
> +
> + def is_empty(self):
> + '''Return True or False if the index has any usable data.
> +
> +We check the indexes entries to see if they have a branch set, as well as
> +layerBranches set. If not, they are effectively blank.'''
> +
> + found = False
> + for index in self.indexes:
> + if index.__bool__():
> + found = True
> + break
> + return not found
> +
> +
> +    def find_vcs_url(self, vcs_url, branch=None):
> +        '''Return the first layerBranch with the given vcs_url
> +
> +           If a branch has not been specified, we will iterate over
> +           the branches in the default configuration until the first
> +           vcs_url/branch match.'''
> +
> +        for index in self.indexes:
> +            logger.debug(1, ' searching %s' % index.config['DESCRIPTION'])
> +            layerBranch = index.find_vcs_url(vcs_url, [branch])
> +            if layerBranch:
> +                return layerBranch
> +        return None
> +
> +    def find_collection(self, collection, version=None, branch=None):
> +        '''Return the first layerBranch with the given collection name
> +
> +           If a branch has not been specified, we will iterate over
> +           the branches in the default configuration until the first
> +           collection/branch match.'''
> +
> +        logger.debug(1, 'find_collection: %s (%s) %s' % (collection, version, branch))
> +
> +        if branch:
> +            branches = [branch]
> +        else:
> +            branches = None
> +
> +        for index in self.indexes:
> +            logger.debug(1, ' searching %s' % index.config['DESCRIPTION'])
> +            layerBranch = index.find_collection(collection, version, branches)
> +            if layerBranch:
> +                return layerBranch
> +        else:
> +            logger.debug(1, 'Collection %s (%s) not found for branch (%s)' % (collection, version, branch))
> +        return None
> +
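A lookup sketch (collection and branch names illustrative), continuing the
example above; the returned object is the LayerBranch, from which the layer
itself is reachable:

    layerbranch = layerindex.find_collection('core', branch='master')
    if layerbranch:
        print(layerbranch.layer.name, layerbranch.layer.vcs_url)
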
> +    def find_layerbranch(self, name, branch=None):
> +        '''Return the layerBranch item for a given name and branch
> +
> +           If a branch has not been specified, we will iterate over
> +           the branches in the default configuration until the first
> +           name/branch match.'''
> +
> +        if branch:
> +            branches = [branch]
> +        else:
> +            branches = None
> +
> +        for index in self.indexes:
> +            layerBranch = index.find_layerbranch(name, branches)
> +            if layerBranch:
> +                return layerBranch
> +        return None
> +
> +    def find_dependencies(self, names=None, layerbranches=None, ignores=None):
> +        '''Return a tuple of all dependencies and valid items for the list of (layer) names
> +
> +        The dependency scanning happens depth-first.  The returned
> +        dependencies should be in the best order to define bblayers.
> +
> +        names - list of layer names (searching layerItems)
> +        branches - when specified (with names) only this list of branches are evaluated
> +
> +        layerbranches - list of layerbranches to resolve dependencies
> +
> +        ignores - list of layer names to ignore
> +
> +        return: (dependencies, invalid)
> +
> +        dependencies[LayerItem.name] = [ LayerBranch, LayerDependency1, LayerDependency2, ... ]
> +        invalid = [ LayerItem.name1, LayerItem.name2, ... ]
> +        '''
> +
> +        invalid = []
> +
> +        # Convert name/branch to layerbranches
> +        if layerbranches is None:
> +            layerbranches = []
> +
> +        for name in names:
> +            if ignores and name in ignores:
> +                continue
> +
> +            for index in self.indexes:
> +                layerbranch = index.find_layerbranch(name)
> +                if not layerbranch:
> +                    # Not in this index, hopefully it's in another...
> +                    continue
> +                layerbranches.append(layerbranch)
> +                break
> +            else:
> +                invalid.append(name)
> +
> +
> +        def _resolve_dependencies(layerbranches, ignores, dependencies, invalid):
> +            for layerbranch in layerbranches:
> +                if ignores and layerbranch.layer.name in ignores:
> +                    continue
> +
> +                # Get a list of dependencies and then recursively process them
> +                for layerdependency in layerbranch.index.layerDependencies_layerBranchId[layerbranch.id]:
> +                    deplayerbranch = layerdependency.dependency_layerBranch
> +
> +                    if ignores and deplayerbranch.layer.name in ignores:
> +                        continue
> +
> +                    # This little block is why we can't re-use the LayerIndexObj version:
> +                    # we must be able to satisfy each dependency across layer indexes and
> +                    # use the layer index order for priority.  (r stands for replacement below)
> +
> +                    # If this is the primary index, we can fast path and skip this
> +                    if deplayerbranch.index != self.indexes[0]:
> +                        # Is there an entry in a prior index for this collection/version?
> +                        rdeplayerbranch = self.find_collection(
> +                                              collection=deplayerbranch.collection,
> +                                              version=deplayerbranch.version
> +                                          )
> +                        if rdeplayerbranch != deplayerbranch:
> +                            logger.debug(1, 'Replaced %s:%s:%s with %s:%s:%s' % \
> +                                  (deplayerbranch.index.config['DESCRIPTION'],
> +                                   deplayerbranch.branch.name,
> +                                   deplayerbranch.layer.name,
> +                                   rdeplayerbranch.index.config['DESCRIPTION'],
> +                                   rdeplayerbranch.branch.name,
> +                                   rdeplayerbranch.layer.name))
> +                            deplayerbranch = rdeplayerbranch
> +
> +                    # New dependency, we need to resolve it now... depth-first
> +                    if deplayerbranch.layer.name not in dependencies:
> +                        (dependencies, invalid) = _resolve_dependencies([deplayerbranch], ignores, dependencies, invalid)
> +
> +                    if deplayerbranch.layer.name not in dependencies:
> +                        dependencies[deplayerbranch.layer.name] = [deplayerbranch, layerdependency]
> +                    else:
> +                        if layerdependency not in dependencies[deplayerbranch.layer.name]:
> +                            dependencies[deplayerbranch.layer.name].append(layerdependency)
> +
> +            return (dependencies, invalid)
> +
> +        # OK, resolve this one...
> +        dependencies = OrderedDict()
> +        (dependencies, invalid) = _resolve_dependencies(layerbranches, ignores, dependencies, invalid)
> +
> +        for layerbranch in layerbranches:
> +            if layerbranch.layer.name not in dependencies:
> +                dependencies[layerbranch.layer.name] = [layerbranch]
> +
> +        return (dependencies, invalid)
> +
> +
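The return shape, as a sketch (layer name illustrative):

    (dependencies, invalid) = layerindex.find_dependencies(names=['meta-python'])
    for name in dependencies:
        # dependencies[name][0] is the LayerBranch; any further entries are
        # the LayerDependency objects that pulled the layer in.
        print(name, dependencies[name][0].layer.vcs_url)
    print('unresolved:', invalid)
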
> +    def list_obj(self, object):
> +        '''Print via the plain logger object information
> +
> +This function is used to implement debugging and provide the user info.
> +'''
> +        for lix in self.indexes:
> +            if not hasattr(lix, object):
> +                continue
> +
> +            logger.plain ('')
> +            logger.plain ('Index: %s' % lix.config['DESCRIPTION'])
> +
> +            output = []
> +
> +            if object == 'branches':
> +                logger.plain ('%s %s %s' % ('{:26}'.format('branch'), '{:34}'.format('description'), '{:22}'.format('bitbake branch')))
> +                logger.plain ('{:-^80}'.format(""))
> +                for branchid in lix.branches:
> +                    output.append('%s %s %s' % (
> +                                  '{:26}'.format(lix.branches[branchid].name),
> +                                  '{:34}'.format(lix.branches[branchid].short_description),
> +                                  '{:22}'.format(lix.branches[branchid].bitbake_branch)
> +                                 ))
> +                for line in sorted(output):
> +                    logger.plain (line)
> +
> +                continue
> +
> +            if object == 'layerItems':
> +                logger.plain ('%s %s' % ('{:26}'.format('layer'), '{:34}'.format('description')))
> +                logger.plain ('{:-^80}'.format(""))
> +                for layerid in lix.layerItems:
> +                    output.append('%s %s' % (
> +                                  '{:26}'.format(lix.layerItems[layerid].name),
> +                                  '{:34}'.format(lix.layerItems[layerid].summary)
> +                                 ))
> +                for line in sorted(output):
> +                    logger.plain (line)
> +
> +                continue
> +
> +            if object == 'layerBranches':
> +                logger.plain ('%s %s %s' % ('{:26}'.format('layer'), '{:34}'.format('description'), '{:19}'.format('collection:version')))
> +                logger.plain ('{:-^80}'.format(""))
> +                for layerbranchid in lix.layerBranches:
> +                    output.append('%s %s %s' % (
> +                                  '{:26}'.format(lix.layerBranches[layerbranchid].layer.name),
> +                                  '{:34}'.format(lix.layerBranches[layerbranchid].layer.summary),
> +                                  '{:19}'.format("%s:%s" %
> +                                          (lix.layerBranches[layerbranchid].collection,
> +                                           lix.layerBranches[layerbranchid].version)
> +                                  )
> +                                 ))
> +                for line in sorted(output):
> +                    logger.plain (line)
> +
> +                continue
> +
> +            if object == 'layerDependencies':
> +                logger.plain ('%s %s %s %s' % ('{:19}'.format('branch'), '{:26}'.format('layer'), '{:11}'.format('dependency'), '{:26}'.format('layer')))
> +                logger.plain ('{:-^80}'.format(""))
> +                for layerDependency in lix.layerDependencies:
> +                    if not lix.layerDependencies[layerDependency].dependency_layerBranch:
> +                        continue
> +
> +                    output.append('%s %s %s %s' % (
> +                                  '{:19}'.format(lix.layerDependencies[layerDependency].layerbranch.branch.name),
> +                                  '{:26}'.format(lix.layerDependencies[layerDependency].layerbranch.layer.name),
> +                                  '{:11}'.format('requires' if lix.layerDependencies[layerDependency].required else 'recommends'),
> +                                  '{:26}'.format(lix.layerDependencies[layerDependency].dependency_layerBranch.layer.name)
> +                                 ))
> +                for line in sorted(output):
> +                    logger.plain (line)
> +
> +                continue
> +
> +            if object == 'recipes':
> +                logger.plain ('%s %s %s' % ('{:20}'.format('recipe'), '{:10}'.format('version'), 'layer'))
> +                logger.plain ('{:-^80}'.format(""))
> +                output = []
> +                for recipe in lix.recipes:
> +                    output.append('%s %s %s' % (
> +                                  '{:30}'.format(lix.recipes[recipe].pn),
> +                                  '{:30}'.format(lix.recipes[recipe].pv),
> +                                  lix.recipes[recipe].layer.name
> +                                 ))
> +                for line in sorted(output):
> +                    logger.plain (line)
> +
> +                continue
> +
> +            if object == 'machines':
> +                logger.plain ('%s %s %s' % ('{:24}'.format('machine'), '{:34}'.format('description'), '{:19}'.format('layer')))
> +                logger.plain ('{:-^80}'.format(""))
> +                for machine in lix.machines:
> +                    output.append('%s %s %s' % (
> +                                  '{:24}'.format(lix.machines[machine].name),
> +                                  '{:34}'.format(lix.machines[machine].description)[:34],
> +                                  '{:19}'.format(lix.machines[machine].layerbranch.layer.name)
> +                                 ))
> +                for line in sorted(output):
> +                    logger.plain (line)
> +
> +                continue
> +
> +            if object == 'distros':
> +                logger.plain ('%s %s %s' % ('{:24}'.format('distro'), '{:34}'.format('description'), '{:19}'.format('layer')))
> +                logger.plain ('{:-^80}'.format(""))
> +                for distro in lix.distros:
> +                    output.append('%s %s %s' % (
> +                                  '{:24}'.format(lix.distros[distro].name),
> +                                  '{:34}'.format(lix.distros[distro].description)[:34],
> +                                  '{:19}'.format(lix.distros[distro].layerbranch.layer.name)
> +                                 ))
> +                for line in sorted(output):
> +                    logger.plain (line)
> +
> +                continue
> +
> +        logger.plain ('')
> +
> +
> +# This class holds a single layer index instance
> +# The LayerIndexObj is made up of a dictionary of elements, such as:
> +#   index['config'] - configuration data for this index
> +#   index['branches'] - dictionary of Branch objects, by id number
> +#   index['layerItems'] - dictionary of layerItem objects, by id number
> +#   ...etc...  (See: http://layers.openembedded.org/layerindex/api/)
> +#
> +# The class needs to manage the 'index' entries and allow easy adding
> +# of new items, as well as simple loading of the items.
> +class LayerIndexObj():
> +    def __init__(self):
> +        super().__setattr__('_index', {})
> +        super().__setattr__('_lock', False)
> +
> +    def __bool__(self):
> +        '''False if the index is effectively empty
> +
> +           We check the index to see if it has a branch set, as well as
> +           layerBranches set.  If not, it is effectively blank.'''
> +
> +        if not bool(self._index):
> +            return False
> +
> +        try:
> +            if self.branches and self.layerBranches:
> +                return True
> +        except AttributeError:
> +            pass
> +
> +        return False
> +
> +    def __getattr__(self, name):
> +        if name.startswith('_'):
> +            return super().__getattribute__(name)
> +
> +        if name not in self._index:
> +            raise AttributeError('%s not in index datastore' % name)
> +
> +        return self._index[name]
> +
> +    def __setattr__(self, name, value):
> +        if self.isLocked():
> +            raise TypeError("Can not set attribute '%s': index is locked" % name)
> +
> +        if name.startswith('_'):
> +            super().__setattr__(name, value)
> +            return
> +
> +        self._index[name] = value
> +
> +    def __delattr__(self, name):
> +        if self.isLocked():
> +            raise TypeError("Can not delete attribute '%s': index is locked" % name)
> +
> +        if name.startswith('_'):
> +            super().__delattr__(name)
> +            return
> +
> +        self._index.pop(name)
> +
> +    def lockData(self):
> +        '''Lock data object (make it readonly)'''
> +        super().__setattr__("_lock", True)
> +
> +    def unlockData(self):
> +        '''Unlock data object (make it writable)'''
> +        super().__setattr__("_lock", False)
> +
> +        # When the data is unlocked, we have to clear the caches, as
> +        # modification is allowed!
> +        del(self._layerBranches_layerId_branchId)
> +        del(self._layerDependencies_layerBranchId)
> +        del(self._layerBranches_vcsUrl)
> +
> + def isLocked(self):
> + '''Is this object locked (readonly)?'''
> + return self._lock
> +
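A sketch of the lock semantics (attribute names illustrative):

    index = layerindexlib.LayerIndexObj()
    index.branches = {}        # stored in the protected _index dict
    index.lockData()
    try:
        index.layerItems = {}  # rejected: the index is now readonly
    except TypeError as e:
        print(e)
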
> +    def add_element(self, indexname, objs):
> +        '''Add a layer index object to index.<indexname>'''
> +        if indexname not in self._index:
> +            self._index[indexname] = {}
> +
> +        for obj in objs:
> +            if obj.id in self._index[indexname]:
> +                if self._index[indexname][obj.id] == obj:
> +                    continue
> +                raise LayerIndexError('Conflict adding object %s(%s) to index' % (indexname, obj.id))
> +            self._index[indexname][obj.id] = obj
> +
> +    def add_raw_element(self, indexname, objtype, rawobjs):
> +        '''Convert a raw layer index data item to a layer index item object and add to the index'''
> +        objs = []
> +        for entry in rawobjs:
> +            objs.append(objtype(self, entry))
> +        self.add_element(indexname, objs)
> +
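A sketch of how raw restapi JSON becomes typed objects (the dict is a
hand-written stand-in for one element of the real API response):

    index = layerindexlib.LayerIndexObj()
    raw = [{'id': 1, 'name': 'master', 'bitbake_branch': ''}]
    index.add_raw_element('branches', layerindexlib.Branch, raw)
    print(index.branches[1].name)   # 'master'
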
> +    # Quick lookup table for searching layerId and branchId combos
> +    @property
> +    def layerBranches_layerId_branchId(self):
> +        def createCache(self):
> +            cache = {}
> +            for layerbranchid in self.layerBranches:
> +                layerbranch = self.layerBranches[layerbranchid]
> +                cache["%s:%s" % (layerbranch.layer_id, layerbranch.branch_id)] = layerbranch
> +            return cache
> +
> +        if self.isLocked():
> +            cache = getattr(self, '_layerBranches_layerId_branchId', None)
> +        else:
> +            cache = None
> +
> +        if not cache:
> +            cache = createCache(self)
> +
> +        if self.isLocked():
> +            super().__setattr__('_layerBranches_layerId_branchId', cache)
> +
> +        return cache
> +
> +    # Quick lookup table for finding all dependencies of a layerBranch
> +    @property
> +    def layerDependencies_layerBranchId(self):
> +        def createCache(self):
> +            cache = {}
> +            # This ensures empty lists for all branchids
> +            for layerbranchid in self.layerBranches:
> +                cache[layerbranchid] = []
> +
> +            for layerdependencyid in self.layerDependencies:
> +                layerdependency = self.layerDependencies[layerdependencyid]
> +                cache[layerdependency.layerbranch_id].append(layerdependency)
> +            return cache
> +
> +        if self.isLocked():
> +            cache = getattr(self, '_layerDependencies_layerBranchId', None)
> +        else:
> +            cache = None
> +
> +        if not cache:
> +            cache = createCache(self)
> +
> +        if self.isLocked():
> +            super().__setattr__('_layerDependencies_layerBranchId', cache)
> +
> +        return cache
> +
> +    # Quick lookup table for finding all instances of a vcs_url
> +    @property
> +    def layerBranches_vcsUrl(self):
> +        def createCache(self):
> +            cache = {}
> +            for layerbranchid in self.layerBranches:
> +                layerbranch = self.layerBranches[layerbranchid]
> +                if layerbranch.layer.vcs_url not in cache:
> +                    cache[layerbranch.layer.vcs_url] = [layerbranch]
> +                else:
> +                    cache[layerbranch.layer.vcs_url].append(layerbranch)
> +            return cache
> +
> +        if self.isLocked():
> +            cache = getattr(self, '_layerBranches_vcsUrl', None)
> +        else:
> +            cache = None
> +
> +        if not cache:
> +            cache = createCache(self)
> +
> +        if self.isLocked():
> +            super().__setattr__('_layerBranches_vcsUrl', cache)
> +
> +        return cache
> +
> +
> +    def find_vcs_url(self, vcs_url, branches=None):
> +        '''Return the first layerBranch with the given vcs_url
> +
> +           If a list of branches has not been specified, we will iterate on
> +           all branches until the first vcs_url is found.'''
> +
> +        if not self.__bool__():
> +            return None
> +
> +        for layerbranch in self.layerBranches_vcsUrl.get(vcs_url, []):
> +            if branches and layerbranch.branch.name not in branches:
> +                continue
> +
> +            return layerbranch
> +
> +        return None
> +
> +
> +    def find_collection(self, collection, version=None, branches=None):
> +        '''Return the first layerBranch with the given collection name
> +
> +           If a list of branches has not been specified, we will iterate on
> +           all branches until the first collection is found.'''
> +
> +        if not self.__bool__():
> +            return None
> +
> +        for layerbranchid in self.layerBranches:
> +            layerbranch = self.layerBranches[layerbranchid]
> +            if branches and layerbranch.branch.name not in branches:
> +                continue
> +
> +            if layerbranch.collection == collection and \
> +                (version is None or version == layerbranch.version):
> +                return layerbranch
> +
> +        return None
> +
> +
> +    def find_layerbranch(self, name, branches=None):
> +        '''Return the first layerbranch whose layer name matches
> +
> +           If a list of branches has not been specified, we will iterate on
> +           all branches until the first layer with that name is found.'''
> +
> +        if not self.__bool__():
> +            return None
> +
> +        for layerbranchid in self.layerBranches:
> +            layerbranch = self.layerBranches[layerbranchid]
> +            if branches and layerbranch.branch.name not in branches:
> +                continue
> +
> +            if layerbranch.layer.name == name:
> +                return layerbranch
> +
> +        return None
> +
> +    def find_dependencies(self, names=None, branches=None, layerbranches=None, ignores=None):
> +        '''Return a tuple of all dependencies and valid items for the list of (layer) names
> +
> +        The dependency scanning happens depth-first.  The returned
> +        dependencies should be in the best order to define bblayers.
> +
> +        names - list of layer names (searching layerItems)
> +        branches - when specified (with names) only this list of branches are evaluated
> +
> +        layerbranches - list of layerBranches to resolve dependencies
> +
> +        ignores - list of layer names to ignore
> +
> +        return: (dependencies, invalid)
> +
> +        dependencies[LayerItem.name] = [ LayerBranch, LayerDependency1, LayerDependency2, ... ]
> +        invalid = [ LayerItem.name1, LayerItem.name2, ... ]'''
> +
> +        invalid = []
> +
> +        # Convert name/branch to layerbranches
> +        if layerbranches is None:
> +            layerbranches = []
> +
> +        for name in names:
> +            if ignores and name in ignores:
> +                continue
> +
> +            layerbranch = self.find_layerbranch(name, branches)
> +            if not layerbranch:
> +                invalid.append(name)
> +            else:
> +                layerbranches.append(layerbranch)
> +
> +        for layerbranch in layerbranches:
> +            if layerbranch.index != self:
> +                raise LayerIndexException("Can not resolve dependencies across indexes with this class function!")
> +
> +        def _resolve_dependencies(layerbranches, ignores, dependencies, invalid):
> +            for layerbranch in layerbranches:
> +                if ignores and layerbranch.layer.name in ignores:
> +                    continue
> +
> +                for layerdependency in layerbranch.index.layerDependencies_layerBranchId[layerbranch.id]:
> +                    deplayerbranch = layerdependency.dependency_layerBranch
> +
> +                    if ignores and deplayerbranch.layer.name in ignores:
> +                        continue
> +
> +                    # New dependency, we need to resolve it now... depth-first
> +                    if deplayerbranch.layer.name not in dependencies:
> +                        (dependencies, invalid) = _resolve_dependencies([deplayerbranch], ignores, dependencies, invalid)
> +
> +                    if deplayerbranch.layer.name not in dependencies:
> +                        dependencies[deplayerbranch.layer.name] = [deplayerbranch, layerdependency]
> +                    else:
> +                        if layerdependency not in dependencies[deplayerbranch.layer.name]:
> +                            dependencies[deplayerbranch.layer.name].append(layerdependency)
> +
> +            return (dependencies, invalid)
> +
> +        # OK, resolve this one...
> +        dependencies = OrderedDict()
> +        (dependencies, invalid) = _resolve_dependencies(layerbranches, ignores, dependencies, invalid)
> +
> +        # Is this item already in the list?  If not, add it.
> +        for layerbranch in layerbranches:
> +            if layerbranch.layer.name not in dependencies:
> +                dependencies[layerbranch.layer.name] = [layerbranch]
> +
> +        return (dependencies, invalid)
> +
> +
> +# Define a basic LayerIndexItemObj.  This object forms the basis for all other
> +# objects.  The raw Layer Index data is stored in the _data element, but we
> +# do not want users to access data directly.  So wrap this and protect it
> +# from direct manipulation.
> +#
> +# It is up to the instantiators of the objects to fill them out, and once done
> +# lock the objects to prevent further accidental manipulation.
> +#
> +# Using getattr, setattr and properties we can access and manipulate
> +# the data within the data element.
> +class LayerIndexItemObj():
> +    def __init__(self, index, data=None, lock=False):
> +        if data is None:
> +            data = {}
> +
> +        if type(data) != type(dict()):
> +            raise TypeError('data (%s) is not a dict' % type(data))
> +
> +        super().__setattr__('_lock', lock)
> +        super().__setattr__('index', index)
> +        super().__setattr__('_data', data)
> +
> +    def __eq__(self, other):
> +        if self.__class__ != other.__class__:
> +            return False
> +        return self._data == other._data
> +
> + def __bool__(self):
> + return bool(self._data)
> +
> +    def __getattr__(self, name):
> +        # These are internal to THIS class, and not part of data
> +        if name == "index" or name.startswith('_'):
> +            return super().__getattribute__(name)
> +
> +        if name not in self._data:
> +            raise AttributeError('%s not in datastore' % name)
> +
> +        return self._data[name]
> +
> +    def _setattr(self, name, value, prop=True):
> +        '''__setattr__ like function, but with control over property object behavior'''
> +        if self.isLocked():
> +            raise TypeError("Can not set attribute '%s': Object data is locked" % name)
> +
> +        if name.startswith('_'):
> +            super().__setattr__(name, value)
> +            return
> +
> +        # Since __setattr__ runs before properties, we need to check if
> +        # there is a setter property and then execute it
> +        # ... or return self._data[name]
> +        propertyobj = getattr(self.__class__, name, None)
> +        if prop and isinstance(propertyobj, property):
> +            if propertyobj.fset:
> +                propertyobj.fset(self, value)
> +            else:
> +                raise AttributeError('Attribute %s is readonly, and may not be set' % name)
> +        else:
> +            self._data[name] = value
> +
> +    def __setattr__(self, name, value):
> +        self._setattr(name, value, prop=True)
> +
> +    def _delattr(self, name, prop=True):
> +        # Since __delattr__ runs before properties, we need to check if
> +        # there is a deleter property and then execute it
> +        # ... or we pop it ourselves..
> +        propertyobj = getattr(self.__class__, name, None)
> +        if prop and isinstance(propertyobj, property):
> +            if propertyobj.fdel:
> +                propertyobj.fdel(self)
> +            else:
> +                raise AttributeError('Attribute %s is readonly, and may not be deleted' % name)
> +        else:
> +            self._data.pop(name)
> +
> +    def __delattr__(self, name):
> +        self._delattr(name, prop=True)
> +
> +    def lockData(self):
> +        '''Lock data object (make it readonly)'''
> +        super().__setattr__("_lock", True)
> +
> +    def unlockData(self):
> +        '''Unlock data object (make it writable)'''
> +        super().__setattr__("_lock", False)
> +
> +    def isLocked(self):
> +        '''Is this object locked (readonly)?'''
> +        return self._lock
> +
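A sketch of the wrapping behaviour (field name illustrative):

    item = layerindexlib.LayerIndexItemObj(None, data={'id': 7})
    item.pn = 'example'       # lands in item._data['pn']
    item.lockData()
    try:
        item.pn = 'other'     # TypeError: Object data is locked
    except TypeError as e:
        print(e)
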
> +# Branch object
> +class Branch(LayerIndexItemObj):
> +    def define_data(self, id, name, bitbake_branch,
> +                    short_description=None, sort_priority=1,
> +                    updates_enabled=True, updated=None,
> +                    update_environment=None):
> +        self.id = id
> +        self.name = name
> +        self.bitbake_branch = bitbake_branch
> +        self.short_description = short_description or name
> +        self.sort_priority = sort_priority
> +        self.updates_enabled = updates_enabled
> +        self.updated = updated or datetime.datetime.today().isoformat()
> +        self.update_environment = update_environment
> +
> +    @property
> +    def name(self):
> +        return self.__getattr__('name')
> +
> +    @name.setter
> +    def name(self, value):
> +        self._data['name'] = value
> +
> +        if self.bitbake_branch == value:
> +            self.bitbake_branch = ""
> +
> +    @name.deleter
> +    def name(self):
> +        self._delattr('name', prop=False)
> +
> +    @property
> +    def bitbake_branch(self):
> +        try:
> +            return self.__getattr__('bitbake_branch')
> +        except AttributeError:
> +            return self.name
> +
> +    @bitbake_branch.setter
> +    def bitbake_branch(self, value):
> +        if self.name == value:
> +            self._data['bitbake_branch'] = ""
> +        else:
> +            self._data['bitbake_branch'] = value
> +
> +    @bitbake_branch.deleter
> +    def bitbake_branch(self):
> +        self._delattr('bitbake_branch', prop=False)
> +
> +
> +class LayerItem(LayerIndexItemObj):
> +    def define_data(self, id, name, status='P',
> +                    layer_type='A', summary=None,
> +                    description=None,
> +                    vcs_url=None, vcs_web_url=None,
> +                    vcs_web_tree_base_url=None,
> +                    vcs_web_file_base_url=None,
> +                    usage_url=None,
> +                    mailing_list_url=None,
> +                    index_preference=1,
> +                    classic=False,
> +                    updated=None):
> +        self.id = id
> +        self.name = name
> +        self.status = status
> +        self.layer_type = layer_type
> +        self.summary = summary or name
> +        self.description = description or summary or name
> +        self.vcs_url = vcs_url
> +        self.vcs_web_url = vcs_web_url
> +        self.vcs_web_tree_base_url = vcs_web_tree_base_url
> +        self.vcs_web_file_base_url = vcs_web_file_base_url
> +        self.index_preference = index_preference
> +        self.classic = classic
> +        self.updated = updated or datetime.datetime.today().isoformat()
> +
> +
> +class LayerBranch(LayerIndexItemObj):
> +    def define_data(self, id, collection, version, layer, branch,
> +                    vcs_subdir="", vcs_last_fetch=None,
> +                    vcs_last_rev=None, vcs_last_commit=None,
> +                    actual_branch="",
> +                    updated=None):
> +        self.id = id
> +        self.collection = collection
> +        self.version = version
> +        if isinstance(layer, LayerItem):
> +            self.layer = layer
> +        else:
> +            self.layer_id = layer
> +
> +        if isinstance(branch, Branch):
> +            self.branch = branch
> +        else:
> +            self.branch_id = branch
> +
> +        self.vcs_subdir = vcs_subdir
> +        self.vcs_last_fetch = vcs_last_fetch
> +        self.vcs_last_rev = vcs_last_rev
> +        self.vcs_last_commit = vcs_last_commit
> +        self.actual_branch = actual_branch
> +        self.updated = updated or datetime.datetime.today().isoformat()
> +
> +    # This is a little odd, the _data attribute is 'layer', but it's really
> +    # referring to the layer id.. so let's adjust this to make it useful
> +    @property
> +    def layer_id(self):
> +        return self.__getattr__('layer')
> +
> +    @layer_id.setter
> +    def layer_id(self, value):
> +        self._setattr('layer', value, prop=False)
> +
> +    @layer_id.deleter
> +    def layer_id(self):
> +        self._delattr('layer', prop=False)
> +
> +    @property
> +    def layer(self):
> +        try:
> +            return self.index.layerItems[self.layer_id]
> +        except KeyError:
> +            raise AttributeError('Unable to find layerItems in index to map layer_id %s' % self.layer_id)
> +        except IndexError:
> +            raise AttributeError('Unable to find layer_id %s in index layerItems' % self.layer_id)
> +
> +    @layer.setter
> +    def layer(self, value):
> +        if not isinstance(value, LayerItem):
> +            raise TypeError('value is not a LayerItem')
> +        if self.index != value.index:
> +            raise AttributeError('Object and value do not share the same index and thus key set.')
> +        self.layer_id = value.id
> +
> +    @layer.deleter
> +    def layer(self):
> +        del self.layer_id
> +
> +    @property
> +    def branch_id(self):
> +        return self.__getattr__('branch')
> +
> +    @branch_id.setter
> +    def branch_id(self, value):
> +        self._setattr('branch', value, prop=False)
> +
> +    @branch_id.deleter
> +    def branch_id(self):
> +        self._delattr('branch', prop=False)
> +
> +    @property
> +    def branch(self):
> +        try:
> +            logger.debug(1, "Get branch object from branches[%s]" % (self.branch_id))
> +            return self.index.branches[self.branch_id]
> +        except KeyError:
> +            raise AttributeError('Unable to find branches in index to map branch_id %s' % self.branch_id)
> +        except IndexError:
> +            raise AttributeError('Unable to find branch_id %s in index branches' % self.branch_id)
> +
> +    @branch.setter
> +    def branch(self, value):
> +        if not isinstance(value, Branch):
> +            raise TypeError('value is not a Branch')
> +        if self.index != value.index:
> +            raise AttributeError('Object and value do not share the same index and thus key set.')
> +        self.branch_id = value.id
> +
> +    @branch.deleter
> +    def branch(self):
> +        del self.branch_id
> +
> +    @property
> +    def actual_branch(self):
> +        if self.__getattr__('actual_branch'):
> +            return self.__getattr__('actual_branch')
> +        else:
> +            return self.branch.name
> +
> +    @actual_branch.setter
> +    def actual_branch(self, value):
> +        logger.debug(1, "Set actual_branch to %s .. name is %s" % (value, self.branch.name))
> +        if value != self.branch.name:
> +            self._setattr('actual_branch', value, prop=False)
> +        else:
> +            self._setattr('actual_branch', '', prop=False)
> +
> +    @actual_branch.deleter
> +    def actual_branch(self):
> +        self._delattr('actual_branch', prop=False)
> +
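To illustrate the id mapping (all values hand-rolled for the sketch; note
the Branch has to be registered in the index before LayerBranch.define_data()
runs, since the actual_branch setter consults it):

    index = layerindexlib.LayerIndexObj()
    index.branches = {}
    index.layerItems = {}
    index.layerBranches = {}

    branch = layerindexlib.Branch(index, None)
    branch.define_data(id=1, name='master', bitbake_branch='master')
    index.branches[1] = branch

    layer = layerindexlib.LayerItem(index, None)
    layer.define_data(id=1, name='meta-example')
    index.layerItems[1] = layer

    layerbranch = layerindexlib.LayerBranch(index, None)
    layerbranch.define_data(id=1, collection='example', version='1',
                            layer=1, branch=1)
    index.layerBranches[1] = layerbranch

    print(layerbranch.layer.name, layerbranch.branch.name)
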
> +# Extend LayerIndexItemObj with common LayerBranch manipulations
> +# All of the remaining LayerIndex objects refer to a layerbranch, and it is
> +# up to the user to follow that back through the LayerBranch object into
> +# the layer object to get various attributes.  So add an intermediate set
> +# of attributes that can easily get us the layerbranch as well as layer.
> +
> +class LayerIndexItemObj_LayerBranch(LayerIndexItemObj):
> +    @property
> +    def layerbranch_id(self):
> +        return self.__getattr__('layerbranch')
> +
> +    @layerbranch_id.setter
> +    def layerbranch_id(self, value):
> +        self._setattr('layerbranch', value, prop=False)
> +
> +    @layerbranch_id.deleter
> +    def layerbranch_id(self):
> +        self._delattr('layerbranch', prop=False)
> +
> +    @property
> +    def layerbranch(self):
> +        try:
> +            return self.index.layerBranches[self.layerbranch_id]
> +        except KeyError:
> +            raise AttributeError('Unable to find layerBranches in index to map layerbranch_id %s' % self.layerbranch_id)
> +        except IndexError:
> +            raise AttributeError('Unable to find layerbranch_id %s in index layerBranches' % self.layerbranch_id)
> +
> +    @layerbranch.setter
> +    def layerbranch(self, value):
> +        if not isinstance(value, LayerBranch):
> +            raise TypeError('value (%s) is not a LayerBranch' % type(value))
> +        if self.index != value.index:
> +            raise AttributeError('Object and value do not share the same index and thus key set.')
> +        self.layerbranch_id = value.id
> +
> +    @layerbranch.deleter
> +    def layerbranch(self):
> +        del self.layerbranch_id
> +
> +    @property
> +    def layer_id(self):
> +        return self.layerbranch.layer_id
> +
> +    # Doesn't make sense to set or delete layer_id
> +
> +    @property
> +    def layer(self):
> +        return self.layerbranch.layer
> +
> +    # Doesn't make sense to set or delete layer
> +
> +class LayerDependency(LayerIndexItemObj_LayerBranch):
> +    def define_data(self, id, layerbranch, dependency, required=True):
> +        self.id = id
> +        if isinstance(layerbranch, LayerBranch):
> +            self.layerbranch = layerbranch
> +        else:
> +            self.layerbranch_id = layerbranch
> +        if isinstance(dependency, LayerDependency):
> +            self.dependency = dependency
> +        else:
> +            self.dependency_id = dependency
> +        self.required = required
> +
> +    @property
> +    def dependency_id(self):
> +        return self.__getattr__('dependency')
> +
> +    @dependency_id.setter
> +    def dependency_id(self, value):
> +        self._setattr('dependency', value, prop=False)
> +
> +    @dependency_id.deleter
> +    def dependency_id(self):
> +        self._delattr('dependency', prop=False)
> +
> +    @property
> +    def dependency(self):
> +        try:
> +            return self.index.layerItems[self.dependency_id]
> +        except KeyError:
> +            raise AttributeError('Unable to find layerItems in index to map dependency_id %s' % self.dependency_id)
> +        except IndexError:
> +            raise AttributeError('Unable to find dependency_id %s in index layerItems' % self.dependency_id)
> +
> +    @dependency.setter
> +    def dependency(self, value):
> +        if not isinstance(value, LayerDependency):
> +            raise TypeError('value (%s) is not a dependency' % type(value))
> +        if self.index != value.index:
> +            raise AttributeError('Object and value do not share the same index and thus key set.')
> +        self.dependency_id = value.id
> +
> +    @dependency.deleter
> +    def dependency(self):
> +        self._delattr('dependency', prop=False)
> +
> +    @property
> +    def dependency_layerBranch(self):
> +        layerid = self.dependency_id
> +        branchid = self.layerbranch.branch_id
> +
> +        try:
> +            return self.index.layerBranches_layerId_branchId["%s:%s" % (layerid, branchid)]
> +        except IndexError:
> +            # layerBranches_layerId_branchId -- but not layerId:branchId
> +            raise AttributeError('Unable to find layerId:branchId %s:%s in index layerBranches_layerId_branchId' % (layerid, branchid))
> +        except KeyError:
> +            raise AttributeError('Unable to find layerId:branchId %s:%s in layerItems and layerBranches' % (layerid, branchid))
> +
> +    # dependency_layerBranch doesn't make sense to set or del
> +
> +
> +class Recipe(LayerIndexItemObj_LayerBranch):
> +    def define_data(self, id,
> +                    filename, filepath, pn, pv, layerbranch,
> +                    summary="", description="", section="", license="",
> +                    homepage="", bugtracker="", provides="", bbclassextend="",
> +                    inherits="", blacklisted="", updated=None):
> +        self.id = id
> +        self.filename = filename
> +        self.filepath = filepath
> +        self.pn = pn
> +        self.pv = pv
> +        self.summary = summary
> +        self.description = description
> +        self.section = section
> +        self.license = license
> +        self.homepage = homepage
> +        self.bugtracker = bugtracker
> +        self.provides = provides
> +        self.bbclassextend = bbclassextend
> +        self.inherits = inherits
> +        self.updated = updated or datetime.datetime.today().isoformat()
> +        self.blacklisted = blacklisted
> +        if isinstance(layerbranch, LayerBranch):
> +            self.layerbranch = layerbranch
> +        else:
> +            self.layerbranch_id = layerbranch
> +
> +    @property
> +    def fullpath(self):
> +        return os.path.join(self.filepath, self.filename)
> +
> +    # Set would need to understand how to split it
> +    # del would we del both parts?
> +
> +    @property
> +    def inherits(self):
> +        if 'inherits' not in self._data:
> +            # Older indexes may not have this, so emulate it
> +            if '-image-' in self.pn:
> +                return 'image'
> +        return self.__getattr__('inherits')
> +
> +    @inherits.setter
> +    def inherits(self, value):
> +        return self._setattr('inherits', value, prop=False)
> +
> +    @inherits.deleter
> +    def inherits(self):
> +        return self._delattr('inherits', prop=False)
> +
> +
> +class Machine(LayerIndexItemObj_LayerBranch):
> +    def define_data(self, id,
> +                    name, description, layerbranch,
> +                    updated=None):
> +        self.id = id
> +        self.name = name
> +        self.description = description
> +        if isinstance(layerbranch, LayerBranch):
> +            self.layerbranch = layerbranch
> +        else:
> +            self.layerbranch_id = layerbranch
> +        self.updated = updated or datetime.datetime.today().isoformat()
> +
> +class Distro(LayerIndexItemObj_LayerBranch):
> +    def define_data(self, id,
> +                    name, description, layerbranch,
> +                    updated=None):
> +        self.id = id
> +        self.name = name
> +        self.description = description
> +        if isinstance(layerbranch, LayerBranch):
> +            self.layerbranch = layerbranch
> +        else:
> +            self.layerbranch_id = layerbranch
> +        self.updated = updated or datetime.datetime.today().isoformat()
> +
> +# When performing certain actions, we may need to sort the data.
> +# This will allow us to keep it consistent from run to run.
> +def sort_entry(item):
> +    newitem = item
> +    try:
> +        if type(newitem) == type(dict()):
> +            newitem = OrderedDict(sorted(newitem.items(), key=lambda t: t[0]))
> +            for index in newitem:
> +                newitem[index] = sort_entry(newitem[index])
> +        elif type(newitem) == type(list()):
> +            newitem.sort(key=lambda obj: obj['id'])
> +            for index, _ in enumerate(newitem):
> +                newitem[index] = sort_entry(newitem[index])
> +    except:
> +        logger.error('Sort failed for item %s' % type(item))
> +        pass
> +
> + return newitem
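A quick sketch of what sort_entry() does to raw index data (input hand-rolled):

    from layerindexlib import sort_entry

    raw = {'branches': [{'id': 2, 'name': 'b'}, {'id': 1, 'name': 'a'}]}
    print(sort_entry(raw))
    # -> nested OrderedDicts, with the inner list sorted by each entry's 'id'
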
> diff --git a/bitbake/lib/layerindexlib/cooker.py b/bitbake/lib/layerindexlib/cooker.py
> new file mode 100644
> index 0000000..848f0e2
> --- /dev/null
> +++ b/bitbake/lib/layerindexlib/cooker.py
> @@ -0,0 +1,344 @@
> +# Copyright (C) 2016-2018 Wind River Systems, Inc.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License version 2 as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> +# See the GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write to the Free Software
> +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
> +
> +import logging
> +import json
> +import os
> +
> +from collections import OrderedDict, defaultdict
> +
> +from urllib.parse import unquote, urlparse
> +
> +import layerindexlib
> +
> +import layerindexlib.plugin
> +
> +logger = logging.getLogger('BitBake.layerindexlib.cooker')
> +
> +import bb.utils
> +
> +def plugin_init(plugins):
> + return CookerPlugin()
> +
> +class CookerPlugin(layerindexlib.plugin.IndexPlugin):
> + def __init__(self):
> + self.type = "cooker"
> +
> + self.server_connection = None
> + self.ui_module = None
> + self.server = None
> +
> + def _run_command(self, command, path, default=None):
> + try:
> + result, _ = bb.process.run(command, cwd=path)
> + result = result.strip()
> + except bb.process.ExecutionError:
> + result = default
> + return result
> +
> + def _handle_git_remote(self, remote):
> + if "://" not in remote:
> + if ':' in remote:
> + # This is assumed to be ssh
> + remote = "ssh://" + remote
> + else:
> + # This is assumed to be a file path
> + remote = "file://" + remote
> + return remote
> +
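The normalization rules, as a sketch (inputs illustrative):

    p = CookerPlugin()
    print(p._handle_git_remote('git@example.com:foo/bar.git'))
    # ssh://git@example.com:foo/bar.git
    print(p._handle_git_remote('/srv/layers/meta-foo'))
    # file:///srv/layers/meta-foo
    print(p._handle_git_remote('https://example.com/foo.git'))
    # https://example.com/foo.git (already a URL, unchanged)
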
> +    def _get_bitbake_info(self):
> +        """Return a tuple of bitbake information"""
> +
> +        # Our path SHOULD be .../bitbake/lib/layerindexlib/cooker.py
> +        bb_path = os.path.dirname(__file__)  # .../bitbake/lib/layerindexlib/cooker.py
> +        bb_path = os.path.dirname(bb_path)   # .../bitbake/lib/layerindexlib
> +        bb_path = os.path.dirname(bb_path)   # .../bitbake/lib
> +        bb_path = os.path.dirname(bb_path)   # .../bitbake
> +        bb_path = self._run_command('git rev-parse --show-toplevel', os.path.dirname(__file__), default=bb_path)
> +        bb_branch = self._run_command('git rev-parse --abbrev-ref HEAD', bb_path, default="<unknown>")
> +        bb_rev = self._run_command('git rev-parse HEAD', bb_path, default="<unknown>")
> +        for remotes in self._run_command('git remote -v', bb_path, default="").split("\n"):
> +            remote = remotes.split("\t")[1].split(" ")[0]
> +            if "(fetch)" == remotes.split("\t")[1].split(" ")[1]:
> +                bb_remote = self._handle_git_remote(remote)
> +                break
> +        else:
> +            bb_remote = self._handle_git_remote(bb_path)
> +
> +        return (bb_remote, bb_branch, bb_rev, bb_path)
> +
> +    def _load_bblayers(self, branches=None):
> +        """Load the BBLAYERS and related collection information"""
> +
> +        d = self.layerindex.data
> +
> +        if not branches:
> +            raise layerindexlib.LayerIndexFetchError("No branches specified for _load_bblayers!")
> +
> +        index = layerindexlib.LayerIndexObj()
> +
> +        branchId = 0
> +        index.branches = {}
> +
> +        layerItemId = 0
> +        index.layerItems = {}
> +
> +        layerBranchId = 0
> +        index.layerBranches = {}
> +
> +        bblayers = d.getVar('BBLAYERS').split()
> +
> +        if not bblayers:
> +            # It's blank!  Nothing to process...
> +            return index
> +
> +        collections = d.getVar('BBFILE_COLLECTIONS')
> +        layerconfs = d.varhistory.get_variable_items_files('BBFILE_COLLECTIONS', d)
> +        bbfile_collections = {layer: os.path.dirname(os.path.dirname(path)) for layer, path in layerconfs.items()}
> +
> +        (_, bb_branch, _, _) = self._get_bitbake_info()
> +
> +        for branch in branches:
> +            branchId += 1
> +            index.branches[branchId] = layerindexlib.Branch(index, None)
> +            index.branches[branchId].define_data(branchId, branch, bb_branch)
> +        for entry in collections.split():
> +            layerpath = entry
> +            if entry in bbfile_collections:
> +                layerpath = bbfile_collections[entry]
> +
> +            layername = d.getVar('BBLAYERS_LAYERINDEX_NAME_%s' % entry) or os.path.basename(layerpath)
> +            layerversion = d.getVar('LAYERVERSION_%s' % entry) or ""
> +            layerurl = self._handle_git_remote(layerpath)
> +
> +            layersubdir = ""
> +            layerrev = "<unknown>"
> +            layerbranch = "<unknown>"
> +
> +            if os.path.isdir(layerpath):
> +                layerbasepath = self._run_command('git rev-parse --show-toplevel', layerpath, default=layerpath)
> +                if os.path.abspath(layerpath) != os.path.abspath(layerbasepath):
> +                    layersubdir = os.path.abspath(layerpath)[len(layerbasepath) + 1:]
> +
> +                layerbranch = self._run_command('git rev-parse --abbrev-ref HEAD', layerpath, default="<unknown>")
> +                layerrev = self._run_command('git rev-parse HEAD', layerpath, default="<unknown>")
> +
> +                for remotes in self._run_command('git remote -v', layerpath, default="").split("\n"):
> +                    if not remotes:
> +                        layerurl = self._handle_git_remote(layerpath)
> +                    else:
> +                        remote = remotes.split("\t")[1].split(" ")[0]
> +                        if "(fetch)" == remotes.split("\t")[1].split(" ")[1]:
> +                            layerurl = self._handle_git_remote(remote)
> +                            break
> +
> +            layerItemId += 1
> +            index.layerItems[layerItemId] = layerindexlib.LayerItem(index, None)
> +            index.layerItems[layerItemId].define_data(layerItemId, layername, description=layerpath, vcs_url=layerurl)
> +
> +            for branchId in index.branches:
> +                layerBranchId += 1
> +                index.layerBranches[layerBranchId] = layerindexlib.LayerBranch(index, None)
> +                index.layerBranches[layerBranchId].define_data(layerBranchId, entry, layerversion, layerItemId, branchId,
> +                                                               vcs_subdir=layersubdir, vcs_last_rev=layerrev, actual_branch=layerbranch)
> +
> +        return index
> +
> +
> +    def load_index(self, url, load):
> +        """
> +           Fetches layer information from a build configuration.
> +
> +           The return value is a dictionary containing API,
> +           layer, branch, dependency, recipe, machine, and distro information.
> +
> +           url type should be 'cooker'.
> +           url path is ignored
> +        """
> +
> +        up = urlparse(url)
> +
> +        if up.scheme != 'cooker':
> +            raise layerindexlib.plugin.LayerIndexPluginUrlError(self.type, url)
> +
> +        d = self.layerindex.data
> +
> +        params = self.layerindex._parse_params(up.params)
> +
> +        # Only reason to pass a branch is to emulate them...
> +        if 'branch' in params:
> +            branches = params['branch'].split(',')
> +        else:
> +            branches = ['HEAD']
> +
> +        logger.debug(1, "Loading cooker data branches %s" % branches)
> +
> +        index = self._load_bblayers(branches=branches)
> +
> +        index.config = {}
> +        index.config['TYPE'] = self.type
> +        index.config['URL'] = url
> +
> +        if 'desc' in params:
> +            index.config['DESCRIPTION'] = unquote(params['desc'])
> +        else:
> +            index.config['DESCRIPTION'] = 'local'
> +
> +        if 'cache' in params:
> +            index.config['CACHE'] = params['cache']
> +
> +        index.config['BRANCH'] = branches
> +
> + # ("layerDependencies", layerindexlib.LayerDependency)
> + layerDependencyId = 0
> + if "layerDependencies" in load:
> + index.layerDependencies = {}
> + for layerBranchId in index.layerBranches:
> + branchName =
> index.layerBranches[layerBranchId].branch.name
> + collection =
> index.layerBranches[layerBranchId].collection +
> + def add_dependency(layerDependencyId, index, deps,
> required):
> + try:
> + depDict =
> bb.utils.explode_dep_versions2(deps)
> + except bb.utils.VersionStringException as vse:
> + bb.fatal('Error parsing LAYERDEPENDS_%s: %s'
> % (c, str(vse))) +
> + for dep, oplist in list(depDict.items()):
> + # We need to search ourselves, so use the _
> version...
> + depLayerBranch = index.find_collection(dep,
> branches=[branchName])
> + if not depLayerBranch:
> + # Missing dependency?!
> + logger.error('Missing dependency %s
> (%s)' % (dep, branchName))
> + continue
> +
> + # We assume that the oplist matches...
> + layerDependencyId += 1
> + layerDependency =
> layerindexlib.LayerDependency(index, None)
> +
> layerDependency.define_data(id=layerDependencyId,
> + required=required,
> layerbranch=layerBranchId,
> +
> dependency=depLayerBranch.layer_id) +
> + logger.debug(1, '%s requires %s' %
> (layerDependency.layer.name, layerDependency.dependency.name))
> + index.add_element("layerDependencies",
> [layerDependency]) +
> + return layerDependencyId
> +
> + deps = d.getVar("LAYERDEPENDS_%s" % collection)
> + if deps:
> + layerDependencyId =
> add_dependency(layerDependencyId, index, deps, True) +
> + deps = d.getVar("LAYERRECOMMENDS_%s" % collection)
> + if deps:
> + layerDependencyId =
> add_dependency(layerDependencyId, index, deps, False) +
> +        # Need to load recipes here (requires cooker access)
> +        recipeId = 0
> +        ## TODO: NOT IMPLEMENTED
> +        # The code following this is an example of what needs to be
> +        # implemented.  However, it does not work as-is.
> +        if False and 'recipes' in load:
> +            index.recipes = {}
> +
> +            ret = self.ui_module.main(self.server_connection.connection, self.server_connection.events, config_params)
> +
> +            all_versions = self._run_command('allProviders')
> +
> +            all_versions_list = defaultdict(list, all_versions)
> +            for pn in all_versions_list:
> +                for ((pe, pv, pr), fpath) in all_versions_list[pn]:
> +                    realfn = bb.cache.virtualfn2realfn(fpath)
> +
> +                    filepath = os.path.dirname(realfn[0])
> +                    filename = os.path.basename(realfn[0])
> +
> +                    # This is all HORRIBLY slow, and likely unnecessary
> +                    #dscon = self._run_command('parseRecipeFile', fpath, False, [])
> +                    #connector = myDataStoreConnector(self, dscon.dsindex)
> +                    #recipe_data = bb.data.init()
> +                    #recipe_data.setVar('_remote_data', connector)
> +
> +                    #summary = recipe_data.getVar('SUMMARY')
> +                    #description = recipe_data.getVar('DESCRIPTION')
> +                    #section = recipe_data.getVar('SECTION')
> +                    #license = recipe_data.getVar('LICENSE')
> +                    #homepage = recipe_data.getVar('HOMEPAGE')
> +                    #bugtracker = recipe_data.getVar('BUGTRACKER')
> +                    #provides = recipe_data.getVar('PROVIDES')
> +
> +                    layer = bb.utils.get_file_layer(realfn[0], self.config_data)
> +
> +                    depBranchId = collection_layerbranch[layer]
> +
> +                    recipeId += 1
> +                    recipe = layerindexlib.Recipe(index, None)
> +                    recipe.define_data(id=recipeId,
> +                                       filename=filename, filepath=filepath,
> +                                       pn=pn, pv=pv,
> +                                       summary=pn, description=pn, section='?',
> +                                       license='?', homepage='?', bugtracker='?',
> +                                       provides='?', bbclassextend='?', inherits='?',
> +                                       blacklisted='?', layerbranch=depBranchId)
> +
> +                    index = addElement("recipes", [recipe], index)
> +
> + # ("machines", layerindexlib.Machine)
> + machineId = 0
> + if 'machines' in load:
> + index.machines = {}
> +
> + for layerBranchId in index.layerBranches:
> + # load_bblayers uses the description to cache the
> actual path...
> + machine_path =
> index.layerBranches[layerBranchId].layer.description
> + machine_path = os.path.join(machine_path,
> 'conf/machine')
> + if os.path.isdir(machine_path):
> + for (dirpath, _, filenames) in
> os.walk(machine_path):
> + # Ignore subdirs...
> + if not dirpath.endswith('conf/machine'):
> + continue
> + for fname in filenames:
> + if fname.endswith('.conf'):
> + machineId += 1
> + machine =
> layerindexlib.Machine(index, None)
> + machine.define_data(id=machineId,
> name=fname[:-5],
> +
> description=fname[:-5],
> +
> layerbranch=index.layerBranches[layerBranchId]) +
> + index.add_element("machines",
> [machine]) +
> + # ("distros", layerindexlib.Distro)
> + distroId = 0
> + if 'distros' in load:
> + index.distros = {}
> +
> + for layerBranchId in index.layerBranches:
> + # load_bblayers uses the description to cache the
> actual path...
> + distro_path =
> index.layerBranches[layerBranchId].layer.description
> + distro_path = os.path.join(distro_path,
> 'conf/distro')
> + if os.path.isdir(distro_path):
> + for (dirpath, _, filenames) in
> os.walk(distro_path):
> + # Ignore subdirs...
> + if not dirpath.endswith('conf/distro'):
> + continue
> + for fname in filenames:
> + if fname.endswith('.conf'):
> + distroId += 1
> + distro = layerindexlib.Distro(index,
> None)
> + distro.define_data(id=distroId,
> name=fname[:-5],
> +
> description=fname[:-5],
> +
> layerbranch=index.layerBranches[layerBranchId]) +
> + index.add_element("distros",
> [distro]) +
> + return index
> diff --git a/bitbake/lib/layerindexlib/plugin.py b/bitbake/lib/layerindexlib/plugin.py
> new file mode 100644
> index 0000000..92a2e97
> --- /dev/null
> +++ b/bitbake/lib/layerindexlib/plugin.py
> @@ -0,0 +1,60 @@
> +# Copyright (C) 2016-2018 Wind River Systems, Inc.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License version 2 as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> +# See the GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write to the Free Software
> +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
> +
> +# The file contains:
> +# LayerIndex exceptions
> +# Plugin base class
> +# Utility Functions for working on layerindex data
> +
> +import argparse
> +import logging
> +import os
> +import bb.msg
> +
> +logger = logging.getLogger('BitBake.layerindexlib.plugin')
> +
> +class LayerIndexPluginException(Exception):
> + """LayerIndex Generic Exception"""
> + def __init__(self, message):
> + self.msg = message
> + Exception.__init__(self, message)
> +
> + def __str__(self):
> + return self.msg
> +
> +class LayerIndexPluginUrlError(LayerIndexPluginException):
> +    """Exception raised when a plugin does not support a given URL type"""
> +    def __init__(self, plugin, url):
> +        msg = "%s does not support %s:" % (plugin, url)
> +        self.plugin = plugin
> +        self.url = url
> +        LayerIndexPluginException.__init__(self, msg)
> +
> +class IndexPlugin():
> + def __init__(self):
> + self.type = None
> +
> + def init(self, layerindex):
> + self.layerindex = layerindex
> +
> + def plugin_type(self):
> + return self.type
> +
> + def load_index(self, uri):
> + raise NotImplementedError('load_index is not implemented')
> +
> + def store_index(self, uri, index):
> + raise NotImplementedError('store_index is not implemented')
> +
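A minimal third-party plugin would look something like this sketch (module
name and 'example' scheme made up; restapi.py and cooker.py below are the
real implementations). Note the stub above declares load_index(self, uri),
while the callers in __init__.py pass (indexURI, load); the concrete plugins
implement the two-argument form:

    import layerindexlib
    import layerindexlib.plugin

    def plugin_init(plugins):
        return ExamplePlugin()

    class ExamplePlugin(layerindexlib.plugin.IndexPlugin):
        def __init__(self):
            self.type = "example"

        def load_index(self, url, load):
            if not url.startswith("example://"):
                raise layerindexlib.plugin.LayerIndexPluginUrlError(self.type, url)
            index = layerindexlib.LayerIndexObj()
            index.config = {'TYPE': self.type, 'URL': url, 'DESCRIPTION': 'example'}
            return index
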
> diff --git a/bitbake/lib/layerindexlib/restapi.py b/bitbake/lib/layerindexlib/restapi.py
> new file mode 100644
> index 0000000..d08eb20
> --- /dev/null
> +++ b/bitbake/lib/layerindexlib/restapi.py
> @@ -0,0 +1,398 @@
> +# Copyright (C) 2016-2018 Wind River Systems, Inc.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License version 2 as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> +# See the GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write to the Free Software
> +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
> +
> +import logging
> +import json
> +import os
> +from urllib.parse import unquote
> +from urllib.parse import urlparse
> +
> +import layerindexlib
> +import layerindexlib.plugin
> +
> +logger = logging.getLogger('BitBake.layerindexlib.restapi')
> +
> +def plugin_init(plugins):
> + return RestApiPlugin()
> +
> +class RestApiPlugin(layerindexlib.plugin.IndexPlugin):
> + def __init__(self):
> + self.type = "restapi"
> +
> +    def load_index(self, url, load):
> +        """
> +            Fetches layer information from a local or remote layer index.
> +
> +            The return value is a LayerIndexObj.
> +
> +            url is the url to the rest api of the layer index, such as:
> +            http://layers.openembedded.org/layerindex/api/
> +
> +            Or a local file...
> +        """
> +
> +        up = urlparse(url)
> +
> +        if up.scheme == 'file':
> +            return self.load_index_file(up, url, load)
> +
> +        if up.scheme == 'http' or up.scheme == 'https':
> +            return self.load_index_web(up, url, load)
> +
> +        raise layerindexlib.plugin.LayerIndexPluginUrlError(self.type, url)
> +
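Since the same plugin handles http(s) and file URLs, a cache round trip is
just two calls on the LayerIndex object (path illustrative; store_layerindex()
needs the index passed explicitly):

    layerindex.load_layerindex(
        'http://layers.openembedded.org/layerindex/api/;branch=master')
    layerindex.store_layerindex('file:///tmp/oe-index.json',
                                index=layerindex.indexes[0])
    # Later runs can load the cached copy without network access:
    layerindex.load_layerindex('file:///tmp/oe-index.json;branch=master')
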
> +    def load_index_file(self, up, url, load):
> +        """
> +            Fetches layer information from a local file or directory.
> +
> +            The return value is a LayerIndexObj.
> +
> +            ud is the parsed url to the local file or directory.
> +        """
> +        if not os.path.exists(up.path):
> +            raise FileNotFoundError(up.path)
> +
> +        index = layerindexlib.LayerIndexObj()
> +
> +        index.config = {}
> +        index.config['TYPE'] = self.type
> +        index.config['URL'] = url
> +
> +        params = self.layerindex._parse_params(up.params)
> +
> +        if 'desc' in params:
> +            index.config['DESCRIPTION'] = unquote(params['desc'])
> +        else:
> +            index.config['DESCRIPTION'] = up.path
> +
> +        if 'cache' in params:
> +            index.config['CACHE'] = params['cache']
> +
> +        if 'branch' in params:
> +            branches = params['branch'].split(',')
> +            index.config['BRANCH'] = branches
> +        else:
> +            branches = ['*']
> +
> +
> +        def load_cache(path, index, branches=[]):
> +            logger.debug(1, 'Loading json file %s' % path)
> +            with open(path, 'rt', encoding='utf-8') as f:
> +                pindex = json.load(f)
> +
> +            # Filter the branches on loaded files...
> +            newpBranch = []
> +            for branch in branches:
> +                if branch != '*':
> +                    if 'branches' in pindex:
> +                        for br in pindex['branches']:
> +                            if br['name'] == branch:
> +                                newpBranch.append(br)
> +                else:
> +                    if 'branches' in pindex:
> +                        for br in pindex['branches']:
> +                            newpBranch.append(br)
> +
> +            if newpBranch:
> +                index.add_raw_element('branches', layerindexlib.Branch, newpBranch)
> +            else:
> +                logger.debug(1, 'No matching branches (%s) in index file(s)' % branches)
> +                # No matching branches.. return nothing...
> +                return
> +
> +            for (lName, lType) in [("layerItems", layerindexlib.LayerItem),
> +                                   ("layerBranches", layerindexlib.LayerBranch),
> +                                   ("layerDependencies", layerindexlib.LayerDependency),
> +                                   ("recipes", layerindexlib.Recipe),
> +                                   ("machines", layerindexlib.Machine),
> +                                   ("distros", layerindexlib.Distro)]:
> +                if lName in pindex:
> +                    index.add_raw_element(lName, lType, pindex[lName])
> +
> +
> +        if not os.path.isdir(up.path):
> +            load_cache(up.path, index, branches)
> +            return index
> +
> +        logger.debug(1, 'Loading from dir %s...' % (up.path))
> +        for (dirpath, _, filenames) in os.walk(up.path):
> +            for filename in filenames:
> +                if not filename.endswith('.json'):
> +                    continue
> +                fpath = os.path.join(dirpath, filename)
> +                load_cache(fpath, index, branches)
> +
> +        return index
> +
> +
> +    def load_index_web(self, up, url, load):
> +        """
> +            Fetches layer information from a remote layer index.
> +
> +            The return value is a LayerIndexObj.
> +
> +            ud is the parsed url to the rest api of the layer index, such as:
> +            http://layers.openembedded.org/layerindex/api/
> +        """
> +
> +        def _get_json_response(apiurl=None, username=None, password=None, retry=True):
> +            assert apiurl is not None
> +
> +            logger.debug(1, "fetching %s" % apiurl)
> +
> +            up = urlparse(apiurl)
> +
> +            username = up.username
> +            password = up.password
> +
> +            # Strip username/password and params
> +            if up.port:
> +                up_stripped = up._replace(params="", netloc="%s:%s" % (up.hostname, up.port))
> +            else:
> +                up_stripped = up._replace(params="", netloc=up.hostname)
> +
> +            res = self.layerindex._fetch_url(up_stripped.geturl(), username=username, password=password)
> +
> +            try:
> +                parsed = json.loads(res.read().decode('utf-8'))
> +            except ConnectionResetError:
> +                if retry:
> +                    logger.debug(1, "%s: Connection reset by peer.  Retrying..." % url)
> +                    parsed = _get_json_response(apiurl=up_stripped.geturl(), username=username, password=password, retry=False)
> +                    logger.debug(1, "%s: retry successful." % url)
> +                else:
> +                    raise layerindexlib.LayerIndexFetchError('%s: Connection reset by peer.  Is there a firewall blocking your connection?' % apiurl)
> +
> +            return parsed
> +
> + index = layerindexlib.LayerIndexObj()
> +
> + index.config = {}
> + index.config['TYPE'] = self.type
> + index.config['URL'] = url
> +
> + params = self.layerindex._parse_params(up.params)
> +
> + if 'desc' in params:
> + index.config['DESCRIPTION'] = unquote(params['desc'])
> + else:
> + index.config['DESCRIPTION'] = up.hostname
> +
> + if 'cache' in params:
> + index.config['CACHE'] = params['cache']
> +
> + if 'branch' in params:
> + branches = params['branch'].split(',')
> + index.config['BRANCH'] = branches
> + else:
> + branches = ['*']
> +
> + try:
> + index.apilinks = _get_json_response(apiurl=url,
> username=up.username, password=up.password)
> + except Exception as e:
> + raise layerindexlib.LayerIndexFetchError(url, e)
> +
> + # Local raw index set...
> + pindex = {}
> +
> + # Load all the requested branches at the same time time,
> + # a special branch of '*' means load all branches
> + filter = ""
> + if "*" not in branches:
> + filter = "?filter=name:%s" % "OR".join(branches)
> +
> + logger.debug(1, "Loading %s from %s" % (branches,
> index.apilinks['branches'])) +
> + # The link won't include username/password, so pull it from
> the original url
> + pindex['branches'] =
> _get_json_response(index.apilinks['branches'] + filter,
> +
> username=up.username, password=up.password)
> + if not pindex['branches']:
> + logger.debug(1, "No valid branches (%s) found at url
> %s." % (branch, url))
> + return index
> + index.add_raw_element("branches", layerindexlib.Branch,
> pindex['branches']) +
> +        # Load all of the layerItems (these can not be easily filtered)
> +        logger.debug(1, "Loading %s from %s" % ('layerItems', index.apilinks['layerItems']))
> +
> +
> +        # The link won't include username/password, so pull it from the original url
> +        pindex['layerItems'] = _get_json_response(index.apilinks['layerItems'],
> +                                                  username=up.username, password=up.password)
> +        if not pindex['layerItems']:
> +            logger.debug(1, "No layers were found at url %s." % (url))
> +            return index
> +        index.add_raw_element("layerItems", layerindexlib.LayerItem, pindex['layerItems'])
> +
> +
> +        # From this point on load the contents for each branch.
> +        # Otherwise we could run into a timeout.
> +        for branch in index.branches:
> +            filter = "?filter=branch__name:%s" % index.branches[branch].name
> +
> +            logger.debug(1, "Loading %s from %s" % ('layerBranches', index.apilinks['layerBranches']))
> +
> +            # The link won't include username/password, so pull it from the original url
> +            pindex['layerBranches'] = _get_json_response(index.apilinks['layerBranches'] + filter,
> +                                                         username=up.username, password=up.password)
> +            if not pindex['layerBranches']:
> +                logger.debug(1, "No valid layer branches (%s) found at url %s." % (branches or "*", url))
> +                return index
> +            index.add_raw_element("layerBranches", layerindexlib.LayerBranch, pindex['layerBranches'])
> +
> +
> +            # Load the rest, they all have a similar format
> +            # Note: the layer index has a few more items, we can add them if necessary
> +            # in the future.
> +            filter = "?filter=layerbranch__branch__name:%s" % index.branches[branch].name
> +            for (lName, lType) in [("layerDependencies", layerindexlib.LayerDependency),
> +                                   ("recipes", layerindexlib.Recipe),
> +                                   ("machines", layerindexlib.Machine),
> +                                   ("distros", layerindexlib.Distro)]:
> +                if lName not in load:
> +                    continue
> +                logger.debug(1, "Loading %s from %s" % (lName, index.apilinks[lName]))
> +
> +                # The link won't include username/password, so pull it from the original url
> +                pindex[lName] = _get_json_response(index.apilinks[lName] + filter,
> +                                                   username=up.username, password=up.password)
> +                index.add_raw_element(lName, lType, pindex[lName])
> +
> + return index
> +
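
The filter strings assembled above ('?filter=name:...' and '?filter=branch__name:...') follow the layerindex-web REST API query convention, and the per-branch loop keeps each request small enough to avoid server timeouts. For orientation, this is how the whole loader is driven through the public API (a sketch based on the tests later in this patch; the URL and branch are just the values those tests use):

  import bb
  import layerindexlib

  d = bb.data.init()
  d.setVar('DL_DIR', '/tmp')   # the datastore needs at least one variable set

  layerindex = layerindexlib.LayerIndex(d)
  # ';branch=sumo' becomes the branch filter above; 'load' limits which
  # per-branch element types are fetched.
  layerindex.load_layerindex(
      'http://layers.openembedded.org/layerindex/api/;branch=sumo',
      load=['layerDependencies'])
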
> + def store_index(self, url, index):
> + """
> + Store layer information into a local file/dir.
> +
> +            The return value is a dictionary containing API,
> +            layer, branch, dependency, recipe, machine, distro, information.
> +
> +            ud is a parsed url to a directory or file.  If the path is a
> +            directory, we will split the files into one file per layer.
> +            If the path is to a file (exists or not) the entire DB will be
> +            dumped into that one file.
> +        """
> +
> + up = urlparse(url)
> +
> + if up.scheme != 'file':
> +            raise layerindexlib.plugin.LayerIndexPluginUrlError(self.type, url)
> +
> +        logger.debug(1, "Storing to %s..." % up.path)
> +
> + try:
> + layerbranches = index.layerBranches
> + except KeyError:
> + logger.error('No layerBranches to write.')
> + return
> +
> +
> + def filter_item(layerbranchid, objects):
> + filtered = []
> + for obj in getattr(index, objects, None):
> + try:
> + if getattr(index, objects)[obj].layerbranch_id
> == layerbranchid:
> + filtered.append(getattr(index,
> objects)[obj]._data)
> + except AttributeError:
> + logger.debug(1, 'No obj.layerbranch_id: %s' %
> objects)
> + # No simple filter method, just include it...
> + try:
> + filtered.append(getattr(index,
> objects)[obj]._data)
> + except AttributeError:
> + logger.debug(1, 'No obj._data: %s %s' %
> (objects, type(obj)))
> + filtered.append(obj)
> + return filtered
> +
> +
> + # Write out to a single file.
> + # Filter out unnecessary items, then sort as we write for
> determinism
> + if not os.path.isdir(up.path):
> + pindex = {}
> +
> + pindex['branches'] = []
> + pindex['layerItems'] = []
> + pindex['layerBranches'] = []
> +
> + for layerbranchid in layerbranches:
> +                if layerbranches[layerbranchid].branch._data not in pindex['branches']:
> +                    pindex['branches'].append(layerbranches[layerbranchid].branch._data)
> +
> +                if layerbranches[layerbranchid].layer._data not in pindex['layerItems']:
> +                    pindex['layerItems'].append(layerbranches[layerbranchid].layer._data)
> +
> +                if layerbranches[layerbranchid]._data not in pindex['layerBranches']:
> +                    pindex['layerBranches'].append(layerbranches[layerbranchid]._data)
> +
> + for entry in index._index:
> + # Skip local items, apilinks and items already
> processed
> + if entry in index.config['local'] or \
> + entry == 'apilinks' or \
> + entry == 'branches' or \
> + entry == 'layerBranches' or \
> + entry == 'layerItems':
> + continue
> + if entry not in pindex:
> + pindex[entry] = []
> +                    pindex[entry].extend(filter_item(layerbranchid, entry))
> +
> + bb.debug(1, 'Writing index to %s' % up.path)
> + with open(up.path, 'wt') as f:
> + json.dump(layerindexlib.sort_entry(pindex), f,
> indent=4)
> + return
> +
> +
> + # Write out to a directory one file per layerBranch
> + # Prepare all layer related items, to create a minimal file.
> + # We have to sort the entries as we write so they are
> deterministic
> + for layerbranchid in layerbranches:
> + pindex = {}
> +
> + for entry in index._index:
> + # Skip local items, apilinks and items already
> processed
> + if entry in index.config['local'] or \
> + entry == 'apilinks' or \
> + entry == 'branches' or \
> + entry == 'layerBranches' or \
> + entry == 'layerItems':
> + continue
> + pindex[entry] = filter_item(layerbranchid, entry)
> +
> + # Add the layer we're processing as the first one...
> +            pindex['branches'] = [layerbranches[layerbranchid].branch._data]
> +            pindex['layerItems'] = [layerbranches[layerbranchid].layer._data]
> +            pindex['layerBranches'] = [layerbranches[layerbranchid]._data]
> +
> + # We also need to include the layerbranch for any
> dependencies...
> + for layerdep in pindex['layerDependencies']:
> +                layerdependency = layerindexlib.LayerDependency(index, layerdep)
> +
> + layeritem = layerdependency.dependency
> + layerbranch = layerdependency.dependency_layerBranch
> +
> + # We need to avoid duplicates...
> + if layeritem._data not in pindex['layerItems']:
> + pindex['layerItems'].append(layeritem._data)
> +
> + if layerbranch._data not in pindex['layerBranches']:
> + pindex['layerBranches'].append(layerbranch._data)
> +
> + # apply mirroring adjustments here....
> +
> + fname = index.config['DESCRIPTION'] + '__' +
> pindex['branches'][0]['name'] + '__' + pindex['layerItems'][0]['name']
> + fname = fname.translate(str.maketrans('/ ', '__'))
> + fpath = os.path.join(up.path, fname)
> +
> + bb.debug(1, 'Writing index to %s' % fpath + '.json')
> + with open(fpath + '.json', 'wt') as f:
> +                json.dump(layerindexlib.sort_entry(pindex), f, indent=4)
> diff --git a/bitbake/lib/layerindexlib/tests/__init__.py b/bitbake/lib/layerindexlib/tests/__init__.py
> new file mode 100644
> index 0000000..e69de29
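
Whether store_index() writes one file or a directory of per-layerBranch files is decided purely by what the file:// path points at. A short usage sketch under that assumption (paths are examples; layerindex is the object from the previous sketch):

  import os

  # Path is a plain file: the entire index is dumped as one JSON document.
  layerindex.store_layerindex('file:///tmp/index.json', layerindex.indexes[0])

  # Path is a directory: one <DESCRIPTION>__<branch>__<layer>.json per layerBranch.
  os.makedirs('/tmp/index.d', exist_ok=True)
  layerindex.store_layerindex('file:///tmp/index.d', layerindex.indexes[0])
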
> diff --git a/bitbake/lib/layerindexlib/tests/common.py b/bitbake/lib/layerindexlib/tests/common.py
> new file mode 100644
> index 0000000..22a5458
> --- /dev/null
> +++ b/bitbake/lib/layerindexlib/tests/common.py
> @@ -0,0 +1,43 @@
> +# Copyright (C) 2017-2018 Wind River Systems, Inc.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License version 2 as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> +# See the GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write to the Free Software
> +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
> +
> +import unittest
> +import tempfile
> +import os
> +import bb
> +
> +import logging
> +
> +class LayersTest(unittest.TestCase):
> +
> + def setUp(self):
> + self.origdir = os.getcwd()
> + self.d = bb.data.init()
> + # At least one variable needs to be set
> + self.d.setVar('DL_DIR', os.getcwd())
> +
> + if os.environ.get("BB_SKIP_NETTESTS") == "yes":
> + self.d.setVar('BB_NO_NETWORK', '1')
> +
> + self.tempdir = tempfile.mkdtemp()
> + self.logger = logging.getLogger("BitBake")
> +
> + def tearDown(self):
> + os.chdir(self.origdir)
> + if os.environ.get("BB_TMPDIR_NOCLEAN") == "yes":
> + print("Not cleaning up %s. Please remove manually." %
> self.tempdir)
> + else:
> + bb.utils.prunedir(self.tempdir)
> +
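
LayersTest gives every derived test a minimal datastore (self.d), a scratch directory (self.tempdir), and honours the BB_SKIP_NETTESTS / BB_TMPDIR_NOCLEAN switches. A trivial derived test, to show what setUp() leaves behind (illustrative only, not part of the patch):

  import os
  from layerindexlib.tests.common import LayersTest

  class ExampleTest(LayersTest):
      def test_fixture_state(self):
          # DL_DIR was seeded with the original working directory...
          self.assertEqual(self.d.getVar('DL_DIR'), self.origdir)
          # ...and tearDown() prunes this directory unless
          # BB_TMPDIR_NOCLEAN=yes is exported.
          self.assertTrue(os.path.isdir(self.tempdir))
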
> diff --git a/bitbake/lib/layerindexlib/tests/cooker.py b/bitbake/lib/layerindexlib/tests/cooker.py
> new file mode 100644
> index 0000000..fdbf091
> --- /dev/null
> +++ b/bitbake/lib/layerindexlib/tests/cooker.py
> @@ -0,0 +1,123 @@
> +# Copyright (C) 2018 Wind River Systems, Inc.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License version 2 as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> +# See the GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write to the Free Software
> +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
> +
> +import unittest
> +import tempfile
> +import os
> +import bb
> +
> +import layerindexlib
> +from layerindexlib.tests.common import LayersTest
> +
> +import logging
> +
> +class LayerIndexCookerTest(LayersTest):
> +
> + def setUp(self):
> + LayersTest.setUp(self)
> +
> + # Note this is NOT a comprehensive test of cooker, as we
> can't easily
> + # configure the test data. But we can emulate the basics of
> the layer.conf
> + # files, so that is what we will do.
> +
> + new_topdir =
> os.path.join(os.path.dirname(os.path.realpath(__file__)), "testdata")
> + new_bbpath = os.path.join(new_topdir, "build")
> +
> + self.d.setVar('TOPDIR', new_topdir)
> + self.d.setVar('BBPATH', new_bbpath)
> +
> + self.d = bb.parse.handle("%s/conf/bblayers.conf" %
> new_bbpath, self.d, True)
> + for layer in self.d.getVar('BBLAYERS').split():
> + self.d = bb.parse.handle("%s/conf/layer.conf" % layer,
> self.d, True) +
> + self.layerindex = layerindexlib.LayerIndex(self.d)
> + self.layerindex.load_layerindex('cooker://',
> load=['layerDependencies']) +
> + def test_layerindex_is_empty(self):
> +        self.assertFalse(self.layerindex.is_empty(), msg="Layerindex is empty")
> +
> +    def test_dependency_resolution(self):
> +        # Verify depth first searching...
> +        (dependencies, invalidnames) = self.layerindex.find_dependencies(names=['meta-python'])
> +
> +        first = True
> +        for deplayerbranch in dependencies:
> +            layerBranch = dependencies[deplayerbranch][0]
> +            layerDeps = dependencies[deplayerbranch][1:]
> +
> +            if not first:
> +                continue
> +
> +            first = False
> +
> +            # Top of the deps should be openembedded-core, since everything depends on it.
> +            self.assertEqual(layerBranch.layer.name, "openembedded-core", msg='Top dependency not openembedded-core')
> +
> +            # meta-python should cause an openembedded-core dependency, if not assert!
> +            for dep in layerDeps:
> +                if dep.layer.name == 'meta-python':
> +                    break
> +            else:
> +                self.assertTrue(False, msg='meta-python was not found')
> +
> +            # Only check the first element...
> +            break
> +        else:
> +            if first:
> +                # Empty list, this is bad.
> +                self.assertTrue(False, msg='Empty list of dependencies')
> +
> +        # Last dep should be the requested item
> +        layerBranch = dependencies[deplayerbranch][0]
> +        self.assertEqual(layerBranch.layer.name, "meta-python", msg='Last dependency not meta-python')
> +
> + def test_find_collection(self):
> +        def _check(collection, expected):
> +            self.logger.debug(1, "Looking for collection %s..." % collection)
> +            result = self.layerindex.find_collection(collection)
> +            if expected:
> +                self.assertIsNotNone(result, msg="Did not find %s when it should be there" % collection)
> +            else:
> +                self.assertIsNone(result, msg="Found %s when it shouldn't be there" % collection)
> +
> + tests = [ ('core', True),
> + ('openembedded-core', False),
> + ('networking-layer', True),
> + ('meta-python', True),
> + ('openembedded-layer', True),
> + ('notpresent', False) ]
> +
> + for collection,result in tests:
> + _check(collection, result)
> +
> + def test_find_layerbranch(self):
> +        def _check(name, expected):
> +            self.logger.debug(1, "Looking for layerbranch %s..." % name)
> +            result = self.layerindex.find_layerbranch(name)
> +            if expected:
> +                self.assertIsNotNone(result, msg="Did not find %s when it should be there" % name)
> +            else:
> +                self.assertIsNone(result, msg="Found %s when it shouldn't be there" % name)
> +
> + tests = [ ('openembedded-core', True),
> + ('core', False),
> + ('networking-layer', True),
> + ('meta-python', True),
> + ('openembedded-layer', True),
> + ('notpresent', False) ]
> +
> + for collection,result in tests:
> + _check(collection, result)
> +
> diff --git a/bitbake/lib/layerindexlib/tests/layerindexobj.py b/bitbake/lib/layerindexlib/tests/layerindexobj.py
> new file mode 100644
> index 0000000..e2fbb95
> --- /dev/null
> +++ b/bitbake/lib/layerindexlib/tests/layerindexobj.py
> @@ -0,0 +1,226 @@
> +# Copyright (C) 2017-2018 Wind River Systems, Inc.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License version 2 as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> +# See the GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write to the Free Software
> +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
> +
> +import unittest
> +import tempfile
> +import os
> +import bb
> +
> +from layerindexlib.tests.common import LayersTest
> +
> +import logging
> +
> +class LayerIndexObjectsTest(LayersTest):
> + def setUp(self):
> +        from layerindexlib import LayerIndexObj, Branch, LayerItem, LayerBranch, LayerDependency, Recipe, Machine, Distro
> +
> + LayersTest.setUp(self)
> +
> + self.index = LayerIndexObj()
> +
> + branchId = 0
> + layerItemId = 0
> + layerBranchId = 0
> + layerDependencyId = 0
> + recipeId = 0
> + machineId = 0
> + distroId = 0
> +
> + self.index.branches = {}
> + self.index.layerItems = {}
> + self.index.layerBranches = {}
> + self.index.layerDependencies = {}
> + self.index.recipes = {}
> + self.index.machines = {}
> + self.index.distros = {}
> +
> + branchId += 1
> + self.index.branches[branchId] = Branch(self.index)
> + self.index.branches[branchId].define_data(branchId,
> + 'test_branch',
> 'bb_test_branch')
> + self.index.branches[branchId].lockData()
> +
> + layerItemId +=1
> + self.index.layerItems[layerItemId] = LayerItem(self.index)
> + self.index.layerItems[layerItemId].define_data(layerItemId,
> + 'test_layerItem',
> vcs_url='git://git_test_url/test_layerItem')
> + self.index.layerItems[layerItemId].lockData()
> +
> + layerBranchId +=1
> +        self.index.layerBranches[layerBranchId] = LayerBranch(self.index)
> +        self.index.layerBranches[layerBranchId].define_data(layerBranchId,
> +                       'test_collection', '99', layerItemId,
> +                       branchId)
> +
> + recipeId += 1
> + self.index.recipes[recipeId] = Recipe(self.index)
> + self.index.recipes[recipeId].define_data(recipeId,
> 'test_git.bb',
> + 'recipes-test', 'test',
> 'git',
> + layerBranchId)
> +
> + machineId += 1
> + self.index.machines[machineId] = Machine(self.index)
> + self.index.machines[machineId].define_data(machineId,
> + 'test_machine',
> 'test_machine',
> + layerBranchId)
> +
> + distroId += 1
> + self.index.distros[distroId] = Distro(self.index)
> + self.index.distros[distroId].define_data(distroId,
> + 'test_distro', 'test_distro',
> + layerBranchId)
> +
> + layerItemId +=1
> + self.index.layerItems[layerItemId] = LayerItem(self.index)
> +        self.index.layerItems[layerItemId].define_data(layerItemId, 'test_layerItem 2',
> +                       vcs_url='git://git_test_url/test_layerItem')
> +
> + layerBranchId +=1
> +        self.index.layerBranches[layerBranchId] = LayerBranch(self.index)
> +        self.index.layerBranches[layerBranchId].define_data(layerBranchId,
> +                       'test_collection_2', '72', layerItemId,
> +                       branchId, actual_branch='some_other_branch')
> +
> +        layerDependencyId += 1
> +        self.index.layerDependencies[layerDependencyId] = LayerDependency(self.index)
> +        self.index.layerDependencies[layerDependencyId].define_data(layerDependencyId,
> +                       layerBranchId, 1)
> +
> +        layerDependencyId += 1
> +        self.index.layerDependencies[layerDependencyId] = LayerDependency(self.index)
> +        self.index.layerDependencies[layerDependencyId].define_data(layerDependencyId,
> +                       layerBranchId, 1, required=False)
> +
> + def test_branch(self):
> + branch = self.index.branches[1]
> + self.assertEqual(branch.id, 1)
> + self.assertEqual(branch.name, 'test_branch')
> + self.assertEqual(branch.short_description, 'test_branch')
> + self.assertEqual(branch.bitbake_branch, 'bb_test_branch')
> +
> + def test_layerItem(self):
> + layerItem = self.index.layerItems[1]
> + self.assertEqual(layerItem.id, 1)
> + self.assertEqual(layerItem.name, 'test_layerItem')
> + self.assertEqual(layerItem.summary, 'test_layerItem')
> + self.assertEqual(layerItem.description, 'test_layerItem')
> + self.assertEqual(layerItem.vcs_url,
> 'git://git_test_url/test_layerItem')
> + self.assertEqual(layerItem.vcs_web_url, None)
> + self.assertIsNone(layerItem.vcs_web_tree_base_url)
> + self.assertIsNone(layerItem.vcs_web_file_base_url)
> + self.assertIsNotNone(layerItem.updated)
> +
> + layerItem = self.index.layerItems[2]
> + self.assertEqual(layerItem.id, 2)
> + self.assertEqual(layerItem.name, 'test_layerItem 2')
> + self.assertEqual(layerItem.summary, 'test_layerItem 2')
> + self.assertEqual(layerItem.description, 'test_layerItem 2')
> + self.assertEqual(layerItem.vcs_url,
> 'git://git_test_url/test_layerItem')
> + self.assertIsNone(layerItem.vcs_web_url)
> + self.assertIsNone(layerItem.vcs_web_tree_base_url)
> + self.assertIsNone(layerItem.vcs_web_file_base_url)
> + self.assertIsNotNone(layerItem.updated)
> +
> + def test_layerBranch(self):
> + layerBranch = self.index.layerBranches[1]
> + self.assertEqual(layerBranch.id, 1)
> + self.assertEqual(layerBranch.collection, 'test_collection')
> + self.assertEqual(layerBranch.version, '99')
> + self.assertEqual(layerBranch.vcs_subdir, '')
> + self.assertEqual(layerBranch.actual_branch, 'test_branch')
> + self.assertIsNotNone(layerBranch.updated)
> + self.assertEqual(layerBranch.layer_id, 1)
> + self.assertEqual(layerBranch.branch_id, 1)
> + self.assertEqual(layerBranch.layer, self.index.layerItems[1])
> + self.assertEqual(layerBranch.branch, self.index.branches[1])
> +
> + layerBranch = self.index.layerBranches[2]
> + self.assertEqual(layerBranch.id, 2)
> + self.assertEqual(layerBranch.collection, 'test_collection_2')
> + self.assertEqual(layerBranch.version, '72')
> + self.assertEqual(layerBranch.vcs_subdir, '')
> + self.assertEqual(layerBranch.actual_branch,
> 'some_other_branch')
> + self.assertIsNotNone(layerBranch.updated)
> + self.assertEqual(layerBranch.layer_id, 2)
> + self.assertEqual(layerBranch.branch_id, 1)
> + self.assertEqual(layerBranch.layer, self.index.layerItems[2])
> + self.assertEqual(layerBranch.branch, self.index.branches[1])
> +
> + def test_layerDependency(self):
> + layerDependency = self.index.layerDependencies[1]
> + self.assertEqual(layerDependency.id, 1)
> + self.assertEqual(layerDependency.layerbranch_id, 2)
> + self.assertEqual(layerDependency.layerbranch,
> self.index.layerBranches[2])
> + self.assertEqual(layerDependency.layer_id, 2)
> + self.assertEqual(layerDependency.layer,
> self.index.layerItems[2])
> + self.assertTrue(layerDependency.required)
> + self.assertEqual(layerDependency.dependency_id, 1)
> + self.assertEqual(layerDependency.dependency,
> self.index.layerItems[1])
> + self.assertEqual(layerDependency.dependency_layerBranch,
> self.index.layerBranches[1]) +
> + layerDependency = self.index.layerDependencies[2]
> + self.assertEqual(layerDependency.id, 2)
> + self.assertEqual(layerDependency.layerbranch_id, 2)
> + self.assertEqual(layerDependency.layerbranch,
> self.index.layerBranches[2])
> + self.assertEqual(layerDependency.layer_id, 2)
> + self.assertEqual(layerDependency.layer,
> self.index.layerItems[2])
> + self.assertFalse(layerDependency.required)
> + self.assertEqual(layerDependency.dependency_id, 1)
> + self.assertEqual(layerDependency.dependency,
> self.index.layerItems[1])
> + self.assertEqual(layerDependency.dependency_layerBranch,
> self.index.layerBranches[1]) +
> + def test_recipe(self):
> + recipe = self.index.recipes[1]
> + self.assertEqual(recipe.id, 1)
> + self.assertEqual(recipe.layerbranch_id, 1)
> + self.assertEqual(recipe.layerbranch,
> self.index.layerBranches[1])
> + self.assertEqual(recipe.layer_id, 1)
> + self.assertEqual(recipe.layer, self.index.layerItems[1])
> + self.assertEqual(recipe.filename, 'test_git.bb')
> + self.assertEqual(recipe.filepath, 'recipes-test')
> + self.assertEqual(recipe.fullpath, 'recipes-test/test_git.bb')
> + self.assertEqual(recipe.summary, "")
> + self.assertEqual(recipe.description, "")
> + self.assertEqual(recipe.section, "")
> + self.assertEqual(recipe.pn, 'test')
> + self.assertEqual(recipe.pv, 'git')
> + self.assertEqual(recipe.license, "")
> + self.assertEqual(recipe.homepage, "")
> + self.assertEqual(recipe.bugtracker, "")
> + self.assertEqual(recipe.provides, "")
> + self.assertIsNotNone(recipe.updated)
> + self.assertEqual(recipe.inherits, "")
> +
> + def test_machine(self):
> + machine = self.index.machines[1]
> + self.assertEqual(machine.id, 1)
> + self.assertEqual(machine.layerbranch_id, 1)
> + self.assertEqual(machine.layerbranch,
> self.index.layerBranches[1])
> + self.assertEqual(machine.layer_id, 1)
> + self.assertEqual(machine.layer, self.index.layerItems[1])
> + self.assertEqual(machine.name, 'test_machine')
> + self.assertEqual(machine.description, 'test_machine')
> + self.assertIsNotNone(machine.updated)
> +
> + def test_distro(self):
> + distro = self.index.distros[1]
> + self.assertEqual(distro.id, 1)
> + self.assertEqual(distro.layerbranch_id, 1)
> + self.assertEqual(distro.layerbranch,
> self.index.layerBranches[1])
> + self.assertEqual(distro.layer_id, 1)
> + self.assertEqual(distro.layer, self.index.layerItems[1])
> + self.assertEqual(distro.name, 'test_distro')
> + self.assertEqual(distro.description, 'test_distro')
> + self.assertIsNotNone(distro.updated)
> diff --git a/bitbake/lib/layerindexlib/tests/restapi.py b/bitbake/lib/layerindexlib/tests/restapi.py
> new file mode 100644
> index 0000000..5876695
> --- /dev/null
> +++ b/bitbake/lib/layerindexlib/tests/restapi.py
> @@ -0,0 +1,184 @@
> +# Copyright (C) 2017-2018 Wind River Systems, Inc.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License version 2 as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> +# See the GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write to the Free Software
> +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
> +
> +import unittest
> +import tempfile
> +import os
> +import bb
> +
> +import layerindexlib
> +from layerindexlib.tests.common import LayersTest
> +
> +import logging
> +
> +def skipIfNoNetwork():
> + if os.environ.get("BB_SKIP_NETTESTS") == "yes":
> + return unittest.skip("Network tests being skipped")
> + return lambda f: f
> +
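
The decorator factory above returns unittest.skip(...) when BB_SKIP_NETTESTS=yes and the identity function otherwise, so it has to be applied with call parentheses. Typical use, as in the class that follows (method name illustrative):

  class SomeNetworkTest(LayersTest):

      @skipIfNoNetwork()   # note the (): the decorator is produced at class-definition time
      def test_something_remote(self):
          ...
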
> +class LayerIndexWebRestApiTest(LayersTest):
> +
> + @skipIfNoNetwork()
> + def setUp(self):
> + self.assertFalse(os.environ.get("BB_SKIP_NETTESTS") ==
> "yes", msg="BB_SKIP_NETTESTS set, but we tried to test anyway")
> + LayersTest.setUp(self)
> + self.layerindex = layerindexlib.LayerIndex(self.d)
> +
> self.layerindex.load_layerindex('http://layers.openembedded.org/layerindex/api/;branch=sumo',
> load=['layerDependencies']) +
> + @skipIfNoNetwork()
> + def test_layerindex_is_empty(self):
> + self.assertFalse(self.layerindex.is_empty(), msg="Layerindex
> is empty") +
> + @skipIfNoNetwork()
> + def test_layerindex_store_file(self):
> +        self.layerindex.store_layerindex('file://%s/file.json' % self.tempdir, self.layerindex.indexes[0])
> +
> +        self.assertTrue(os.path.isfile('%s/file.json' % self.tempdir), msg="Temporary file was not created by store_layerindex")
> +
> +        reload = layerindexlib.LayerIndex(self.d)
> +        reload.load_layerindex('file://%s/file.json' % self.tempdir)
> +
> +        self.assertFalse(reload.is_empty(), msg="Layerindex is empty")
> +
> +        # Calculate layerItems in original index that should NOT be in reload
> +        layerItemNames = []
> +        for itemId in self.layerindex.indexes[0].layerItems:
> +            layerItemNames.append(self.layerindex.indexes[0].layerItems[itemId].name)
> +
> +        for layerBranchId in self.layerindex.indexes[0].layerBranches:
> +            layerItemNames.remove(self.layerindex.indexes[0].layerBranches[layerBranchId].layer.name)
> +
> +        for itemId in reload.indexes[0].layerItems:
> +            self.assertFalse(reload.indexes[0].layerItems[itemId].name in layerItemNames, msg="Item reloaded when it shouldn't have been")
> +
> +        # Compare the original to what we wrote...
> +        for type in self.layerindex.indexes[0]._index:
> +            if type == 'apilinks' or \
> +               type == 'layerItems' or \
> +               type in self.layerindex.indexes[0].config['local']:
> +                continue
> +            for id in getattr(self.layerindex.indexes[0], type):
> +                self.logger.debug(1, "type %s" % (type))
> +
> +                self.assertTrue(id in getattr(reload.indexes[0], type), msg="Id number not in reloaded index")
> +
> +                self.logger.debug(1, "%s ? %s" % (getattr(self.layerindex.indexes[0], type)[id], getattr(reload.indexes[0], type)[id]))
> +
> +                self.assertEqual(getattr(self.layerindex.indexes[0], type)[id], getattr(reload.indexes[0], type)[id], msg="Reloaded contents different")
> +
> + @skipIfNoNetwork()
> + def test_layerindex_store_split(self):
> +        self.layerindex.store_layerindex('file://%s' % self.tempdir, self.layerindex.indexes[0])
> +
> +        reload = layerindexlib.LayerIndex(self.d)
> +        reload.load_layerindex('file://%s' % self.tempdir)
> +
> +        self.assertFalse(reload.is_empty(), msg="Layer index is empty")
> +
> +        for type in self.layerindex.indexes[0]._index:
> +            if type == 'apilinks' or \
> +               type == 'layerItems' or \
> +               type in self.layerindex.indexes[0].config['local']:
> +                continue
> +            for id in getattr(self.layerindex.indexes[0], type):
> +                self.logger.debug(1, "type %s" % (type))
> +
> +                self.assertTrue(id in getattr(reload.indexes[0], type), msg="Id number missing from reloaded data")
> +
> +                self.logger.debug(1, "%s ? %s" % (getattr(self.layerindex.indexes[0], type)[id], getattr(reload.indexes[0], type)[id]))
> +
> +                self.assertEqual(getattr(self.layerindex.indexes[0], type)[id], getattr(reload.indexes[0], type)[id], msg="reloaded data does not match original")
> +
> + @skipIfNoNetwork()
> +    def test_dependency_resolution(self):
> +        # Verify depth first searching...
> +        (dependencies, invalidnames) = self.layerindex.find_dependencies(names=['meta-python'])
> +
> +        first = True
> +        for deplayerbranch in dependencies:
> +            layerBranch = dependencies[deplayerbranch][0]
> +            layerDeps = dependencies[deplayerbranch][1:]
> +
> +            if not first:
> +                continue
> +
> +            first = False
> +
> +            # Top of the deps should be openembedded-core, since everything depends on it.
> +            self.assertEqual(layerBranch.layer.name, "openembedded-core", msg='OpenEmbedded-Core is not the first dependency')
> +
> +            # meta-python should cause an openembedded-core dependency, if not assert!
> +            for dep in layerDeps:
> +                if dep.layer.name == 'meta-python':
> +                    break
> +            else:
> +                self.logger.debug(1, "meta-python was not found")
> +                self.assertTrue(False)
> +
> +            # Only check the first element...
> +            break
> +        else:
> +            # Empty list, this is bad.
> +            self.logger.debug(1, "Empty list of dependencies")
> +            self.assertIsNotNone(first, msg="Empty list of dependencies")
> +
> +        # Last dep should be the requested item
> +        layerBranch = dependencies[deplayerbranch][0]
> +        self.assertEqual(layerBranch.layer.name, "meta-python", msg="Last dependency not meta-python")
> +
> + @skipIfNoNetwork()
> + def test_find_collection(self):
> +        def _check(collection, expected):
> +            self.logger.debug(1, "Looking for collection %s..." % collection)
> +            result = self.layerindex.find_collection(collection)
> +            if expected:
> +                self.assertIsNotNone(result, msg="Did not find %s when it should be there" % collection)
> +            else:
> +                self.assertIsNone(result, msg="Found %s when it shouldn't be there" % collection)
> +
> + tests = [ ('core', True),
> + ('openembedded-core', False),
> + ('networking-layer', True),
> + ('meta-python', True),
> + ('openembedded-layer', True),
> + ('notpresent', False) ]
> +
> + for collection,result in tests:
> + _check(collection, result)
> +
> + @skipIfNoNetwork()
> + def test_find_layerbranch(self):
> +        def _check(name, expected):
> +            self.logger.debug(1, "Looking for layerbranch %s..." % name)
> +
> +            for index in self.layerindex.indexes:
> +                for layerbranchid in index.layerBranches:
> +                    self.logger.debug(1, "Present: %s" % index.layerBranches[layerbranchid].layer.name)
> +            result = self.layerindex.find_layerbranch(name)
> +            if expected:
> +                self.assertIsNotNone(result, msg="Did not find %s when it should be there" % name)
> +            else:
> +                self.assertIsNone(result, msg="Found %s when it shouldn't be there" % name)
> +
> + tests = [ ('openembedded-core', True),
> + ('core', False),
> + ('meta-networking', True),
> + ('meta-python', True),
> + ('meta-oe', True),
> + ('notpresent', False) ]
> +
> + for collection,result in tests:
> + _check(collection, result)
> +
> diff --git a/bitbake/lib/layerindexlib/tests/testdata/README b/bitbake/lib/layerindexlib/tests/testdata/README
> new file mode 100644
> index 0000000..36ab40b
> --- /dev/null
> +++ b/bitbake/lib/layerindexlib/tests/testdata/README
> @@ -0,0 +1,11 @@
> +This test data is used to verify the 'cooker' module of the layerindex.
> +
> +The module consists of a faux project bblayers.conf with four layers defined.
> +
> +layer1 - openembedded-core
> +layer2 - networking-layer
> +layer3 - meta-python
> +layer4 - openembedded-layer (meta-oe)
> +
> +Since we do not have a fully populated cooker, we use this to test the
> +basic index generation, and not any deep recipe based contents.
> diff --git a/bitbake/lib/layerindexlib/tests/testdata/build/conf/bblayers.conf b/bitbake/lib/layerindexlib/tests/testdata/build/conf/bblayers.conf
> new file mode 100644
> index 0000000..40429b2
> --- /dev/null
> +++ b/bitbake/lib/layerindexlib/tests/testdata/build/conf/bblayers.conf
> @@ -0,0 +1,15 @@
> +LAYERSERIES_CORENAMES = "sumo"
> +
> +# LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf
> +# changes incompatibly
> +LCONF_VERSION = "7"
> +
> +BBPATH = "${TOPDIR}"
> +BBFILES ?= ""
> +
> +BBLAYERS ?= " \
> + ${TOPDIR}/layer1 \
> + ${TOPDIR}/layer2 \
> + ${TOPDIR}/layer3 \
> + ${TOPDIR}/layer4 \
> + "
> diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf
> new file mode 100644
> index 0000000..966d531
> --- /dev/null
> +++ b/bitbake/lib/layerindexlib/tests/testdata/layer1/conf/layer.conf
> @@ -0,0 +1,17 @@
> +# We have a conf and classes directory, add to BBPATH
> +BBPATH .= ":${LAYERDIR}"
> +# We have recipes-* directories, add to BBFILES
> +BBFILES += "${LAYERDIR}/recipes-*/*/*.bb"
> +
> +BBFILE_COLLECTIONS += "core"
> +BBFILE_PATTERN_core = "^${LAYERDIR}/"
> +BBFILE_PRIORITY_core = "5"
> +
> +LAYERSERIES_CORENAMES = "sumo"
> +
> +# This should only be incremented on significant changes that will
> +# cause compatibility issues with other layers
> +LAYERVERSION_core = "11"
> +LAYERSERIES_COMPAT_core = "sumo"
> +
> +BBLAYERS_LAYERINDEX_NAME_core = "openembedded-core"
> diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf
> new file mode 100644
> index 0000000..7569d1c
> --- /dev/null
> +++ b/bitbake/lib/layerindexlib/tests/testdata/layer2/conf/layer.conf
> @@ -0,0 +1,20 @@
> +# We have a conf and classes directory, add to BBPATH
> +BBPATH .= ":${LAYERDIR}"
> +
> +# We have a packages directory, add to BBFILES
> +BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
> + ${LAYERDIR}/recipes-*/*/*.bbappend"
> +
> +BBFILE_COLLECTIONS += "networking-layer"
> +BBFILE_PATTERN_networking-layer := "^${LAYERDIR}/"
> +BBFILE_PRIORITY_networking-layer = "5"
> +
> +# This should only be incremented on significant changes that will
> +# cause compatibility issues with other layers
> +LAYERVERSION_networking-layer = "1"
> +
> +LAYERDEPENDS_networking-layer = "core"
> +LAYERDEPENDS_networking-layer += "openembedded-layer"
> +LAYERDEPENDS_networking-layer += "meta-python"
> +
> +LAYERSERIES_COMPAT_networking-layer = "sumo"
> diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf
> new file mode 100644
> index 0000000..7089071
> --- /dev/null
> +++ b/bitbake/lib/layerindexlib/tests/testdata/layer3/conf/layer.conf
> @@ -0,0 +1,19 @@
> +# We might have a conf and classes directory, append to BBPATH
> +BBPATH .= ":${LAYERDIR}"
> +
> +# We have recipes directories, add to BBFILES
> +BBFILES += "${LAYERDIR}/recipes*/*/*.bb
> ${LAYERDIR}/recipes*/*/*.bbappend" +
> +BBFILE_COLLECTIONS += "meta-python"
> +BBFILE_PATTERN_meta-python := "^${LAYERDIR}/"
> +BBFILE_PRIORITY_meta-python = "7"
> +
> +# This should only be incremented on significant changes that will
> +# cause compatibility issues with other layers
> +LAYERVERSION_meta-python = "1"
> +
> +LAYERDEPENDS_meta-python = "core openembedded-layer"
> +
> +LAYERSERIES_COMPAT_meta-python = "sumo"
> +
> +LICENSE_PATH += "${LAYERDIR}/licenses"
> diff --git a/bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf b/bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf
> new file mode 100644
> index 0000000..6649ee0
> --- /dev/null
> +++ b/bitbake/lib/layerindexlib/tests/testdata/layer4/conf/layer.conf
> @@ -0,0 +1,22 @@
> +# We have a conf and classes directory, append to BBPATH
> +BBPATH .= ":${LAYERDIR}"
> +
> +# We have a recipes directory, add to BBFILES
> +BBFILES += "${LAYERDIR}/recipes-*/*/*.bb
> ${LAYERDIR}/recipes-*/*/*.bbappend" +
> +BBFILE_COLLECTIONS += "openembedded-layer"
> +BBFILE_PATTERN_openembedded-layer := "^${LAYERDIR}/"
> +
> +# Define the priority for recipes (.bb files) from this layer,
> +# choosing carefully how this layer interacts with all of the
> +# other layers.
> +
> +BBFILE_PRIORITY_openembedded-layer = "6"
> +
> +# This should only be incremented on significant changes that will
> +# cause compatibility issues with other layers
> +LAYERVERSION_openembedded-layer = "1"
> +
> +LAYERDEPENDS_openembedded-layer = "core"
> +
> +LAYERSERIES_COMPAT_openembedded-layer = "sumo"
> diff --git a/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py b/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py
> index 4c17562..9490635 100644
> --- a/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py
> +++ b/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py
> @@ -27,8 +27,9 @@ import shutil
>  import time
> from django.db import transaction
> from django.db.models import Q
> -from bldcontrol.models import BuildEnvironment, BRLayer, BRVariable, BRTarget, BRBitbake
> -from orm.models import CustomImageRecipe, Layer, Layer_Version, ProjectLayer, ToasterSetting
> +from bldcontrol.models import BuildEnvironment, BuildRequest, BRLayer, BRVariable, BRTarget, BRBitbake, Build
> +from orm.models import CustomImageRecipe, Layer, Layer_Version, Project, ProjectLayer, ToasterSetting
> +from orm.models import signal_runbuilds
>  import subprocess
>  from toastermain import settings
> @@ -38,6 +39,8 @@ from bldcontrol.bbcontroller import BuildEnvironmentController, ShellCmdException
>  import logging
>  logger = logging.getLogger("toaster")
>
> +install_dir = os.environ.get('TOASTER_DIR')
> +
> from pprint import pprint, pformat
>
> class LocalhostBEController(BuildEnvironmentController):
> @@ -87,10 +90,10 @@ class LocalhostBEController(BuildEnvironmentController):
>          #logger.debug("localhostbecontroller: using HEAD checkout in %s" % local_checkout_path)
>          return local_checkout_path
> -
> - def setCloneStatus(self,bitbake,status,total,current):
> + def setCloneStatus(self,bitbake,status,total,current,repo_name):
> bitbake.req.build.repos_cloned=current
> bitbake.req.build.repos_to_clone=total
> + bitbake.req.build.progress_item=repo_name
> bitbake.req.build.save()
>
> def setLayers(self, bitbake, layers, targets):
> @@ -100,6 +103,7 @@ class LocalhostBEController(BuildEnvironmentController):
> layerlist = []
> nongitlayerlist = []
> + layer_index = 0
> git_env = os.environ.copy()
> # (note: add custom environment settings here)
>
> @@ -113,7 +117,7 @@ class LocalhostBEController(BuildEnvironmentController):
>          if bitbake.giturl and bitbake.commit:
>              gitrepos[(bitbake.giturl, bitbake.commit)] = []
>              gitrepos[(bitbake.giturl, bitbake.commit)].append(
> - ("bitbake", bitbake.dirpath))
> + ("bitbake", bitbake.dirpath, 0))
>
>          for layer in layers:
>              # We don't need to git clone the layer for the CustomImageRecipe
> @@ -124,12 +128,13 @@ class LocalhostBEController(BuildEnvironmentController):
>              # If we have local layers then we don't need clone them
>              # For local layers giturl will be empty
>              if not layer.giturl:
> -                nongitlayerlist.append(layer.layer_version.layer.local_source_dir)
> +                nongitlayerlist.append( "%03d:%s" % (layer_index,layer.local_source_dir) )
>                  continue
>
>              if not (layer.giturl, layer.commit) in gitrepos:
>                  gitrepos[(layer.giturl, layer.commit)] = []
> -            gitrepos[(layer.giturl, layer.commit)].append( (layer.name, layer.dirpath) )
> +            gitrepos[(layer.giturl, layer.commit)].append( (layer.name,layer.dirpath,layer_index) )
> +            layer_index += 1
>
>
> logger.debug("localhostbecontroller, our git repos are %s" %
> pformat(gitrepos)) @@ -159,9 +164,9 @@ class
> LocalhostBEController(BuildEnvironmentController): # 3. checkout the
> repositories clone_count=0
> clone_total=len(gitrepos.keys())
> -
> self.setCloneStatus(bitbake,'Started',clone_total,clone_count)
> +
> self.setCloneStatus(bitbake,'Started',clone_total,clone_count,'') for
> giturl, commit in gitrepos.keys():
> -
> self.setCloneStatus(bitbake,'progress',clone_total,clone_count)
> +
> self.setCloneStatus(bitbake,'progress',clone_total,clone_count,gitrepos[(giturl,
> commit)][0][0]) clone_count += 1
>              localdirname = os.path.join(self.be.sourcedir, self.getGitCloneDirectory(giturl, commit))
> @@ -172,8 +177,11 @@ class LocalhostBEController(BuildEnvironmentController):
>                  try:
>                      localremotes = self._shellcmd("git remote -v", localdirname,env=git_env)
> -                    if not giturl in localremotes and commit != 'HEAD':
> -                        raise BuildSetupException("Existing git repository at %s, but with different remotes ('%s', expected '%s'). Toaster will not continue out of fear of damaging something." % (localdirname, ", ".join(localremotes.split("\n")), giturl))
> +                    # NOTE: this nice-to-have check breaks when using git remapping to get past firewall
> +                    #       Re-enable later with .gitconfig remapping checks
> +                    #if not giturl in localremotes and commit != 'HEAD':
> +                    #    raise BuildSetupException("Existing git repository at %s, but with different remotes ('%s', expected '%s'). Toaster will not continue out of fear of damaging something." % (localdirname, ", ".join(localremotes.split("\n")), giturl))
> +                    pass
> except ShellCmdException:
> # our localdirname might not be a git repository
> #- that's fine
> @@ -192,7 +200,7 @@ class LocalhostBEController(BuildEnvironmentController):
>              if commit != "HEAD":
>                  logger.debug("localhostbecontroller: checking out commit %s to %s " % (commit, localdirname))
>                  ref = commit if re.match('^[a-fA-F0-9]+$', commit) else 'origin/%s' % commit
> -                self._shellcmd('git fetch --all && git reset --hard "%s"' % ref, localdirname,env=git_env)
> +                self._shellcmd('git fetch && git reset --hard "%s"' % ref, localdirname,env=git_env)
>              # take the localdirname as poky dir if we can find the oe-init-build-env
>              if self.pokydirname is None and os.path.exists(os.path.join(localdirname, "oe-init-build-env")):
> @@ -205,21 +213,33 @@ class LocalhostBEController(BuildEnvironmentController):
>              self._shellcmd("git clone -b \"%s\" \"%s\" \"%s\" " % (bitbake.commit, bitbake.giturl, os.path.join(self.pokydirname, 'bitbake')),env=git_env)
>          # verify our repositories
> -        for name, dirpath in gitrepos[(giturl, commit)]:
> +        for name, dirpath, index in gitrepos[(giturl, commit)]:
>              localdirpath = os.path.join(localdirname, dirpath)
> -            logger.debug("localhostbecontroller: localdirpath expected '%s'" % localdirpath)
> +            logger.debug("localhostbecontroller: localdirpath expects '%s'" % localdirpath)
>              if not os.path.exists(localdirpath):
>                  raise BuildSetupException("Cannot find layer git path '%s' in checked out repository '%s:%s'. Aborting." % (localdirpath, giturl, commit))
>              if name != "bitbake":
> -                layerlist.append(localdirpath.rstrip("/"))
> +                layerlist.append("%03d:%s" % (index,localdirpath.rstrip("/")))
> -        self.setCloneStatus(bitbake,'complete',clone_total,clone_count)
> +        self.setCloneStatus(bitbake,'complete',clone_total,clone_count,'')
>          logger.debug("localhostbecontroller: current layer list %s " % pformat(layerlist))
> -        if self.pokydirname is None and os.path.exists(os.path.join(self.be.sourcedir, "oe-init-build-env")):
> -            logger.debug("localhostbecontroller: selected poky dir name %s" % self.be.sourcedir)
> -            self.pokydirname = self.be.sourcedir
> +        # Resolve self.pokydirname if not resolved yet, consider the scenario
> +        # where all layers are local, that's the else clause
> +        if self.pokydirname is None:
> +            if os.path.exists(os.path.join(self.be.sourcedir, "oe-init-build-env")):
> +                logger.debug("localhostbecontroller: selected poky dir name %s" % self.be.sourcedir)
> +                self.pokydirname = self.be.sourcedir
> +            else:
> +                # Alternatively, scan local layers for relative "oe-init-build-env" location
> +                for layer in layers:
> +                    if os.path.exists(os.path.join(layer.layer_version.layer.local_source_dir,"..","oe-init-build-env")):
> +                        logger.debug("localhostbecontroller, setting pokydirname to %s" % (layer.layer_version.layer.local_source_dir))
> +                        self.pokydirname = os.path.join(layer.layer_version.layer.local_source_dir,"..")
> +                        break
> +                else:
> +                    logger.error("pokydirname is not set, you will run into trouble!")
> # 5. create custom layer and add custom recipes to it
> for target in targets:
> @@ -232,7 +252,7 @@ class LocalhostBEController(BuildEnvironmentController):
>                                                  customrecipe, layers)
>              if os.path.isdir(custom_layer_path):
> -                layerlist.append(custom_layer_path)
> +                layerlist.append("%03d:%s" % (layer_index,custom_layer_path))
> except CustomImageRecipe.DoesNotExist:
> continue # not a custom recipe, skip
> @@ -240,7 +260,11 @@ class LocalhostBEController(BuildEnvironmentController):
>          layerlist.extend(nongitlayerlist)
>          logger.debug("\n\nset layers gives this list %s" % pformat(layerlist))
>          self.islayerset = True
> - return layerlist
> +
> + # restore the order of layer list for bblayers.conf
> + layerlist.sort()
> + sorted_layerlist = [l[4:] for l in layerlist]
> + return sorted_layerlist
>
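
The '%03d:' prefixes introduced throughout setLayers() exist only so this final sort can restore the user's layer order after git and non-git layers were collected separately: the tag is three digits plus a colon, hence the l[4:] slice. The trick in isolation (list values are examples):

  layers = ['meta-custom', 'meta-oe', 'meta-python']
  tagged = ['%03d:%s' % (i, name) for i, name in enumerate(layers)]
  # ...entries from several sources can now be mixed in any order...
  tagged.sort()                                # zero-padded tags sort numerically
  restored = [entry[4:] for entry in tagged]   # drop the 'NNN:' tag
  assert restored == layers
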
> def setup_custom_image_recipe(self, customrecipe, layers):
> """ Set up toaster-custom-images layer and recipe files """
> @@ -310,41 +334,141 @@ class LocalhostBEController(BuildEnvironmentController):
>      def triggerBuild(self, bitbake, layers, variables, targets, brbe):
>          layers = self.setLayers(bitbake, layers, targets)
> + is_merged_attr = bitbake.req.project.merged_attr
> +
> + git_env = os.environ.copy()
> + # (note: add custom environment settings here)
> + try:
> +            # ensure that the project init/build uses the selected bitbake, and not Toaster's
> + del git_env['TEMPLATECONF']
> + del git_env['BBBASEDIR']
> + del git_env['BUILDDIR']
> + except KeyError:
> + pass
>
>          # init build environment from the clone
> -        builddir = '%s-toaster-%d' % (self.be.builddir, bitbake.req.project.id)
> +        if bitbake.req.project.builddir:
> +            builddir = bitbake.req.project.builddir
> +        else:
> +            builddir = '%s-toaster-%d' % (self.be.builddir, bitbake.req.project.id)
>          oe_init = os.path.join(self.pokydirname, 'oe-init-build-env')
>          # init build environment
>          try:
>              custom_script = ToasterSetting.objects.get(name="CUSTOM_BUILD_INIT_SCRIPT").value
>              custom_script = custom_script.replace("%BUILDDIR%" ,builddir)
> -            self._shellcmd("bash -c 'source %s'" % (custom_script))
> +            self._shellcmd("bash -c 'source %s'" % (custom_script),env=git_env)
>          except ToasterSetting.DoesNotExist:
>              self._shellcmd("bash -c 'source %s %s'" % (oe_init, builddir),
> -                           self.be.sourcedir)
> +                           self.be.sourcedir,env=git_env)
>
>          # update bblayers.conf
> -        bblconfpath = os.path.join(builddir, "conf/toaster-bblayers.conf")
> -        with open(bblconfpath, 'w') as bblayers:
> -            bblayers.write('# line added by toaster build control\n'
> -                           'BBLAYERS = "%s"' % ' '.join(layers))
> -
> -        # write configuration file
> -        confpath = os.path.join(builddir, 'conf/toaster.conf')
> -        with open(confpath, 'w') as conf:
> -            for var in variables:
> -                conf.write('%s="%s"\n' % (var.name, var.value))
> -            conf.write('INHERIT+="toaster buildhistory"')
> +        if not is_merged_attr:
> +            bblconfpath = os.path.join(builddir, "conf/toaster-bblayers.conf")
> +            with open(bblconfpath, 'w') as bblayers:
> +                bblayers.write('# line added by toaster build control\n'
> +                               'BBLAYERS = "%s"' % ' '.join(layers))
> +
> +            # write configuration file
> +            confpath = os.path.join(builddir, 'conf/toaster.conf')
> +            with open(confpath, 'w') as conf:
> +                for var in variables:
> +                    conf.write('%s="%s"\n' % (var.name, var.value))
> +                conf.write('INHERIT+="toaster buildhistory"')
> + else:
> + # Append the Toaster-specific values directly to the
> bblayers.conf
> + bblconfpath = os.path.join(builddir,
> "conf/bblayers.conf")
> + bblconfpath_save = os.path.join(builddir,
> "conf/bblayers.conf.save")
> + shutil.copyfile(bblconfpath, bblconfpath_save)
> + with open(bblconfpath) as bblayers:
> + content = bblayers.readlines()
> + do_write = True
> + was_toaster = False
> + with open(bblconfpath,'w') as bblayers:
> + for line in content:
> + #line = line.strip('\n')
> + if 'TOASTER_CONFIG_PROLOG' in line:
> + do_write = False
> + was_toaster = True
> + elif 'TOASTER_CONFIG_EPILOG' in line:
> + do_write = True
> + elif do_write:
> + bblayers.write(line)
> + if not was_toaster:
> + bblayers.write('\n')
> + bblayers.write('#=== TOASTER_CONFIG_PROLOG ===\n')
> + bblayers.write('BBLAYERS = "\\\n')
> + for layer in layers:
> + bblayers.write(' %s \\\n' % layer)
> + bblayers.write(' "\n')
> + bblayers.write('#=== TOASTER_CONFIG_EPILOG ===\n')
> + # Append the Toaster-specific values directly to the
> local.conf
> + bbconfpath = os.path.join(builddir, "conf/local.conf")
> + bbconfpath_save = os.path.join(builddir,
> "conf/local.conf.save")
> + shutil.copyfile(bbconfpath, bbconfpath_save)
> + with open(bbconfpath) as f:
> + content = f.readlines()
> + do_write = True
> + was_toaster = False
> + with open(bbconfpath,'w') as conf:
> + for line in content:
> + #line = line.strip('\n')
> + if 'TOASTER_CONFIG_PROLOG' in line:
> + do_write = False
> + was_toaster = True
> + elif 'TOASTER_CONFIG_EPILOG' in line:
> + do_write = True
> + elif do_write:
> + conf.write(line)
> + if not was_toaster:
> + conf.write('\n')
> + conf.write('#=== TOASTER_CONFIG_PROLOG ===\n')
> + for var in variables:
> + if (not var.name.startswith("INTERNAL_")) and
> (not var.name == "BBLAYERS"):
> + conf.write('%s="%s"\n' % (var.name,
> var.value))
> + conf.write('#=== TOASTER_CONFIG_EPILOG ===\n')
> +
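
The bblayers.conf and local.conf rewrites above are the same operation twice: copy every line outside the TOASTER_CONFIG_PROLOG/EPILOG markers, then append a regenerated marker block. Condensed into one helper it looks like this (a sketch; the patch keeps the two inline copies and also writes .save backups first):

  def replace_toaster_block(path, new_lines):
      # Drop any previous marker block, keep everything else.
      with open(path) as f:
          content = f.readlines()
      kept, outside = [], True
      for line in content:
          if 'TOASTER_CONFIG_PROLOG' in line:
              outside = False
          elif 'TOASTER_CONFIG_EPILOG' in line:
              outside = True
          elif outside:
              kept.append(line)
      # Regenerate the block at the end of the file.
      kept.append('#=== TOASTER_CONFIG_PROLOG ===\n')
      kept.extend(new_lines)
      kept.append('#=== TOASTER_CONFIG_EPILOG ===\n')
      with open(path, 'w') as f:
          f.writelines(kept)
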
> + # If 'target' is just the project preparation target, then
> we are done
> + for target in targets:
> + if "_PROJECT_PREPARE_" == target.target:
> + logger.debug('localhostbecontroller: Project has
> been prepared. Done.')
> + # Update the Build Request and release the build
> environment
> + bitbake.req.state = BuildRequest.REQ_COMPLETED
> + bitbake.req.save()
> + self.be.lock = BuildEnvironment.LOCK_FREE
> + self.be.save()
> + # Close the project build and progress bar
> + bitbake.req.build.outcome = Build.SUCCEEDED
> + bitbake.req.build.save()
> + # Update the project status
> +
> bitbake.req.project.set_variable(Project.PROJECT_SPECIFIC_STATUS,Project.PROJECT_SPECIFIC_CLONING_SUCCESS)
> + signal_runbuilds()
> + return
>
> # clean the Toaster to build environment
> env_clean = 'unset BBPATH;' # clean BBPATH for <= YP-2.4.0
>
> - # run bitbake server from the clone
> + # run bitbake server from the clone if available
> + # otherwise pick it from the PATH
>          bitbake = os.path.join(self.pokydirname, 'bitbake', 'bin', 'bitbake')
> +        if not os.path.exists(bitbake):
> +            logger.info("Bitbake not available under %s, will try to use it from PATH" %
> +                        self.pokydirname)
> +            for path in os.environ["PATH"].split(os.pathsep):
> +                if os.path.exists(os.path.join(path, 'bitbake')):
> +                    bitbake = os.path.join(path, 'bitbake')
> +                    break
> +            else:
> +                logger.error("Looks like Bitbake is not available, please fix your environment")
> +
>          toasterlayers = os.path.join(builddir,"conf/toaster-bblayers.conf")
> -        self._shellcmd('%s bash -c \"source %s %s; BITBAKE_UI="knotty" %s --read %s --read %s '
> -                       '--server-only -B 0.0.0.0:0\"' % (env_clean, oe_init,
> -                       builddir, bitbake, confpath, toasterlayers), self.be.sourcedir)
> +        if not is_merged_attr:
> +            self._shellcmd('%s bash -c \"source %s %s; BITBAKE_UI="knotty" %s --read %s --read %s '
> +                           '--server-only -B 0.0.0.0:0\"' % (env_clean, oe_init,
> +                           builddir, bitbake, confpath, toasterlayers), self.be.sourcedir)
> +        else:
> +            self._shellcmd('%s bash -c \"source %s %s; BITBAKE_UI="knotty" %s '
> +                           '--server-only -B 0.0.0.0:0\"' % (env_clean, oe_init,
> +                           builddir, bitbake), self.be.sourcedir)
>
> # read port number from bitbake.lock
> self.be.bbport = -1
> @@ -390,12 +514,20 @@ class LocalhostBEController(BuildEnvironmentController):
>          log = os.path.join(builddir, 'toaster_ui.log')
>          local_bitbake = os.path.join(os.path.dirname(os.getenv('BBBASEDIR')), 'bitbake')
> -        self._shellcmd(['%s bash -c \"(TOASTER_BRBE="%s" BBSERVER="0.0.0.0:%s" '
> +        if not is_merged_attr:
> +            self._shellcmd(['%s bash -c \"(TOASTER_BRBE="%s" BBSERVER="0.0.0.0:%s" '
>                          '%s %s -u toasterui --read %s --read %s --token="" >>%s 2>&1;'
>                          'BITBAKE_UI="knotty" BBSERVER=0.0.0.0:%s %s -m)&\"' \
>                          % (env_clean, brbe, self.be.bbport, local_bitbake, bbtargets, confpath, toasterlayers, log, self.be.bbport, bitbake,)],
>                          builddir, nowait=True)
> +        else:
> +            self._shellcmd(['%s bash -c \"(TOASTER_BRBE="%s" BBSERVER="0.0.0.0:%s" '
> +                           '%s %s -u toasterui --token="" >>%s 2>&1;'
> +                           'BITBAKE_UI="knotty" BBSERVER=0.0.0.0:%s %s -m)&\"' \
> +                           % (env_clean, brbe, self.be.bbport, local_bitbake, bbtargets, log,
> +                              self.be.bbport, bitbake,)],
> +                           builddir, nowait=True)
>
>          logger.debug('localhostbecontroller: Build launched, exiting. '
>                       'Follow build logs at %s' % log)
> diff --git a/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py b/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py
> index 582114a..14298d9 100644
> --- a/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py
> +++ b/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py
> @@ -74,8 +74,9 @@ class Command(BaseCommand):
>                  print("Loading default settings")
>                  call_command("loaddata", "settings")
>                  template_conf = os.environ.get("TEMPLATECONF", "")
> +            custom_xml_only = os.environ.get("CUSTOM_XML_ONLY")
> -            if ToasterSetting.objects.filter(name='CUSTOM_XML_ONLY').count() > 0:
> +            if ToasterSetting.objects.filter(name='CUSTOM_XML_ONLY').count() > 0 or (not custom_xml_only == None):
>                  # only use the custom settings
>                  pass
>              elif "poky" in template_conf:
> @@ -107,7 +108,10 @@ class Command(BaseCommand):
> action="ignore",
> message="^.*No fixture named.*$")
> print("Importing custom settings if
> present")
> - call_command("loaddata", "custom")
> + try:
> + call_command("loaddata", "custom")
> + except:
> + print("NOTE: optional fixture
> 'custom' not found")
> # we run lsupdates after config update
> print("\nFetching information from the layer
> index, " diff --git
> a/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py
> b/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py
> index 791e53e..6a55dd4 100644 ---
> a/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py +++
> b/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py @@
> -49,7 +49,7 @@ class Command(BaseCommand): # we could not find a BEC;
> postpone the BR br.state = BuildRequest.REQ_QUEUED br.save()
> - logger.debug("runbuilds: No build env")
> + logger.debug("runbuilds: No build env (%s)" % e)
> return
>
> logger.info("runbuilds: starting build %s, environment
> %s" % diff --git a/bitbake/lib/toaster/orm/fixtures/oe-core.xml
> b/bitbake/lib/toaster/orm/fixtures/oe-core.xml index 00720c3..fec93ab
> 100644 --- a/bitbake/lib/toaster/orm/fixtures/oe-core.xml
> +++ b/bitbake/lib/toaster/orm/fixtures/oe-core.xml
> @@ -8,9 +8,9 @@
>
> <!-- Bitbake versions which correspond to the metadata release -->
> <object model="orm.bitbakeversion" pk="1">
> - <field type="CharField" name="name">rocko</field>
> + <field type="CharField" name="name">sumo</field>
> <field type="CharField"
> name="giturl">git://git.openembedded.org/bitbake</field>
> - <field type="CharField" name="branch">1.36</field>
> + <field type="CharField" name="branch">1.38</field>
> </object>
> <object model="orm.bitbakeversion" pk="2">
> <field type="CharField" name="name">HEAD</field>
> @@ -22,14 +22,19 @@
> <field type="CharField"
> name="giturl">git://git.openembedded.org/bitbake</field> <field
> type="CharField" name="branch">master</field> </object>
> + <object model="orm.bitbakeversion" pk="4">
> + <field type="CharField" name="name">thud</field>
> + <field type="CharField"
> name="giturl">git://git.openembedded.org/bitbake</field>
> + <field type="CharField" name="branch">1.40</field>
> + </object>
>
> <!-- Releases available -->
> <object model="orm.release" pk="1">
> - <field type="CharField" name="name">rocko</field>
> - <field type="CharField" name="description">Openembedded
> Rocko</field>
> + <field type="CharField" name="name">sumo</field>
> + <field type="CharField" name="description">Openembedded
> Sumo</field> <field rel="ManyToOneRel" to="orm.bitbakeversion"
> name="bitbake_version">1</field>
> - <field type="CharField" name="branch_name">rocko</field>
> - <field type="TextField" name="helptext">Toaster will run your
> builds using the tip of the <a
> href=\"http://cgit.openembedded.org/openembedded-core/log/?h=rocko\">OpenEmbedded
> Rocko</a> branch.</field>
> + <field type="CharField" name="branch_name">sumo</field>
> + <field type="TextField" name="helptext">Toaster will run your
> builds using the tip of the <a
> href=\"http://cgit.openembedded.org/openembedded-core/log/?h=sumo\">OpenEmbedded
> Sumo</a> branch.</field> </object> <object model="orm.release"
> pk="2"> <field type="CharField" name="name">local</field> @@ -45,6
> +50,13 @@ <field type="CharField" name="branch_name">master</field>
> <field type="TextField" name="helptext">Toaster will run your
> builds using the tip of the <a
> href=\"http://cgit.openembedded.org/openembedded-core/log/\">OpenEmbedded
> master</a> branch.</field> </object>
> + <object model="orm.release" pk="4">
> + <field type="CharField" name="name">thud</field>
> + <field type="CharField" name="description">Openembedded
> Rocko</field>
> + <field rel="ManyToOneRel" to="orm.bitbakeversion"
> name="bitbake_version">1</field>
> + <field type="CharField" name="branch_name">thud</field>
> + <field type="TextField" name="helptext">Toaster will run your
> builds using the tip of the <a
> href=\"http://cgit.openembedded.org/openembedded-core/log/?h=thud\">OpenEmbedded
> Thud</a> branch.</field>
> + </object>
>
> <!-- Default layers for each release -->
> <object model="orm.releasedefaultlayer" pk="1">
> @@ -59,6 +71,10 @@
> <field rel="ManyToOneRel" to="orm.release"
> name="release">3</field> <field type="CharField"
> name="layer_name">openembedded-core</field> </object>
> + <object model="orm.releasedefaultlayer" pk="4">
> + <field rel="ManyToOneRel" to="orm.release"
> name="release">4</field>
> + <field type="CharField"
> name="layer_name">openembedded-core</field>
> + </object>
>
>
> <!-- Layer for the Local release -->
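
Once these fixtures are loaded, the release/bitbakeversion pairing can be
spot-checked from a Django shell; a quick sketch (model names taken from the
fixture entries above):

    from orm.models import Release

    for release in Release.objects.all():
        bbv = release.bitbake_version
        print("%s -> bitbake %s (branch %s)" % (release.name, bbv.name, bbv.branch))
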
> diff --git a/bitbake/lib/toaster/orm/fixtures/poky.xml b/bitbake/lib/toaster/orm/fixtures/poky.xml
> index 2f39d77..fb9a771 100644
> --- a/bitbake/lib/toaster/orm/fixtures/poky.xml
> +++ b/bitbake/lib/toaster/orm/fixtures/poky.xml
> @@ -8,9 +8,9 @@
>
> <!-- Bitbake versions which correspond to the metadata release -->
> <object model="orm.bitbakeversion" pk="1">
> - <field type="CharField" name="name">rocko</field>
> + <field type="CharField" name="name">sumo</field>
> <field type="CharField"
> name="giturl">git://git.yoctoproject.org/poky</field>
> - <field type="CharField" name="branch">rocko</field>
> + <field type="CharField" name="branch">sumo</field>
> <field type="CharField" name="dirpath">bitbake</field>
> </object>
> <object model="orm.bitbakeversion" pk="2">
> @@ -25,15 +25,21 @@
> <field type="CharField" name="branch">master</field>
> <field type="CharField" name="dirpath">bitbake</field>
> </object>
> + <object model="orm.bitbakeversion" pk="4">
> + <field type="CharField" name="name">thud</field>
> + <field type="CharField"
> name="giturl">git://git.yoctoproject.org/poky</field>
> + <field type="CharField" name="branch">thud</field>
> + <field type="CharField" name="dirpath">bitbake</field>
> + </object>
>
>
> <!-- Releases available -->
> <object model="orm.release" pk="1">
> - <field type="CharField" name="name">rocko</field>
> - <field type="CharField" name="description">Yocto Project 2.4
> "Rocko"</field>
> + <field type="CharField" name="name">sumo</field>
> + <field type="CharField" name="description">Yocto Project 2.5
> "Sumo"</field> <field rel="ManyToOneRel" to="orm.bitbakeversion"
> name="bitbake_version">1</field>
> - <field type="CharField" name="branch_name">rocko</field>
> - <field type="TextField" name="helptext">Toaster will run your
> builds using the tip of the <a
> href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=rocko">Yocto
> Project Rocko branch</a>.</field>
> + <field type="CharField" name="branch_name">sumo</field>
> + <field type="TextField" name="helptext">Toaster will run your
> builds using the tip of the <a
> href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=sumo">Yocto
> Project Sumo branch</a>.</field> </object> <object
> model="orm.release" pk="2"> <field type="CharField"
> name="name">local</field> @@ -49,6 +55,13 @@ <field type="CharField"
> name="branch_name">master</field> <field type="TextField"
> name="helptext">Toaster will run your builds using the tip of the
> <a
> href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/">Yocto
> Project Master branch</a>.</field> </object>
> + <object model="orm.release" pk="4">
> + <field type="CharField" name="name">rocko</field>
> + <field type="CharField" name="description">Yocto Project 2.6
> "Thud"</field>
> + <field rel="ManyToOneRel" to="orm.bitbakeversion"
> name="bitbake_version">1</field>
> + <field type="CharField" name="branch_name">thud</field>
> + <field type="TextField" name="helptext">Toaster will run your
> builds using the tip of the <a
> href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=thud">Yocto
> Project Thud branch</a>.</field>
> + </object>
>
> <!-- Default project layers for each release -->
> <object model="orm.releasedefaultlayer" pk="1">
> @@ -87,6 +100,18 @@
> <field rel="ManyToOneRel" to="orm.release"
> name="release">3</field> <field type="CharField"
> name="layer_name">meta-yocto-bsp</field> </object>
> + <object model="orm.releasedefaultlayer" pk="10">
> + <field rel="ManyToOneRel" to="orm.release"
> name="release">4</field>
> + <field type="CharField"
> name="layer_name">openembedded-core</field>
> + </object>
> + <object model="orm.releasedefaultlayer" pk="11">
> + <field rel="ManyToOneRel" to="orm.release"
> name="release">4</field>
> + <field type="CharField" name="layer_name">meta-poky</field>
> + </object>
> + <object model="orm.releasedefaultlayer" pk="12">
> + <field rel="ManyToOneRel" to="orm.release"
> name="release">4</field>
> + <field type="CharField" name="layer_name">meta-yocto-bsp</field>
> + </object>
>
> <!-- Default layers provided by poky
> openembedded-core
> @@ -105,7 +130,7 @@
> <field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
> <field type="IntegerField" name="layer_source">0</field>
> <field rel="ManyToOneRel" to="orm.release"
> name="release">1</field>
> - <field type="CharField" name="branch">rocko</field>
> + <field type="CharField" name="branch">sumo</field>
> <field type="CharField" name="dirpath">meta</field>
> </object>
> <object model="orm.layer_version" pk="2">
> @@ -123,6 +148,13 @@
> <field type="CharField" name="branch">master</field>
> <field type="CharField" name="dirpath">meta</field>
> </object>
> + <object model="orm.layer_version" pk="4">
> + <field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
> + <field type="IntegerField" name="layer_source">0</field>
> + <field rel="ManyToOneRel" to="orm.release"
> name="release">4</field>
> + <field type="CharField" name="branch">rocko</field>
> + <field type="CharField" name="dirpath">meta</field>
> + </object>
>
> <object model="orm.layer" pk="2">
> <field type="CharField" name="name">meta-poky</field>
> @@ -132,14 +164,14 @@
> <field type="CharField"
> name="vcs_web_tree_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
> <field type="CharField"
> name="vcs_web_file_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
> </object>
> - <object model="orm.layer_version" pk="4">
> + <object model="orm.layer_version" pk="5">
> <field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
> <field type="IntegerField" name="layer_source">0</field>
> <field rel="ManyToOneRel" to="orm.release"
> name="release">1</field>
> - <field type="CharField" name="branch">rocko</field>
> + <field type="CharField" name="branch">sumo</field>
> <field type="CharField" name="dirpath">meta-poky</field>
> </object>
> - <object model="orm.layer_version" pk="5">
> + <object model="orm.layer_version" pk="6">
> <field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
> <field type="IntegerField" name="layer_source">0</field>
> <field rel="ManyToOneRel" to="orm.release"
> name="release">2</field> @@ -147,13 +179,20 @@
> <field type="CharField" name="commit">HEAD</field>
> <field type="CharField" name="dirpath">meta-poky</field>
> </object>
> - <object model="orm.layer_version" pk="6">
> + <object model="orm.layer_version" pk="7">
> <field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
> <field type="IntegerField" name="layer_source">0</field>
> <field rel="ManyToOneRel" to="orm.release"
> name="release">3</field> <field type="CharField"
> name="branch">master</field> <field type="CharField"
> name="dirpath">meta-poky</field> </object>
> + <object model="orm.layer_version" pk="8">
> + <field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
> + <field type="IntegerField" name="layer_source">0</field>
> + <field rel="ManyToOneRel" to="orm.release"
> name="release">4</field>
> + <field type="CharField" name="branch">rocko</field>
> + <field type="CharField" name="dirpath">meta-poky</field>
> + </object>
>
> <object model="orm.layer" pk="3">
> <field type="CharField" name="name">meta-yocto-bsp</field>
> @@ -163,14 +202,14 @@
> <field type="CharField"
> name="vcs_web_tree_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
> <field type="CharField"
> name="vcs_web_file_base_url">http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%</field>
> </object>
> - <object model="orm.layer_version" pk="7">
> + <object model="orm.layer_version" pk="9">
> <field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
> <field type="IntegerField" name="layer_source">0</field>
> <field rel="ManyToOneRel" to="orm.release"
> name="release">1</field>
> - <field type="CharField" name="branch">rocko</field>
> + <field type="CharField" name="branch">sumo</field>
> <field type="CharField" name="dirpath">meta-yocto-bsp</field>
> </object>
> - <object model="orm.layer_version" pk="8">
> + <object model="orm.layer_version" pk="10">
> <field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
> <field type="IntegerField" name="layer_source">0</field>
> <field rel="ManyToOneRel" to="orm.release"
> name="release">2</field> @@ -178,11 +217,18 @@
> <field type="CharField" name="commit">HEAD</field>
> <field type="CharField" name="dirpath">meta-yocto-bsp</field>
> </object>
> - <object model="orm.layer_version" pk="9">
> + <object model="orm.layer_version" pk="11">
> <field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
> <field type="IntegerField" name="layer_source">0</field>
> <field rel="ManyToOneRel" to="orm.release"
> name="release">3</field> <field type="CharField"
> name="branch">master</field> <field type="CharField"
> name="dirpath">meta-yocto-bsp</field> </object>
> + <object model="orm.layer_version" pk="12">
> + <field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
> + <field type="IntegerField" name="layer_source">0</field>
> + <field rel="ManyToOneRel" to="orm.release"
> name="release">4</field>
> + <field type="CharField" name="branch">rocko</field>
> + <field type="CharField" name="dirpath">meta-yocto-bsp</field>
> + </object>
> </django-objects>
> diff --git a/bitbake/lib/toaster/orm/management/commands/lsupdates.py b/bitbake/lib/toaster/orm/management/commands/lsupdates.py
> index efc6b3a..66114ff 100644
> --- a/bitbake/lib/toaster/orm/management/commands/lsupdates.py
> +++ b/bitbake/lib/toaster/orm/management/commands/lsupdates.py
> @@ -29,7 +29,6 @@ from orm.models import ToasterSetting
>  import os
>  import sys
>
> -import json
> import logging
> import threading
> import time
> @@ -37,6 +36,18 @@ logger = logging.getLogger("toaster")
>
> DEFAULT_LAYERINDEX_SERVER =
> "http://layers.openembedded.org/layerindex/api/"
> +# Add path to bitbake modules for layerindexlib
> +# lib/toaster/orm/management/commands/lsupdates.py (abspath)
> +# lib/toaster/orm/management/commands (dirname)
> +# lib/toaster/orm/management (dirname)
> +# lib/toaster/orm (dirname)
> +# lib/toaster/ (dirname)
> +# lib/ (dirname)
> +path = os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))))
> +sys.path.insert(0, path)
> +
> +import layerindexlib
> +
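
The five nested dirname() calls climb from .../orm/management/commands up to
lib/, per the comment above; a loop makes the count easier to audit (sketch):

    import os

    path = os.path.abspath(__file__)
    for _ in range(5):    # commands -> management -> orm -> toaster -> lib
        path = os.path.dirname(path)
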
>
> class Spinner(threading.Thread):
> """ A simple progress spinner to indicate download/parsing is
> happening""" @@ -86,45 +97,6 @@ class Command(BaseCommand):
> self.apiurl = ToasterSetting.objects.get(name =
> 'CUSTOM_LAYERINDEX_SERVER').value
> assert self.apiurl is not None
> - try:
> - from urllib.request import urlopen, URLError
> - from urllib.parse import urlparse
> - except ImportError:
> - from urllib2 import urlopen, URLError
> - from urlparse import urlparse
> -
> - proxy_settings = os.environ.get("http_proxy", None)
> -
> - def _get_json_response(apiurl=None):
> - if None == apiurl:
> - apiurl=self.apiurl
> - http_progress = Spinner()
> - http_progress.start()
> -
> - _parsedurl = urlparse(apiurl)
> - path = _parsedurl.path
> -
> - # logger.debug("Fetching %s", apiurl)
> - try:
> - res = urlopen(apiurl)
> - except URLError as e:
> - raise Exception("Failed to read %s: %s" % (path,
> e.reason)) -
> - parsed = json.loads(res.read().decode('utf-8'))
> -
> - http_progress.stop()
> - return parsed
> -
> - # verify we can get the basic api
> - try:
> - apilinks = _get_json_response()
> - except Exception as e:
> - import traceback
> - if proxy_settings is not None:
> - logger.info("EE: Using proxy %s" % proxy_settings)
> - logger.warning("EE: could not connect to %s, skipping
> update:"
> - "%s\n%s" % (self.apiurl, e,
> traceback.format_exc()))
> - return
>
>          # update branches; only those that we already have names listed in the
>          # Releases table
> @@ -133,112 +105,118 @@
> if len(whitelist_branch_names) == 0:
> raise Exception("Failed to make list of branches to
> fetch")
> - logger.info("Fetching metadata releases for %s",
> + logger.info("Fetching metadata for %s",
> " ".join(whitelist_branch_names))
>
> - branches_info = _get_json_response(apilinks['branches'] +
> - "?filter=name:%s"
> - %
> "OR".join(whitelist_branch_names))
> +        # We require a non-empty bb.data, but we can fake it with a dictionary
> +        layerindex = layerindexlib.LayerIndex({"DUMMY" : "VALUE"})
> +
> + http_progress = Spinner()
> + http_progress.start()
> +
> + if whitelist_branch_names:
> + url_branches = ";branch=%s" %
> ','.join(whitelist_branch_names)
> + else:
> + url_branches = ""
> + layerindex.load_layerindex("%s%s" % (self.apiurl,
> url_branches)) +
> + http_progress.stop()
> +
> +        # We know we're only processing one entry, so we reference it here
> +        # (this is cheating...)
> +        index = layerindex.indexes[0]
>
> # Map the layer index branches to toaster releases
> li_branch_id_to_toaster_release = {}
>
> - total = len(branches_info)
> - for i, branch in enumerate(branches_info):
> - li_branch_id_to_toaster_release[branch['id']] = \
> - Release.objects.get(name=branch['name'])
> + logger.info("Processing releases")
> +
> + total = len(index.branches)
> + for i, id in enumerate(index.branches):
> + li_branch_id_to_toaster_release[id] = \
> + Release.objects.get(name=index.branches[id].name)
> self.mini_progress("Releases", i, total)
>
> # keep a track of the layerindex (li) id mappings so that
> # layer_versions can be created for these layers later on
> li_layer_id_to_toaster_layer_id = {}
>
> - logger.info("Fetching layers")
> -
> - layers_info = _get_json_response(apilinks['layerItems'])
> + logger.info("Processing layers")
>
> - total = len(layers_info)
> - for i, li in enumerate(layers_info):
> + total = len(index.layerItems)
> + for i, id in enumerate(index.layerItems):
> try:
> - l, created =
> Layer.objects.get_or_create(name=li['name'])
> - l.up_date = li['updated']
> - l.summary = li['summary']
> - l.description = li['description']
> + l, created =
> Layer.objects.get_or_create(name=index.layerItems[id].name)
> + l.up_date = index.layerItems[id].updated
> + l.summary = index.layerItems[id].summary
> + l.description = index.layerItems[id].description
>
> if created:
> # predefined layers in the fixtures (for example
> poky.xml) # always preempt the Layer Index for these values
> - l.vcs_url = li['vcs_url']
> - l.vcs_web_url = li['vcs_web_url']
> - l.vcs_web_tree_base_url =
> li['vcs_web_tree_base_url']
> - l.vcs_web_file_base_url =
> li['vcs_web_file_base_url']
> + l.vcs_url = index.layerItems[id].vcs_url
> + l.vcs_web_url = index.layerItems[id].vcs_web_url
> + l.vcs_web_tree_base_url =
> index.layerItems[id].vcs_web_tree_base_url
> +                l.vcs_web_file_base_url = index.layerItems[id].vcs_web_file_base_url
>                  l.save()
> except Layer.MultipleObjectsReturned:
> logger.info("Skipped %s as we found multiple layers
> and " "don't know which to update" %
> - li['name'])
> + index.layerItems[id].name)
>
> - li_layer_id_to_toaster_layer_id[li['id']] = l.pk
> + li_layer_id_to_toaster_layer_id[id] = l.pk
>
> self.mini_progress("layers", i, total)
>
> # update layer_versions
> - logger.info("Fetching layer versions")
> - layerbranches_info = _get_json_response(
> - apilinks['layerBranches'] + "?filter=branch__name:%s" %
> - "OR".join(whitelist_branch_names))
> + logger.info("Processing layer versions")
>
> # Map Layer index layer_branch object id to
> # layer_version toaster object id
> li_layer_branch_id_to_toaster_lv_id = {}
>
> - total = len(layerbranches_info)
> - for i, lbi in enumerate(layerbranches_info):
> + total = len(index.layerBranches)
> + for i, id in enumerate(index.layerBranches):
> # release as defined by toaster map to layerindex branch
> - release = li_branch_id_to_toaster_release[lbi['branch']]
> + release =
> li_branch_id_to_toaster_release[index.layerBranches[id].branch_id]
> try:
> lv, created = Layer_Version.objects.get_or_create(
> layer=Layer.objects.get(
> -
> pk=li_layer_id_to_toaster_layer_id[lbi['layer']]),
> +
> pk=li_layer_id_to_toaster_layer_id[index.layerBranches[id].layer_id]),
> release=release )
> except KeyError:
> logger.warning(
> "No such layerindex layer referenced by
> layerbranch %d" %
> - lbi['layer'])
> + index.layerBranches[id].layer_id)
> continue
>
> if created:
> - lv.release =
> li_branch_id_to_toaster_release[lbi['branch']]
> - lv.up_date = lbi['updated']
> - lv.commit = lbi['actual_branch']
> - lv.dirpath = lbi['vcs_subdir']
> + lv.release =
> li_branch_id_to_toaster_release[index.layerBranches[id].branch_id]
> + lv.up_date = index.layerBranches[id].updated
> + lv.commit = index.layerBranches[id].actual_branch
> + lv.dirpath = index.layerBranches[id].vcs_subdir
> lv.save()
>
> -            li_layer_branch_id_to_toaster_lv_id[lbi['id']] =\
> +            li_layer_branch_id_to_toaster_lv_id[index.layerBranches[id].id] =\
>                  lv.pk
>              self.mini_progress("layer versions", i, total)
>
> - logger.info("Fetching layer version dependencies")
> - # update layer dependencies
> - layerdependencies_info = _get_json_response(
> - apilinks['layerDependencies'] +
> - "?filter=layerbranch__branch__name:%s" %
> - "OR".join(whitelist_branch_names))
> + logger.info("Processing layer version dependencies")
>
> dependlist = {}
> - for ldi in layerdependencies_info:
> + for id in index.layerDependencies:
> try:
> lv = Layer_Version.objects.get(
> -                    pk=li_layer_branch_id_to_toaster_lv_id[ldi['layerbranch']])
> +                    pk=li_layer_branch_id_to_toaster_lv_id[index.layerDependencies[id].layerbranch_id])
>              except Layer_Version.DoesNotExist as e:
>                  continue
>
> if lv not in dependlist:
> dependlist[lv] = []
> try:
> - layer_id =
> li_layer_id_to_toaster_layer_id[ldi['dependency']]
> + layer_id =
> li_layer_id_to_toaster_layer_id[index.layerDependencies[id].dependency_id]
> dependlist[lv].append(
> Layer_Version.objects.get(layer__pk=layer_id,
> @@ -247,7 +225,7 @@ class Command(BaseCommand):
> except Layer_Version.DoesNotExist:
> logger.warning("Cannot find layer version (ls:%s),"
> "up_id:%s lv:%s" %
> - (self, ldi['dependency'], lv))
> + (self,
> index.layerDependencies[id].dependency_id, lv))
> total = len(dependlist)
> for i, lv in enumerate(dependlist):
> @@ -258,73 +236,61 @@ class Command(BaseCommand):
> self.mini_progress("Layer version dependencies", i,
> total)
> # update Distros
> - logger.info("Fetching distro information")
> - distros_info = _get_json_response(
> - apilinks['distros'] +
> "?filter=layerbranch__branch__name:%s" %
> - "OR".join(whitelist_branch_names))
> + logger.info("Processing distro information")
>
> - total = len(distros_info)
> - for i, di in enumerate(distros_info):
> + total = len(index.distros)
> + for i, id in enumerate(index.distros):
> distro, created = Distro.objects.get_or_create(
> - name=di['name'],
> + name=index.distros[id].name,
> layer_version=Layer_Version.objects.get(
> -
> pk=li_layer_branch_id_to_toaster_lv_id[di['layerbranch']]))
> - distro.up_date = di['updated']
> - distro.name = di['name']
> - distro.description = di['description']
> +
> pk=li_layer_branch_id_to_toaster_lv_id[index.distros[id].layerbranch_id]))
> + distro.up_date = index.distros[id].updated
> + distro.name = index.distros[id].name
> + distro.description = index.distros[id].description
> distro.save()
> self.mini_progress("distros", i, total)
>
> # update machines
> - logger.info("Fetching machine information")
> - machines_info = _get_json_response(
> - apilinks['machines'] +
> "?filter=layerbranch__branch__name:%s" %
> - "OR".join(whitelist_branch_names))
> + logger.info("Processing machine information")
>
> - total = len(machines_info)
> - for i, mi in enumerate(machines_info):
> + total = len(index.machines)
> + for i, id in enumerate(index.machines):
> mo, created = Machine.objects.get_or_create(
> - name=mi['name'],
> + name=index.machines[id].name,
> layer_version=Layer_Version.objects.get(
> -
> pk=li_layer_branch_id_to_toaster_lv_id[mi['layerbranch']]))
> - mo.up_date = mi['updated']
> - mo.name = mi['name']
> - mo.description = mi['description']
> +
> pk=li_layer_branch_id_to_toaster_lv_id[index.machines[id].layerbranch_id]))
> + mo.up_date = index.machines[id].updated
> + mo.name = index.machines[id].name
> + mo.description = index.machines[id].description
> mo.save()
> self.mini_progress("machines", i, total)
>
> # update recipes; paginate by layer version / layer branch
> - logger.info("Fetching recipe information")
> - recipes_info = _get_json_response(
> - apilinks['recipes'] +
> "?filter=layerbranch__branch__name:%s" %
> - "OR".join(whitelist_branch_names))
> + logger.info("Processing recipe information")
>
> - total = len(recipes_info)
> - for i, ri in enumerate(recipes_info):
> + total = len(index.recipes)
> + for i, id in enumerate(index.recipes):
> try:
> - lv_id =
> li_layer_branch_id_to_toaster_lv_id[ri['layerbranch']]
> + lv_id =
> li_layer_branch_id_to_toaster_lv_id[index.recipes[id].layerbranch_id]
> lv = Layer_Version.objects.get(pk=lv_id)
> ro, created = Recipe.objects.get_or_create(
> layer_version=lv,
> - name=ri['pn']
> + name=index.recipes[id].pn
> )
>
> ro.layer_version = lv
> - ro.up_date = ri['updated']
> - ro.name = ri['pn']
> - ro.version = ri['pv']
> - ro.summary = ri['summary']
> - ro.description = ri['description']
> - ro.section = ri['section']
> - ro.license = ri['license']
> - ro.homepage = ri['homepage']
> - ro.bugtracker = ri['bugtracker']
> - ro.file_path = ri['filepath'] + "/" + ri['filename']
> - if 'inherits' in ri:
> - ro.is_image = 'image' in ri['inherits'].split()
> - else: # workaround for old style layer index
> - ro.is_image = "-image-" in ri['pn']
> + ro.up_date = index.recipes[id].updated
> + ro.name = index.recipes[id].pn
> + ro.version = index.recipes[id].pv
> + ro.summary = index.recipes[id].summary
> + ro.description = index.recipes[id].description
> + ro.section = index.recipes[id].section
> + ro.license = index.recipes[id].license
> + ro.homepage = index.recipes[id].homepage
> + ro.bugtracker = index.recipes[id].bugtracker
> + ro.file_path = index.recipes[id].fullpath
> +                ro.is_image = 'image' in index.recipes[id].inherits.split()
>                  ro.save()
> except Exception as e:
> logger.warning("Failed saving recipe %s", e)
> diff --git a/bitbake/lib/toaster/orm/migrations/0018_project_specific.py b/bitbake/lib/toaster/orm/migrations/0018_project_specific.py
> new file mode 100644
> index 0000000..084ecad
> --- /dev/null
> +++ b/bitbake/lib/toaster/orm/migrations/0018_project_specific.py
> @@ -0,0 +1,28 @@
> +# -*- coding: utf-8 -*-
> +from __future__ import unicode_literals
> +
> +from django.db import migrations, models
> +
> +class Migration(migrations.Migration):
> +
> + dependencies = [
> + ('orm', '0017_distro_clone'),
> + ]
> +
> + operations = [
> + migrations.AddField(
> + model_name='Project',
> + name='builddir',
> + field=models.TextField(),
> + ),
> + migrations.AddField(
> + model_name='Project',
> + name='merged_attr',
> + field=models.BooleanField(default=False)
> + ),
> + migrations.AddField(
> + model_name='Build',
> + name='progress_item',
> + field=models.CharField(max_length=40)
> + ),
> + ]
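
AddField without an explicit default still migrates existing rows here, since
Django's schema editor appears to fall back to the empty string for char/text
fields; the explicit equivalent would be (sketch):

    migrations.AddField(
        model_name='Project',
        name='builddir',
        field=models.TextField(default=''),
    ),
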
> diff --git a/bitbake/lib/toaster/orm/models.py b/bitbake/lib/toaster/orm/models.py
> index 3a7dff8..7720290 100644
> --- a/bitbake/lib/toaster/orm/models.py
> +++ b/bitbake/lib/toaster/orm/models.py
> @@ -121,8 +121,15 @@ class ToasterSetting(models.Model):
>
>
> class ProjectManager(models.Manager):
> - def create_project(self, name, release):
> - if release is not None:
> + def create_project(self, name, release, existing_project=None):
> + if existing_project and (release is not None):
> + prj = existing_project
> + prj.bitbake_version = release.bitbake_version
> + prj.release = release
> + # Delete the previous ProjectLayer mappings
> + for pl in ProjectLayer.objects.filter(project=prj):
> + pl.delete()
> + elif release is not None:
> prj = self.model(name=name,
> bitbake_version=release.bitbake_version,
> release=release)
> @@ -130,15 +137,14 @@ class ProjectManager(models.Manager):
> prj = self.model(name=name,
> bitbake_version=None,
> release=None)
> -
> prj.save()
>
> for defaultconf in ToasterSetting.objects.filter(
> name__startswith="DEFCONF_"):
> name = defaultconf.name[8:]
> - ProjectVariable.objects.create(project=prj,
> - name=name,
> - value=defaultconf.value)
> + pv,create =
> ProjectVariable.objects.get_or_create(project=prj,name=name)
> + pv.value = defaultconf.value
> + pv.save()
>
> if release is None:
> return prj
> @@ -197,6 +203,11 @@ class Project(models.Model):
> user_id = models.IntegerField(null=True)
> objects = ProjectManager()
>
> + # build directory override (e.g. imported)
> + builddir = models.TextField()
> + # merge the Toaster configure attributes directly into the
> standard conf files
> + merged_attr = models.BooleanField(default=False)
> +
> # set to True for the project which is the default container
> # for builds initiated by the command line etc.
> is_default= models.BooleanField(default=False)
> @@ -305,6 +316,15 @@ class Project(models.Model):
> return layer_versions
>
>
> +    def get_default_image_recipe(self):
> +        try:
> +            return self.projectvariable_set.get(name="DEFAULT_IMAGE").value
> +        except (ProjectVariable.DoesNotExist,IndexError):
> +            return None
> +
> + def get_is_new(self):
> + return self.get_variable(Project.PROJECT_SPECIFIC_ISNEW)
> +
> def get_available_machines(self):
> """ Returns QuerySet of all Machines which are provided by
> the Layers currently added to the Project """
> @@ -353,6 +373,32 @@ class Project(models.Model):
>
> return queryset
>
> + # Project Specific status management
> + PROJECT_SPECIFIC_STATUS = 'INTERNAL_PROJECT_SPECIFIC_STATUS'
> + PROJECT_SPECIFIC_CALLBACK = 'INTERNAL_PROJECT_SPECIFIC_CALLBACK'
> + PROJECT_SPECIFIC_ISNEW = 'INTERNAL_PROJECT_SPECIFIC_ISNEW'
> + PROJECT_SPECIFIC_DEFAULTIMAGE = 'PROJECT_SPECIFIC_DEFAULTIMAGE'
> + PROJECT_SPECIFIC_NONE = ''
> + PROJECT_SPECIFIC_NEW = '1'
> + PROJECT_SPECIFIC_EDIT = '2'
> + PROJECT_SPECIFIC_CLONING = '3'
> + PROJECT_SPECIFIC_CLONING_SUCCESS = '4'
> + PROJECT_SPECIFIC_CLONING_FAIL = '5'
> +
> + def get_variable(self,variable,default_value = ''):
> + try:
> + return self.projectvariable_set.get(name=variable).value
> + except (ProjectVariable.DoesNotExist,IndexError):
> + return default_value
> +
> + def set_variable(self,variable,value):
> + pv,create = ProjectVariable.objects.get_or_create(project =
> self, name = variable)
> + pv.value = value
> + pv.save()
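
Usage sketch for the new per-project variable helpers (status values taken
from the constants defined above):

    prj.set_variable(Project.PROJECT_SPECIFIC_STATUS, Project.PROJECT_SPECIFIC_NEW)
    if prj.get_variable(Project.PROJECT_SPECIFIC_STATUS) == Project.PROJECT_SPECIFIC_NEW:
        print("project %s is newly created" % prj.name)
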
> +
> +    def get_default_image(self):
> +        return self.get_variable(Project.PROJECT_SPECIFIC_DEFAULTIMAGE)
> +
>      def schedule_build(self):
>
> from bldcontrol.models import BuildRequest, BRTarget, BRLayer
> @@ -459,6 +505,9 @@ class Build(models.Model):
> # number of repos cloned so far for this build (default off)
> repos_cloned = models.IntegerField(default=1)
>
> + # Hint on current progress item
> + progress_item = models.CharField(max_length=40)
> +
> @staticmethod
> def get_recent(project=None):
> """
> @@ -1663,6 +1712,9 @@ class CustomImageRecipe(Recipe):
>
> path_schema_two = self.base_recipe.file_path
>
> + path_schema_three = "%s/%s" %
> (self.base_recipe.layer_version.layer.local_source_dir,
> + self.base_recipe.file_path)
> +
> if os.path.exists(path_schema_one):
> return path_schema_one
>
> @@ -1670,6 +1722,10 @@ class CustomImageRecipe(Recipe):
> if os.path.exists(path_schema_two):
> return path_schema_two
>
> + # Or a local path if all layers are local
> + if os.path.exists(path_schema_three):
> + return path_schema_three
> +
> return None
>
> def generate_recipe_file_contents(self):
> @@ -1694,8 +1750,8 @@ class CustomImageRecipe(Recipe):
> if base_recipe_path:
> base_recipe = open(base_recipe_path, 'r').read()
> else:
> - raise IOError("Based on recipe file not found: %s" %
> - base_recipe_path)
> + # Pass back None to trigger error message to user
> + return None
>
>          # Add a special case for when the recipe we have based a custom image
>          # recipe on requires another recipe.
> @@ -1821,7 +1877,7 @@ class Distro(models.Model):
> description = models.CharField(max_length=255)
>
> def get_vcs_distro_file_link_url(self):
> - path = self.name+'.conf'
> + path = 'conf/distro/%s.conf' % self.name
> return self.layer_version.get_vcs_file_link_url(path)
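
With the path now rooted at conf/distro, the link resolves against the
layer's vcs_web_file_base_url pattern; illustrative values only:

    distro = Distro.objects.get(name='poky')
    url = distro.get_vcs_distro_file_link_url()
    # e.g. http://git.yoctoproject.org/.../tree/conf/distro/poky.conf?h=sumo
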
>
> def __unicode__(self):
> diff --git a/bitbake/lib/toaster/toastergui/api.py b/bitbake/lib/toaster/toastergui/api.py
> index ab6ba69..564d595 100644
> --- a/bitbake/lib/toaster/toastergui/api.py
> +++ b/bitbake/lib/toaster/toastergui/api.py
> @@ -22,7 +22,9 @@ import os
> import re
> import logging
> import json
> +import subprocess
> from collections import Counter
> +from shutil import copyfile
>
> from orm.models import Project, ProjectTarget, Build, Layer_Version
>  from orm.models import LayerVersionDependency, LayerSource, ProjectLayer
> @@ -38,6 +40,18 @@ from django.core.urlresolvers import reverse
>  from django.db.models import Q, F
>  from django.db import Error
>  from toastergui.templatetags.projecttags import filtered_filesizeformat
> +from django.utils import timezone
> +import pytz
> +
> +# development/debugging support
> +verbose = 2
> +def _log(msg):
> + if 1 == verbose:
> + print(msg)
> + elif 2 == verbose:
> + f1=open('/tmp/toaster.log', 'a')
> + f1.write("|" + msg + "|\n" )
> + f1.close()
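
Debug-only helper, but verbose is shipped as 2, so every call appends to
/tmp/toaster.log. A sketch that defaults it off and closes the file reliably:

    verbose = 0   # 0 = silent, 1 = stdout, 2 = append to /tmp/toaster.log

    def _log(msg):
        if verbose == 1:
            print(msg)
        elif verbose == 2:
            with open('/tmp/toaster.log', 'a') as f1:
                f1.write("|%s|\n" % msg)
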
>
> logger = logging.getLogger("toaster")
>
> @@ -137,6 +151,130 @@ class XhrBuildRequest(View):
> return response
>
>
> +class XhrProjectUpdate(View):
> +
> + def get(self, request, *args, **kwargs):
> + return HttpResponse()
> +
> + def post(self, request, *args, **kwargs):
> + """
> + Project Update
> +
> + Entry point: /xhr_projectupdate/<project_id>
> + Method: POST
> +
> + Args:
> + pid: pid of project to update
> +
> + Returns:
> + {"error": "ok"}
> + or
> + {"error": <error message>}
> + """
> +
> + project = Project.objects.get(pk=kwargs['pid'])
> +
> logger.debug("ProjectUpdateCallback:project.pk=%d,project.builddir=%s"
> % (project.pk,project.builddir)) +
> + if 'do_update' in request.POST:
> +
> + # Extract any default image recipe
> + if 'default_image' in request.POST:
> +                project.set_variable(Project.PROJECT_SPECIFIC_DEFAULTIMAGE,str(request.POST['default_image']))
> +            else:
> +                project.set_variable(Project.PROJECT_SPECIFIC_DEFAULTIMAGE,'')
> +
> +            logger.debug("ProjectUpdateCallback:Chain to the build request")
> +
> + # Chain to the build request
> + xhrBuildRequest = XhrBuildRequest()
> + return xhrBuildRequest.post(request, *args, **kwargs)
> +
> + logger.warning("ERROR:XhrProjectUpdate")
> + response = HttpResponse()
> + response.status_code = 500
> + return response
> +
> +class XhrSetDefaultImageUrl(View):
> +
> + def get(self, request, *args, **kwargs):
> + return HttpResponse()
> +
> + def post(self, request, *args, **kwargs):
> + """
> + Project Update
> +
> + Entry point: /xhr_setdefaultimage/<project_id>
> + Method: POST
> +
> + Args:
> + pid: pid of project to update default image
> +
> + Returns:
> + {"error": "ok"}
> + or
> + {"error": <error message>}
> + """
> +
> + project = Project.objects.get(pk=kwargs['pid'])
> + logger.debug("XhrSetDefaultImageUrl:project.pk=%d" %
> (project.pk)) +
> + # set any default image recipe
> + if 'targets' in request.POST:
> + default_target = str(request.POST['targets'])
> +            project.set_variable(Project.PROJECT_SPECIFIC_DEFAULTIMAGE,default_target)
> +            logger.debug("XhrSetDefaultImageUrl,project.pk=%d,project.builddir=%s" % (project.pk,project.builddir))
> + return error_response('ok')
> +
> + logger.warning("ERROR:XhrSetDefaultImageUrl")
> + response = HttpResponse()
> + response.status_code = 500
> + return response
> +
> +
> +#
> +# Layer Management
> +#
> +# Rules for 'local_source_dir' layers
> +# * Layers must have a unique name in the Layers table
> +# * A 'local_source_dir' layer is supposed to be shared
> +# by all projects that use it, so that it can have the
> +# same logical name
> +# * Each project that uses a layer will have its own
> +# LayerVersion and Project Layer for it
> +# * During the Project delete process, when the last
> +# LayerVersion for a 'local_source_dir' layer is deleted
> +# then the Layer record is deleted to remove orphans
> +#
> +
> +def scan_layer_content(layer,layer_version):
> +    # if this is a local layer directory, we can immediately scan its content
> +    if layer.local_source_dir:
> +        try:
> +            # recipes-*/*/*.bb
> +            cmd = '%s %s' % ('ls', os.path.join(layer.local_source_dir,'recipes-*/*/*.bb'))
> +            recipes_list = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read()
> + recipes_list = recipes_list.decode("utf-8").strip()
> + if recipes_list and 'No such' not in recipes_list:
> + for recipe in recipes_list.split('\n'):
> + recipe_path = recipe[recipe.rfind('recipes-'):]
> + recipe_name =
> recipe[recipe.rfind('/')+1:].replace('.bb','')
> + recipe_ver = recipe_name.rfind('_')
> + if recipe_ver > 0:
> + recipe_name = recipe_name[0:recipe_ver]
> + if recipe_name:
> + ro, created = Recipe.objects.get_or_create(
> + layer_version=layer_version,
> + name=recipe_name
> + )
> + if created:
> + ro.file_path = recipe_path
> + ro.summary = 'Recipe %s from layer %s' %
> (recipe_name,layer.name)
> + ro.description = ro.summary
> + ro.save()
> +
> + except Exception as e:
> + logger.warning("ERROR:scan_layer_content: %s" % e)
> +
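
The shelled-out 'ls' plus the 'No such' string match can be replaced with
glob, which simply returns an empty list when nothing matches (sketch of an
alternative, not what the patch does):

    import glob
    import os

    pattern = os.path.join(layer.local_source_dir, 'recipes-*', '*', '*.bb')
    for recipe in glob.glob(pattern):
        recipe_path = recipe[recipe.rfind('recipes-'):]
        # strip '.bb' and any trailing _<version> suffix
        recipe_name = os.path.basename(recipe)[:-len('.bb')].rsplit('_', 1)[0]
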
> class XhrLayer(View):
> """ Delete, Get, Add and Update Layer information
>
> @@ -265,6 +403,7 @@ class XhrLayer(View):
> (csv)]
>
> """
> +
> try:
> project = Project.objects.get(pk=kwargs['pid'])
>
> @@ -285,7 +424,13 @@ class XhrLayer(View):
> if layer_data['name'] in existing_layers:
> return JsonResponse({"error": "layer-name-exists"})
>
> - layer = Layer.objects.create(name=layer_data['name'])
> +            if ('local_source_dir' in layer_data):
> +                # Local layer can be shared across projects. They have no 'release'
> +                # and are not included in get_all_compatible_layer_versions() above
> +                layer,created = Layer.objects.get_or_create(name=layer_data['name'])
> +                _log("Local Layer created=%s" % created)
> +            else:
> +                layer = Layer.objects.create(name=layer_data['name'])
>
> layer_version = Layer_Version.objects.create(
> layer=layer,
> @@ -293,7 +438,7 @@ class XhrLayer(View):
> layer_source=LayerSource.TYPE_IMPORTED)
>
> # Local layer
> -            if ('local_source_dir' in layer_data) and layer.local_source_dir:
> +            if ('local_source_dir' in layer_data): ### and layer.local_source_dir:
>                  layer.local_source_dir = layer_data['local_source_dir']
>              # git layer
>              elif 'vcs_url' in layer_data:
> @@ -325,6 +470,9 @@ class XhrLayer(View):
> 'layerdetailurl':
> layer_dep.get_detailspage_url(project.pk)})
>
> + # Scan the layer's content and update components
> + scan_layer_content(layer,layer_version)
> +
> except Layer_Version.DoesNotExist:
> return error_response("layer-dep-not-found")
> except Project.DoesNotExist:
> @@ -529,7 +677,13 @@ class XhrCustomRecipe(View):
> recipe_path = os.path.join(layerpath, "recipes", "%s.bb" %
> recipe.name)
> with open(recipe_path, "w") as recipef:
> - recipef.write(recipe.generate_recipe_file_contents())
> + content = recipe.generate_recipe_file_contents()
> + if not content:
> + # Delete this incomplete image recipe object
> + recipe.delete()
> + return error_response("recipe-parent-not-exist")
> + else:
> +                recipef.write(content)
>
> return JsonResponse(
> {"error": "ok",
> @@ -1014,8 +1168,24 @@ class XhrProject(View):
> state=BuildRequest.REQ_INPROGRESS):
> XhrBuildRequest.cancel_build(br)
>
> +            # gather potential orphaned local layers attached to this project
> +            project_local_layer_list = []
> +            for pl in ProjectLayer.objects.filter(project=project):
> +                if pl.layercommit.layer_source == LayerSource.TYPE_IMPORTED:
> +                    project_local_layer_list.append(pl.layercommit.layer)
> +
> +            # deep delete the project and its dependencies
>              project.delete()
>
> +            # delete any local layers now orphaned
> +            _log("LAYER_ORPHAN_CHECK:Check for orphaned layers")
> +            for layer in project_local_layer_list:
> +                layer_refs = Layer_Version.objects.filter(layer=layer)
> +                _log("LAYER_ORPHAN_CHECK:Ref Count for '%s' = %d" % (layer.name,len(layer_refs)))
> +                if 0 == len(layer_refs):
> +                    _log("LAYER_ORPHAN_CHECK:DELETE orphaned '%s'" % (layer.name))
> +                    Layer.objects.filter(pk=layer.id).delete()
> +
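
The refcount walk can lean on the ORM directly; an equivalent sketch:

    if not Layer_Version.objects.filter(layer=layer).exists():
        layer.delete()
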
> except Project.DoesNotExist:
> return error_response("Project %s does not exist" %
> kwargs['project_id'])
> diff --git a/bitbake/lib/toaster/toastergui/static/js/layerBtn.js b/bitbake/lib/toaster/toastergui/static/js/layerBtn.js
> index 9f9eda1..a5a6563 100644
> --- a/bitbake/lib/toaster/toastergui/static/js/layerBtn.js
> +++ b/bitbake/lib/toaster/toastergui/static/js/layerBtn.js
> @@ -67,6 +67,18 @@ function layerBtnsInit() {
>      });
>    });
>
> + $("td .set-default-recipe-btn").unbind('click');
> + $("td .set-default-recipe-btn").click(function(e){
> + e.preventDefault();
> + var recipe = $(this).data('recipe-name');
> +
> + libtoaster.setDefaultImage(null, recipe,
> + function(){
> + /* Success */
> +
> window.location.replace(libtoaster.ctx.projectSpecificPageUrl);
> + });
> + });
> +
>
> $(".customise-btn").unbind('click');
> $(".customise-btn").click(function(e){
> diff --git a/bitbake/lib/toaster/toastergui/static/js/layerdetails.js b/bitbake/lib/toaster/toastergui/static/js/layerdetails.js
> index 9ead393..933b65b 100644
> --- a/bitbake/lib/toaster/toastergui/static/js/layerdetails.js
> +++ b/bitbake/lib/toaster/toastergui/static/js/layerdetails.js
> @@ -359,7 +359,8 @@ function layerDetailsPageInit (ctx) {
>      if ($(this).is("dt")) {
>        var dd = $(this).next("dd");
>        if (!dd.children("form:visible")|| !dd.find(".current-value").html()){
> -        if (ctx.layerVersion.layer_source == ctx.layerSourceTypes.TYPE_IMPORTED){
> +        if (ctx.layerVersion.layer_source == ctx.layerSourceTypes.TYPE_IMPORTED ||
> +            ctx.layerVersion.layer_source == ctx.layerSourceTypes.TYPE_LOCAL) {
>            /* There's no current value and the layer is editable
>             * so show the "Not set" and hide the delete icon
>             */
> diff --git a/bitbake/lib/toaster/toastergui/static/js/libtoaster.js b/bitbake/lib/toaster/toastergui/static/js/libtoaster.js
> index 6f9b5d0..f2c45c8 100644
> --- a/bitbake/lib/toaster/toastergui/static/js/libtoaster.js
> +++ b/bitbake/lib/toaster/toastergui/static/js/libtoaster.js
> @@ -275,7 +275,8 @@ var libtoaster = (function () {
> function _addRmLayer(layerObj, add, doneCb){
> if (layerObj.xhrLayerUrl === undefined){
> - throw("xhrLayerUrl is undefined")
> + alert("ERROR: missing xhrLayerUrl object. Please file a bug.");
> + return;
> }
>
> if (add === true) {
> @@ -465,6 +466,108 @@ var libtoaster = (function () {
> $.cookie('toaster-notification', JSON.stringify(data), { path:
> '/'}); }
>
> + /* _updateProject:
> + * url: xhrProjectUpdateUrl or null for current project
> + * onsuccess: callback for successful execution
> + * onfail: callback for failed execution
> + */
> +  function _updateProject (url, targets, default_image, onsuccess, onfail) {
> +
> +    if (!url)
> +      url = libtoaster.ctx.xhrProjectUpdateUrl;
> +
> +    /* Flatten the array of targets into a space separated list */
> +    if (targets instanceof Array){
> +      targets = targets.reduce(function(prevV, nextV){
> +        return prevV + ' ' + nextV;
> +      });
> + }
> +
> + $.ajax( {
> + type: "POST",
> + url: url,
> +      data: { 'do_update' : 'True' , 'targets' : targets , 'default_image' : default_image , },
> + headers: { 'X-CSRFToken' : $.cookie('csrftoken')},
> + success: function (_data) {
> + if (_data.error !== "ok") {
> + console.warn(_data.error);
> + } else {
> + if (onsuccess !== undefined) onsuccess(_data);
> + }
> + },
> + error: function (_data) {
> + console.warn("Call failed");
> + console.warn(_data);
> +        if (onfail) onfail(_data);
> + } });
> + }
> +
> + /* _cancelProject:
> + * url: xhrProjectUpdateUrl or null for current project
> + * onsuccess: callback for successful execution
> + * onfail: callback for failed execution
> + */
> + function _cancelProject (url, onsuccess, onfail) {
> +
> + if (!url)
> + url = libtoaster.ctx.xhrProjectCancelUrl;
> +
> + $.ajax( {
> + type: "POST",
> + url: url,
> + data: { 'do_cancel' : 'True' },
> + headers: { 'X-CSRFToken' : $.cookie('csrftoken')},
> + success: function (_data) {
> + if (_data.error !== "ok") {
> + console.warn(_data.error);
> + } else {
> + if (onsuccess !== undefined) onsuccess(_data);
> + }
> + },
> + error: function (_data) {
> + console.warn("Call failed");
> + console.warn(_data);
> +        if (onfail) onfail(_data);
> + } });
> + }
> +
> + /* _setDefaultImage:
> + * url: xhrSetDefaultImageUrl or null for current project
> + * targets: an array or space separated list of targets to set as
> default
> + * onsuccess: callback for successful execution
> + * onfail: callback for failed execution
> + */
> + function _setDefaultImage (url, targets, onsuccess, onfail) {
> +
> + if (!url)
> + url = libtoaster.ctx.xhrSetDefaultImageUrl;
> +
> +    /* Flatten the array of targets into a space separated list */
> +    if (targets instanceof Array){
> +      targets = targets.reduce(function(prevV, nextV){
> +        return prevV + ' ' + nextV;
> +      });
> + }
> +
> + $.ajax( {
> + type: "POST",
> + url: url,
> + data: { 'targets' : targets },
> + headers: { 'X-CSRFToken' : $.cookie('csrftoken')},
> + success: function (_data) {
> + if (_data.error !== "ok") {
> + console.warn(_data.error);
> + } else {
> + if (onsuccess !== undefined) onsuccess(_data);
> + }
> + },
> + error: function (_data) {
> + console.warn("Call failed");
> + console.warn(_data);
> +        if (onfail) onfail(_data);
> + } });
> + }
> +
> return {
> enableAjaxLoadingTimer: _enableAjaxLoadingTimer,
> disableAjaxLoadingTimer: _disableAjaxLoadingTimer,
> @@ -485,6 +588,9 @@ var libtoaster = (function () {
> createCustomRecipe: _createCustomRecipe,
> makeProjectNameValidation: _makeProjectNameValidation,
> setNotification: _setNotification,
> + updateProject : _updateProject,
> + cancelProject : _cancelProject,
> + setDefaultImage : _setDefaultImage,
> };
> })();
>
> diff --git a/bitbake/lib/toaster/toastergui/static/js/mrbsection.js b/bitbake/lib/toaster/toastergui/static/js/mrbsection.js
> index c0c5fa9..f07ccf8 100644
> --- a/bitbake/lib/toaster/toastergui/static/js/mrbsection.js
> +++ b/bitbake/lib/toaster/toastergui/static/js/mrbsection.js
> @@ -86,7 +86,7 @@ function mrbSectionInit(ctx){
>        if (buildFinished(build)) {
>          // a build finished: reload the whole page so that the build
>          // shows up in the builds table
> - window.location.reload();
> + window.location.reload(true);
> }
> else if (stateChanged(build)) {
> // update the whole template
> @@ -110,6 +110,8 @@ function mrbSectionInit(ctx){
> // update the clone progress text
> selector = '#repos-cloned-percentage-' + build.id;
> $(selector).html(build.repos_cloned_percentage);
> + selector = '#repos-cloned-progressitem-' + build.id;
> + $(selector).html('('+build.progress_item+')');
>
> // update the recipe progress bar
> selector = '#repos-cloned-percentage-bar-' + build.id;
> diff --git a/bitbake/lib/toaster/toastergui/static/js/newcustomimage_modal.js b/bitbake/lib/toaster/toastergui/static/js/newcustomimage_modal.js
> index dace8e3..e55fffc 100644
> --- a/bitbake/lib/toaster/toastergui/static/js/newcustomimage_modal.js
> +++ b/bitbake/lib/toaster/toastergui/static/js/newcustomimage_modal.js
> @@ -25,6 +25,8 @@ function newCustomImageModalInit(){
>    var duplicateNameMsg = "An image with this name already exists. Image names must be unique.";
>    var duplicateImageInProjectMsg = "An image with this name already exists in this project."
>    var invalidBaseRecipeIdMsg = "Please select an image to customise.";
> + var missingParentRecipe = "The parent recipe file was not found.
> Cancel this action, build any target (like 'quilt-native') to force
> all new layers to clone, and try again";
> + var unknownError = "Unexpected error: ";
>
> // set button to "submit" state and enable text entry so user can
> // enter the custom recipe name
> @@ -62,6 +64,7 @@ function newCustomImageModalInit(){
> if (nameInput.val().length > 0) {
> libtoaster.createCustomRecipe(nameInput.val(), baseRecipeId,
> function(ret) {
> + showSubmitState();
> if (ret.error !== "ok") {
> console.warn(ret.error);
> if (ret.error === "invalid-name") {
> @@ -73,6 +76,10 @@ function newCustomImageModalInit(){
> } else if (ret.error === "image-already-exists") {
> showNameError(duplicateImageInProjectMsg);
> return;
> + } else if (ret.error === "recipe-parent-not-exist") {
> + showNameError(missingParentRecipe);
> + } else {
> + showNameError(unknownError + ret.error);
> }
> } else {
> imgCustomModal.modal('hide');
> diff --git a/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js b/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js
> index 69220aa..3f9e186 100644
> --- a/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js
> +++ b/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js
> @@ -14,6 +14,9 @@ function projectTopBarInit(ctx) {
>    var newBuildTargetBuildBtn = $("#build-button");
>    var selectedTarget;
> + var updateProjectBtn = $("#update-project-button");
> + var cancelProjectBtn = $("#cancel-project-button");
> +
> /* Project name change functionality */
> projectNameFormToggle.click(function(e){
> e.preventDefault();
> @@ -89,6 +92,25 @@ function projectTopBarInit(ctx) {
> }, null);
> });
>
> + updateProjectBtn.click(function (e) {
> + e.preventDefault();
> +
> + selectedTarget = { name: "_PROJECT_PREPARE_" };
> +
> + /* Save current default build image, fire off the build */
> +    libtoaster.updateProject(null, selectedTarget.name, newBuildTargetInput.val().trim(),
> +      function(){
> +        window.location.replace(libtoaster.ctx.projectSpecificPageUrl);
> +      }, null);
> + });
> +
> + cancelProjectBtn.click(function (e) {
> + e.preventDefault();
> +
> + /* redirect to 'done/canceled' landing page */
> + window.location.replace(libtoaster.ctx.landingSpecificCancelURL);
> + });
> +
> /* Call makeProjectNameValidation function */
> libtoaster.makeProjectNameValidation($("#project-name-change-input"),
> $("#hint-error-project-name"), $("#validate-project-name"),
> diff --git a/bitbake/lib/toaster/toastergui/tables.py b/bitbake/lib/toaster/toastergui/tables.py
> index dca2fa2..9ff756b 100644
> --- a/bitbake/lib/toaster/toastergui/tables.py
> +++ b/bitbake/lib/toaster/toastergui/tables.py
> @@ -35,6 +35,8 @@ from toastergui.tablefilter import TableFilterActionToggle
>  from toastergui.tablefilter import TableFilterActionDateRange
>  from toastergui.tablefilter import TableFilterActionDay
> +import os
> +
> class ProjectFilters(object):
> @staticmethod
> def in_project(project_layers):
> @@ -339,6 +341,8 @@ class RecipesTable(ToasterTable):
> 'filter_name' : "in_current_project",
> 'static_data_name' : "add-del-layers",
> 'static_data_template' : '{% include "recipe_btn.html"
> %}'}
> + if '1' == os.environ.get('TOASTER_PROJECTSPECIFIC'):
> + build_col['static_data_template'] = '{% include
> "recipe_add_btn.html" %}'
> def get_context_data(self, **kwargs):
> project = Project.objects.get(pk=kwargs['pid'])
> @@ -1611,14 +1615,12 @@ class DistrosTable(ToasterTable):
> hidden=True,
> field_name="layer_version__get_vcs_reference")
>
> -        wrtemplate_file_template = '''<code>conf/machine/{{data.name}}.conf</code>
> -        <a href="{{data.get_vcs_machine_file_link_url}}" target="_blank"><span class="glyphicon glyphicon-new-window"></i></a>'''
> -
> +        distro_file_template = '''<code>conf/distro/{{data.name}}.conf</code>
> +        {% if 'None' not in data.get_vcs_distro_file_link_url %}<a href="{{data.get_vcs_distro_file_link_url}}" target="_blank"><span class="glyphicon glyphicon-new-window"></i></a>{% endif %}'''
>          self.add_column(title="Distro file", hidden=True,
>                          static_data_name="templatefile",
> -                        static_data_template=wrtemplate_file_template)
> -
> +                        static_data_template=distro_file_template)
>
> self.add_column(title="Select",
> help_text="Sets the selected distro to the
> project", diff --git
> a/bitbake/lib/toaster/toastergui/templates/base_specific.html
> b/bitbake/lib/toaster/toastergui/templates/base_specific.html new
> file mode 100644 index 0000000..e377cad --- /dev/null
> +++ b/bitbake/lib/toaster/toastergui/templates/base_specific.html
> @@ -0,0 +1,128 @@
> +<!DOCTYPE html>
> +{% load static %}
> +{% load projecttags %}
> +{% load project_url_tag %}
> +<html lang="en">
> + <head>
> + <title>
> + {% block title %} Toaster {% endblock %}
> + </title>
> + <link rel="stylesheet" href="{% static 'css/bootstrap.min.css'
> %}" type="text/css"/>
> + <!--link rel="stylesheet" href="{% static
> 'css/bootstrap-theme.css' %}" type="text/css"/-->
> + <link rel="stylesheet" href="{% static
> 'css/font-awesome.min.css' %}" type='text/css'/>
> + <link rel="stylesheet" href="{% static 'css/default.css' %}"
> type='text/css'/> +
> + <meta name="viewport" content="width=device-width,
> initial-scale=1.0" />
> + <meta http-equiv="Content-Type"
> content="text/html;charset=UTF-8" />
> + <script src="{% static 'js/jquery-2.0.3.min.js' %}">
> + </script>
> + <script src="{% static 'js/jquery.cookie.js' %}">
> + </script>
> + <script src="{% static 'js/bootstrap.min.js' %}">
> + </script>
> + <script src="{% static 'js/typeahead.jquery.js' %}">
> + </script>
> + <script src="{% static 'js/jsrender.min.js' %}">
> + </script>
> + <script src="{% static 'js/highlight.pack.js' %}">
> + </script>
> + <script src="{% static 'js/libtoaster.js' %}">
> + </script>
> + {% if DEBUG %}
> + <script>
> + libtoaster.debug = true;
> + </script>
> + {% endif %}
> + <script>
> + /* Set JsRender delimiters (mrb_section.html) different than
> Django's */
> + $.views.settings.delimiters("<%", "%>");
> +
> + /* This table allows Django substitutions to be passed to
> libtoaster.js */
> + libtoaster.ctx = {
> + jsUrl : "{% static 'js/' %}",
> + htmlUrl : "{% static 'html/' %}",
> + projectsUrl : "{% url 'all-projects' %}",
> + projectsTypeAheadUrl: {% url 'xhr_projectstypeahead' as
> prjurl%}{{prjurl|json}},
> + {% if project.id %}
> + landingSpecificURL : "{% url 'landing_specific' project.id
> %}",
> + landingSpecificCancelURL : "{% url 'landing_specific_cancel'
> project.id %}",
> + projectId : {{project.id}},
> + projectPageUrl : {% url 'project' project.id as purl
> %}{{purl|json}},
> + projectSpecificPageUrl : {% url 'project_specific'
> project.id as purl %}{{purl|json}},
> + xhrProjectUrl : {% url 'xhr_project' project.id as pxurl
> %}{{pxurl|json}},
> + projectName : {{project.name|json}},
> + recipesTypeAheadUrl: {% url 'xhr_recipestypeahead'
> project.id as paturl%}{{paturl|json}},
> + layersTypeAheadUrl: {% url 'xhr_layerstypeahead' project.id
> as paturl%}{{paturl|json}},
> + machinesTypeAheadUrl: {% url 'xhr_machinestypeahead'
> project.id as paturl%}{{paturl|json}},
> + distrosTypeAheadUrl: {% url 'xhr_distrostypeahead'
> project.id as paturl%}{{paturl|json}},
> + projectBuildsUrl: {% url 'projectbuilds' project.id as pburl
> %}{{pburl|json}},
> + xhrCustomRecipeUrl : "{% url 'xhr_customrecipe' %}",
> + projectId : {{project.id}},
> + xhrBuildRequestUrl: "{% url 'xhr_buildrequest' project.id
> %}",
> + mostRecentBuildsUrl: "{% url 'most_recent_builds'
> %}?project_id={{project.id}}",
> + xhrProjectUpdateUrl: "{% url 'xhr_projectupdate' project.id
> %}",
> + xhrProjectCancelUrl: "{% url 'landing_specific_cancel'
> project.id %}",
> + xhrSetDefaultImageUrl: "{% url 'xhr_setdefaultimage'
> project.id %}",
> + {% else %}
> + mostRecentBuildsUrl: "{% url 'most_recent_builds' %}",
> + projectId : undefined,
> + projectPageUrl : undefined,
> + projectName : undefined,
> + {% endif %}
> + };
> + </script>
> + {% block extraheadcontent %}
> + {% endblock %}
> + </head>
> +
> + <body>
> +
> + {% csrf_token %}
> + <div id="loading-notification" class="alert alert-warning lead
> text-center" style="display:none">
> + Loading <i class="fa-pulse icon-spinner"></i>
> + </div>
> +
> + <div id="change-notification" class="alert alert-info
> alert-dismissible change-notification" style="display:none">
> + <button type="button" class="close" id="hide-alert"
> data-toggle="alert">×</button>
> + <span id="change-notification-msg"></span>
> + </div>
> +
> + <nav class="navbar navbar-default navbar-fixed-top">
> + <div class="container-fluid">
> + <div class="navbar-header">
> + <button type="button" class="navbar-toggle collapsed"
> data-toggle="collapse" data-target="#global-nav"
> aria-expanded="false">
> + <span class="sr-only">Toggle navigation</span>
> + <span class="icon-bar"></span>
> + <span class="icon-bar"></span>
> + <span class="icon-bar"></span>
> + </button>
> + <div class="toaster-navbar-brand">
> + {% if project_specific %}
> + <img class="logo" src="{% static 'img/logo.png' %}"
> class="" alt="Yocto Project logo"/>
> + Toaster
> + {% else %}
> + <a href="/">
> + </a>
> + <a href="/">
> + <img class="logo" src="{% static 'img/logo.png' %}"
> class="" alt="Yocto Project logo"/>
> + </a>
> + <a class="brand" href="/">Toaster</a>
> + {% endif %}
> + {% if DEBUG %}
> + <span class="glyphicon glyphicon-info-sign"
> title="<strong>Toaster version information</strong>"
> data-content="<dl><dt>Git
> branch</dt><dd>{{TOASTER_BRANCH}}</dd><dt>Git
> revision</dt><dd>{{TOASTER_REVISION}}</dd></dl>"></i>
> + {% endif %}
> + </div>
> + </div>
> + <div class="collapse navbar-collapse" id="global-nav">
> + <ul class="nav navbar-nav">
> + <h3> Project Configuration Page </h3>
> + </div>
> + </div>
> + </nav>
> +
> + <div class="container-fluid">
> + {% block pagecontent %}
> + {% endblock %}
> + </div>
> + </body>
> +</html>
> diff --git a/bitbake/lib/toaster/toastergui/templates/baseprojectspecificpage.html b/bitbake/lib/toaster/toastergui/templates/baseprojectspecificpage.html
> new file mode 100644
> index 0000000..d0b588d
> --- /dev/null
> +++ b/bitbake/lib/toaster/toastergui/templates/baseprojectspecificpage.html
> @@ -0,0 +1,48 @@
> +{% extends "base_specific.html" %}
> +
> +{% load projecttags %}
> +{% load humanize %}
> +
> +{% block title %} {{title}} - {{project.name}} - Toaster {% endblock %}
> +
> +{% block pagecontent %}
> +
> +<div class="row">
> + {% include "project_specific_topbar.html" %}
> + <script type="text/javascript">
> +$(document).ready(function(){
> + $("#config-nav .nav li a").each(function(){
> + if (window.location.pathname === $(this).attr('href'))
> + $(this).parent().addClass('active');
> + else
> + $(this).parent().removeClass('active');
> + });
> +
> + $("#topbar-configuration-tab").addClass("active")
> + });
> + </script>
> +
> + <!-- only on config pages -->
> + <div id="config-nav" class="col-md-2">
> + <ul class="nav nav-pills nav-stacked">
> + <li><a class="nav-parent" href="{% url 'project' project.id
> %}">Configuration</a></li>
> + <li class="nav-header">Compatible metadata</li>
> + <li><a href="{% url 'projectcustomimages' project.id
> %}">Custom images</a></li>
> + <li><a href="{% url 'projectimagerecipes' project.id %}">Image
> recipes</a></li>
> + <li><a href="{% url 'projectsoftwarerecipes' project.id
> %}">Software recipes</a></li>
> + <li><a href="{% url 'projectmachines' project.id
> %}">Machines</a></li>
> + <li><a href="{% url 'projectlayers' project.id
> %}">Layers</a></li>
> + <li><a href="{% url 'projectdistros' project.id
> %}">Distros</a></li>
> + <li class="nav-header">Extra configuration</li>
> + <li><a href="{% url 'projectconf' project.id %}">BitBake
> variables</a></li> +
> + <li class="nav-header">Actions</li>
> + </ul>
> + </div>
> + <div class="col-md-10">
> + {% block projectinfomain %}{% endblock %}
> + </div>
> +
> +</div>
> +{% endblock %}
> +
> diff --git a/bitbake/lib/toaster/toastergui/templates/customise_btn.html b/bitbake/lib/toaster/toastergui/templates/customise_btn.html
> index 38c258a..ce46240 100644
> --- a/bitbake/lib/toaster/toastergui/templates/customise_btn.html
> +++ b/bitbake/lib/toaster/toastergui/templates/customise_btn.html
> @@ -5,7 +5,11 @@
> >
> Customise
> </button>
> -<button class="btn btn-default btn-block
> layer-add-{{data.layer_version.pk}} layerbtn" data-layer='{ "id":
> {{data.layer_version.pk}}, "name":
> "{{data.layer_version.layer.name}}", "layerdetailurl": "{%url
> 'layerdetails' extra.pid data.layer_version.pk%}"}'
> data-directive="add" +<button class="btn btn-default btn-block
> layer-add-{{data.layer_version.pk}} layerbtn"
> + data-layer='{ "id": {{data.layer_version.pk}}, "name":
> "{{data.layer_version.layer.name}}",
> + "layerdetailurl": "{%url 'layerdetails' extra.pid
> data.layer_version.pk%}",
> + "xhrLayerUrl": "{% url "xhr_layer" extra.pid
> data.layer_version.pk %}"}'
> + data-directive="add"
> {% if data.layer_version.pk in extra.current_layers %}
> style="display:none;"
> {% endif %}
> diff --git a/bitbake/lib/toaster/toastergui/templates/generic-toastertable-page.html b/bitbake/lib/toaster/toastergui/templates/generic-toastertable-page.html
> index b3eabe1..99fbb38 100644
> --- a/bitbake/lib/toaster/toastergui/templates/generic-toastertable-page.html
> +++ b/bitbake/lib/toaster/toastergui/templates/generic-toastertable-page.html
> @@ -1,4 +1,4 @@
> -{% extends "baseprojectpage.html" %}
> +{% extends project_specific|yesno:"baseprojectspecificpage.html,baseprojectpage.html" %}
>  {% load projecttags %}
>  {% load humanize %}
>  {% load static %}
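
[The parent template above is now picked per request: Django's "yesno"
filter returns its first name when project_specific is truthy and the
second otherwise. A simplified two-choice re-implementation, for
illustration only -- this is not Toaster code:

    def yesno_two(value, arg):
        # reduced form of Django's 'yesno' template filter; arg is "yes_name,no_name"
        yes, no = arg.split(',')[:2]
        return yes if value else no

    yesno_two(True, "baseprojectspecificpage.html,baseprojectpage.html")
    # -> 'baseprojectspecificpage.html'
]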
> diff --git a/bitbake/lib/toaster/toastergui/templates/importlayer.html b/bitbake/lib/toaster/toastergui/templates/importlayer.html
> index 97d52c7..e0c987e 100644
> --- a/bitbake/lib/toaster/toastergui/templates/importlayer.html
> +++ b/bitbake/lib/toaster/toastergui/templates/importlayer.html
> @@ -1,4 +1,4 @@
> -{% extends "base.html" %}
> +{% extends project_specific|yesno:"baseprojectspecificpage.html,base.html" %}
>  {% load projecttags %}
>  {% load humanize %}
>  {% load static %}
> @@ -6,7 +6,7 @@
> {% block pagecontent %}
>
> <div class="row">
> - {% include "projecttopbar.html" %}
> +  {% include project_specific|yesno:"project_specific_topbar.html,projecttopbar.html" %}
>  {% if project and project.release %}
>  <script src="{% static 'js/layerDepsModal.js' %}"></script>
>  <script src="{% static 'js/importlayer.js' %}"></script>
> diff --git a/bitbake/lib/toaster/toastergui/templates/landing_specific.html b/bitbake/lib/toaster/toastergui/templates/landing_specific.html
> new file mode 100644
> index 0000000..e289c7d
> --- /dev/null
> +++ b/bitbake/lib/toaster/toastergui/templates/landing_specific.html
> @@ -0,0 +1,50 @@
> +{% extends "base_specific.html" %}
> +
> +{% load static %}
> +{% load projecttags %}
> +{% load humanize %}
> +
> +{% block title %} Welcome to Toaster {% endblock %}
> +
> +{% block pagecontent %}
> +
> + <div class="container">
> + <div class="row">
> + <!-- Empty - no build module -->
> + <div class="page-header top-air">
> + <h1>
> +          Configuration {% if status == "cancel" %}Canceled{% else %}Completed{% endif %}! You can now close this window.
> +        </h1>
> +      </div>
> +      <div class="alert alert-info lead">
> +        <p>
> +          Your project configuration {% if status == "cancel" %}changes have been canceled{% else %}has completed!{% endif %}
> + <br>
> + <br>
> + <ul>
> + <li>
> +            The Toaster instance for project configuration has been shut down
> +          </li>
> +          <li>
> +            You can start Toaster independently for advanced project management and analysis:
> + <pre><code>
> + Set up bitbake environment:
> + $ cd {{install_dir}}
> + $ . oe-init-build-env [toaster_server]
> +
> +  Option 1: Start a local Toaster server, open local browser to "localhost:8000"
> +    $ . toaster start webport=8000
> +
> +  Option 2: Start a shared Toaster server, open any browser to "[host_ip]:8000"
> + $ . toaster start webport=0.0.0.0:8000
> +
> + To stop the Toaster server:
> + $ . toaster stop
> + </code></pre>
> + </li>
> + </ul>
> + </p>
> + </div>
> + </div>
> +
> +{% endblock %}
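
[The landing page documents the manual toaster lifecycle. For scripted
setups the same documented commands can be driven from Python; a rough
sketch, assuming it runs from the install directory with bash and
oe-init-build-env available (hypothetical helper, not part of this patch):

    import subprocess

    def toaster_start(webport="8000"):
        # source the build environment, then run the documented command
        cmd = '. ./oe-init-build-env > /dev/null && . toaster start webport=%s' % webport
        return subprocess.call(['bash', '-c', cmd])

    def toaster_stop():
        return subprocess.call(['bash', '-c', '. ./oe-init-build-env > /dev/null && . toaster stop'])
]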
> diff --git a/bitbake/lib/toaster/toastergui/templates/layerdetails.html b/bitbake/lib/toaster/toastergui/templates/layerdetails.html
> index e0069db..1e26e31 100644
> --- a/bitbake/lib/toaster/toastergui/templates/layerdetails.html
> +++ b/bitbake/lib/toaster/toastergui/templates/layerdetails.html
> @@ -1,4 +1,4 @@
> -{% extends "base.html" %}
> +{% extends project_specific|yesno:"baseprojectspecificpage.html,base.html" %}
>  {% load projecttags %}
>  {% load humanize %}
>  {% load static %}
> @@ -310,6 +310,7 @@
> {% endwith %}
> {% endwith %}
> </div>
> +
> </div> <!-- end tab content -->
> </div> <!-- end tabable -->
>
> diff --git a/bitbake/lib/toaster/toastergui/templates/mrb_section.html b/bitbake/lib/toaster/toastergui/templates/mrb_section.html
> index c5b9fe9..98d9fac 100644
> --- a/bitbake/lib/toaster/toastergui/templates/mrb_section.html
> +++ b/bitbake/lib/toaster/toastergui/templates/mrb_section.html
> @@ -119,7 +119,7 @@
>            title="Toaster is cloning the repos required for your build">
>          </span>
> -        Cloning <span id="repos-cloned-percentage-<%:id%>"><%:repos_cloned_percentage%></span>% complete
> +        Cloning <span id="repos-cloned-percentage-<%:id%>"><%:repos_cloned_percentage%></span>% complete <span id="repos-cloned-progressitem-<%:id%>">(<%:progress_item%>)</span>
>          <%include tmpl='#cancel-template'/%>
>        </div>
> diff --git a/bitbake/lib/toaster/toastergui/templates/newcustomimage.html b/bitbake/lib/toaster/toastergui/templates/newcustomimage.html
> index 980179a..0766e5e 100644
> --- a/bitbake/lib/toaster/toastergui/templates/newcustomimage.html
> +++ b/bitbake/lib/toaster/toastergui/templates/newcustomimage.html
> @@ -1,4 +1,4 @@
> -{% extends "base.html" %}
> +{% extends project_specific|yesno:"baseprojectspecificpage.html,base.html" %}
>  {% load projecttags %}
>  {% load humanize %}
>  {% load static %}
> @@ -8,7 +8,7 @@
>
> <div class="row">
>
> - {% include "projecttopbar.html" %}
> +  {% include project_specific|yesno:"project_specific_topbar.html,projecttopbar.html" %}
>    <div class="col-md-12">
> {% url table_name project.id as xhr_table_url %}
> diff --git a/bitbake/lib/toaster/toastergui/templates/newproject.html b/bitbake/lib/toaster/toastergui/templates/newproject.html
> index acb614e..7e1ebb3 100644
> --- a/bitbake/lib/toaster/toastergui/templates/newproject.html
> +++ b/bitbake/lib/toaster/toastergui/templates/newproject.html
> @@ -20,23 +20,19 @@
>            <input type="text" class="form-control" required id="new-project-name" name="projectname">
>          </div>
> <p class="help-block text-danger" style="display: none;"
> id="hint-error-project-name">A project with this name exists. Project
> names must be unique.</p> -<!--
> - <fieldset>
> - <label class="project-form">Project type</label>
> - <label class="project-form radio"><input
> type="radio" name="ptype" value="analysis" checked/> Analysis
> Project</label>
> + <label class="project-form">Project type:</label>
> {% if releases.count > 0 %}
> - <label class="project-form radio"><input
> type="radio" name="ptype" value="build" checked /> Build
> Project</label>
> + <label class="project-form radio"
> style="padding-left: 35px;"><input id='type-new' type="radio"
> name="ptype" value="new"/> New project</label> {% endif %}
> - </fieldset> -->
> - <input type="hidden" name="ptype" value="build" />
> + <label class="project-form radio"
> style="padding-left: 35px;"><input id='type-import' type="radio"
> name="ptype" value="import"/> Import command line project</label> {%
> if releases.count > 0 %}
> - <div class="release form-group">
> + <div class="release form-group">
> {% if releases.count > 1 %}
> <label class="control-label">
> Release
> - <span class="glyphicon glyphicon-question-sign
> get-help" title="The version of the build system you want to
> use"></span>
> + <span class="glyphicon glyphicon-question-sign
> get-help" title="The version of the build system you want to use for
> this project"></span> </label> <select name="projectversion"
> id="projectversion" class="form-control"> {% for release in releases
> %} @@ -54,33 +50,31 @@
>                                          <span class="help-block">{{release.helptext|safe}}</span>
>                                      </div>
>                                  {% endfor %}
> +                            </div>
> +                        </div>
>                  {% else %}
>                      <input type="hidden" name="projectversion" value="{{releases.0.id}}"/>
>                  {% endif %}
> -                            </div>
> -                        </div>
> -                    </fieldset>
> +
> +                        <input type="checkbox" class="checkbox-mergeattr" name="mergeattr" value="mergeattr"> Merged Toaster settings (Command line user compatibility)
> +                        <span class="glyphicon glyphicon-question-sign get-help" title="Place the Toaster settings into the standard 'local.conf' and 'bblayers.conf' instead of 'toaster_bblayers.conf' and 'toaster.conf'"></span>
> +
> +                    </div>
> {% endif %}
> +
> + <div class="build-import form-group" id="import-project">
> + <label class="control-label">Import existing project
> directory
> + <span class="glyphicon glyphicon-question-sign
> get-help" title="Enter a path to an existing build directory, import
> the existing settings, and create a Toaster Project for it."></span>
> + </label>
> + <input style="width: 33%;"type="text"
> class="form-control" required id="import-project-dir"
> name="importdir">
> + </div>
> +
> <div class="top-air">
> <input type="submit" id="create-project-button"
> class="btn btn-primary btn-lg" value="Create project"/> <span
> class="help-inline" style="vertical-align:middle;">To create a
> project, you need to enter a project name</span> </div>
> </form>
> - <!--
> - <div class="col-md-5 well">
> - <span class="help-block">
> - <h4>Toaster project types</h4>
> -                <p>With a <strong>build project</strong> you configure and run your builds from Toaster.</p>
> -                <p>With an <strong>analysis project</strong>, the builds are configured and run by another tool
> -                (something like Buildbot or Jenkins), and the project only collects the information about the
> -                builds (packages, recipes, dependencies, logs, etc). </p>
> -                <p>You can read more on <a href="#">how to set up an analysis project</a>
> -                in the Toaster manual.</p>
> -                <h4>Release</h4>
> -                <p>If you create a <strong>build project</strong>, you will need to select a <strong>release</strong>,
> -                which is the version of the build system you want to use to run your builds.</p>
> -            </div> -->
> </div>
> </div>
>
> @@ -89,6 +83,7 @@
> // hide the new project button
> $("#new-project-button").hide();
> $('.btn-primary').attr('disabled', 'disabled');
> + $('#type-new').attr('checked', 'checked');
>
>              // enable submit button when all required fields are populated
>              $("input#new-project-name").on('input', function() {
> @@ -118,20 +113,24 @@
> $(".btn-primary"));
>
>
> -/*                                      // Hide the project release when you select an analysis project
> +                                        // Hide the project release when you select an analysis project
>                                          function projectType() {
> -                                            if ($("input[type='radio']:checked").val() == 'build') {
> +                                            if ($("input[type='radio']:checked").val() == 'new') {
> +                                                $('.build-import').fadeOut();
>                                                  $('.release').fadeIn();
> +                                                $('#import-project-dir').removeAttr('required');
>                                              }
>                                              else {
>                                                  $('.release').fadeOut();
> +                                                $('.build-import').fadeIn();
> +                                                $('#import-project-dir').attr('required', 'required');
>                                              }
>                                          }
>                                          projectType();
> 
>                                          $('input:radio').change(function(){
>                                              projectType();
> -                                        }); */
> +                                        });
> });
> </script>
>
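
[With the analysis/build radio group gone, the form now posts ptype "new"
or "import", and the view branches on that value (see the views.py hunk
further down). A hypothetical smoke test with Django's test client,
assuming a populated Toaster database and the default /toastergui/ prefix:

    from django.test import Client

    c = Client()
    # "new" needs a release; "import" needs an existing build directory
    c.post('/toastergui/newproject/', {'projectname': 'demo', 'ptype': 'new', 'projectversion': '1'})
    c.post('/toastergui/newproject/', {'projectname': 'demo2', 'ptype': 'import', 'importdir': '/work/build'})
]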
> diff --git a/bitbake/lib/toaster/toastergui/templates/newproject_specific.html b/bitbake/lib/toaster/toastergui/templates/newproject_specific.html
> new file mode 100644
> index 0000000..cfa77f2
> --- /dev/null
> +++ b/bitbake/lib/toaster/toastergui/templates/newproject_specific.html
> @@ -0,0 +1,95 @@
> +{% extends "base.html" %}
> +{% load projecttags %}
> +{% load humanize %}
> +
> +{% block title %} Create a new project - Toaster {% endblock %}
> +
> +{% block pagecontent %}
> +<div class="row">
> + <div class="col-md-12">
> + <div class="page-header">
> + <h1>Create a new project</h1>
> + </div>
> + {% if alert %}
> + <div class="alert alert-danger" role="alert">{{alert}}</div>
> + {% endif %}
> +
> + <form method="POST" action="{%url "newproject_specific"
> project_pk %}">{% csrf_token %}
> + <div class="form-group" id="validate-project-name">
> + <label class="control-label">Project name <span
> class="text-muted">(required)</span></label>
> + <input type="text" class="form-control" required
> id="new-project-name" name="display_projectname"
> value="{{projectname}}" disabled>
> + </div>
> + <p class="help-block text-danger" style="display: none;"
> id="hint-error-project-name">A project with this name exists. Project
> names must be unique.</p>
> + <input type="hidden" name="ptype" value="build" />
> + <input type="hidden" name="projectname"
> value="{{projectname}}" /> +
> +            {% if releases.count > 0 %}
> +                <div class="release form-group">
> +                    {% if releases.count > 1 %}
> +                        <label class="control-label">
> +                            Release
> +                            <span class="glyphicon glyphicon-question-sign get-help" title="The version of the build system you want to use"></span>
> +                        </label>
> +                        <select name="projectversion" id="projectversion" class="form-control">
> +                            {% for release in releases %}
> +                                <option value="{{release.id}}"
> +                                    {%if defaultbranch == release.name %}
> +                                        selected
> +                                    {%endif%}
> +                                >{{release.description}}</option>
> +                            {% endfor %}
> +                        </select>
> +                        <div class="row">
> +                            <div class="col-md-4">
> +                                {% for release in releases %}
> +                                    <div class="helptext" id="description-{{release.id}}" style="display: none">
> +                                        <span class="help-block">{{release.helptext|safe}}</span>
> +                                    </div>
> +                                {% endfor %}
> +                    {% else %}
> +                        <input type="hidden" name="projectversion" value="{{releases.0.id}}"/>
> +                    {% endif %}
> +                            </div>
> +                        </div>
> +                </div>
> +            {% endif %}
> + <div class="top-air">
> + <input type="submit" id="create-project-button"
> class="btn btn-primary btn-lg" value="Create project"/>
> + <span class="help-inline"
> style="vertical-align:middle;">To create a project, you need to
> specify the release</span>
> + </div>
> +
> + </form>
> + </div>
> + </div>
> +
> + <script type="text/javascript">
> + $(document).ready(function () {
> + // hide the new project button, name is preset
> + $("#new-project-button").hide();
> +
> +        // enable submit button when all required fields are populated
> + $("input#new-project-name").on('input', function() {
> + if ($("input#new-project-name").val().length > 0 ){
> + $('.btn-primary').removeAttr('disabled');
> + $(".help-inline").css('visibility','hidden');
> + }
> + else {
> + $('.btn-primary').attr('disabled', 'disabled');
> + $(".help-inline").css('visibility','visible');
> + }
> + });
> +
> + // show relevant help text for the selected release
> + var selected_release = $('select').val();
> + $("#description-" + selected_release).show();
> +
> + $('select').change(function(){
> + var new_release = $('select').val();
> + $(".helptext").hide();
> + $('#description-' + new_release).fadeIn();
> + });
> +
> + });
> + </script>
> +
> +{% endblock %}
> diff --git a/bitbake/lib/toaster/toastergui/templates/project.html b/bitbake/lib/toaster/toastergui/templates/project.html
> index 11603d1..fa41e3c 100644
> --- a/bitbake/lib/toaster/toastergui/templates/project.html
> +++ b/bitbake/lib/toaster/toastergui/templates/project.html
> @@ -1,4 +1,4 @@
> -{% extends "baseprojectpage.html" %}
> +{% extends project_specific|yesno:"baseprojectspecificpage.html,baseprojectpage.html" %}
>  {% load projecttags %}
> {% load humanize %}
> @@ -18,7 +18,7 @@
> try {
> projectPageInit(ctx);
> } catch (e) {
> -      document.write("Sorry, An error has occurred loading this page");
> +      document.write("Sorry, An error has occurred loading this page (project):"+e);
>        console.warn(e);
> }
> });
> @@ -93,6 +93,7 @@
> </form>
> </div>
>
> + {% if not project_specific %}
> <div class="well well-transparent">
> <h3>Most built recipes</h3>
>
> @@ -105,6 +106,7 @@
> </ul>
> <button class="btn btn-primary" id="freq-build-btn"
> disabled="disabled">Build selected recipes</button> </div>
> + {% endif %}
>
> <div class="well well-transparent">
> <h3>Project release</h3>
> @@ -157,5 +159,6 @@
> <ul class="list-unstyled lead" id="layers-in-project-list">
> </ul>
> </div>
> +
> </div>
> {% endblock %}
> diff --git a/bitbake/lib/toaster/toastergui/templates/project_specific.html b/bitbake/lib/toaster/toastergui/templates/project_specific.html
> new file mode 100644
> index 0000000..f625d18
> --- /dev/null
> +++ b/bitbake/lib/toaster/toastergui/templates/project_specific.html
> @@ -0,0 +1,162 @@
> +{% extends "baseprojectspecificpage.html" %}
> +
> +{% load projecttags %}
> +{% load humanize %}
> +{% load static %}
> +
> +{% block title %} Configuration - {{project.name}} - Toaster {% endblock %}
> +{% block projectinfomain %}
> +
> +<script src="{% static 'js/layerDepsModal.js' %}"></script>
> +<script src="{% static 'js/projectpage.js' %}"></script>
> +<script>
> + $(document).ready(function (){
> + var ctx = {
> + testReleaseChangeUrl: "{% url 'xhr_testreleasechange'
> project.id %}",
> + };
> +
> + try {
> + projectPageInit(ctx);
> + } catch (e) {
> + document.write("Sorry, An error has occurred loading this
> page");
> + console.warn(e);
> + }
> + });
> +</script>
> +
> +<div id="delete-project-modal" class="modal fade" tabindex="-1"
> role="dialog" data-backdrop="static" data-keyboard="false">
> + <div class="modal-dialog">
> + <div class="modal-content">
> + <div class="modal-header">
> + <h4>Are you sure you want to delete this project?</h4>
> + </div>
> + <div class="modal-body">
> +        <p>Deleting the <strong class="project-name"></strong> project will:</p>
> + <ul>
> + <li>Cancel its builds currently in progress</li>
> + <li>Remove its configuration information</li>
> + <li>Remove its imported layers</li>
> + <li>Remove its custom images</li>
> + <li>Remove all its build information</li>
> + </ul>
> + </div>
> + <div class="modal-footer">
> + <button type="button" class="btn btn-primary"
> id="delete-project-confirmed">
> + <span data-role="submit-state">Delete project</span>
> + <span data-role="loading-state" style="display:none">
> + <span class="fa-pulse">
> + <i class="fa-pulse icon-spinner"></i>
> + </span>
> + Deleting project...
> + </span>
> + </button>
> + <button type="button" class="btn btn-link"
> data-dismiss="modal">Cancel</button>
> + </div>
> + </div><!-- /.modal-content -->
> + </div><!-- /.modal-dialog -->
> +</div>
> +
> +
> +<div class="row" id="project-page" style="display:none">
> + <div class="col-md-6">
> + <div class="well well-transparent" id="machine-section">
> + <h3>Machine</h3>
> +
> + <p class="lead"><span id="project-machine-name"></span> <span
> class="glyphicon glyphicon-edit"
> id="change-machine-toggle"></span></p> +
> + <form id="select-machine-form" style="display:none;"
> class="form-inline">
> + <span class="help-block">Machine suggestions come from the
> list of layers added to your project. If you don't see the machine
> you are looking for, <a href="{% url 'projectmachines' project.id
> %}">check the full list of machines</a></span>
> + <div class="form-group" id="machine-input-form">
> + <input class="form-control" id="machine-change-input"
> autocomplete="off" value="" data-provide="typeahead"
> data-minlength="1" data-autocomplete="off" type="text">
> + </div>
> + <button id="machine-change-btn" class="btn btn-default"
> type="button">Save</button>
> + <a href="#" id="cancel-machine-change" class="btn
> btn-link">Cancel</a>
> + <span class="help-block text-danger"
> id="invalid-machine-name-help" style="display:none">A valid machine
> name cannot include spaces.</span>
> + <p class="form-link"><a href="{% url 'projectmachines'
> project.id %}">View compatible machines</a></p>
> + </form>
> + </div>
> +
> + <div class="well well-transparent" id="distro-section">
> + <h3>Distro</h3>
> +
> + <p class="lead"><span id="project-distro-name"></span> <span
> class="glyphicon glyphicon-edit"
> id="change-distro-toggle"></span></p> +
> + <form id="select-distro-form" style="display:none;"
> class="form-inline">
> + <span class="help-block">Distro suggestions come from the
> Layer Index</a></span>
> + <div class="form-group">
> + <input class="form-control" id="distro-change-input"
> autocomplete="off" value="" data-provide="typeahead"
> data-minlength="1" data-autocomplete="off" type="text">
> + </div>
> + <button id="distro-change-btn" class="btn btn-default"
> type="button">Save</button>
> + <a href="#" id="cancel-distro-change" class="btn
> btn-link">Cancel</a>
> + <p class="form-link"><a href="{% url 'projectdistros'
> project.id %}">View compatible distros</a></p>
> + </form>
> + </div>
> +
> + <div class="well well-transparent">
> + <h3>Most built recipes</h3>
> +
> + <div class="alert alert-info" style="display:none"
> id="no-most-built">
> + <h4>You haven't built any recipes yet</h4>
> + <p class="form-link"><a href="{% url 'projectimagerecipes'
> project.id %}">Choose a recipe to build</a></p>
> + </div>
> +
> + <ul class="list-unstyled lead" id="freq-build-list">
> + </ul>
> + <button class="btn btn-primary" id="freq-build-btn"
> disabled="disabled">Build selected recipes</button>
> + </div>
> +
> + <div class="well well-transparent">
> + <h3>Project release</h3>
> +
> + <p class="lead"><span id="project-release-title"></span>
> +
> + <!-- Comment out the ability to change the project release,
> until we decide what to do with this functionality --> +
> + <!--i title="" data-original-title=""
> id="release-change-toggle" class="icon-pencil"></i-->
> + </p>
> +
> + <!-- Comment out the ability to change the project release,
> until we decide what to do with this functionality --> +
> + <!--form class="form-inline" id="change-release-form"
> style="display:none;">
> + <select></select>
> + <button class="btn" style="margin-left:5px;"
> id="change-release-btn">Change</button> <a href="#"
> id="cancel-release-change" class="btn btn-link">Cancel</a>
> + </form-->
> + </div>
> + </div>
> +
> + <div class="col-md-6">
> + <div class="well well-transparent" id="layer-container">
> +      <h3>Layers <span class="counter">(<span id="project-layers-count"></span>)</span>
> +        <span title="OpenEmbedded organises recipes and machines into thematic groups called <strong>layers</strong>. Click on a layer name to see the recipes and machines it includes." class="glyphicon glyphicon-question-sign get-help"></span>
> + </h3>
> +
> + <div class="alert alert-warning" id="no-layers-in-project"
> style="display:none">
> + <h4>This project has no layers</h4>
> + In order to build this project you need to add some layers
> first. For that you can:
> + <ul>
> + <li><a href="{% url 'projectlayers' project.id %}">Choose
> from the layers compatible with this project</a></li>
> + <li><a href="{% url 'importlayer' project.id %}">Import a
> layer</a></li>
> + <li><a
> href="http://www.yoctoproject.org/docs/current/dev-manual/dev-manual.html#understanding-and-creating-layers"
> target="_blank">Read about layers in the documentation</a></li>
> + <li>Or type a layer name below</li>
> + </ul>
> + </div>
> +
> + <form class="form-inline">
> + <div class="form-group">
> + <input id="layer-add-input" class="form-control"
> autocomplete="off" placeholder="Type a layer name" data-minlength="1"
> data-autocomplete="off" data-provide="typeahead" data-source=""
> type="text">
> + </div>
> + <button id="add-layer-btn" class="btn btn-default"
> disabled>Add layer</button>
> + <p class="form-link">
> + <a href="{% url 'projectlayers' project.id %}"
> id="view-compatible-layers">View compatible layers</a>
> + <span class="text-muted">|</span>
> + <a href="{% url 'importlayer' project.id %}">Import
> layer</a>
> + </p>
> + </form>
> +
> + <ul class="list-unstyled lead" id="layers-in-project-list">
> + </ul>
> + </div>
> +
> +</div>
> +{% endblock %}
> diff --git a/bitbake/lib/toaster/toastergui/templates/project_specific_topbar.html b/bitbake/lib/toaster/toastergui/templates/project_specific_topbar.html
> new file mode 100644
> index 0000000..622787c
> --- /dev/null
> +++ b/bitbake/lib/toaster/toastergui/templates/project_specific_topbar.html
> @@ -0,0 +1,80 @@
> +{% load static %}
> +<script src="{% static 'js/projecttopbar.js' %}"></script>
> +<script>
> + $(document).ready(function () {
> + var ctx = {
> +      numProjectLayers : {{project.get_project_layer_versions.count}},
> +      machine : "{{project.get_current_machine_name|default_if_none:""}}",
> + }
> +
> + try {
> + projectTopBarInit(ctx);
> + } catch (e) {
> +      document.write("Sorry, An error has occurred loading this page (pstb):"+e);
> + console.warn(e);
> + }
> + });
> +</script>
> +
> +<div class="col-md-12">
> + <div class="alert alert-success alert-dismissible
> change-notification" id="project-created-notification"
> style="display:none">
> + <button type="button" class="close"
> data-dismiss="alert">×</button>
> + <p>Your project <strong>{{project.name}}</strong>
> has been created. You can now <a class="alert-link" href="{% url
> 'projectmachines' project.id %}">select your target machine</a> and
> <a class="alert-link" href="{% url 'projectimagerecipes' project.id
> %}">choose image recipes</a> to build.</p>
> + </div>
> + <!-- project name -->
> + <div class="page-header">
> + <h1 id="project-name-container">
> + <span class="project-name">{{project.name}}</span>
> + {% if project.is_default %}
> + <span class="glyphicon glyphicon-question-sign get-help"
> title="This project shows information about the builds you start from
> the command line while Toaster is running"></span>
> + {% endif %}
> + </h1>
> + <form id="project-name-change-form" class="form-inline"
> style="display: none;">
> + <div class="form-group">
> + <input class="form-control input-lg" type="text"
> id="project-name-change-input" autocomplete="off"
> value="{{project.name}}">
> + </div>
> + <button id="project-name-change-btn" class="btn btn-default
> btn-lg" type="button">Save</button>
> + <a href="#" id="project-name-change-cancel" class="btn btn-lg
> btn-link">Cancel</a>
> + </form>
> + </div>
> +
> + {% with mrb_type='project' %}
> + {% include "mrb_section.html" %}
> + {% endwith %}
> +
> + {% if not project.is_default %}
> + <div id="project-topbar">
> + <ul class="nav nav-tabs">
> + <li id="topbar-configuration-tab">
> + <a href="{% url 'project_specific' project.id %}">
> + Configuration
> + </a>
> + </li>
> + <li>
> + <a href="{% url 'importlayer' project.id %}">
> + Import layer
> + </a>
> + </li>
> + <li>
> + <a href="{% url 'newcustomimage' project.id %}">
> + New custom image
> + </a>
> + </li>
> + <li class="pull-right">
> + <form class="form-inline">
> + <div class="form-group">
> + <span class="glyphicon glyphicon-question-sign get-help"
> data-placement="left" title="Type the name of one or more recipes you
> want to build, separated by a space. You can also specify a task by
> appending a colon and a task name to the recipe name, like so:
> <code>busybox:clean</code>"></span>
> + <input id="build-input" type="text"
> class="form-control input-lg" placeholder="Select the default image
> recipe" autocomplete="off" disabled
> value="{{project.get_default_image}}">
> + </div>
> + {% if project.get_is_new %}
> + <button id="update-project-button" class="btn
> btn-primary btn-lg" data-project-id="{{project.id}}">Prepare
> Project</button>
> + {% else %}
> + <button id="cancel-project-button" class="btn info
> btn-lg" data-project-id="{{project.id}}">Cancel</button>
> + <button id="update-project-button" class="btn
> btn-primary btn-lg" data-project-id="{{project.id}}">Update</button>
> + {% endif %}
> + </form>
> + </li>
> + </ul>
> + </div>
> + {% endif %}
> +</div>
> diff --git a/bitbake/lib/toaster/toastergui/templates/projectconf.html b/bitbake/lib/toaster/toastergui/templates/projectconf.html
> index 933c588..fb20b26 100644
> --- a/bitbake/lib/toaster/toastergui/templates/projectconf.html
> +++ b/bitbake/lib/toaster/toastergui/templates/projectconf.html
> @@ -1,4 +1,4 @@
> -{% extends "baseprojectpage.html" %}
> +{% extends project_specific|yesno:"baseprojectspecificpage.html,baseprojectpage.html" %}
>  {% load projecttags %}
>  {% load humanize %}
>
> @@ -438,8 +438,11 @@ function onEditPageUpdate(data) {
> var_context='m';
> }
> }
> + if (configvars_sorted[i][0].startsWith("INTERNAL_")) {
> + var_context='m';
> + }
> if (var_context == undefined) {
> -        orightml += '<dt><span id="config_var_entry_'+configvars_sorted[i][2]+'" class="js-config-var-name"></span><span class="glyphicon glyphicon-trash js-icon-trash-config_var" id="config_var_trash_'+configvars_sorted[i][2]+'" x-data="'+configvars_sorted[i][2]+'"></span> </dt>'
> +        orightml += '<dt><span id="config_var_entry_'+configvars_sorted[i][2]+'" class="js-config-var-name"></span><span class="glyphicon glyphicon-trash js-icon-trash-config_var" id="config_var_trash_'+configvars_sorted[i][2]+'" x-data="'+configvars_sorted[i][2]+'"></span> </dt>'
>          orightml += '<dd class="variable-list">'
>          orightml += '  <span class="lead" id="config_var_value_'+configvars_sorted[i][2]+'"></span>'
>          orightml += '  <span class="glyphicon glyphicon-edit js-icon-pencil-config_var" x-data="'+configvars_sorted[i][2]+'"></span>'
> diff --git
> a/bitbake/lib/toaster/toastergui/templates/recipe.html b/bitbake/lib/toaster/toastergui/templates/recipe.html
> index bf2cd71..3f76e65 100644
> --- a/bitbake/lib/toaster/toastergui/templates/recipe.html
> +++ b/bitbake/lib/toaster/toastergui/templates/recipe.html
> @@ -176,7 +176,7 @@
>                      <td>{{task.get_executed_display}}</td>
> <td>{{task.get_outcome_display}}
> - {% if task.outcome = task.OUTCOME_FAILED %}
> + {% if task.outcome == task.OUTCOME_FAILED %}
> <a href="{% url 'build_artifact'
> build.pk "tasklogfile" task.pk %}"> <span class="glyphicon
> glyphicon-download-alt get-help" title="Download task log
> diff --git
> a/bitbake/lib/toaster/toastergui/templates/recipe_add_btn.html
> b/bitbake/lib/toaster/toastergui/templates/recipe_add_btn.html new
> file mode 100644 index 0000000..06c4645 --- /dev/null
> +++ b/bitbake/lib/toaster/toastergui/templates/recipe_add_btn.html
> @@ -0,0 +1,23 @@
> +<a data-recipe-name="{{data.name}}" class="btn btn-default btn-block
> layer-exists-{{data.layer_version.pk}} set-default-recipe-btn"
> style="margin-top: 5px;
> + {% if data.layer_version.pk not in extra.current_layers %}
> + display:none;
> + {% endif %}"
> + >
> + Set recipe
> +</a>
> +<a class="btn btn-default btn-block layerbtn
> layer-add-{{data.layer_version.pk}}"
> + data-layer='{
> + "id": {{data.layer_version.pk}},
> + "name": "{{data.layer_version.layer.name}}",
> + "layerdetailurl": "{%url "layerdetails" extra.pid
> data.layer_version.pk%}",
> + "xhrLayerUrl": "{% url "xhr_layer" extra.pid
> data.layer_version.pk %}"
> + }' data-directive="add"
> + {% if data.layer_version.pk in extra.current_layers %}
> + style="display:none;"
> + {% endif %}
> +>
> + <span class="glyphicon glyphicon-plus"></span>
> + Add layer
> + <span class="glyphicon glyphicon-question-sign get-help" title="To
> set this
> + recipe you must first add the
> {{data.layer_version.layer.name}} layer to your project"></i> +</a>
> diff --git a/bitbake/lib/toaster/toastergui/urls.py b/bitbake/lib/toaster/toastergui/urls.py
> index e07b0ef..dc03e30 100644
> --- a/bitbake/lib/toaster/toastergui/urls.py
> +++ b/bitbake/lib/toaster/toastergui/urls.py
> @@ -116,6 +116,11 @@ urlpatterns = [
> tables.ProjectBuildsTable.as_view(template_name="projectbuilds-toastertable.html"),
> name='projectbuilds'),
>
> +    url(r'^newproject_specific/(?P<pid>\d+)/$', views.newproject_specific, name='newproject_specific'),
> +    url(r'^project_specific/(?P<pid>\d+)/$', views.project_specific, name='project_specific'),
> +    url(r'^landing_specific/(?P<pid>\d+)/$', views.landing_specific, name='landing_specific'),
> +    url(r'^landing_specific_cancel/(?P<pid>\d+)/$', views.landing_specific_cancel, name='landing_specific_cancel'),
> +
> # the import layer is a project-specific functionality;
> url(r'^project/(?P<pid>\d+)/importlayer$',
> views.importlayer, name='importlayer'),
> @@ -233,6 +238,14 @@ urlpatterns = [
> api.XhrBuildRequest.as_view(),
> name='xhr_buildrequest'),
>
> + url(r'^xhr_projectupdate/project/(?P<pid>\d+)$',
> + api.XhrProjectUpdate.as_view(),
> + name='xhr_projectupdate'),
> +
> + url(r'^xhr_setdefaultimage/project/(?P<pid>\d+)$',
> + api.XhrSetDefaultImageUrl.as_view(),
> + name='xhr_setdefaultimage'),
> +
> url(r'xhr_project/(?P<project_id>\d+)$',
> api.XhrProject.as_view(),
> name='xhr_project'),
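
[For reference, the new routes reverse like the existing ones; assuming the
standard /toastergui/ mount point, roughly:

    from django.urls import reverse  # django.core.urlresolvers on older Django

    reverse('project_specific', args=(1,))   # -> /toastergui/project_specific/1/
    reverse('xhr_projectupdate', args=(1,))  # -> /toastergui/xhr_projectupdate/project/1
]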
> diff --git a/bitbake/lib/toaster/toastergui/views.py b/bitbake/lib/toaster/toastergui/views.py
> old mode 100755
> new mode 100644
> index 34ed2b2..c712b06
> --- a/bitbake/lib/toaster/toastergui/views.py
> +++ b/bitbake/lib/toaster/toastergui/views.py
> @@ -25,6 +25,7 @@ import re
> from django.db.models import F, Q, Sum
> from django.db import IntegrityError
> from django.shortcuts import render, redirect, get_object_or_404
> +from django.utils.http import urlencode
>  from orm.models import Build, Target, Task, Layer, Layer_Version, Recipe
>  from orm.models import LogMessage, Variable, Package_Dependency, Package
>  from orm.models import Task_Dependency, Package_File
> @@ -51,6 +52,7 @@ logger = logging.getLogger("toaster")
>
> # Project creation and managed build enable
> project_enable = ('1' == os.environ.get('TOASTER_BUILDSERVER'))
> +is_project_specific = ('1' == os.environ.get('TOASTER_PROJECTSPECIFIC'))
> class MimeTypeFinder(object):
> # setting this to False enables additional non-standard mimetypes
> @@ -70,6 +72,7 @@ class MimeTypeFinder(object):
> # single point to add global values into the context before rendering
> def toaster_render(request, page, context):
> context['project_enable'] = project_enable
> + context['project_specific'] = is_project_specific
> return render(request, page, context)
>
>
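
[TOASTER_PROJECTSPECIFIC follows the same '1'-valued convention as
TOASTER_BUILDSERVER, so the mode is toggled purely from the environment; in
isolation:

    import os

    def env_flag(name):
        # '1' enables the flag; anything else, or unset, disables it
        return '1' == os.environ.get(name)

    os.environ['TOASTER_PROJECTSPECIFIC'] = '1'
    assert env_flag('TOASTER_PROJECTSPECIFIC')
]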
> @@ -1395,6 +1398,86 @@ if True:
> mandatory_fields = ['projectname', 'ptype']
> try:
> ptype = request.POST.get('ptype')
> + if ptype == "import":
> + mandatory_fields.append('importdir')
> + else:
> + mandatory_fields.append('projectversion')
> + # make sure we have values for all mandatory_fields
> + missing = [field for field in mandatory_fields if
> len(request.POST.get(field, '')) == 0]
> + if missing:
> + # set alert for missing fields
> + raise BadParameterException("Fields missing: %s"
> % ", ".join(missing)) +
> + if not request.user.is_authenticated():
> + user = authenticate(username =
> request.POST.get('username', '_anonuser'), password = 'nopass')
> + if user is None:
> + user = User.objects.create_user(username =
> request.POST.get('username', '_anonuser'), email =
> request.POST.get('email', ''), password = "nopass") +
> + user = authenticate(username =
> user.username, password = 'nopass')
> + login(request, user)
> +
> + # save the project
> + if ptype == "import":
> + if not os.path.isdir('%s/conf' %
> request.POST['importdir']):
> + raise BadParameterException("Bad path or
> missing 'conf' directory (%s)" % request.POST['importdir'])
> + from django.core import management
> + management.call_command('buildimport',
> '--command=import', '--name=%s' % request.POST['projectname'],
> '--path=%s' % request.POST['importdir'], interactive=False)
> + prj = Project.objects.get(name =
> request.POST['projectname'])
> + prj.merged_attr = True
> + prj.save()
> + else:
> + release = Release.objects.get(pk =
> request.POST.get('projectversion', None ))
> + prj = Project.objects.create_project(name =
> request.POST['projectname'], release = release)
> + prj.user_id = request.user.pk
> + if 'mergeattr' == request.POST.get('mergeattr',
> ''):
> + prj.merged_attr = True
> + prj.save()
> +
> + return redirect(reverse(project, args=(prj.pk,)) +
> "?notify=new-project") +
> + except (IntegrityError, BadParameterException) as e:
> + # fill in page with previously submitted values
> + for field in mandatory_fields:
> + context.__setitem__(field,
> request.POST.get(field, "-- missing"))
> + if isinstance(e, IntegrityError) and "username" in
> str(e):
> + context['alert'] = "Your chosen username is
> already used"
> + else:
> + context['alert'] = str(e)
> + return toaster_render(request, template, context)
> +
> + raise Exception("Invalid HTTP method for this page")
> +
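
[The import branch above hands the heavy lifting to the new buildimport
management command added at the end of this patch; it can be exercised the
same way from code, e.g. with hypothetical name/path values:

    from django.core import management

    management.call_command('buildimport', '--command=import',
                            '--name=demo', '--path=/work/build',
                            interactive=False)
]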
> + # new project
> + def newproject_specific(request, pid):
> + if not project_enable:
> + return redirect( landing )
> +
> + project = Project.objects.get(pk=pid)
> + template = "newproject_specific.html"
> + context = {
> +            'email': request.user.email if request.user.is_authenticated() else '',
> +            'username': request.user.username if request.user.is_authenticated() else '',
> + 'releases': Release.objects.order_by("description"),
> + 'projectname': project.name,
> + 'project_pk': project.pk,
> + }
> +
> +        # WORKAROUND: if we already know release, redirect 'newproject_specific' to 'project_specific'
> +        if '1' == project.get_variable('INTERNAL_PROJECT_SPECIFIC_SKIPRELEASE'):
> +            return redirect(reverse(project_specific, args=(project.pk,)))
> +
> + try:
> +            context['defaultbranch'] = ToasterSetting.objects.get(name = "DEFAULT_RELEASE").value
> + except ToasterSetting.DoesNotExist:
> + pass
> +
> + if request.method == "GET":
> + # render new project page
> + return toaster_render(request, template, context)
> + elif request.method == "POST":
> + mandatory_fields = ['projectname', 'ptype']
> + try:
> + ptype = request.POST.get('ptype')
> if ptype == "build":
> mandatory_fields.append('projectversion')
> # make sure we have values for all mandatory_fields
> @@ -1417,10 +1500,10 @@ if True:
>                  else:
>                      release = Release.objects.get(pk = request.POST.get('projectversion', None ))
> -                    prj = Project.objects.create_project(name = request.POST['projectname'], release = release)
> +                    prj = Project.objects.create_project(name = request.POST['projectname'], release = release, existing_project = project)
>                      prj.user_id = request.user.pk
>                      prj.save()
> -                return redirect(reverse(project, args=(prj.pk,)) + "?notify=new-project")
> +                return redirect(reverse(project_specific, args=(prj.pk,)) + "?notify=new-project")
> 
>              except (IntegrityError, BadParameterException) as e:
>                  # fill in page with previously submitted values
> @@ -1437,9 +1520,87 @@ if True:
> # Shows the edit project page
> def project(request, pid):
> project = Project.objects.get(pk=pid)
> +
> + if '1' == os.environ.get('TOASTER_PROJECTSPECIFIC'):
> + if request.GET:
> + #Example:request.GET=<QueryDict: {'setMachine':
> ['qemuarm']}>
> + params =
> urlencode(request.GET).replace('%5B%27','').replace('%27%5D','')
> + return redirect("%s?%s" % (reverse(project_specific,
> args=(project.pk,)),params))
> + else:
> + return redirect(reverse(project_specific,
> args=(project.pk,))) context = {"project": project}
> return toaster_render(request, "project.html", context)
>
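
[About the urlencode() cleanup above: encoding a QueryDict whose values are
lists stringifies them, so 'setMachine' comes out as %5B%27qemuarm%27%5D
(i.e. "['qemuarm']"); the two replace() calls strip the bracket/quote
wrappers. The same effect in plain Python, as a sketch:

    from urllib.parse import urlencode

    params = urlencode({'setMachine': ['qemuarm']})
    # 'setMachine=%5B%27qemuarm%27%5D'
    params = params.replace('%5B%27', '').replace('%27%5D', '')
    # 'setMachine=qemuarm'
]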
> + # Shows the edit project-specific page
> + def project_specific(request, pid):
> + project = Project.objects.get(pk=pid)
> +
> +        # Are we refreshing from a successful project specific update clone?
> +        if Project.PROJECT_SPECIFIC_CLONING_SUCCESS == project.get_variable(Project.PROJECT_SPECIFIC_STATUS):
> +            return redirect(reverse(landing_specific, args=(project.pk,)))
> +
> +        context = {
> +            "project": project,
> +            "is_new" : project.get_variable(Project.PROJECT_SPECIFIC_ISNEW),
> +            "default_image_recipe" : project.get_variable(Project.PROJECT_SPECIFIC_DEFAULTIMAGE),
> +            "mru" : Build.objects.all().filter(project=project,outcome=Build.IN_PROGRESS),
> +        }
> +        if project.build_set.filter(outcome=Build.IN_PROGRESS).count() > 0:
> + context['build_in_progress_none_completed'] = True
> + else:
> + context['build_in_progress_none_completed'] = False
> + return toaster_render(request, "project.html", context)
> +
> + # perform the final actions for the project specific page
> + def project_specific_finalize(cmnd, pid):
> + project = Project.objects.get(pk=pid)
> +        callback = project.get_variable(Project.PROJECT_SPECIFIC_CALLBACK)
> +        if "update" == cmnd:
> +            # Delete all '_PROJECT_PREPARE_' builds
> +            for b in Build.objects.all().filter(project=project):
> +                delete_build = False
> +                for t in b.target_set.all():
> +                    if '_PROJECT_PREPARE_' == t.target:
> +                        delete_build = True
> +                if delete_build:
> +                    from django.core import management
> +                    management.call_command('builddelete', str(b.id), interactive=False)
> +            # perform callback at this last moment if defined, in case Toaster gets shutdown next
> +            default_target = project.get_variable(Project.PROJECT_SPECIFIC_DEFAULTIMAGE)
> +            if callback:
> +                callback = callback.replace("<IMAGE>",default_target)
> +        if "cancel" == cmnd:
> +            if callback:
> +                callback = callback.replace("<IMAGE>","none")
> +                callback = callback.replace("--update","--cancel")
> +        # perform callback at this last moment if defined, in case this Toaster gets shutdown next
> +        ret = ''
> +        if callback:
> +            ret = os.system('bash -c "%s"' % callback)
> +            project.set_variable(Project.PROJECT_SPECIFIC_CALLBACK,'')
> +        # Delete the temp project specific variables
> +        project.set_variable(Project.PROJECT_SPECIFIC_ISNEW,'')
> +        project.set_variable(Project.PROJECT_SPECIFIC_STATUS,Project.PROJECT_SPECIFIC_NONE)
> +        # WORKAROUND: Release this workaround flag
> +        project.set_variable('INTERNAL_PROJECT_SPECIFIC_SKIPRELEASE','')
> +
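
[The callback contract, isolated for clarity: on "update" the literal token
<IMAGE> is replaced with the project's default image target before the shell
callback runs; on "cancel" it becomes "none" and --update is rewritten to
--cancel. With a hypothetical callback value:

    callback = "/path/to/notify.sh --image=<IMAGE> --update"
    default_target = "core-image-minimal"

    print(callback.replace("<IMAGE>", default_target))
    # /path/to/notify.sh --image=core-image-minimal --update
    print(callback.replace("<IMAGE>", "none").replace("--update", "--cancel"))
    # /path/to/notify.sh --image=none --cancel
]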
> + # Shows the final landing page for project specific update
> + def landing_specific(request, pid):
> + project_specific_finalize("update", pid)
> + context = {
> + "install_dir": os.environ['TOASTER_DIR'],
> + }
> +        return toaster_render(request, "landing_specific.html", context)
> +
> + # Shows the related landing-specific page
> + def landing_specific_cancel(request, pid):
> + project_specific_finalize("cancel", pid)
> + context = {
> + "install_dir": os.environ['TOASTER_DIR'],
> + "status": "cancel",
> + }
> +        return toaster_render(request, "landing_specific.html", context)
> +
> def jsunittests(request):
> """ Provides a page for the js unit tests """
> bbv = BitbakeVersion.objects.filter(branch="master").first()
> diff --git a/bitbake/lib/toaster/toastergui/widgets.py b/bitbake/lib/toaster/toastergui/widgets.py
> index a1792d9..db5c3aa 100644
> --- a/bitbake/lib/toaster/toastergui/widgets.py
> +++ b/bitbake/lib/toaster/toastergui/widgets.py
> @@ -89,6 +89,10 @@ class ToasterTable(TemplateView):
>
> # global variables
>          context['project_enable'] = ('1' == os.environ.get('TOASTER_BUILDSERVER'))
> +        try:
> +            context['project_specific'] = ('1' == os.environ.get('TOASTER_PROJECTSPECIFIC'))
> + except:
> + context['project_specific'] = ''
>
> return context
>
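
[Minor nit on the hunk above: os.environ.get() never raises for a missing
key, so the try/except adds nothing and the assignment could mirror the
project_enable line directly:

    context['project_specific'] = ('1' == os.environ.get('TOASTER_PROJECTSPECIFIC'))
]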
> @@ -511,13 +515,20 @@ class MostRecentBuildsView(View):
> buildrequest_id = build_obj.buildrequest.pk
> build['buildrequest_id'] = buildrequest_id
>
> - build['recipes_parsed_percentage'] = \
> - int((build_obj.recipes_parsed /
> - build_obj.recipes_to_parse) * 100)
> + if build_obj.recipes_to_parse > 0:
> + build['recipes_parsed_percentage'] = \
> + int((build_obj.recipes_parsed /
> + build_obj.recipes_to_parse) * 100)
> + else:
> + build['recipes_parsed_percentage'] = 0
> + if build_obj.repos_to_clone > 0:
> + build['repos_cloned_percentage'] = \
> + int((build_obj.repos_cloned /
> + build_obj.repos_to_clone) * 100)
> + else:
> + build['repos_cloned_percentage'] = 0
>
> - build['repos_cloned_percentage'] = \
> - int((build_obj.repos_cloned /
> - build_obj.repos_to_clone) * 100)
> + build['progress_item'] = build_obj.progress_item
>
> tasks_complete_percentage = 0
> if build_obj.outcome in (Build.SUCCEEDED, Build.FAILED):
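
[Both counters now guard against a zero denominator, which happens when a
build is polled before any recipes or repos are registered. The guarded
computation, factored out as a sketch:

    def percentage(done, total):
        # avoid ZeroDivisionError while the totals are still unknown
        return int((done / total) * 100) if total > 0 else 0

    percentage(3, 12)  # -> 25
    percentage(0, 0)   # -> 0
]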
> diff --git a/bitbake/lib/toaster/toastermain/management/commands/builddelete.py b/bitbake/lib/toaster/toastermain/management/commands/builddelete.py
> index 0bef8d4..bf69a8f 100644
> --- a/bitbake/lib/toaster/toastermain/management/commands/builddelete.py
> +++ b/bitbake/lib/toaster/toastermain/management/commands/builddelete.py
> @@ -10,8 +10,12 @@ class Command(BaseCommand):
>      args = '<buildID1 buildID2 .....>'
>      help = "Deletes selected build(s)"
> + def add_arguments(self, parser):
> +        parser.add_argument('buildids', metavar='N', type=int, nargs='+',
> +                            help="Build IDs to delete")
> +
> def handle(self, *args, **options):
> - for bid in args:
> + for bid in options['buildids']:
> try:
> b = Build.objects.get(pk = bid)
> except ObjectDoesNotExist:
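
[With add_arguments() in place the build IDs arrive pre-parsed as ints in
options['buildids'] instead of raw *args, matching Django >= 1.8 command
conventions. Invocation stays the same on the command line and from code:

    from django.core import management

    # equivalent to: ./manage.py builddelete 12 13
    management.call_command('builddelete', '12', '13')
]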
> diff --git a/bitbake/lib/toaster/toastermain/management/commands/buildimport.py b/bitbake/lib/toaster/toastermain/management/commands/buildimport.py
> new file mode 100644
> index 0000000..2d57ab5
> --- /dev/null
> +++ b/bitbake/lib/toaster/toastermain/management/commands/buildimport.py
> @@ -0,0 +1,584 @@
> +#
> +# ex:ts=4:sw=4:sts=4:et
> +# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
> +#
> +# BitBake Toaster Implementation
> +#
> +# Copyright (C) 2018 Wind River Systems
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License version 2 as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License along
> +# with this program; if not, write to the Free Software Foundation, Inc.,
> +# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
> +
> +# buildimport: import a project for project specific configuration
> +#
> +# Usage:
> +# (a) Set up Toaster environment
> +#
> +# (b) Call buildimport
> +# $ /path/to/bitbake/lib/toaster/manage.py buildimport \
> +# --name=$PROJECTNAME \
> +# --path=$BUILD_DIRECTORY \
> +# --callback="$CALLBACK_SCRIPT" \
> +# --command="configure|reconfigure|import"
> +#
> +# (c) Return is "|Default_image=%s|Project_id=%d"
> +#
> +# (d) Open Toaster to this project using for example:
> +#    $ xdg-open http://localhost:$toaster_port/toastergui/project_specific/$project_id
> +#
> +# (e) To delete a project:
> +# $ /path/to/bitbake/lib/toaster/manage.py buildimport \
> +# --name=$PROJECTNAME --delete-project
> +#
> +
> +
> +# ../bitbake/lib/toaster/manage.py buildimport --name=test --path=`pwd` --callback="" --command=import
> +
> +from django.core.management.base import BaseCommand, CommandError
> +from django.core.exceptions import ObjectDoesNotExist
> +from orm.models import ProjectManager, Project, Release, ProjectVariable
> +from orm.models import Layer, Layer_Version, LayerSource, ProjectLayer
> +from toastergui.api import scan_layer_content
> +from django.db import OperationalError
> +
> +import os
> +import re
> +import os.path
> +import subprocess
> +import shutil
> +
> +# Toaster variable section delimiters
> +TOASTER_PROLOG = '#=== TOASTER_CONFIG_PROLOG ==='
> +TOASTER_EPILOG = '#=== TOASTER_CONFIG_EPILOG ==='
> +
> +# quick development/debugging support
> +verbose = 2
> +def _log(msg):
> + if 1 == verbose:
> + print(msg)
> + elif 2 == verbose:
> + f1=open('/tmp/toaster.log', 'a')
> + f1.write("|" + msg + "|\n" )
> + f1.close()
> +
> +
> +__config_regexp__ = re.compile( r"""
> + ^
> + (?P<exp>export\s+)?
> + (?P<var>[a-zA-Z0-9\-_+.${}/~]+?)
> + (\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
> +
> + \s* (
> + (?P<colon>:=) |
> + (?P<lazyques>\?\?=) |
> + (?P<ques>\?=) |
> + (?P<append>\+=) |
> + (?P<prepend>=\+) |
> + (?P<predot>=\.) |
> + (?P<postdot>\.=) |
> + =
> + ) \s*
> +
> + (?!'[^']*'[^']*'$)
> + (?!\"[^\"]*\"[^\"]*\"$)
> + (?P<apo>['\"])
> + (?P<value>.*)
> + (?P=apo)
> + $
> + """, re.X)
> +
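
[The assignment matcher mirrors BitBake's ConfHandler regexp: the named
groups pick out the variable, the operator flavour, and the quoted value. A
quick check against typical conf lines:

    m = __config_regexp__.match('MACHINE ?= "qemux86"')
    print(m.group('var'), m.group('value'))   # MACHINE qemux86

    m = __config_regexp__.match('IMAGE_INSTALL_append = " dropbear"')
    print(m.group('var'), m.group('value'))   # IMAGE_INSTALL_append  dropbear
]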
> +class Command(BaseCommand):
> + args = "<name> <path> <release>"
> + help = "Import a command line build directory"
> + vars = {}
> + toaster_vars = {}
> +
> + def add_arguments(self, parser):
> + parser.add_argument(
> + '--name', dest='name', required=True,
> + help='name of the project',
> + )
> + parser.add_argument(
> + '--path', dest='path', required=True,
> + help='path to the project',
> + )
> + parser.add_argument(
> + '--release', dest='release', required=False,
> + help='release for the project',
> + )
> + parser.add_argument(
> + '--callback', dest='callback', required=False,
> + help='callback for project config update',
> + )
> + parser.add_argument(
> + '--delete-project', dest='delete_project',
> required=False,
> + help='delete this project from the database',
> + )
> + parser.add_argument(
> + '--command', dest='command', required=False,
> + help='command (configure,reconfigure,import)',
> + )
> +
> + # Extract the bb variables from a conf file
> + def scan_conf(self,fn):
> + vars = self.vars
> + toaster_vars = self.toaster_vars
> +
> + #_log("scan_conf:%s" % fn)
> + if not os.path.isfile(fn):
> + return
> + f = open(fn, 'r')
> +
> + #statements = ast.StatementGroup()
> + lineno = 0
> + is_toaster_section = False
> + while True:
> + lineno = lineno + 1
> + s = f.readline()
> + if not s:
> + break
> + w = s.strip()
> + # skip empty lines
> + if not w:
> + continue
> + # evaluate Toaster sections
> + if w.startswith(TOASTER_PROLOG):
> + is_toaster_section = True
> + continue
> + if w.startswith(TOASTER_EPILOG):
> + is_toaster_section = False
> + continue
> + s = s.rstrip()
> + while s[-1] == '\\':
> + s2 = f.readline().strip()
> + lineno = lineno + 1
> + if (not s2 or s2 and s2[0] != "#") and s[0] == "#" :
> + echo("There is a confusing multiline, partially
> commented expression on line %s of file %s (%s).\nPlease clarify
> whether this is all a comment or should be parsed." % (lineno, fn, s))
> + s = s[:-1] + s2
> + # skip comments
> + if s[0] == '#':
> + continue
> + # process the line for just assignments
> + m = __config_regexp__.match(s)
> + if m:
> + groupd = m.groupdict()
> + var = groupd['var']
> + value = groupd['value']
> +
> + if groupd['lazyques']:
> + if not var in vars:
> + vars[var] = value
> + continue
> + if groupd['ques']:
> + if not var in vars:
> + vars[var] = value
> + continue
> + # preset empty blank for remaining operators
> + if not var in vars:
> + vars[var] = ''
> + if groupd['append']:
> + vars[var] += value
> + elif groupd['prepend']:
> + vars[var] = "%s%s" % (value,vars[var])
> + elif groupd['predot']:
> + vars[var] = "%s %s" % (value,vars[var])
> + elif groupd['postdot']:
> + vars[var] = "%s %s" % (vars[var],value)
> + else:
> + vars[var] = "%s" % (value)
> + # capture vars in a Toaster section
> + if is_toaster_section:
> + toaster_vars[var] = vars[var]
> +
> + # DONE WITH PARSING
> + f.close()
> + self.vars = vars
> + self.toaster_vars = toaster_vars
> +
> + # Update the scanned project variables
> + def update_project_vars(self,project,name):
> +        pv, create = ProjectVariable.objects.get_or_create(project = project, name = name)
> + if (not name in self.vars.keys()) or (not self.vars[name]):
> + self.vars[name] = pv.value
> + else:
> + if pv.value != self.vars[name]:
> + pv.value = self.vars[name]
> + pv.save()
> +
> + # Find the git version of the installation
> + def find_layer_dir_version(self,path):
> + # * rocko ...
> +
> + install_version = ''
> + cwd = os.getcwd()
> + os.chdir(path)
> +        p = subprocess.Popen(['git', 'branch', '-av'],
> +                             stdout=subprocess.PIPE,
> +                             stderr=subprocess.PIPE)
> + out, err = p.communicate()
> + out = out.decode("utf-8")
> + for branch in out.split('\n'):
> + if ('*' == branch[0:1]) and ('no branch' not in branch):
> + install_version = re.sub(' .*','',branch[2:])
> + break
> + if 'remotes/m/master' in branch:
> + install_version = re.sub('.*base/','',branch)
> + break
> + os.chdir(cwd)
> + return install_version
> +
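
[find_layer_dir_version() shells out to 'git branch -av' and scrapes the
checked-out branch. For the simple attached-HEAD case the same information
is available more directly; a hypothetical alternative that does not cover
the 'remotes/m/master' repo-tool case handled above:

    import subprocess

    def current_branch(path):
        out = subprocess.check_output(
            ['git', '-C', path, 'rev-parse', '--abbrev-ref', 'HEAD'])
        return out.decode('utf-8').strip()  # 'HEAD' when detached
]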
> +    # Compute table of the installation's registered layer versions (branch or commit)
> +    def find_layer_dir_versions(self,INSTALL_URL_PREFIX):
> +        lv_dict = {}
> +        layer_versions = Layer_Version.objects.all()
> +        for lv in layer_versions:
> +            layer = Layer.objects.filter(pk=lv.layer.pk)[0]
> +            if layer.vcs_url:
> +                url_short = layer.vcs_url.replace(INSTALL_URL_PREFIX,'')
> +            else:
> +                url_short = ''
> +            # register the core, branch, and the version variations
> +            lv_dict["%s,%s,%s" % (url_short,lv.dirpath,'')] = (lv.id,layer.name)
> +            lv_dict["%s,%s,%s" % (url_short,lv.dirpath,lv.branch)] = (lv.id,layer.name)
> +            lv_dict["%s,%s,%s" % (url_short,lv.dirpath,lv.commit)] = (lv.id,layer.name)
> +            #_log("  (%s,%s,%s|%s) = (%s,%s)" % (url_short,lv.dirpath,lv.branch,lv.commit,lv.id,layer.name))
> + return lv_dict
> +
> + # Apply table of all layer versions
> + def extract_bblayers(self):
> + # set up the constants
> + bblayer_str = self.vars['BBLAYERS']
> + TOASTER_DIR = os.environ.get('TOASTER_DIR')
> + INSTALL_CLONE_PREFIX = os.path.dirname(TOASTER_DIR) + "/"
> + TOASTER_CLONE_PREFIX = TOASTER_DIR + "/_toaster_clones/"
> + INSTALL_URL_PREFIX = ''
> + layers = Layer.objects.filter(name='openembedded-core')
> + for layer in layers:
> + if layer.vcs_url:
> + INSTALL_URL_PREFIX = layer.vcs_url
> + break
> + INSTALL_URL_PREFIX = INSTALL_URL_PREFIX.replace("/poky","/")
> + INSTALL_VERSION_DIR = TOASTER_DIR
> + INSTALL_URL_POSTFIX = INSTALL_URL_PREFIX.replace(':','_')
> + INSTALL_URL_POSTFIX = INSTALL_URL_POSTFIX.replace('/','_')
> + INSTALL_URL_POSTFIX = "%s_%s" %
> (TOASTER_CLONE_PREFIX,INSTALL_URL_POSTFIX) +
> + # get the set of available layer:layer_versions
> + lv_dict = self.find_layer_dir_versions(INSTALL_URL_PREFIX)
> +
> + # compute the layer matches
> + layers_list = []
> + for line in bblayer_str.split(' '):
> + if not line:
> + continue
> + if line.endswith('/local'):
> + continue
> +
> + # isolate the repo
> + layer_path = line
> +            line = line.replace(INSTALL_URL_POSTFIX,'').replace(INSTALL_CLONE_PREFIX,'').replace('/layers/','/').replace('/poky/','/')
> +
> + # isolate the sub-path
> + path_index = line.rfind('/')
> + if path_index > 0:
> + sub_path = line[path_index+1:]
> + line = line[0:path_index]
> + else:
> + sub_path = ''
> +
> + # isolate the version
> + if TOASTER_CLONE_PREFIX in layer_path:
> + is_toaster_clone = True
> + # extract version from name syntax
> + version_index = line.find('_')
> + if version_index > 0:
> + version = line[version_index+1:]
> + line = line[0:version_index]
> + else:
> + version = ''
> + _log("TOASTER_CLONE(%s/%s), version=%s" %
> (line,sub_path,version))
> + else:
> + is_toaster_clone = False
> + # version is from the installation
> + version = self.find_layer_dir_version(layer_path)
> + _log("LOCAL_CLONE(%s/%s), version=%s" %
> (line,sub_path,version)) +
> + # capture the layer information into layers_list
> +
> layers_list.append( (line,sub_path,version,layer_path,is_toaster_clone) )
> + return layers_list,lv_dict
> +
> +    # Choose the release for an imported project from its layer versions
> +    def find_import_release(self,layers_list,lv_dict,default_release):
> +        # poky,meta,rocko => 4;openembedded-core
> +        release = default_release
> +        for line,path,version,layer_path,is_toaster_clone in layers_list:
> +            key = "%s,%s,%s" % (line,path,version)
> +            if key in lv_dict:
> +                lv_id = lv_dict[key]
> +                if 'openembedded-core' == lv_id[1]:
> +                    _log("Find_import_release(%s):version=%s,Toaster=%s" % (lv_id[1],version,is_toaster_clone))
> +                    # only versions in Toaster-managed layers are accepted
> +                    if not is_toaster_clone:
> +                        break
> +                    try:
> +                        release = Release.objects.get(name=version)
> +                    except:
> +                        pass
> +                    break
> +        _log("Find_import_release:RELEASE=%s" % release.name)
> +        return release
> +
> +    # Apply the found conf layers
> +    def apply_conf_bblayers(self,layers_list,lv_dict,project,release=None):
> +        for line,path,version,layer_path,is_toaster_clone in layers_list:
> + # Assert release promote if present
> + if release:
> + version = release
> + # try to match the key to a layer_version
> + key = "%s,%s,%s" % (line,path,version)
> + key_short = "%s,%s,%s" % (line,path,'')
> + lv_id = ''
> + if key in lv_dict:
> + lv_id = lv_dict[key]
> + lv = Layer_Version.objects.get(pk=int(lv_id[0]))
> + pl,created =
> ProjectLayer.objects.get_or_create(project=project,
> + layercommit=lv)
> + pl.optional=False
> + pl.save()
> + _log(" %s => %s;%s" % (key,lv_id[0],lv_id[1]))
> + elif key_short in lv_dict:
> + lv_id = lv_dict[key_short]
> + lv = Layer_Version.objects.get(pk=int(lv_id[0]))
> + pl,created =
> ProjectLayer.objects.get_or_create(project=project,
> + layercommit=lv)
> + pl.optional=False
> + pl.save()
> + _log(" %s ?> %s" % (key,lv_dict[key_short]))
> + else:
> + _log("%s <= %s" % (key,layer_path))
> + found = False
> + # does local layer already exist in this project?
> + try:
> +                    for pl in ProjectLayer.objects.filter(project=project):
> +                        if pl.layercommit.layer.local_source_dir == layer_path:
> +                            found = True
> +                            _log("  Project Local Layer found!")
> +                except Exception as e:
> +                    _log("ERROR: Local Layer '%s'" % e)
> +                    pass
> +
> +                if not found:
> +                    # Does Layer name+path already exist?
> +                    try:
> +                        layer_name_base = os.path.basename(layer_path)
> +                        _log("Layer_lookup: try '%s','%s'" % (layer_name_base,layer_path))
> +                        layer = Layer.objects.get(name=layer_name_base,local_source_dir = layer_path)
> +                        # Found! Attach layer_version and ProjectLayer
> +                        layer_version = Layer_Version.objects.create(
> +                            layer=layer,
> +                            project=project,
> +                            layer_source=LayerSource.TYPE_IMPORTED)
> +                        layer_version.save()
> +                        pl,created = ProjectLayer.objects.get_or_create(project=project,
> +                                         layercommit=layer_version)
> +                        pl.optional=False
> +                        pl.save()
> +                        found = True
> +                        # add layer contents to this layer version
> +                        scan_layer_content(layer,layer_version)
> +                        _log("  Parent Local Layer found in db!")
> +                    except Exception as e:
> +                        _log("Layer_exists_test_failed: Local Layer '%s'" % e)
> +                        pass
> +
> +                if not found:
> +                    # Ensure that the layer path exists, in case of a user typo
> +                    if not os.path.isdir(layer_path):
> +                        _log("ERROR: Layer path '%s' not found" % layer_path)
> + continue
> + # Add layer to db and attach project to it
> + layer_name_base = os.path.basename(layer_path)
> + # generate a unique layer name
> + layer_name_matches = {}
> + for layer in
> Layer.objects.filter(name__contains=layer_name_base):
> + layer_name_matches[layer.name] = '1'
> + layer_name_idx = 0
> + layer_name_test = layer_name_base
> + while layer_name_test in
> layer_name_matches.keys():
> + layer_name_idx += 1
> + layer_name_test = "%s_%d" %
> (layer_name_base,layer_name_idx)
> + # create the layer and layer_verion objects
> + layer =
> Layer.objects.create(name=layer_name_test)
> + layer.local_source_dir = layer_path
> + layer_version = Layer_Version.objects.create(
> + layer=layer,
> + project=project,
> + layer_source=LayerSource.TYPE_IMPORTED)
> + layer.save()
> + layer_version.save()
> + pl,created =
> ProjectLayer.objects.get_or_create(project=project,
> +
> layercommit=layer_version)
> + pl.optional=False
> + pl.save()
> + # register the layer's content
> + _log(" Local Layer Add content")
> + scan_layer_content(layer,layer_version)
> + _log(" Local Layer Added '%s'!" %
> layer_name_test) +
> +    # Scan the project's conf files (if any)
> +    def scan_conf_variables(self,project_path):
> +        # scan the project's settings, add any new layers or variables
> +        if os.path.isfile("%s/conf/local.conf" % project_path):
> +            self.scan_conf("%s/conf/local.conf" % project_path)
> +            self.scan_conf("%s/conf/bblayers.conf" % project_path)
> +            # Import then disable old style Toaster conf files (before 'merged_attr')
> +            old_toaster_local = "%s/conf/toaster.conf" % project_path
> +            if os.path.isfile(old_toaster_local):
> +                self.scan_conf(old_toaster_local)
> +                shutil.move(old_toaster_local, old_toaster_local+"_old")
> +            old_toaster_layer = "%s/conf/toaster-bblayers.conf" % project_path
> +            if os.path.isfile(old_toaster_layer):
> +                self.scan_conf(old_toaster_layer)
> +                shutil.move(old_toaster_layer, old_toaster_layer+"_old")
> +
> +    # Apply the found conf variables (if any)
> +    def apply_conf_variables(self,project,layers_list,lv_dict,release=None):
> +        if self.vars:
> +            # Catch vars relevant to Toaster (in case no Toaster section)
> +            self.update_project_vars(project,'DISTRO')
> +            self.update_project_vars(project,'MACHINE')
> +            self.update_project_vars(project,'IMAGE_INSTALL_append')
> +            self.update_project_vars(project,'IMAGE_FSTYPES')
> +            self.update_project_vars(project,'PACKAGE_CLASSES')
> +            # These vars are typically only assigned by Toaster
> +            #self.update_project_vars(project,'DL_DIR')
> +            #self.update_project_vars(project,'SSTATE_DIR')
> +
> +        # Assert found Toaster vars
> +        for var in self.toaster_vars.keys():
> +            pv, create = ProjectVariable.objects.get_or_create(project = project, name = var)
> +            pv.value = self.toaster_vars[var]
> +            _log("* Add/update Toaster var '%s' = '%s'" % (pv.name,pv.value))
> +            pv.save()
> +
> +        # Assert found BBLAYERS
> +        if 0 < verbose:
> +            for pl in ProjectLayer.objects.filter(project=project):
> +                release_name = 'None' if not pl.layercommit.release else pl.layercommit.release.name
> +                print(" BEFORE:ProjectLayer=%s,%s,%s,%s" % (pl.layercommit.layer.name,release_name,pl.layercommit.branch,pl.layercommit.commit))
> +        self.apply_conf_bblayers(layers_list,lv_dict,project,release)
> +        if 0 < verbose:
> +            for pl in ProjectLayer.objects.filter(project=project):
> +                release_name = 'None' if not pl.layercommit.release else pl.layercommit.release.name
> +                print(" AFTER :ProjectLayer=%s,%s,%s,%s" % (pl.layercommit.layer.name,release_name,pl.layercommit.branch,pl.layercommit.commit))
> +
> +    def handle(self, *args, **options):
> +        project_name = options['name']
> +        project_path = options['path']
> +        project_callback = options['callback'] if options['callback'] else ''
> +        release_name = options['release'] if options['release'] else ''
> +
> +        #
> +        # Delete project
> +        #
> +
> +        if options['delete_project']:
> +            try:
> +                print("Project '%s' delete from Toaster database" % (project_name))
> +                project = Project.objects.get(name=project_name)
> +                # TODO: deep project delete
> +                project.delete()
> +                print("Project '%s' Deleted" % (project_name))
> +                return
> +            except Exception as e:
> +                print("Project '%s' not found, not deleted (%s)" % (project_name,e))
> +                return
> +
> +        #
> +        # Create/Update/Import project
> +        #
> +
> +        # See if project (by name) exists
> +        project = None
> +        try:
> +            # Project already exists
> +            project = Project.objects.get(name=project_name)
> +        except Exception as e:
> +            pass
> +
> +        # Find the installation's default release
> +        default_release = Release.objects.get(id=1)
> +
> +        # SANITY: if 'reconfig' but project does not exist (deleted externally), switch to 'import'
> +        if ("reconfigure" == options['command']) and (None == project):
> +            options['command'] = 'import'
> +
> +        # 'Configure':
> +        if "configure" == options['command']:
> +            # Note: ignore any existing conf files
> +            # create project, SANITY: reuse any project of same name
> +            project = Project.objects.create_project(project_name,default_release,project)
> +
> +        # 'Re-configure':
> +        if "reconfigure" == options['command']:
> +            # Scan the directory's conf files
> +            self.scan_conf_variables(project_path)
> +            # Scan the layer list
> +            layers_list,lv_dict = self.extract_bblayers()
> +            # Apply any new layers or variables
> +            self.apply_conf_variables(project,layers_list,lv_dict)
> +
> +        # 'Import':
> +        if "import" == options['command']:
> +            # Scan the directory's conf files
> +            self.scan_conf_variables(project_path)
> +            # Remove these Toaster controlled variables
> +            for var in ('DL_DIR','SSTATE_DIR'):
> +                self.vars.pop(var, None)
> +                self.toaster_vars.pop(var, None)
> +            # Scan the layer list
> +            layers_list,lv_dict = self.extract_bblayers()
> +            # Find the directory's release, and promote to default_release if local paths
> +            release = self.find_import_release(layers_list,lv_dict,default_release)
> +            # create project, SANITY: reuse any project of same name
> +            project = Project.objects.create_project(project_name,release,project)
> +            # Apply any new layers or variables
> +            self.apply_conf_variables(project,layers_list,lv_dict,release)
> +            # WORKAROUND: since we now derive the release, redirect 'newproject_specific' to 'project_specific'
> +            project.set_variable('INTERNAL_PROJECT_SPECIFIC_SKIPRELEASE','1')
> +
> +        # Set up the project's meta data
> +        project.builddir = project_path
> +        project.merged_attr = True
> +        project.set_variable(Project.PROJECT_SPECIFIC_CALLBACK,project_callback)
> +        project.set_variable(Project.PROJECT_SPECIFIC_STATUS,Project.PROJECT_SPECIFIC_EDIT)
> +        if ("configure" == options['command']) or ("import" == options['command']):
> +            # preset the mode and default image recipe
> +            project.set_variable(Project.PROJECT_SPECIFIC_ISNEW,Project.PROJECT_SPECIFIC_NEW)
> +            project.set_variable(Project.PROJECT_SPECIFIC_DEFAULTIMAGE,"core-image-minimal")
> +            # Assert any extended/custom actions or variables for new non-Toaster projects
> +            if not len(self.toaster_vars):
> +                pass
> +        else:
> +            project.set_variable(Project.PROJECT_SPECIFIC_ISNEW,Project.PROJECT_SPECIFIC_NONE)
> +
> +        # Save the updated Project
> +        project.save()
> +
> +        _log("Buildimport:project='%s' at '%d'" % (project_name,project.id))
> +
> +        if ('DEFAULT_IMAGE' in self.vars) and (self.vars['DEFAULT_IMAGE']):
> +            print("|Default_image=%s|Project_id=%d" % (self.vars['DEFAULT_IMAGE'],project.id))
> +        else:
> +            print("|Project_id=%d" % (project.id))
> +
> diff --git a/bitbake/toaster-requirements.txt b/bitbake/toaster-requirements.txt
> index c0ec368..a682b08 100644
> --- a/bitbake/toaster-requirements.txt
> +++ b/bitbake/toaster-requirements.txt
> @@ -1,3 +1,3 @@
> -Django>1.8,<1.11.9
> +Django>1.8,<1.12
> beautifulsoup4>=4.4.0
> pytz
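For orientation: the imported command above is a Django management
command, so a hypothetical invocation would look like the following
(the option names mirror the options[] keys read in handle(); the
command name, paths and project id here are illustrative, not taken
from the patch):

    $ python3 bitbake/lib/toaster/manage.py buildimport \
          --command=import --name=my-project --path=/home/user/mybuild
    |Project_id=42

The final "|Project_id=%d" line is printed for a calling setup script
to parse.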
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] isar-bootstrap: Eliminate no-gpg-check option usage
2018-11-07 17:38 ` Henning Schild
@ 2018-11-08 7:57 ` Maxim Yu. Osipov
0 siblings, 0 replies; 18+ messages in thread
From: Maxim Yu. Osipov @ 2018-11-08 7:57 UTC (permalink / raw)
To: Henning Schild; +Cc: isar-users
On 11/7/18 8:38 PM, Henning Schild wrote:
> In case reviews hold the bitbake parts back, this should not be part of
> the series.
I don't see a problem here - many series include various fixes
(e.g. see the last series from Jan).
Maxim.
> Henning
>
> Am Wed, 7 Nov 2018 17:09:55 +0100
> schrieb "Maxim Yu. Osipov" <mosipov@ilbers.de>:
>
>> Marking repo as trusted eliminates this option usage.
>>
>> Suggested-by: Henning Schild <henning.schild@siemens.com>
>> Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
>> ---
>> meta/recipes-core/isar-bootstrap/isar-bootstrap.inc | 3 ---
>> 1 file changed, 3 deletions(-)
>>
>> diff --git a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
>> index cc1791c..592d042 100644
>> --- a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
>> +++ b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
>> @@ -178,9 +178,6 @@ isar_bootstrap() {
>>          shift
>>      done
>> debootstrap_args="--verbose --variant=minbase --include=locales "
>> - if [ "${ISAR_USE_CACHED_BASE_REPO}" = "1" ]; then
>> - debootstrap_args="$debootstrap_args --no-check-gpg"
>> - fi
>> E="${@bb.utils.export_proxies(d)}"
>> sudo -E flock "${ISAR_BOOTSTRAP_LOCK}" -c "\
>> set -e
>
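For reference, "marking the repo as trusted" comes down to an apt
source entry along these lines (the path is only a placeholder):

    # [trusted=yes] lets apt accept this repo without GPG verification;
    # this is the mechanism the patch relies on to drop --no-check-gpg.
    deb [trusted=yes] file:///path/to/base-apt stable main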
--
Maxim Osipov
ilbers GmbH
Maria-Merian-Str. 8
85521 Ottobrunn
Germany
+49 (151) 6517 6917
mosipov@ilbers.de
http://ilbers.de/
Commercial register Munich, HRB 214197
General Manager: Baurzhan Ismagulov
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 1/3] Update bitbake from the upstream.
2018-11-07 17:58 ` Henning Schild
@ 2018-11-08 9:08 ` Maxim Yu. Osipov
0 siblings, 0 replies; 18+ messages in thread
From: Maxim Yu. Osipov @ 2018-11-08 9:08 UTC (permalink / raw)
To: Henning Schild; +Cc: isar-users
On 11/7/18 8:58 PM, Henning Schild wrote:
> Am Wed, 7 Nov 2018 17:09:53 +0100
> schrieb "Maxim Yu. Osipov" <mosipov@ilbers.de>:
>
>> Origin: https://github.com/openembedded/bitbake.git
>> Commit: 701f76f773a6e77258f307a4f8e2ec1a8552f6f3
> Please include the complete "git show" header here, or at least the
> name of the patch. Just to be extra sure we find that again, should the
> hash change ...
Ok.
I've just taken the latest bitbake update commit (a6e101f5) in the
isar tree as an example.
> This is one behind the last release and the only diff is a
> user-manual change. I think that is ok, but why did you not go for the
> release?
The reason is that the user-manual change will not hurt.
No problem - I will switch the V2 series to the release 1.40.0 commit:
commit 2820e7aab2203fc6cf7127e433a80b7d13ba75e0
Author: Richard Purdie <richard.purdie@linuxfoundation.org>
Date: Sat Oct 20 14:26:41 2018 +0100
bitbake: Bump version to 1.40.0
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Maxim.
> Henning
>
>> Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
[snip]
--
Maxim Osipov
ilbers GmbH
Maria-Merian-Str. 8
85521 Ottobrunn
Germany
+49 (151) 6517 6917
mosipov@ilbers.de
http://ilbers.de/
Commercial register Munich, HRB 214197
General Manager: Baurzhan Ismagulov
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] isar-bootstrap: Eliminate no-gpg-check option usage
2018-11-07 16:09 ` [PATCH 3/3] isar-bootstrap: Eliminate no-gpg-check option usage Maxim Yu. Osipov
2018-11-07 17:38 ` Henning Schild
@ 2018-11-12 9:30 ` Maxim Yu. Osipov
2018-11-27 9:43 ` Henning Schild
2018-12-03 11:49 ` Maxim Yu. Osipov
3 siblings, 0 replies; 18+ messages in thread
From: Maxim Yu. Osipov @ 2018-11-12 9:30 UTC (permalink / raw)
To: isar-users
On 11/7/18 7:09 PM, Maxim Yu. Osipov wrote:
> Marking repo as trusted eliminates this option usage.
Applied to the 'next',
Maxim.
> Suggested-by: Henning Schild <henning.schild@siemens.com>
> Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
> ---
> meta/recipes-core/isar-bootstrap/isar-bootstrap.inc | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
> index cc1791c..592d042 100644
> --- a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
> +++ b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
> @@ -178,9 +178,6 @@ isar_bootstrap() {
> shift
> done
> debootstrap_args="--verbose --variant=minbase --include=locales "
> - if [ "${ISAR_USE_CACHED_BASE_REPO}" = "1" ]; then
> - debootstrap_args="$debootstrap_args --no-check-gpg"
> - fi
> E="${@bb.utils.export_proxies(d)}"
> sudo -E flock "${ISAR_BOOTSTRAP_LOCK}" -c "\
> set -e
>
--
Maxim Osipov
ilbers GmbH
Maria-Merian-Str. 8
85521 Ottobrunn
Germany
+49 (151) 6517 6917
mosipov@ilbers.de
http://ilbers.de/
Commercial register Munich, HRB 214197
General Manager: Baurzhan Ismagulov
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] isar-bootstrap: Eliminate no-gpg-check option usage
2018-11-07 16:09 ` [PATCH 3/3] isar-bootstrap: Eliminate no-gpg-check option usage Maxim Yu. Osipov
2018-11-07 17:38 ` Henning Schild
2018-11-12 9:30 ` Maxim Yu. Osipov
@ 2018-11-27 9:43 ` Henning Schild
2018-11-27 10:15 ` Maxim Yu. Osipov
2018-12-03 11:49 ` Maxim Yu. Osipov
3 siblings, 1 reply; 18+ messages in thread
From: Henning Schild @ 2018-11-27 9:43 UTC (permalink / raw)
To: Maxim Yu. Osipov; +Cc: isar-users
There is no problem with disabling gpg in the debootstrap case, because
debootstrap only works against one repo and we trust that one.
Instead of the original "if [ "${ISAR_USE_CACHED_BASE_REPO}" = "1" ]"
we should depend on the trusted option, which we have available in the
python code in this file. Doing this will let people debootstrap from
any trusted repo, no matter whether it is the cache.
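Roughly, as a sketch (the names here are invented for illustration,
not the actual isar-bootstrap.inc API):

    # Sketch only: 'source_options' stands for the parsed options of
    # the single repo debootstrap is pointed at, e.g. {'trusted': 'yes'}.
    def gpg_check_args(source_options):
        # A repo marked [trusted=yes] is accepted by apt without
        # signature checks, so debootstrap may skip them as well.
        if source_options.get('trusted') == 'yes':
            return '--no-check-gpg'
        return ''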
Henning
Am Wed, 7 Nov 2018 17:09:55 +0100
schrieb "Maxim Yu. Osipov" <mosipov@ilbers.de>:
> Marking repo as trusted eliminates this option usage.
>
> Suggested-by: Henning Schild <henning.schild@siemens.com>
> Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
> ---
> meta/recipes-core/isar-bootstrap/isar-bootstrap.inc | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
> index cc1791c..592d042 100644
> --- a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
> +++ b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
> @@ -178,9 +178,6 @@ isar_bootstrap() {
>          shift
>      done
> debootstrap_args="--verbose --variant=minbase --include=locales "
> - if [ "${ISAR_USE_CACHED_BASE_REPO}" = "1" ]; then
> - debootstrap_args="$debootstrap_args --no-check-gpg"
> - fi
> E="${@bb.utils.export_proxies(d)}"
> sudo -E flock "${ISAR_BOOTSTRAP_LOCK}" -c "\
> set -e
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] isar-bootstrap: Eliminate no-gpg-check option usage
2018-11-27 9:43 ` Henning Schild
@ 2018-11-27 10:15 ` Maxim Yu. Osipov
0 siblings, 0 replies; 18+ messages in thread
From: Maxim Yu. Osipov @ 2018-11-27 10:15 UTC (permalink / raw)
To: Henning Schild; +Cc: isar-users
On 11/27/18 12:43 PM, Henning Schild wrote:
> There is no problem with disabling gpg in the debootstrap case, because
> debootstrap only works against one repo and we trust that one.
Nevertheless this doesn't work.
Maxim.
> Instead of the original "if [ "${ISAR_USE_CACHED_BASE_REPO}" = "1" ]"
> we should depend on the trusted option, which we have available in the
> python code in this file. Doing this will let people debootstrap from
> any trusted repo, no matter whether it is the cache.
> Henning
>
> Am Wed, 7 Nov 2018 17:09:55 +0100
> schrieb "Maxim Yu. Osipov" <mosipov@ilbers.de>:
>
>> Marking repo as trusted eliminates this option usage.
>>
>> Suggested-by: Henning Schild <henning.schild@siemens.com>
>> Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
>> ---
>> meta/recipes-core/isar-bootstrap/isar-bootstrap.inc | 3 ---
>> 1 file changed, 3 deletions(-)
>>
>> diff --git a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
>> index cc1791c..592d042 100644
>> --- a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
>> +++ b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
>> @@ -178,9 +178,6 @@ isar_bootstrap() {
>>          shift
>>      done
>> debootstrap_args="--verbose --variant=minbase --include=locales "
>> - if [ "${ISAR_USE_CACHED_BASE_REPO}" = "1" ]; then
>> - debootstrap_args="$debootstrap_args --no-check-gpg"
>> - fi
>> E="${@bb.utils.export_proxies(d)}"
>> sudo -E flock "${ISAR_BOOTSTRAP_LOCK}" -c "\
>> set -e
>
--
Maxim Osipov
ilbers GmbH
Maria-Merian-Str. 8
85521 Ottobrunn
Germany
+49 (151) 6517 6917
mosipov@ilbers.de
http://ilbers.de/
Commercial register Munich, HRB 214197
General Manager: Baurzhan Ismagulov
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] isar-bootstrap: Eliminate no-gpg-check option usage
2018-11-07 16:09 ` [PATCH 3/3] isar-bootstrap: Eliminate no-gpg-check option usage Maxim Yu. Osipov
` (2 preceding siblings ...)
2018-11-27 9:43 ` Henning Schild
@ 2018-12-03 11:49 ` Maxim Yu. Osipov
2018-12-03 12:52 ` Jan Kiszka
3 siblings, 1 reply; 18+ messages in thread
From: Maxim Yu. Osipov @ 2018-12-03 11:49 UTC (permalink / raw)
To: isar-users
On 11/7/18 7:09 PM, Maxim Yu. Osipov wrote:
> Marking repo as trusted eliminates this option usage.
Reverted in the 'next'.
> Suggested-by: Henning Schild <henning.schild@siemens.com>
> Signed-off-by: Maxim Yu. Osipov <mosipov@ilbers.de>
> ---
> meta/recipes-core/isar-bootstrap/isar-bootstrap.inc | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
> index cc1791c..592d042 100644
> --- a/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
> +++ b/meta/recipes-core/isar-bootstrap/isar-bootstrap.inc
> @@ -178,9 +178,6 @@ isar_bootstrap() {
> shift
> done
> debootstrap_args="--verbose --variant=minbase --include=locales "
> - if [ "${ISAR_USE_CACHED_BASE_REPO}" = "1" ]; then
> - debootstrap_args="$debootstrap_args --no-check-gpg"
> - fi
> E="${@bb.utils.export_proxies(d)}"
> sudo -E flock "${ISAR_BOOTSTRAP_LOCK}" -c "\
> set -e
>
--
Maxim Osipov
ilbers GmbH
Maria-Merian-Str. 8
85521 Ottobrunn
Germany
+49 (151) 6517 6917
mosipov@ilbers.de
http://ilbers.de/
Commercial register Munich, HRB 214197
General Manager: Baurzhan Ismagulov
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH 3/3] isar-bootstrap: Eliminate no-gpg-check option usage
2018-12-03 11:49 ` Maxim Yu. Osipov
@ 2018-12-03 12:52 ` Jan Kiszka
0 siblings, 0 replies; 18+ messages in thread
From: Jan Kiszka @ 2018-12-03 12:52 UTC (permalink / raw)
To: Maxim Yu. Osipov, isar-users
On 03.12.18 12:49, Maxim Yu. Osipov wrote:
> On 11/7/18 7:09 PM, Maxim Yu. Osipov wrote:
>> Marking repo as trusted eliminates this option usage.
>
> Reverted in the 'next'.
>
Even a revert is a patch that requires reasoning. Please provide it in the future.
Thanks,
Jan
--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 18+ messages in thread
end of thread, other threads:[~2018-12-03 12:52 UTC | newest]
Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-07 16:09 [PATCH 0/3] bitbake upstream update and eliminate no-gpg-check option usage Maxim Yu. Osipov
2018-11-07 16:09 ` [PATCH 1/3] Update bitbake from the upstream Maxim Yu. Osipov
2018-11-07 17:58 ` Henning Schild
2018-11-08 9:08 ` Maxim Yu. Osipov
2018-11-07 16:09 ` [PATCH 2/3] meta: Set LAYERSERIES_* variables Maxim Yu. Osipov
2018-11-07 16:20 ` Jan Kiszka
2018-11-07 16:39 ` Maxim Yu. Osipov
2018-11-07 16:41 ` Jan Kiszka
2018-11-07 17:24 ` Maxim Yu. Osipov
2018-11-07 17:26 ` Jan Kiszka
2018-11-07 16:09 ` [PATCH 3/3] isar-bootstrap: Eliminate no-gpg-check option usage Maxim Yu. Osipov
2018-11-07 17:38 ` Henning Schild
2018-11-08 7:57 ` Maxim Yu. Osipov
2018-11-12 9:30 ` Maxim Yu. Osipov
2018-11-27 9:43 ` Henning Schild
2018-11-27 10:15 ` Maxim Yu. Osipov
2018-12-03 11:49 ` Maxim Yu. Osipov
2018-12-03 12:52 ` Jan Kiszka