* [PATCH 0/2] Sstate maintenance script
@ 2022-04-13 6:35 Adriaan Schmidt
2022-04-13 6:35 ` [PATCH 1/2] scripts: add isar-sstate Adriaan Schmidt
2022-04-13 6:35 ` [PATCH 2/2] bitbake-diffsigs: make finding of changed signatures more robust Adriaan Schmidt
0 siblings, 2 replies; 7+ messages in thread
From: Adriaan Schmidt @ 2022-04-13 6:35 UTC (permalink / raw)
To: isar-users; +Cc: Adriaan Schmidt
Hi,
We have been running CI with shared sstate caches in several downstream
projects for some months now. This is the cache maintenance script that has
evolved during that time. Detailed documentation is in the script itself.
Main features:
- upload cache artifacts to shared caches on filesystem, http, or s3
- clean old artifacts from shared caches
- analyze in detail why cache misses happen (what has changed in the signatures)
The last one is especially interesting, and has already yielded some
improvements to the cacheability of Isar ([PATCH v2 0/4] Improve cacheability);
analysis is still ongoing.
This feature becomes more robust with a patch to bitbake (patch 2 of this
series, also submitted upstream).
One issue: testing!
This is not easy, because it involves infrastructure, and artificial tests
that provide decent coverage would be quite complex to design.
If we declare that we sufficiently trust the sstate code, we could add a
shared/persistent cache to the Isar CI infrastructure. This would further test
the sstate feature and all steps involved in maintaining such a setup.
In addition, it would significantly speed up CI builds.
Adriaan
Adriaan Schmidt (2):
scripts: add isar-sstate
bitbake-diffsigs: make finding of changed signatures more robust
bitbake/lib/bb/siggen.py | 10 +-
scripts/isar-sstate | 743 +++++++++++++++++++++++++++++++++++++++
2 files changed, 748 insertions(+), 5 deletions(-)
create mode 100755 scripts/isar-sstate
--
2.30.2
* [PATCH 1/2] scripts: add isar-sstate
2022-04-13 6:35 [PATCH 0/2] Sstate maintenance script Adriaan Schmidt
@ 2022-04-13 6:35 ` Adriaan Schmidt
2022-04-14 7:36 ` Henning Schild
2022-04-13 6:35 ` [PATCH 2/2] bitbake-diffsigs: make finding of changed signatures more robust Adriaan Schmidt
1 sibling, 1 reply; 7+ messages in thread
From: Adriaan Schmidt @ 2022-04-13 6:35 UTC (permalink / raw)
To: isar-users; +Cc: Adriaan Schmidt
This adds a maintenance helper script to work with remote/shared
sstate caches.
Signed-off-by: Adriaan Schmidt <adriaan.schmidt@siemens.com>
---
scripts/isar-sstate | 743 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 743 insertions(+)
create mode 100755 scripts/isar-sstate
diff --git a/scripts/isar-sstate b/scripts/isar-sstate
new file mode 100755
index 00000000..b1e2c1ec
--- /dev/null
+++ b/scripts/isar-sstate
@@ -0,0 +1,743 @@
+#!/usr/bin/env python3
+"""
+This software is part of Isar
+Copyright (c) Siemens AG, 2022
+
+# isar-sstate: Helper for management of shared sstate caches
+
+Isar uses the sstate cache feature of bitbake to cache the output of certain
+build tasks, potentially speeding up builds significantly. This script is
+meant to help manage shared sstate caches, speeding up builds using cache
+artifacts created elsewhere. There are two main ways of accessing a shared
+sstate cache:
+ - Point `SSTATE_DIR` to a persistent location that is used by multiple
+ builds. bitbake will read artifacts from there, and also immediately
+ store generated cache artifacts in this location. This speeds up local
+ builds, and if `SSTATE_DIR` is located on a shared filesystem, it can
+ also benefit others.
+ - Point `SSTATE_DIR` to a local directory (e.g., simply use the default
+ value `${TOPDIR}/sstate-cache`), and additionally set `SSTATE_MIRRORS`
+ to a remote sstate cache. bitbake will use artifacts from both locations,
+ but will write newly created artifacts only to the local folder
+ `SSTATE_DIR`. To share them, you need to explicitly upload them to
+ the shared location, which is what isar-sstate is for.
+
+isar-sstate implements four commands (upload, clean, info, analyze),
+and supports three remote backends (filesystem, http/webdav, AWS S3).
+
+## Commands
+
+### upload
+
+The `upload` command pushes the contents of a local sstate cache to the
+remote location, uploading all files that don't already exist on the remote.
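+
+For example, to push a local cache to a shared WebDAV server (the local path
+and the remote URI below are only placeholders), run from the top of the Isar
+tree:
+```
+scripts/isar-sstate upload /path/to/sstate-cache http://sstate.example.com/sstate/
+```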
+
+### clean
+
+The `clean` command deletes old artifacts from the remote cache. It takes two
+arguments, `--max-age` and `--max-sig-age`, each of which is a number followed
+by one of `w`, `d`, `h`, `m`, or `s` (for weeks, days, hours, minutes, or
+seconds, respectively). If the unit is omitted, days are assumed.
+
+`--max-age` specifies the maximum age of artifacts to keep in the cache.
+Anything older will be removed. Note that this only applies to the `.tgz` files
+containing the actual cached items, not the `.siginfo` files containing the
+cache metadata (signatures and hashes).
+To permit analysis of caching details using the `analyze` command, the siginfo
+files can be kept longer, as specified by `--max-sig-age`. If not set
+explicitly, this defaults to `--max-age`; an explicitly given value smaller
+than `--max-age` is silently raised to it.
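+
+As an example (the remote URI is a placeholder), the following removes tgz
+files older than two weeks while keeping siginfo files for eight weeks:
+```
+scripts/isar-sstate clean http://sstate.example.com/sstate/ --max-age 2w --max-sig-age 8w
+```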
+
+### info
+
+The `info` command scans the remote cache and displays some basic statistics.
+The argument `--verbose` increases the amount of information displayed.
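+
+For example (the bucket name is a placeholder):
+```
+scripts/isar-sstate info s3://example-bucket/sstate --verbose
+```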
+
+### analyze
+
+The `analyze` command iterates over all artifacts in the local sstate cache,
+and compares them to the contents of the remote cache. If an item is not
+present in the remote cache, the signature of the local item is compared
+to all potential matches in the remote cache, identified by matching
+architecture, recipe (`PN`), and task. This analysis has the same output
+format as `bitbake-diffsigs`.
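+
+For example, to compare a local cache against a shared one (both locations
+below are placeholders):
+```
+scripts/isar-sstate analyze /path/to/sstate-cache http://sstate.example.com/sstate/
+```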
+
+## Backends
+
+### Filesystem backend
+
+This uses a filesystem location as the remote cache. If you can access your
+remote cache this way, you could also have bitbake write to it directly by
+setting `SSTATE_DIR`. However, using `isar-sstate` gives you a uniform
+interface, lets you use the same code/CI scripts across heterogeneous setups,
+and gives you the `analyze` command.
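+
+A minimal sketch, assuming the shared cache is mounted at a placeholder path:
+```
+scripts/isar-sstate upload /path/to/sstate-cache file:///mnt/shared/sstate
+```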
+
+### http backend
+
+An http server with webdav extension can be used as a remote cache.
+Apache can easily be configured to function as a remote sstate cache, e.g.:
+```
+<VirtualHost *:80>
+ Alias /sstate/ /path/to/sstate/location/
+ <Location /sstate/>
+ Dav on
+ Options Indexes
+ Require all granted
+ </Location>
+</VirtualHost>
+```
+In addition, you need to enable Apache's dav module:
+```
+a2enmod dav
+```
+
+To use the http backend, you need to install the Python webdavclient library.
+On Debian, you would run:
+```
+apt-get install python3-webdavclient
+```
+
+### S3 backend
+
+An AWS S3 bucket can be used as remote cache. You need to ensure that AWS
+credentials are present (e.g., in your AWS config file or as environment
+variables).
+
+To use the S3 backend, you need to install the Python botocore library.
+On Debian, you would run:
+```
+apt-get install python3-botocore
+```
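+
+A minimal sketch (bucket name and credential values are placeholders; botocore
+also reads its usual configuration and credentials files):
+```
+export AWS_ACCESS_KEY_ID=...
+export AWS_SECRET_ACCESS_KEY=...
+scripts/isar-sstate upload /path/to/sstate-cache s3://example-bucket/sstate
+```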
+"""
+
+import argparse
+from collections import namedtuple
+import datetime
+import os
+import re
+import shutil
+import sys
+from tempfile import NamedTemporaryFile
+import time
+
+sys.path.insert(0, os.path.join(os.path.dirname(os.path.realpath(__file__)), '..', 'bitbake', 'lib'))
+analysis_supported = True
+from bb.siggen import compare_sigfiles
+
+# runtime detection of supported targets
+webdav_supported = True
+try:
+ import webdav3.client
+ import webdav3.exceptions
+except ModuleNotFoundError:
+ webdav_supported = False
+
+s3_supported = True
+try:
+ import botocore.exceptions
+ import botocore.session
+except ModuleNotFoundError:
+ s3_supported = False
+
+SstateCacheEntry = namedtuple(
+ 'SstateCacheEntry', 'hash path arch pn task suffix islink age size'.split())
+
+# The filename of sstate items is defined in Isar:
+# SSTATE_PKGSPEC = "sstate:${PN}:${PACKAGE_ARCH}${TARGET_VENDOR}-${TARGET_OS}:"
+# "${PV}:${PR}:${SSTATE_PKGARCH}:${SSTATE_VERSION}:"
+
+# This regex extracts relevant fields:
+SstateRegex = re.compile(r'sstate:(?P<pn>[^:]*):[^:]*:[^:]*:[^:]*:'
+ r'(?P<arch>[^:]*):[^:]*:(?P<hash>[0-9a-f]*)_'
+ r'(?P<task>[^\.]*)\.(?P<suffix>.*)')
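+# A hypothetical example of a matching name:
+#   sstate:hello:amd64-linux:1.0:r0:amd64:3:0f9de1_dpkg_build.tgz
+#   -> pn=hello, arch=amd64, hash=0f9de1, task=dpkg_build, suffix=tgz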
+
+
+class SstateTargetBase(object):
+ def __init__(self, path):
+ """Constructor
+
+ :param path: URI of the remote (without leading 'protocol://')
+ """
+ pass
+
+ def __repr__(self):
+ """Format remote for printing
+
+ :returns: URI string, including 'protocol://'
+ """
+ pass
+
+ def exists(self, path=''):
+ """Check if a remote path exists
+
+ :param path: path (file or directory) to check
+ :returns: True if path exists, False otherwise
+ """
+ pass
+
+ def create(self):
+ """Try to create the remote
+
+ :returns: True if remote could be created, False otherwise
+ """
+ pass
+
+ def mkdir(self, path):
+ """Create a directory on the remote
+
+ :param path: path to create
+ :returns: True on success, False on failure
+ """
+ pass
+
+ def upload(self, path, filename):
+ """Uploads a local file to the remote
+
+ :param path: remote path to upload to
+ :param filename: local file to upload
+ """
+ pass
+
+ def delete(self, path):
+ """Delete remote file and remove potential empty directories
+
+ :param path: remote file to delete
+ """
+ pass
+
+ def list_all(self):
+ """List all sstate files in the remote
+
+ :returns: list of SstateCacheEntry objects
+ """
+ pass
+
+ def download(self, path):
+ """Prepare to temporarily access a remote file for reading
+
+ This is meant to provide access to siginfo files during analysis. Files
+ must not be modified, and should be released using release() once they
+ are no longer used.
+
+ :param path: remote path
+ :returns: local path to file
+ """
+ pass
+
+ def release(self, download_path):
+ """Release a temporary file
+
+        :param download_path: local file
+ """
+ pass
+
+
+class SstateFileTarget(SstateTargetBase):
+ def __init__(self, path):
+ if path.startswith('file://'):
+ path = path[len('file://'):]
+ self.path = path
+ self.basepath = os.path.abspath(path)
+
+ def __repr__(self):
+ return f"file://{self.path}"
+
+ def exists(self, path=''):
+ return os.path.exists(os.path.join(self.basepath, path))
+
+ def create(self):
+ return self.mkdir('')
+
+ def mkdir(self, path):
+ try:
+ os.makedirs(os.path.join(self.basepath, path), exist_ok=True)
+ except OSError:
+ return False
+ return True
+
+ def upload(self, path, filename):
+ shutil.copy(filename, os.path.join(self.basepath, path))
+
+ def delete(self, path):
+ try:
+ os.remove(os.path.join(self.basepath, path))
+ except FileNotFoundError:
+ pass
+ dirs = path.split('/')[:-1]
+ for d in [dirs[:i] for i in range(len(dirs), 0, -1)]:
+ try:
+ os.rmdir(os.path.join(self.basepath, '/'.join(d)))
+ except FileNotFoundError:
+ pass
+ except OSError: # directory is not empty
+ break
+
+ def list_all(self):
+ all_files = []
+ now = time.time()
+ for subdir, dirs, files in os.walk(self.basepath):
+ reldir = subdir[(len(self.basepath)+1):]
+ for f in files:
+ m = SstateRegex.match(f)
+ if m is not None:
+ islink = os.path.islink(os.path.join(subdir, f))
+ age = int(now - os.path.getmtime(os.path.join(subdir, f)))
+ all_files.append(SstateCacheEntry(
+ path=os.path.join(reldir, f),
+ size=os.path.getsize(os.path.join(subdir, f)),
+ islink=islink,
+ age=age,
+ **(m.groupdict())))
+ return all_files
+
+ def download(self, path):
+ # we don't actually download, but instead just pass the local path
+ if not self.exists(path):
+ return None
+ return os.path.join(self.basepath, path)
+
+ def release(self, download_path):
+ # as we didn't download, there is nothing to clean up
+ pass
+
+
+class SstateDavTarget(SstateTargetBase):
+ def __init__(self, url):
+ if not webdav_supported:
+ print("ERROR: No webdav support. Please install the webdav3 Python module.")
+ print("INFO: on Debian: 'apt-get install python3-webdavclient'")
+ sys.exit(1)
+ m = re.match('^([^:]+://[^/]+)/(.*)', url)
+ if not m:
+ print(f"Cannot parse target path: {url}")
+ sys.exit(1)
+ self.host = m.group(1)
+ self.basepath = m.group(2)
+ if not self.basepath.endswith('/'):
+ self.basepath += '/'
+ self.dav = webdav3.client.Client({'webdav_hostname': self.host})
+ self.tmpfiles = []
+
+ def __repr__(self):
+ return f"{self.host}/{self.basepath}"
+
+ def exists(self, path=''):
+ return self.dav.check(self.basepath + path)
+
+ def create(self):
+ return self.mkdir('')
+
+ def mkdir(self, path):
+ dirs = (self.basepath + path).split('/')
+
+ for i in range(len(dirs)):
+ d = '/'.join(dirs[:(i+1)]) + '/'
+ if not self.dav.check(d):
+ if not self.dav.mkdir(d):
+ return False
+ return True
+
+ def upload(self, path, filename):
+ return self.dav.upload_sync(remote_path=self.basepath + path, local_path=filename)
+
+ def delete(self, path):
+ self.dav.clean(self.basepath + path)
+ dirs = path.split('/')[1:-1]
+ for d in [dirs[:i] for i in range(len(dirs), 0, -1)]:
+ items = self.dav.list(self.basepath + '/'.join(d), get_info=True)
+ if len(items) > 0:
+ # collection is not empty
+ break
+ self.dav.clean(self.basepath + '/'.join(d))
+
+ def list_all(self):
+ now = time.time()
+
+ def recurse_dir(path):
+ files = []
+ for item in self.dav.list(path, get_info=True):
+ if item['isdir'] and not item['path'] == path:
+ files.extend(recurse_dir(item['path']))
+ elif not item['isdir']:
+ m = SstateRegex.match(item['path'][len(path):])
+ if m is not None:
+ modified = time.mktime(
+ datetime.datetime.strptime(
+ item['created'],
+ '%Y-%m-%dT%H:%M:%SZ').timetuple())
+ age = int(now - modified)
+ files.append(SstateCacheEntry(
+ path=item['path'][len(self.basepath):],
+ size=int(item['size']),
+ islink=False,
+ age=age,
+ **(m.groupdict())))
+ return files
+ return recurse_dir(self.basepath)
+
+ def download(self, path):
+ # download to a temporary file
+ tmp = NamedTemporaryFile(prefix='isar-sstate-', delete=False)
+ tmp.close()
+ try:
+ self.dav.download_sync(remote_path=self.basepath + path, local_path=tmp.name)
+ except webdav3.exceptions.RemoteResourceNotFound:
+ return None
+ self.tmpfiles.append(tmp.name)
+ return tmp.name
+
+ def release(self, download_path):
+ # remove the temporary download
+ if download_path is not None and download_path in self.tmpfiles:
+ os.remove(download_path)
+ self.tmpfiles = [f for f in self.tmpfiles if not f == download_path]
+
+
+class SstateS3Target(SstateTargetBase):
+ def __init__(self, path):
+ if not s3_supported:
+ print("ERROR: No S3 support. Please install the botocore Python module.")
+ print("INFO: on Debian: 'apt-get install python3-botocore'")
+ sys.exit(1)
+ session = botocore.session.get_session()
+ self.s3 = session.create_client('s3')
+ if path.startswith('s3://'):
+ path = path[len('s3://'):]
+ m = re.match('^([^/]+)(?:/(.+)?)?$', path)
+ self.bucket = m.group(1)
+ if m.group(2):
+ self.basepath = m.group(2)
+ if not self.basepath.endswith('/'):
+ self.basepath += '/'
+ else:
+ self.basepath = ''
+ self.tmpfiles = []
+
+ def __repr__(self):
+ return f"s3://{self.bucket}/{self.basepath}"
+
+ def exists(self, path=''):
+ if path == '':
+ # check if the bucket exists
+ try:
+ self.s3.head_bucket(Bucket=self.bucket)
+ except botocore.exceptions.ClientError as e:
+ print(e)
+ print(e.response['Error']['Message'])
+ return False
+ return True
+ try:
+ self.s3.head_object(Bucket=self.bucket, Key=self.basepath + path)
+ except botocore.exceptions.ClientError as e:
+ if e.response['ResponseMetadata']['HTTPStatusCode'] != 404:
+ print(e)
+ print(e.response['Error']['Message'])
+ return False
+ return True
+
+ def create(self):
+ return self.exists()
+
+ def mkdir(self, path):
+ # in S3, folders are implicit and don't need to be created
+ return True
+
+ def upload(self, path, filename):
+ try:
+ self.s3.put_object(Body=open(filename, 'rb'), Bucket=self.bucket, Key=self.basepath + path)
+ except botocore.exceptions.ClientError as e:
+ print(e)
+ print(e.response['Error']['Message'])
+
+ def delete(self, path):
+ try:
+ self.s3.delete_object(Bucket=self.bucket, Key=self.basepath + path)
+ except botocore.exceptions.ClientError as e:
+ print(e)
+ print(e.response['Error']['Message'])
+
+ def list_all(self):
+ now = time.time()
+
+ def recurse_dir(path):
+ files = []
+ try:
+ result = self.s3.list_objects(Bucket=self.bucket, Prefix=path, Delimiter='/')
+ except botocore.exceptions.ClientError as e:
+ print(e)
+ print(e.response['Error']['Message'])
+ return []
+ for f in result.get('Contents', []):
+ m = SstateRegex.match(f['Key'][len(path):])
+ if m is not None:
+ modified = time.mktime(f['LastModified'].timetuple())
+ age = int(now - modified)
+ files.append(SstateCacheEntry(
+ path=f['Key'][len(self.basepath):],
+ size=f['Size'],
+ islink=False,
+ age=age,
+ **(m.groupdict())))
+ for p in result.get('CommonPrefixes', []):
+ files.extend(recurse_dir(p['Prefix']))
+ return files
+ return recurse_dir(self.basepath)
+
+ def download(self, path):
+ # download to a temporary file
+ tmp = NamedTemporaryFile(prefix='isar-sstate-', delete=False)
+ try:
+ result = self.s3.get_object(Bucket=self.bucket, Key=self.basepath + path)
+ except botocore.exceptions.ClientError:
+ return None
+ tmp.write(result['Body'].read())
+ tmp.close()
+ self.tmpfiles.append(tmp.name)
+ return tmp.name
+
+ def release(self, download_path):
+ # remove the temporary download
+ if download_path is not None and download_path in self.tmpfiles:
+ os.remove(download_path)
+ self.tmpfiles = [f for f in self.tmpfiles if not f == download_path]
+
+
+def arguments():
+ parser = argparse.ArgumentParser()
+ parser.add_argument(
+ 'command', type=str, metavar='command',
+ choices='info upload clean analyze'.split(),
+ help="command to execute (info, upload, clean, analyze)")
+ parser.add_argument(
+ 'source', type=str, nargs='?',
+ help="local sstate dir (for uploads or analysis)")
+ parser.add_argument(
+ 'target', type=str,
+ help="remote sstate location (a file://, http://, or s3:// URI)")
+ parser.add_argument(
+ '-v', '--verbose', default=False, action='store_true')
+ parser.add_argument(
+ '--max-age', type=str, default='1d',
+ help="clean tgz files older than MAX_AGE (a number followed by w|d|h|m|s)")
+ parser.add_argument(
+ '--max-sig-age', type=str, default=None,
+ help="clean siginfo files older than MAX_SIG_AGE (defaults to MAX_AGE)")
+
+ args = parser.parse_args()
+ if args.command in 'upload analyze'.split() and args.source is None:
+ print(f"ERROR: '{args.command}' needs a source and target")
+ sys.exit(1)
+ elif args.command in 'info clean'.split() and args.source is not None:
+ print(f"ERROR: '{args.command}' must not have a source (only a target)")
+ sys.exit(1)
+ return args
+
+
+def sstate_upload(source, target, verbose, **kwargs):
+ if not os.path.isdir(source):
+ print(f"WARNING: source {source} does not exist. Not uploading.")
+ return 0
+
+ if not target.exists() and not target.create():
+ print(f"ERROR: target {target} does not exist and could not be created.")
+ return -1
+
+ print(f"INFO: uploading {source} to {target}")
+ os.chdir(source)
+ upload, exists = [], []
+ for subdir, dirs, files in os.walk('.'):
+ target_dirs = subdir.split('/')[1:]
+ for f in files:
+ file_path = (('/'.join(target_dirs) + '/') if len(target_dirs) > 0 else '') + f
+ if target.exists(file_path):
+ if verbose:
+ print(f"[EXISTS] {file_path}")
+ exists.append(file_path)
+ else:
+ upload.append((file_path, target_dirs))
+ upload_gb = (sum([os.path.getsize(f[0]) for f in upload]) / 1024.0 / 1024.0 / 1024.0)
+ print(f"INFO: uploading {len(upload)} files ({upload_gb:.02f} GB)")
+ print(f"INFO: {len(exists)} files already present on target")
+ for file_path, target_dirs in upload:
+ if verbose:
+ print(f"[UPLOAD] {file_path}")
+ target.mkdir('/'.join(target_dirs))
+ target.upload(file_path, file_path)
+ return 0
+
+
+def sstate_clean(target, max_age, max_sig_age, verbose, **kwargs):
+ def convert_to_seconds(x):
+ seconds_per_unit = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400, 'w': 604800}
+ m = re.match(r'^(\d+)(w|d|h|m|s)?', x)
+        if m is None:
+            print(f"ERROR: cannot parse age '{x}', needs to be a number followed by w|d|h|m|s")
+            sys.exit(-1)
+        if (unit := m.group(2)) is None:
+            print(f"WARNING: age '{x}' has no unit, assuming 'days'")
+            unit = 'd'
+ return int(m.group(1)) * seconds_per_unit[unit]
+
+ max_age_seconds = convert_to_seconds(max_age)
+    if max_sig_age is None:
+        max_sig_age = max_age
+        max_sig_age_seconds = max_age_seconds
+    else:
+        max_sig_age_seconds = max(max_age_seconds, convert_to_seconds(max_sig_age))
+
+ if not target.exists():
+ print(f"INFO: cannot access target {target}. Nothing to clean.")
+ return 0
+
+ print(f"INFO: scanning {target}")
+ all_files = target.list_all()
+ links = [f for f in all_files if f.islink]
+ if links:
+ print(f"NOTE: we have links: {links}")
+ tgz_files = [f for f in all_files if f.suffix == 'tgz']
+ siginfo_files = [f for f in all_files if f.suffix == 'tgz.siginfo']
+ del_tgz_files = [f for f in tgz_files if f.age >= max_age_seconds]
+ del_tgz_hashes = [f.hash for f in del_tgz_files]
+ del_siginfo_files = [f for f in siginfo_files if
+ f.age >= max_sig_age_seconds or f.hash in del_tgz_hashes]
+ print(f"INFO: found {len(tgz_files)} tgz files, {len(del_tgz_files)} of which are older than {max_age}")
+ print(f"INFO: found {len(siginfo_files)} siginfo files, {len(del_siginfo_files)} of which "
+ f"correspond to tgz files or are older than {max_sig_age}")
+
+ for f in del_tgz_files + del_siginfo_files:
+ if verbose:
+ print(f"[DELETE] {f.path}")
+ target.delete(f.path)
+ freed_gb = sum([x.size for x in del_tgz_files + del_siginfo_files]) / 1024.0 / 1024.0 / 1024.0
+ print(f"INFO: freed {freed_gb:.02f} GB")
+ return 0
+
+
+def sstate_info(target, verbose, **kwargs):
+ if not target.exists():
+ print(f"INFO: cannot access target {target}. No info to show.")
+ return 0
+
+ print(f"INFO: scanning {target}")
+ all_files = target.list_all()
+ size_gb = sum([x.size for x in all_files]) / 1024.0 / 1024.0 / 1024.0
+ print(f"INFO: found {len(all_files)} files ({size_gb:0.2f} GB)")
+
+ if not verbose:
+ return 0
+
+ archs = list(set([f.arch for f in all_files]))
+ print(f"INFO: found the following archs: {archs}")
+
+ key_task = {'deb': 'dpkg_build',
+ 'rootfs': 'rootfs_install',
+ 'bootstrap': 'bootstrap'}
+ recipes = {k: [] for k in key_task.keys()}
+ others = []
+ for pn in set([f.pn for f in all_files]):
+ tasks = set([f.task for f in all_files if f.pn == pn])
+ ks = [k for k, v in key_task.items() if v in tasks]
+ if len(ks) == 1:
+ recipes[ks[0]].append(pn)
+ elif len(ks) == 0:
+ others.append(pn)
+ else:
+ print(f"WARNING: {pn} could be any of {ks}")
+ for k, entries in recipes.items():
+ print(f"Cache hits for {k}:")
+ for pn in entries:
+ hits = [f for f in all_files if f.pn == pn and f.task == key_task[k] and f.suffix == 'tgz']
+ print(f" - {pn}: {len(hits)} hits")
+ print("Other cache hits:")
+ for pn in others:
+ print(f" - {pn}")
+ return 0
+
+
+def sstate_analyze(source, target, **kwargs):
+ if not os.path.isdir(source):
+ print(f"ERROR: source {source} does not exist. Nothing to analyze.")
+ return -1
+ if not target.exists():
+ print(f"ERROR: target {target} does not exist. Nothing to analyze.")
+ return -1
+
+ source = SstateFileTarget(source)
+ local_sigs = {s.hash: s for s in source.list_all() if s.suffix.endswith('.siginfo')}
+ remote_sigs = {s.hash: s for s in target.list_all() if s.suffix.endswith('.siginfo')}
+
+ key_tasks = 'dpkg_build rootfs_install bootstrap'.split()
+
+ check = [k for k, v in local_sigs.items() if v.task in key_tasks]
+ for local_hash in check:
+ s = local_sigs[local_hash]
+ print(f"\033[1;33m==== checking local item {s.arch}:{s.pn}:{s.task} ({s.hash[:8]}) ====\033[0m")
+ if local_hash in remote_sigs:
+ print(" -> found hit in remote cache")
+ continue
+ remote_matches = [k for k, v in remote_sigs.items() if s.arch == v.arch and s.pn == v.pn and s.task == v.task]
+ if len(remote_matches) == 0:
+ print(" -> found no hit, and no potential remote matches")
+ else:
+ print(f" -> found no hit, but {len(remote_matches)} potential remote matches")
+ for r in remote_matches:
+ t = remote_sigs[r]
+ print(f"\033[0;33m**** comparing to {r[:8]} ****\033[0m")
+
+ def recursecb(key, remote_hash, local_hash):
+ recout = []
+ if remote_hash in remote_sigs.keys():
+ remote_file = target.download(remote_sigs[remote_hash].path)
+ elif remote_hash in local_sigs.keys():
+ recout.append(f"found remote hash in local signatures ({key})!?! (please implement that case!)")
+ return recout
+ else:
+ recout.append(f"could not find remote signature {remote_hash[:8]} for job {key}")
+ return recout
+ if local_hash in local_sigs.keys():
+ local_file = source.download(local_sigs[local_hash].path)
+ elif local_hash in remote_sigs.keys():
+ local_file = target.download(remote_sigs[local_hash].path)
+ else:
+ recout.append(f"could not find local signature {local_hash[:8]} for job {key}")
+ return recout
+ if local_file is None or remote_file is None:
+ out = "Aborting analysis because siginfo files disappered unexpectedly"
+ else:
+ out = compare_sigfiles(remote_file, local_file, recursecb, color=True)
+ if local_hash in local_sigs.keys():
+ source.release(local_file)
+ else:
+ target.release(local_file)
+ target.release(remote_file)
+ for change in out:
+ recout.extend([' ' + line for line in change.splitlines()])
+ return recout
+
+ local_file = source.download(s.path)
+ remote_file = target.download(t.path)
+ out = compare_sigfiles(remote_file, local_file, recursecb, color=True)
+ source.release(local_file)
+ target.release(remote_file)
+ # shorten hashes from 64 to 8 characters for better readability
+ out = [re.sub(r'([0-9a-f]{8})[0-9a-f]{56}', r'\1', line) for line in out]
+ print('\n'.join(out))
+
+
+def main():
+ args = arguments()
+
+ if args.target.startswith('http://'):
+ target = SstateDavTarget(args.target)
+ elif args.target.startswith('s3://'):
+ target = SstateS3Target(args.target)
+ elif args.target.startswith('file://'):
+ target = SstateFileTarget(args.target)
+ else: # no protocol given, assume file://
+ target = SstateFileTarget(args.target)
+
+ args.target = target
+ return globals()[f'sstate_{args.command}'](**vars(args))
+
+
+if __name__ == '__main__':
+ sys.exit(main())
--
2.30.2
* [PATCH 2/2] bitbake-diffsigs: make finding of changed signatures more robust
2022-04-13 6:35 [PATCH 0/2] Sstate maintenance script Adriaan Schmidt
2022-04-13 6:35 ` [PATCH 1/2] scripts: add isar-sstate Adriaan Schmidt
@ 2022-04-13 6:35 ` Adriaan Schmidt
2022-04-13 8:19 ` Henning Schild
1 sibling, 1 reply; 7+ messages in thread
From: Adriaan Schmidt @ 2022-04-13 6:35 UTC (permalink / raw)
To: isar-users; +Cc: Adriaan Schmidt
In `runtaskhashes`, the keys contain the absolute paths to the recipe. When
working with shared sstate caches (where these absolute paths can be different)
we see that compare_sigfiles does not identify a changed hash of a dependent
task as "changed", but instead as "removed"&"added", preventing the function
from recursing and continuing the comparison.
By calling `clean_basepaths` before comparing the `runtaskhashes` dicts, we
avoid this.
Submitted upstream: https://lists.openembedded.org/g/bitbake-devel/message/13603
Signed-off-by: Adriaan Schmidt <adriaan.schmidt@siemens.com>
---
bitbake/lib/bb/siggen.py | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/bitbake/lib/bb/siggen.py b/bitbake/lib/bb/siggen.py
index 0d88c6ec..8b23fd04 100644
--- a/bitbake/lib/bb/siggen.py
+++ b/bitbake/lib/bb/siggen.py
@@ -944,8 +944,8 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
if 'runtaskhashes' in a_data and 'runtaskhashes' in b_data:
- a = a_data['runtaskhashes']
- b = b_data['runtaskhashes']
+ a = clean_basepaths(a_data['runtaskhashes'])
+ b = clean_basepaths(b_data['runtaskhashes'])
changed, added, removed = dict_diff(a, b)
if added:
for dep in added:
@@ -956,7 +956,7 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
#output.append("Dependency on task %s was replaced by %s with same hash" % (dep, bdep))
bdep_found = True
if not bdep_found:
- output.append(color_format("{color_title}Dependency on task %s was added{color_default} with hash %s") % (clean_basepath(dep), b[dep]))
+ output.append(color_format("{color_title}Dependency on task %s was added{color_default} with hash %s") % (dep, b[dep]))
if removed:
for dep in removed:
adep_found = False
@@ -966,11 +966,11 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
#output.append("Dependency on task %s was replaced by %s with same hash" % (adep, dep))
adep_found = True
if not adep_found:
- output.append(color_format("{color_title}Dependency on task %s was removed{color_default} with hash %s") % (clean_basepath(dep), a[dep]))
+ output.append(color_format("{color_title}Dependency on task %s was removed{color_default} with hash %s") % (dep, a[dep]))
if changed:
for dep in changed:
if not collapsed:
- output.append(color_format("{color_title}Hash for dependent task %s changed{color_default} from %s to %s") % (clean_basepath(dep), a[dep], b[dep]))
+ output.append(color_format("{color_title}Hash for dependent task %s changed{color_default} from %s to %s") % (dep, a[dep], b[dep]))
if callable(recursecb):
recout = recursecb(dep, a[dep], b[dep])
if recout:
--
2.30.2
* Re: [PATCH 2/2] bitbake-diffsigs: make finding of changed signatures more robust
2022-04-13 6:35 ` [PATCH 2/2] bitbake-diffsigs: make finding of changed signatures more robust Adriaan Schmidt
@ 2022-04-13 8:19 ` Henning Schild
2022-04-14 8:03 ` Schmidt, Adriaan
0 siblings, 1 reply; 7+ messages in thread
From: Henning Schild @ 2022-04-13 8:19 UTC (permalink / raw)
To: Adriaan Schmidt; +Cc: isar-users
On Wed, 13 Apr 2022 08:35:34 +0200,
Adriaan Schmidt <adriaan.schmidt@siemens.com> wrote:
> In `runtaskhashes`, the keys contain the absolute paths to the
> recipe. When working with shared sstate caches (where these absolute
> paths can be different) we see that compare_sigfiles does not
> identify a changed hash of a dependent task as "changed", but
> instead as "removed"&"added", preventing the function from recursing
> and continuing the comparison.
>
> By calling `clean_basepaths` before comparing the `runtaskhashes`
> dicts, we avoid this.
>
> Submitted upstream:
> https://lists.openembedded.org/g/bitbake-devel/message/13603
Submitted does not count; we will have to wait until it is merged, and
then we can think about a backport.
How important is that? If it is just "nice to have", I think we should
wait and not even do the backport once it is merged into bitbake.
Henning
* Re: [PATCH 1/2] scripts: add isar-sstate
2022-04-13 6:35 ` [PATCH 1/2] scripts: add isar-sstate Adriaan Schmidt
@ 2022-04-14 7:36 ` Henning Schild
0 siblings, 0 replies; 7+ messages in thread
From: Henning Schild @ 2022-04-14 7:36 UTC (permalink / raw)
To: Adriaan Schmidt; +Cc: isar-users
On Wed, 13 Apr 2022 08:35:33 +0200,
Adriaan Schmidt <adriaan.schmidt@siemens.com> wrote:
> This adds a maintenance helper script to work with remote/shared
> sstate caches.
Is that script in fact an Isar thing, or rather a bitbake thing? To me
it sounds like the whole bitbake community could really use it.
We could carry it in Isar, but we could also think bigger and reach more
people to help us maintain and improve it.
regards,
Henning
> Signed-off-by: Adriaan Schmidt <adriaan.schmidt@siemens.com>
> ---
> scripts/isar-sstate | 743
> ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 743
> insertions(+) create mode 100755 scripts/isar-sstate
>
> diff --git a/scripts/isar-sstate b/scripts/isar-sstate
> new file mode 100755
> index 00000000..b1e2c1ec
> --- /dev/null
> +++ b/scripts/isar-sstate
> @@ -0,0 +1,743 @@
> +#!/usr/bin/env python3
> +"""
> +This software is part of Isar
> +Copyright (c) Siemens AG, 2022
> +
> +# isar-sstate: Helper for management of shared sstate caches
> +
> +Isar uses the sstate cache feature of bitbake to cache the output of
> certain +build tasks, potentially speeding up builds significantly.
> This script is +meant to help managing shared sstate caches, speeding
> up builds using cache +artifacts created elsewhere. There are two
> main ways of accessing a shared +sstate cache:
> + - Point `SSTATE_DIR` to a persistent location that is used by
> multiple
> + builds. bitbake will read artifacts from there, and also
> immediately
> + store generated cache artifacts in this location. This speeds up
> local
> + builds, and if `SSTATE_DIR` is located on a shared filesystem,
> it can
> + also benefit others.
> + - Point `SSTATE_DIR` to a local directory (e.g., simply use the
> default
> + value `${TOPDIR}/sstate-cache`), and additionally set
> `SSTATE_MIRRORS`
> + to a remote sstate cache. bitbake will use artifacts from both
> locations,
> + but will write newly created artifacts only to the local folder
> + `SSTATE_DIR`. To share them, you need to explicitly upload them
> to
> + the shared location, which is what isar-sstate is for.
> +
> +isar-sstate implements four commands (upload, clean, info, analyze),
> +and supports three remote backends (filesystem, http/webdav, AWS S3).
> +
> +## Commands
> +
> +### upload
> +
> +The `upload` command pushes the contents of a local sstate cache to
> the +remote location, uploading all files that don't already exist on
> the remote. +
> +### clean
> +
> +The `clean` command deletes old artifacts from the remote cache. It
> takes two +arguments, `--max-age` and `--max-sig-age`, each of which
> must be a number, +followed by one of `w`, `d`, `h`, `m`, or `s` (for
> weeks, days, hours, minutes, +seconds, respectively).
> +
> +`--max-age` specifies up to which age artifacts should be kept in
> the cache. +Anything older will be removed. Note that this only
> applies to the `.tgz` files +containing the actual cached items, not
> the `.siginfo` files containing the +cache metadata (signatures and
> hashes). +To permit analysis of caching details using the `analyze`
> command, the siginfo +files can be kept longer, as indicated by
> `--max-sig-age`. If not set explicitly, +this defaults to `max_age`,
> and any explicitly given value can't be smaller +than `max_age`.
> +
> +### info
> +
> +The `info` command scans the remote cache and displays some basic
> statistics. +The argument `--verbose` increases the amount of
> information displayed. +
> +### analyze
> +
> +The `analyze` command iterates over all artifacts in the local
> sstate cache, +and compares them to the contents of the remote cache.
> If an item is not +present in the remote cache, the signature of the
> local item is compared +to all potential matches in the remote cache,
> identified by matching +architecture, recipe (`PN`), and task. This
> analysis has the same output +format as `bitbake-diffsigs`.
> +
> +## Backends
> +
> +### Filesystem backend
> +
> +This uses a filesystem location as the remote cache. In case you can
> access +your remote cache this way, you could also have bitbake write
> to the cache +directly, by setting `SSTATE_DIR`. However, using
> `isar-sstate` gives +you a uniform interface, and lets you use the
> same code/CI scripts across +heterogeneous setups. Also, it gives you
> the `analyze` command. +
> +### http backend
> +
> +A http server with webdav extension can be used as remote cache.
> +Apache can easily be configured to function as a remote sstate
> cache, e.g.: +```
> +<VirtualHost *:80>
> + Alias /sstate/ /path/to/sstate/location/
> + <Location /sstate/>
> + Dav on
> + Options Indexes
> + Require all granted
> + </Location>
> +</VirtualHost>
> +```
> +In addition you need to load Apache's dav module:
> +```
> +a2enmod dav
> +```
> +
> +To use the http backend, you need to install the Python webdavclient
> library. +On Debian you would:
> +```
> +apt-get install python3-webdavclient
> +```
> +
> +### S3 backend
> +
> +An AWS S3 bucket can be used as remote cache. You need to ensure
> that AWS +credentials are present (e.g., in your AWS config file or
> as environment +variables).
> +
> +To use the S3 backend you need to install the Python botocore
> library. +On Debian you would:
> +```
> +apt-get install python3-botocore
> +```
> +"""
> +
> +import argparse
> +from collections import namedtuple
> +import datetime
> +import os
> +import re
> +import shutil
> +import sys
> +from tempfile import NamedTemporaryFile
> +import time
> +
> +sys.path.insert(0,
> os.path.join(os.path.dirname(os.path.realpath(__file__)), '..',
> 'bitbake', 'lib')) +analysis_supported = True +from bb.siggen import
> compare_sigfiles +
> +# runtime detection of supported targets
> +webdav_supported = True
> +try:
> + import webdav3.client
> + import webdav3.exceptions
> +except ModuleNotFoundError:
> + webdav_supported = False
> +
> +s3_supported = True
> +try:
> + import botocore.exceptions
> + import botocore.session
> +except ModuleNotFoundError:
> + s3_supported = False
> +
> +SstateCacheEntry = namedtuple(
> + 'SstateCacheEntry', 'hash path arch pn task suffix islink
> age size'.split()) +
> +# The filename of sstate items is defined in Isar:
> +# SSTATE_PKGSPEC =
> "sstate:${PN}:${PACKAGE_ARCH}${TARGET_VENDOR}-${TARGET_OS}:" +#
> "${PV}:${PR}:${SSTATE_PKGARCH}:${SSTATE_VERSION}:" +
> +# This regex extracts relevant fields:
> +SstateRegex = re.compile(r'sstate:(?P<pn>[^:]*):[^:]*:[^:]*:[^:]*:'
> +
> r'(?P<arch>[^:]*):[^:]*:(?P<hash>[0-9a-f]*)_'
> + r'(?P<task>[^\.]*)\.(?P<suffix>.*)')
> +
> +
> +class SstateTargetBase(object):
> + def __init__(self, path):
> + """Constructor
> +
> + :param path: URI of the remote (without leading
> 'protocol://')
> + """
> + pass
> +
> + def __repr__(self):
> + """Format remote for printing
> +
> + :returns: URI string, including 'protocol://'
> + """
> + pass
> +
> + def exists(self, path=''):
> + """Check if a remote path exists
> +
> + :param path: path (file or directory) to check
> + :returns: True if path exists, False otherwise
> + """
> + pass
> +
> + def create(self):
> + """Try to create the remote
> +
> + :returns: True if remote could be created, False otherwise
> + """
> + pass
> +
> + def mkdir(self, path):
> + """Create a directory on the remote
> +
> + :param path: path to create
> + :returns: True on success, False on failure
> + """
> + pass
> +
> + def upload(self, path, filename):
> + """Uploads a local file to the remote
> +
> + :param path: remote path to upload to
> + :param filename: local file to upload
> + """
> + pass
> +
> + def delete(self, path):
> + """Delete remote file and remove potential empty directories
> +
> + :param path: remote file to delete
> + """
> + pass
> +
> + def list_all(self):
> + """List all sstate files in the remote
> +
> + :returns: list of SstateCacheEntry objects
> + """
> + pass
> +
> + def download(self, path):
> + """Prepare to temporarily access a remote file for reading
> +
> + This is meant to provide access to siginfo files during
> analysis. Files
> + must not be modified, and should be released using release()
> once they
> + are no longer used.
> +
> + :param path: remote path
> + :returns: local path to file
> + """
> + pass
> +
> + def release(self, download_path):
> + """Release a temporary file
> +
> + :param doenload_path: local file
> + """
> + pass
> +
> +
> +class SstateFileTarget(SstateTargetBase):
> + def __init__(self, path):
> + if path.startswith('file://'):
> + path = path[len('file://'):]
> + self.path = path
> + self.basepath = os.path.abspath(path)
> +
> + def __repr__(self):
> + return f"file://{self.path}"
> +
> + def exists(self, path=''):
> + return os.path.exists(os.path.join(self.basepath, path))
> +
> + def create(self):
> + return self.mkdir('')
> +
> + def mkdir(self, path):
> + try:
> + os.makedirs(os.path.join(self.basepath, path),
> exist_ok=True)
> + except OSError:
> + return False
> + return True
> +
> + def upload(self, path, filename):
> + shutil.copy(filename, os.path.join(self.basepath, path))
> +
> + def delete(self, path):
> + try:
> + os.remove(os.path.join(self.basepath, path))
> + except FileNotFoundError:
> + pass
> + dirs = path.split('/')[:-1]
> + for d in [dirs[:i] for i in range(len(dirs), 0, -1)]:
> + try:
> + os.rmdir(os.path.join(self.basepath, '/'.join(d)))
> + except FileNotFoundError:
> + pass
> + except OSError: # directory is not empty
> + break
> +
> + def list_all(self):
> + all_files = []
> + now = time.time()
> + for subdir, dirs, files in os.walk(self.basepath):
> + reldir = subdir[(len(self.basepath)+1):]
> + for f in files:
> + m = SstateRegex.match(f)
> + if m is not None:
> + islink = os.path.islink(os.path.join(subdir, f))
> + age = int(now -
> os.path.getmtime(os.path.join(subdir, f)))
> + all_files.append(SstateCacheEntry(
> + path=os.path.join(reldir, f),
> + size=os.path.getsize(os.path.join(subdir,
> f)),
> + islink=islink,
> + age=age,
> + **(m.groupdict())))
> + return all_files
> +
> + def download(self, path):
> + # we don't actually download, but instead just pass the
> local path
> + if not self.exists(path):
> + return None
> + return os.path.join(self.basepath, path)
> +
> + def release(self, download_path):
> + # as we didn't download, there is nothing to clean up
> + pass
> +
> +
> +class SstateDavTarget(SstateTargetBase):
> + def __init__(self, url):
> + if not webdav_supported:
> + print("ERROR: No webdav support. Please install the
> webdav3 Python module.")
> + print("INFO: on Debian: 'apt-get install
> python3-webdavclient'")
> + sys.exit(1)
> + m = re.match('^([^:]+://[^/]+)/(.*)', url)
> + if not m:
> + print(f"Cannot parse target path: {url}")
> + sys.exit(1)
> + self.host = m.group(1)
> + self.basepath = m.group(2)
> + if not self.basepath.endswith('/'):
> + self.basepath += '/'
> + self.dav = webdav3.client.Client({'webdav_hostname':
> self.host})
> + self.tmpfiles = []
> +
> + def __repr__(self):
> + return f"{self.host}/{self.basepath}"
> +
> + def exists(self, path=''):
> + return self.dav.check(self.basepath + path)
> +
> + def create(self):
> + return self.mkdir('')
> +
> + def mkdir(self, path):
> + dirs = (self.basepath + path).split('/')
> +
> + for i in range(len(dirs)):
> + d = '/'.join(dirs[:(i+1)]) + '/'
> + if not self.dav.check(d):
> + if not self.dav.mkdir(d):
> + return False
> + return True
> +
> + def upload(self, path, filename):
> + return self.dav.upload_sync(remote_path=self.basepath +
> path, local_path=filename) +
> + def delete(self, path):
> + self.dav.clean(self.basepath + path)
> + dirs = path.split('/')[1:-1]
> + for d in [dirs[:i] for i in range(len(dirs), 0, -1)]:
> + items = self.dav.list(self.basepath + '/'.join(d),
> get_info=True)
> + if len(items) > 0:
> + # collection is not empty
> + break
> + self.dav.clean(self.basepath + '/'.join(d))
> +
> + def list_all(self):
> + now = time.time()
> +
> + def recurse_dir(path):
> + files = []
> + for item in self.dav.list(path, get_info=True):
> + if item['isdir'] and not item['path'] == path:
> + files.extend(recurse_dir(item['path']))
> + elif not item['isdir']:
> + m = SstateRegex.match(item['path'][len(path):])
> + if m is not None:
> + modified = time.mktime(
> + datetime.datetime.strptime(
> + item['created'],
> + '%Y-%m-%dT%H:%M:%SZ').timetuple())
> + age = int(now - modified)
> + files.append(SstateCacheEntry(
> + path=item['path'][len(self.basepath):],
> + size=int(item['size']),
> + islink=False,
> + age=age,
> + **(m.groupdict())))
> + return files
> + return recurse_dir(self.basepath)
> +
> + def download(self, path):
> + # download to a temporary file
> + tmp = NamedTemporaryFile(prefix='isar-sstate-', delete=False)
> + tmp.close()
> + try:
> + self.dav.download_sync(remote_path=self.basepath + path,
> local_path=tmp.name)
> + except webdav3.exceptions.RemoteResourceNotFound:
> + return None
> + self.tmpfiles.append(tmp.name)
> + return tmp.name
> +
> + def release(self, download_path):
> + # remove the temporary download
> + if download_path is not None and download_path in
> self.tmpfiles:
> + os.remove(download_path)
> + self.tmpfiles = [f for f in self.tmpfiles if not f ==
> download_path] +
> +
> +class SstateS3Target(SstateTargetBase):
> + def __init__(self, path):
> + if not s3_supported:
> + print("ERROR: No S3 support. Please install the botocore
> Python module.")
> + print("INFO: on Debian: 'apt-get install
> python3-botocore'")
> + sys.exit(1)
> + session = botocore.session.get_session()
> + self.s3 = session.create_client('s3')
> + if path.startswith('s3://'):
> + path = path[len('s3://'):]
> + m = re.match('^([^/]+)(?:/(.+)?)?$', path)
> + self.bucket = m.group(1)
> + if m.group(2):
> + self.basepath = m.group(2)
> + if not self.basepath.endswith('/'):
> + self.basepath += '/'
> + else:
> + self.basepath = ''
> + self.tmpfiles = []
> +
> + def __repr__(self):
> + return f"s3://{self.bucket}/{self.basepath}"
> +
> + def exists(self, path=''):
> + if path == '':
> + # check if the bucket exists
> + try:
> + self.s3.head_bucket(Bucket=self.bucket)
> + except botocore.exceptions.ClientError as e:
> + print(e)
> + print(e.response['Error']['Message'])
> + return False
> + return True
> + try:
> + self.s3.head_object(Bucket=self.bucket,
> Key=self.basepath + path)
> + except botocore.exceptions.ClientError as e:
> + if e.response['ResponseMetadata']['HTTPStatusCode'] !=
> 404:
> + print(e)
> + print(e.response['Error']['Message'])
> + return False
> + return True
> +
> + def create(self):
> + return self.exists()
> +
> + def mkdir(self, path):
> + # in S3, folders are implicit and don't need to be created
> + return True
> +
> + def upload(self, path, filename):
> + try:
> + self.s3.put_object(Body=open(filename, 'rb'),
> Bucket=self.bucket, Key=self.basepath + path)
> + except botocore.exceptions.ClientError as e:
> + print(e)
> + print(e.response['Error']['Message'])
> +
> + def delete(self, path):
> + try:
> + self.s3.delete_object(Bucket=self.bucket,
> Key=self.basepath + path)
> + except botocore.exceptions.ClientError as e:
> + print(e)
> + print(e.response['Error']['Message'])
> +
> + def list_all(self):
> + now = time.time()
> +
> + def recurse_dir(path):
> + files = []
> + try:
> + result = self.s3.list_objects(Bucket=self.bucket,
> Prefix=path, Delimiter='/')
> + except botocore.exceptions.ClientError as e:
> + print(e)
> + print(e.response['Error']['Message'])
> + return []
> + for f in result.get('Contents', []):
> + m = SstateRegex.match(f['Key'][len(path):])
> + if m is not None:
> + modified =
> time.mktime(f['LastModified'].timetuple())
> + age = int(now - modified)
> + files.append(SstateCacheEntry(
> + path=f['Key'][len(self.basepath):],
> + size=f['Size'],
> + islink=False,
> + age=age,
> + **(m.groupdict())))
> + for p in result.get('CommonPrefixes', []):
> + files.extend(recurse_dir(p['Prefix']))
> + return files
> + return recurse_dir(self.basepath)
> +
> + def download(self, path):
> + # download to a temporary file
> + tmp = NamedTemporaryFile(prefix='isar-sstate-', delete=False)
> + try:
> + result = self.s3.get_object(Bucket=self.bucket,
> Key=self.basepath + path)
> + except botocore.exceptions.ClientError:
> + return None
> + tmp.write(result['Body'].read())
> + tmp.close()
> + self.tmpfiles.append(tmp.name)
> + return tmp.name
> +
> + def release(self, download_path):
> + # remove the temporary download
> + if download_path is not None and download_path in
> self.tmpfiles:
> + os.remove(download_path)
> + self.tmpfiles = [f for f in self.tmpfiles if not f ==
> download_path] +
> +
> +def arguments():
> + parser = argparse.ArgumentParser()
> + parser.add_argument(
> + 'command', type=str, metavar='command',
> + choices='info upload clean analyze'.split(),
> + help="command to execute (info, upload, clean, analyze)")
> + parser.add_argument(
> + 'source', type=str, nargs='?',
> + help="local sstate dir (for uploads or analysis)")
> + parser.add_argument(
> + 'target', type=str,
> + help="remote sstate location (a file://, http://, or s3://
> URI)")
> + parser.add_argument(
> + '-v', '--verbose', default=False, action='store_true')
> + parser.add_argument(
> + '--max-age', type=str, default='1d',
> + help="clean tgz files older than MAX_AGE (a number followed
> by w|d|h|m|s)")
> + parser.add_argument(
> + '--max-sig-age', type=str, default=None,
> + help="clean siginfo files older than MAX_SIG_AGE (defaults
> to MAX_AGE)") +
> + args = parser.parse_args()
> + if args.command in 'upload analyze'.split() and args.source is
> None:
> + print(f"ERROR: '{args.command}' needs a source and target")
> + sys.exit(1)
> + elif args.command in 'info clean'.split() and args.source is not
> None:
> + print(f"ERROR: '{args.command}' must not have a source (only
> a target)")
> + sys.exit(1)
> + return args
> +
> +
> +def sstate_upload(source, target, verbose, **kwargs):
> + if not os.path.isdir(source):
> + print(f"WARNING: source {source} does not exist. Not
> uploading.")
> + return 0
> +
> + if not target.exists() and not target.create():
> + print(f"ERROR: target {target} does not exist and could not
> be created.")
> + return -1
> +
> + print(f"INFO: uploading {source} to {target}")
> + os.chdir(source)
> + upload, exists = [], []
> + for subdir, dirs, files in os.walk('.'):
> + target_dirs = subdir.split('/')[1:]
> + for f in files:
> + file_path = (('/'.join(target_dirs) + '/') if
> len(target_dirs) > 0 else '') + f
> + if target.exists(file_path):
> + if verbose:
> + print(f"[EXISTS] {file_path}")
> + exists.append(file_path)
> + else:
> + upload.append((file_path, target_dirs))
> + upload_gb = (sum([os.path.getsize(f[0]) for f in upload]) /
> 1024.0 / 1024.0 / 1024.0)
> + print(f"INFO: uploading {len(upload)} files ({upload_gb:.02f}
> GB)")
> + print(f"INFO: {len(exists)} files already present on target")
> + for file_path, target_dirs in upload:
> + if verbose:
> + print(f"[UPLOAD] {file_path}")
> + target.mkdir('/'.join(target_dirs))
> + target.upload(file_path, file_path)
> + return 0
> +
> +
> +def sstate_clean(target, max_age, max_sig_age, verbose, **kwargs):
> + def convert_to_seconds(x):
> + seconds_per_unit = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400,
> 'w': 604800}
> + m = re.match(r'^(\d+)(w|d|h|m|s)?', x)
> + if m is None:
> + print(f"ERROR: cannot parse MAX_AGE '{max_age}', needs
> to be a number followed by w|d|h|m|s")
> + sys.exit(-1)
> + if (unit := m.group(2)) is None:
> + print("WARNING: MAX_AGE without unit, assuming 'days'")
> + unit = 'd'
> + return int(m.group(1)) * seconds_per_unit[unit]
> +
> + max_age_seconds = convert_to_seconds(max_age)
> + if max_sig_age is None:
> + max_sig_age_seconds = max_age_seconds
> + else:
> + max_sig_age_seconds = max(max_age_seconds,
> convert_to_seconds(max_sig_age)) +
> + if not target.exists():
> + print(f"INFO: cannot access target {target}. Nothing to
> clean.")
> + return 0
> +
> + print(f"INFO: scanning {target}")
> + all_files = target.list_all()
> + links = [f for f in all_files if f.islink]
> + if links:
> + print(f"NOTE: we have links: {links}")
> + tgz_files = [f for f in all_files if f.suffix == 'tgz']
> + siginfo_files = [f for f in all_files if f.suffix ==
> 'tgz.siginfo']
> + del_tgz_files = [f for f in tgz_files if f.age >=
> max_age_seconds]
> + del_tgz_hashes = [f.hash for f in del_tgz_files]
> + del_siginfo_files = [f for f in siginfo_files if
> + f.age >= max_sig_age_seconds or f.hash in
> del_tgz_hashes]
> + print(f"INFO: found {len(tgz_files)} tgz files,
> {len(del_tgz_files)} of which are older than {max_age}")
> + print(f"INFO: found {len(siginfo_files)} siginfo files,
> {len(del_siginfo_files)} of which "
> + f"correspond to tgz files or are older than {max_sig_age}")
> +
> + for f in del_tgz_files + del_siginfo_files:
> + if verbose:
> + print(f"[DELETE] {f.path}")
> + target.delete(f.path)
> + freed_gb = sum([x.size for x in del_tgz_files +
> del_siginfo_files]) / 1024.0 / 1024.0 / 1024.0
> + print(f"INFO: freed {freed_gb:.02f} GB")
> + return 0
> +
> +
> +def sstate_info(target, verbose, **kwargs):
> + if not target.exists():
> + print(f"INFO: cannot access target {target}. No info to
> show.")
> + return 0
> +
> + print(f"INFO: scanning {target}")
> + all_files = target.list_all()
> + size_gb = sum([x.size for x in all_files]) / 1024.0 / 1024.0 /
> 1024.0
> + print(f"INFO: found {len(all_files)} files ({size_gb:0.2f} GB)")
> +
> + if not verbose:
> + return 0
> +
> + archs = list(set([f.arch for f in all_files]))
> + print(f"INFO: found the following archs: {archs}")
> +
> + key_task = {'deb': 'dpkg_build',
> + 'rootfs': 'rootfs_install',
> + 'bootstrap': 'bootstrap'}
> + recipes = {k: [] for k in key_task.keys()}
> + others = []
> + for pn in set([f.pn for f in all_files]):
> + tasks = set([f.task for f in all_files if f.pn == pn])
> + ks = [k for k, v in key_task.items() if v in tasks]
> + if len(ks) == 1:
> + recipes[ks[0]].append(pn)
> + elif len(ks) == 0:
> + others.append(pn)
> + else:
> + print(f"WARNING: {pn} could be any of {ks}")
> + for k, entries in recipes.items():
> + print(f"Cache hits for {k}:")
> + for pn in entries:
> + hits = [f for f in all_files if f.pn == pn and f.task ==
> key_task[k] and f.suffix == 'tgz']
> + print(f" - {pn}: {len(hits)} hits")
> + print("Other cache hits:")
> + for pn in others:
> + print(f" - {pn}")
> + return 0
> +
> +
> +def sstate_analyze(source, target, **kwargs):
> +    if not os.path.isdir(source):
> +        print(f"ERROR: source {source} does not exist. Nothing to analyze.")
> +        return -1
> +    if not target.exists():
> +        print(f"ERROR: target {target} does not exist. Nothing to analyze.")
> +        return -1
> +
> +    source = SstateFileTarget(source)
> +    local_sigs = {s.hash: s for s in source.list_all() if s.suffix.endswith('.siginfo')}
> +    remote_sigs = {s.hash: s for s in target.list_all() if s.suffix.endswith('.siginfo')}
> +
> +    key_tasks = 'dpkg_build rootfs_install bootstrap'.split()
> +
> +    check = [k for k, v in local_sigs.items() if v.task in key_tasks]
> +    for local_hash in check:
> +        s = local_sigs[local_hash]
> +        print(f"\033[1;33m==== checking local item {s.arch}:{s.pn}:{s.task} ({s.hash[:8]}) ====\033[0m")
> +        if local_hash in remote_sigs:
> +            print(" -> found hit in remote cache")
> +            continue
> +        remote_matches = [k for k, v in remote_sigs.items() if s.arch == v.arch and s.pn == v.pn and s.task == v.task]
> +        if len(remote_matches) == 0:
> +            print(" -> found no hit, and no potential remote matches")
> +        else:
> +            print(f" -> found no hit, but {len(remote_matches)} potential remote matches")
> +        for r in remote_matches:
> +            t = remote_sigs[r]
> +            print(f"\033[0;33m**** comparing to {r[:8]} ****\033[0m")
> +
> +            def recursecb(key, remote_hash, local_hash):
> +                recout = []
> +                if remote_hash in remote_sigs.keys():
> +                    remote_file = target.download(remote_sigs[remote_hash].path)
> +                elif remote_hash in local_sigs.keys():
> +                    recout.append(f"found remote hash in local signatures ({key})!?! (please implement that case!)")
> +                    return recout
> +                else:
> +                    recout.append(f"could not find remote signature {remote_hash[:8]} for job {key}")
> +                    return recout
> +                if local_hash in local_sigs.keys():
> +                    local_file = source.download(local_sigs[local_hash].path)
> +                elif local_hash in remote_sigs.keys():
> +                    local_file = target.download(remote_sigs[local_hash].path)
> +                else:
> +                    recout.append(f"could not find local signature {local_hash[:8]} for job {key}")
> +                    return recout
> +                if local_file is None or remote_file is None:
> +                    out = "Aborting analysis because siginfo files disappeared unexpectedly"
> +                else:
> +                    out = compare_sigfiles(remote_file, local_file, recursecb, color=True)
> +                if local_hash in local_sigs.keys():
> +                    source.release(local_file)
> +                else:
> +                    target.release(local_file)
> +                target.release(remote_file)
> +                for change in out:
> +                    recout.extend(['    ' + line for line in change.splitlines()])
> +                return recout
> +
> +            local_file = source.download(s.path)
> +            remote_file = target.download(t.path)
> +            out = compare_sigfiles(remote_file, local_file, recursecb, color=True)
> +            source.release(local_file)
> +            target.release(remote_file)
> +            # shorten hashes from 64 to 8 characters for better readability
> +            out = [re.sub(r'([0-9a-f]{8})[0-9a-f]{56}', r'\1', line) for line in out]
> +            print('\n'.join(out))
> +
> +
> +def main():
> +    args = arguments()
> +
> +    if args.target.startswith('http://'):
> +        target = SstateDavTarget(args.target)
> +    elif args.target.startswith('s3://'):
> +        target = SstateS3Target(args.target)
> +    elif args.target.startswith('file://'):
> +        target = SstateFileTarget(args.target)
> +    else:  # no protocol given, assume file://
> +        target = SstateFileTarget(args.target)
> +
> +    args.target = target
> +    return globals()[f'sstate_{args.command}'](**vars(args))
> +
> +
> +if __name__ == '__main__':
> +    sys.exit(main())
^ permalink raw reply [flat|nested] 7+ messages in thread
* RE: [PATCH 2/2] bitbake-diffsigs: make finding of changed signatures more robust
2022-04-13 8:19 ` Henning Schild
@ 2022-04-14 8:03 ` Schmidt, Adriaan
2022-04-14 8:22 ` Henning Schild
0 siblings, 1 reply; 7+ messages in thread
From: Schmidt, Adriaan @ 2022-04-14 8:03 UTC (permalink / raw)
To: Schild, Henning; +Cc: isar-users
Schild, Henning, Wednesday, 13 April 2022 10:20
> Am Wed, 13 Apr 2022 08:35:34 +0200
> schrieb Adriaan Schmidt <adriaan.schmidt@siemens.com>:
>
> > In `runtaskhashes`, the keys contain the absolute paths to the
> > recipe. When working with shared sstate caches (where these absolute
> > paths can be different) we see that compare_sigfiles does not
> > identify a changed hash of a dependent task as "changed", but
> > instead as "removed"&"added", preventing the function from recursing
> > and continuing the comparison.
> >
> > By calling `clean_basepaths` before comparing the `runtaskhashes`
> > dicts, we avoid this.
> >
> > Submitted upstream:
> > https://lists.openembedded.org/g/bitbake-devel/message/13603
>
> Submitted does not count, we will have to wait till it is merged and
> then we can think about a backport.
It's now on bitbake's `master-next`:
https://git.openembedded.org/bitbake/commit/?h=master-next&id=01b2b300901dc8b93973318127f8eb3c29b9a168
> How important is that? If it is just "nice to have" I think we should
> wait and not even do the backport once merged into bitbake.
In our specific setup on a GitLab K8s CI runner, we need this to get
useful analysis. Other setups might be fine without.
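To make the failure mode from the quoted commit message concrete, here is a
minimal standalone sketch (plain Python; simplified stand-ins for bitbake's
dict_diff and clean_basepaths, with made-up paths and hashes, not the real
siginfo data structures):

def dict_diff(a, b):
    # simplified: report which keys were added, removed, or changed in value
    added = {k for k in b if k not in a}
    removed = {k for k in a if k not in b}
    changed = {k for k in a if k in b and a[k] != b[k]}
    return changed, added, removed

def clean_basepaths(d):
    # strip the build-specific path prefix, keeping only "recipe.bb:task"
    return {k.split('/')[-1]: v for k, v in d.items()}

local = {'/builder-a/layers/meta/recipes/hello/hello.bb:do_fetch': 'aaaa1111'}
remote = {'/builder-b/work/meta/recipes/hello/hello.bb:do_fetch': 'bbbb2222'}

print(dict_diff(local, remote))
# the differing hash shows up only as removed + added, so the comparison
# cannot recurse into the dependency
print(dict_diff(clean_basepaths(local), clean_basepaths(remote)))
# now 'hello.bb:do_fetch' is reported as changed, and recursion can continue

With the path prefixes stripped, the same dependency on both sides is compared
by value, which is what the patch achieves by applying clean_basepaths before
dict_diff.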
Adriaan
> Henning
>
> > Signed-off-by: Adriaan Schmidt <adriaan.schmidt@siemens.com>
> > ---
> > bitbake/lib/bb/siggen.py | 10 +++++-----
> > 1 file changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/bitbake/lib/bb/siggen.py b/bitbake/lib/bb/siggen.py
> > index 0d88c6ec..8b23fd04 100644
> > --- a/bitbake/lib/bb/siggen.py
> > +++ b/bitbake/lib/bb/siggen.py
> > @@ -944,8 +944,8 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
> >
> >      if 'runtaskhashes' in a_data and 'runtaskhashes' in b_data:
> > -        a = a_data['runtaskhashes']
> > -        b = b_data['runtaskhashes']
> > +        a = clean_basepaths(a_data['runtaskhashes'])
> > +        b = clean_basepaths(b_data['runtaskhashes'])
> >          changed, added, removed = dict_diff(a, b)
> >          if added:
> >              for dep in added:
> > @@ -956,7 +956,7 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
> >                              #output.append("Dependency on task %s was replaced by %s with same hash" % (dep, bdep))
> >                              bdep_found = True
> >                  if not bdep_found:
> > -                    output.append(color_format("{color_title}Dependency on task %s was added{color_default} with hash %s") % (clean_basepath(dep), b[dep]))
> > +                    output.append(color_format("{color_title}Dependency on task %s was added{color_default} with hash %s") % (dep, b[dep]))
> >          if removed:
> >              for dep in removed:
> >                  adep_found = False
> > @@ -966,11 +966,11 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
> >                              #output.append("Dependency on task %s was replaced by %s with same hash" % (adep, dep))
> >                              adep_found = True
> >                  if not adep_found:
> > -                    output.append(color_format("{color_title}Dependency on task %s was removed{color_default} with hash %s") % (clean_basepath(dep), a[dep]))
> > +                    output.append(color_format("{color_title}Dependency on task %s was removed{color_default} with hash %s") % (dep, a[dep]))
> >          if changed:
> >              for dep in changed:
> >                  if not collapsed:
> > -                    output.append(color_format("{color_title}Hash for dependent task %s changed{color_default} from %s to %s") % (clean_basepath(dep), a[dep], b[dep]))
> > +                    output.append(color_format("{color_title}Hash for dependent task %s changed{color_default} from %s to %s") % (dep, a[dep], b[dep]))
> >                  if callable(recursecb):
> >                      recout = recursecb(dep, a[dep], b[dep])
> >                      if recout:
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH 2/2] bitbake-diffsigs: make finding of changed signatures more robust
2022-04-14 8:03 ` Schmidt, Adriaan
@ 2022-04-14 8:22 ` Henning Schild
0 siblings, 0 replies; 7+ messages in thread
From: Henning Schild @ 2022-04-14 8:22 UTC (permalink / raw)
To: Schmidt, Adriaan (T CED SES-DE); +Cc: isar-users
Am Thu, 14 Apr 2022 10:03:39 +0200
schrieb "Schmidt, Adriaan (T CED SES-DE)" <adriaan.schmidt@siemens.com>:
> Schild, Henning, Wednesday, 13 April 2022 10:20
> > Am Wed, 13 Apr 2022 08:35:34 +0200
> > schrieb Adriaan Schmidt <adriaan.schmidt@siemens.com>:
> >
> > > In `runtaskhashes`, the keys contain the absolute paths to the
> > > recipe. When working with shared sstate caches (where these
> > > absolute paths can be different) we see that compare_sigfiles
> > > does not identify a changed hash of a dependent task as
> > > "changed", but instead as "removed"&"added", preventing the
> > > function from recursing and continuing the comparison.
> > >
> > > By calling `clean_basepaths` before comparing the `runtaskhashes`
> > > dicts, we avoid this.
> > >
> > > Submitted upstream:
> > > https://lists.openembedded.org/g/bitbake-devel/message/13603
> >
> > Submitted does not count, we will have to wait till it is merged and
> > then we can think about a backport.
>
> It's now on bitbake's `master-next`:
>
> https://git.openembedded.org/bitbake/commit/?h=master-next&id=01b2b300901dc8b93973318127f8eb3c29b9a168
Cool. I guess the commit message should be changed to include that
upstream sha.
Henning
> > How important is that? If it is just "nice to have" I think we
> > should wait and not even do the backport once merged into bitbake.
>
> In our specific setup on a GitLab K8s CI runner, we need this to get
> useful analysis. Other setups might be fine without.
>
> Adriaan
>
> > Henning
> >
> > > Signed-off-by: Adriaan Schmidt <adriaan.schmidt@siemens.com>
> > > ---
> > > bitbake/lib/bb/siggen.py | 10 +++++-----
> > > 1 file changed, 5 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/bitbake/lib/bb/siggen.py b/bitbake/lib/bb/siggen.py
> > > index 0d88c6ec..8b23fd04 100644
> > > --- a/bitbake/lib/bb/siggen.py
> > > +++ b/bitbake/lib/bb/siggen.py
> > > @@ -944,8 +944,8 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
> > >
> > >      if 'runtaskhashes' in a_data and 'runtaskhashes' in b_data:
> > > -        a = a_data['runtaskhashes']
> > > -        b = b_data['runtaskhashes']
> > > +        a = clean_basepaths(a_data['runtaskhashes'])
> > > +        b = clean_basepaths(b_data['runtaskhashes'])
> > >          changed, added, removed = dict_diff(a, b)
> > >          if added:
> > >              for dep in added:
> > > @@ -956,7 +956,7 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
> > >                              #output.append("Dependency on task %s was replaced by %s with same hash" % (dep, bdep))
> > >                              bdep_found = True
> > >                  if not bdep_found:
> > > -                    output.append(color_format("{color_title}Dependency on task %s was added{color_default} with hash %s") % (clean_basepath(dep), b[dep]))
> > > +                    output.append(color_format("{color_title}Dependency on task %s was added{color_default} with hash %s") % (dep, b[dep]))
> > >          if removed:
> > >              for dep in removed:
> > >                  adep_found = False
> > > @@ -966,11 +966,11 @@ def compare_sigfiles(a, b, recursecb=None, color=False, collapsed=False):
> > >                              #output.append("Dependency on task %s was replaced by %s with same hash" % (adep, dep))
> > >                              adep_found = True
> > >                  if not adep_found:
> > > -                    output.append(color_format("{color_title}Dependency on task %s was removed{color_default} with hash %s") % (clean_basepath(dep), a[dep]))
> > > +                    output.append(color_format("{color_title}Dependency on task %s was removed{color_default} with hash %s") % (dep, a[dep]))
> > >          if changed:
> > >              for dep in changed:
> > >                  if not collapsed:
> > > -                    output.append(color_format("{color_title}Hash for dependent task %s changed{color_default} from %s to %s") % (clean_basepath(dep), a[dep], b[dep]))
> > > +                    output.append(color_format("{color_title}Hash for dependent task %s changed{color_default} from %s to %s") % (dep, a[dep], b[dep]))
> > >                  if callable(recursecb):
> > >                      recout = recursecb(dep, a[dep], b[dep])
> > >                      if recout:
>
^ permalink raw reply [flat|nested] 7+ messages in thread
end of thread, other threads:[~2022-04-14 8:22 UTC | newest]
Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-13 6:35 [PATCH 0/2] Sstate maintenance script Adriaan Schmidt
2022-04-13 6:35 ` [PATCH 1/2] scripts: add isar-sstate Adriaan Schmidt
2022-04-14 7:36 ` Henning Schild
2022-04-13 6:35 ` [PATCH 2/2] bitbake-diffsigs: make finding of changed signatures more robust Adriaan Schmidt
2022-04-13 8:19 ` Henning Schild
2022-04-14 8:03 ` Schmidt, Adriaan
2022-04-14 8:22 ` Henning Schild
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox