public inbox for isar-users@googlegroups.com
From: Henning Schild <henning.schild@siemens.com>
To: Adriaan Schmidt <adriaan.schmidt@siemens.com>
Cc: isar-users@googlegroups.com
Subject: Re: [PATCH] fix(isar-sstate): also handle zst files
Date: Fri, 10 Feb 2023 16:48:44 +0100	[thread overview]
Message-ID: <20230210164844.440757d7@md1za8fc.ad001.siemens.net> (raw)
In-Reply-To: <20230210153434.1024604-1-adriaan.schmidt@siemens.com>

Am Fri, 10 Feb 2023 16:34:34 +0100
schrieb Adriaan Schmidt <adriaan.schmidt@siemens.com>:

> With bitbake 2.0, sstate artifacts have changed from tgz to tar.zst.
> Our isar-sstate script needs to scan for those as well. The
> implementation is backwards-compatible.

I came here just wanting to make sure it will work for both, and yes it
does!
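For reference, the backwards-compatible matching the patch introduces can be sketched in isolation like this (a minimal standalone sketch: the `suffix()` helper below is illustrative and does not reproduce the script's real sstate filename parsing, which handles the full `sstate:...` naming scheme):

```python
# Sketch of the backwards-compatible suffix check: both old-style .tgz
# artifacts and bitbake-2.0 .tar.zst artifacts are recognized, as are
# their corresponding .siginfo files.

ARCHIVE_SUFFIXES = ('tgz', 'tar.zst')

def suffix(name):
    # Illustrative only: take everything after the first dot of the
    # basename. The real script derives suffixes from its file objects.
    base = name.rsplit('/', 1)[-1]
    return base.split('.', 1)[1] if '.' in base else ''

def is_archive(name):
    return suffix(name) in ARCHIVE_SUFFIXES

def is_siginfo(name):
    return suffix(name) in tuple(s + '.siginfo' for s in ARCHIVE_SUFFIXES)
```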

Good catch!

Henning

> Signed-off-by: Adriaan Schmidt <adriaan.schmidt@siemens.com>
> ---
>  scripts/isar-sstate | 24 ++++++++++++------------
>  1 file changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/scripts/isar-sstate b/scripts/isar-sstate
> index 53d0541f..c14c2843 100755
> --- a/scripts/isar-sstate
> +++ b/scripts/isar-sstate
> @@ -40,7 +40,7 @@ followed by one of `w`, `d`, `h`, `m`, or `s` (for weeks, days, hours, minutes, seconds, respectively).
>  
>  `--max-age` specifies up to which age artifacts should be kept in the cache.
> -Anything older will be removed. Note that this only applies to the `.tgz` files
> +Anything older will be removed. Note that this only applies to the archive files
>  containing the actual cached items, not the `.siginfo` files containing the cache
>  metadata (signatures and hashes). To permit analysis of caching details using
>  the `analyze` command, the siginfo
> @@ -576,7 +576,7 @@ def arguments():
>          '-v', '--verbose', default=False, action='store_true')
>      parser.add_argument(
>          '--max-age', type=str, default='1d',
> -        help="clean: remove tgz files older than MAX_AGE (a number followed by w|d|h|m|s)")
> +        help="clean: remove archive files older than MAX_AGE (a number followed by w|d|h|m|s)")
>      parser.add_argument(
>          '--max-sig-age', type=str, default=None,
>          help="clean: remove siginfo files older than MAX_SIG_AGE (defaults to MAX_AGE)")
> @@ -664,21 +664,21 @@ def sstate_clean(target, max_age, max_sig_age, verbose, **kwargs):
>      links = [f for f in all_files if f.islink]
>      if links:
>          print(f"NOTE: we have links: {links}")
> -    tgz_files = [f for f in all_files if f.suffix == 'tgz']
> -    siginfo_files = [f for f in all_files if f.suffix == 'tgz.siginfo']
> -    del_tgz_files = [f for f in tgz_files if f.age >= max_age_seconds]
> -    del_tgz_hashes = [f.hash for f in del_tgz_files]
> +    archive_files = [f for f in all_files if f.suffix in ['tgz', 'tar.zst']]
> +    siginfo_files = [f for f in all_files if f.suffix in ['tgz.siginfo', 'tar.zst.siginfo']]
> +    del_archive_files = [f for f in archive_files if f.age >= max_age_seconds]
> +    del_archive_hashes = [f.hash for f in del_archive_files]
>      del_siginfo_files = [f for f in siginfo_files if
> -                         f.age >= max_sig_age_seconds or f.hash in del_tgz_hashes]
> -    print(f"INFO: found {len(tgz_files)} tgz files, {len(del_tgz_files)} of which are older than {max_age}")
> +                         f.age >= max_sig_age_seconds or f.hash in del_archive_hashes]
> +    print(f"INFO: found {len(archive_files)} archive files, {len(del_archive_files)} of which are older than {max_age}")
>      print(f"INFO: found {len(siginfo_files)} siginfo files, {len(del_siginfo_files)} of which "
> -          f"correspond to old tgz files or are older than {max_sig_age}")
> +          f"correspond to old archive files or are older than {max_sig_age}")
>  
> -    for f in del_tgz_files + del_siginfo_files:
> +    for f in del_archive_files + del_siginfo_files:
>          if verbose:
>              print(f"[DELETE] {f.path}")
>          target.delete(f.path)
> -    freed_gb = sum([x.size for x in del_tgz_files + del_siginfo_files]) / 1024.0 / 1024.0 / 1024.0
> +    freed_gb = sum([x.size for x in del_archive_files + del_siginfo_files]) / 1024.0 / 1024.0 / 1024.0
>      print(f"INFO: freed {freed_gb:.02f} GB")
>      return 0
>  
> @@ -716,7 +716,7 @@ def sstate_info(target, verbose, **kwargs):
>      for k, entries in recipes.items():
>          print(f"Cache entries for {k}:")
>          for pn in entries:
> -            artifacts = [f for f in all_files if f.pn == pn and f.task == key_task[k] and f.suffix == 'tgz']
> +            artifacts = [f for f in all_files if f.pn == pn and f.task == key_task[k] and f.suffix in ['tgz', 'tar.zst']]
>              print(f"  - {pn}: {len(artifacts)} entries")
>      print("Other cache entries:")
>      for pn in others:
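The MAX_AGE format mentioned in the `--max-age` help text above (a number followed by `w`, `d`, `h`, `m`, or `s`) can be parsed with a sketch like the following. This is illustrative only, not the script's actual implementation; the function name `max_age_seconds` is chosen here to echo the variable of the same name in the diff:

```python
# Illustrative parser for the MAX_AGE / MAX_SIG_AGE format: a number
# followed by w|d|h|m|s (weeks, days, hours, minutes, seconds).
UNITS = {'w': 7 * 24 * 3600, 'd': 24 * 3600, 'h': 3600, 'm': 60, 's': 1}

def max_age_seconds(spec):
    # Split "14d" into number part "14" and unit suffix "d".
    number, unit = spec[:-1], spec[-1]
    if unit not in UNITS or not number.isdigit():
        raise ValueError(f"invalid age specification: {spec!r}")
    return int(number) * UNITS[unit]
```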


Thread overview: 5+ messages
2023-02-10 15:34 Adriaan Schmidt
2023-02-10 15:48 ` Henning Schild [this message]
2023-02-10 19:16 ` Henning Schild
2023-02-11  0:15 ` Moessbauer, Felix
2023-02-16  4:32 ` Uladzimir Bely
