public inbox for isar-users@googlegroups.com
From: Uladzimir Bely <ubely@ilbers.de>
To: "ibr@radix50.net" <ibr@radix50.net>,
	isar-users@googlegroups.com, "Schild,
	Henning" <henning.schild@siemens.com>
Cc: "Moessbauer, Felix" <felix.moessbauer@siemens.com>
Subject: Re: Better way to handle apt cache needed
Date: Wed, 28 Dec 2022 13:23:18 +0300
Message-ID: <4769513.OV4Wx5bFTl@hp>
In-Reply-To: <38d18c245baa4f685642eafa9a52ab9b9ae9001c.camel@siemens.com>

In mail from Wednesday, 28 December 2022 12:45:07 +03, user Moessbauer,
Felix wrote:
> On Wed, 2022-12-28 at 10:21 +0100, Baurzhan Ismagulov wrote:
> > On Wed, Dec 28, 2022 at 09:02:13AM +0000, Moessbauer, Felix wrote:
> > > The root cause for that behavior is the apt cache
> > > (deb_dl_dir_(import|export)), which copies all previously
> > > downloaded apt packages into the WORKDIR of each (bitbake) package.
> > > Given that a common apt cache is around 2GB and 8 tasks run in
> > > parallel, this already gives 16GB for the tasks, plus 7 * 2GB for
> > > the buildchroots (host and target), ~30GB in total.
> > 
> > Thanks Felix for the report. IIRC, it was previously mounted and was
> > supposed to be converted to per-package hardlinks to parallelize
> > sbuild instances and ease debugging (by knowing later which exact
> > snapshot was fed to a specific build). We use small (1- / 2-TB) SSDs
> > as job storage and a huge increase would have been noticeable...
> > We'll check.
> 
> Thanks!
> 
> I just noticed that we even make an additional copy to have the
> packages inside the sbuild chroot.
> 
> This behavior is hard to notice on small and medium-sized projects
> (as long as IOPS are not an issue). But any quadratic behavior will
> eventually make the build impossible. And as Florian said, many of
> our ISAR users build in VMs on shared filesystems, where IO is
> extremely expensive / slow. If we could optimize that, it would be a
> huge benefit for a lot of users.
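
For context, the import step Felix refers to conceptually behaves like
the sketch below (an illustrative simplification with assumed variable
names, not the exact ISAR implementation):

    # each task imports its own full copy of the shared download dir
    deb_dl_dir_import() {
        target="$1/var/cache/apt/archives"   # per-task rootfs in WORKDIR
        mkdir -p "${target}"
        cp -n "${DEBDIR}/"*.deb "${target}/"
    }

Since every parallel task (and additionally the sbuild chroot) gets its
own copy, the ~2GB cache is duplicated once per running task.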

I've just done some measurements:

- ran an Isar build for qemuarm64 (8 cores = max. 8 build tasks in parallel)
- measured system disk consumption every 5 seconds (see the loop sketch below)

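Something like the following loop is enough for this kind of sampling
(the mount point is an assumption; any path on the build disk works):

    # sample used space on the build partition every 5 seconds
    while true; do
        df -BM --output=used /build | tail -n1 >> disk-usage.log
        sleep 5
    done
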
Results:
- The 'downloads/deb' directory finally takes 480MiB
- After the build finished, disk usage had grown by ~9GiB
- During the build, the peak disk usage growth was ~16GiB

This means that about 16 - 9 = 7GiB of space was temporarily used by
parallel builds, which matches 8 tasks * 2 copies * 480MiB ≈ 7.5GiB.

So, the main goal now is to minimize this value.
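
One possible direction, following the per-package hardlink idea
Baurzhan mentioned: when 'downloads/deb' and the WORKDIRs live on the
same filesystem, the import could hardlink instead of copy, so each
"copy" costs no additional space. Again an illustrative sketch, not a
patch:

    deb_dl_dir_import() {
        target="$1/var/cache/apt/archives"
        mkdir -p "${target}"
        # -l hardlinks instead of copying data; fall back to a real
        # copy when the cache lives on a different filesystem
        cp -ln "${DEBDIR}/"*.deb "${target}/" 2>/dev/null || \
            cp -n "${DEBDIR}/"*.deb "${target}/"
    }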

> 
> Felix
> 
> > With kind regards,
> > Baurzhan


Thread overview: 28+ messages
2022-12-28  9:02 Moessbauer, Felix
2022-12-28  9:21 ` Baurzhan Ismagulov
2022-12-28  9:45   ` Moessbauer, Felix
2022-12-28 10:23     ` Uladzimir Bely [this message]
2022-12-28 11:04       ` Moessbauer, Felix
2022-12-29 23:15         ` Roberto A. Foglietta
2022-12-30  4:38           ` Uladzimir Bely
2022-12-30  7:08             ` Roberto A. Foglietta
2022-12-30  6:05           ` Moessbauer, Felix
2022-12-30  8:27             ` Roberto A. Foglietta
2022-12-30 10:04               ` Moessbauer, Felix
2022-12-30 13:11               ` Moessbauer, Felix
2022-12-30 13:33                 ` Roberto A. Foglietta
2022-12-30 13:47                   ` Roberto A. Foglietta
2022-12-31  8:59                     ` Roberto A. Foglietta
2022-12-31 21:03                       ` Roberto A. Foglietta
2023-01-09  8:12                       ` Roberto A. Foglietta
2023-01-09  9:58                         ` Roberto A. Foglietta
2023-01-19 18:08                           ` Roberto A. Foglietta
2023-01-25  4:48                             ` Roberto A. Foglietta
2023-02-10 16:05                               ` Roberto A. Foglietta
2023-02-14 10:01                                 ` Roberto A. Foglietta
2023-02-14 16:46                                   ` Roberto A. Foglietta
2022-12-30 12:29           ` Roberto A. Foglietta
2022-12-28  9:22 ` Florian Bezdeka
2023-01-02 16:15 ` Henning Schild
2023-01-05  6:31 ` Uladzimir Bely
2023-01-05 17:10   ` Roberto A. Foglietta
