From: Henning Schild <henning.schild@siemens.com>
To: <isar-users@googlegroups.com>,
	"Maxim Yu. Osipov" <mosipov@ilbers.de>,
	Alexander Smirnov <asmirnov@ilbers.de>,
	"Kiszka, Jan (CT RDA IOT SES-DE)" <jan.kiszka@siemens.com>
Subject: RFC: base-apt caching improvements
Date: Wed, 27 Feb 2019 10:05:36 +0100
Message-ID: <20190227100536.10caef1d@md1za8fc.ad001.siemens.net>

Hi,

I did not really like the current approach to how we cache, because I
thought we could and should do it transparently: route all "apt"
downloads through a proxy that feeds every file into reprepro as it is
requested.
I did not find a proxy that can do that; maybe I did not look hard
enough.
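
Transparent would have meant no more than pointing apt at the proxy,
roughly like this (address made up, e.g. an apt-cacher-ng instance):

echo 'Acquire::http::Proxy "http://127.0.0.1:3142";' \
    > /etc/apt/apt.conf.d/01proxy

The missing piece is a proxy that also feeds everything it serves into
reprepro.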

So I came back to the currently implemented model: collect the files
from the rootfs-es after we are all done. The way that is currently
done is to take them from the apt cache. Unfortunately that is not
guaranteed to work: any package we install could mess with the apt
config and therefore with the caching of the rootfs-es (e.g. something
like /etc/apt/apt.conf.d/docker-clean, which you will find in container
images).
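
For reference, the problematic part of docker-clean is an apt hook
roughly like the following (quoted from memory, the real file has a
few more knobs):

DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb || true"; };

That wipes the archives directory after every dpkg run, i.e. exactly
the files we currently harvest.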

So that cache cannot be trusted. It also does not work for source
packages and cannot fulfill our needs with regards to full
caching/archiving.

So instead we should explicitly download all debs and sources of the
currently installed packages of all rootfs-es (target and its
buildchroots), and reprepro from there.

Here is what this could look like:

# start clean and fetch each installed package, pinned to the
# exact version that is installed
rm -rf /tmp/foo
mkdir -p /tmp/foo
cd /tmp/foo
dpkg -l | grep "^ii" | awk '{print $2"="$3}' | xargs apt-get -y download
### reprepro *.deb

# same again for the corresponding source packages
# (needs deb-src entries in sources.list)
rm -rf /tmp/foo-src
mkdir -p /tmp/foo-src
cd /tmp/foo-src
dpkg -l | grep "^ii" | awk '{print $2"="$3}' | xargs apt-get -y source --download-only
### reprepro *.dsc
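
The two ### placeholders could be filled with something like this,
assuming the base-apt repo lives at ${REPO} with distribution codename
${CODENAME} (both names made up here):

reprepro -b "${REPO}" includedeb "${CODENAME}" /tmp/foo/*.deb
for dsc in /tmp/foo-src/*.dsc; do
    reprepro -b "${REPO}" includedsc "${CODENAME}" "$dsc"
done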

This still lacks filtering out all the packages that come from isar-apt,
but it could be an improvement over our current way. We do not need to
trust those fragile caches because we are explicit. We can do sources the
same way ... and should probably just do all of them anyways.
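
A hypothetical version of such a filter, assuming the isar-apt Release
file sets an Origin we can match on (the actual value would need
checking):

dpkg -l | grep "^ii" | awk '{print $2"="$3}' | while read -r pv; do
    # keep only packages whose candidate does not come from isar-apt
    apt-cache policy "${pv%%=*}" | grep -q "o=isar" || echo "$pv"
done | xargs -r apt-get -y download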
For selective sources we could "apt-get source" into a central dldir and
copy the sources to the recipe workdir. That would be a matter of mounting
that shared dir and changing do_apt_fetch to leave a copy in that staging dir.
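
A minimal sketch of that staging idea; the dldir path and the use of
${PN}/${WORKDIR} are assumptions, not the actual do_apt_fetch code:

DLDIR="/downloads/deb-src"   # shared dir, bind-mounted into the buildchroot
mkdir -p "${DLDIR}"
(cd "${DLDIR}" && apt-get -y source --download-only "${PN}")
cp "${DLDIR}/${PN}_"* "${WORKDIR}/"   # recipe keeps its own copy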
In fact we (Siemens) want all sources, and it might be a sane default for
others as well.

Henning
