From: Uladzimir Bely <ubely@ilbers.de>
To: Jan Kiszka <jan.kiszka@siemens.com>,
isar-users <isar-users@googlegroups.com>
Subject: Re: [PATCH v2] Convert apt source fetcher into native bitbake variant
Date: Thu, 28 Nov 2024 09:23:31 +0300
Message-ID: <87f0b2714fbbb7f2c691c471e744ba105181344a.camel@ilbers.de>
In-Reply-To: <d656bc9d61be9adf3181ccbf333dcf8829675b5f.camel@ilbers.de>
On Thu, 2024-11-28 at 09:03 +0300, Uladzimir Bely wrote:
> On Thu, 2024-11-28 at 12:55 +0800, Jan Kiszka wrote:
> > On 27.11.24 22:07, Uladzimir Bely wrote:
> > > On Fri, 2024-11-15 at 17:40 +0100, Jan Kiszka wrote:
> > > > From: Jan Kiszka <jan.kiszka@siemens.com>
> > > >
> > > > There is no major functional difference, but we no longer have to
> > > > manipulate SRC_URI by registering an official fetcher for apt://.
> > > >
> > > > As the fetching no longer takes place in separate tasks, do_fetch and
> > > > do_unpack need to gain the extra flags that were so far assigned to
> > > > apt_fetch and apt_unpack. That happens conditionally, i.e. only if
> > > > SRC_URI actually contains an apt URL.
> > > >
> > > > One difference to the original version is the possibility - even if
> > > > practically of minor relevance - to unpack multiple apt sources into S.
> > > > The old version contained a loop but was directing dpkg-source to a
> > > > pre-existing dir which would have failed on the second iteration. The
> > > > new version now folds the results together after each step.
> > > >
> > > > Another minor difference is that unversioned fetches put their results
> > > > into the same subfolder in DL_DIR, also when specifying a distro
> > > > codename. Only versioned fetches get dedicated folders (and .done
> > > > stamps).
> > > >
> > > > There is no progress report realized because dpkg-source unfortunately
> > > > does not provide information upfront to make this predictable, thus
> > > > expressible in form of percentage.
> > > >
> > > > Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
> > > > ---
> > > >
> > > > Changes in v2:
> > > > - rebased, including the removal of isar-apt sources in apt_unpack
> > > >
> > > > I'm carefully optimistic that this change also resolves the previously
> > > > seen issue in CI.
> > > >
> > > >  meta/classes/dpkg-base.bbclass | 104 ++++-----------------------------
> > > >  meta/lib/aptsrc_fetcher.py     |  93 +++++++++++++++++++++++++++++
> > > >  2 files changed, 104 insertions(+), 93 deletions(-)
> > > >  create mode 100644 meta/lib/aptsrc_fetcher.py
> > > >
> > > > diff --git a/meta/classes/dpkg-base.bbclass b/meta/classes/dpkg-base.bbclass
> > > > index b4ea8e17..c02c07a8 100644
> > > > --- a/meta/classes/dpkg-base.bbclass
> > > > +++ b/meta/classes/dpkg-base.bbclass
> > > > @@ -79,110 +79,28 @@ do_adjust_git[lockfiles] += "${DL_DIR}/git/isar.lock"
> > > > inherit patch
> > > > addtask patch after do_adjust_git
> > > >
> > > > -SRC_APT ?= ""
> > > > -
> > > > -# filter out all "apt://" URIs out of SRC_URI and stick them into SRC_APT
> > > >  python() {
> > > > -    src_uri = (d.getVar('SRC_URI', False) or "").split()
> > > > +    from bb.fetch2 import methods
> > > >
> > > > -    prefix = "apt://"
> > > > -    src_apt = []
> > > > -    for u in src_uri:
> > > > -        if u.startswith(prefix):
> > > > -            src_apt.append(u[len(prefix) :])
> > > > -            d.setVar('SRC_URI:remove', u)
> > > > +    # apt-src fetcher
> > > > +    import aptsrc_fetcher
> > > > +    methods.append(aptsrc_fetcher.AptSrc())
> > > >
> > > > -    d.prependVar('SRC_APT', ' '.join(src_apt))
> > > > +    src_uri = (d.getVar('SRC_URI', False) or "").split()
> > > > +    for u in src_uri:
> > > > +        if u.startswith("apt://"):
> > > > +            d.appendVarFlag('do_fetch', 'depends', d.getVar('SCHROOT_DEP'))
> > > >
> > > > -    if len(d.getVar('SRC_APT').strip()) > 0:
> > > > -        bb.build.addtask('apt_unpack', 'do_patch', '', d)
> > > > -        bb.build.addtask('cleanall_apt', 'do_cleanall', '', d)
> > > > +            d.appendVarFlag('do_unpack', 'cleandirs', d.getVar('S'))
> > > > +            d.setVarFlag('do_unpack', 'network', d.getVar('TASK_USE_SUDO'))
> > > > +            break
> > > >
> > > >      # container docker fetcher
> > > >      import container_fetcher
> > > > -    from bb.fetch2 import methods
> > > >
> > > >      methods.append(container_fetcher.Container())
> > > >  }
> > > >
> > > > -do_apt_fetch() {
> > > > -    E="${@ isar_export_proxies(d)}"
> > > > -    schroot_create_configs
> > > > -
> > > > -    session_id=$(schroot -q -b -c ${SBUILD_CHROOT})
> > > > -    echo "Started session: ${session_id}"
> > > > -
> > > > -    schroot_cleanup() {
> > > > -        schroot -q -f -e -c ${session_id} > /dev/null 2>&1
> > > > -        schroot_delete_configs
> > > > -    }
> > > > -    trap 'exit 1' INT HUP QUIT TERM ALRM USR1
> > > > -    trap 'schroot_cleanup' EXIT
> > > > -
> > > > -    schroot -r -c ${session_id} -d / -u root -- \
> > > > -        rm /etc/apt/sources.list.d/isar-apt.list /etc/apt/preferences.d/isar-apt
> > > > -    schroot -r -c ${session_id} -d / -- \
> > > > -        sh -c '
> > > > -            set -e
> > > > -            for uri in $2; do
> > > > -                mkdir -p /downloads/deb-src/"$1"/${uri}
> > > > -                cd /downloads/deb-src/"$1"/${uri}
> > > > -                apt-get -y --download-only --only-source source ${uri}
> > > > -            done' \
> > > > -            my_script "${BASE_DISTRO}-${BASE_DISTRO_CODENAME}" "${SRC_APT}"
> > > > -
> > > > -    schroot -e -c ${session_id}
> > > > -    schroot_delete_configs
> > > > -}
> > > > -
> > > > -addtask apt_fetch
> > > > -do_apt_fetch[lockfiles] += "${REPO_ISAR_DIR}/isar.lock"
> > > > -do_apt_fetch[network] = "${TASK_USE_NETWORK_AND_SUDO}"
> > > > -
> > > > -# Add dependency from the correct schroot: host or target
> > > > -do_apt_fetch[depends] += "${SCHROOT_DEP}"
> > > > -
> > > > -do_apt_unpack() {
> > > > -    rm -rf ${S}
> > > > -    schroot_create_configs
> > > > -
> > > > -    session_id=$(schroot -q -b -c ${SBUILD_CHROOT})
> > > > -    echo "Started session: ${session_id}"
> > > > -
> > > > -    schroot_cleanup() {
> > > > -        schroot -q -f -e -c ${session_id} > /dev/null 2>&1
> > > > -        schroot_delete_configs
> > > > -    }
> > > > -    trap 'exit 1' INT HUP QUIT TERM ALRM USR1
> > > > -    trap 'schroot_cleanup' EXIT
> > > > -
> > > > -    schroot -r -c ${session_id} -d / -u root -- \
> > > > -        rm /etc/apt/sources.list.d/isar-apt.list /etc/apt/preferences.d/isar-apt
> > > > -    schroot -r -c ${session_id} -d / -- \
> > > > -        sh -c '
> > > > -            set -e
> > > > -            for uri in $2; do
> > > > -                dscfile="$(apt-get -y -qq --print-uris --only-source source $uri | cut -d " " -f2 | grep -E "*.dsc")"
> > > > -                cd ${PP}
> > > > -                cp /downloads/deb-src/"${1}"/${uri}/* ${PP}
> > > > -                dpkg-source -x "${dscfile}" "${PPS}"
> > > > -            done' \
> > > > -            my_script "${BASE_DISTRO}-${BASE_DISTRO_CODENAME}" "${SRC_APT}"
> > > > -
> > > > -    schroot -e -c ${session_id}
> > > > -    schroot_delete_configs
> > > > -do_apt_unpack[network] = "${TASK_USE_SUDO}"
> > > > -
> > > > -addtask apt_unpack after do_apt_fetch
> > > > -
> > > > -do_cleanall_apt[nostamp] = "1"
> > > > -do_cleanall_apt() {
> > > > -    for uri in "${SRC_APT}"; do
> > > > -        rm -rf "${DEBSRCDIR}/${BASE_DISTRO}-${BASE_DISTRO_CODENAME}/$uri"
> > > > -    done
> > > > -}
> > > > -
> > > > def get_package_srcdir(d):
> > > > s = os.path.abspath(d.getVar("S"))
> > > > workdir = os.path.abspath(d.getVar("WORKDIR"))
> > > > diff --git a/meta/lib/aptsrc_fetcher.py b/meta/lib/aptsrc_fetcher.py
> > > > new file mode 100644
> > > > index 00000000..ee726202
> > > > --- /dev/null
> > > > +++ b/meta/lib/aptsrc_fetcher.py
> > > > @@ -0,0 +1,93 @@
> > > > +# This software is a part of ISAR.
> > > > +# Copyright (c) Siemens AG, 2024
> > > > +#
> > > > +# SPDX-License-Identifier: MIT
> > > > +
> > > > +from bb.fetch2 import FetchError
> > > > +from bb.fetch2 import FetchMethod
> > > > +from bb.fetch2 import logger
> > > > +from bb.fetch2 import runfetchcmd
> > > > +
> > > > +class AptSrc(FetchMethod):
> > > > +    def supports(self, ud, d):
> > > > +        return ud.type in ['apt']
> > > > +
> > > > +    def urldata_init(self, ud, d):
> > > > +        ud.src_package = ud.url[len('apt://'):]
> > > > +        ud.host = ud.host.replace('=', '_')
> > > > +
> > > > +        base_distro = d.getVar('BASE_DISTRO')
> > > > +
> > > > +        # For these distros we know that the same version means the same
> > > > +        # source package, also across distro releases.
> > > > +        distro_suffix = '' if base_distro in ['debian', 'ubuntu'] else \
> > > > +            '-' + d.getVar('BASE_DISTRO_CODENAME')
I think that, to avoid the issue I mentioned, we should continue using
${BASE_DISTRO}-${BASE_DISTRO_CODENAME} here without exceptions.
Also, the cache_deb_src() function in rootfs.bbclass still uses this
location.
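
Roughly what I have in mind - only a sketch of that suggestion, not a
tested patch:

    def urldata_init(self, ud, d):
        ud.src_package = ud.url[len('apt://'):]
        ud.host = ud.host.replace('=', '_')

        # Always key the download location on distro plus codename, as the
        # old fetcher and cache_deb_src() in rootfs.bbclass do, so that
        # e.g. bookworm and bullseye sources never share a directory or
        # .done stamp.
        base_distro = d.getVar('BASE_DISTRO')
        codename = d.getVar('BASE_DISTRO_CODENAME')
        ud.localfile = 'deb-src/' + base_distro + '-' + codename + '/' + ud.host
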
> > > > +
> > > > +        ud.localfile = 'deb-src/' + base_distro + distro_suffix + '/' + ud.host
> > > > +
> > > > +    def download(self, ud, d):
> > > > +        bb.utils.exec_flat_python_func('isar_export_proxies', d)
> > > > +        bb.build.exec_func('schroot_create_configs', d)
> > > > +
> > > > +        sbuild_chroot = d.getVar('SBUILD_CHROOT')
> > > > +        session_id = runfetchcmd(f'schroot -q -b -c {sbuild_chroot}', d).strip()
> > > > +        logger.info(f'Started session: {session_id}')
> > > > +
> > > > +        repo_isar_dir = d.getVar('REPO_ISAR_DIR')
> > > > +        lockfile = bb.utils.lockfile(f'{repo_isar_dir}/isar.lock')
> > > > +
> > > > +        try:
> > > > +            runfetchcmd(f'''
> > > > +                set -e
> > > > +                schroot -r -c {session_id} -d / -u root -- \
> > > > +                    rm /etc/apt/sources.list.d/isar-apt.list /etc/apt/preferences.d/isar-apt
> > > > +                schroot -r -c {session_id} -d / -- \
> > > > +                    sh -c '
> > > > +                        set -e
> > > > +                        mkdir -p /downloads/{ud.localfile}
> > > > +                        cd /downloads/{ud.localfile}
> > > > +                        apt-get -y --download-only --only-source source {ud.src_package}
> > > > +                    '
> > > > +                ''', d)
> > > > +        except (OSError, FetchError):
> > > > +            raise
> > > > +        finally:
> > > > +            bb.utils.unlockfile(lockfile)
> > > > +            runfetchcmd(f'schroot -q -f -e -c {session_id}', d)
> > > > +            bb.build.exec_func('schroot_delete_configs', d)
> > > > +
> > > > +    def unpack(self, ud, rootdir, d):
> > > > +        bb.build.exec_func('schroot_create_configs', d)
> > > > +
> > > > +        sbuild_chroot = d.getVar('SBUILD_CHROOT')
> > > > +        session_id = runfetchcmd(f'schroot -q -b -c {sbuild_chroot}', d).strip()
> > > > +        logger.info(f'Started session: {session_id}')
> > > > +
> > > > +        pp = d.getVar('PP')
> > > > +        pps = d.getVar('PPS')
> > > > +        try:
> > > > +            runfetchcmd(f'''
> > > > +                set -e
> > > > +                schroot -r -c {session_id} -d / -u root -- \
> > > > +                    rm /etc/apt/sources.list.d/isar-apt.list /etc/apt/preferences.d/isar-apt
> > > > +                schroot -r -c {session_id} -d / -- \
> > > > +                    sh -c '
> > > > +                        set -e
> > > > +                        dscfile=$(apt-get -y -qq --print-uris --only-source source {ud.src_package} | \
> > > > +                            cut -d " " -f2 | grep -E "\.dsc")
> > > > +                        cp /downloads/{ud.localfile}/* {pp}
> > > > +                        cd {pp}
> > > > +                        mv -f {pps} {pps}.prev
> > > > +                        dpkg-source -x "$dscfile" {pps}
> > >
> > > Hello.
> > >
> > > This still fails in CI, but this time I had the time to track down
> > > the root cause.
> > >
> > > The problem is that buster (bullseye) and bookworm (trixie) provide
> > > different versions of the "hello" package.
> > >
> > > If we first build e.g. `mc:qemuamd64-bookworm:hello`, hello_2.10-3.dsc
> > > is downloaded and the whole "downloads/deb-src/debian/hello/" directory
> > > is considered finished via the "downloads/deb-src/debian/hello.done"
> > > flag.
> > >
> > > So, when e.g. an `mc:qemuamd64-bullseye:hello` build follows, it doesn't
> > > download hello_2.10-2.dsc and results in a dpkg-source error.
> > >
> > > It doesn't matter whether we build both targets in parallel or
> > > sequentially, the later one always fails.
> > >
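
For illustration, this is how I read the collision with the current
localfile computation (a sketch, using the builds from above):

    # Both multiconfigs use BASE_DISTRO = 'debian', and for debian/ubuntu
    # the codename suffix is dropped, so they compute the same location:
    ud.localfile = 'deb-src/debian/hello'
    # -> downloads/deb-src/debian/hello/      (contains hello_2.10-3.dsc)
    # -> downloads/deb-src/debian/hello.done  (stamp left by the bookworm build)
    # The bullseye build then sees the .done stamp, skips download(), and
    # dpkg-source fails because hello_2.10-2.dsc was never fetched.
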
> >
> > Thanks for the analysis. I'll check if I can reproduce and understand
> > the root cause.
> >
> > Jan
> >
>
> The easy way to reproduce:
>
> ./kas/kas-container menu # select e.g. qemuamd64-bookworm, save & exit
> ./kas/kas-container shell -c 'bitbake hello'
> ./kas/kas-container menu # select e.g. qemuamd64-bullseye, save & exit
> ./kas/kas-container shell -c 'bitbake hello'
>
> --
> Best regards,
> Uladzimir.
>
--
Best regards,
Uladzimir.