public inbox for isar-users@googlegroups.com
From: "cedric.hombourger@siemens.com" <cedric.hombourger@siemens.com>
To: "isar-users@googlegroups.com" <isar-users@googlegroups.com>,
	"Kiszka, Jan" <jan.kiszka@siemens.com>
Subject: Re: [HELP] tmp/work/debian-bullseye-arm64/isar-image-base-qemuarm64/1.0-r0/mnt/rootfs: target is busy
Date: Mon, 26 Feb 2024 11:13:29 +0000	[thread overview]
Message-ID: <38414ff5a9883c3980356475facc6bff68c02633.camel@siemens.com> (raw)
In-Reply-To: <c8483653-89da-42fc-9eb3-f0d9b6045044@siemens.com>

On Mon, 2024-02-26 at 11:32 +0100, Jan Kiszka wrote:
> On 26.02.24 11:26, 'cedric.hombourger@siemens.com' via isar-users
> wrote:
> > 
> > Hello,
> > 
> > Seeing some sporadic failures with the test suite (but also with
> > builds of our Isar-based product) when rootfs_install_sstate_prepare
> > gets executed:
> > 
> > DEBUG: Executing shell function rootfs_install_sstate_prepare
> > umount: /home/sutlej/isar/build/tmp/work/debian-bullseye-arm64/isar-image-base-qemuarm64/1.0-r0/mnt/rootfs: target is busy.
> > WARNING: exit code 32 from a shell command.
> > 
> > What's special about the machine I am running the builds on is that
> > /proc/cpuinfo reports 64 processors, hence builds get massively
> > parallelized.
> > 
> > I wasn't able to get to the bottom of this issue and understand why
> > the bind mount is busy, since rootfs_install_sstate_prepare creates,
> > uses, and removes that bind mount in the same function.
> > 
> > As a work-around, a lazy umount could be used, but that annoys me as
> > I'd like to understand what could cause this. Any ideas?
> > 
> 
> This is indeed weird: ${WORKDIR}/mnt/rootfs is only mounted by
> rootfs_install_sstate_prepare in vanilla isar, and that happens under
> isar.lock - close to impossible to have a race here. But did you check
> in your recipes on top (assuming the issue is not reproducible for you
> with vanilla isar alone) that there is no other usage of
> ${WORKDIR}/mnt/rootfs, possibly unlocked then?

That's the thing. I ran into this issue twice this morning while using
Isar *without* our product layers: first with a small patch to Isar
prepared for later submission, and then with a ci_build -T fast started
from a clean tree, which hit the exact same issue.

I wonder if it could be caused by the processes our IT runs to scan the
file-system whenever files are added or modified (I am seeing a number
of admin processes running proprietary "security" tools). That's the
only theory I could come up with. I may need to write a script that
mimics what we do in rootfs_install_sstate_prepare while checking with
lsof in parallel for concurrent accesses to files from the bind mount.

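Something along these lines is what I have in mind (an untested sketch;
the paths, iteration count, and use of sudo are placeholders, and the
tar step only approximates what the real sstate packaging does):

  #!/bin/sh
  # Untested sketch: repeat the bind-mount/umount cycle of
  # rootfs_install_sstate_prepare and, when umount reports "target is
  # busy", list which processes still hold files under the mount point.
  # ROOTFS and MNT are placeholders for ${WORKDIR}/rootfs and
  # ${WORKDIR}/mnt/rootfs.
  set -eu

  ROOTFS=/path/to/rootfs
  MNT=/path/to/mnt/rootfs
  mkdir -p "$MNT"

  i=0
  while [ "$i" -lt 100 ]; do
      i=$((i + 1))
      sudo mount --bind "$ROOTFS" "$MNT"

      # Roughly what the sstate packaging step does: walk the whole tree.
      sudo tar -C "$MNT" -cf /dev/null .

      if ! sudo umount "$MNT"; then
          echo "iteration $i: umount failed, dumping users of $MNT" >&2
          # Show processes with open files below the mount point.
          sudo lsof +D "$MNT" || true
          # Fall back to the lazy umount mentioned as a work-around above.
          sudo umount -l "$MNT"
      fi
  done
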
> 
> Jan
> 




Thread overview: 5+ messages
2024-02-26 10:26 cedric.hombourger
2024-02-26 10:32 ` Jan Kiszka
2024-02-26 11:13   ` cedric.hombourger [this message]
2024-02-26 11:27 ` Anton Mikanovich
2024-02-26 11:40   ` cedric.hombourger

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the mbox file for this message, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=38414ff5a9883c3980356475facc6bff68c02633.camel@siemens.com \
    --to=cedric.hombourger@siemens.com \
    --cc=isar-users@googlegroups.com \
    --cc=jan.kiszka@siemens.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header via
  mailto: links, you can reply that way as well. Be sure your reply has
  a Subject: header at the top and a blank line before the message body.