From: Jan Kiszka <jan.kiszka@siemens.com>
To: "Maxim Yu. Osipov" <mosipov@ilbers.de>, Claudius Heine <ch@denx.de>
Cc: Henning Schild <henning.schild@siemens.com>,
	isar-users <isar-users@googlegroups.com>,
	Cedric Hombourger <Cedric_Hombourger@mentor.com>
Subject: Re: [PATCH] isar-bootstrap: Fix and cleanup bind mounting
Date: Tue, 4 Dec 2018 16:45:58 +0100
Message-ID: <01b4aa88-f62b-cff3-c6fa-04dac500062e@siemens.com>
In-Reply-To: <405c22d0-48cd-4ea4-4c1b-c78e6c5570ed@siemens.com>

On 04.12.18 16:42, Jan Kiszka wrote:
> On 04.12.18 15:24, [ext] Jan Kiszka wrote:
>> On 04.12.18 11:49, Maxim Yu. Osipov wrote:
>>> On 12/3/18 3:59 PM, Jan Kiszka wrote:
>>>> On 30.11.18 10:20, Maxim Yu. Osipov wrote:
>>>>> Hi Jan,
>>>>>
>>>>> I've just tried this patch (on 'next' with patch d40a9ac0 reverted) and 
>>>>> ran the "fast" CI:
>>>>>
>>>>> isar$ mount | wc -l
>>>>> 34
>>>>>
>>>>> isar$ ./scripts/ci_build.sh -q -f
>>>>>
>>>>> The CI script hung at the stage where dpkg-base is modified,
>>>>> causing the recipes based on dpkg-base to be rebuilt.
>>>>>
>>>>> mount now reports fewer (!) mount points than before launching the script.
>>>>>
>>>>> mount | wc -l
>>>>> 31
>>>>
>>>> Any news on what's different on your side? Where exactly does your build 
>>>> hang? Was your CI environment in a clean state when running this test? 
>>>> Before the commit, lots of things leaked.
>>>
>>>
>>> On my stretch laptop (i7-6820HQ CPU @ 2.70GHz, 8 cores, with SSD) the 
>>> reported problem is reproducible (I reran 'ci_build.sh -q -f' several times 
>>> in a clean state): it hung, and with fewer mount points afterwards (the 
>>> mount points before and after running are attached).
>>>
>>> The strange thing is that I observe two bitbake processes:
>>>
>>> myo      26373  0.0  0.3 153116 29732 pts/0    Sl+  12:31   0:01 python3 
>>> /home/myo/work/isar/src/trunk/isar/bitbake/bin/bitbake 
>>> multiconfig:qemuarm-stretch:isar-image-base 
>>> multiconfig:qemuarm64-stretch:isar-image-base 
>>> multiconfig:qemuamd64-stretch:isar-image-base
>>>
>>> myo      26379  2.5  0.6 328476 50028 ?        Sl   12:31   0:40 python3 
>>> /home/myo/work/isar/src/trunk/isar/bitbake/bin/bitbake 
>>> multiconfig:qemuarm-stretch:isar-image-base 
>>> multiconfig:qemuarm64-stretch:isar-image-base 
>>> multiconfig:qemuamd64-stretch:isar-image-base
>>>
>>
>> We run multiple bitbake sessions one after the other. Maybe the first one never 
>> terminates (gets stuck), and that is also why the rm after the first session 
>> fails. You need to stop the build there and analyse what is keeping the mount 
>> points busy.
> 
> Wait... If I terminate a build from inside the container (i.e. "natively") and 
> then quickly try to delete the build artifacts, I can trigger that infamous 
> empty /dev bug - on the host. That has always been the problem, and that is one 
> reason why we encapsulate things into containers.
> 
> The reason for this is that bitbake's cooker waits for the last sub-process to 
> finish before it calls the cleanup hook that does all the unmounting. If you 
> delete something before that, you step into the mount point and purge its 
> content. That /may/ be the issue here as well, since we run rm directly after bitbake.
> 
> IOW: Possibly just a known limitation of the current Isar design wrt unmounting 
> in isar_handler() that now surfaces in CI. I would not be surprised if you could 
> resolve that by waiting for the last cooker instance to terminate before 
> deleting tmp.
>
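
Something along these lines in the CI script should close that window (a rough,
untested sketch; the bitbake process pattern and the tmp path are just
placeholders, not what ci_build.sh uses verbatim):

    # Wait for any leftover bitbake/cooker process to exit before touching
    # the build tree, so rm does not step into still-mounted paths.
    while pgrep -f 'bitbake/bin/bitbake' > /dev/null; do
        sleep 1
    done
    sudo rm -rf build/tmp    # placeholder path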

...or we are just missing "-l" as umount parameter in isar_handler(), like Claudius 
added elsewhere. Though that may not conceptually resolve the race window 
between bitbake terminating and the cooker still running the handler.
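
Roughly what I mean, as a sketch only (the real handler iterates over its own
list of mount points, and the variable here is just a placeholder):

    # Lazy unmount: detach the mount point immediately and let the kernel
    # finish the cleanup once nothing keeps it busy anymore.
    sudo umount -l "$MOUNTPOINT" || true    # $MOUNTPOINT is a placeholder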

Jan

-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux
