* No network available during task do_install on debian bullseye/5.10 host - but on a debian buster/4.19 host network is available
@ 2024-03-07 15:33 ` Kaufmann, Bjoern
2024-03-08 9:18 ` Baurzhan Ismagulov
2024-03-13 10:48 ` Anton Mikanovich
0 siblings, 2 replies; 9+ messages in thread
From: Kaufmann, Bjoern @ 2024-03-07 15:33 UTC (permalink / raw)
To: isar-users
Hello,
I have the following recipe my-recipe_2023.1.bb:
inherit dpkg-raw
do_install() {
    ip a
}
I run a build based on isar "d26660b724b034b602f3889f55a23cd9be2e87bd" (> v0.9, < v0.10-rc1) inside of a docker container "ghcr.io/siemens/kas/kas-isar:3.3".
_____
In the first scenario the build runs inside the docker container on a debian buster VM with 4.19.0-26 kernel. In this scenario the logfile "build/tmp/work/debian-bullseye-amd64/my-recipe/2023.1-r0/temp/log.do_install" looks as follows:
DEBUG: Executing shell function do_install
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
944: eth0@if945: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
DEBUG: Shell function do_install finished
_____
In the second scenario the exact same build runs inside the docker container on a debian bullseye VM with 5.10.0-28 kernel. In this scenario the logfile "build/tmp/work/debian-bullseye-amd64/my-recipe/2023.1-r0/temp/log.do_install" looks as follows:
DEBUG: Executing shell function do_install
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
DEBUG: Shell function do_install finished
_____
I would like to understand why the same build behaves differently on the two different host systems. Thanks in advance.
Best regards,
Bjoern
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: No network available during task do_install on debian bullseye/5.10 host - but on a debian buster/4.19 host network is available
2024-03-07 15:33 ` No network available during task do_install on debian bullseye/5.10 host - but on a debian buster/4.19 host network is available Kaufmann, Bjoern
@ 2024-03-08 9:18 ` Baurzhan Ismagulov
2024-03-11 8:24 ` Bjoern Kaufmann
2024-03-13 10:48 ` Anton Mikanovich
1 sibling, 1 reply; 9+ messages in thread
From: Baurzhan Ismagulov @ 2024-03-08 9:18 UTC (permalink / raw)
To: isar-users
Hello Björn,
On 2024-03-07 15:33, 'Kaufmann, Bjoern' via isar-users wrote:
> In the first scenario the build runs inside the docker container on a debian buster VM with 4.19.0-26 kernel. In this scenario the logfile "build/tmp/work/debian-bullseye -amd64/my-recipe/2023.1-r0/temp/log.do_install" looks as follows:
...
> 944: eth0@if945: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
> link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
> valid_lft forever preferred_lft forever
Sounds like a docker (or some combination of VM + kernel + buster + docker)
issue. Are the docker version and global configuration (if any) the same under
buster and bullseye? Is "VM" VMware under Windows?
With kind regards,
Baurzhan
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: No network available during task do_install on debian bullseye/5.10 host - but on a debian buster/4.19 host network is available
2024-03-08 9:18 ` Baurzhan Ismagulov
@ 2024-03-11 8:24 ` Bjoern Kaufmann
0 siblings, 0 replies; 9+ messages in thread
From: Bjoern Kaufmann @ 2024-03-11 8:24 UTC (permalink / raw)
To: isar-users
[-- Attachment #1.1: Type: text/plain, Size: 189 bytes --]
Thanks for your quick answer.
The docker version is the same (25.0.3, build 4debf41) on both machines.
"VM" means a Proxmox Cluster running on multiple debian hosts.
Best regards,
Bjoern
[-- Attachment #1.2: Type: text/html, Size: 260 bytes --]
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: No network available during task do_install on debian bullseye/5.10 host - but on a debian buster/4.19 host network is available
2024-03-07 15:33 ` No network available during task do_install on debian bullseye/5.10 host - but on a debian buster/4.19 host network is available Kaufmann, Bjoern
2024-03-08 9:18 ` Baurzhan Ismagulov
@ 2024-03-13 10:48 ` Anton Mikanovich
2024-03-14 16:50 ` Bjoern Kaufmann
1 sibling, 1 reply; 9+ messages in thread
From: Anton Mikanovich @ 2024-03-13 10:48 UTC (permalink / raw)
To: Kaufmann, Bjoern, isar-users
07/03/2024 17:33, 'Kaufmann, Bjoern' via isar-users wrote:
> Hello,
>
> I have the following recipe my-recipe_2023.1.bb:
>
> inherit dpkg-raw
>
> do_install() {
> ip a
> }
>
> I run a build based on isar "d26660b724b034b602f3889f55a23cd9be2e87bd" (> v0.9, < v0.10-rc1) inside of a docker container "ghcr.io/siemens/kas/kas-isar:3.3".
>
> _____
>
> In the first scenario the build runs inside the docker container on a debian buster VM with 4.19.0-26 kernel. In this scenario the logfile "build/tmp/work/debian-bullseye -amd64/my-recipe/2023.1-r0/temp/log.do_install" looks as follows:
>
> DEBUG: Executing shell function do_install
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> valid_lft forever preferred_lft forever
> 944: eth0@if945: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
> link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
> valid_lft forever preferred_lft forever
> DEBUG: Shell function do_install finished
>
> _____
>
> In the second scenario the exact same build runs inside the docker container on a debian bullseye VM with 5.10.0-28 kernel. In this scenario the logfile "build/tmp/work/debian-bullseye -amd64/my-recipe/2023.1-r0/temp/log.do_install" looks as follows:
>
> DEBUG: Executing shell function do_install
> 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> DEBUG: Shell function do_install finished
>
> _____
>
>
> I would like to understand why the same build behaves differently on the two different host systems. Thanks in advance.
>
> Best regards,
> Bjoern
>
>
>
Hello Bjoern,
The task do_install is performed in the original environment, without any
additional chroots. That means there is no need to run Isar to get the same
'ip a' output: you can check network availability inside the container
directly, in the same way. To do so, run 'kas-container shell' and then 'ip a'.
We've checked networking on Buster and Bullseye hosts (but without
Proxmox VMs) and didn't find any issues. In both cases the output looks like:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
The only difference is the (random) eth0 interface index.
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: No network available during task do_install on debian bullseye/5.10 host - but on a debian buster/4.19 host network is available
2024-03-13 10:48 ` Anton Mikanovich
@ 2024-03-14 16:50 ` Bjoern Kaufmann
2024-03-15 9:06 ` Bjoern Kaufmann
0 siblings, 1 reply; 9+ messages in thread
From: Bjoern Kaufmann @ 2024-03-14 16:50 UTC (permalink / raw)
To: isar-users
[-- Attachment #1.1: Type: text/plain, Size: 6001 bytes --]
Just adding this here, as I accidentally clicked the wrong reply button for
my last answer and thus sent you a private message:
13/03/2024 15:36, Bjoern Kaufmann wrote:
> From bitbake shell after executing kas shell kas.yml inside the docker
> container (on bullseye host VM):
>
> _____
>
> $ ip a
>
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
> group default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> valid_lft forever preferred_lft forever
> 86: eth0@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> noqueue state UP group default
> link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
> valid_lft forever preferred_lft forever
>
> _____
>
> $ bitbake my-recipe
> --- executes recipe tasks ---
> $ cat tmp/work/debian-bullseye-amd64/my-recipe/2023.1-r0/temp/log.do_install
>
> DEBUG: Executing shell function do_install
> 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> DEBUG: Shell function do_install finished
>
> _____
>
> $ tmp/work/s2l2-linux-2023-2-amd64/down-package/2023.1-r0/temp/run.do_install
>
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
> group default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> valid_lft forever preferred_lft forever
> 86: eth0@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> noqueue state UP group default
> link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
> valid_lft forever preferred_lft forever
>
> _____
>
> As you can see, if do_install is executed by bitbake, it behaves
> differently, at least on that bullseye host system. So bitbake indeed
> seems to do something before executing do_install, even if it's not
> switching into a chroot.
>
> Best regards,
> Bjoern
On Wed, Mar 13, 2024 at 2:55 PM Anton Mikanovich <amikan@ilbers.de> wrote:
Here is one more code you can try:
do_testtask() {
    ip a
}
addtask testtask
It should be executed by 'bitbake -c testtask my-recipe' inside the kas
shell.
This task is free of any recipe/bbclass dependencies and will be the only
task executed by bitbake. If the output is correct with it, the issue is
probably caused by some other task/recipe executed before
my-recipe:do_install.
If there is still no eth0, look through the python(){} or
python __anonymous(){} sections of your layer, because that code is
executed at parsing stage.
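To make "parsing stage" concrete: hypothetical fragments like the following (not from the thread, just illustrations of the forms worth grepping for in a layer) run when the recipe is parsed by the bitbake server process, not when a task executes:

```
# Hypothetical examples of code executed at parsing stage.
# Anonymous python: runs every time the recipe is parsed.
python __anonymous() {
    bb.note("parsing %s" % d.getVar('PN'))
}

# Inline expansion: also evaluated at parse/expansion time,
# not inside the task sandbox.
SOMEVAR = "${@ d.getVar('PN').upper()}"
```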
On Wednesday, March 13, 2024 at 11:48:08 AM UTC+1 Anton Mikanovich wrote:
> 07/03/2024 17:33, 'Kaufmann, Bjoern' via isar-users wrote:
> > Hello,
> >
> > I have the following recipe my-recipe_2023.1.bb:
> >
> > inherit dpkg-raw
> >
> > do_install() {
> > ip a
> > }
> >
> > I run a build based on isar "d26660b724b034b602f3889f55a23cd9be2e87bd"
> (> v0.9, < v0.10-rc1) inside of a docker container "
> ghcr.io/siemens/kas/kas-isar:3.3".
> >
> > _____
> >
> > In the first scenario the build runs inside the docker container on a
> debian buster VM with 4.19.0-26 kernel. In this scenario the logfile
> "build/tmp/work/debian-bullseye
> -amd64/my-recipe/2023.1-r0/temp/log.do_install" looks as follows:
> >
> > DEBUG: Executing shell function do_install
> > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
> group default qlen 1000
> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > inet 127.0.0.1/8 scope host lo
> > valid_lft forever preferred_lft forever
> > 944: eth0@if945: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> noqueue state UP group default
> > link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> > inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
> > valid_lft forever preferred_lft forever
> > DEBUG: Shell function do_install finished
> >
> > _____
> >
> > In the second scenario the exact same build runs inside the docker
> container on a debian bullseye VM with 5.10.0-28 kernel. In this scenario
> the logfile "build/tmp/work/debian-bullseye
> -amd64/my-recipe/2023.1-r0/temp/log.do_install" looks as follows:
> >
> > DEBUG: Executing shell function do_install
> > 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > DEBUG: Shell function do_install finished
> >
> > _____
> >
> >
> > I would like to understand why the same build behaves differently on the
> two different host systems. Thanks in advance.
> >
> > Best regards,
> > Bjoern
> >
> >
> >
> Hello Bjoern,
>
> The task do_install is performed in original environment without any
> additional
> chroots. It means there is no need to run Isar to have the same 'ip a'
> output.
> You can just control network availability inside the container in the
> same way.
> To check that you can perform 'kas-container shell' and then run 'ip a'.
> We've checked networking on Buster and Bullseye host (but without
> Proxmox VMs)
> and didn't find any issues. In both cases output looks like:
>
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
> group default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> valid_lft forever preferred_lft forever
> 9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UP group default
> link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
> valid_lft forever preferred_lft forever
>
> With the only difference in random eth0 index.
>
>
[-- Attachment #1.2: Type: text/html, Size: 25267 bytes --]
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: No network available during task do_install on debian bullseye/5.10 host - but on a debian buster/4.19 host network is available
2024-03-14 16:50 ` Bjoern Kaufmann
@ 2024-03-15 9:06 ` Bjoern Kaufmann
2024-03-15 9:17 ` Anton Mikanovich
0 siblings, 1 reply; 9+ messages in thread
From: Bjoern Kaufmann @ 2024-03-15 9:06 UTC (permalink / raw)
To: isar-users
[-- Attachment #1.1: Type: text/plain, Size: 8501 bytes --]
I did what you proposed, but there is still no eth0.
What I also tested and what might be interesting:
def print_ifs():
    import subprocess
    import socket

    output = subprocess.check_output("ip a", shell=True)
    print(f'Output of ip a: "{str(output)}"')

    print(socket.if_nameindex())
    return ''

do_testtask() {
    ${@ print_ifs()}
    ip a
}
addtask testtask
I executed it inside the kas shell with 'bitbake -c testtask my-recipe' again,
and the log looks as follows:
DEBUG: Executing shell function do_testtask
Output of ip a: "b'1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
state UNKNOWN group default qlen 1000\n link/loopback 00:00:00:00:00:00
brd 00:00:00:00:00:00\n inet 127.0.0.1/8 scope host lo\n valid_lft
forever preferred_lft forever\n4: eth0@if5:
<BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group
default \n link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
link-netnsid 0\n inet 172.17.0.2/16 brd 172.17.255.255 scope global
eth0\n valid_lft forever preferred_lft forever\n'"
[(1, 'lo'), (4, 'eth0')]
Output of ip a: "b'1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group
default qlen 1000\n link/loopback 00:00:00:00:00:00 brd
00:00:00:00:00:00\n'"
[(1, 'lo')]
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
DEBUG: Shell function do_testtask finished
So as you can see:
1. The python function output is printed twice in a row, most probably in
two different contexts? I guess you know more about it.
2. During the first execution of the python function, the eth0 interface
is available.
3. During the second execution of the python function, no eth0 interface is
available.
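As an aside, the socket-based check from print_ifs() above can also be run as a standalone script (a minimal standard-library sketch) to see what a given environment's network namespace exposes:

```python
import socket

# socket.if_nameindex() queries the kernel's interface table for the
# *current* network namespace, so its result changes when a process is
# moved into an isolated namespace (as newer bitbake does for tasks).
for index, name in socket.if_nameindex():
    print(index, name)
```

Per the logs above, this lists lo and eth0 outside the task sandbox, but only lo inside an isolated task.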
Also, Jan Kiszka told me that to his knowledge newer bitbake isolates tasks
from the network by default. If this is the case, it still doesn't really
explain the behavior shown in the log above, and it doesn't explain why this
doesn't happen on the buster host VMs.
Best regards,
Bjoern
On Thursday, March 14, 2024 at 5:50:43 PM UTC+1 Bjoern Kaufmann wrote:
> Just adding this here, as I accidentally clicked the wrong reply button
> for my last answer and thus sent you a private message:
>
> 13/03/2024 15:36, Bjoern Kaufmann wrote:
> > From bitbake shell after executing kas shell kas.yml inside the docker
> > container (on bullseye host VM):
> >
> > _____
> >
> > $ ip a
> >
> > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
> > group default qlen 1000
> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > inet 127.0.0.1/8 scope host lo
> > valid_lft forever preferred_lft forever
> > 86: eth0@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> > noqueue state UP group default
> > link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> > inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
> > valid_lft forever preferred_lft forever
> >
> > _____
> >
> > $ bitbake my-recipe
> > --- executes recipe tasks ---
> > $ cat tmp/work/debian-bullseye-amd64/my-recipe/2023.1-r0/temp/log.do_install
> >
> > DEBUG: Executing shell function do_install
> > 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > DEBUG: Shell function do_install finished
> >
> > _____
> >
> > $ tmp/work/s2l2-linux-2023-2-amd64/down-package/2023.1-r0/temp/run.do_install
> >
> > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
> > group default qlen 1000
> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > inet 127.0.0.1/8 scope host lo
> > valid_lft forever preferred_lft forever
> > 86: eth0@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> > noqueue state UP group default
> > link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> > inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
> > valid_lft forever preferred_lft forever
> >
> > _____
> >
> > As you can see, if do_install is executed by bitbake, it behaves
> > differently, at least on that bullseye host system. So bitbake indeed
> > seems to do something before executing do_install, even if it's not
> > switching into a chroot.
> >
> > Best regards,
> > Bjoern
>
> On Wed, Mar 13, 2024 at 2:55 PM Anton Mikanovich <ami...@ilbers.de> wrote:
> Here is one more code you can try:
>
> do_testtask() {
> ip a
> }
> addtask testtask
>
> So it should be executed by 'bitbake -c testtask my-recipe' inside kas
> shell.
>
> This task is free of any recipe/bbclass dependencies and will be the
> only task
> executed by bitbake. If the output will be correct with it the issue is
> probably caused by some other task/recipe execuded before
> my-recipe:do_install.
> If there will be still no eth0 you can look through python(){} or
> python __anonymous(){} sections of your layer because this code is
> executed on
> parsing stage.
>
> On Wednesday, March 13, 2024 at 11:48:08 AM UTC+1 Anton Mikanovich wrote:
>
>> 07/03/2024 17:33, 'Kaufmann, Bjoern' via isar-users wrote:
>> > Hello,
>> >
>> > I have the following recipe my-recipe_2023.1.bb:
>> >
>> > inherit dpkg-raw
>> >
>> > do_install() {
>> > ip a
>> > }
>> >
>> > I run a build based on isar "d26660b724b034b602f3889f55a23cd9be2e87bd"
>> (> v0.9, < v0.10-rc1) inside of a docker container "
>> ghcr.io/siemens/kas/kas-isar:3.3".
>> >
>> > _____
>> >
>> > In the first scenario the build runs inside the docker container on a
>> debian buster VM with 4.19.0-26 kernel. In this scenario the logfile
>> "build/tmp/work/debian-bullseye
>> -amd64/my-recipe/2023.1-r0/temp/log.do_install" looks as follows:
>> >
>> > DEBUG: Executing shell function do_install
>> > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>> group default qlen 1000
>> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> > inet 127.0.0.1/8 scope host lo
>> > valid_lft forever preferred_lft forever
>> > 944: eth0@if945: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>> noqueue state UP group default
>> > link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
>> > inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
>> > valid_lft forever preferred_lft forever
>> > DEBUG: Shell function do_install finished
>> >
>> > _____
>> >
>> > In the second scenario the exact same build runs inside the docker
>> container on a debian bullseye VM with 5.10.0-28 kernel. In this scenario
>> the logfile "build/tmp/work/debian-bullseye
>> -amd64/my-recipe/2023.1-r0/temp/log.do_install" looks as follows:
>> >
>> > DEBUG: Executing shell function do_install
>> > 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen
>> 1000
>> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> > DEBUG: Shell function do_install finished
>> >
>> > _____
>> >
>> >
>> > I would like to understand why the same build behaves differently on
>> the two different host systems. Thanks in advance.
>> >
>> > Best regards,
>> > Bjoern
>> >
>> >
>> >
>> Hello Bjoern,
>>
>> The task do_install is performed in original environment without any
>> additional
>> chroots. It means there is no need to run Isar to have the same 'ip a'
>> output.
>> You can just control network availability inside the container in the
>> same way.
>> To check that you can perform 'kas-container shell' and then run 'ip a'.
>> We've checked networking on Buster and Bullseye host (but without
>> Proxmox VMs)
>> and didn't find any issues. In both cases output looks like:
>>
>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>> group default qlen 1000
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>> valid_lft forever preferred_lft forever
>> 9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>> state UP group default
>> link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
>> inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
>> valid_lft forever preferred_lft forever
>>
>> With the only difference in random eth0 index.
>>
>>
[-- Attachment #1.2: Type: text/html, Size: 27103 bytes --]
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: No network available during task do_install on debian bullseye/5.10 host - but on a debian buster/4.19 host network is available
2024-03-15 9:06 ` Bjoern Kaufmann
@ 2024-03-15 9:17 ` Anton Mikanovich
2024-03-15 9:28 ` Schmidt, Adriaan
0 siblings, 1 reply; 9+ messages in thread
From: Anton Mikanovich @ 2024-03-15 9:17 UTC (permalink / raw)
To: Bjoern Kaufmann, isar-users
15/03/2024 11:06, Bjoern Kaufmann wrote:
> I did what you proposed, but there is still no eth0.
> What I also tested and what might be interesting:
>
> def print_ifs():
> import subprocess
> import socket
>
> output = subprocess.check_output("ip a", shell=True)
> print(f'Output of ip a: "{str(output)}"')
>
> print(socket.if_nameindex())
> return ''
>
> do_testtask() {
> ${@ print_ifs()}
> ip a
> }
> addtask testtask
>
>
> I executed it inside kas shell by 'bitbake -c testtask my-recipe'
> again and the log looks as follows:
>
> DEBUG: Executing shell function do_testtask
> Output of ip a: "b'1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc
> noqueue state UNKNOWN group default qlen 1000\n link/loopback
> 00:00:00:00:00:00 brd 00:00:00:00:00:00\n inet 127.0.0.1/8 scope
> host lo\n valid_lft forever preferred_lft forever\n4: eth0@if5:
> <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
> group default \n link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
> link-netnsid 0\n inet 172.17.0.2/16 brd 172.17.255.255 scope global
> eth0\n valid_lft forever preferred_lft forever\n'"
> [(1, 'lo'), (4, 'eth0')]
> Output of ip a: "b'1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN
> group default qlen 1000\n link/loopback 00:00:00:00:00:00 brd
> 00:00:00:00:00:00\n'"
> [(1, 'lo')]
> 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> DEBUG: Shell function do_testtask finished
>
>
> So as you can see
> 1. The python function is printed twice in a row, most probably in two
> different contexts? I guess you know more about it
> 2. During the first execution of the python function, eth0 interfaces
> are available
> 3. During the second execution of the python function, no eth0
> interface is available
>
>
> Also Jan Kiszka told me that to his knowledge the newer bitbake
> isolates tasks from networks by default. If this is the case it still
> doesn't really explain the behavior show in the log above and it
> doesn't explain why this doesn't happen on the buster host VMs.
>
> Best regards,
> Bjoern
Hello Bjoern,
The first print_ifs execution was done during recipe parsing; the second one
was done during task execution.
It happens because you've used an inline python call.
For bitbake 2.0+ you can enable network access for your task by setting:
do_testtask[network] = "1"
On my side 'ip a' was showing eth0 even without it, but there may be some
other permissions configuration.
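Put together, the earlier test task with network access enabled would look like this (a sketch for bitbake 2.0+):

```
# Sketch: grant this task network access via the [network] varflag.
do_testtask[network] = "1"
do_testtask() {
    ip a
}
addtask testtask
```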
^ permalink raw reply [flat|nested] 9+ messages in thread
* RE: No network available during task do_install on debian bullseye/5.10 host - but on a debian buster/4.19 host network is available
2024-03-15 9:17 ` Anton Mikanovich
@ 2024-03-15 9:28 ` Schmidt, Adriaan
2024-03-18 13:58 ` Bjoern Kaufmann
0 siblings, 1 reply; 9+ messages in thread
From: Schmidt, Adriaan @ 2024-03-15 9:28 UTC (permalink / raw)
To: Anton Mikanovich, Bjoern Kaufmann, isar-users
Anton Mikanovich, Sent: Friday, March 15, 2024 10:17 AM:
> 15/03/2024 11:06, Bjoern Kaufmann wrote:
> > I did what you proposed, but there is still no eth0.
> > What I also tested and what might be interesting:
> >
> > def print_ifs():
> > import subprocess
> > import socket
> >
> > output = subprocess.check_output("ip a", shell=True)
> > print(f'Output of ip a: "{str(output)}"')
> >
> > print(socket.if_nameindex())
> > return ''
> >
> > do_testtask() {
> > ${@ print_ifs()}
> > ip a
> > }
> > addtask testtask
> >
> >
> > I executed it inside kas shell by 'bitbake -c testtask my-recipe'
> > again and the log looks as follows:
> >
> > DEBUG: Executing shell function do_testtask
> > Output of ip a: "b'1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc
> > noqueue state UNKNOWN group default qlen 1000\n link/loopback
> > 00:00:00:00:00:00 brd 00:00:00:00:00:00\n inet 127.0.0.1/8 scope
> > host lo\n valid_lft forever preferred_lft forever\n4: eth0@if5:
> > <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
> > group default \n link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
> > link-netnsid 0\n inet 172.17.0.2/16 brd 172.17.255.255 scope global
> > eth0\n valid_lft forever preferred_lft forever\n'"
> > [(1, 'lo'), (4, 'eth0')]
> > Output of ip a: "b'1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN
> > group default qlen 1000\n link/loopback 00:00:00:00:00:00 brd
> > 00:00:00:00:00:00\n'"
> > [(1, 'lo')]
> > 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > DEBUG: Shell function do_testtask finished
> >
> >
> > So as you can see
> > 1. The python function is printed twice in a row, most probably in two
> > different contexts? I guess you know more about it
> > 2. During the first execution of the python function, eth0 interfaces
> > are available
> > 3. During the second execution of the python function, no eth0
> > interface is available
> >
> >
> > Also Jan Kiszka told me that to his knowledge the newer bitbake
> > isolates tasks from networks by default. If this is the case it still
> > doesn't really explain the behavior show in the log above and it
> > doesn't explain why this doesn't happen on the buster host VMs.
> >
> > Best regards,
> > Bjoern
>
> Hello Bjoern,
>
> The first print_ifs execution was done during recipe parsing, the second one
> was done during task execution.
> It happens because you've used inline python call.
>
> For bitbake 2.0+ you can enable network access for your task by setting:
> do_testtask[network] = "1"
Just to expand on this: In general, there is no networking in Bitbake tasks.
From the Bitbake manual (https://docs.yoctoproject.org/bitbake/2.6/bitbake-user-manual/bitbake-user-manual-metadata.html#variable-flags):
===
Variable Flags
[...]
[network]: When set to “1”, allows a task to access the network. By default, only the do_fetch task is granted network access. Recipes shouldn’t access the network outside of do_fetch as it usually undermines fetcher source mirroring, image and licence manifests, software auditing and supply chain security.
===
Yocto changelog (https://docs.yoctoproject.org/singleindex.html, grep for "[network]"):
===
Network access from tasks is now disabled by default on kernels which support this feature (on most recent distros such as CentOS 8 and Debian 11 onwards). This means that tasks accessing the network need to be marked as such with the network flag. For example:
do_mytask[network] = "1"
This is allowed by default from do_fetch but not from any of our other standard tasks. Recipes shouldn’t be accessing the network outside of do_fetch as it usually undermines fetcher source mirroring, image and licence manifests, software auditing and supply chain security.
===
Note that the changelog mentions "Debian 11 onwards", which is why you may be seeing a different behavior on buster.
In addition, for Isar: the way the Bitbake feature is implemented has a side
effect that also disables sudo, so in Isar the "network" flag is also enabled
for tasks that need sudo.
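To illustrate that last point with a hypothetical fragment (task name invented, not from Isar itself): a task that calls sudo must also carry the network flag, because the user-namespace remapping that disables networking also defeats setuid binaries like sudo:

```
# Hypothetical: without [network] = "1", bitbake moves the task into a
# new user namespace, where sudo's setuid bit no longer grants root.
do_deploy_stuff[network] = "1"
do_deploy_stuff() {
    sudo true
}
addtask deploy_stuff
```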
Adriaan
> On my side even without it 'ip a' was showing eth0, but there maybe some
> other
> permissions configuration.
>
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: No network available during task do_install on debian bullseye/5.10 host - but on a debian buster/4.19 host network is available
2024-03-15 9:28 ` Schmidt, Adriaan
@ 2024-03-18 13:58 ` Bjoern Kaufmann
0 siblings, 0 replies; 9+ messages in thread
From: Bjoern Kaufmann @ 2024-03-18 13:58 UTC (permalink / raw)
To: isar-users
[-- Attachment #1.1: Type: text/plain, Size: 6009 bytes --]
Thanks for your clarification, that explains it.
Meanwhile I also found
https://github.com/ilbers/isar/blob/master/bitbake/lib/bb/utils.py#L1630
which is most probably the function responsible for disabling the network
for tasks. But I was still wondering, because the isar commit
(d26660b724b034b602f3889f55a23cd9be2e87bd) I thought I was referencing in my
build doesn't contain that function yet, and the whole [network]
functionality is missing. It turns out that I made a mistake when backtracking
the commits of the dependent layers of my build, and I am actually using a
different isar commit (93cc388638336997a7c00b6ef8a58ee349407a54), which
already contains that functionality.
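For context, that function works roughly as sketched below (simplified from memory; the authoritative code is in bitbake's lib/bb/utils.py at the link above). It moves the process into fresh user and network namespaces, which is why only a DOWN loopback device remains visible inside tasks, and why setuid sudo stops working there:

```python
import ctypes
import os

# Clone flags from sched.h.
CLONE_NEWUSER = 0x10000000
CLONE_NEWNET = 0x40000000

def disable_network(uid=None, gid=None):
    """Sketch of bitbake's network isolation: unshare into new user +
    network namespaces (leaving an isolated, DOWN loopback device), then
    map the original uid/gid back so file ownership keeps working."""
    uid = os.getuid() if uid is None else uid
    gid = os.getgid() if gid is None else gid
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    if libc.unshare(CLONE_NEWNET | CLONE_NEWUSER) != 0:
        # Kernel or container policy doesn't allow it: networking stays on
        # (this is what happens on older kernels such as buster's 4.19).
        return False
    with open("/proc/self/uid_map", "w") as f:
        f.write("%s %s 1" % (uid, uid))
    with open("/proc/self/setgroups", "w") as f:
        f.write("deny")
    with open("/proc/self/gid_map", "w") as f:
        f.write("%s %s 1" % (gid, gid))
    return True
```

After a successful call, 'ip a' in the same process shows only "lo: <LOOPBACK> ... state DOWN", matching the log.do_install output earlier in the thread.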
I tried it out again with do_testtask[network] = "1" and now the network
interfaces are indeed available.
Thank you all for your help.
Best regards,
Bjoern
On Friday, March 15, 2024 at 10:28:34 AM UTC+1 Schmidt, Adriaan wrote:
> Anton Mikanovich, Sent: Friday, March 15, 2024 10:17 AM:
> > 15/03/2024 11:06, Bjoern Kaufmann wrote:
> > > I did what you proposed, but there is still no eth0.
> > > What I also tested and what might be interesting:
> > >
> > > def print_ifs():
> > >     import subprocess
> > >     import socket
> > >
> > >     output = subprocess.check_output("ip a", shell=True)
> > >     print(f'Output of ip a: "{str(output)}"')
> > >
> > >     print(socket.if_nameindex())
> > >     return ''
> > >
> > > do_testtask() {
> > >     ${@ print_ifs()}
> > >     ip a
> > > }
> > > addtask testtask
> > >
> > >
> > > I executed it inside kas shell by 'bitbake -c testtask my-recipe'
> > > again and the log looks as follows:
> > >
> > > DEBUG: Executing shell function do_testtask
> > > Output of ip a: "b'1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc
> > > noqueue state UNKNOWN group default qlen 1000\n link/loopback
> > > 00:00:00:00:00:00 brd 00:00:00:00:00:00\n inet 127.0.0.1/8 scope
> > > host lo\n valid_lft forever preferred_lft forever\n4: eth0@if5:
> > > <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
> > > group default \n link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
> > > link-netnsid 0\n inet 172.17.0.2/16 brd 172.17.255.255 scope global
> > > eth0\n valid_lft forever preferred_lft forever\n'"
> > > [(1, 'lo'), (4, 'eth0')]
> > > Output of ip a: "b'1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN
> > > group default qlen 1000\n link/loopback 00:00:00:00:00:00 brd
> > > 00:00:00:00:00:00\n'"
> > > [(1, 'lo')]
> > > 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen
> 1000
> > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > DEBUG: Shell function do_testtask finished
> > >
> > >
> > > So as you can see:
> > > 1. The output of the Python function is printed twice in a row, most
> > > probably in two different contexts? I guess you know more about it.
> > > 2. During the first execution of the Python function, the eth0
> > > interface is available.
> > > 3. During the second execution of the Python function, no eth0
> > > interface is available.
> > >
> > >
> > > Also, Jan Kiszka told me that to his knowledge newer bitbake versions
> > > isolate tasks from the network by default. If this is the case, it
> > > still doesn't really explain the behavior shown in the log above, and
> > > it doesn't explain why this doesn't happen on the buster host VMs.
> > >
> > > Best regards,
> > > Bjoern
> >
> > Hello Bjoern,
> >
> > The first print_ifs execution was done during recipe parsing, the second
> > one was done during task execution.
> > It happens because you've used an inline Python call.
> >
> > For bitbake 2.0+ you can enable network access for your task by setting:
> > do_testtask[network] = "1"
>
> Just to expand on this: In general, there is no networking in Bitbake
> tasks.
>
> From the Bitbake manual (
> https://docs.yoctoproject.org/bitbake/2.6/bitbake-user-manual/bitbake-user-manual-metadata.html#variable-flags
> ):
> ===
> Variable Flags
> [...]
> [network]: When set to “1”, allows a task to access the network. By
> default, only the do_fetch task is granted network access. Recipes
> shouldn’t access the network outside of do_fetch as it usually undermines
> fetcher source mirroring, image and licence manifests, software auditing
> and supply chain security.
> ===
>
> Yocto changelog (https://docs.yoctoproject.org/singleindex.html, grep for
> "[network]"):
> ===
> Network access from tasks is now disabled by default on kernels which
> support this feature (on most recent distros such as CentOS 8 and Debian 11
> onwards). This means that tasks accessing the network need to be marked as
> such with the network flag. For example:
>
> do_mytask[network] = "1"
> This is allowed by default from do_fetch but not from any of our other
> standard tasks. Recipes shouldn’t be accessing the network outside of
> do_fetch as it usually undermines fetcher source mirroring, image and
> licence manifests, software auditing and supply chain security.
> ===
>
> Note that the changelog mentions "Debian 11 onwards", which is why you may
> be seeing a different behavior on buster.
>
> In addition for Isar:
> The way the Bitbake feature is implemented has a side-effect that also
> disables sudo. So in Isar, "network" is also enabled for tasks that need
> sudo.
>
> Adriaan
>
>
> > On my side even without it 'ip a' was showing eth0, but there may be
> > some other permissions configuration.
> >
>
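The isolation Adriaan describes is implemented by moving the task into a
fresh network namespace, where only a DOWN loopback device exists, which
matches the `ip a` output above. A rough stand-alone sketch of that kernel
mechanism (not BitBake's actual code; Linux-only, and it falls back
gracefully where unprivileged user namespaces are disabled):

```python
import ctypes
import os
import socket

CLONE_NEWUSER = 0x10000000
CLONE_NEWNET = 0x40000000

def interfaces_in_new_netns():
    """Fork a child that unshares into a fresh user+network namespace
    and report the interface names it sees there, or None if the
    kernel forbids unprivileged namespace creation."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:  # child
        os.close(r)
        try:
            libc = ctypes.CDLL(None, use_errno=True)
            if libc.unshare(CLONE_NEWUSER | CLONE_NEWNET) != 0:
                raise OSError("unshare failed")
            # A fresh netns contains only the loopback device (DOWN).
            names = ",".join(n for _, n in socket.if_nameindex())
        except Exception:
            names = "unsupported"
        os.write(w, names.encode())
        os._exit(0)
    os.close(w)
    data = os.read(r, 4096).decode()
    os.waitpid(pid, 0)
    return None if data == "unsupported" else data.split(",")

result = interfaces_in_new_netns()
```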
end of thread, other threads:[~2024-03-18 13:58 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <Adpwo/NmYIMx9YhTRFidWKEPdq+1RQEj/j2AAD71GIAAIhJlAAAAYZuAAAAWgEA=>
2024-03-07 15:33 ` No network available during task do_install on debian bullseye/5.10 host - but on a debian buster/4.19 host network is available Kaufmann, Bjoern
2024-03-08 9:18 ` Baurzhan Ismagulov
2024-03-11 8:24 ` Bjoern Kaufmann
2024-03-13 10:48 ` Anton Mikanovich
2024-03-14 16:50 ` Bjoern Kaufmann
2024-03-15 9:06 ` Bjoern Kaufmann
2024-03-15 9:17 ` Anton Mikanovich
2024-03-15 9:28 ` Schmidt, Adriaan
2024-03-18 13:58 ` Bjoern Kaufmann