Date: Fri, 15 Mar 2024 02:06:18 -0700 (PDT)
From: Bjoern Kaufmann
To: isar-users
Subject: Re: No network available during task do_install on debian bullseye/5.10 host - but on a debian buster/4.19 host network is available

I did what you proposed, but there is still no eth0. What I also tested, and what might be interesting:

def print_ifs():
    import subprocess
    import socket

    output = subprocess.check_output("ip a", shell=True)
    print(f'Output of ip a: "{str(output)}"')

    print(socket.if_nameindex())
    return ''

do_testtask() {
    ${@ print_ifs()}
    ip a
}
addtask testtask

I executed it inside kas shell by 'bitbake -c testtask my-recipe' again, and the log looks as follows:

DEBUG: Executing shell function do_testtask
Output of ip a: "b'1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000\n    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n    inet 127.0.0.1/8 scope host lo\n       valid_lft forever preferred_lft forever\n4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default \n    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0\n    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0\n       valid_lft forever preferred_lft forever\n'"
[(1, 'lo'), (4, 'eth0')]
Output of ip a: "b'1: lo: <LOOPBACK> mtu 65536 qdisc noop state
DOWN group default qlen 1000\n    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n'"
[(1, 'lo')]
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
DEBUG: Shell function do_testtask finished

So, as you can see:

1. The python function output is printed twice in a row, most probably in two different contexts? I guess you know more about it.
2. During the first execution of the python function, an eth0 interface is available.
3. During the second execution of the python function, no eth0 interface is available.

Also, Jan Kiszka told me that, to his knowledge, newer bitbake isolates tasks from the network by default. If this is the case, it still doesn't really explain the behavior shown in the log above, and it doesn't explain why this doesn't happen on the buster host VMs.

Best regards,
Bjoern

On Thursday, March 14, 2024 at 5:50:43 PM UTC+1 Bjoern Kaufmann wrote:

> Just adding this here, as I accidentally clicked the wrong reply button
> for my last answer and thus sent you a private message:
>
> 13/03/2024 15:36, Bjoern Kaufmann wrote:
> > From bitbake shell after executing kas shell kas.yml inside the docker
> > container (on bullseye host VM):
> >
> > _____
> >
> > $ ip a
> >
> > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >     inet 127.0.0.1/8 scope host lo
> >        valid_lft forever preferred_lft forever
> > 86: eth0@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
> >     link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> >     inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
> >        valid_lft forever preferred_lft forever
> >
> > _____
> >
> > $ bitbake my-recipe
> > --- executes recipe tasks ---
> > $ cat tmp/work/debian-bullseye-amd64/my-recipe/2023.1-r0/temp/log.do_install
> >
> > DEBUG: Executing shell function do_install
> > 1: lo: <LOOPBACK> mtu 65536 qdisc noop
state DOWN group default qlen 1000
> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > DEBUG: Shell function do_install finished
> >
> > _____
> >
> > $ tmp/work/s2l2-linux-2023-2-amd64/down-package/2023.1-r0/temp/run.do_install
> >
> > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >     inet 127.0.0.1/8 scope host lo
> >        valid_lft forever preferred_lft forever
> > 86: eth0@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
> >     link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> >     inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
> >        valid_lft forever preferred_lft forever
> >
> > _____
> >
> > As you can see, if do_install is executed by bitbake, it behaves
> > differently, at least on that bullseye host system. So bitbake indeed
> > seems to do something before executing do_install, even if it's not
> > switching into a chroot.
> >
> > Best regards,
> > Bjoern
>
> On Wed, Mar 13, 2024 at 2:55 PM Anton Mikanovich wrote:
>
> Here is one more code you can try:
>
> do_testtask() {
>     ip a
> }
> addtask testtask
>
> So it should be executed by 'bitbake -c testtask my-recipe' inside kas
> shell.
>
> This task is free of any recipe/bbclass dependencies and will be the only
> task executed by bitbake. If the output is correct with it, the issue is
> probably caused by some other task/recipe executed before
> my-recipe:do_install. If there is still no eth0, you can look through the
> python(){} or python __anonymous(){} sections of your layer, because this
> code is executed at the parsing stage.
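
For anyone reproducing this check outside of bitbake, the interface probe from print_ifs() can be reduced to a small standalone script. This is a minimal sketch; the helper names here are illustrative, not from the recipe:

```python
import socket

def visible_interfaces():
    # socket.if_nameindex() returns (index, name) pairs for the network
    # namespace the current process runs in -- the same call print_ifs()
    # makes inside the task.
    return [name for _, name in socket.if_nameindex()]

def has_interface(name):
    # In an isolated (freshly unshared) network namespace only 'lo'
    # remains visible, which matches the lo-only output in the failing
    # do_install/do_testtask logs.
    return name in visible_interfaces()

if __name__ == "__main__":
    print(visible_interfaces())
    print("eth0 visible:", has_interface("eth0"))
```

Running it once in the kas shell and once from inside a task should show the same split as the log above: eth0 present in the former, only lo in the latter.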
> On Wednesday, March 13, 2024 at 11:48:08 AM UTC+1 Anton Mikanovich wrote:
>
>> 07/03/2024 17:33, 'Kaufmann, Bjoern' via isar-users wrote:
>> > Hello,
>> >
>> > I have the following recipe my-recipe_2023.1.bb:
>> >
>> > inherit dpkg-raw
>> >
>> > do_install() {
>> >     ip a
>> > }
>> >
>> > I run a build based on isar "d26660b724b034b602f3889f55a23cd9be2e87bd"
>> > (> v0.9, < v0.10-rc1) inside of a docker container
>> > "ghcr.io/siemens/kas/kas-isar:3.3".
>> >
>> > _____
>> >
>> > In the first scenario the build runs inside the docker container on a
>> > debian buster VM with 4.19.0-26 kernel. In this scenario the logfile
>> > "build/tmp/work/debian-bullseye-amd64/my-recipe/2023.1-r0/temp/log.do_install"
>> > looks as follows:
>> >
>> > DEBUG: Executing shell function do_install
>> > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> >     inet 127.0.0.1/8 scope host lo
>> >        valid_lft forever preferred_lft forever
>> > 944: eth0@if945: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
>> >     link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
>> >     inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
>> >        valid_lft forever preferred_lft forever
>> > DEBUG: Shell function do_install finished
>> >
>> > _____
>> >
>> > In the second scenario the exact same build runs inside the docker
>> > container on a debian bullseye VM with 5.10.0-28 kernel.
In this scenario the logfile
>> > "build/tmp/work/debian-bullseye-amd64/my-recipe/2023.1-r0/temp/log.do_install"
>> > looks as follows:
>> >
>> > DEBUG: Executing shell function do_install
>> > 1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
>> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> > DEBUG: Shell function do_install finished
>> >
>> > _____
>> >
>> > I would like to understand why the same build behaves differently on
>> > the two different host systems. Thanks in advance.
>> >
>> > Best regards,
>> > Bjoern
>>
>> Hello Bjoern,
>>
>> The task do_install is performed in the original environment without any
>> additional chroots. That means there is no need to run Isar to get the
>> same 'ip a' output; you can check network availability inside the
>> container in the same way. To check that, you can perform
>> 'kas-container shell' and then run 'ip a'.
>> We've checked networking on Buster and Bullseye hosts (but without
>> Proxmox VMs) and didn't find any issues. In both cases the output looks
>> like:
>>
>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>     inet 127.0.0.1/8 scope host lo
>>        valid_lft forever preferred_lft forever
>> 9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
>>     link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
>>     inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
>>        valid_lft forever preferred_lft forever
>>
>> With the only difference being the random eth0 index.
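
If Jan Kiszka's pointer is right and a newer bitbake is putting tasks into their own network namespace (bitbake gained this behavior around version 2.0, where tasks only get network access when explicitly flagged), the standard opt-in is the task's network varflag. A sketch under that assumption; on older bitbake versions the flag is simply ignored:

```
# Hypothetical addition to my-recipe_2023.1.bb: on bitbake >= 2.0, tasks
# run without network access unless opted back in per task.
do_testtask[network] = "1"
do_install[network] = "1"
```

If eth0 reappears in the task with this flag set, that would confirm the namespace isolation as the cause and also explain the host difference, since the isolation depends on kernel namespace support available in the container.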