I did what you proposed, but there is still no eth0. Here is something else I tested that might be interesting:

def print_ifs():
    import subprocess
    import socket
    output = subprocess.check_output("ip a", shell=True)
    print(f'Output of ip a: "{str(output)}"')
    print(socket.if_nameindex())
    return ''

do_testtask() {
    ${@ print_ifs()}
    ip a
}
addtask testtask

I executed it inside the kas shell via 'bitbake -c testtask my-recipe' again, and the log looks as follows:

DEBUG: Executing shell function do_testtask
Output of ip a: "b'1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000\n    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n    inet 127.0.0.1/8 scope host lo\n       valid_lft forever preferred_lft forever\n4: eth0@if5: mtu 1500 qdisc noqueue state UP group default \n    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0\n    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0\n       valid_lft forever preferred_lft forever\n'"
[(1, 'lo'), (4, 'eth0')]
Output of ip a: "b'1: lo: mtu 65536 qdisc noop state DOWN group default qlen 1000\n    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\n'"
[(1, 'lo')]
1: lo: mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
DEBUG: Shell function do_testtask finished

So, as you can see:

1. The Python function's output is printed twice in a row, most probably from two different contexts? I guess you know more about it.
2. During the first execution of the Python function, the eth0 interface is available.
3. During the second execution of the Python function, no eth0 interface is available.

Also, Jan Kiszka told me that, to his knowledge, newer bitbake isolates tasks from the network by default. If that is the case, it still doesn't really explain the behavior shown in the log above, and it doesn't explain why this doesn't happen on the buster host VMs.
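A way to probe that isolation theory from inside the same kas shell, sketched below. It assumes that newer bitbake implements the isolation via unprivileged user plus network namespaces, and that Debian gates those behind the kernel.unprivileged_userns_clone sysctl; neither assumption is confirmed in this thread.

# If unprivileged network namespaces work here, this prints only a DOWN
# loopback device -- the same output as the second print_ifs() call above.
# If they are not permitted, unshare fails with "Operation not permitted".
unshare --user --map-root-user --net ip a

# Debian-specific sysctl gating unprivileged user namespaces. If it reads 0,
# namespace-based isolation attempts would fail on that host, which could in
# principle account for a buster/bullseye difference.
sysctl kernel.unprivileged_userns_clone

If the unshare output on the bullseye VM matches the isolated task log while the same command fails on the buster VM, that would tie the differing behavior to the host kernels rather than to the recipes.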
Best regards,
Bjoern

On Thursday, March 14, 2024 at 5:50:43 PM UTC+1 Bjoern Kaufmann wrote:

> Just adding this here, as I accidentally clicked the wrong reply button
> for my last answer and thus sent you a private message:
>
> 13/03/2024 15:36, Bjoern Kaufmann wrote:
> > From the bitbake shell after executing 'kas shell kas.yml' inside the
> > docker container (on the bullseye host VM):
> >
> > _____
> >
> > $ ip a
> >
> > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >     inet 127.0.0.1/8 scope host lo
> >        valid_lft forever preferred_lft forever
> > 86: eth0@if87: mtu 1500 qdisc noqueue state UP group default
> >     link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> >     inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
> >        valid_lft forever preferred_lft forever
> >
> > _____
> >
> > $ bitbake my-recipe
> > --- executes recipe tasks ---
> > $ cat tmp/work/debian-bullseye-amd64/my-recipe/2023.1-r0/temp/log.do_install
> >
> > DEBUG: Executing shell function do_install
> > 1: lo: mtu 65536 qdisc noop state DOWN group default qlen 1000
> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > DEBUG: Shell function do_install finished
> >
> > _____
> >
> > $ tmp/work/s2l2-linux-2023-2-amd64/down-package/2023.1-r0/temp/run.do_install
> >
> > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >     inet 127.0.0.1/8 scope host lo
> >        valid_lft forever preferred_lft forever
> > 86: eth0@if87: mtu 1500 qdisc noqueue state UP group default
> >     link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> >     inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
> >        valid_lft forever preferred_lft forever
> >
> > _____
> >
> > As you can see, if do_install is executed by bitbake, it behaves
> > differently, at least on that bullseye host system. So bitbake indeed
> > seems to do something before executing do_install, even if it's not
> > switching into a chroot.
> >
> > Best regards,
> > Bjoern
>
> On Wed, Mar 13, 2024 at 2:55 PM Anton Mikanovich wrote:
>
> Here is one more piece of code you can try:
>
> do_testtask() {
>     ip a
> }
> addtask testtask
>
> It should be executed by 'bitbake -c testtask my-recipe' inside the kas
> shell.
>
> This task is free of any recipe/bbclass dependencies and will be the only
> task executed by bitbake. If its output is correct, the issue is probably
> caused by some other task/recipe executed before my-recipe:do_install.
> If there is still no eth0, look through the python(){} or
> python __anonymous(){} sections of your layer, because that code is
> executed at the parsing stage.
>
> On Wednesday, March 13, 2024 at 11:48:08 AM UTC+1 Anton Mikanovich wrote:

>> 07/03/2024 17:33, 'Kaufmann, Bjoern' via isar-users wrote:
>> > Hello,
>> >
>> > I have the following recipe my-recipe_2023.1.bb:
>> >
>> > inherit dpkg-raw
>> >
>> > do_install() {
>> >     ip a
>> > }
>> >
>> > I run a build based on isar "d26660b724b034b602f3889f55a23cd9be2e87bd"
>> > (> v0.9, < v0.10-rc1) inside of a docker container
>> > "ghcr.io/siemens/kas/kas-isar:3.3".
>> >
>> > _____
>> >
>> > In the first scenario the build runs inside the docker container on a
>> > debian buster VM with the 4.19.0-26 kernel.
>> > In this scenario the logfile
>> > "build/tmp/work/debian-bullseye-amd64/my-recipe/2023.1-r0/temp/log.do_install"
>> > looks as follows:
>> >
>> > DEBUG: Executing shell function do_install
>> > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> >     inet 127.0.0.1/8 scope host lo
>> >        valid_lft forever preferred_lft forever
>> > 944: eth0@if945: mtu 1500 qdisc noqueue state UP group default
>> >     link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
>> >     inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
>> >        valid_lft forever preferred_lft forever
>> > DEBUG: Shell function do_install finished
>> >
>> > _____
>> >
>> > In the second scenario the exact same build runs inside the docker
>> > container on a debian bullseye VM with the 5.10.0-28 kernel. In this
>> > scenario the logfile
>> > "build/tmp/work/debian-bullseye-amd64/my-recipe/2023.1-r0/temp/log.do_install"
>> > looks as follows:
>> >
>> > DEBUG: Executing shell function do_install
>> > 1: lo: mtu 65536 qdisc noop state DOWN group default qlen 1000
>> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> > DEBUG: Shell function do_install finished
>> >
>> > _____
>> >
>> > I would like to understand why the same build behaves differently on
>> > the two different host systems. Thanks in advance.
>> >
>> > Best regards,
>> > Bjoern
>>
>> Hello Bjoern,
>>
>> The task do_install is performed in the original environment, without any
>> additional chroots. This means there is no need to run Isar to get the
>> same 'ip a' output; you can check network availability inside the
>> container directly. To do that, run 'kas-container shell' and then 'ip a'.
>> We've checked networking on Buster and Bullseye hosts (but without
>> Proxmox VMs) and didn't find any issues. In both cases the output looks
>> like:
>>
>> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>     inet 127.0.0.1/8 scope host lo
>>        valid_lft forever preferred_lft forever
>> 9: eth0@if10: mtu 1500 qdisc noqueue state UP group default
>>     link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
>>     inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
>>        valid_lft forever preferred_lft forever
>>
>> The only difference was the random eth0 interface index.
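For readers hitting the same symptom later: if per-task network isolation in newer bitbake is indeed the cause, BitBake 2.0 and later accept a per-task [network] varflag to opt a task back into networking. Whether the bitbake bundled with this Isar revision honors that flag is an assumption to verify; the snippet below is a sketch, not a confirmed fix.

# Sketch, assuming a bitbake new enough (2.0+) to support the [network]
# varflag; older versions simply ignore unknown varflags, so this is safe
# to try but may have no effect there.
do_install() {
    ip a
}
do_install[network] = "1"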