From: Baurzhan Ismagulov <ibr@radix50.net>
To: isar-users@googlegroups.com
Subject: Re: [PATCH v2 2/4] meta-isar-bin: Generate cache repos
Date: Fri, 22 Sep 2017 12:56:39 +0200
Message-ID: <20170922105638.GB4240@yssyq.radix50.net>
In-Reply-To: <20170921085535.GA27874@iiotirae>

Hello Andreas and Claudius,

On Thu, Sep 21, 2017 at 10:55:37AM +0200, Andreas Reichel wrote:
> This looks like a misdesign, because meta-layers should not be populated
> by the build process and furthermore, should contain recipes and not a
> cache itself. It would be like the sstate-cache lying inside meta-oe...
> 
> Why not define a variable like 'DEB_CACHE_DIR' or something alike and
> use this directory. Then you also have more flexibility and can handle
> multiple different caches, i.e. the caches become selectable and
> independent of the layer itself...
> 
> Furthermore, you could set the DEB_CACHE_DIR variable in the local.conf
> snippets for multiconf and then you can have separate caches for
> different target architectures and don't mix everything up in one pool.
> 
> This way you can easily drop a pool by architecture without sorting
> files out.

I share your concerns and don't like that myself. That said, it's a demo
implementation we ended up with after weighing the following project
requirements:

R1. Product's binary repo is persistent across builds on the same host and
    across the hosts of different developers.

R2. Building isar-image-base should be kept simple. New users should not be
    overwhelmed with repo setup, external tools, or manual configuration (as
    much as possible).

R3. The build directory must be removable at any time without affecting a
    cloned or newly built binary cache.

We've translated that to the following design points:

D1. Apt repo is versioned, e.g. in git.

D2. Config files for this feature are put under isar/...

D3. Given D1 and D2, binary packages go under isar/... :( .

The idea is that in a project, the cache is a separate repo at the same level
as Isar:

product (originally cloned from isar)
product-bin

So, for production use, a tool like kas should be used to clone both at the right
revisions. We didn't want to do that for the demo due to R2. Just like
meta-isar is currently a template for real products, meta-isar-bin is an
example that should be moved out of the product source repo for a real product.

Technically, meta-isar-bin is a layer that contains configuration files that
have to be put somewhere. As package installation via apt is a requirement
(dpkg doesn't install package dependencies), meta-isar-bin should become the
default and would thus be required in the current implementation.

We had considered cloning meta-isar-bin into the build directory, but that
would mean manual configuration for new users, and (possibly modified) configs
could be deleted if the build directory is deleted -- an unpleasant surprise
for users familiar with OE.

If it's a separate directory, the config files should go there, since they are
per-repo, and there could be several:

product
company-bin
department-bin
product-bin

For a source distribution, having binary repos as layers sounds perverse. But
given Isar's focus on binary packages, it's layering in the sense that more
specific repos could override more general ones.
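
To make that concrete, I'd picture the bblayers.conf of a product build simply
listing the binary repos next to the source layers, roughly as below. The paths
are made up, and how the override order between the repos would actually be
expressed (apt preferences, layer priorities, etc.) is exactly the open point:

  # Hypothetical bblayers.conf: binary repos added as plain layers,
  # the most specific repo listed last.
  BBLAYERS ?= " \
      ${TOPDIR}/../product/meta \
      ${TOPDIR}/../product/meta-isar \
      ${TOPDIR}/../company-bin \
      ${TOPDIR}/../department-bin \
      ${TOPDIR}/../product-bin \
      "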

In the future, we'd like to move more stuff into the Isar core (either moving
files from meta-isar to meta, or merging meta into meta-isar and introducing
meta-template). Maybe we could introduce kas in a simple way, which would solve
this problem.

So, at the moment I'd suggest documenting the steps regarding meta-isar-bin for
creating real products.

If anyone has an elegant solution for the issues above, I would be glad to hear
it. I agree with the separate directory approach, but suggest postponing it
until we find a good way to introduce it to new users.
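
For the record, something along these lines is what I'd imagine for the
multiconf snippets later on. It is only a sketch: the DEB_CACHE_DIR name is
taken from your suggestion, and the path is made up for illustration; nothing
like this exists in Isar today.

  # conf/multiconfig/qemuarm-stretch.conf (hypothetical)
  # Keep the per-target package cache outside the build directory,
  # so that removing the build directory does not touch it (R1, R3).
  DEB_CACHE_DIR ?= "${TOPDIR}/../product-bin/${DISTRO}-${DISTRO_ARCH}"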

With kind regards,
Baurzhan.

Thread overview: 20+ messages
2017-09-19 12:20 [PATCH v2 0/4] Basic binary cache implementation Alexander Smirnov
2017-09-19 12:20 ` [PATCH v2 1/4] meta-isar-bin: Add reprepro configs Alexander Smirnov
2017-09-20  7:58   ` Henning Schild
2017-09-20  8:12     ` Alexander Smirnov
2017-09-20  8:38       ` Henning Schild
2017-09-20  8:51         ` Alexander Smirnov
2017-09-19 12:20 ` [PATCH v2 2/4] meta-isar-bin: Generate cache repos Alexander Smirnov
2017-09-20  8:11   ` Henning Schild
2017-09-20  8:26     ` Alexander Smirnov
2017-09-21  8:55       ` Andreas Reichel
2017-09-21  9:21         ` Claudius Heine
2017-09-22 10:56         ` Baurzhan Ismagulov [this message]
2017-09-25 10:49           ` Claudius Heine
2017-09-25 11:57             ` Alexander Smirnov
2017-09-25 13:48               ` Claudius Heine
2017-09-19 12:20 ` [PATCH v2 3/4] meta-isar-bin: Populate cache Alexander Smirnov
2017-09-20  8:22   ` Henning Schild
2017-09-20  8:49     ` Alexander Smirnov
2017-09-19 12:20 ` [PATCH v2 4/4] meta-isar-bin: Install packages via multistrap Alexander Smirnov
2017-09-20  8:28   ` Henning Schild
