Yocto

The idea is to use Yocto as the base for a frontend operating system.

General Information

The initial goal is a Yocto-based ramdisk and kernel that is feature-equivalent to the gsi-custom-ramdisk: it boots a Linux kernel and provides SSH login, a busybox shell and monitoring via snmpd. As (nearly) all frontends rely on fesa and whiterabbit, the required libraries are included.

For application development (fesa deploy units), a standard Software Development Kit (SDK) is created. It includes all headers and libraries of the frontend environment (including fesa and whiterabbit) and a gcc compiler. For fesa, a command-line based fesa-codegen is included.

Information Collection

Application Development

As of 2022-12, a preview version of the SDK is available.

The Linux kernel and ramdisk are in asl75a:/common/tftp/csco/pxe/yocto/current

The SDK is in asl75a:/common/usr/embedded/yocto/fesa/sdk

To use the SDK, unset LD_LIBRARY_PATH and source the Yocto environment:
unset LD_LIBRARY_PATH
source /common/usr/embedded/yocto/fesa/current/sdk/environment-setup-core2-64-ffos-linux

If your Makefiles use $(CC), $(CXX) or $(CFLAGS), they are all set to use the SDK.

Layer Development

Core Development

Downloads and sstate-cache are available at http://yocto-cache.acc.gsi.de

A rough guide to creating the build environment follows. Don't do this on a shared central system like acc9.

# create base directory
mkdir ~/yocto
cd ~/yocto

All steps assume you start in the base directory.

poky

Check out the poky version we want to use as the base for our development:

# clone our yocto mirror
git clone https://git.acc.gsi.de/embedded/yocto-poky.git poky
cd poky
# checkout our base version
git checkout langdale

init

Set up all environment variables:
. ~/yocto/poky/oe-init-build-env ~/yocto/build

base config

config files are in ~/yocto/build/conf

site.conf

Defines settings relevant for the system where we are working, i.e. settings that apply at GSI but would differ if building happens somewhere else, for example on your laptop at home without access to GSI.

We are running a central Yocto build server. The build server exposes its downloads and build cache via HTTP. If the settings match, the files can be downloaded from the build server, so not every developer needs to compile them.

Create ~/yocto/build/conf/site.conf with our local (site-specific) settings:
# we want to use our local download cache
SOURCE_MIRROR_URL = "http://yocto-cache.acc.gsi.de/downloads"
INHERIT += "own-mirrors"
# we want to use our local build cache
SSTATE_MIRRORS = "file://.* http://yocto-cache.acc.gsi.de/sstate-cache/PATH;downloadfilename=PATH"
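The literal PATH token in SSTATE_MIRRORS is replaced by bitbake with each sstate object's relative path. A small POSIX shell illustration of the resulting URL; the object path here is a made-up example, real sstate objects have hash-derived names.

```shell
# bitbake substitutes the literal token PATH in SSTATE_MIRRORS with the
# relative path of each sstate object; sketch of the resulting URL
mirror='http://yocto-cache.acc.gsi.de/sstate-cache/PATH'
object='ab/cd/sstate-example.tar.zst'   # hypothetical object path
echo "$mirror" | sed "s|PATH|$object|"
```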

local.conf

Create ~/yocto/build/conf/local.conf with our settings:
# our distro is ffos defined in meta-ffos
DISTRO ?= "ffos"
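Other settings commonly placed in local.conf can go alongside DISTRO. A sketch; the MACHINE value is an assumption (the SDK name core2-64 suggests a generic x86-64 target) and must match the actual hardware:

```
# assumption: generic x86-64 target; replace with the real machine name
MACHINE ?= "genericx86-64"
# tune build parallelism to the build host
BB_NUMBER_THREADS ?= "8"
PARALLEL_MAKE ?= "-j 8"
```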

layer

For each layer a separate git repository should be used.

meta-ffos

Add the meta-ffos layer to get a default config for fair images.

Clone it:
git clone git@git.acc.gsi.de:embedded/meta-ffos.git ~/yocto/meta-ffos

and add it to ~/yocto/build/conf/bblayers.conf
...
BBLAYERS ?= " \
  ...
  ${TOPDIR}/../meta-ffos \
  "

or run bitbake-layers add-layer ~/yocto/meta-ffos inside ~/yocto/build

The layer provides the image ffos-image-default. This image is suitable for a PXE network boot. It uses systemd as the init system and initializes network interfaces via DHCP. It includes rpm and yum for package management, python and perl for scripting, and runs a net-snmpd daemon for monitoring. The unpacked image is approximately 175MB at runtime.

As meta-ffos contains distribution-specific settings, we need to re-initialize our variables:
. ~/yocto/poky/oe-init-build-env ~/yocto/build

more layers

meta-ffos requires additional layers

build

Now we can build the image:
bitbake ffos-image-default

This should complete in a few minutes, as it downloads nearly everything from our local caches.

naming conventions

proposal

  • each layer name starts with meta - Yocto convention
  • meta-ffos fair frontend operating system. Base recipes and configuration for fair images
  • meta-bsp-* is a baseboard support package. Containing specific drivers, kernel modules, flattened device trees, etc for a type of hardware
    • meta-bsp-scu standard control unit - probably empty
    • meta-bsp-microioc serial cards, tcp connection to motor controller, maybe utilities for the stepper motor, not the fesa classes
    • meta-bsp-microtca microtca crates
  • meta-cern cern buildsystem
  • meta-cmw all cmw recipes
  • meta-fesa fesa core and dependent libraries.
    • or should we merge cern, cmw, fesa into one layer?
  • meta-timing whiterabbit kernel modules, tools, etc
  • meta-lobi custom unsorted recipes for lobi

Group names are left out of the layer names, as group names might change.

sizes

Boot image sizes (measured with du -sh /):

  • first image (busybox, openssh): 128MB
  • adding package management (dnf, also adds python): 186MB
  • replacing grub with systemd-bootd: 170MB
  • adding net-snmpd: 175MB
  • 2022-04
    • 110MB for a (fesa) base image with busybox, systemd, dbus, openssh, netsnmp, etherbone, whiterabbit, saftlib, cmw, fesa
    • 210MB with rpm package management (pulls in python, perl, gpg, sqlite, ...)

ideas

  • use central bitbake server? Toaster?
Topic revision: r26 - 27 Feb 2024, AlexanderSchwinn