SCU Kernel Task Switching Latency

Introduction

The hardware group (thanks to Stefan!) has investigated task switching / preemption on the SCU kernel (Linux 3.10.101 with RT patch rt111) using the program preempt-test, which is available via the Linux Real Time Wiki.

Measurements and Remarks

Task Switching Latency at Optimal Conditions

The following numbers were obtained without FESA, saftlib etc. running:
  • average: 20us with hyperthreading enabled
  • average: 17us with hyperthreading disabled
  • upper bound: 100us (without the kernel parameter forcing a 'busy' idle state), 80us (with the parameter)
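
The report does not name the kernel parameter that was used. On Intel-based boards a 'busy' idle state is commonly forced with boot parameters like the following (the parameter choice is an assumption, not taken from the measurement setup):

```shell
# /etc/default/grub -- assumed parameters; the report does not state which
# one was used. idle=poll keeps the CPU spinning instead of entering idle
# states; the max_cstate parameters additionally forbid deep C-states.
GRUB_CMDLINE_LINUX="idle=poll intel_idle.max_cstate=0 processor.max_cstate=0"
```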

VERY IMPORTANT REMARK: The above values can only be achieved when following the recommendations in the documentation.

Task Switching Latency at ACO Conditions

In order to identify sources of bad preemption behaviour, a special kernel supporting 'ftrace' was prepared. The following numbers were obtained using a typical SCU setup including FESA, saftlib, ...
  • average: 40us @ FESA
  • upper bound: ~1ms @ FESA (even latencies of up to 7ms have been reported) (see discussion below)
  • the network driver is behaving correctly
  • NFS mounts
    • FESA was also run without NFS mounts
    • --> no improvement
    • --> the problems are not caused by NFS mounts
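
For reference, the 'ftrace' kernel mentioned above is driven via the tracing directory in debugfs; on a 3.10 kernel a session could look like the following (the mount point /sys/kernel/debug is an assumption and may differ):

```shell
# assumed debugfs mount point for a 3.10 kernel
cd /sys/kernel/debug/tracing
echo 0 > tracing_on                   # stop any running trace
echo function_graph > current_tracer  # trace kernel function entry/exit
echo 1 > tracing_on                   # start tracing
# ... reproduce the latency, e.g. an RDA write to a FESA property ...
echo 0 > tracing_on
head -n 40 trace                      # inspect the recorded trace
```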

Discussion (ask Stefan for further details)
  1. The bad value for the upper bound occurs when the FESA class interacts with the network card. This can be reproduced by RDA write access to a FESA property: RDA writes 'freeze' the CPU, preventing preemption and IRQ handling even for high-priority tasks. Subsequent RDA writes cause subsequent 'freezes' of about 1ms length each, so it is easily possible to exceed the 28ms limit required for ramping. The 'freezes' happen without any interaction by saftlib or dbus.
  2. RDA read accesses also have a negative impact on task switching latency, but the effect is much less pronounced than for RDA writes.
  3. Investigations suggest that this is not caused by the network card driver; preemption of other ongoing network operations seems to be handled well.
  4. It was investigated whether the 1ms 'freeze' is related to a 'dirty' PCIe/Wishbone driver. It is not: the PCIe/Wishbone driver was not involved in any of the 1ms 'freezes'.
  5. Would a BIOS update help? There are no further BIOS updates, as the hardware is no longer supported.

Causes of Bad Performance

  • use of non-RT mutex functions: Fast preemption requires using mutex functions dedicated for use with RT Linux. At present, neither FESA nor dbus uses these.
  • sleeping CPU: preemption may take significantly longer if a CPU has to wake up from a sleep state. The documentation recommends keeping the CPU 100% busy at all times (this can be achieved via a kernel parameter).
  • SMI: so-called System Management Interrupts may consume milliseconds of CPU time without any possibility of preemption. Is this relevant to us? Note CH: we cannot prevent SMIs; they are generated by the hardware/BIOS of the COM Express board.

-- DietrichBeck - 13 Sep 2018

Topic revision: 27 Sep 2018, DietrichBeck