FESA Threads

FESA uses threads to handle events, to communicate between its Server and Realtime components, and to manage various background activities. Their priorities and processor affinities can be changed to fine-tune the performance of FESA software.

FESA Threads and Default Priorities

Thread                       | Default Priority | Notes
Persistency                  | 1                |
Logging                      | 1                |
Diagnostics                  | 1                |
Signal Handler               | 1                | Linux signals
Client Notification Producer | 5                | Notifies all subscribers of Property changes (thread pool)
Client Notification Consumer | 6                | Processes notification events from the Notification queue
RDA Server                   | 7                | Processes incoming RDA requests (server Get and Set actions)
Concurrency Layer            | 10               | Processes RT events, executes Realtime Actions, enqueues notifications
Event Source                 | 11               | Generates and enqueues RT events; higher priority than the consumer to avoid lost events

Realtime / Nice Priority

Thread priority values used by FESA range from 1 (min) to 99 (max). If a FESA deploy unit is started using realtime priorities, these values are used directly. If the deploy unit is started with normal priorities, they are converted at runtime to nice values from 19 (min) to -20 (max).

When using non-realtime priorities, the FESA priorities are mapped to nice values as in the table below.

    RT   | 1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18 19 ... 49 50 51 ... 97  98  99
    nice | 19 19 18 18 18 17 17 16 16 16 15 15 14 14 14 13 13 12 12 ...  0  0 -1 ... -19 -20 -20
The reduced range of nice values means that different FESA priorities may map to the same nice value. Also note that negative nice values (FESA priority > 50) require root permissions.
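
For illustration, the mapping in the table can be reproduced with a small integer formula. The formula below is inferred from the table values above, not taken from FESA documentation, so treat it only as a description of the observed mapping:

    #include <cstdio>

    // Inferred from the table above (not from FESA sources): the FESA priority
    // range 1..99 is squeezed linearly into the nice range 19..-20, which is
    // why several FESA priorities collapse onto the same nice value.
    int fesaPrioToNice(int prio)
    {
        return 20 - (2 * prio + 4) / 5;
    }

    int main()
    {
        for (int prio : {1, 2, 3, 10, 50, 51, 99})
            std::printf("FESA prio %2d -> nice %3d\n", prio, fesaPrioToNice(prio));
    }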

Processor Affinity

In addition to its priority, it is possible to specify on which core(s) a thread is allowed to run.

By default, the affinity mask of each thread is 0xFFFFFFFF, which means it can run on any core.

For example, the SCU has 4 cores, Core 0 to Core 3 (check with "cat /proc/cpuinfo").

To run a specific thread only on Core 3, for example, use the affinity mask 00000000000000000000000000001000 in binary, which is 0x00000008 in hex (FESA uses the hex values); bit N of the mask enables Core N.

(Search for "sched_setaffinity" and the struct "cpu_set_t" for more information.)
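
For readers unfamiliar with that API, the following minimal sketch (plain Linux code, not part of FESA) pins the calling thread to Core 3, which corresponds to the 0x00000008 mask above:

    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>

    int main()
    {
        cpu_set_t set;
        CPU_ZERO(&set);   // start with an empty affinity mask
        CPU_SET(3, &set); // allow only Core 3 (bit 3 -> 0x00000008)

        // pthread_setaffinity_np() applies the mask to a single thread;
        // sched_setaffinity() applies it to a whole task/process.
        int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        if (err != 0)
            std::fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
        return err;
    }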

A script that also sets the processor affinity for saftd and dbus-daemon can be found here:
/common/export/nfsinit/global/cpuset-priority-fix.sh

Configuring Thread Priorities and Processor Affinities

Thread priorities can optionally be defined in the deploy-unit and instance XML files.

The priority of RT-Actions is set for each Concurrency Layer in deploy-unit/scheduler/concurrency-layer:
    <scheduler>
        <concurrency-layer name="LayerOne" threading-policy="single"
            id="_170315142345_0" prio="10" affinity="0xFFFFFFFF">
            <scheduling-unit scheduling-unit-name-ref="Classname::SUName" ></scheduling-unit>
        </concurrency-layer>
    </scheduler>

Other priorities are set in the deploy-unit/prio-management tree:
    <prio-management>
        <classes>
            <custom-event-sources>
                <custom-event-source name="MyCustomEvent" prio="1" affinity="0xFFFFFFFF" />
            </custom-event-sources>
        </classes>
        <deploy-unit>
            <common-event-sources>
                <timer-event-source prio="1" affinity="0xFFFFFFFF" />
                <timing-event-source prio="1" affinity="0xFFFFFFFF" />
                <on-subscription-event-source prio="1" affinity="0xFFFFFFFF" />
            </common-event-sources>
            <common-management-threads>
                <persistence-thread prio="1" affinity="0xFFFFFFFF" />
                <rda-server-thread prio="1" affinity="0xFFFFFFFF" />
                <signal-handler-thread prio="1" affinity="0xFFFFFFFF" />
                <logging-thread prio="1" affinity="0xFFFFFFFF" />
                <server-notification-consumer-thread prio="1" affinity="0xFFFFFFFF" />
            </common-management-threads>
        </deploy-unit>
    </prio-management>

Configuring Thread-Per-Device-Group

The default behaviour of a concurrency layer is to run all actions in a single thread. If multiple devices are involved in an event, the RTAction is called once, with the device collection containing all devices.

With the per-device-group policy, a thread is created per device group. If multiple devices are involved in an event, the RTAction is called once per device group, and the device collection given to each thread contains only the devices of that group.
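
The difference between the two policies can be sketched generically (hypothetical types and names, not the actual FESA API):

    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical sketch, not the FESA API.
    struct Device { std::string name; std::string group; };
    using Action = std::function<void(const std::vector<Device>&)>;

    // threading-policy="single": one call with the full device collection.
    void dispatchSingle(const std::vector<Device>& devices, const Action& action)
    {
        action(devices);
    }

    // threading-policy="per-device-group": partition the devices by their
    // grouping key, then one call per group (in FESA, each group's call
    // runs in its own thread).
    void dispatchPerGroup(const std::vector<Device>& devices, const Action& action)
    {
        std::map<std::string, std::vector<Device>> groups;
        for (const auto& d : devices)
            groups[d.group].push_back(d);
        for (const auto& entry : groups)
            action(entry.second);
    }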

Setting Threading Policy

The threading policy is set for each Concurrency Layer in deploy-unit/scheduler/concurrency-layer:

    <scheduler>
        <concurrency-layer name="LayerOne" threading-policy="per-device-group"
            id="_170315142345_0" prio="10">
            <scheduling-unit scheduling-unit-name-ref="Classname::SUName" ></scheduling-unit>
        </concurrency-layer>
    </scheduler>

Defining Device Groups

Device groups are defined by selection criteria in the scheduling-unit part of the design. The most basic selection rule (see below) groups devices by device name, giving one thread per device. It is a unary operation whose single operand is the device name. The field value "implicit" means that a group is created for each unique value that occurs.
        <scheduling-unit name="SUName">
            <selection-criterion>
                <selection-rule selection-rule-name="rule">
                    <unary-operation operand-name-ref="groupByName" ></unary-operation>
                </selection-rule>
                <operand operand-name="groupByName">
                    <field-selection>
                        <base-field-ref base-field-name-ref="name" ></base-field-ref>                        
                        <field-value>
                            <implicit />
                        </field-value>
                    </field-selection>
                </operand>
            </selection-criterion>
            <rt-action-ref rt-action-name-ref="AnAction" ></rt-action-ref>
            <logical-event-ref logical-event-name-ref="AnEvent" />
        </scheduling-unit>

Making Device Name available as a Selection Criterion

The field used for grouping is the device name, which is already present in all instance files. It needs to be added to the design as a configuration field of type "device-name-field" with the name "name":

    <data>
        <device-data>
            <configuration>
                <device-name-field name="name" id="_190128160618_0"/>
            </configuration>
        </device-data>
    </data>

Using Hardware Module as a Selection Criterion

If devices share hardware modules, it can be useful to have one thread for all the devices on one module. This requires:
  • a configuration field in the design
  • a selection criterion referencing the field
  • a value for the field for each device in the instance file

Scheduling Unit Selection Criterion Examples

  • Unary operation, field value implicit: a group for each unique value of the field (as in the example above).
  • Binary operation "and", operand field values implicit: groups containing only devices that have the same value of both fields.
  • Binary operation "or", operand field values implicit: groups containing only devices that have the same value of either field.
  • Unary operation, field value explicit: a group containing only the devices that match a particular value.
  • Binary operation "and", 2x field value explicit: a group containing only the devices that match both particular values.
  • Binary operation "or", 2x field value explicit: a group containing only the devices that match either of two particular values.

The two operands may be further operations, allowing complex logical expressions.

Thread Interactions

Events

An Event Source thread waits for its trigger, then enqueues an RT Event in the queue of its concurrency layer.

A Concurrency Layer thread dequeues the RT Event and executes the associated RT Action.

The default configuration gives the Event Source a higher priority than the Concurrency Layer. This minimises the chance of lost events, but it is still possible to overflow the event queue. The behaviour on queue overflow can be configured by setting the event-discard-policy to discard-newest or discard-oldest.
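
The effect of the two policies on a full queue can be illustrated with a generic bounded queue (an illustration of the concept, not FESA's implementation):

    #include <cstddef>
    #include <deque>

    // Generic illustration of the two overflow policies (not FESA code).
    enum class DiscardPolicy { DiscardNewest, DiscardOldest };

    template <typename Event>
    class BoundedEventQueue {
    public:
        BoundedEventQueue(std::size_t capacity, DiscardPolicy policy)
            : capacity_(capacity), policy_(policy) {}

        // Returns false if an event had to be discarded.
        bool push(const Event& e)
        {
            if (queue_.size() < capacity_) {
                queue_.push_back(e);
                return true;
            }
            if (policy_ == DiscardPolicy::DiscardNewest)
                return false;        // drop the incoming event
            queue_.pop_front();      // drop the oldest queued event
            queue_.push_back(e);
            return false;
        }

    private:
        std::deque<Event> queue_;
        std::size_t capacity_;
        DiscardPolicy policy_;
    };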

Data Integrity

By default, no mutexing is required in an RT-Action for FESA fields that have data-consistent="true", since FESA itself takes care of data integrity:
  • setting fields use a double-buffer mechanism with an atomic swap between the buffers
  • acquisition fields use a rolling buffer, with the event timestamp as index
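
The double-buffer idea for setting fields can be sketched generically as follows (a simplified illustration, not FESA's actual implementation; among other things, a real implementation must also make sure the spare buffer is no longer being read before it is reused):

    #include <atomic>

    // Simplified double-buffer sketch (not FESA internals), assuming a
    // single writer: the writer fills the inactive buffer and then
    // atomically publishes it, so readers always see a complete value.
    template <typename T>
    class DoubleBuffer {
    public:
        void write(const T& value)  // e.g. from a server Set action
        {
            int inactive = 1 - active_.load(std::memory_order_acquire);
            buffers_[inactive] = value;                         // fill spare buffer
            active_.store(inactive, std::memory_order_release); // atomic swap
        }

        const T& read() const       // e.g. from an RT action
        {
            return buffers_[active_.load(std::memory_order_acquire)];
        }

    private:
        T buffers_[2];
        std::atomic<int> active_{0};
    };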

These data-integrity guarantees also hold when multiple concurrency layers run on different cores, executing different RT-Actions in parallel for the same device/multiplexing-context.

There are a few edge cases, which can however be ignored in real life:
  • If two RT-Actions are executed with exactly the same timestamp (nanosecond precision, e.g. two RT-Actions triggered by the same WR-Event, running in different layers/cores), they will write to the same acquisition buffer slot.
  • If more RT-Actions are simultaneously active and writing than there are rolling-buffer slots for an acquisition field (the default is 1000).

Subscription Notifications

A Concurrency Layer thread executing an RT Action can (manually or automatically) create a notification event and enqueue it in the Notification queue.

The Notification Consumer thread dequeues the Notification.

Subscribers to a Property are informed via Client Notification Producer threads taken from a thread pool.

Settings

The RDA Server thread receives incoming RDA requests and executes the server Get and Set actions directly. Changes to settings are flagged to be synchronized.

The BufferSynchronizer thread copies the changed settings to the RT Setting buffer.