For the Green » History » Version 135

Version 134 (Pierrick Hanlet, 08/23/2019 09:19 AM) → Version 135/142 (Pierrick Hanlet, 08/23/2019 09:25 AM)

h1. For the Green

This page is intended to help newcomers understand the Front End group's role in Fermilab accelerator controls. No prior knowledge is expected. If you are overwhelmed by jargon, please see our [[Front End Glossary]].

h2. +Organization+

"The laboratory organization can be found here":
"Accelerator Division (AD) organization":

Within Accelerator Controls, there are several groups with "soft" boundaries. Generally, these groups are:
* User Applications:
* System Services:
* Front Ends:
* Controls Hardware:
* Clocks & Links Engineering:
* Systems Hardware Engineering:
* Integrated Systems Engineering:
* Networks:

h2. +Scope of the front end group+

The Controls Front End (FE) group is responsible for a variety of FE types (details at the following links):
* [[For the Green#Erlang Front Ends|Erlang Front Ends]]
* [[For the Green#Mooc Front Ends|Mooc Front Ends]]
* [[For the Green#OAC Front Ends|OAC Front Ends]]

These front ends interface with a wide variety of hardware field devices in the following categories. Because the lines of responsibility are soft, this list is only intended to give a general idea; it is neither comprehensive nor definitive:
* Vacuum: pumps, valves, gauges
* Magnet power supplies
* RF (Radio Frequency - acceleration): LLRF, HLRF, modulators, RF amplifiers, cavities
* Beam monitors: beam loss (BLM), beam position (BPM), Multiwire, SWICS
* Motion Controllers: collimators, ???
* HRMs & IRMs: general purpose digitizers and status
* Clock/Timing
* Cameras

|\8=. Front End |
|_.Name |_.Description |_.Category |_.Type |_.Owner |_.Controller |_.Location |_.Comment |

h2. +Brief Overview of the Controls System+

Generally, a Control System (CS) is used to remotely control hardware devices which, in the case of the Fermilab accelerators, are out in the field and not easily accessible. For a complex system such as ours, the control system is required to:
* maintain its own stable operation using feedback loops
* offer remote control of hardware devices in the field
* allow operators to view/monitor parameters of the hardware
* ensure that separate hardware systems work together in a coordinated, reliable way
* ensure that subsystems operate within predefined limits, and if not, generate warnings for operators
* collect and archive pertinent data from the remote systems

The following diagram gives a visual overview of the Fermilab Accelerator Controls System. Operators generally interface with the CS via a console terminal, usually with acnet as the interface. Acnet is the local area network (LAN), with its own home-grown transport-layer protocol, that connects all accelerator subsystems. The term "acnet" is also used to refer to the tools offered to operators at the console. In the diagram, the acnet network is indicated by the thick magenta lines.

The DPM is the Data Pool Manager which is responsible for both consolidating requests to the hardware (minimizing network traffic) and mapping the appropriate protocols to hardware in a consistent way that is transparent to the user. The DPM also hosts the connection to the Device Database, which is the central database for all of the CS's configuration; e.g. calibrations, alarm limits, etc.

On the right side of the figure are the different front ends (FEs). Only two types are presently shown:
* Erlang - pink boxes: erlang FEs are hosted on linux PCs called CLX nodes; many such nodes exist. Each CLX PC may host one or more FEs, and each FE will have one, or probably more, erlang VMs running; see [[For_the_Green#Erlang Front Ends|Erlang Front Ends]].
* Mooc - blue boxes: mooc FEs are usually hosted in VME crates. Each mooc FE is an executable which is downloaded onto the VME controller at boot-up; see [[For_the_Green#Mooc Front Ends|Mooc Front Ends]].

A front end interfaces with hardware-specific protocols, which depend in part on the physical connection to the device (e.g. serial RS232, modbus, gpib, ethernet - to name a few) and on the firmware running on the device that communicates over this physical link. This part of the FE is device specific. The other task of the FE is to communicate the information to/from the hardware via acnet to the rest of the CS.
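As a hypothetical sketch of the device-specific half of that job (the hostname, port, and "MEAS?" command are invented for illustration; only the general pattern is meant), querying an ethernet-attached instrument with a line-terminated ASCII protocol from Erlang might look like:

```erlang
%% Hypothetical sketch: query an ethernet-attached instrument that
%% speaks a line-terminated ASCII protocol. The hostname, port, and
%% "MEAS?" command are made up for illustration.
query_device() ->
    {ok, Sock} = gen_tcp:connect("n-mydevice", 5025,
                                 [binary, {packet, line}, {active, false}],
                                 2000),
    ok = gen_tcp:send(Sock, <<"MEAS?\n">>),
    {ok, Reply} = gen_tcp:recv(Sock, 0, 2000),
    gen_tcp:close(Sock),
    Reply.
```

The `{packet, line}` option makes the runtime deliver one complete line per `recv`, which suits this style of instrument protocol.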


h2. +Erlang Front Ends+

h3. Erlang Features

Erlang is a functional programming language, which means (adapted from "wikipedia": and a "talk": by Dennis Nicklaus):
* computation is treated as the evaluation of mathematical functions
* extensive support libraries
* distributed runtime system; any process may spawn one or more new processes, which run as lightweight processes inside the erlang virtual machine (VM)
* erlang manages its processes with "supervisor" processes, _not the OS_; processes share no memory/variables so interaction is performed via efficient message passing
* avoids changing state
* variables, once assigned (called binding), cannot change value
* output of a function depends only on input arguments and not on state of the program
* no loops, the equivalent is performed by recursion
* copiously uses pattern matching
* concurrent, per-process garbage collection
* can test software in an interactive erlang shell and "hot-swap" software
* OTP: the Open Telecom Platform, a collection of useful middleware, libraries, and tools written in the Erlang programming language. One notable feature is its "behavior" patterns, enforced by the compiler to keep you honest. Behaviors are formalizations of common patterns; examples include gen_server and supervisor. Behaviors can also be defined by the developer, i.e. "home grown".
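For instance, the "no loops" and pattern-matching points above combine naturally: iteration is expressed as recursion over function clauses, with each clause selected by matching the argument's shape (a generic illustration, not framework code):

```erlang
%% Generic illustration: sum a list by recursion and pattern matching.
%% There is no loop construct; the recursion plays that role.
sum([]) -> 0;                  % empty list: base case
sum([H | T]) -> H + sum(T).    % head/tail match: recurse on the tail

%% In an interactive erlang shell:
%% 1> sum([1, 2, 3]).
%% 6
```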

h3. Erlang Resources

Note that as of the writing of this document, we are using Erlang/OTP 21.2
* "Erlang official tutorial":
* "A very good erlang tutorial":
* "Erlang cheat sheet":
* [[acsys-fe:Debugging_a_Front-end|Rich's super cool scripts to set up a new FE and debug it]]
* [[acsys-fe:Erlang_Driver_API|Instructions on how to write an erlang FE]]

h3. Erlang Framework

The erlang framework is encompassed in the code referred to as the DAQ (data acquisition) application. The daq application has the "application behavior" (it must have start() and stop() callbacks). This application performs all of the common functions required of a front end, namely connecting and passing requisite information to acnet, along with hooks for the hardware side of the communication link; see figure. Additionally, the daq application starts all of the associated processes such as clocks, alarm handling, etc. Lastly, the daq application provides the framework for the FE code, in this case using the home-grown "driver behavior".

At boot up, the daq application:
# checks that all of the [[Required Modules]] are available,
# starts the [[Required Applications]] that make up the skeleton of the framework,
# starts the application itself with daq_app, which starts the daq supervisor, daq_sup, which starts:
** download support
** device registry
** forward settings to SYBSET - database to save last setting
** retdat and gets32 - acnet reading
** setdat and sets32 - acnet setting
** alarm handling
** setsvr - history of acnet posts
** acsys -
** fast time plot (ftp)
# registers [[Registered Functions]]; i.e. these processes are put in a special lookup table so that they can be referred to by name. This lookup table allows other erlang processes to check for other processes of the same name which might collide with the ones in this list.
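In standard OTP terms, steps 2 and 3 above correspond to the usual application start sequence; from an erlang shell one could sketch it as (assuming the application is named daq, as above):

```erlang
%% Start the daq application and everything it depends on.
%% application:ensure_all_started/1 starts the dependencies first,
%% then daq itself, which in turn starts the daq_sup supervisor
%% and its child processes.
{ok, Started} = application:ensure_all_started(daq).
```

`Started` is the list of applications that were actually started, in dependency order.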

h3. Erlang Front End

Each erlang FE has driver code. In erlang FEs, the driver "speaks" the protocol of the hardware and communicates it to the daq application. Its "driver behavior" ensures that the methods needed to communicate with acnet are implemented; these methods are: init(), terminate(), message(), reading(), and setting().

* +each attribute is an entry point+
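A minimal driver skeleton, assuming the home-grown driver behavior described above, might look as follows. The module name, hardware details, and callback return shapes are hypothetical; only the five callback names come from the text:

```erlang
-module(my_driver).              % hypothetical module name
-behaviour(driver).              % the home-grown "driver behavior"

-export([init/1, terminate/1, message/2, reading/2, setting/3]).

%% Called once at start-up: open the hardware connection here.
init(_Args) -> {ok, no_state}.

%% Called at shutdown: release hardware resources.
terminate(_State) -> ok.

%% Handles out-of-band messages sent to the driver process.
message(_Msg, State) -> {ok, State}.

%% Services an acnet reading request for one attribute.
reading(_Attr, State) -> {ok, 0.0, State}.

%% Services an acnet setting request for one attribute.
setting(_Attr, _Value, State) -> {ok, State}.
```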

h3. Front End Deployment

Each erlang front end is bundled as a complete package; i.e. each requisite library and binary file is part of a FE tarball, thus ensuring stability against version changes for each FE. To create the tarball, one can use [[acsys-fe:Debugging_a_Front-end#Step-2-Create the configuration script|mk_acsysfe]]. In addition to collecting all of the requisite executable applications and libraries, the mk_acsysfe script also generates two ASCII files:
* *sys.config* - which must be edited to provide network connection details to each instance of the FE to be run. An example is:
<pre>
[{'daq', [{'apps', []},
          {'download', 'true'},
          [{16#20, 'ps_lxi_driver',
            [{'node', "n-iqa1ri"}, {'current_limit', {0.0, 120.0}},
             {'init_voltage', 12.5}]},
           {16#21, 'ps_lxi_driver',
            [{'node', "n-iqa2ri"}, {'current_limit', {0.0, 120.0}},
             {'init_voltage', 12.5}]},
           {16#22, 'ps_lxi_driver',
            [{'node', "n-iqa3ri"}, {'current_limit', {0.0, 250.0}},
             {'init_voltage', 15.0}, {'port', 9221}]},
           {16#46, 'ps_lxi_driver',
            [{'node', "n-iqa1li"}, {'current_limit', {0.0, 120.0}},
             {'init_voltage', 12.5}]}]}]},
 {'fah', [{'email_to', [""]},
          {'interval', 720},
          {'title', "CLX30E Front-end Alarms"}]},
 {'acnet', [{'default_node', "CLX30E"}]}].

%% Local variables:
%% mode: erlang
%% End:
</pre>

* *frontend.rel* - which lists all of the required packages for this FE. An example is:
<pre>
{"ACSys/FE", "1.0"},
{erts, "10.1"},
[{kernel, "6.1"},
 {stdlib, "3.6"},
 {sasl, "3.2.1"},
 {os_mon, "2.4.6"},
 {utillib, "2.0"},
 {clock, "2.1"},
 {fah, "1.2"},
 {drf2, "1.3"},
 {acnet, "2.1"},
 {daq, "1.7"},
 {sync, "1.5"},
 {inets, "7.0.2"},
 {tomco_amp, "1.0"}]
</pre>

As a reminder, each of these applications is likely to spawn additional processes.

To start the front end, one can use the script [[acsys-fe:Debugging_a_Front-end#Step-3-Start the Development Front-End|run_fe]]. When the FE is booted, each of the applications in frontend.rel is started; note that the start order is important, as some of the later entries in frontend.rel depend on earlier ones. Note also that these applications may well spawn additional processes. The last of the erlang processes to be spawned in this example is the FE driver for the hardware device.

h2. +Mooc Front Ends+

h2. +OAC Front Ends+

h2. +Sundry Front Ends+

h2. +ACNET+

ACNET stands for Accelerator Control NETwork.

ACNET is a protocol definition: simply, a way of communicating between machines that also speak ACNET.

ACNET is a connectionless protocol built on UDP. The ACNET header defines the trunk and node; in the ACNET ecosystem, the trunk and node values uniquely identify an endpoint. An ACNET node must be registered to be assigned a trunk and node.

ACNET is implemented on machines as a daemon process called ACNETD. ACNETD communicates over port 6801.

There exist libraries in C++, Erlang, JavaScript, OCaml, Python, and Rust that aid in handling ACNET messages. There are also libraries for ACNET services.

See also:

One of the most useful resources is the "Operations Wiki" (Ops Wiki): [[ Wiki]]

h2. +ACL+

ACL is a utility created by Brian Hendricks.

* Originally: Accelerator Command Language
* Also: ACNET Command Language
* Sometimes: ACNET Control Language (not by Brian)

Useful example:
To create an initial dabbel file (see below) with all of the devices on a node:
* log into a clx machine
* acl - this will put you in an acl shell
* list/noTitle/noHeaders/output=NodeDevices.dab node=MyNode 'LIS %nm' - where

See also:

For a summary of the (long) list of commands, see

h2. +Dabbel+

DABBEL (DataBase Batch Editing Language) is the facility for modifying the central Device Database. In short, one creates a dabbel file, which can be modified with dabbel commands and/or your favorite editor. Once the changes are made, one can load them into the Device Database.

Dabbel documentation:
* "Official documentation":
* "Beau's contribution":

Useful example:
If one needs to make batch changes to all devices relating to a particular node, after creating (see ACL above) a list of all acnet devices for the node - call the file NodeDevices.dab - one can:
* > dabbel NodeDevices.dab list - this will populate the dabbel file with database information for each device and create a file NodeDevices.lis
* edit the file NodeDevices.lis with your favorite editor to make the necessary changes and save it as NodeDevices.dab
* > dabbel NodeDevices.dab modify - this will modify the database (you can watch the changes on a D80 page; be patient, it may be slow)