Project Documentation Document SPEC-0013
Rev D
Software Concepts Definition
Steve Wampler, Bret Goodrich
Software Group
March 2017
REVISION SUMMARY:
1. Date: July 31, 2003
Revision: Draft A1
Changes: Initial version
2. Date: August 8, 2003
Revision: Draft A2
Changes: Minor modifications
3. Date: September 10, 2003
Revision: Draft A3
Changes: More details on functional architecture added
4. Date: July 26, 2004
Revision: Draft A4
Changes: Converted to HTML for TWiki
5. Date: August 8, 2006
Revision: Draft A5
Changes: Major rewrite
6. Date: 12 September 2006
Revision: Draft A6
Changes: Changed revision numbers to match standard. Formatting changes.
7. Date: 17 March 2008
Revision: Revision A
Changes: Ready for formal release.
8. Date: 22 April 2009
Revision: Revision B
Changes: Updated for FDR.
9. Date: 2 November 2009
Revision: Revision B2
Changes: Minor typos and updates.
10. Date: 5 August 2011
Revision: Revision C
Changes: Minor typos and updates prior to review.
11. Date: 20 September 2015
Revision: Revision C-1
Changes: Minor typos and updates prior to review.
12. Date: 13 October 2015
Revision: Revision C-2
Changes: Provided foundation material for WCI/WCS flow-down to requirements.
13. Date: 24 March 2017
Revision: Revision D
Changes: Updated WCI info. Up-rev’d for NSF review.
Table of Contents
Preface
Document Scope
1. INTRODUCTION
   1.1 GOALS
   1.2 ACRONYMS AND ABBREVIATIONS
   1.3 GLOSSARY
   1.4 REFERENCED DOCUMENTS
2. OVERVIEW
   2.1 EXISTING TELESCOPE CONTROL SYSTEMS
       2.1.1 Projects
       2.1.2 Trends in Observatory Software
       2.1.3 Software Layers and Control Software
   2.2 THE DKIST FUNCTIONAL ARCHITECTURE
       2.2.1 Information flows within DKIST
       2.2.2 System Structure
       2.2.3 Behavior
   2.3 DKIST TECHNICAL ARCHITECTURE
       2.3.1 DKIST Common Services Framework (CSF)
       2.3.2 The Role of the Container/Component Model
3. OBSERVATORY CONTROL SYSTEM
   3.1 OPERATIONAL REQUIREMENTS ON THE DKIST
   3.2 THE OBSERVING MODEL
   3.3 STRUCTURE
4. DATA HANDLING SYSTEM
   4.1 BULK DATA TRANSPORT
       4.1.1 Data sources
       4.1.2 Data sinks
       4.1.3 Data streams
       4.1.4 Data routing
   4.2 CAMERA LINES
   4.3 QUALITY ASSURANCE
   4.4 DATA STORAGE, RETRIEVAL AND DISTRIBUTION
   4.5 DATA REDUCTION PIPELINES SUPPORT
5. TELESCOPE CONTROL SYSTEM
   5.1 FUNCTIONALITY OF THE TCS
       5.1.1 Wavefront Correction
       5.1.2 Coordinate Systems
   5.2 TCS SUBSYSTEMS
       5.2.1 Mount Control System
       5.2.2 Enclosure Control System
       5.2.3 M1 Control System
       5.2.4 M2 Control System
       5.2.5 Feed Optics Control System
       5.2.6 Wavefront Correction Control System
       5.2.7 Polarization Analysis and Calibration Control System
       5.2.8 Acquisition Control System
6. INSTRUMENT CONTROL SYSTEM
   6.1 OBSERVATION MANAGEMENT SYSTEM
   6.2 INSTRUMENT SEQUENCER
   6.3 MECHANISM CONTROLLER
   6.4 DETECTOR CONTROLLER
   6.5 TRADS
   6.6 STANDARD INSTRUMENT
Preface
This document is intended to act as a 'user guide' to the operational behavior of the DKIST software and
controls system. The document focuses on conceptual issues of the design without significant discussion
of design details. Within the project, this document serves as a road map for the detailed design that is
developed during the preliminary and critical design phases; it is intended to communicate overall system
characteristics to users, developers, and support staff. Because most modern telescope software systems
share many functional similarities, this document concentrates on those aspects of the DKIST that differ
from traditional telescope software designs. That is not to dismiss the importance of the more
conventional aspects of the design, but rather to recognize that many aspects of observatory software
design are now well understood.
This document presents two aspects of the overall system software architecture: the functional
architecture and the technical architecture. The functional architecture overview describes the basic
components of the software system and their relationships with each other. The technical architecture
describes the foundation on which the functional architecture can be implemented. An important aspect of
the DKIST software model is the clean separation of the functional and technical architectures. The focus
here is on the relationship of the functional architecture to user needs as given in the Science
Requirements Document.
Functional models are presented as part of this document to help in understanding the operational
characteristics and to promote discussion on relevant topics. Reviewers are asked to focus on the three
W's: what the characteristics are, when they are needed, and why they are needed. Issues of how the
design specifically implements these characteristics and how the work should be allocated are addressed
later in the design process and need less focus here. Issues relating to resource and performance
constraints should certainly be noted but please keep in mind the difference between conceptual issues
and issues relating to design details.
An important aspect of the DKIST functional model is the organization of the software into principal
systems and the interactions between these systems. Each principal system operates as independently as
possible from the others and performs a specific task. Separate sections in this document concentrate on
the operational concepts for each principal system.
Document Scope
The Software Concepts Definition is a user-oriented document describing the functional characteristics
of the DKIST software and controls system. This document is the top level document for software and
controls design. All other software and controls documents must comply with the system characteristics
outlined here. Implementation details are not addressed except (a) as examples showing possible
approaches and (b) as required to address specific issues that appear at the conceptual level.
1. INTRODUCTION
The Daniel K. Inouye Solar Telescope (DKIST) is intended to be the premier solar observatory for
experimental physics. Unlike its night-time counterparts, which operate with relatively fixed instrument
sets, DKIST's science goals and requirements are best met by a laboratory style instrument configuration,
where scientific requirements often dictate that instrumentation be assembled by scientists to meet the
unique demands of each experiment's goals. In order to maximize observing efficiency the DKIST
software and control systems must be designed to operate smoothly in this environment.
The fact that instrument configurations are adapted to experiments means that the control system must
adapt to changing control configurations, provide methods of reducing errors in instrument
configurations, and provide support for interfacing components developed by multiple organizations.
Another role of the DKIST software system is to provide a framework for subsystem software
development that reduces both the cost of interfacing and the cost of maintaining software systems during
their lifetime at DKIST. Having subsystems developed with software built on a common framework
reduces software life cycle costs by promoting reuse.
In order to meet the requirement of providing flexibility in a laboratory style operations environment, the
control system uses an instrument set model. This document includes an introduction to instrument sets
and briefly outlines the salient characteristics of observing with instrument sets, including the use of
observing modes to ensure coordinated actions across all instruments in the set. The aim is to provide
some insight into the approach being proposed as part of the overall software and controls system design.
The software architecture is built on top of recent developments in software engineering practices. These
practices and the architecture itself address the same issues in software that other engineering fields have
had to address. Just as electronics engineering has benefited from standard bus configurations, pin-outs,
power characteristics, etc., this architecture supports similar standardization in DKIST software. An
important component of this standardization is the Container/Component Model, also introduced in this
document.

Figure 1. Observatory System Reference Model
Another trend in modern control system design is the move to increasingly distributed systems, where
multiple, typically smaller, components are spread across a system network. The DKIST control system
concept embraces this move toward distributed processing and the technical architecture is designed to
operate effectively in such an environment. This means the system communications support, at several
different levels, is a key aspect of the design.
Because a significant number of the DKIST software requirements also exist as requirements on existing
astronomical and non-astronomical systems, it makes little sense to attempt to develop new control
system architectures from scratch. However, the unique characteristics of observing with DKIST also
mean that no existing telescope system architecture can be directly applied to DKIST. Consequently, the
architecture presented in this document is heavily influenced by work done by other projects, but is not an
exact copy of any single existing project. Readers familiar with ALMA, Gemini, SOLIS, VLT and other
systems should recognize common characteristics from those systems.
The majority of observatory systems in operation today exhibit a common underlying organizational model.
This model serves as a tool to help users understand the relationships between observatory components. It
also illustrates the roles of various components in meeting operational requirements. This model is purely
functional: various components may be grouped in different arrangements at different observatories and
the number of components varies with system complexity, but the basic structure and interactions are
consistent across observatories. The model itself helps emphasize the integrated nature of software and
control in a modern observatory.
From the above model, it is possible to derive a model tailored to DKIST requirements. The
salient changes are in instrumentation, where the DKIST laboratory-style operations approach differs
from many existing systems, and in the addition of a TCS subsystem for coudé lab carousel control. The
DKIST model extends the reference model to accommodate flexible configuration of instruments. Of
course, more detail exists than is shown in Figure 2.
Figure 2. Observatory system model
The DKIST technical architecture is typical for modern observatories and displays a hierarchical layering
of functionality similar to that provided by the ALMA (and others) architecture. The n-tiered hierarchy
helps control development and maintenance costs through localization. Individual components in the hierarchy
can be replaced with minimal impact on other components.
Figure 3. DKIST technical architecture structure
1.1 GOALS
The model presented in this document is intended to address the following goals:
Flexibility in a laboratory style operating environment;
Common control behavior to simplify operations;
Reduced development costs through reuse and separation of functional and technical
architectures;
Reduced integration costs;
Reduced maintenance costs; and
Simplification of instrument setup/take-down to reduce the chance for errors and to
increase observing efficiency.
1.2 ACRONYMS AND ABBREVIATIONS
ACS – ALMA Common Software
ALMA – Atacama Large Millimeter Array
ATST – Advanced Technology Solar Telescope (former name for DKIST)
CA – EPICS' Channel Access
CSF – DKIST Common Services Framework (originally the ATST Common Services Framework)
CORBA – Common Object Request Broker Architecture
CORBA/IDL – CORBA's Interface Definition Language
CORBA/IIOP – CORBA's Internet InterORB Protocol
DDS – Data Delivery System
DHS – Data Handling System
DKIST – Daniel K Inouye Solar Telescope (formerly ATST)
DST – Dunn Solar Telescope
EPICS – Experimental Physics and Industrial Control System
ESO – European Southern Observatory
ICE – Internet Communications Engine
ICS – Instrument Control System
OCS – Observatory Control System
SCA – SOAR Communications Architecture
TBD – To Be Determined
TCS – Telescope Control System
XML – Extensible Markup Language
1.3 GLOSSARY
Client - a software entity that makes requests of other software entities (called servers).
Component - a software entity with formal interfaces and explicit dependencies. Components must be
remotely accessible in a distributed environment.
Container - part of the Common Services Framework; containers insulate Components and Controllers
from their physical location and provide core services to those Components.
Controller - a Component that implements the DKIST Command/Action/Response protocol.
CSF - DKIST Common Services Framework: software provided as part of the DKIST control system for
use by all components.
Functional architecture - a view of the software architecture that focuses on system behavior and
component interactions.
Functional interfaces - those interfaces that characterize the functional behavior of specific Components.
Life cycle interface - the interface allowing control of a Component's life cycle: starting, stopping,
relocating, monitoring, etc.
RDBMS - a Relational Database Management System.
Server - a software entity that responds to requests from other software entities (called clients).
Services interfaces - interfaces used by Components to access CSF services.
Technical architecture - a view of the software architecture that focuses on the implementation and
support structures.
1.4 REFERENCED DOCUMENTS
1. SPEC-0001, Science Requirements Document.
2. SPEC-0005, Software and Controls Requirements.
3. SPEC-0014, Software Design Document.
4. SPEC-0015, Observatory Control System Specification Document.
5. SPEC-0016, Data Handling System Specification Document.
6. SPEC-0019, Telescope Control System Design Requirements.
7. SPEC-0022, DKIST Common Services User Manual.
8. SPEC-0023, Instrument Control System Specification Document.
9. SPEC-0036, Operational Concepts Definition Document.
2. OVERVIEW
2.1 EXISTING TELESCOPE CONTROL SYSTEMS
The DKIST breaks new ground for solar telescopes. However, that does not mean that DKIST controls
and software need to be developed from scratch. There is a wealth of information available from other
projects that should be useful to DKIST. Further, other institutions may be interested in developing
software for similar tasks, opening the possibility of collaboration between DKIST and those institutions.
This section documents the work done to establish communications between DKIST and the software
development teams on other projects, and briefly summarizes the information acquired from those
projects.
2.1.1 Projects
Several classes of projects have been identified and interesting projects within each class were selected
for further study:
2.1.1.1 Solar astronomy projects
DST - Classic lab-based solar astronomy environment (NSO/Sunspot)
SOLIS - Semi-automated solar astronomy environment (NSO/Tucson)
GONG/GONG++ - Highly automated solar astronomy environment (NSO/Tucson)
GREGOR - New Kiepenheuer Institut für Sonnenphysik solar telescope in the early stages
(Germany/Canary Islands)
Instrument work at HAO - likely source of one or more DKIST instruments (Boulder)
Instrument work at IAC - major solar astronomy site in Europe (Canary Islands)
2.1.1.2 Stellar astronomy projects
SOAR - New, low cost 4m telescope with novel approach to controls (Chile)
Gemini - Major telescope with sophisticated control systems (Hawaii)
CFHT - Older 4m telescope with newly revamped control systems (Hawaii)
Keck - Major telescope with straightforward control systems (Hawaii)
GTC - New major telescope with state-of-the-art control systems (Canary Islands)
Starfire - Older 4m-class telescope with heavy AO experience (Albuquerque)
ESO/VLT - Major telescope with modern control system (Chile/Germany)
2.1.1.3 Radio telescope projects
ALMA - Major new telescope project exploring state-of-the-art control systems (Chile)
JCMT - Radio telescope running software in common with an infrared telescope (Hawaii)
Additional discussions were held with other sites during the 2002 SPIE conference.
2.1.2 Trends in Observatory Software
Looking at a wide range of observatories at various life cycle stages provides some indication of the
trends in software systems. This section summarizes some of the trends that are relevant to DKIST. The
material presented here was collected early in the project development and reflects the technology of that
time. This was used to drive the decision making for DKIST high-level software and should be
considered a background for the decisions that have been made. Future upgrades and improvements to
DKIST software should re-examine those decisions in light of advancing technologies.
2.1.2.1 General observatory control systems
Observatories are moving away from special purpose, home-grown software efforts in favor of leveraging
standards and common solutions. Examples of this include the increasing role of XML as a standard
format for both communications and for data representation, the increased use of databases supporting
standard interfaces (SQL, JDBC, ODBC, etc.), and the growing acceptance of standardized infrastructure
frameworks (CORBA, EPICS, LabVIEW, JXTA, etc.).
In terms of programming languages, C++ is losing out to Java for high-level software and there are some
indications that real-time Java may soon be a valid competitor to C/C++ for near-real-time system
software. Tcl/Tk is widely used, but doesn't appear to be gaining many new converts. Python/Tk has
achieved cult-like popularity at some sites, but is not widespread. LabVIEW is in use at quite a few
observatories, but most use it for direct control of specific systems (e.g., as engineering interfaces to
mirror alignment systems or thermal control systems). Two sites (SOAR and GREGOR) are more
ambitious and are using (or plan to use) LabVIEW to develop integrated control systems.
Observatories are also moving towards establishing common software across their entire domain. ESO,
for example, now runs the VLT control software on all telescopes at La Silla as well as at Paranal. Gemini
was designed from the start to use the same control software at both sites, and recent JACH has retrofitted
a common software system onto both UKIRT and JCMT—interesting because it is the first time an
observatory as operated both optical and radio telescopes using common software control systems.
Most new sites are now designing distributed control systems, with control operations spread across
clusters of less expensive machines. Both direct peer-to-peer approaches (where each communication is
targeted to a specific task running on a specific computer, e.g., SOAR's approach) and more generalized
name-service-based approaches (where communications target applications or groups of applications
without regard to which machines they are running on, e.g., SOLIS' approach) are common. The choice appears to be driven as
much by the choice of communication framework as anything else.
Command-oriented control is gradually being replaced by configuration-oriented control, though
observatories seem to be creating their own names for the concept. Keck was one of the first with its keyword task layer
(KTL). Gemini, VLT, ALMA, JACH, and SOLIS use similar approaches. Subsystems are expected to
operate more independently with less sequencing and less synchronization with other components.
The virtual-telescope model for pointing and tracking developed by Pat Wallace at STFC (since retired) is
the dominant model for telescope pointing kernels (SOAR, Gemini, Subaru, many others). A few vendors
offer competing kernels as part of their telescope packages but are capable of integrating with Pat
Wallace's model.
2.1.2.2 Communications frameworks
CORBA is now firmly established as a viable communications structure (ALMA, ESO, GTC, SOLIS, and
others). The SPIE presentation on the use of CORBA by SOLIS was very well received. A few sites are
looking at alternatives that are similar to CORBA, including the Internet Communications Engine
(DKIST, LBT) and the Data Delivery System (LSST). Some sites (notably Gemini) use Java-based
communication systems (JINI, RMI), but not throughout their control systems. EPICS' Channel Access is
used at several sites (Gemini, Keck, CFHT, and JACH). While LabVIEW provides a peer-to-peer
communication system, SOAR has adopted its own layer on top of TCP/IP for inter-task
communication. Web-based communication is common for remote observing, though not necessarily
browser based.
2.1.2.3 Presentation tools
Tk is still popular for GUIs, though increasingly driven from languages other than Tcl (Python/Tk and
Perl/Tk are popular choices). Java/Swing is used heavily at a number of sites for GUIs, but often not for
engineering-level GUIs. As mentioned earlier, LabVIEW is popular for engineering-level GUIs, but is
rarely used above that layer. Windows-based machines are frequently used for operator consoles because of both the low
cost of the hardware and the wealth of GUI-development tools available under Windows.
2.1.2.4 Software costs
A few sites have looked closely at software costs. Although most sites are adopting accepted software
engineering practices, this does not seem to have led to a decrease in the cost of software development. In
fact, the cost of development appears to have risen, with the expectation that the increased effort during
development will result in reduced maintenance costs during operation. Because of the rising level of
software control in all telescope systems, software development is starting earlier in most projects and is
more integrated into the development phase than prior telescope efforts. A review of software
development costs for several recently completed telescopes (Gemini, SOAR, VISTA, and Keck II)
shows that software contributes between 5% and 8% of a telescope's construction costs.
2.1.2.5 Hardware
There is a mass movement toward commodity hardware, with the majority being Intel x86 machines
(including those made by AMD). PC clusters are very popular, with Beowulf clusters specializing in data
processing. Linux, FreeBSD, and Windows are the operating systems of choice on this hardware. While
VxWorks is still very prominent for real-time systems (typically on PowerPC machines), real-time Linux
is gaining popularity, on both x86 and PPC architectures.
There are a number of commercial companies now offering telescopes in the DKIST range. Most of these
companies are capable of providing at least the lowest level control systems with their hardware and a
few offer full telescope control systems as well (Brashear, Vertex/RSI, Telescope Technologies Ltd, MT
Mechatronics).
Instruments and cameras capable of producing extremely large observations are becoming available in the
nighttime world. Several sites (CFHT, LSST) are just now coming to grips with the implications of such
large data volumes. Very few sites, however, expect data rates as high as those anticipated at DKIST.
2.1.3 Software Layers and Control Software
One aspect of control software that is often overlooked is that control system software is implemented as
a series of layers. Each layer has a specific role and provides services that simplify the development of the software
in the layers above and below. A well-designed control system will offer common layers for the software
in many applications, but it is rare to find a site that is able to provide a common set of layers for all
applications.
Often, deciding to model a part of a control system on some other site's system means having to adopt far
more of their system than expected. This also means that it is very difficult to 'mix-and-match' pieces of
control systems to produce a coherent system. DKIST has spent some time looking at various facilities in
an effort to understand how different control software is layered and how that compares with other sites.
Table 1 shows how several sites have layered the software for a typical application (in this case, a
telescope control system operator's GUI).
Layer         ALMA         Gemini        SOAR       SOLIS
Application   TCS GUI      TCS GUI       TCS GUI    TCS GUI
Presentation  Java/Swing   Tcl/Tk        LabVIEW    Java/Swing
Interface     CORBA/IDL    EPICS/RTDB    SCA        CORBA/IDL
Session       ACS          EPICS/engine  None       SolisComm
Transport     CORBA/IIOP   EPICS/CA      sockets    CORBA/IIOP
Network       TCP/IP       TCP/IP        TCP/IP     TCP/IP
Table 1. Software Layers at various sites

It should be noted that it is often difficult to establish a correspondence between software layers at
different sites, or even between different applications at the same site. For example, SOAR appears to
have one fewer layer in the software for its TCS control GUI than the others shown above.
2.2 THE DKIST FUNCTIONAL ARCHITECTURE
The DKIST Software Architecture is, in large part, a typical modern observatory software architecture
and is patterned along lines similar to the SOLIS and ALMA software architectures. The software is
based on having specialized components operating in a heavily distributed environment using
standardized communication operations.
Functionality is grouped into five categories: Observatory Control, Telescope Control, Instrument
Control, Data Handling, and Common Services, each of which is in turn composed of various related
subsystems. The OCS, TCS, ICS, and DHS are implemented on top of the Common Services Framework
and are referred to as the principal systems.
The four principal systems are viewed as peers with different responsibilities:
The Observatory Control System handles overall observatory management: allocation of
resources, scheduling of observations, and monitoring of system activities. The user interfaces are
the responsibility of the OCS. The OCS also manages the Instrument Control System (ICS)
(discussed in detail below).
The Telescope Control System is responsible for operating the pointing model. Its subsystems
include the Enclosure, Mount, Feed Optics, AO, M1, and M2 control systems.
The Instrument Control System supports instrument operations in a synchronized environment. It
manages the set of instruments currently involved in an OCS experiment and provides the top
level of sequencing control for each instrument.
The Data Handling System is responsible for data collection and distribution. As such, some of
the functionality of the DHS is necessarily assigned to ICS development. Initially, the DHS
provides both a quick-look facility and the ability to save data in some form.
Later sections outline the functionality provided by each of these principal systems. This section
examines the information flows within DKIST and the overall functional behavior of the system.
2.2.1 Information flows within DKIST
The following major information flows are used in the DKIST control system:
2.2.1.1 Configuration data type
The DKIST control system uses a parameter-based methodology rather than a command/argument one. In
practice, this means that systems are given a set of parameters and are expected to match their state with
the state given by the input parameters. DKIST identifies the parameter set as a configuration.
A DKIST configuration data type contains a header and a body. The header gives particular meta-data
about the configuration, including a unique identifier for that configuration and a name for the originating
experiment. Other header fields may also exist to provide information not directly relevant to the input
parameters.
The body consists of a list of parameters, each element containing an attribute-value pair. Attribute-value
pairs are commonly associated with data transport systems and are used in communication
systems such as CORBA, EPICS, and LabVIEW. DKIST attribute-value pairs are string types; no other
type is transmitted and all other types are converted to strings before transmission. The performance
impact of this conversion is minimal. The advantage of using one type is immense, since it avoids
common problems with serialization, endian-ness, and accuracy.
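As a rough sketch (the class and method names here are illustrative, not the actual CSF API), a
configuration can be pictured as follows:

    // Sketch of a DKIST-style configuration: header meta-data plus a body of
    // attribute-value pairs, with every value carried as a string.
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class Configuration {
        private final String id;          // unique identifier for this configuration
        private final String experiment;  // name of the originating experiment
        private final Map<String, String> body = new LinkedHashMap<>();

        public Configuration(String id, String experiment) {
            this.id = id;
            this.experiment = experiment;
        }

        // All values are converted to strings before transmission.
        public void set(String attribute, String value) { body.put(attribute, value); }
        public String get(String attribute)             { return body.get(attribute); }
        public String getId()                           { return id; }
        public String getExperiment()                   { return experiment; }
    }

A client might then build a configuration with, for example, cfg.set("filterWheel.position", "3") and hand
it to the target system, which is expected to match its state to the parameters it receives.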
2.2.1.2 Command/Action/Response directives
Direct control of one component by another component is accomplished using the Command/Action/
Response model. In DKIST, commands are implemented as state-change directives. To effect a change in
behavior of a target component the controlling component describes the conditions necessary to
accomplish the state-change by providing the target component with a set of attributes (name, value pairs)
that characterize the difference between the existing state and the desired state. This set of attributes is a
configuration. A configuration may be simple, consisting of only a few attributes or it may be quite large
with hundreds of attributes.
When a component is commanded to some behavior with a configuration, the command is immediately
examined for validity and accepted or rejected. If the command is accepted then the component starts
whatever actions are required to bring it to the desired state. An important aspect of the
Command/Action/Response model is that this action is performed asynchronously—the controlling
component that originated the command is free to continue with other operations as soon as the command
has been accepted by the target component. Once the target component has completed the action (either
successfully or unsuccessfully) the commanding component is notified via the response mechanism.

Figure 4. Command/Action/Response model
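A minimal sketch of this protocol, reusing the configuration sketch above (the interface and class names
are illustrative, not the actual CSF definitions):

    // Sketch of Command/Action/Response: submit() validates the configuration
    // and returns at once; the action runs asynchronously; completion (or
    // failure) is reported later through a response callback.
    public interface ResponseHandler {
        void actionComplete(Configuration config);           // desired state reached
        void actionFailed(Configuration config, String why); // action did not complete
    }

    public interface Controller {
        // Returns true if the command is accepted, false if it is rejected.
        boolean submit(Configuration config, ResponseHandler handler);
    }

    public class MotorController implements Controller {
        @Override
        public boolean submit(Configuration config, ResponseHandler handler) {
            if (config.get("motor.position") == null) {
                return false; // rejected: a required attribute is missing
            }
            // Accepted: match the requested state asynchronously, respond when done.
            new Thread(() -> {
                // ... drive the motor toward the requested position ...
                handler.actionComplete(config);
            }).start();
            return true; // the commanding component may continue immediately
        }
    }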
2.2.1.3 Event notifications
Status information is communicated throughout DKIST using the event service. Events are messages that
are broadcast from some source. System components that are interested in particular events must
subscribe to the appropriate event channel. Events are published by name and contain sets of attribute-
value pairs. Subscription to events is also by name, but wildcards may be used by a subscriber to receive
classes of events through a single channel. The DKIST event service is reliable and high-performance.
Events from a given publisher are also delivered to subscribers in the same order in which they are
published. All events are time-stamped and identify their source—both the generating component and the
configuration that was active in that component when the event was generated.
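For illustration (a hypothetical API, not the actual CSF event service), name-based subscription with
wildcards might look like this:

    // Sketch of name-based publish/subscribe with wildcard subscriptions.
    import java.util.Map;

    public interface Event {
        String name();                    // published event name
        String source();                  // generating component
        double timestamp();               // time of publication
        Map<String, String> attributes(); // attribute-value pairs
    }

    public interface EventCallback {
        void handle(Event event);
    }

    public interface EventService {
        // A wildcard pattern such as "tcs.status.*" delivers a whole class
        // of events through a single subscription.
        void subscribe(String namePattern, EventCallback callback);
        void publish(String name, Map<String, String> attributes);
    }

A component interested in all TCS status events would then subscribe once with a pattern such as
"tcs.status.*" rather than naming each event individually.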
2.2.1.4 Alarms
System alarms have the same structure and distribution properties as events but are functionally distinct.
Alarms denote abnormal conditions that require operator intervention. Alarms are not considered an
integral part of the DKIST safety system, however. Ensuring safety is solely the responsibility of the
Global Interlock System. Alarms are useful for monitoring safety status as well as other abnormal
conditions and software systems may be implemented to refuse many commands when unchecked alarms
exist. Alarms are tagged in the same manner as events.
2.2.1.5 Health
Software components in DKIST may have an associated health status of good, ill, bad, or unknown. The
health of a component is tested independently by the DKIST software infrastructure and results are
returned to the operator for inspection.
2.2.1.6 Log messages
Log messages are simple string messages that record system activity. As with events and alarms, log
messages are transmitted using a publish/subscribe mechanism and are time-stamped and source tagged.
In addition, log messages are categorized as one of debug, note, warning, and alarm (the alarm log
message category is not isomorphic to system alarms—all system alarms are logged using an alarm log
message but the handling of system alarms does not depend upon this logging). In addition, log messages
in the debug category have an associated level; debug messages are only published if their level is less
than or equal to the current debug level of the originating component.
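As a sketch of the level gating just described (a hypothetical class, not the CSF logging API):

    // Sketch of categorized logging with a per-component debug level. A debug
    // message is published only if its level does not exceed the component's
    // current debug level.
    public class Logger {
        public enum Category { DEBUG, NOTE, WARNING, ALARM }

        private int debugLevel = 0; // controllable from external sources

        public void setDebugLevel(int level) { this.debugLevel = level; }

        public void note(String msg)    { publish(Category.NOTE, msg); }
        public void warning(String msg) { publish(Category.WARNING, msg); }
        public void alarm(String msg)   { publish(Category.ALARM, msg); }

        public void debug(int level, String msg) {
            if (level <= debugLevel) {
                publish(Category.DEBUG, msg);
            }
        }

        private void publish(Category category, String msg) {
            // The real service would time-stamp, source-tag, and broadcast via
            // publish/subscribe; printing stands in for that here.
            System.out.println(category + ": " + msg);
        }
    }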
2.2.1.7 Persistent stores access
A great deal of information in DKIST needs to be recorded for arbitrary periods that are independent of
the lifetimes of specific components. In addition, system components need access to initialization
parameters on startup and re-initializations. Finally, information specific to an experiment (instrument set
details, science programs, configurations, and science header data) is preserved. DKIST uses various
persistent stores for these types of information. System components have access to these stores either
directly or through database proxy services.
Alarms and log messages are always recorded in persistent stores. Events are not normally recorded but a
high-performance engineering archive is available for recording events upon demand. These persistent
stores are searchable using a general query mechanism under program control.
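A minimal sketch of such program-controlled access (a hypothetical interface; the actual store and proxy
services differ):

    // Sketch of component access to a persistent store, either directly or
    // through a database proxy service.
    import java.util.List;

    public interface PersistentStore {
        void put(String key, String value);         // record information
        String get(String key);                     // fetch by key (e.g., on startup)
        List<String> query(String queryExpression); // general query mechanism
    }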
2.2.1.8 Bulk data transfers
DKIST cameras also produce scientific data (images or spectra). The volume (both size and rate) of data
produced can be significant in some experiments. DKIST provides special bulk data streams for the
transfer of science data generated by cameras. These bulk data streams hold science data in a persistent
data cache where it is accessible by components. Data is held in the data cache until it is successfully
copied onto some permanent medium (tape, DVD, removable disk) and transfer off the summit is complete.
Some bulk data contains calibration data—calibration data is processed and kept on-line for use by
instrument components.
2.2.1.9 Quality assurance
DKIST camera systems are capable of providing image streams for quality control purposes. This data is
broadcast on the bulk data transfer mechanism. Other components can subscribe to a quality assurance
data stream to process or view data. Quality assurance data from some instruments may require limited
processing before it is suitable for display. The quality assurance system provides support for this
processing. Quality assurance data streams are also used to deliver context images for target acquisition
and other context view displays. Images broadcast for those purposes must include coordinate
information to satisfy those uses.
Quality assurance data is normally discarded, but components that wish to preserve samples of the quick-
look data may do so by subscribing to the appropriate quick-look data channel.
2.2.1.10 Time
A common time source provides time to all DKIST components. However, different time delivery
methods may be employed to match the time synchronization requirements of specific components. For
example, many components may be capable of operating properly with time information synchronized to
within large fractions of a second—in which case NTP is a suitable time distribution method. For
components requiring time synchronization at the millisecond level a different time distribution protocol
is used. Components requiring sub-millisecond time synchronization must rely on separate
synchronization signals as discussed below.
2.2.1.11 Synchronization signals
Synchronization signals are used where components must coordinate actions with sub-millisecond
accuracy. The synchronization system provides for clock signals to be transmitted through a switched
environment with extremely low latency and high accuracy. While a standard clock source is available in
the synchronization system, any component connected to the synchronization system may act as the
master clock for a set of components.
2.2.2 System Structure
In operation, the DKIST control system is intended to promote a laboratory-style operations model
patterned along the lines of the DST's informal operational model. In the DKIST design this model is
formalized and brought to the forefront. The laboratory-style operational model differs from the
traditional operation modes of large nighttime observatories.
Multiple instruments may be involved in a set of related observations. Such an instrument set may
combine a set of facility instruments (instruments that are built to DKIST standards and maintained by
DKIST staff) with a set of visitor instruments. Different scientific experiments may involve different
sets of instruments or different configurations of instruments within a set. The system structure includes
an Instrument Control System that is responsible for managing all aspects of the instrument set for a
specific experiment.
2.2.2.1 Experiments and Instrument Sets
Experiments are the heart of DKIST operations and the control system is designed with this in mind. A
laboratory style environment provides flexible support to carry out experiments that are likely not
understood or defined at the time the laboratory itself is designed. An experiment undertaken at the
DKIST requires an Instrument Set and a Science Program of Observations.
2.2.2.2 Instruments and Components
A single instrument is a collection of Components operating concurrently, with possibly stringent
synchronization requirements. An instrument set extends the concurrent operation across multiple
instruments. While the typical situation is to have the majority of an instrument's Components located on
the same optical bench, there is no requirement that this always be the case—an instrument may be spread
across multiple benches, share a bench with other instruments, or even include off-site Components. The
physical location of Components is primarily constrained by the requirements of the Experiment.
However, the DKIST control system must be able to associate all the Components in an instrument set
with a specific Experiment and coordinate the operation of those instruments.
Some Components are responsible for coordinating the actions of other Components and for providing
the interface between an instrument and the DKIST ICS. While a typical, simple instrument has one such
Control Component, the ICS provides a Component for communicating with and coordinating those
management components in each instrument.
For facility instruments, all Components share common communications interfaces (both software and
hardware) which simplifies the process of adapting a Component to a new instrument. The common
communications interface is provided to instruments through the DKIST Common Services Framework.
Not all Components will use all available interfaces, of course. Components themselves may have direct
control over other Components and Controllers.
Visitor instruments can be accommodated in this model through the use of wrapper Components that map
control and status between DKIST and the visitor instruments. DKIST provides a standard instrument
template that includes these wrapper Components.
2.2.2.3 Components and Controllers
A Controller may be responsible for controlling a physical entity, such as a motor, often by using a
custom interface. A Controller itself is also a Component.
The fundamental distinction between a Component and a Controller (including a Component being used
as a Controller) is that a Controller implements the DKIST Command/Action/Response protocol. A
Controller accepts configurations, validates them, and then instantiates asynchronous actions to match the
state change defined by the configuration.
Components (and Controllers) have three major interface classes:
Functional interfaces—these interfaces define the functionality of the Component and include
descriptions of those operations that are unique to specific Components.
Life cycle interface—this interface, common to all Components, allows uniform control of the
Component (starting, stopping, resetting, etc.) without regard to a Component's functionality.
Service interfaces—these interfaces are used by Components to access CSF services.
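These three interface classes can be sketched as follows (illustrative names only; the actual life cycle and
service interfaces are defined by the CSF):

    // Sketch of the three interface classes carried by every Component.
    public interface LifeCycle {
        void startup();  // uniform control, independent of functionality
        void shutdown();
        void reset();
    }

    public interface ServiceAccess {
        // Access to CSF services (events, logging, connections, and so on)
        // as provided through the Component's Container.
        Object getService(String serviceName);
    }

    // The functional interface is unique to each Component. For example, a
    // filter-wheel Component might expose:
    public interface FilterWheel extends LifeCycle {
        void select(String filterName);
        String currentFilter();
    }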
2.2.3 Behavior
Experiments drive the system: instruments are composed from Components to meet the needs of the
experiment. This model allows an unprecedented level of flexibility. For example, because instruments
present the same Component interface as standard Components, it is even possible to compose several
instruments into a single, larger, 'instrument' if needed to support a particular experiment.
A scientist conducts an experiment by selecting the required instruments. The ICS checks the availability
of those instruments. The scientist is responsible for ensuring that physical devices associated with each
instrument are in the correct place (for example, in the correct relative order along the light path(s)) for
the experiment. An Experiment Manager is used by the scientist to direct the operation of the instrument
during the experiment, with a scripting language available to support any unusual situations. Once the
experiment is complete, the ICS may release the Components for that instrument set, making them
available for other experiments.
This is the first place where the separation of the functional and technical architectures becomes apparent.
The scientist is not concerned with the computers that are used to implement the instrument set software.
With the exception of constraints imposed by physical devices the OCS is free to assign Components to
arbitrary systems across the network and may even, subject to policy constraints, reassign Components
during the experiment (for load balancing, perhaps, or if there is a partial system failure). Furthermore,
there is no requirement that the physical devices be located in close proximity with each other.
This approach is not without risk. It now becomes the scientist's responsibility to ensure that the devices
associated with an instrument set are properly situated. This is typical of laboratory-style operations,
however. During laboratory-style operations, user error is less controllable. Component standardization is
also an important requirement imposed on the technical architecture by the DKIST software design.
2.3 DKIST TECHNICAL ARCHITECTURE
As with the functional architecture, the DKIST technical architecture is a classic multi-tiered, distributed
software architecture built on top of a communications bus. The design itself is derived from the technical
architecture for the SOLIS project and closely follows the technical architecture found in the ALMA
system design.
2.3.1 DKIST Common Services Framework (CSF)
To reduce development cost and time, as well as to contain operational and maintenance costs, DKIST
provides a common services framework that includes:
A communications bus supporting the information flows described above—DKIST is unique
among observatories, however, in providing an implementation-neutral communications bus
capable of working with a variety of communications middleware packages;
An application framework that provides access to common system services to all components in a
uniform manner including templates for building components within this framework; and
Libraries and APIs upon which system services and components are built.
In particular, the CSF provides the following services:
2.3.1.1 Life cycle management
This service is provided through a standard container/component model that includes the following
features:
Component start, stop, and status monitoring,
Resource allocation and reallocation; and
Configuration control through a configuration database.
2.3.1.2 Component support
CSF containers provide access to the common services for all components, guaranteeing consistent
interfaces with other parts of the DKIST system while freeing the component developers from having to
re-implement common system interfaces and allowing developers to concentrate on their functional
interfaces. Containers allow the separation of the logical behavior of a component from its physical
location.
2.3.1.3 Connection services
Components can use the CSF to connect to other Components by name regardless of the distribution of
the Components across DKIST hardware systems. The CSF connection service is responsible for
verifying the access rights for the Connection, locating the target Component, and providing a connection
handle. The CSF manages the connection and provides support for automatic reconnection on Component
restart. Because the CSF also manages Component life cycles, Components attempting access to other
Components do not have to worry about whether or not the target Component exists—the CSF will
instruct the appropriate Container to start the target Component if it is appropriate to do so.
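As an illustrative sketch (not the actual CSF signatures), the connection service seen by a Component
might look like:

    // Sketch of name-based connection: the caller never needs to know where
    // the target Component is deployed.
    public interface ConnectionService {
        // Verifies access rights, locates (or starts) the target Component,
        // and returns a connection handle to its functional interface.
        Object connect(String componentName);
        void disconnect(String componentName);
    }

A client would then obtain a handle with, say, connectionService.connect("tcs.m1") and use it without
regard to which host is running the M1 Component.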
2.3.1.4 Commands and actions
The Command/Response interface provided by CSF is the basic mechanism for all Component control.
The Command/Response interface is modeled after similar approaches used in Gemini and SOLIS and is
described in detail in TN-0022. Briefly, the act of commanding a system to perform some action is separated
from the response from that system when the action is complete.
2.3.1.5 Events and alarms
A uniform method of producing and handling events and alarms is provided. Events and alarms are
produced using a notification service, which allows the sending of messages while avoiding the
hierarchical structure of a call/return environment. This is particularly useful in real-time environments
where rapid response may be critical. The event system allows clients to subscribe to broad categories of
events as well as individual events.
2.3.1.6 Logging service
The CSF provides a standard N-dimensional logging service, in which logging messages are identified by
at least source, category and level. Log message generation from a Component can be controlled from
external sources.
2.3.1.7 Time Service
A standard time service is provided with sub-millisecond accuracy.
2.3.1.8 Data channels
The CSF provides support for handling several classes of data:
Low-latency data—this is data where time of delivery is bounded by tight constraints. It is
acceptable to drop some data in order to meet latency requirements. A typical example of low-
latency data is quick-look data.
Reliable bulk data—here, the reliable delivery of large data sets has high priority. Data must be
delivered, but the order of delivery and the time it takes to deliver the data are not high
priority. Science data is an example of this type of data.
Operations data—the first two data streams typically involve data that is closely associated with
Experiments. Operations data is typically collected as part of the normal operations of the DKIST
observatory.
It is likely that there will be bandwidth restrictions imposed by the implementation (particularly in the
case of bulk data). Components must ensure that they can operate within these restrictions.
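The three classes and their trade-offs can be summarized in a brief sketch (illustrative names):

    // Sketch of the three CSF data channel classes and their delivery trade-offs.
    public enum DataChannelClass {
        LOW_LATENCY,   // bounded delivery time; data may be dropped (e.g., quick-look)
        RELIABLE_BULK, // delivery guaranteed; order and latency are lower priority (science data)
        OPERATIONS     // data collected during normal observatory operations
    }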
2.3.2 The Role of the Container/Component Model
This section provides a little more detail on the role of the Container/Component model in the
development of DKIST Components. The intent is to provide further insight on how Components can be
designed to fit within both instrument and telescope systems and operate within the DKIST software
framework.
The Container/Component model attempts to separate the functional behavior of a component from the
service aspects required to implement and manage that component. These services are uniform across all
components and can be provided by a common framework. There are several standard implementations of
this model: .NET, EJB, CORBA Component Model (CCM), etc. However, most of these systems are
heavily oriented to a business environment and provide features and require development environments
that are not appropriate for DKIST. Instead, the CSF provides a streamlined Container/Component model
that is suitable for scientific processing and real time control.
2.3.2.1 The container
The container isolates components as much as possible from the underlying implementations of the
communication services, provides a common implementation of the interfaces to those services, and is
responsible for managing the life cycle of components. This allows component developers to focus on the
functionality that their components must provide. The technical framework needed to support this
functionality within DKIST is provided by the container. This separation of the functional and technical
architecture within the model itself also makes it easier to upgrade the communications architecture as
needed to meet changing requirements and technology improvements.
Figure 5. Containers and components
Having containers manage the life cycle of components means that decisions about the deployment of
components can be deferred until runtime. This, in turn, simplifies the process of reconfiguring DKIST
instrumentation.
Another role of the container concerns how it provides services to components. By brokering services to
components, containers can perform functions that are difficult to do on a per component basis. For
example, a container can provide buffering of log (or other) messages produced by multiple components
to reduce network load or to reduce the number of network connections.
Containers are often divided into two classes: tight or porous. A tight container hides the functional
interface to a component by providing its own functional interface which is then mapped into the function
interface for the component. A porous container directly exposes the functional interfaces for the
components that it contains.
Tight containers are useful when constructing a system from existing components where the existing
interfaces are not appropriate for the new system (such as when adapting third party software). Tight
containers also make it easier to exchange one third party software package with another. A classical use
of a tight container is to isolate the use of a relational database management system (RDBMS) from its
implementation. An RDBMS component inside a tight container is more easily replaceable with a
different RDBMS since only the container has to be modified to match the interface to that specific
database. Tight containers are also useful in real-time systems. A tight container can isolate a real-time
component from non-real time network access, etc.
The disadvantages of a tight container are that it must understand more about the components that can be
contained within it, that interfaces are more tightly constrained, and that tight containers are bulkier than
porous containers. DKIST supports both forms of container with a novel approach: services are not
associated with the container, but rather with toolboxes that are unique to each component.
Depending upon the mix of tools loaded by a container into a toolbox, each component may appear to be
within either a tight or a porous container, or somewhere in between. Careful consideration is given to the
interfaces provided by components and services. This ensures, as much as reasonable, that these
interfaces do not expose underlying implementation details. For example, an RDBMS service's interface should not expose database characteristics that are unique to PostgreSQL (or Sybase, or Oracle, etc.).
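As a sketch of such an implementation-neutral interface, a persistent-store service might expose only generic operations, so that no caller can depend on which RDBMS sits behind it (hypothetical interface, not the CSF API):

    // Hypothetical persistent-store service interface: callers see only
    // generic operations, never PostgreSQL- or Oracle-specific features.
    interface PersistentStore {
        void put(String key, byte[] value);
        byte[] get(String key);
        void remove(String key);
    }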
Figure 6. DKIST container organization
2.3.2.2 The component
Because CSF furnishes the container, the bulk of the software development is in the design and
implementation of components. In the DKIST software model there is a one-to-one correspondence between components in the functional architecture and CSF Components. This means that Components must:
Implement the CSF lifecycle interface,
Use the CSF component services interface, and
Define, in conjunction with the DKIST software group, a functional interface.
Components may act as clients and servers within the system and a single Component may, in fact, act as
both. A Component that acts as a server must provide, through its functional interface, support for
accepting commands and producing responses in accordance with the DKIST Command/Action model. A
client Component accesses a server Component by requesting access to the server (by name) via the
component services interface. If the request is granted, a direct connection to the server's functional
interface is provided to the client by the container.
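A minimal server Component satisfying these obligations might look like the following sketch (interface and method names are illustrative, not the actual CSF signatures):

    import java.util.Map;

    // Hypothetical CSF-style life-cycle contract (illustrative).
    interface Lifecycle {
        void init();
        void startup();
        void shutdown();
    }

    // Functional interface defined in conjunction with the DKIST software
    // group; patterned loosely on the Command/Action model.
    interface FilterWheel {
        String submit(Map<String, String> configuration);  // returns a response
    }

    class FilterWheelComponent implements Lifecycle, FilterWheel {
        @Override public void init()     { /* obtain toolbox services */ }
        @Override public void startup()  { /* home the mechanism */ }
        @Override public void shutdown() { /* release the hardware */ }

        // Command phase: validate and accept or reject; the resulting
        // action then runs asynchronously.
        @Override public String submit(Map<String, String> configuration) {
            if (!configuration.containsKey("filter")) return "REJECTED";
            new Thread(() -> moveTo(configuration.get("filter"))).start();
            return "ACCEPTED";
        }

        private void moveTo(String filter) { /* drive hardware, report completion */ }
    }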
Determining the appropriate size for a component is a bit of an art. In general, Components should be
cohesive and loosely coupled. Cohesiveness is the level to which all aspects of a Component's
functionality are related. Functions that are not logically related should not be implemented in the same
Component. A common description of a cohesive Component is one that implements all of its required
behavior and nothing more. Coupling is a measure of the interactions required between Components.
Components are loosely coupled when they depend solely on the defined interfaces between them and not
on any internal characteristics of each other.
2.3.2.3 The manager
The CSF manager is responsible for resource allocation within DKIST. When a client Component
requests access to a server Component, the client's container forwards the request to the manager. If the
request satisfies all resource constraints, the manager contacts the server's container and obtains a
reference to the server from that container. (Note that, depending on the life cycle characteristics of the
server, the server's container may need to create the server Component in order to produce the reference.)
The manager provides the reference to the client's container, which then passes it to the client. Once the
client has the server's reference the manager has no further role in the interaction between the client and
the server.
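From the client's side, the whole exchange reduces to a single call on the component services interface, as in this sketch (hypothetical API and component name):

    import java.util.Map;

    // Hypothetical client-side view of the connection flow: the container
    // forwards the request to the manager, which checks resource
    // constraints and returns a reference to the server's functional
    // interface. Afterwards the manager is out of the loop.
    class ConnectionSketch {
        interface FilterWheel { String submit(Map<String, String> config); }

        interface ComponentServices {
            <T> T connect(String serverName, Class<T> functionalInterface);
        }

        void observe(ComponentServices services) {
            FilterWheel wheel =
                services.connect("ics.filterWheel", FilterWheel.class);
            String response = wheel.submit(Map.of("filter", "G-band"));
            // Subsequent calls go directly to the server's functional interface.
        }
    }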
It is important to note that the manager is an aspect of the technical architecture, not the functional
architecture. Components have no knowledge of the manager—clients simply connect to servers in the
functional architecture. In fact, the name-service behavior of the manager described above can be replaced by some
other mechanism (such as direct peer-to-peer discovery) without affecting the Component implementation
or behavior. Similarly, if the server is handled by a tight Container, the client is unaware that it has been
given a reference to the Container instead of a direct connection to the server Component.
3. OBSERVATORY CONTROL SYSTEM
3.1 OPERATIONAL REQUIREMENTS ON THE DKIST
A key science requirement on the DKIST software system is that the DKIST must support a flexible,
laboratory-style operating environment. The model suggested in the Science Requirements Document is
that provided at the Dunn Solar Telescope (DST). At the DST, solar physicists assemble instruments
on a series of optical benches mounted on a rotating platform (which serves as an image de-rotator). A
few optical benches are dedicated to 'facility' instruments while other benches are available for custom
instruments. The DST is extremely popular with solar physicists. It is not uncommon to have custom
instruments set up with multiple cameras using dichroics, beam-splitters, and reflective slit jaws to
partition the light beam. This arrangement is, by design, quite similar to the physical layout of the DKIST instrument area.
To support this laboratory environment, the main observing platform for DKIST is a rotating coudé lab
supporting multiple optical benches. As with the DST, solar physicists assemble instruments from various
components as needed to meet the requirements of the intended experiment.
It is important to understand that the operation of a solar facility such as DST or DKIST differs
significantly from night-time operation of a modern, astronomical telescope. Instruments for night-time
observing are typically mounted on exposed surfaces (Cassegrain and Nasmyth foci, for example) and
subject to changing gravity vectors, temperature fluctuations, and wind buffeting. This means that these
instruments must be quite sturdy, protected from the elements, and fit within typically small space,
weight, and flexure envelopes. While great care is typically exercised in instrument design to provide
each instrument with the capability to adapt to multiple classes of experiments, some flexibility is
necessarily lost to meet the requirements imposed by the operating environment. Also, night-time
observatories are typically viewing objects too faint to take advantage of multiple cameras and often
require quite large exposure times.
Solar astronomy at DKIST is quite different. Solar physicists are typically viewing surface features on the
Sun that evolve quite rapidly, so short exposures are required. Conversely, the change in features over
time is also important, so an observing run might include many thousands of exposures. The rapid
exposure of a relatively photon-rich object means that some instruments can collect quite a bit of data
quickly. The DKIST first-light instrument, for example, is expected to produce on the order of two
gigabytes of data every second, running continuously for hours on end. The uniform gravity vector,
controlled environment, and large space envelopes provided at coudé make it possible to focus more on
flexibility. Further, the large number of observing stations means that an instrument for an upcoming
experiment may be assembled while other experiments are underway at other stations. The result is
dramatic. In the night-time world, astronomers must locate a facility providing the instrumentation (and
location) that is the best match for the desired experiment. At DKIST (and the DST), solar physicists
adapt the facility to the needs of the experiment, often adding or moving cameras and other components
assembled for the experiment—even as the experiment progresses.
Although matching the flexibility of the DST's laboratory-style operation is a requirement on DKIST, the
DST itself is 40 years old. During that time, the DST control system has evolved in a more or less ad hoc
fashion and is not particularly suitable for adaptation into a modern observatory. The challenge for
DKIST software design is to develop a formal characterization (model) of laboratory operation that can
be integrated into a control system more typical of a modern, world class observatory.
3.2 THE OBSERVING MODEL
In DKIST operation, the foundation for the observing model is the interaction between experiments,
instruments, and the telescope system software. It is the experiment that is important from the Solar
Physicist’s point of view, and this point of view is reflected in the design. Consequently, experiments are a
formal concept within the model.
A DKIST experiment includes a science program of observations and description of an instrument set
that is capable of performing those observations. Additional information associated with every experiment
includes results and a history of the experiment. Every observation consists of a sequence of steps that
describe the behavior of the instrument set (including data processing operations) needed to perform that
observation. Every observation step is a configuration: a set of attributes and a simple command
describing a state change within the instrument set. This use of science programs is typical of modern
observatory practice and matches similar functionality provided at SOLIS, Gemini, VLT, ALMA, and
other observatories.
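The hierarchy just described can be pictured as a simple data model, sketched below (field names are illustrative, not a defined DKIST schema):

    import java.util.List;
    import java.util.Map;

    // Illustrative data model: an experiment holds a science program and an
    // instrument set description; each observation is a sequence of steps;
    // each step is a configuration (attributes plus a simple command).
    record Configuration(String command, Map<String, String> attributes) {}
    record Observation(String name, List<Configuration> steps) {}
    record Experiment(String id,
                      List<String> instrumentSet,
                      List<Observation> scienceProgram) {}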
It is the instrument set that distinguishes DKIST from these other observatories, not the science program
structure. Instead of adapting experiments to fit within the bounds imposed by separate fixed-component
instruments, a DKIST observer can construct an instrument set from the set of available instruments,
often in ways that were never conceived of by the developers of those instruments.
Figure 7. Experiment information flow
Instrument sets consist of one or more instruments. Some components may be purely mechanical with no
associated software (e.g. dichroics).
Each instrument is composed of a fixed set of components in a permanent association. There is some
software that understands those components and their associations. Because of the static nature of these
instruments, this software may be written using specialized interfaces for the individual components.
However, such specialized interfaces introduce maintenance and upgrade issues, so modern observatory
control systems have followed a trend toward ever more standardized interfaces between components.
DKIST exploits this trend by imposing a standard interface on all software components. Thus a DKIST
instrument set differs from a conventional instrument simply by adding a layer above the contained instruments that manages and sequences the individual instruments.
Another subtle difference in the DKIST operational model is that the TCS itself is also composed of components, and these components are available for use within an instrument. An experiment
performing near-limb observations may include an occulter as a component, for example. Also, some
components may be shared among several instruments—most telescope components would need to be
shared this way. The adaptive optics system is an example of a class of component shared by all
instruments. The OCS is responsible for allocating sharable resources and ensuring the proper
coordination among observations using shared resources.
Solar physicists, as part of the design of an experiment, construct the instrument set they need for that
experiment by assembling the requisite components and associating them within an instrument set. Every
instrument set is uniquely identified and the component associations are preserved indefinitely. This
allows an instrument set to be reconstructed (assuming the components still exist) at a later time for
additional experiments. Some instrument sets are expected to be used so frequently that the physical
associations among components are maintained.
Figure 8. Typical instrument set
Once an instrument set has been defined and implemented, its actual control is identical to that of a
conventional instrument. This greatly reduces the cost of integrating an instrument set design into a more
conventional observatory control system structure.
Because the component associations within an instrument set are software constructs, there are few
restrictions on the physical arrangements of components. In addition to allowing telescope subsystems to
function as instrument components, this also allows for more exotic instruments to be composed to meet
the needs of unusual experiments. Multiple instruments can be combined to form a sophisticated
instrument set. As a more extreme example, it is entirely possible to design an instrument set assembled
from components at different observatories, if an experiment requires such coordinated observing.
One of the first steps when performing an experiment within DKIST is to configure an instrument set.
This can be done either by browsing and selecting from a list of existing instrument sets or by
constructing a new instrument set from a catalog of instruments. The components are then configured as
needed for the experiment. While many components can be configured by setting a few parameters,
others, such as a sequencing component, may take more effort, depending on the requirements of the
experiment. Once the instrument set is defined, it is registered with the Observatory Control System
(OCS). The OCS records the description of the instrument set and associates it with the experiment.
A science program using the instrument set can now be developed and added to the experiment.
Observations within the program may be controlled through sequences of observation blocks.
Observation blocks are produced by the OCS for execution by the ICS.
To observe, the OCS executes the science program, submitting observation blocks from each observation
to the ICS. In the case of interactive observing, a graphical user interface for constructing the observation
blocks is provided to the observer. This interface includes context images from the Telescope Acquisition
and AO context viewer, as well as possibly from some instruments, suitable for target selection and
adjustments. The ICS carries out the observations as defined by the sequence of observation blocks that it
receives.
Observation blocks align with the notion of observing modes. Observing at DKIST consists of
sequencing through a set of distinct observing modes: maps, polarization calibrations, telescope calibrations,
etc. The observing mode is made available to all instruments in an instrument set via the Instrument
Control System so each instrument can perform the appropriate actions to complete its task for that type
of observation.
3.3 STRUCTURE
The DKIST Observatory Control System has the following responsibilities:
Management of system resources;
Management of experiments, science programs, observations, and configurations;
Coordination of the Telescope Control System, the Instrument Control System, and the Data Handling System;
Management of DKIST systems during coordinated observing with other observatories;
Essential services for software operations; and
User interfaces for observatory operations.
In general the OCS assumes managerial responsibilities for the DKIST system and directs the activities of
the remaining principal systems. Services that are central to the operation of DKIST software are
provided by the OCS. The OCS acts as the interface between users and the DKIST systems during normal
operation, allowing users to construct science programs and manage instruments for use in an experiment,
monitor and control the experiment, and obtain science data from the experiment.
The OCS also provides basic services to support system maintenance and general system engineering
operations. This includes tools to examine system diagnostic information, handle alarm conditions,
monitor safety systems, and perform routine engineering tasks.
The OCS can also be viewed as organized hierarchically into broad functional categories: application
support, experiment support, and resource management. The top levels of these categories are shown in
Figure 9.
Figure 9. OCS functional categories
The application services provided by the OCS include the event, alarm, log, and persistent store services.
The application framework includes APIs and libraries as well as a general framework for building and
deploying DKIST applications. The TCS, ICS and DHS (as well as the OCS itself) are resources that are
managed by the OCS. The OCS provides for direct operator control of these resources as needed.
However, the normal operational model is to allow each experiment as much control as practical over the resources allocated to it.
4. DATA HANDLING SYSTEM
The Data Handling System is responsible for
Bulk data transport;
Quick look;
Data storage, retrieval, and distribution; and
Data reduction pipelines support.
The DHS manages the flow of scientific data collected by DKIST instruments. The data reduction
pipelines support is a potential upgrade that is supported by the initial DHS design.
Because of the performance requirements placed on the DHS, parts of its functionality are distributed
across other components. For example, instrument camera systems perform any data processing required
to reduce data output to meet bandwidth restrictions imposed by the implementation of the bulk data
transport. Similarly, instrument component developers are responsible for providing the processing steps
required to convert raw quick-look data into meaningful quality-control information.
4.1 BULK DATA TRANSPORT
The Bulk Data Transport (BDT) system is responsible for managing the large-volume data streams within
the DKIST system. These are data streams whose rates are capable of surpassing the typical bandwidth of
traditional networks or whose rates are close enough to the limits of those networks that they might
interfere with the information flows that would also use those networks. For this reason, the BDT is a hardware-driven system that operates with one or more of its own specialized networks.
It is helpful to examine the players in bulk data transport. These are the data sources and sinks, the data streams, and the data routing. Many entities function as both a data source and a data sink, depending on
the type of access.
4.1.1 Data sources
Data sources can produce data for transport. Sources can be classified as:
Volatile—a volatile source can produce data but has no capability for reproducing that same data.
Examples include DKIST cameras and some data processing applications.
Transitory—a transitory source can produce data and reproduce that same data for a finite period
of time. Transitory sources include data buffers and temporary files. Transitory sources are
typically restricted by both time and storage capacity.
Permanent—a permanent source can produce data and then reproduce that same data an arbitrary
number of times over an arbitrarily long period. Examples include databases, archives and
distribution media. Permanent sources are typically restricted only by storage capacity.
4.1.2 Data sinks
Data sinks receive data from transport. Sinks can be classified as:
Volatile—the sink can receive data, but cannot retain it for any significant time. Volatile sinks
include video displays and some data processing applications.
Transitory—a transitory sink can receive data and retain it for a finite amount of time. Examples
include data buffers and temporary files. Transitory sinks are typically restricted by storage
capacity and time.
Permanent—a permanent sink can receive and retain an arbitrary amount of data for an arbitrary
amount of time. Permanent sinks include databases, archives, and distribution media. These sinks
are typically restricted only by storage capacity. Note that some permanent sinks, such as
distribution media, may act as transitory sources since they can be removed from the facility.
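These classifications suggest a common tagging of transport endpoints, sketched below (hypothetical interfaces, not the DHS design):

    // Sketch of how BDT endpoints might advertise their persistence class.
    enum Persistence { VOLATILE, TRANSITORY, PERMANENT }

    interface DataSource {
        Persistence persistence();
        byte[] nextFrame();               // produce data for transport
    }

    interface DataSink {
        Persistence persistence();
        void receiveFrame(byte[] frame);  // accept data from transport
    }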
4.1.3 Data streams
The BDT is responsible for managing the flows of the following types of data streams:
Science data—these are streams of scientifically useful data whose originating sources are the
DKIST science cameras. The information in these streams forms the basis of the DKIST data
product. Information on a science data stream is always associated with some experiment.
Quality assurance data—these are streams of data that are intended for quality-control checks.
Quality assurance streams may originate at the science camera, the Telescope Acquisition
System, and the AO context imager. They may also be extracted from science data streams.
Typically, quick-look sinks are volatile. Quality assurance data streams are built directly on the
BDT.
Engineering telemetry—these are streams of engineering data originating from arbitrary DKIST
systems. These streams can be split into two categories: continuous and burst. DKIST
distinguishes these types of telemetry on their bandwidth requirement: continuous streams are
capable of being transmitted on conventional networks while burst streams require bulk-data
transport. For this reason, the BDT may not need to manage telemetry streams. The exact data rates are currently unknown; it may be that all telemetry streams fall into the continuous category, even those that provide data sporadically. Telemetry streams are mentioned here because of the possibility that some of them may end up being the responsibility of the BDT.
Figure 10. Routing data using the Bulk Data Transport
4.1.4 Data routing
The Bulk Data Transport system is responsible for managing the flow of streams between sources and
sinks. It is highly likely to be restricted by bandwidth limitations of the physical transport mechanisms,
including network, disks, and backplanes. Some sources that one would normally think of as volatile may
have to be implemented as transitory to throttle data production to meet the physical limitations of the
BDT.
The DHS bulk data transport system avoids hardwired connections as much as possible. Any quality
assurance stream, for example, should be capable of being routed from any quality assurance source to
any quality assurance sink and possibly to more than one sink. Multiple simultaneous routing of data
streams is possible. The only restrictions on routing are those imposed by physical limitations of the
transport mechanisms.
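In outline, such routing amounts to a fan-out from each source to an arbitrary set of sinks, as in this sketch (hypothetical API):

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of flexible stream routing: one stream fans out to any number
    // of sinks, limited only by the physical transport.
    class StreamRouter {
        interface Sink { void receiveFrame(byte[] frame); }

        private final List<Sink> sinks = new ArrayList<>();

        void addRoute(Sink sink)    { sinks.add(sink); }
        void removeRoute(Sink sink) { sinks.remove(sink); }

        // Deliver a frame from the source to every currently routed sink.
        void deliver(byte[] frame) {
            for (Sink sink : sinks) sink.receiveFrame(frame);
        }
    }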
4.2 CAMERA LINES
Because of the large volume of data produced by many of the science cameras attached to DKIST
instruments, the DHS organizes the processing of data into separate camera lines, one for each camera.
The BDT data streams, processing nodes, and data stores associated with one camera line are physically
isolated from those found in other camera lines. This also allows the facility to scale processing by
adding additional camera lines.
4.3 QUALITY ASSURANCE
The Quality Assurance system is responsible for delivering and displaying near-real-time images from
facility cameras. The quality assurance facility is layered on top of the Bulk Data Transport system, which
accepts quality assurance data streams from various sources and routes them to quality assurance
application software.
There are five major uses of quality assurance data:
Target acquisition—images from the acquisition camera system, the AO context imager, or from
other, possibly remote, cameras are displayed for an operator or observer. These images must
have solar coordinate system information associated with them to allow for target selection based
on feature selection from the image.
Context viewing—images from one instrument's quality assurance stream may be used to view
context for another imager. These images must have solar coordinate system information
associated with them to permit alignment of the images with images from the other instrument.
Quality assurance—images from a camera, most likely used as part of an instrument set, are used
to verify that the proper data is being collected at sufficient quality. Simple calibration
transformations may be applied to these images prior to delivery at the display using the data
processing pipeline support.
Instrument preparation—images from one or more cameras are displayed in near-real-time during instrument preparation to assist in activities such as alignments and focus adjustments. Simple calibration transformations may be applied to the images but must keep pace with the image delivery cadence without incurring significant latency penalties.
Safety—images from the acquisition camera system are used by the operator after opening the
enclosure shutter to verify alignment of the mount with the Sun prior to opening the M1 cover.
These same images are used to check for vignetting and other potential problems.
In the last three cases, quality assurance provides for display-level operations such as histogram cuts, color mapping, etc.
The need for some quality assurance to operate at near-real-time levels means that the number of bulk
data junctions (internal sink/source connections) needs to be kept to a minimum when routing quality
assurance data streams. Furthermore, solar coordinate system information is included as part of the image
information contained in a quality assurance stream to avoid synchronization issues when processing
images.
4.4 DATA STORAGE, RETRIEVAL AND DISTRIBUTION
When science data is collected by a DKIST instrument, it is sent out over the bulk data transport system. The Data Handling System provides support for the following:
Transitory storage of all science data and the associated header information. This data is
organized by experiment, observation, frame group, and frame. The length of time that this data
is retained is dependent upon the capacity of the science data store and has not yet been
determined. However, the minimum time is sufficient to allow the transfer of data onto
distribution media or directly to off-site archives.
Transitory storage for calibration data, organized by instrument and camera. The length of time
of this storage has not yet been determined.
Routing of science and calibration data and the associated header information from transitory
storage to:
o Distribution media—the media has not yet been determined, but the size of DKIST data products impacts the choices for distribution media,
o Data processing pipelines, and
o A permanent archive, such as the NSO Digital Library.
4.5 DATA REDUCTION PIPELINES SUPPORT
The Data Handling System must provide support for transforming data from various originating data
sources prior to their delivery to terminating data sinks. The key uses of this data processing are for
quality assurance, target acquisition, and reduction of the data volume to be delivered off-site. The first
two of these activities operate on quality assurance data streams. The transformations are implemented
through a series of applications that function as both data sinks and sources, each receiving a stream of
semi-processed data, applying some transformation algorithm to that data, and producing more refined
data. The exact algorithms are outside the scope of the Data Handling System.
The data reduction pipelines are implemented with applications connected through the Bulk Data
Transport and isolated by camera line. The generalized routing capability of the BDT, along with its inherent distribution capability, allows highly flexible connections of the applications within the pipelines, while the separation into camera lines allows for simplified scalability.
5. TELESCOPE CONTROL SYSTEM
The Telescope Control System (TCS) controls all DKIST mechanical systems that are involved in
delivering the light beam to scientific instruments. Its major tasks are:
Acquire and track the target with the enclosure, the telescope mount assembly and the coudé
rotator platform;
Direct the light beam through the optics to the instrument;
Manage mechanisms that can adjust beam quality to deliver the highest possible quality light feed
to the instrument; and
Control the thermal state of the telescope.
The TCS performs the majority of these tasks by directing the operation of various TCS subsystems. A
pointing kernel is operated internally to provide position and motion parameters to these subsystems. The
pointing kernel itself implements a series of virtual telescopes to produce this information.
5.1 FUNCTIONALITY OF THE TCS
The principal component of the TCS is the pointing kernel. This component continuously calculates the
solar ephemeris and applies it to a model of the telescope called the virtual telescope. This model
performs the necessary transformation to describe the optical path of the telescope to the active focal
plane. There may be as many virtual telescopes running as there are valid optical paths in the telescope;
switching from one optical path to another is as simple as reading the output of a different virtual telescope.
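In code, this arrangement might reduce to something like the following sketch (names and interfaces are illustrative, not the pointing-kernel design):

    import java.util.Map;

    // Sketch of the virtual-telescope idea: each valid optical path has its
    // own model, and switching paths simply means reading a different
    // model's output.
    class PointingKernel {
        interface VirtualTelescope {
            // Demand position (e.g., alt/az plus rotator angle) at a time.
            double[] demand(double timestamp);
        }

        private final Map<String, VirtualTelescope> paths;
        private String activePath;

        PointingKernel(Map<String, VirtualTelescope> paths, String initial) {
            this.paths = paths;
            this.activePath = initial;
        }

        void selectPath(String name) { activePath = name; }

        double[] currentDemand(double timestamp) {
            return paths.get(activePath).demand(timestamp);
        }
    }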
5.1.1 Wavefront Correction
The TCS manages the state of the telescope wavefront correction. Since the DKIST telescope does not
use an independent guider, it requires tip-tilt information from the wavefront correction systems to correct
for low-frequency non-recurring pointing errors. By using this mechanism, the telescope becomes
coupled to the solar feature under observation, and thus is uncoupled from any absolute reference frame
as the feature moves about the solar surface.
Figure 11. Wavefront correction control loops
Higher-order wavefront correction information passes from the wavefront corrections systems directly to
the appropriate correcting mechanism. The TCS determines the appropriate wavefront correcting state for
the telescope and commands each subsystem into the proper state for its part of the correction. Thus,
when running open loop, the TCS commands the M1 and M2 systems to use their respective look-up
tables to determine their positions. Alternatively, when running closed-loop, M1 is commanded to accept
Zernike offload information from the wavefront correction systems for figure information.
5.1.2 Coordinate Systems
The position of the sun may be given in any number of coordinate systems. The topocentric coordinate
system describes the position of the sun above the local horizon in terms of altitude and azimuth. The
mount and enclosure each require a modified topocentric coordinate to operate; these positions are
adjusted for the repeatable errors of the mount and enclosure. The modified topocentric position is also
adjusted for atmospheric refraction and diffraction effects.
The celestial coordinate system uses the right ascension and declination positions found in stellar
telescopes. This system is useful for pointing to stars and other objects fixed to the celestial sphere, but
it is not needed for objects like the sun that move along the ecliptic. However, the position of the sun in RA and
declination is sometimes of interest to observers.
The ecliptic coordinate system is defined by the motion of the earth about the sun. By definition the sun does not deviate in latitude in this system; its center always maintains a position on the ecliptic equator. The ecliptic coordinate system is not generally interesting in itself; it is only a step along the transformation path.
The heliocentric coordinate system defines the two-dimensional surface of the sun. It uses the solar
rotation axis as its definition of north and south, with coordinates given as angle from solar north and
fraction of the solar radius. By definition, the solar limb is always at 1.0 radius, giving this coordinate
system a varying scale at the telescope foci. Heliocentric coordinates are extensively used by solar
physicists to identify transitory solar phenomena.
The helioprojective coordinate system is similar to the heliocentric system, but allows positions off the
solar disk.
The heliographic coordinate system defines the three-dimensional surface of the sun. This coordinate system accounts for the tip of the solar rotation axis toward or away from the earth, and it considers the prime meridian to always face the earth. An alternate version is the
Carrington coordinate system, where the prime meridian rotates at the average solar rotation rate.
Heliographic coordinates are extremely useful when tracking solar features as the sun rotates over several
days.
The TCS must provide sufficient coordinate system information to support target acquisition, context
image alignments, and world coordinate system calculations.
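One way to organize these systems in software is as a chain of transformations, sketched below (the coordinate systems are those named above; the interface is illustrative):

    // Sketch of a transformation chain through the coordinate systems
    // described above; composing steps yields, for example, a
    // heliographic-to-topocentric pointing transformation.
    enum SolarFrame {
        TOPOCENTRIC, CELESTIAL, ECLIPTIC,
        HELIOCENTRIC, HELIOPROJECTIVE, HELIOGRAPHIC
    }

    interface FrameTransform {
        SolarFrame from();
        SolarFrame to();
        double[] apply(double[] position, double timestamp);
    }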
5.2 TCS SUBSYSTEMS
Most operations carried out by the TCS are actually performed by other systems under its control. These subsystems know nothing about the overall state of the DKIST; they only know their own
current and demand states. The coordination of all these states is the responsibility of the TCS.
For instance, when the telescope needs to be slewed to a new location, the TCS simultaneously places the
mount and enclosure control systems into their high-speed, slewing state. The TCS also discontinues the
current wavefront correction state so that the M1, M2, and feed optics control systems do not receive
useless wavefront correction data. Other activities, such as blocking light from the instruments, may also
be performed by the TCS upon its subsystems. Upon reaching the target position, the TCS places the
mount and enclosure into a tracking state and restarts the prior wavefront correction model.
Figure 12. Telescope subsystems
5.2.1 Mount Control System
The Mount Control System (MCS) controls the telescope mount assembly, including the altitude,
azimuth, and coudé rotator drive motors, mirror covers, cable wrap assemblies, and other components
associated with the mount operations. The TCS sends trajectory data to the MCS at a 20 Hz (or better)
rate.
5.2.2 Enclosure Control System
The Enclosure Control System (ECS) operates the DKIST enclosure, which includes the carousel, shutter,
aperture stop, ventilation, cooling, and other auxiliary equipment attached to or associated with the
enclosure. The TCS sends trajectory data to the ECS at a 1 Hz (or better) rate.
5.2.3 M1 Control System
The M1 Control System (M1CS) is responsible for the primary mirror's performance, including its figure and thermal equilibrium. The TCS provides temperature profiles and wavefront control data to the M1CS.
5.2.4 M2 Control System
The M2 Control System (M2CS) controls the secondary mirror, including its tip-tilt-focus hexapod
assembly, fast tip-tilt mechanism, and thermal equilibrium. It also controls the nearby heat stop and Lyot
stop assemblies. The TCS provides pointing, temperature profile, and wavefront control data to the M2CS.
5.2.5 Feed Optics Control System
The Feed Optics Control System (FOCS) operates the optical components required to bring an image to
the instruments. The components associated with the FOCS are the quasi-static alignment mirrors and the
thermal control system for the feed optics mirrors.
5.2.6 Wavefront Correction Control System
The Wavefront Correction Control System (WCCS) delivers corrected images to the instruments. The
components associated with the WCCS are the adaptive optics system, the active optics system, the
correlation tracker, and the image context viewer. The TCS determines how the WCCS distributes
wavefront control data to the other subsystems.
5.2.7 Polarization Analysis and Calibration Control System
The Polarization Analysis and Calibration Control System (PACCS) controls the operation of the various
optical elements forming the Polarization Analysis and Calibration (PAC) module at the Gregorian focus.
These include spinning polarizers, waveplates, and filters used for polarization observations and
calibration of the instruments and the AO system.
5.2.8 Acquisition Control System
The Acquisition Control System (ACS) provides an image of the entire solar disk from an auxiliary
telescope located on the optical support system of the mount. The acquisition camera allows the operator
to locate and move to solar features identified on the acquisition system.
6. INSTRUMENT CONTROL SYSTEM
The Instrument Control System (ICS) manages the simultaneous operation of a set of instruments. As part
of this task the ICS must coordinate with the OCS and the TCS to ensure the configuration of the
observatory matches the expected configuration of an observation. The OCS provides observation blocks
to the ICS for execution by an instrument set and requests update information on instrument performance.
TCS components may be allocated as components in one or more instruments. The ICS and TCS interact
to ensure that such shared resources are used properly, that the TCS is functioning appropriately given the
current active instrument set, and that TCS status information relevant to an active experiment is collected
and associated with that experiment.
The ICS is composed of the following elements:
The Observation Management System that coordinates instrument activities;
The Instrument Sequencers that control activities within an instrument;
The set of standard instrument components for development of instrument systems;
The synchronization interface to the PAC spinning waveplates; and
The camera interfaces.
6.1 OBSERVATION MANAGEMENT SYSTEM
The ICS Observation Management System (OMS) manages instruments as part of an ongoing
experiment. It is the role of the OMS to interface with the OCS, the instruments involved in an
experiment, any parasitic instruments taking advantage of the configuration, and the synchrobus
coordination of cameras and rotating modulators.
The OMS is a control layer between the OCS and its experiment management and the various instrument
control systems and their operation of the mechanisms. The OMS is responsible for organizing the instruments required by an OCS observation, sending mode change information to them, and collecting and returning their completion states. Parasitic instruments not associated with the ongoing experiment may register with the OMS to participate in the data collection. The OMS issues a temporary experiment ID to these instruments and notifies them of any mode changes; however, their completion or current status is not recorded or used in the experiment.
The OMS also provides an engineering user interface for the ICS. This may be used during normal
operations for engineering diagnostics or repair. It is also used during instrument development to emulate
the OCS activities.
6.2 INSTRUMENT SEQUENCER
The ICS Instrument Sequencer (IS) is the top-level controller on every instrument control system. This
controller provides the instrument developer with the interfaces to the OMS, the script library database,
the mechanism controller, and the detector controller. The IS responds to mode changes from the OMS by
loading the appropriate observation script from the script library, determining the input properties for the
script (i.e., cameras in use, iterations, step size, etc.), and sequencing the instrument through the steps
prescribed by the script. The IS also handles communications between the mechanism controller and its
hardware mechanisms and the detector controller and its cameras.
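The mode-change handling described above might be sketched as follows (script-library and method names are hypothetical):

    import java.util.Map;

    // Sketch of the Instrument Sequencer's response to an OMS mode change:
    // load the observation script for the mode, bind its input properties,
    // and step the instrument through it.
    class InstrumentSequencer {
        interface Script {
            int length();
            void step(int index, Map<String, String> properties);
        }
        interface ScriptLibrary { Script load(String mode); }

        private final ScriptLibrary library;

        InstrumentSequencer(ScriptLibrary library) { this.library = library; }

        void onModeChange(String mode, Map<String, String> properties) {
            Script script = library.load(mode);   // e.g., "map", "telescopeCal"
            for (int i = 0; i < script.length(); i++) {
                script.step(i, properties);       // cameras in use, iterations, step size...
            }
        }
    }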
6.3 MECHANISM CONTROLLER
Instrument developers are responsible for the control of the mechanical and optical elements within the
instrument. DKIST supports a standards-based approach to mechanism control by writing and maintaining
control software for hardware on the approved standards list. Instrument developers are encouraged to use
supported hardware controllers in their designs.
Mechanism software consists of two basic parts: the controller software that implements the basic functionality of the mechanism, and the connection software that communicates with the hardware.
Controller software implements functionality for particular types of mechanisms; there are controllers for
discrete motion control (e.g., filter slides), continuous motion control (e.g., slits, camera stages), analog
and digital I/O, and time base, to name a few. Connection software provides the communications link
between the controller and the hardware. Examples include network communications via TCP/IP, PCI bus
backplane, and serial port communications. On top of these communications transport mechanisms is the
specific protocol for interacting with the hardware device; these include motion control commands for
particular vendors (Galil, Newport, Delta Tau), protocol buffers, device drivers, etc.
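The two-part division might look like the following in outline (class names are hypothetical; the protocol string is a stand-in for a vendor-specific command set):

    // Sketch of the controller/connection split: a Connection knows how to
    // reach the hardware (TCP/IP, serial, backplane); a controller
    // implements mechanism semantics on top of whichever Connection it is
    // given.
    interface Connection {
        void send(String command);   // vendor-specific protocol lives here
        String receive();
    }

    // Discrete motion controller (e.g., a filter slide) over any Connection.
    class DiscreteMotionController {
        private final Connection connection;

        DiscreteMotionController(Connection connection) {
            this.connection = connection;
        }

        void moveToPosition(int index) {
            connection.send("MOVE " + index);
            String reply = connection.receive();
            if (!"OK".equals(reply)) throw new IllegalStateException(reply);
        }
    }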
6.4 DETECTOR CONTROLLER
The ICS provides the template for the instrument’s detector controller. This controller operates and
coordinates the camera systems for the instrument. Some of its tasks are the interface to the instrument
sequencer, the header collection support, and the communications with the various virtual cameras
controlled by the instrument. Individual instruments are responsible for providing local coordinate
system information needed for context viewing purposes and for world coordinate system parameters
reflecting the specifics of that instrument.
6.5 TRADS
The Time Reference and Distribution System (TRADS) is a network for synchronizing cameras,
mechanisms, and the rotating waveplate modulators. Although not a part of the ICS, TRADS nevertheless
is an important part of operating the instruments. The ICS will deliver the time base controllers that allow
instruments to connect to TRADS and obtain an accurate measure of the current phase of the
synchronization system.
6.6 STANDARD INSTRUMENT
DKIST instruments need mechanical and electronic hardware to operate their various devices. The
DKIST standard instrument attempts to standardize the choices for these components and provide a set of
software controllers and device drivers to the instrument developers. Most hardware choices fall into one
of the following categories:
Motor controllers—these devices move stepper and servo motors. They control linear and
rotating mechanisms, with and without encoders. They are used for positioning gratings, filters,
and stages.
Digital I/O—these read and write digital data. They are used to sense switches and read high-
speed camera data.
Analog I/O—these read and write analog data. They are used for temperature, pressure, and force
transducers, and voltage controlled servo loops.
Serial I/O—these read and write serial data. They are used to read more advanced I/O systems,
such as weather stations, temperature detectors, and other standalone systems (i.e., UPS stations,
PLCs).
Network I/O—these read and write data across the network. They are used to control remote
equipment such as networked motor controllers, other computer systems, and reverse terminal
servers.
The standard instrument is composed of a large number of controllers as part of the DKIST Base
application support framework, which itself is layered on top of CSF. These software components add
specific functionality to the DKIST control model. The base class of the DKIST Controller is extended to
include MotorController and DioController classes, among others. These new classes know how motor
controllers and digital I/O boards operate. For motor controllers, the standard instrument adds knowledge
about motor channels, ranges and limits, encoders, user units of measurement, acceleration and deceleration parameters, and many other features of motor controllers. The same can be said for the digital I/O
controller: the standard instrument adds knowledge about latches, read-only and write-only bits, and high-
speed buffers.
These subclasses can be further subclassed to add even more specific functionality. The motor controller class has been extended with a rate motor controller, which runs motors at varying rates, and a discrete motor controller, which gives names to specific positions in the motor travel.
Functionality can also be targeted for specific implementations. Instruments with a multi-axis mechanism,
such as a camera stage or grating mount, can subclass the MotorController class to build unique
translation or rotation conversions into the controller, or to electronically gear two or more motors, or any
other operation that is unique to that mechanism.
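In outline, this hierarchy might be sketched as follows (the class names echo the MotorController example above; the bodies are illustrative):

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the standard-instrument class hierarchy described above.
    class Controller { /* base DKIST controller behavior (illustrative) */ }

    class MotorController extends Controller {
        void moveTo(double userUnits) { /* apply ranges, limits, encoder scale */ }
    }

    // Adds named positions along the motor travel, as described above.
    class DiscreteMotorController extends MotorController {
        private final Map<String, Double> namedPositions = new HashMap<>();

        void definePosition(String name, double userUnits) {
            namedPositions.put(name, userUnits);
        }

        void moveTo(String name) {
            moveTo(namedPositions.get(name));
        }
    }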