HIGH RATE DATA BROADCAST SYSTEM FOR NPP IN-SITU APPLICATIONS

Item Type text; Proceedings

Authors Mirchandani, Chandru; Fisher, David; Brentzel, Kelvin; Coronado, Patrick; Harris, Carol

Publisher International Foundation for Telemetering

Journal International Telemetering Conference Proceedings

Rights Copyright © International Foundation for Telemetering

Link to Item http://hdl.handle.net/10150/605345

HIGH RATE DATA BROADCAST SYSTEM FOR NPP IN-SITU APPLICATIONS

Chandru Mirchandani, David Fisher, Kelvin Brentzel, Patrick Coronado & Carol Harris

NASA Goddard Space Flight Center, Greenbelt, Maryland 20771

ABSTRACT

A vital part of all satellite development is the ground system that will be used to process the satellite downlink. To support the High-Rate Data (HRD) downlink of the satellites planned for launch by the NPOESS Preparatory Project (NPP), GSFC has developed a Single Chassis System (SCS) solution, which provides data processing from RF to data product using leading-edge hardware components. The SCS is a hardware solution that meets and exceeds the NPP downlink rate requirement. In real time, the SCS autonomously processes and delivers Direct Broadcast data from NPP satellites. An objective of the HRD System development was that it be an integrated processing element housed in a single commercial off-the-shelf PC, capable of exceeding the near-term requirements of NPP and moving towards the mid- and far-term needs of NASA.

INTRODUCTION

Currently, there are no processing elements available that ingest and process telemetry data at high rates in real time. To fill this vacuum, telemetry ASIC development by the NASA GSFC Microelectronics and Signal Processing Branch has to date resulted in faster solutions than any other chips identified by the research for this paper. The application of these technologies, together with standardized telemetry formats, has made it possible to build systems that provide high performance at low cost with a short development cycle. The SCS validates NASA’s push towards faster, better and cheaper.

The SCS system architecture is based on the Peripheral Component Interconnect (PCI) bus and VLSI Application-Specific Integrated Circuits (ASICs). The heart of the system is a set of two cards: the High Rate Digital Receiver (HRDR) Card and the Return Link Processor Card (RLPC). The ASICs on the HRDR perform analog-to-digital translation and demodulation, and the ASICs on the RLPC perform frame synchronization, bit-transition density decoding, cyclic redundancy code (CRC) error checking, Reed-Solomon error detection/correction, data unit sorting, packet extraction, annotation and other CCSDS service processing. The SCS can autonomously acquire, decommutate, correct and de-packetize Direct Broadcast data from NPP satellites in real time, while providing remotely accessible monitoring test points and indicators. The system generates data quality metadata on a pass basis (Level-0) and converts the raw instrument data file into a compliant format (Level-0) in real time.

The HRDR demodulates the received RF signal, and the RLPC level-zero processes the demodulated data stream at sustained rates of up to and greater than 100 Mbps per channel. The data output from the HRDR and RLPC is a stream of packets that are processed by the software element on the SCS and output as Production Data Sets (PDS). The SCS can be configured as a two-channel system, with one stream for weather data and the other for conventional CCSDS telemetry. The weather data “return link” channel ingests weather satellite telemetry and performs frame synchronization with CRC and/or bit transition decoding. The conventional CCSDS “return link” channel ingests CCSDS satellite telemetry and performs Reed-Solomon error detection/correction and CCSDS AOS Service/Conventional Telemetry processing in real time. This system has been prototyped and demonstrates the next-generation autonomous, high-performance telemetry acquisition and level zero processing system at one fifth the cost and size of current COTS-based systems. This paper also describes the concept of autonomous system operations, which allows non-expert users to acquire and process data. The prototype project was to validate the technology and commercialize the HRDR and ASICs, allowing NASA to procure low-cost, high-performance ground acquisition systems for future space and earth science missions. The SCS will be validated using the TERRA, AQUA and POES satellites, demonstrating the real-time acquisition of data downlinks at 30 Msymbols/sec (15 Mbps).

The SCS combines the entire framework from data acquisition through to level 0 data product generation into a single desktop box. This includes an infrastructure that allows for future performance and functional upgrades. The SCS is based on a commercially available platform and a standard PCI bus. The principal elements are the HRDR Card and the RLPC, and each card has associated software. The HRDR uses specialized processing techniques implemented in VLSI CMOS ASICs to provide receipt of modulated RF signals at rates up to 150 Mbps and beyond. The RLPC includes functions for frame synchronization, Reed-Solomon error correction, and service processing at rates over 300 Mbps. The service processing includes all CCSDS-defined services (VCDU, Insert, VCA, bit stream, path packet, and encapsulation) at rates over 200,000 packets per second. These functions were implemented in high-density ‘system-in-a-chip’ ASIC components.

The software components of the system are made up of two elements. The first, referred to as the Internal Control Software, is the control and monitoring system, which configures the two PCI cards prior to a data downlink session, enables the ingest and processing of telemetry, and monitors the progress of the processing. The second, referred to as the Level Zero Processing System Software, manipulates the stored frame and/or packet files output from the RLPC and produces data products that adhere to the mission control documents. In the case of Terra, the data products are PDS files, which consist of separate files sorted by Application Identifier (APID) in the case of packet data, or by Virtual Channel Identifier in the case of CCSDS frames. The weather data files produced for the end users will follow the format specified by the respective control documents.
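To make the packet-level sorting described above concrete, the short C sketch below parses the 6-byte CCSDS space packet primary header to recover the APID, sequence count and packet length. The field layout is the standard CCSDS packet telemetry format; the function name and surrounding framing are illustrative assumptions and not part of the SCS software.

#include <stdint.h>
#include <stddef.h>

/* Fields of the 6-byte CCSDS space packet primary header. */
typedef struct {
    unsigned version;       /* 3 bits */
    unsigned type;          /* 1 bit: 0 = telemetry, 1 = telecommand */
    unsigned sec_hdr_flag;  /* 1 bit */
    unsigned apid;          /* 11 bits: Application Process Identifier */
    unsigned seq_flags;     /* 2 bits */
    unsigned seq_count;     /* 14 bits */
    size_t   packet_len;    /* total packet length in bytes */
} ccsds_primary_hdr;

/* Decode the primary header from a raw packet buffer (illustrative only).
 * Returns 0 on success, -1 if the buffer is too short. */
int ccsds_parse_primary(const uint8_t *buf, size_t len, ccsds_primary_hdr *h)
{
    if (len < 6)
        return -1;

    uint16_t w0 = (uint16_t)(buf[0] << 8 | buf[1]);
    uint16_t w1 = (uint16_t)(buf[2] << 8 | buf[3]);
    uint16_t w2 = (uint16_t)(buf[4] << 8 | buf[5]);

    h->version      = (w0 >> 13) & 0x07;
    h->type         = (w0 >> 12) & 0x01;
    h->sec_hdr_flag = (w0 >> 11) & 0x01;
    h->apid         =  w0        & 0x07FF;
    h->seq_flags    = (w1 >> 14) & 0x03;
    h->seq_count    =  w1        & 0x3FFF;
    /* The length field holds (data field length - 1); add the 6 header bytes. */
    h->packet_len   = (size_t)w2 + 1 + 6;
    return 0;
}

An APID recovered this way is the identifier the level-zero software can use to route each packet into its own PDS file.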

NPOESS APPLICATION

The SCS system used in an NPOESS-type application is shown in Figure 1. The science data is acquired by an orbiting satellite, which downlinks to a ground-based antenna. The received analog signal is then transmitted to the SCS, which is composed of a hardware component and a software component. The main elements of the SCS are (1) the High Rate Digital Receiver Subsystem, (2) the Return Link Processor Subsystem, (3) the Internal Control Subsystem, and (4) the Level Zero Data Products Subsystem.

Figure 1. High Rate Data Downlink with SCS as Ground System

SYSTEM DATA FLOWS

Figure 2 shows the architecture of the proposed SCS, with the HRDR ingesting a down-converted QPSK telemetry signal. The figure shows the extracted I and Q data streams transferred to separate RLPCs in the system. The extracted data units, frames or packets, are written into a standard file system on the SCSI disks, where they are processed into PDS products for transmittal to the end user.

Figure 2. SCS Processing QPSK Data

Figure 3. SCS Processing BPSK Data

Recent advances in signal processing have made it possible to accomplish the demodulation process, with all of its sub-functions, using parallel processing in which the clock rate of the processor is less than the data rate. This requires serial-to-parallel and parallel-to-serial converters operating at the A/D clock rate. The output of the receiver is a digital baseband output in the form of differential data and clock, which feeds directly into the front end of the telemetry processing equipment, in this case the RLPC. Figures 3 and 4 show the data flow for BPSK and for the special case of the AQUA spacecraft, respectively; in the BPSK case the two RLPCs can operate either in a redundant configuration or with selective separation of the telemetry. The AQUA direct broadcast data is re-interleaved into a single 15 Mbps data stream to be processed by the RLPC.

Figure 4. SCS Processing AQUA Direct Broadcast QPSK Data
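To make the parallel-processing idea in the preceding paragraph concrete, the following C sketch shows the serial-to-parallel step in software: samples arriving at the A/D rate are distributed round-robin across a number of lanes, so each lane only needs to be processed at a fraction of the input rate. This is purely illustrative; the HRDR performs the equivalent operation inside its demodulator ASIC, and the lane count here is an assumption.

#include <stdint.h>
#include <stddef.h>

#define LANES    8      /* illustrative lane count; the ASIC width may differ */
#define LANE_CAP 1024   /* per-lane buffer capacity for this sketch */

/* Distribute a serial sample stream round-robin across LANES parallel lanes.
 * Each lane then carries samples at (input rate / LANES), which is what lets
 * the per-lane processing clock run slower than the aggregate data rate. */
void serial_to_parallel(const int8_t *samples, size_t n,
                        int8_t lanes[LANES][LANE_CAP], size_t lane_len[LANES])
{
    for (size_t lane = 0; lane < LANES; lane++)
        lane_len[lane] = 0;

    for (size_t i = 0; i < n && i / LANES < LANE_CAP; i++) {
        size_t lane = i % LANES;
        lanes[lane][lane_len[lane]++] = samples[i];
    }
}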

SYSTEM ARCHITECTURE

The SCS architecture is shown in Figure 5. The HRDR Subsystem ingests telemetry, and the extracted baseband data is streamed through the RLPC Subsystem. Extracted packets and/or frames are then written into files on the storage device. Post-session or in real time, the data products are created by the LZDPS Subsystem and delivered to the end user via LAN/WAN distribution.

Figure 5. Architectural Block Diagram

The HRDR Subsystem is made up of two elements: the HRDR Card and the HRDR Software. The software includes an NT device driver and Application Program Interface (API) for the HRDR Card. The HRDR Subsystem is capable of ingesting external analog telemetry through the external IF Telemetry Input Interface. The processed data is bit-synchronized, convolutionally decoded digital telemetry data and clock, which is output through an external interface on the HRDR Card.

HRDR Card: The HRDR Card, developed at NASA GSFC in conjunction with the NASA Jet Propulsion Laboratory (JPL), implements the entire digital demodulation architecture in an ASIC, with minimal analog processing used to condition the intermediate frequency (IF) from the satellite prior to A/D conversion.

HRDR Software: The HRDR Software consists of the Windows NT kernel-mode device driver and the API for the HRDR Card. Before the card can be accessed by a software program, the driver must be installed and enabled, which allows the NT operating system to recognize the card and communicate with it. Typically this only has to be done once, when the card is first installed into the system. This driver is based on the unmodified version of the driver that comes with the 9050 SDK development kit; note that an <.inf> file is not used in the installation process. The HRDR Card must be set up before it can flow data. To ensure that the data is correctly processed and that the different subsystems do not lose synchronization with each other, the card must be set up in a sequential manner (a sketch of this sequence follows below).

The RLP Subsystem consists of the RLPC and the RLP Software. The RLP Software provides a Microsoft Windows NT kernel-mode device driver and API for the RLPC. The RLPC provides three primary functions that correspond to three custom integrated chips installed on the card: 1) the Parallel Integrated Frame Synchronization (PIFS) Chip provides data ingest and synchronization; 2) the Reed-Solomon (RS) Chip accepts data from the PIFS and performs Reed-Solomon error detection and correction; and 3) the Service Processor (SP) Chip receives input from the RS Chip and performs CCSDS protocol (path, frame, etc.) processing.
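As an illustration of the sequential HRDR setup described above, the following C sketch walks through a hypothetical configuration sequence. The function names (hrdr_open, hrdr_set_demod, and so on) and the parameter values are stand-ins, not the actual HRDR API; the point is only the ordering: the driver must already be installed, all demodulator and decoder parameters are loaded first, and the data path is enabled last.

/* Hypothetical HRDR setup sequence; the real API names and values differ.
 * The sketch only illustrates the required ordering: configure everything
 * first, enable the data path last. */
typedef struct hrdr_device hrdr_device;        /* opaque device handle (assumed) */

extern hrdr_device *hrdr_open(int card_index); /* driver must already be installed */
extern int  hrdr_set_demod(hrdr_device *d, double carrier_hz,
                           double data_rate_bps, double doppler_offset_hz);
extern int  hrdr_set_viterbi(hrdr_device *d, int i_channel_on, int q_channel_on);
extern int  hrdr_enable_output(hrdr_device *d);
extern void hrdr_close(hrdr_device *d);

int configure_hrdr_for_pass(void)
{
    hrdr_device *d = hrdr_open(0);
    if (!d)
        return -1;                 /* driver not installed or not enabled */

    /* Step 1: demodulator parameters (placeholder values; in practice they
     * come from the mission's pre-calculated tables). */
    if (hrdr_set_demod(d, /*carrier_hz*/ 0.0, /*rate*/ 15.0e6, /*doppler*/ 0.0) != 0)
        goto fail;

    /* Step 2: Viterbi decoders for the I and Q channels. */
    if (hrdr_set_viterbi(d, 1, 1) != 0)
        goto fail;

    /* Step 3: only after all parameters are loaded, enable the data path so
     * the downstream subsystems cannot lose synchronization. */
    if (hrdr_enable_output(d) != 0)
        goto fail;

    return 0;

fail:
    hrdr_close(d);
    return -1;
}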

RLP Card: The PIFS chip on the RLPC is capable of correlating to a frame synchronization marker up to 64 bits in length, with an error tolerance of 0 to 4 bits. The strategy used by the PIFS chip follows the traditional search/check/lock/flywheel approach to frame synchronization (sketched in software below), with check and flywheel tolerances from 0 to 15 frames, and it is able to provide NRZ-L, NRZ-M, and NRZ-S differential synchronous data decoding while checking the frame's 16-bit Cyclic Redundancy Check (CRC) coding. The RS Chip performs error detection and correction of Reed-Solomon encoded (255,223) frames, with CCSDS-recommended interleaves ranging from 1 to 5 and with code words from 33 to 255 bytes long. The SP chip provides CCSDS protocol processing and can process telemetry from up to eight spacecraft at any one time. The SP chip can be set up to extract packet quality information based on packet extraction errors; types of packet errors include packet structure and format errors, packet service data, and packet quality data. The SP Chip can also extract VC data quality counts based on frames received, number of frame errors (sequence errors, invalid headers, etc.), packets received per VC, and number of packet errors (invalid APIDs, etc.). Finally, the SP chip also provides overall frame status, such as the total number of rejected/deleted frames received with CRC errors and/or Reed-Solomon errors that cannot be corrected. To enable the level-zero process to create data products, the user can configure the SP chip to annotate the output data, be it packets, frames, or rejected packets or frames, with quality information. This annotated output data is stored in file format on the local storage device.

RLPC Software: The RLPC Software can run on any hardware platform supporting Revision 2.1 of the PCI Local Bus specification. Since an application that needs to communicate with the RLPC may be running under a variety of operating systems, the RLPC Software API provides a well-documented, consistent RLPC device driver interface to an application regardless of the operating system. This eliminates the need to change the application software when porting from one operating system to another. The RLPC API consists of a set of low-level and high-level API functions. The low-level functions encapsulate Windows NT-specific IOCTL calls and provide the basic read and write capabilities; the high-level functions use these to perform useful tasks.

The Internal Control Subsystem (ICS) is the controlling entity for the SCS. It is a Java-based software system that offers the user access to all contained subsystems for configuration, setup and operational control of both hardware and software activities. The ICS has also been developed to allow remote access to its control functions, data products and status information via the external network interface. The ICS software is object-oriented and cross-platform compatible, and it can be compiled under any operating system, such as Windows NT or Digital Unix, that is supported by the Java class library. For local control of the subsystems on the SCS, the ICS provides a simple mechanism to insert status information into the status database and display this information to the user via a graphical monitoring system. In addition, the ICS provides a mechanism for a user to specify and load the configuration/setup information needed for an activity session via a graphical user interface.
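For reference, the search/check/lock/flywheel strategy implemented by the PIFS chip can be sketched in C as a simple state machine. The structure below is the textbook form of that algorithm, using the 0-to-15 check and flywheel tolerances mentioned above; it is not the chip's actual logic.

/* Textbook search/check/lock/flywheel frame synchronizer state machine,
 * sketched in software; the PIFS chip implements the equivalent in hardware. */
typedef enum { SEARCH, CHECK, LOCK, FLYWHEEL } sync_state;

typedef struct {
    sync_state state;
    int check_needed;    /* consecutive good markers needed to declare LOCK (0..15) */
    int flywheel_limit;  /* missed markers tolerated before returning to SEARCH (0..15) */
    int good_count;
    int miss_count;
} frame_sync;

/* marker_found: 1 if the sync marker matched within the allowed bit-error
 * tolerance at the expected frame boundary, 0 otherwise. */
void frame_sync_step(frame_sync *fs, int marker_found)
{
    switch (fs->state) {
    case SEARCH:
        if (marker_found) { fs->state = CHECK; fs->good_count = 1; }
        break;
    case CHECK:
        if (marker_found) {
            if (++fs->good_count > fs->check_needed) fs->state = LOCK;
        } else {
            fs->state = SEARCH;
        }
        break;
    case LOCK:
        if (!marker_found) { fs->state = FLYWHEEL; fs->miss_count = 1; }
        break;
    case FLYWHEEL:
        if (marker_found) {
            fs->state = LOCK; fs->miss_count = 0;
        } else if (++fs->miss_count > fs->flywheel_limit) {
            fs->state = SEARCH;
        }
        break;
    }
}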
RLPC Internal Control System: The RLPC Internal Control System is an intuitive graphical user interface (GUI). The architecture, which is written in Java, allows the GUI to be adapted to future enhancements of the system. Essentially the GUI is made up of two elements. The first element is the GUI for creating, editing and modifying a set of configuration parameters for the system. This GUI element, shown in Figure 6 and referred to as the Editor, creates a configuration file that saves the parameter values that have to be loaded into the registers to enable the card to ingest, monitor, correct and process a telemetry session from a specific satellite.
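As a rough illustration of what such a configuration file might carry, the hypothetical C structure below collects the register-level parameters discussed in this paper (sync marker, error tolerances, Reed-Solomon interleave, and so on) and writes them as a flat record. The real Editor is a Java GUI and its file format is not documented here, so both the field names and the on-disk layout are assumptions.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical register-level configuration the Editor GUI might persist.
 * Field names are illustrative; only the parameter set mirrors the paper. */
typedef struct {
    uint64_t sync_marker;        /* frame sync marker, up to 64 bits        */
    int      marker_len_bits;    /* marker length in bits (<= 64)           */
    int      marker_err_bits;    /* allowed marker bit errors (0..4)        */
    int      check_frames;       /* check tolerance (0..15 frames)          */
    int      flywheel_frames;    /* flywheel tolerance (0..15 frames)       */
    int      rs_interleave;      /* Reed-Solomon interleave depth (1..5)    */
    int      crc_enabled;        /* 16-bit frame CRC checking on/off        */
    int      spacecraft_id;      /* spacecraft selected for this session    */
} rlpc_session_config;

/* Write the configuration as a flat binary record (illustrative only). */
int save_session_config(const char *path, const rlpc_session_config *cfg)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    size_t ok = fwrite(cfg, sizeof *cfg, 1, f);
    fclose(f);
    return ok == 1 ? 0 : -1;
}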

Figure 6. Editor; and Configuration & Status Monitoring Graphical User Interfaces

The second GUI element is the Configuration and Status Monitoring subsystem. This GUI selects the configuration file to be loaded, selects the RLPC onto which it will be loaded, and finally enables the card to receive and process data. The processed data is directed to a file created prior to enabling the RLP card. Once data starts flowing through the card, the counts of frames received and processed are incrementally displayed in the GUI window. The GUI is interactive, in that the counts for the individual frame and packet streams, as identified by their VCIDs and APIDs respectively, can be displayed. Figure 6 also shows the Status window.

HRDR Internal Control System: The HRDR Internal Control System is a set of GUIs that allows the configuration and monitoring of the demodulator ASICs, processing elements and interfaces on the HRDR. The setup and configuration GUI allows the user to set up the parameters for the I and Q channel demodulators, the I and Q channel Viterbi decoders, the data rate, the carrier frequency and the Doppler offset.

Figure 7. HRDR Main Menu & Setup Summary/Status Page

Figure 7 shows the main menu of the HRDR setup and configuration GUI. The Demodulator GUI is selected to set up the parameters for the mode of demodulation and the selection of the Master Controller. Once the Master Controller has been selected, the parameters for the I-channel and Q-channel are entered. Some of the values come from pre-calculated tables based on data rates, mode of demodulation and transmission characteristics. For a specific spacecraft, the values for the decoding algorithm are known, and hence the settings are pre-specified; these values are entered in the setup page and saved in a configuration catalog. For debugging, the default values can be changed. The final display, also depicted in Figure 7, is a top-level indication of the setup of the HRDR.

The Level Zero Data Products Subsystem (LZDPS) is software that resides on the host platform. The subsystem parses the Packet and/or Frame data files output from the Return Link Subsystem and stored on the storage disk array. Each data unit in the files has prepended annotation that provides quality and accounting information. Based on this information, the LZDPS creates a quality file for each set of data units and orders the sorted data units into separate files identified by the data unit identifiers. The LZDPS is written in C and, on a 450 MHz DEC Alpha workstation, processes a 5 Gigabyte data unit file in 20 minutes, producing 8 separate PDS files. The code has since been optimized to achieve even greater processing speed.
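A minimal sketch of the LZDPS pass over an annotated data unit file is given below. The annotation layout (a small APID/quality/length preamble before each unit) and the helper names are assumptions made for illustration, since the paper does not spell out the on-disk format; only the sort-by-identifier behavior mirrors the description above.

#include <stdio.h>
#include <stdint.h>

/* Assumed per-data-unit annotation prepended by the RLPC (layout illustrative). */
typedef struct {
    uint16_t apid;       /* identifier used to sort units into separate files */
    uint16_t quality;    /* quality/accounting flags                          */
    uint32_t unit_len;   /* length in bytes of the data unit that follows     */
} unit_annotation;

/* Read annotated units from 'in' and append each unit's payload to a per-APID
 * file named pds_<apid>.dat, mimicking the sort-by-identifier step of level
 * zero processing. */
int sort_units_by_apid(FILE *in)
{
    unit_annotation ann;
    static uint8_t buf[65536];

    while (fread(&ann, sizeof ann, 1, in) == 1) {
        if (ann.unit_len > sizeof buf)
            return -1;                            /* corrupt annotation */
        if (fread(buf, 1, ann.unit_len, in) != ann.unit_len)
            return -1;                            /* truncated file     */

        char name[32];
        snprintf(name, sizeof name, "pds_%u.dat", (unsigned)ann.apid);
        FILE *out = fopen(name, "ab");
        if (!out)
            return -1;
        fwrite(buf, 1, ann.unit_len, out);
        fclose(out);
    }
    return 0;
}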

SINGLE-CHASSIS INTEGRATION CHALLENGES

Integrating the hardware components into a single chassis was one of the greatest challenges faced by the SCS designers. The HRDR uses the PCI bus to exchange set-up information, power and ground connections, and non-intrusive status and monitoring data. The power requirements of the HRDR and the RLPC had to be met by a relatively low-noise, high-performance power supply on the platform. The RLPC uses the PCI bus extensively: the bus is used to set up and monitor the system and to provide a real-time data flow across the bus, through the memory on the control platform, out to the storage device. When the RLPC was housed in the same chassis as the host CPU, the traffic on the PCI bus caused performance degradation by a factor of two. The RLPC was able to process data in real time and send it to the host memory at rates in excess of 200 Mbps, but when the data had to be sent back across the PCI bus to the storage device, the rate fell. Two buses were used to reduce the contention on a single system bus and improve overall performance; the separate chassis accommodated the higher-capacity power supplies and also kept the two buses largely independent.

Figure 8 shows that extracted data units are transmitted via the PCI bus into the virtual memory of the Host system. The Host CPU then transfers the data units into a flat file on the storage system. This transfer of data units from Host memory to the storage device limits the performance of the PCI bus and hence slows down the throughput from the RLPC to the Host system: the greater the I/O and write capability of the storage system, the greater the throughput. Initial tests done on the HRP, using a DEC Alpha 4100 with two PCI buses and a RAID peripheral system, demonstrated a sustained file write capability of 200 Mbps. A similar configuration using a state-of-the-art PC demonstrated a throughput of only 30 Mbps.

Figure 8. Single Platform HRP Configuration
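The double transit across a single PCI bus can be seen in a simplified version of the ingest loop below: extracted data units are moved from the RLPC into a host buffer (first crossing), and the host then writes that buffer out to the storage device (second crossing over the same shared bus in a single-bus chassis). The call name rlpc_read_units is hypothetical; the sketch only illustrates why the two transfers compete for bandwidth unless the storage sits on a second bus.

#include <stddef.h>
#include <stdio.h>

/* Hypothetical RLPC read: moves extracted data units into a host buffer.
 * Returns the number of bytes placed in buf (first PCI crossing). */
extern size_t rlpc_read_units(int card, void *buf, size_t cap);

/* Simplified ingest loop: on a single-bus platform every byte crosses the
 * PCI bus twice -- once from the RLPC into host memory, and once from host
 * memory out to the storage adapter. */
void ingest_loop(int card, FILE *storage)
{
    static unsigned char buf[1 << 20];            /* 1 MiB staging buffer */

    for (;;) {
        size_t n = rlpc_read_units(card, buf, sizeof buf);   /* crossing #1 */
        if (n == 0)
            break;                                /* end of session */
        fwrite(buf, 1, n, storage);               /* crossing #2 (same bus) */
    }
}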

Figure 9 shows a configuration with a 900 MHz PC using a Fiber Channel adapter and interface to a RAID system. The adapter interfaced via a Fiber Channel controller to a RAID with three 36 GB disks.

Figure 9. Single Chassis RAID Storage Solution

In the laboratory, this configuration achieved raw reads of 120 MB/sec and raw writes of 80 MB/sec (roughly 640 Mbps). The evaluation demonstrated a single telemetry stream ingested at over 200 Mbps, with the extracted data units transferred to the Host and subsequently to the RAID system via the Fiber Channel interface as a single file. A separate chassis holds the two RLPC cards on an extension of the PCI bus from the Host system; the telemetry is ingested by the RLPC in that chassis, and the extracted data units are transferred to the Host platform via the extension and subsequently written into a file on a storage device. With two high-speed internal LVDS SCSI disks, laboratory tests achieved data throughput in excess of 100 Mbps. The current configuration (Figure 10), using a 900 MHz PC with a single PCI bus, achieved sustained throughput rates of over 100 Mbps for the CCSDS VCDU service.

Figure 10. Single Chassis System Internal Disc Drives Solution

CONCLUSIONS

The SCS is a low-cost, high-speed solution for in-situ processing of high-rate telemetry downlinks. The SCS is capable of producing and providing access to data products within 20 minutes of a session and, for selected data streams, of providing a near-real-time data stream for immediate higher-level processing and display. The image shown in Figure 11 is a near-real-time display extracted and processed from a Terra direct broadcast.

Figure 11. Near-Real Time Image of the Gulf of Mexico
