
Mike Robinson, Editor-in-Chief
[email protected]

CMP Creative Services, Art and Layout

Gregory Montgomery, Director of [email protected]

Robert Steigleider, Ad [email protected]

Susan Harper, Circulation Director
[email protected]

Embedded Edge is published by Texas Instruments, Inc. and produced in cooperation with CMP Media Inc. Entire contents Copyright © 2004 Texas Instruments, Inc. The publication of information regarding any other company's products or services does not constitute Texas Instruments' approval, warranty or endorsement thereof. To subscribe on-line, visit: www.edtn.com/customsolutions/edge/subscribe.fhtml

Code Composer Studio, TMS320, TMS320C6000, C6000, TMS320C5000, C5000, TMS320C2000, C2000, TMS320C64x, TMS320DM642, TMS320DM64x, DSP/BIOS and eXpressDSP are trademarks of Texas Instruments, Inc. All other trademarks are the property of their respective owners.

Volume 5 Spring 2004 Number 1

Inside This Issue

Insighter: Hit the Moving Video Standard (page 2)
New video coding standards are continually being developed, and a whole range of new video appliances are in the offing.

Breakpoints (page 4)
News from the providers of embedded systems development products and services.

Cover: MPEG-2–to–H.264 Transcoder (page 8)
You can realize a real-time transcaler capable of transcoding a D-1 MPEG-2 bit stream into a half D-1 H.264 bit stream on a 600-MHz DSP-based TMS320DM642 processor.

Building a Digital Video Security System (page 12)
Video security is moving quickly from analog to DSP-based digital systems and from JPEG to MPEG-4 technology.

Surveillance Systems: Beyond MPEG-4 (page 18)
Advanced technologies deliver high-performance DSP-based surveillance systems that can be implemented efficiently.

A Switched Serial Fabric for VMEbus (page 22)
VXS promises to solve even the most challenging data transfer requirements of future real-time embedded systems.

Launchings (page 26)
New products and services from experts in embedded systems development.

On the Edge (page 28)
Emerging video apps need programmability and flexibility.

Insighter

Hit the Moving Video Coding Standard

As Texas Instruments' Pradeep Bardia discusses in this issue's "On the Edge" column, digital video applications have grown considerably over the last few years, new video coding standards are continually being developed, and a whole range of new video appliances are in the offing.

Let's say you're an embedded systems developer who has to come up with a design that embodies the latest standards and still works with systems that employ older standards. Sriram Sethuraman of Ittiam Systems is here to help, at least if your product is a consumer streaming-media or professional-grade broadcast application.

Next-generation or new applications will want to take advantage of the benefits of the new ITU-ISO H.264 video compression standard, particularly the significantly lower bit rate it offers compared with MPEG-2, the predominant format for transmitting digital television content today. But they'll have to work with all those existing MPEG-2–based systems, and that means a transcoder is needed to convert MPEG-2 bit streams into H.264-encoded bit streams. Ah, but implementing such a transcoder—not an easy task.

First, there's the heavy computational price to be paid for H.264's greater compression efficiency. Then, the transcoder itself poses significant challenges in terms of memory bandwidth and code size. And, of course, you still have to achieve good transcoded video quality.

However, Sethuraman has taken much of the pain away. He shows how to develop a low-complexity, single-chip solution using the TMS320DM642 digital media processor in conjunction with careful exploitation of the algorithm- and instruction-level parallelism inherent in the transcoder, the capabilities of the processor, and such tool chain capabilities as code sectioning and code overlay.

Video surveillance is undergoing changes, too. The big move is from analog to digital systems and from Motion JPEG to MPEG-4. Moreover, since the Internet is the universal communications medium, video security systems must be able to stream over IP to be compatible with all network devices. And because IP infrastructures are developing rapidly, and the available bandwidth constantly increases, it's important that systems can be adapted to use the bit rates the network supports.

As ATEME's Marc Guillaumet notes, when scalability is key, a DSP enables you to design an appropriate architecture for each application. He discusses the benefits of a programmable solution to video security that a DSP offers, compares the most commonly used video compression standards, and then zeroes in on the MPEG-4 standard.

A digital video surveillance system is much more than a DSP and MPEG-4, however. Joel Rotem of Mango DSP describes other advanced technologies, like video stabilization, video motion detection, and encryption and watermarking, which deliver high-performance surveillance systems that can be implemented efficiently using commercial DSPs.

Stepping out of the video domain into the much wider world of embedded systems in general, Pentek's Rodger Hosking discusses how VXS, a switched serial backplane fabric for VMEbus defined by VITA specification 41, will satisfy the most challenging data transfer requirements of real-time embedded systems as faster processors and interfaces emerge, while allowing legacy VMEbus boards to coexist with the new VXS boards as they become available.

Hosking first describes the nature of a switched fabric, then he looks at switched serial fabrics, noting their benefits. After a brief description of the five popular fabrics incorporated into VXS, he focuses on RapidIO as the one best suited to real-time embedded systems.

—Mike Robinson, [email protected]



Breakpoints

Mango Sprouts Sales Office in San Jose
Mango DSP Ltd. (Jerusalem, www.mangodsp.com) has opened a sales office in San Jose, Calif., and tapped vice president of sales Yinon Kotzer to head it. Joining Kotzer are Robbie Lazar, a sales manager, and Joel Rotem, an application engineer. Mango DSP manufactures multiprocessing DSP hardware and software for embedded systems. The office has secured its first major sale with an initial order worth $1.3 million to supply advanced recording systems to an undisclosed U.S. company with worldwide marketing.

Dolby Digital Audio Platform Beats with C67x Heart
Dolby Laboratories, Inc. (San Francisco, www.dolby.com) has selected Texas Instruments' 225-MHz TMS320C6713 DSP as the engine for its latest reference platform, based on its Digital Professional Encoder audio encoding technology. The reference platform is for building small, high-performance, low-cost-per-channel equipment for carrying digital audio, including DVD players and broadcast and transmission gear for HDTV and cable and satellite television.

Catalytic Shoots for Fast DSP Implementation Path
Catalytic Inc. (Palo Alto, Calif., www.catalytic.com) plans to lay a time-saving automated path from algorithm specification to implementation, initially for the DSP market. The aim is to complete a path that starts with MATLAB's exploration environment and ends with implementation on a target DSP. The first products, expected later this year, will convert floating-point algorithms into fixed-point. Subsequent products will address other DSP implementation problems.

123ID Brings Biometrics to TI's Third-Party Program
A new member of Texas Instruments' Third Party Network, 123ID Inc. (Grand Forks, N.D., www.123id.us) will make its biometric user authentication technology available to developers through TI's Fingerprint Authentication Development Tool daughter boards. Specifically, 123ID will offer its CVTeSDK-TI Fingerprint Software Developers Kit based on TI's TMS320C6711 floating-point DSP. The kit employs 123ID's Code Vector Technology algorithm, which later this year will also be available for the TMS320C6713 DSP.

AccelChip Taps Distributor for Japan and South Korea
AccelChip Inc. (Milpitas, Calif., www.accelchip.com) has signed K. K. Rocky (Tokyo, www.kkrocky.com) to distribute its products and services throughout Japan and South Korea. AccelChip provides automated flows that go from MATLAB algorithms to silicon for FPGAs, ASICs, and structured ASICs that follow a top-down design flow. K. K. Rocky provides sales and technical services to OEMs of wireless and network communications, distributed multiprocessing, and image processing systems.

SoundClear Telephony Software Earns eXpressDSP Compliance
SoundClear, integrated echo cancellation and noise reduction DSP software from Acoustic Technologies, Inc. (Mesa, Ariz., www.acoustictech.com), has been certified as compliant with Texas Instruments' eXpressDSP Real-Time Software Technology for TI's TMS320C55x DSP generation.


IMPLEMENTING AN MPEG-2–TO–H.264 TRANSCODER ON THE DM642

You can realize a real-time transcaler capable of transcoding a D-1 MPEG-2 bit stream to a half D-1 H.264 bit stream on a 600-MHz TMS320DM642 processor.

By Sriram Sethuraman et al.

The technology behind H.264, a joint video coding standard of the ITU-T and the ISO, significantly reduces the bit rate compared with earlier standards-based technologies, like MPEG-2 and MPEG-4. As a result, it extends the envelope of video transmission and storage applications.

Since digital television content today is transmitted predominantly in the MPEG-2 format, such new applications will need to transcode data from that format to H.264. The increased compression efficiency of H.264, however, comes at a significant computational cost. In addition, the transcoder poses significant challenges in terms of memory bandwidth and code size, not to mention the challenges in achieving good transcoded video quality.

You can develop a transcoder on Texas Instruments' TMS320DM642™ digital media processor, which is one of the very few programmable digital signal processors available that has sufficient processing power to realize a single-chip solution. By carefully exploiting the algorithm- and instruction-level parallelism inherent in the transcoder; the capabilities of the processor, such as its powerful instruction set and versatile EDMA controller; and such tool chain capabilities as code sectioning and code overlay, you can meet the challenges of high computational complexity with a low-complexity solution.

OVERVIEW OF H.264
H.264 builds on the motion-compensated transform coding paradigm of the earlier video coding standards. Instead of constraining the technology to use building blocks employed in earlier standards (such as the 8x8 IDCT), the entire technology has been engineered from the ground up with no requirement for backward compatibility. Some of the salient coding tools that lead to the compression efficiency gain of H.264 are improved spatial intraprediction, enhanced temporal prediction (through quarter-sample motion compensation, variable-block-size motion compensation, multihypothesis motion compensation, and weighted prediction tools), efficient context-based entropy coding (through variable-length coding or binary arithmetic coding tools), and in-loop content- and coding-mode-adaptive deblocking filtering.

H.264 specifies three profiles, based on the potential applications. The Baseline Profile targets low-end devices for stored playback and streaming over reliable channels. The Main Profile targets entertainment-quality applications, like broadcasting, DVD recording and playback, high-definition television, and digital cinema. The Extended Profile targets error-resilient streaming for robust video-on-demand delivery over wired and wireless networks.

Experiments have shown that H.264 reduces the bit rate 35 to 50 percent compared with MPEG-4 Advanced Simple Profile coding and 40 to 65 percent compared with MPEG-2, at a similar visual quality. As Figure 1 shows, the H.264 Baseline Profile cuts the bit rate by more than 50 percent at a given peak signal-to-noise ratio (PSNR) over a wide range of bit rates compared with the MPEG-4 Simple Profile.

MPEG-2–TO–H.264 TRANSCODING
The compression efficiency advantage of H.264 is spurring its adoption in personal video recorders (PVRs), to increase the hours of stored content, and in network edge servers that stream content over various last-mile solutions. In the PVR application, since the same device records and plays back, there are no interoperability issues. Hence this application is very likely to be one of the first to take off.

A related application is home gateways that transmit multimedia content wirelessly from a central home server to multiple TVs within the home. In such a case, the higher compression efficiency translates into good entertainment-quality video even though the network is wireless. Similarly, new video-on-demand service providers that have a lower-bit-rate last-mile pipe but are still looking to provide entertainment-quality content will find the new standard attractive.

Because MPEG-2 is the most prevalent format for transmitting digital television content, virtually all these applications require transcoding from that format.

The substantial difference between MPEG-2 and H.264 necessitates a solution in which complete decoding of the MPEG-2 bit stream content is followed by H.264 re-encoding, rather than compressed-domain or minimal-drift-based low-complexity transcoding. Since we are discussing entertainment video, the H.264 profile considered here is the Main Profile.

Figure 1: The H.264 video coding standard reduces the bit rate by half at a given quality (as measured using PSNR) over the entire bit rate range compared with MPEG-4. The H.264 encoder used I and P slices, a single reference frame, segmented motion estimation, an in-loop filter, and context-adaptive binary arithmetic coding. (The graph was generated using the Joint Model H.264 reference encoder and the MoMuSys VM implementation of an MPEG-4 SP encoder.) [Plot: luma PSNR (dB) versus bit rate (kb/s), roughly 200 to 1,000 kb/s, for MPEG-4 SP and H.264.]

Complete re-encoding of the decoded MPEG-2 content without using any information from MPEG-2 can require more than an order of magnitude increase in the computational requirements compared with MPEG-2 decoding—in essence, typically requiring multiple high-end programmable processors.

LOW-COMPLEXITY APPROACH
A low-complexity single-processor approach to MPEG-2–to–H.264 transcoding is possible, however (Figure 2). This approach leverages coding information (like motion vectors, coding modes, quantizers, and the bit allocation profile) from the MPEG-2 bit stream to reduce the complexity of re-encoding in the H.264 format. It employs H.264 coding tools, such as spatial intraprediction, segmented motion compensation on macroblocks, context-adaptive variable-length coding/context-adaptive binary arithmetic coding (CAVLC/CABAC), and quarter-sample motion compensation, to reduce the bit rate of the MPEG-2 stream by 25 percent without any significant loss in visual quality. Furthermore, adapting the approach to work within a transcaling framework (in which the resolution at the output of the transcoder is lower than at the input) boosts the compression advantage further. For instance, a 4-Mb/s MPEG-2 bit stream with D-1 resolution can be transcaled to a 1-Mb/s H.264 bit stream with half D-1 resolution (in a way similar to the long play and extended play modes of a VCR, which trade off quality to increase the hours of content that can be stored). In a transcaler, the optional processing module in Figure 2 would correspond to the downscaling of the MPEG-2 decoded output to the desired output resolution.
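To make the reuse of MPEG-2 side information concrete, here is a minimal C sketch of the per-macroblock flow under that approach. Everything in it is hypothetical: the structures and the four functions (mpeg2_decode_mb, downscale_mb, refine_motion_h264, h264_encode_mb) are invented stand-ins for the decoder and re-encoder modules discussed in the article, not Ittiam's actual interfaces.

/* Hypothetical per-macroblock transcoding flow: the MPEG-2 decoder hands its
 * motion vector, coding mode, and quantizer to the H.264 re-encoder, which
 * only refines them instead of searching from scratch.                      */

typedef struct { short mv_x, mv_y; int coding_mode; int quant; } Mpeg2SideInfo;
typedef struct { short mv_x, mv_y; } H264MotionInfo;

void mpeg2_decode_mb(const unsigned char *bits, unsigned char *mb, Mpeg2SideInfo *info);
void downscale_mb(unsigned char *mb);                    /* optional processing stage */
H264MotionInfo refine_motion_h264(const unsigned char *mb, int seed_x, int seed_y);
void h264_encode_mb(const unsigned char *mb, const H264MotionInfo *mv, int quant,
                    unsigned char *out_bits);

void transcode_macroblock(const unsigned char *mpeg2_bits, unsigned char *h264_bits)
{
    unsigned char mb[384];          /* 16x16 luma plus two 8x8 chroma blocks */
    Mpeg2SideInfo info;
    H264MotionInfo mv;

    /* 1. Fully decode the MPEG-2 macroblock and capture its side information. */
    mpeg2_decode_mb(mpeg2_bits, mb, &info);

    /* 2. Optional processing, e.g. downscaling to half D-1 when transcaling.  */
    downscale_mb(mb);

    /* 3. Seed the H.264 motion search with the MPEG-2 vector (halved for the
     *    lower output resolution) and refine it over a small window only.     */
    mv = refine_motion_h264(mb, info.mv_x / 2, info.mv_y / 2);

    /* 4. Re-encode with H.264 tools, reusing the MPEG-2 quantizer as the
     *    starting point for rate control.                                     */
    h264_encode_mb(mb, &mv, info.quant, h264_bits);
}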

As we said, the improved compression efficiency of H.264 comes at the expense of increased computational complexity. For instance, the tools mentioned above adapt spatially to the video content, implying that they require a good deal of contextual information and conditional processing.

In addition, the standard imposes certain control flow restrictions. For instance, in an intra-4x4 macroblock (MB), each 4x4 subblock must be completely reconstructed before the next 4x4 subblock can be intrapredicted. Another example is the macroblock pair (two vertically adjacent MBs coded one after the other) in macroblock adaptive field/frame coding (MB-AFF), rather than the strict raster-scan coding of MPEG-2. The decoding and re-encoding result in high memory bandwidth requirements and increased code size.

Given the challenges, the fact that multistandard support is becoming almost unavoidable on PVRs, and the inflexibility and long lead times associated with ASIC designs, programmable processors are the preferred target for implementing the transcoder.

Before discussing the design steps, let's take a look at the DM642 digital media processor. It consists of a TMS320C64x™ generation DSP integrated with a 64-bit-wide external memory interface; a two-level memory hierarchy (16 kB each of L1 program and data caches and 256 kB of L2 memory that can be configured as internal SRAM and L2 cache); an enhanced DMA controller with 64 programmable channels; and digital video and audio I/O ports. The C64x™ DSP employs a very long instruction word (VLIW) architecture with eight functional units (across two datapaths) that each can execute an instruction in the same clock cycle, and access from one datapath to the other is supported. Each datapath has an L (logical), D (data), M (multiplier), and S (shift) unit and thirty-two 32-bit registers. Some of these functional units further support SIMD instructions that are customized for packed-byte or halfword operations, which are very useful in handling video data. The D units are capable of loading or storing two 64-bit aligned data words or one 64-bit unaligned data word in one clock cycle, capabilities that are likewise very useful during motion compensation. All instructions can be conditionally executed using five conditional registers. The processor includes an Ethernet MAC and a PCI interface. It is capable of running at clock speeds of up to 720 MHz, and the C64x DSP has been tested at up to 1 GHz.

Figure 2: A low-complexity single-processor MPEG-2–to–H.264 transcoder uses coding information from the MPEG-2 bit stream to reduce the complexity of re-encoding in the H.264 format and employs H.264 coding tools to reduce the bit rate. Using transcaling lowers the resolution at the transcoder's output, thus increasing the compression. An optional processing module, such as downscaling, can reduce the coding artifacts before transcaling. [Block diagram: an MPEG-2 video elementary stream enters an MPEG-2 MP@ML video decoder; the decoded frames pass through the optional processing module to an H.264 MP@L3 video re-encoder, which also receives the coding information and produces the H.264 video elementary stream. MP@ML = Main Profile at Main Level; MP@L3 = Main Profile at Level 3.]

DESIGN METHODOLOGY
The major design steps in implementing the transcoder on the DM642 media processor are:

1. Decide on the granularity of interaction between the decoder and the re-encoder.

2. Estimate the MCPS (millions of cycles per second) and code size requirements of the various modules after optimizing the key leaf modules.

3. Group the various processing stages suitably to design a pipeline. The pipeline determines the code and data buffer requirements within the I-SRAM.

4. Based on the processing requirements dictated by the pipeline, develop a strategy to transfer data and, if necessary, code using DMA.

5. Implement the pipeline design and adjust the code and buffer placements in I-SRAM.

The granularity of interaction determines the rate of code switching between the MPEG-2 decoder code and the H.264 re-encoder code and influences I-cache misses and the amount of data buffering needed. For instance, a simple transcoder design might decode one macroblock and immediately re-encode it. In that case, most of the data is reused, but the code is used only once. Possible granularities range from groups of macroblocks, to a frame, to a group of frames. Depending on the design, the interaction can be strictly synchronous (implying that one granule of decoding is followed by its re-encoding) or asynchronous. Five factors govern this choice: the use of MB-AFF coding in H.264, the need to display the decoded frame while transcoding is in progress, the level of I-cache misses, the EDMA bandwidth requirements to fetch the data again, and the internal memory requirements to buffer the data.

OPTIMIZING KEY LEAF MODULES
The C64x DSP VLIW architecture allows you to exploit both algorithm-level parallelism (ALP), which exists in the transcoder, and instruction-level parallelism (ILP), the parallelism that exists in a code segment in which dissimilar operations are required that have no interdependencies.

Typically, all video processing algorithms have a high degree of ALP, since the same operation tends to be performed on multiple pixels. The DM642 DSP's single-instruction, multiple-data (SIMD) instructions, such as DOTPU4, SUBABS4, AVGU4, and SPACKU4, come in very handy at performing multiple operations on multiple bytes in a single instruction. The two datapaths also allow two parallel lines of execution. Since all functional units support arithmetic and logical operations, such operations can be performed in parallel on multiple pixels in the same clock cycle.
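As a small illustration of that SIMD style, the sketch below computes a 16x16 sum of absolute differences (SAD), the inner kernel of motion refinement, four pixels at a time. It assumes TI's C6000 compiler intrinsics _mem4_const(), _subabs4(), and _dotpu4() and the c6x.h header; the loop structure itself is illustrative rather than taken from the article.

/* Illustrative 16x16 SAD using C64x packed-data intrinsics: _subabs4() takes
 * the absolute difference of four unsigned bytes at once, and _dotpu4()
 * against a vector of 1s sums the four results into one accumulator.        */

#include <c6x.h>   /* C6000 compiler intrinsics (assumed available) */

unsigned sad16x16(const unsigned char *restrict src, int src_pitch,
                  const unsigned char *restrict ref, int ref_pitch)
{
    unsigned sad = 0;
    int row, col;

    for (row = 0; row < 16; row++) {
        for (col = 0; col < 16; col += 4) {
            unsigned s = _mem4_const(&src[row * src_pitch + col]); /* 4 pixels */
            unsigned r = _mem4_const(&ref[row * ref_pitch + col]); /* 4 pixels */
            sad += _dotpu4(_subabs4(s, r), 0x01010101u);
        }
    }
    return sad;
}

A production kernel would go further, unrolling the loops and balancing the work across the two datapaths, along the lines of the pointers that follow.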

A high degree of ILP is typically found in complicated control code in which multiple expressions need to be evaluated to finally make a decision. ILP can be used to fill any remaining functional units in an execution packet after taking advantage of ALP.

Here are a few more pointers on optimizing key leaf modules on the DM642:

• Analyze whether a module is I/O- or process-limited. That serves to identify the best possible cycle count for key loops.

• Identify appropriate SIMD instructions to increase the amount of work performed by each functional unit.

• Balance the load between datapaths A and B cleanly to reduce cross-path accesses.

• Explore unrolling possibilities in the case of shallow loops. However, you need to carefully balance code size, register pressure, and the speedup achieved.

• Maximize the utilization by packing as many slots as possible. Explore other ways of implementing a desired computation using the available functional units.

• Minimize stalls due to branches and delay slots of the chosen instructions.

• Use conditional execution when possible to replace branches.

• Ensure that data accesses do not result in memory bank stalls.

You need to group the various processing stages such that three criteria are met: I-cache thrashing is minimized; suitable pipelining can be established to minimize the cycles spent by the DSP waiting for data, by using the EDMA capabilities; and the data buffering made necessary by the grouping can be accommodated within the available I-SRAM.

Figure 3: When all stages of the re-encoder are together at a macroblock level (a), the data memory requirements are lower, but the code size for the various processing stages exceeds the L1-P cache size by several factors, and hence I-cache misses will be very high. By grouping the various stages together at the level of a row of macroblocks (b), you can tailor the code for each module on which you loop to the L1-P cache size. However, the output of each stage must be buffered, and the L1-D cache misses increase with the additional data buffering requirements. [Diagram: (a) a per-macroblock loop over motion refinement, motion compensation, encoding reconstruction, entropy coding, and deblocking; (b) the same stages applied one at a time to a whole row of macroblocks.]

PIPELINE DESIGN
The processing stages in the transcoder can be classified as decoding and re-encoding. The decoder processing stages are syntax parsing, motion compensation, and texture decoding. The re-encoder processing stages are motion refinement, encoding loop, entropy coding, and deblocking. In the case of a transcaler, the downscaling becomes another processing stage. Furthermore, additional processing stages related to format conversion may be needed if a video display is required.

Figure 3 shows the impact of the grouping of processing stages on instruction and data cache performance. When all stages of the re-encoder are grouped together at a macroblock level (Figure 3a), the total code size is very high and the I-cache thrashes heavily. Grouping the various stages together at the level of a row of macroblocks (Figure 3b) alleviates I-cache thrashing; however, the intermediate outputs between the grouped stages need to be buffered, thereby increasing the internal memory requirements and L1 D-cache misses. You should consider reusing scratch memory space in I-SRAM to alleviate that situation.
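To make the two groupings of Figure 3 concrete, here is a schematic C sketch contrasting the per-macroblock loop (a) with the per-row-of-macroblocks loop (b). The five stage functions are hypothetical placeholders for the re-encoder stages named above, and the row width assumes D-1 frames.

/* Hypothetical stage functions operating on the macroblock at index i. */
void motion_refine(int i);
void motion_compensate(int i);
void encode_reconstruct(int i);
void entropy_code(int i);
void deblock(int i);

#define MBS_PER_ROW 45   /* 720-pixel-wide D-1 frame: 720 / 16 */

/* (a) All stages grouped per macroblock: little intermediate data to buffer,
 *     but all five code bodies compete for the L1-P cache on every MB.      */
void reencode_row_per_mb(int first_mb)
{
    int i;
    for (i = 0; i < MBS_PER_ROW; i++) {
        motion_refine(first_mb + i);
        motion_compensate(first_mb + i);
        encode_reconstruct(first_mb + i);
        entropy_code(first_mb + i);
        deblock(first_mb + i);
    }
}

/* (b) Stages grouped per row of macroblocks: each loop body can be sized to
 *     the L1-P cache, but each stage's output for the whole row must be
 *     buffered (larger I-SRAM footprint, more L1-D traffic).                */
void reencode_row_per_stage(int first_mb)
{
    int i;
    for (i = 0; i < MBS_PER_ROW; i++) motion_refine(first_mb + i);
    for (i = 0; i < MBS_PER_ROW; i++) motion_compensate(first_mb + i);
    for (i = 0; i < MBS_PER_ROW; i++) encode_reconstruct(first_mb + i);
    for (i = 0; i < MBS_PER_ROW; i++) entropy_code(first_mb + i);
    for (i = 0; i < MBS_PER_ROW; i++) deblock(first_mb + i);
}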

DMA CONSIDERATIONS
Once you've designed the pipeline to satisfy the three criteria, the various data transfers from and to external memory become apparent. These become candidates for DMA transfer. While scheduling EDMA transfers (a double-buffering sketch follows this list):

• Use QDMA for short bursts of transfer, since the setup overhead for QDMA is low.

• Consider chaining EDMA transfers to automate the triggering of the next transfer without the intervention of the DSP.

• For instances when EDMA completes asynchronously and can only be handled in an interrupt service routine, optimize the ISR handler to reduce the cycle overhead for EDMA setup.

• Balance the transfer load across the four priority queues in the EDMA controller according to how critical the data is.

• Take care of cache coherency issues by ensuring that the same region of SDRAM isn't simultaneously accessed via L2 cache and EDMA.
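The pipelining these guidelines aim at is easiest to see in a double-buffering ("ping-pong") skeleton such as the one below. The dma_copy_async() and dma_wait() helpers are hypothetical wrappers standing in for the actual EDMA or QDMA submission and completion mechanism (chained transfers, CSL calls, or an ISR), and the row size assumes D-1 luma only.

/* Hypothetical ping-pong scheme: while the DSP processes the macroblock row
 * sitting in one internal buffer, the EDMA fills the other one from SDRAM.  */

#define ROW_BYTES (720 * 16)   /* one row of 16-line macroblocks, luma only */

int  dma_copy_async(void *dst, const void *src, unsigned bytes); /* returns an id */
void dma_wait(int id);                                           /* blocks on id  */
void process_row(unsigned char *row, int row_idx);               /* CPU work      */

void transcode_frame(const unsigned char *frame_in_sdram, int rows)
{
    static unsigned char ping[ROW_BYTES], pong[ROW_BYTES]; /* linked into I-SRAM */
    unsigned char *bufs[2] = { ping, pong };
    int r, id;

    id = dma_copy_async(bufs[0], frame_in_sdram, ROW_BYTES);

    for (r = 0; r < rows; r++) {
        dma_wait(id);                            /* row r is now in internal RAM */

        if (r + 1 < rows)                        /* start fetching row r + 1 ... */
            id = dma_copy_async(bufs[(r + 1) & 1],
                                frame_in_sdram + (unsigned)(r + 1) * ROW_BYTES,
                                ROW_BYTES);

        process_row(bufs[r & 1], r);             /* ... while the CPU works on r */
    }
}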

Aligning the data buffers properly and reusing scratch buffers is essential to minimize L1-D misses. Similarly, since the code size of the decoder and re-encoder can be extremely large (even larger than the available I-SRAM), it's essential to consider strategies for minimizing L1-P misses. Proper code sectioning is critical to L1-P performance. The Code Composer Studio™ integrated development environment tool chain allows a different load-time address and run-time address for each function. This capability can be used to overlay mutually exclusive pieces of code in I-SRAM at run time. Take care, though, to ensure that the appropriate code is in I-SRAM before the corresponding function call is made.
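The overlay idea can be sketched as follows, assuming the linker has been set up (through the linker command file) to give mutually exclusive code sections different load addresses in external memory but the same run address in I-SRAM. The Overlay structure and the overlay_call() manager below are hypothetical illustrations, not part of Code Composer Studio.

/* Hypothetical overlay manager: mutually exclusive code sections share one
 * run-time region in I-SRAM and are copied in from SDRAM before use.        */

#include <string.h>

typedef struct {
    const void *load_addr;  /* where the linker placed the section in SDRAM */
    void       *run_addr;   /* shared run-time region in I-SRAM             */
    unsigned    size;       /* section size in bytes                        */
} Overlay;

static const Overlay *current;          /* section currently resident        */

void overlay_call(const Overlay *ovl, void (*entry)(void))
{
    if (current != ovl) {               /* copy only when switching sections */
        memcpy(ovl->run_addr, ovl->load_addr, ovl->size);
        /* On a real C6000 target the L1-P cache would also have to be
         * invalidated here so stale instructions are not executed.          */
        current = ovl;
    }
    entry();                            /* entry point lies in the run region */
}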

Using the design methodology described here, you can realize a real-time transcaler capable of transcoding an MPEG-2 bit stream with D-1 resolution to an H.264 bit stream with half D-1 resolution on a 600-MHz DM642 media processor. As the clock rates of the DM642 DSP keep increasing, the transcoder can move toward achieving full D-1 transcoding on a single chip. ◆

Sriram Sethuraman ([email protected]) is the technologist—senior member of technical staff at Ittiam Systems (Pvt.) Ltd. in Bangalore, India. The other authors are Arvind Raman, senior engineer; Kismat Singh, senior engineer; Manisha Agrawal Mohan, engineer; Neelakanth Shigihalli, senior engineer; and B. S. Supreeth, engineer.


Advanced DSPs and MPEG-4 Build a Digital Video Security System

Video security is moving quickly from analog to DSP-based digital systems and from JPEG to MPEG-4 technology.

By Marc Guillaumet

The world of video security is experiencing extensive changes: reduction of the quantity of data transmitted while keeping the same level of information, flexibility in usage conditions, and interoperability to allow deployment anywhere. As part of these changes, system intelligence is no longer concentrated in one location: digital video cameras not only capture video but also manage audio and video compression, synchronization, and streaming over the network in real time. These cameras take advantage of smart, advanced digital signal processing and video compression algorithms that enable high-quality images to be transferred at low bit rates.

Since the Internet is the universal communications medium, video security systems must be able to stream over IP to be compatible with all network devices, thereby allowing multimedia access from any location. (The supervision application is implemented on a system control station that can be a PC.) Moreover, IP technology allows users to interact with the system in real time, thus reducing operating and on-site maintenance costs.

Because IP infrastructures are developing rapidly, and the available bandwidth increases daily, it's important that systems can be adapted to use the bit rates the network supports.

The new technologies enable you to implement advanced digital features in your video security system: video indexing (dating and referencing), audio and video alarm management, (local) image and video storage, privacy zone management, and automated object recognition and tracking, while providing flexibility. The central elements are the processor and the video compression standard.

THE MULTISTANDARD CAPABILITY OF DSP
The segmentation of this wide market makes it important to choose a powerful and scalable solution. When scalability is key, a DSP enables you to design an appropriate architecture for each application. Unlike an ASIC-based system, the programmable solution offered by a DSP provides flexibility. Because the video and audio processing are done in software, the product can evolve quickly as the compression technologies advance: you can easily change the video and audio codecs. You can also update and maintain the application software remotely.

In addition, a DSP gives you the cost sensitivity of a hardware solution, and your system cost goes down as the cost of the silicon decreases. Also, when you choose a DSP based on Texas Instruments' TMS320C64x™ generation, the available algorithms have benefited from years of experience being optimized for quality versus performance for the core. You also get comprehensive tools, easing development. Finally, application-oriented reference designs are available, giving you a fast track to market.

Indeed, recent progress in DSP technology, like digital media processors with such integrated features as video ports and an Ethernet interface, is making new digital video security systems possible. Consider the example shown in Figure 1. This hardware platform, based on Texas Instruments' TMS320DM642™ digital media processor, can be used to build an IP digital camera, a digital IP node for analog multiple-camera systems, or an IP device for video storage. The DM642 processor is an excellent choice because of its three video ports and its Ethernet MAC. The video ports provide a glueless interface to common video decoders and encoders, support multiple resolutions and video standards, and work either in video capture or display mode.

VIDEO COMPRESSION STANDARDS
The most commonly used video compression standards are JPEG and Motion JPEG (MJPEG), MPEG-2, MPEG-4, and H.264 (see the table). JPEG and MJPEG, however, offer poor compression performance compared with the other standards. H.264 allows more bandwidth reduction, but it's too expensive for video security because of the CPU load needed to encode the video streams. Motion JPEG and MPEG-4, on the other hand, are good candidates for specific video security features.

JPEG and MJPEG are appropriate for court usage because they're not based on motion estimation. For that reason, they don't impose a heavy CPU load. In addition, frames are compressed independently, thereby allowing error resilience, robustness, low latency, and easy implementation of fast-forward and fast-backward. However, because of their poor compression performance, JPEG and MJPEG aren't suited to video security applications that need high resolution and high-frame-rate streaming and storage. In those cases, MPEG-4 is better adapted.

MPEG-4 enables a low bit rate and a high-quality image. In comparison, MPEG-2 would require almost twice the bandwidth for an equivalent image quality. In addition, MPEG-4 is widely used, thus ensuring system interoperability. The feature richness of MPEG-4 technology also allows a flexible solution: the stream structure, resolution, and bit rate regulation mode can be adjusted by the application software.

MPEG-4
MPEG-4 is an ISO/IEC standard developed by the Moving Picture Experts Group (www.m4if.org). It consists of six parts: System, Visual, Audio, Conformance, Reference Software, and Delivery Multimedia Integration Framework (DMIF).

MPEG-4 video technology covers a wide range of applications. The low bit rate and error-resilient coding allow for robust communication over limited-rate wireless channels, which is useful for cell phones, videophones, and space communications. MPEG-4 is also suited to video surveillance data compression, since it's possible to have a very low or a variable frame rate.

In addition to the six parts, many profiles have been defined. These profiles give content-specific tools for visual processing.

The Simple Profile provides efficient, error-resilient coding of rectangular video objects, making it suitable for mobile network applications. The Advanced Simple Profile provides advanced error-resilient coding techniques for rectangular video objects using a back channel and improved temporal resolution stability with low buffering delay. It is suitable for real-time coding applications—for example, videophones, teleconferencing, and remote observation.

Figure 1: A hardware platform based on the TMS320DM642 can be used to build an IP digital camera, a digital IP node for an analog multiple-camera system, and an IP device for video storage, thanks to the processor's three integrated video ports and its Ethernet MAC. [Block diagram: a CMOS sensor (IP digital camera) or analog video decoders (digital IP node for an analog camera system) feed the DM642's video ports VP0–VP2 over BT.656; the EMIF connects SDRAM, flash, an FPGA, and an optional hard disk for video storage; an audio codec, four GPIO alarm lines, an RTC, power regulation (battery or mains supply), JTAG, and an Ethernet PHY on the EMAC complete the platform.]

Like Motion JPEG, MPEG-4 coding consists of the successive coding of I (intra) frames—also called key frames—with each frame coded independently. And like JPEG coding, I frame coding is based on texture coding.

The Simple Profile also uses P (predicted) frames. A P frame is coded by taking into account the contents of the previous frame (motion estimation data). This method significantly reduces the amount of data, since only the differences between two consecutive frames are coded.

The MPEG-4 Advanced Simple Profile (ASP) uses I, P, and B (bidirectional) frames (Figure 2). "Bidirectional" means that the encoding takes into account motion estimation data from the previous and the following frame.

THE IMPACT OF THE STREAM STRUCTURE
In the MPEG-4 Advanced Simple Profile the periodicity of I frames is specified by the Group of Pictures (GOP). A long GOP decreases the bit rate because encoded I frames are larger than P and B frames. And because the ASP includes B frames, it allows a lower bit rate than the Simple Profile—roughly 15 to 20 percent, depending on the video complexity. However, because a B frame is encoded from the preceding and the following I or P frame, the use of B frames increases latency. On the other hand, because I frames are independently encoded, decoding an I frame allows video restoration from any bit stream error.

In addition, a short GOP yields a short restoration period, enabling the video system to change channels quickly. Thus choosing a GOP value involves a compromise among bit rate, robustness, and channel-hopping concerns.
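As a rough worked example (the numbers are illustrative, not from the article): at 25 frames per second, a GOP of 12 frames puts an I frame in the stream every 0.48 second, so a decoder that joins the stream or hits a bit stream error waits at most about half a second for the next clean refresh point; doubling the GOP to 24 frames cuts the I-frame overhead roughly in half but also roughly doubles that worst-case restoration and channel-change delay.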

It's important to choose the appropriate resolution for the targeted application.

CIF resolution (352x240 for NTSC, 352x288 for PAL) doesn't require a great deal of processing power. Therefore it can be useful for multichannel applications. It also allows low bit rates, thereby preserving storage space and transmission bandwidth. It's well suited to low-cost applications with mosaic display, for example.

Full D-1 resolution (NTSC: 720x480, PAL: 720x576) is more expensive in terms of processing power and bit rate, but it can meet customers' expectations in terms of quality. Thus it's an appropriate choice for high-end applications, where detail is needed.

An intermediary resolution, 2CIF (NTSC: 704x240, PAL: 704x288), lowers the system cost compared with full D-1 resolution but still provides a good visual aspect: users who want full D-1 for visual comfort can zoom out for the higher resolution.

THE IMPACT OF THE BIT RATE REGULATION MODE
Three bit rate regulation modes can be implemented in MPEG-4 video compression: variable, constant, and average bit rate.

With variable bit rate (VBR), the bit rate varies over time and the quantization ratio is constant. This mode is well suited to video security applications, since the bit rate is minimized when nothing happens in the acquired video sequence, thus reducing storage space, and the quality remains good when movement occurs.

Figure 2: In the Advanced Simple Profile encoding process, each I frame is encoded independently, each P frame is encoded taking into account the preceding I or P frame, and each B frame is encoded taking into account the preceding and the following I or P frame. Hence for the data stream shown (a), P1 is encoded in terms of the information in I1, P2 is encoded in terms of P1, B1 and B2 need the information in I1 and P1 to be processed, and so on (b). [Diagram: (a) an example sequence of I, P, and B frames in display order; (b) the prediction dependencies among those frames.]

With constant bit rate (CBR), the bit rate is constant over time, but the quantization ratio varies, so changes occur in the quality along the compressed stream. And with average bit rate (ABR), the size of the bit stream varies; as a result, the bit rate and the quantization ratio aren't constant. The quantization ratio varies with the video contents so as to have a predictable encoded stream size while maintaining good quality.

When streaming compressed video, if the network can support variations in the bit rate, the MPEG-4 encoder can be configured for variable-bit-rate operation. If not, the constant-bit-rate mode can be used.
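To make the difference between the modes concrete, here is a deliberately simplified sketch of the feedback loop a constant-bit-rate controller runs around the encoder, nudging the quantization parameter so the measured output tracks the target. The function names and the simple proportional update rule are illustrative assumptions, not ATEME's implementation.

/* Toy CBR regulation: after each encoded frame, steer the quantizer so the
 * running output bit rate converges on the configured target.              */

typedef struct {
    int    qp;              /* current quantization parameter */
    int    qp_min, qp_max;  /* legal range for the codec      */
    double target_bps;      /* configured constant bit rate   */
    double fps;             /* frame rate                     */
} RateCtrl;

/* Hypothetical encoder entry point: returns the bits used for this frame. */
int encode_frame(const unsigned char *frame, int qp, unsigned char *out);

void encode_cbr(RateCtrl *rc, const unsigned char *frame, unsigned char *out)
{
    double target_bits = rc->target_bps / rc->fps;  /* bit budget per frame */
    int    used_bits   = encode_frame(frame, rc->qp, out);

    /* Simple proportional update: frames that overshoot the budget push the
     * quantizer up (coarser, fewer bits); undershoot pulls it back down, so
     * it is quality, not bit rate, that varies over time.                   */
    if (used_bits > target_bits * 1.1 && rc->qp < rc->qp_max)
        rc->qp++;
    else if (used_bits < target_bits * 0.9 && rc->qp > rc->qp_min)
        rc->qp--;
}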

Additionally, two types of tool have been developed to improve video quality and lighten the encoding processing load: pre- and postprocessing filters.

FILTERS
Video preprocessing filters are used to reduce noise. Noise in the video source means more (unwanted) information in the image to be encoded. As a result, the bit rate is higher, more processing power is needed for the encoder, the quality of the display is lower, and postprocessing is necessary on the decoder side. Of course, preprocessing requires processing power, so different levels of filtering exist that can be activated or not independently. The main preprocessing filters are antidust and low-pass.

On the decoding side, postprocessing filters aim to reduce artifacts due to the compression algorithm. The two used most are deblocking and de-ringing filters. A deblocking filter smoothes the block frontiers, allowing a smart display with much more compressed bit streams, but it requires processing power on the decoder side. A de-ringing filter also requires processing power on the decoder side, but it removes undulations due to compression.

AUDIO
Audio can be very useful in a security system, since sounds can provide clues. For example, an audio edge detection feature can be added to a video security system so that video recording starts when a sound alarm is detected.

With audio, the aim is to get a sufficient quality to preserve surrounding sounds while minimizing the CPU load on the DSP. Adaptive differential pulse code modulation (ADPCM) is the best candidate for the audio compression algorithm, as it allows for this compromise. Less than 1 percent of the DSP's resources is required for CPU tasks for acquisition, compression, and audio streaming. ◆

Marc Guillaumet is the marketing director at ATEME in Paris. He joined the company in 1992 as employee number 10. After seven years managing technical staff and development projects, he created and developed ATEME's Products business unit. He has a leading role in defining the product development roadmap and heads the technical and marketing aspects of the unit.


Tips for Your Development

How do you get started? What traps should you be aware of? How can you dramatically reduce your time to market?

A traditional design cycle comprises three stages: architecture definition, prototype realization, and production.

In the first stage, architecture definition, you must choose the compression algorithm and the target processor. You must also evaluate the application performance on the chosen target to design the right system architecture.

When developing the application prototype, you must design a hardware platform, develop the DSP application software, and integrate and validate it with the host software. Then you can validate the solution you've chosen using functional and performance tests.

The way to reduce those two steps, saving time and money, is to use an open application development platform, like VSIP, a complete reference design kit (or application development platform) that includes a board based on the TMS320DM642™ digital media processor. Furthermore, if you're moving from another technology to DSP technology, a complete application development platform substantially reduces the risk.

When the platform meets the fundamental requirements of the final product, architectural definition becomes unnecessary, since the compression algorithms (MPEG-4, ADPCM), the processing libraries (motion detection, preprocessing, audio event detection, and the like), the streaming libraries, and the target processor have already been chosen and the architectural design already done. In addition, end-product demos are available for evaluation purposes, using PC software to control and display the DSP application "results."

As for prototype realization, the initial hardware design becomes unnecessary because the development kit provides the operational hardware platform. In addition, all the other tools needed for development—DSP and FPGA development tools and evaluation versions of the video and audio compression algorithms—are included in the kit, and the manufacturer provides technical software and hardware support.

An open application development platform also smoothes the production stage by providing reference design schematics and software that is easy to port.

Comparing Video Compression Standards

                                     JPEG/Motion JPEG   MPEG-2            MPEG-4            H.264
Bit rate for a given quality         - (15–20 Mb/s)     + (5–6 Mb/s)      ++ (3–4 Mb/s)     +++ (2–3 Mb/s)
  (typical bit rate for D-1)
CPU load (note 1)                    100%               200%              200%              500%–800%
Fast-forward, fast-backward          ++                 -                 -                 -
Camera remote control (latency)      ++                 VBR: +  CBR: -    VBR: +  CBR: -    VBR: +  CBR: -
Error resilience                     ++                 -                 +                 +
Court usage (as evidence)            Yes                No                No                No
Channel-hopping delay                ++                 +/- (note 2)      +/- (note 2)      +/- (note 2)

Note: VBR = variable bit rate, the mode that allows a constant quality factor. CBR = constant bit rate, the mode that allows a constant bit rate for the compressed stream.
1. The CPU load values are relative to the Motion JPEG reference.
2. Depends on the GOP size.


Building Digital Video Surveillance Systems: Beyond MPEG-4

Advanced technologies deliver high-performance DSP-based surveillance systems that can be implemented efficiently.

By Joel Rotem

Over the last three years, the digital video surveillance market has shown an unprecedented boom. A search on the term "digital video surveillance" yielded 993,000 results on Google. The reasons for this surge range from the responses to 9/11 to the rise of IP infrastructure and the availability of high-performance, low-cost video processors.

A wide range of products have appeared on the market, but they can be roughly divided into two types: DVRs and IP nodes.

DVRs replace the classic videotape recorders and are placed in the control center of the surveillance system. Multiple analog video inputs are connected to the DVR (typical numbers range from 4 to 32 inputs). The DVR converts the analog signal into digital, then compresses the video using MPEG or MPEG-like algorithms to reduce the data bandwidth and to store the video on hard disks. Though not restricted to working with analog cameras, DVRs are highly popular in legacy analog systems.

IP nodes are used when building an IP-based surveillance system. Instead of running analog video cables throughout the protected facility, you can use the Ethernet infrastructure, which can be either wired ("wireline") or wireless (although the latter is more susceptible to outside interference). The IP node consists of one or more cameras whose output is digitized, compressed, and sent over the network to a server, which records the data on a hard disk.

Figure 1: The TMS320DM642 digital media processor is highly suited for building an IP node, offering audio and video, Ethernet, and generic I/O in one device. [Block diagram: a CMOS sensor (IP digital camera system) or analog video decoders (digital IP node for an analog camera system) feed the DM642's video ports VP0–VP2 over BT.656; the EMIF connects SDRAM, flash, an FPGA, and an optional hard disk for video storage; an audio codec, ADCs, four GPIO alarm lines, an RTC, power regulation (battery or mains supply), JTAG, and an Ethernet PHY on the EMAC complete the platform.]


An infinite number of variations of the two devices exist, the IP camera being a notable one. This device incorporates a digital camera and some form of video compression and IP stack and produces either 10/100 Ethernet or WiFi output. Low-cost IP cameras are becoming increasingly popular in low-end home security systems.

System components can cost from $50 per camera up to $1,000 per channel. What, then, are the qualities that differentiate high- and low-end surveillance systems? The answer is, high-performance systems use advanced technologies, such as video stabilization, video motion detection, and encryption and watermarking.

Consider the basic solution shown in Figure 1. The system is based on Texas Instruments' TMS320DM64x™ digital media processor, which runs at up to 720 MHz and has dedicated video and audio ports, as well as an Ethernet MAC.

VIDEO QUALITY
The first parameter that differentiates systems is the video quality. Although analog cameras record full-frame-rate video (30 frames per second in the United States, 25 in Europe), the resolution is typically low. VHS tapes can record only about 240 lines of video, with the image "stretched" to full-screen resolution.

Digital video systems typically use one of two resolutions: CIF (Common Intermediate Format), which corresponds to VHS quality, and full D-1 (720x480), which is the top quality achievable on a DVD. The latter is about four times greater than the former. As a result, encoding a full D-1 channel requires four times the processing power, bandwidth, and storage space, translating into a proportionally higher cost per channel.

Although CIF resolution is more than enough for most indoor surveillance applications, higher resolution is needed in outdoor security covering large areas or in casinos, where sleight of hand is a major issue and systems must pick up the detail on a player's card. Likewise, the number of frames per second may vary from as low as 2 frames per second in indoor surveillance to the full 30 frames per second in casino surveillance systems.

The number of frames encoded must be divided among the number of cameras in the system. The typical compression performance of MPEG-4 is 120 CIF frames per second for a 600-MHz DM642 processor. This performance can be used for one full-frame, full-rate channel, for example, or four half-resolution (2CIF) encoders at 15 frames per second.
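Those figures turn into a simple back-of-the-envelope budget: treat the encoder's capacity as a pool of CIF frames per second (120 for a 600-MHz DM642, per the text above), count a D-1 frame as four CIF and a 2CIF frame as two, and check that cameras times frame size times frame rate stays inside the pool. The helper below is a hypothetical illustration of that arithmetic, not a profiling tool.

/* Back-of-the-envelope encoder budgeting in "CIF frames per second".
 * One D-1 frame counts as 4 CIF, one 2CIF frame as 2 CIF.                  */

#include <stdio.h>

#define DM642_600MHZ_BUDGET 120  /* CIF frames/s of MPEG-4 encoding (from the text) */

static int fits_budget(int cameras, int cif_units_per_frame, int fps)
{
    return cameras * cif_units_per_frame * fps <= DM642_600MHZ_BUDGET;
}

int main(void)
{
    /* One full-rate D-1 channel: 1 x 4 x 30 = 120, exactly filling the budget. */
    printf("1 x D-1  @ 30 fps: %s\n", fits_budget(1, 4, 30) ? "fits" : "too much");

    /* Four 2CIF channels at 15 fps: 4 x 2 x 15 = 120, which also fits.         */
    printf("4 x 2CIF @ 15 fps: %s\n", fits_budget(4, 2, 15) ? "fits" : "too much");

    /* Sixteen CIF channels at 7 fps: 16 x 1 x 7 = 112, fits with headroom.     */
    printf("16 x CIF @  7 fps: %s\n", fits_budget(16, 1, 7) ? "fits" : "too much");

    return 0;
}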

To complicate matters, many times a surveillance node is required to process two compression rates on a single stream, one high-performance stream for video monitoring (real-time viewing) and another, small-size, low-fps stream for recording.

The video bit rate is a major concern for any application. A full uncompressed video stream requires more than 100 Mb/s to store or stream. You can reduce the transmission rate to about 2 to 4 Mb/s without a major loss of quality by using an MPEG-4 codec. The new H.264 codec promises to reduce the stream an additional 33 percent.

Video encoding latency can also be a major issue. Many times, a controller will want to track a subject remotely by controlling the camera's motion (PTZ axis command). That's impossible if the latency of the encoding and transmission is too high, since the target will be long gone. Low-latency encoding is typically achieved by avoiding use of the B frame feature in MPEG encoders. This results in less effective compression, but it provides good latency. Again, using one stream for monitoring and another stream for recording can overcome this problem, although it requires extra DSP resources.

ADVANCED VIDEO FEATURES

Stabilization
Perceived video quality is affected by other algorithms besides the video compression. Video from security cameras, especially outdoor cameras, often shakes. Shaking comes from many sources: swaying buildings, wind, cameras mounted near air vents, PTZ servos, and such all generate shaky and unsteady video. This effect is amplified by high-power zoom lenses.

Figure 2: Picking up license plates for access control is a great example of how image stabilization adds significant functionality to a video surveillance system, converting raw, shaky video from a pole-mounted camera (left) to steadied video (right).

Removing the shaking reveals cleaner details (Figure 2), which are required for both good- and high-quality security video. Shaky video tends to compress very badly, resulting in further loss of details. Stable images result in greater compression and higher quality for remote and Internet viewing. Also, modern digital compressors use many bits to encode the moving features of a video. If the whole image shakes, everything is moving, wasting an enormous number of bits. That's another reason why removing the shaking enables digital recorders to store more, higher-quality video.

Several companies provide video stabilization algorithms for the TMS320C6000™ DSP platform that use as little as 5 percent of the DSP's processing power per channel.

Video Motion Detection
Video motion detection (VMD) is quickly becoming one of the hottest technologies in the homeland security market. With tens of millions of CCTV cameras in the world, it's impossible to have a person view each one and detect intruders. VMD algorithms provide artificial-intelligence-like performance in detecting intruders by analyzing a video stream and overcoming vibration, wind, dust, and light changes, as well as minor interference by, for example, small animals. Combining VMD with a DVR can save considerable disk space by recording only when motion is detected (or increasing the encoder bandwidth), in addition to alerting security forces. VMD algorithms typically consume 5 to 20 percent of a DM642's power, depending on the use of stabilization and the robustness of the VMD. Combining VMD with traditional sensors, like tripwires and volume detectors, can significantly increase detection rates and reduce false positives.
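Commercial VMD layers stabilization, background modeling, and object filtering on top, but the core idea can be shown in a few lines: compare each incoming frame with a reference and flag motion when enough pixels change by more than a threshold. The sketch below is only that bare frame-differencing core, with arbitrary illustrative thresholds, and is not a production detector.

/* Minimal frame-differencing motion detector for one grayscale CIF frame. */

#include <stdlib.h>

#define W 352
#define H 288
#define PIXEL_THRESHOLD   25             /* per-pixel change that counts      */
#define CHANGED_THRESHOLD (W * H / 100)  /* >1% of pixels changed -> motion   */

/* Returns 1 if motion is detected against the reference frame, else 0, and
 * refreshes the reference so slow lighting changes are absorbed over time.  */
int detect_motion(const unsigned char *frame, unsigned char *reference)
{
    int i, changed = 0;

    for (i = 0; i < W * H; i++) {
        if (abs((int)frame[i] - (int)reference[i]) > PIXEL_THRESHOLD)
            changed++;
        /* Slowly blend the current frame into the reference (simple running
         * average) so gradual light changes don't trigger false alarms.      */
        reference[i] = (unsigned char)((reference[i] * 7 + frame[i]) / 8);
    }
    return changed > CHANGED_THRESHOLD;
}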

Algorithms for another form ofvideo detection, smoke and firedetection, are being added to iden-tify physical danger.

Several companies are now taking the intelligence of these systems to the next level by developing content analysis tools that can respond to such instructions as “Retrieve the video where a blue car appeared.” The tools analyze and tag video in real time using metadata defined by the new MPEG-7 standard.

Two important features for protecting the system from outside interference are encryption and watermarking.

Data encryption is important to prevent outside parties from viewing the video by simply connecting to the network with a sniffer. Encryption can maintain the privacy of the people within the protected area, as well as prevent outsiders from interfering with the system data without detection.

Although traditional analog video encryption is based on a scrambling technique that rotates the order of video lines, digital surveillance systems can rely on advanced generic data encryption technology, such as the triple DES system developed by IBM or the new Advanced Encryption Standard (AES).
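As a sketch of where encryption sits in the data path, the routine below encrypts a compressed video packet with AES-128 in CBC mode before it leaves the node. It uses OpenSSL's EVP interface purely for illustration; a DSP-based node would more likely use an AES library optimized for the C64x core.

#include <openssl/evp.h>

/* Encrypt one compressed video packet before transmission or storage.
   The output buffer must hold in_len plus one AES block of padding.
   Returns 1 on success, 0 on failure. */
int encrypt_packet(const unsigned char *in, int in_len,
                   unsigned char *out, int *out_len,
                   const unsigned char key[16], const unsigned char iv[16])
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len = 0, total = 0;

    if (!ctx)
        return 0;
    if (EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv) != 1 ||
        EVP_EncryptUpdate(ctx, out, &len, in, in_len) != 1) {
        EVP_CIPHER_CTX_free(ctx);
        return 0;
    }
    total = len;
    if (EVP_EncryptFinal_ex(ctx, out + total, &len) != 1) {
        EVP_CIPHER_CTX_free(ctx);
        return 0;
    }
    total += len;
    *out_len = total;
    EVP_CIPHER_CTX_free(ctx);
    return 1;
}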

Watermarking preserves the authenticity of the video recorded by the system. Digital images and videos can be manipulated easily, diminishing the integrity of the content. Watermarking solutions embed a digital signal containing the secure hash value of the image in the digital image or video. The watermark is invisible to the naked eye and doesn't alter the value of the content. Any attempt to manipulate even a single frame will be detected. A secret key is needed to access the information embedded within the watermark, adding another layer of security. The date and time of capture is stamped on each video frame; therefore, any attempt to remove frames from the video stream can be detected.
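A toy sketch of that fragile-watermarking idea: hash the frame together with a secret key and the capture timestamp, then tuck the hash bits into pixel LSBs. The hash and embedding scheme here are invented for illustration; commercial solutions use cryptographic hashes and key-derived embedding patterns.

#include <stdint.h>
#include <stddef.h>

/* FNV-1a hash over the frame with LSBs masked out, mixed with a secret key
   and the capture timestamp, so the embedded bits don't hash themselves. */
static uint32_t frame_hash(const uint8_t *pix, size_t n,
                           uint32_t key, uint32_t timestamp)
{
    uint32_t h = 2166136261u ^ key;
    for (size_t i = 0; i < n; i++) {
        h ^= (uint32_t)(pix[i] & 0xFEu);   /* ignore the LSB we will overwrite */
        h *= 16777619u;
    }
    h ^= timestamp;
    h *= 16777619u;
    return h;
}

/* Embed the 32 hash bits in the LSBs of the first 32 pixels (n must be >= 32). */
void embed_watermark(uint8_t *pix, size_t n, uint32_t key, uint32_t timestamp)
{
    uint32_t h = frame_hash(pix, n, key, timestamp);
    for (int b = 0; b < 32; b++)
        pix[b] = (uint8_t)((pix[b] & 0xFEu) | ((h >> b) & 1u));
}

/* Any change to a watermarked frame makes the stored and recomputed hashes
   disagree. Returns 1 if the frame is intact. */
int check_watermark(const uint8_t *pix, size_t n, uint32_t key, uint32_t timestamp)
{
    uint32_t stored = 0;
    for (int b = 0; b < 32; b++)
        stored |= (uint32_t)(pix[b] & 1u) << b;
    return stored == frame_hash(pix, n, key, timestamp);
}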

Watermarking is becoming essential both for preventing outside parties from interfering with the video and for proving to the courts that recorded digital video is reliable and can be used as reputable evidence.

CONNECTIVITY
Aside from video monitoring and archiving, a surveillance system needs to provide much more functionality, such as connectivity. A swivel camera can be controlled (PTZ) via a serial port to allow remote control by security personnel at the control center.

Audio can be sent to and from the DSP using the DM64x's McBSP serial ports, providing audio recording and two-way intercom capabilities. The GPIO pins can be used to control alarms, connect to traditional intruder detection devices (volume detectors, tripwires, photo-optic sensors, and the like), and control doors.

When connected to the correct input hardware, the DSP can be used to process biometric information. Most notably, you can add fingerprint recognition to a security node to provide highly effective access control.

POWER CONSUMPTION
Using DSP technology, it's possible to create compact, highly dense solutions with a large number of streams. Unlike Pentium-based solutions, which are inherently large and consume 50+ W per full video channel, DSP-based solutions can produce small IP nodes and DVRs with dozens of high-performance channels.

Another benefit of low power consumption is the limited use of fans for cooling. As a rule, the more heat a system produces, the lower the MTBF. Fans further reduce the MTBF, since they are notoriously unreliable, and once they malfunction, the system is a short way from burning up.

A new technology being introduced for IP nodes is IEEE 802.3af, power over Ethernet. Devices using less than 10 W can receive their power through the IP network with no need for additional wiring. This technology can considerably reduce infrastructure costs. Needless to say, low-power devices are much easier to run off batteries, allowing the system to keep working during a power outage or if someone tampers with the wiring.

There are thousands of IP nodes and DVRs on the market, as well as companies offering various tools and technologies to build such solutions. The solutions can be roughly divided into PC-based and embedded.

WHAT'S OUT THERE?
Hundreds of companies offer PCs equipped with video capture cards that can be used as DVRs. Some use hardware-based compression to overcome the performance limitation of the Pentium. In general, these solutions offer limited performance and reliability but very competitive pricing for the DVR market.

The embedded solutions are based on ASICs or DSPs performing the video processing. ASIC-based solutions typically use MPEG-2 compression with limited features (using ICs developed for entertainment by such companies as Zoran and C-Cube). DSP-based solutions, like the one described here, can be developed from scratch or by using tools such as TI's DM642 Evaluation Module (EVM) and third-party video encoders, such as UBVideo's or Prodys's, plus an audio codec and other useful libraries. Mango DSP takes a different approach, providing complete DSP-based programmable platforms, like the four-channel RAVEN-X IP node or the eight-channel LARK DVR.

As always, each solution has its merits, depending on performance, features, channel cost, and time to market. ◆

Joel Rotem is the chief applications engineer at Mango DSP Inc. in San Jose. Before joining the company, he cofounded DV-Demand, which specialized in MPEG-2 and DVD technology, and has filed two patents in these fields. Prior to that, he worked at Waves Inc.


VXS: A High-Performance VMEbus Switched Serial Fabric

VXS promises to solve even the most challenging data transfer requirements of future real-time embedded systems.

By Rodger Hosking

Now well into its third decade of widespread deployment, VMEbus continues as the dominant bus structure for high-performance embedded systems. In an industry characterized by a steady succession of new device offerings with speed and density increases every few months, VMEbus has retained this leadership position not simply because it was based on a sound electrical and mechanical architecture. Indeed, the major reason for its longevity has been a series of performance and feature enhancements promoted and nurtured by a broad base of VMEbus vendors.

VXS, a switched serial backplane fabric for VMEbus defined by VITA specification 41 (www.vita.com), continues that history. It will satisfy the most challenging data transfer requirements of real-time embedded systems as faster processors and interfaces emerge and ensure the continuing popularity of VMEbus by allowing legacy boards to coexist with the new VXS boards as they become available.

INNER WORKINGS OF SWITCHED FABRICS
A switched fabric is a system for connecting devices to support multiple data transfers simultaneously and is usually implemented with a crossbar switch. Data is sent in packets, with information contained in the packet header for identification, routing, and error detection and correction. To ensure adequate performance for any given system, the interconnecting fabric can be as simple as a point-to-point connection between two devices or a more complicated architecture that may include switches, routers, hubs, and repeaters. One familiar example of a switched parallel fabric is RACEway.
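As a generic illustration of what travels over such a fabric, a packet carries routing and integrity information ahead of its payload. The field layout below is invented for clarity; it is not the VITA 41, RapidIO, or any other standard's actual packet format.

#include <stdint.h>

/* A generic switched-fabric packet header -- illustrative only. */
typedef struct {
    uint16_t dest_id;      /* routing: which endpoint the switch should deliver to */
    uint16_t src_id;       /* identification: which endpoint sent the packet       */
    uint16_t payload_len;  /* number of payload bytes that follow                  */
    uint16_t flags;        /* e.g., priority or transaction type                   */
    uint32_t crc;          /* error detection over header and payload              */
} fabric_pkt_hdr_t;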

Because of recent advances in serial data technology, the new generation of switched fabrics uses serial links. With data bit rates now in the gigahertz range, the new serial interfaces can easily rival their parallel counterparts. In many cases, the transition from parallel to serial occurs only at the lowest levels of the OSI seven-layer model. In this way, existing protocols are maintained so that legacy products with parallel interfaces can be supported with hardware adapters that convert the physical layer interface into the new serial link.

This strategy has been extremely successful in allowing the new serial technology to be inserted seamlessly. One excellent example is the migration of parallel flat-cable SCSI to serial Fibre Channel as the interface of choice for the latest generation of high-performance hard disk drives and disk arrays. The same SCSI protocol is employed in both schemes.



One of the major benefits of the new serial interfaces is the reduced number of signal lines and smaller connectors and cable. This benefit translates into enhanced system density, simpler system integration, lower installation costs, and easier maintenance. Another benefit is the ability to use copper cables for low-cost local connections or optical cable for fast, long-haul data transmission channels. Again, the physical layer can be made completely transparent to the protocol layer.

Yet another benefit of serial links is the ability to gang multiple serial links together to boost data throughput. Since the signal within a single bit link contains embedded clock and timing information, each link can propagate on its own across the channel, and transceivers at each end can handle the multiplexing and demultiplexing for 1x, 4x, 8x, or 16x ganging in low-level hardware layer devices.

ATTRACTIVE ALTERNATIVES
Once the benefits of switched serial fabrics became apparent, embedded systems developers sought ways to take advantage of the technology for a wide range of interconnection needs: boards to peripherals, boards to boards, chassis to chassis, and facility to facility. Not only are switched serial fabrics attractive alternatives to existing technology for front-panel interconnections, they also are extremely appropriate for backplane data traffic to augment or replace the conventional parallel backplane bus.

Embedded system vendors are now faced with choosing from the five contending switched serial fabrics defined in the VITA 41 spec: HyperTransport, InfiniBand, PCI Express, RapidIO, and StarFabric. These fabrics were chosen because of their wide acceptance in the industry and because each is technically capable of meeting the needs of the spec.

HyperTransport (www.hypertransport.org) is Advanced Micro Devices' solution for chip-to-chip and board-to-board connections. A universal interconnect, it replaces and improves upon existing multilevel buses used in such systems as personal computers, servers, and embedded systems, while maintaining software compatibility with PCI. HyperTransport supports asymmetric, variable-width data paths.

InfiniBand (www.infinibandta.org) is an industry-standard, channel-based switched fabric designed primarily for server and storage system connectivity for box-to-box links. It seeks to fulfill the need for greater data center reliability, availability, and scalability, as well as greater design density for servers.

Intel's initiative for connectivity between processors and boards, PCI Express (www.pcisig.com) is compatible with the current PCI software environment. It supports chip-to-chip, board-to-board, and adapter interconnections and is aimed at the next generation of computing and communications platforms.

RapidIO is targeted at chip-to-chip and board-to-board connections for real-time COTS embedded systems and has strong support from Motorola. It's promoted by the RapidIO Trade Association, whose founding members are, in addition to Motorola, Alcatel, EMC, Ericsson, IBM, Lucent, Mercury Computer Systems, Texas Instruments, and Tundra Semiconductor.

Originally developed by StarGen, a fabless semiconductor company, with the assistance of the StarFabric Working Group, StarFabric (www.starfabric.org) targets backplane and chassis-to-chassis applications and supports multiple classes of traffic. Its strength lies in providing transparent serial links between PCI devices.

These five fabrics are all vying for position. Aside from some valid technical pros and cons for each, the key issues tend to be business ones. For example, which major vendors are backing each standard? How easily can these new fabrics be integrated into existing software operating system environments? What components are available for bridging to existing hardware and processors? What kinds of switches are available? Finally, can the fabric technology components achieve sufficiently high volume production to make the parts inexpensive, power-efficient, and easily connected?

RAPIDIO FOR REAL-TIME APPLICATIONS
Nevertheless, one of the switched fabrics, RapidIO, is especially well suited to real-time embedded systems.

The objectives of this high-performance standard are fast interprocessor communication, DSP networking, and high-speed backplane interconnects, as well as efficient chip-to-chip and board-to-board transfers. These goals are addressed with scalable serial bit rates of up to 10 Gb/s. Performance and efficiency are achieved through a combination of a low-overhead protocol and hardware error detection and correction. By offloading these tasks from the processor, RapidIO is extremely well suited for real-time applications, where shared coherent memory, channel predictability, and low latency are essential.

At the physical layer, RapidIO uses the same differential current-mode signaling as such other standards as Fibre Channel, InfiniBand, and IEEE 802.3 XAUI.

During the last few years, the VITA 41 committee of the VMEbus Standards Organization has been defining a switched serial backplane fabric for VMEbus called VXS. The specification defines a VXS payload card, a VXS switch card, and a connector scheme for various possible backplanes to support VXS.

‘FABRIC-AGNOSTIC’
Because of the “fabric wars,” the VXS specification was defined to be fabric-agnostic, in that there are five subspecifications, one for each of the five fabrics described above. The basic switched fabric architecture chosen to connect the boards across the backplane is a ganged 4x, full-duplex serial channel. Each interconnect thus supports data flow in both directions simultaneously.

Although serial bit rates are defined up to a maximum of 10 Gb/s, the first systems support lower frequencies. With the 4x ganging and a nominal bit frequency of 2.5 GHz, both the input path and the output path of these systems are capable of moving data at 1 GB/s.

VXS CARDS
The VXS payload cards are processor, CPU, memory, and data converter 6U VMEbus cards with a VXS interface. They have standard P1 and P2 connectors that implement the standard VME64x backplane interface. A new P0 backplane connector mounted between P1 and P2 handles two 4x, full-duplex switched serial ports (Figure 1a).

At a 2.5-GHz clock frequency, each VXS payload card can move data in and out at an aggregate rate of 4 GB/s, two orders of magnitude above the original VMEbus backplane specification.
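The arithmetic behind these figures is straightforward, assuming the 8b/10b line coding used by the serial standards cited earlier; the coding overhead is an assumption, since the article does not spell it out.

#include <stdio.h>

int main(void)
{
    const double lane_gbps  = 2.5;  /* nominal bit frequency per serial lane       */
    const double lanes      = 4.0;  /* 4x ganging                                  */
    const double coding_eff = 0.8;  /* 8b/10b line coding: 8 data bits per 10 bits */

    double gbytes_per_dir = lane_gbps * lanes * coding_eff / 8.0;  /* per direction */
    double ports          = 2.0;    /* each payload card has two 4x ports          */
    double directions     = 2.0;    /* full duplex                                 */

    printf("Per direction, per port: %.1f GB/s\n", gbytes_per_dir);        /* 1.0 */
    printf("Aggregate per payload card: %.1f GB/s\n",
           gbytes_per_dir * ports * directions);                           /* 4.0 */
    return 0;
}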

The VXS switch card has a 6U VME board form factor, but unlike the payload card, it has no P1 and P2 connectors. Instead, the space normally used for the P1 and P2 connectors along the rear edge of the board is populated with a power connector and connectors that handle up to eighteen 4x full-duplex serial links (Figure 1b). The VXS switch card implements the crossbar switching to connect payload cards together.

Figure 1: As this simplified view of the VXS payload card shows (a), it has standard P1 and P2 connectors that implement the standard VME64x backplane interface, plus a new P0 backplane connector mounted between P1 and P2, which handles two 4x, full-duplex switched serial ports. The VXS switch card implements the crossbar switching to connect payload cards together. It has a 6U VME board form factor and a power connector and connectors that handle up to eighteen 4x full-duplex serial links, instead of P1 and P2 connectors (b).

VXS switch cards can have any number of crossbar switches and any number of serial ports. They may also include other interfaces to networks for communication and storage devices, as well as front-panel serial ports to other VXS switch cards in the same chassis or in adjacent racks. Optical serial ports could be used for remote high-speed data transfers.

VXS BACKPLANE
The VXS backplane can have many different layouts to accommodate specialized system needs, but it will normally handle payload cards and one or more switch cards. The standard board-to-board pitch of 0.8 in. is maintained throughout, and other VMEbus card-cage mechanical hardware (card guides, frames, and so on) is compatible. The objective is to connect the two 4x serial links of each payload card to links on the switch card or cards to support the necessary board-to-board connectivity. Some smaller systems may require only a few payload slots and a very simple switch card; others may need to use a full-width backplane and multiple switch cards to handle the required traffic.

Figure 2: This example of a 20-slot VXS backplane holds 18 payload cards divided equally in each half and two switch cards occupying the two center positions. One serial link of each payload card is wired to one of the switch cards; the second link is wired to the other.

Since there is a maximum of 18 serial link connections on each switch card, all 18 payload cards can be connected to each other through two redundant paths, namely, through both of the two switch cards. This dual redundancy is attractive for many applications requiring fault tolerance and high availability. The two switch cards also have additional serial links that join switch cards together, providing yet another path for routing. ◆

Rodger Hosking ([email protected]) is the vice president and a cofounder of Pentek, Inc. in Upper Saddle River, N.J. He is currently responsible for new product definition, marketing and sales activities, and strategic alliances with third-party hardware and software vendors. Previously, he was an engineering manager and project engineer at Wavetek and Rockland Systems. He has presented numerous papers at technical conferences and written articles for industry publications.




Launchings

Four-Channel MPEG-4 CCTV Network Node Hits D-1 Resolution
Targeting CCTV video surveillance and other networked video applications, the RAVEN-X IP quad-channel MPEG-4 node offers full D-1 resolution thanks to its use of two TMS320DM642 digital media processors. The twin processors perform MPEG-4 video and audio compression as well as IP stack functions. The processors also can run motion-detection alarm and door control functions. The system accepts four composite video, four stereo audio, and eight programmable signals. Two Ethernet connections provide video transmission over IP and allow remote control and monitoring. Suitable for indoor or outdoor use, the node touts programmable frame rates and image sizes for each camera. The system is only 5x5 in. and consumes just 8 W. Multiple nodes can be linked with digital video recorders to expand the surveillance area. The RAVEN-X sells for less than $1,000 each in low volumes. Mango DSP Inc., San Jose, Calif.; (866) 686-2646, [email protected]

Visual Design Platform Serves Embedded Control Systems
Embedded Target for TI's TMS320C2000 DSP platform lets embedded control system engineers visually design, implement, and verify real-time control and signal processing algorithms, thus providing a direct path from design environment to implementation. The platform features automatic code generation and peripheral programming support for Simulink, an interactive tool for modeling, simulating, and analyzing dynamic systems. It also works with the Code Composer Studio Integrated Development Environment, with the help of MATLAB Link for Code Composer Studio. List prices for Embedded Target for the C2000 platform start at $4,000. The MathWorks, Inc., Natick, Mass.; (508) 647-7000, www.mathworks.com

64-bit PCI Processor Board Boasts Dense FPGA, DSP, and 4 GB of SDRAM
The Presence II PCI64-NP packs a Virtex-II FPGA with up to 6 million system gates, a 250-MHz TMS320C6203 DSP, and up to 4 GB of SDRAM on a PCI v2.2, 64-bit universal card. Other resources are dual independent 133-MHz, 9-bit ZBT memories; dual independent, bidirectional 4-bit LVDS high-speed data channels; a mezzanine expansion slot for audio and video processing; BIOS; and several drivers. Cybula Ltd., Fimber, East Yorkshire, U.K.; +44 (0) 1377 236 382, www.cybula.com

ATM SAR Arrives for Wireless Infrastructure
An ATM SAR targets wireless infrastructure, runs on a TMS320C64x DSP generation processor, and comprises a complete stack and AAL2, AAL5, and ATM functions. Configurable options include the number of virtual channels, configuration of ATM channels for transmitting and receiving, error handling and logging, and transmit, receive, and reassembly buffering. The SAR complies with eXpressDSP Software Development Tools and ITU-T recommendations and sells for $15,000. eInfochips, Inc., Santa Clara, Calif.; (408) 496-1882, www.einfochips.com

VoIP and FoIP Packages Turn to 1-GHz DSP
The OpenEndpoint suite of VoIP and FoIP software now includes versions that are optimized for TI's 1-GHz TMS320C64x DSP generation. Harnessing the DSPs, the products achieve a 192-channel capacity. Included in the upgrade are a T.38 fax relay; G.726, G.723.1, and G.729a/b vocoders; and G.168 LEC; as well as call classifier, caller ID, and comprehensive signaling functions. Combined hardware and software price points are under $2.50 per channel. Commetrex Corporation, Roswell, Ga.; (770) 449-7775, ext. 420, www.commetrex.com

Multimedia Software and Board Make Processing Leap
A wide selection of software codecs, transraters and transcoders, supporting software, and a hardware reference design have been crafted for the 1-GHz TMS320C64x DSP generation. The boost in horsepower allows real-time, high-resolution MPEG-2 video processing on one card. Among the video codecs available are ones that perform H.264, MPEG-4 ASP, MPEG-4 SP, MPEG-1, DivX, WMV9, or MJPEG processing. The MP4900-BRD-C6415 hardware reference design allows software development on a standard PC having a PCI bus architecture and running Windows or Linux. Ingenient Technologies, Inc., Rolling Meadows, Ill.; (847) 357-1980, www.ingenient.com

PCI Board Captures and Plays Data at 2 GB/s
Lobo, a PCI plug-in platform, performs customized 2-GB/s digital capture and playback. The 32/64-bit card combines a TMS320C6713 DSP; a 6-million-gate Virtex-II FPGA; up to 8 GB of DDR memory; and two high-speed, flexible digital I/O interfaces, each consisting of 40 LVDS pairs. The I/O interfaces are configurable by firmware to handle custom or such standard protocols as FPDP, FPDP2, or ChanneLink. A Lobo development package, which includes the Lobo card, JTAG emulator, and the Code Composer Studio integrated development environment, sells for $12,170. The quantity price for Lobo alone is $7,000 each. Innovative Integration, Simi Valley, Calif.; (805) 520-3300, www.innovative-dsp.com

H.264 Video Conference Codec Turns to TMS320DM642
The BC-264 codec for IP-based video conferencing executes the H.264 baseline profile video encoder on one TMS320DM642 digital media processor, affording full D-1 (720x480-pixel) two-way communication. It can handle bit rates that are constant or that vary between 64 kb/s and 4 Mb/s. The codec fully accommodates NTSC and PAL formats with frame rates of 30 frames/s and 25 frames/s, respectively. The price depends on configuration. W&W Communications, Inc., Sunnyvale, Calif.; (408) 481-0264; www.wwcoms.com

Conference Call Chip with 1-GHz DSP Manages 768 Channels
Version 2 of a high-density conference chip, powered by a 1-GHz TMS320C64x DSP generation processor, gives telephone systems up to 768 channels of teleconferencing. Because it can accommodate both TDM and packet channels, the chip operates in TDM, VoIP, and mixed systems. Other features include voice recording and playback, tone detection and generation, and interactive voice response. The price depends on configuration. Adaptive Digital Technologies, Inc., Conshohocken, Pa.; (610) 825-0182, www.adaptivedigital.com

Video Codec and Software Gain TMS320DM642 Port
The VP6 video codec and TrueCast Player software have been ported to the TMS320DM642 digital media processor, enabling devices that contain the processor, like set-top boxes, personal video recorders, and digital media receivers, to stream and play high-quality VP6-encoded content over an IP network. On2 Technologies, Inc., New York; (646) 292-3533, www.on2.com

Module Adopts 1-GHz TMS320C6416 DSP
The SMT365G module includes a 1-GHz TMS320C6416 DSP, a 2-million-gate Virtex-II FPGA, up to 256 MB of memory, and two 32-bit high-speed bus interfaces for I/O or inter-DSP communication. The board works with the Code Composer Studio integrated development environment, multiple DSPs, and the 3L Diamond RTOS. The SMT365G starts at $2,995. Sundance DSP Inc., Reno, Nev.; (775) 827-3103, www.sundance.com

Mark Your Calendar for the Next TI Developer Conference
Attend the signal processing event of the year, Feb. 15-17, 2005, in Houston, Texas.

Get In-Depth Technical Information
The TI Developer Conference has more than 100 technical sessions, including hands-on labs, in signal processing technologies, such as high-performance analog (HPA), digital signal processing (DSP), Digital Light Processing™ (DLP), FPGA, and microcontroller.

Meet Industry and TI Experts
Experts will be available to answer your questions and curiosities about specific applications, hardware, software, and tools and to discuss the future of the signal processing industry.

For more information visit www.tidevcon.com/na.


On the Edge

Emerging Video Apps Need Programmability and Flexibility

By Pradeep Bardia

The demand for digital video applications has grown considerably over the last few years and should continue to do so. However, a major problem for equipment manufacturers is that by and large their products don't meet the various existing video coding standards, much less accommodate emerging standards.

Hence the two major features needed today for the successful deployment of new digital video equipment are software programmability and system flexibility.

Programmability enables consumers to download different video codec formats directly onto their end products. System flexibility enables them to switch from one digital media standard to another or even run several simultaneously.

Four distinct markets for digital video equipment exist today: video telephony, including video conferencing and IP-based video telephones; surveillance, including networked intelligent cameras and digital video recorders; consumer streaming-media appliances, including set-top boxes, personal video recorders, and digital media receivers; and professional-grade broadcast systems, including broadcast-quality encoders and multiplexers, which process many channels of streaming video, as well as video transport and delivery in head-end systems.

No one standard could meet the different requirements of all these applications. As a result, several video compression technologies are currently used. Video telephony systems are based primarily on the ITU H.263 standard. Surveillance systems use ISO JPEG/MJPEG and MPEG-4. Consumer streaming-media appliances use ISO MPEG-2, along with proprietary video codec technologies. Professional-grade broadcast systems support MPEG-2. In addition, the emerging Windows Media Series 9 standard, from Microsoft, and the H.264 Main Profile are challenging MPEG-2 and MPEG-4, respectively.

Because new standards are continually being developed, products must be able to be upgraded easily via quick software downloads. By making that possible, software programmability increases a digital video product's shelf life. It also increases its viability in the North American, European, Japanese, and Asian markets, since a manufacturer could launch the same hardware but with different software for each one. As an added benefit, it lowers the manufacturer's overhead, since the customers themselves install the software patch or codec upgrade, obtained over the Internet, rather than having it done by a company technician, reducing the cost of support, troubleshooting, and new upgrades.

Chips that support video for these applications also simultaneously support audio and network streaming technologies. If the chips are fully software-programmable, products based on them enable customers to select any audio codec and any streaming format at any time, as well.

Take video conferencing. The sluggish emergence of this application is due not only to slow broadband growth and limited available bandwidth, but also to the video compression technology standards on which it was based: ITU H.261 and H.263. The latest video conferencing products now support the newer H.264 standard, which uses half the bandwidth required by the earlier standards, offers excellent video quality, and supports video streaming as well as many error-resilient features. Because the previous-generation products didn't support the new standard, OEMs whose products weren't software-programmable had to design a new board or a new product that provides backward compatibility for the H.261 and H.263 standards, as well as H.264 support.

Indeed, the most up-to-date digital video equipment in all four geographic markets now incorporates software programmability and system flexibility. Powered by digital media processors, these products are fully software-programmable and -upgradable.


Pradeep Bardia is the video solutions marketing manager for Texas Instruments, Inc.'s Digital Signal Processing Group in Stafford, Texas.

