Top Five Essentials of Electronic Design


Electronic Design

Roger Allan | Contributing Editor
[email protected]

A never-ending parade of refinements to IC packaging gives engineers more choices than ever to meet their design requirements. With more radical approaches lurking on the horizon, that mix will become even richer.

Today, though, squeezing more functions into smaller spaces at a lower cost dominates, leading designers to stack more chips atop each other. Thus, we’re seeing the rapid ascent of 3D IC packaging.

The impetus behind 3D IC technology’s rise comes from the consumer market’s use of more sophisticated interconnects to connect silicon chips and wafers. These wafers contain chips with continually shrinking line dimensions.

To scale down semiconductor ICs, ever-finer line geometries are being drawn on 300-mm wafers. Although most mass-produced ICs today are based on 55-nm design nodes or less, these design rules will shrink to 38 nm or smaller, and then down to 27 nm by 2013, according to forecasts by VLSI Research Inc. (Fig. 1).

These downscaled IC designs accelerate the need for high-density, cost-effective manufacturing and packaging techniques, which will invariably challenge IC manufacturers to rein in the rising cost of capital equipment investments.

Many 3D applications still use traditional ball-grid-array (BGA), quad flat no-lead (QFN), land-grid-array (LGA), and small-outline transistor (SOT) packages. However, more are migrating to two main approaches: fan-out wafer-level chip-scale packaging (WLCSP) and embedded-die packaging.

Presently, fan-out WLCSP is finding homes in high-pin-count (more than 120 pins) applications that use BGAs. Embedded-die technology favors lower-pin-count applications that embed chips and discrete components into printed-circuit-board (PCB) laminates and use microelectromechanical-system (MEMS) ICs (Fig. 2).

Researchers at Texas Instruments believe that WLCSP is heading toward a standardized package configuration. It could include a combination of WLCSP ICs, MEMS ICs, and passive components interconnected using through-silicon vias (TSVs). The stack’s bottom layer can be an active WLCSP device, an interposer only, or an integrated passive interposer. The top layer may be an IC, a MEMS device, or a discrete component (Fig. 3).

No matter the package type, though, as pin counts and signal frequencies increase, the need to pre-plan the package option becomes more critical. For example, a wire-bonded package with many connections may require more power-supply buffers on the chip due to high levels of inductance. The type of bump, pad, and solder ball placement also can significantly impact signal integrity.

TSVs: Hype Or Reality?

TSV technology is not a packaging technology solution, per se. It’s simply an important tool that allows semiconductor die and wafers to interconnect to each other at higher levels of density. In that respect, it’s an important step within the larger IC packaging world. But TSVs aren’t the only answer to 3D packaging advances. They represent just one part of an unfolding array of materials, processing, and packaging developments.

In fact, 3D chips that employ TSV interconnects aren’t yet ready for large-volume production. Despite making some progress, they’re limited mainly to CMOS image sensors, some MEMS devices, and, to some degree, power amplifiers. More than 90% of IC chips are packaged using tried-and-true wire bonding.

Speaking at this year’s ConFab Conference, Mario A. Bolanos, manager of strategic packaging research and external collaboration at Texas Instruments, outlined a number of challenges facing the use of TSVs in 3D chips. These include a

3D IC Technology Delivers The Total Package

Burgeoning market demands for cost-effective, higher-density, smaller packages bank their hopes on a flurry of recent advances made in materials, processing procedures, and interconnects, as well as a greater variety of packaging approaches.

[Fig. 1 chart: 300-mm wafer forecast (thousands of wafers/week), 2009 to 2013, broken out by design node: greater than 27 nm, greater than 27 nm but less than 38 nm, and 38 nm or greater but less than 55 nm]

1. Future packaging trends include greater use of 300-mm wafers as IC design nodes shrink further. This will drive the adoption of 3D ICs for greater chip densities. (courtesy of VLSI Research)



lack of electronic design automation (EDA) tools; the need for cost-effective manufacturing equipment and processes; insufficient yield and reliability data involving thermal issues, electromigration, and thermo-mechanical reliability; and compound yield losses and known-good die (KGD) data.

Unlike conventional ICs, which are built on silicon wafers some 750 µm thick, 3D ICs require very thin wafers, typically about 100 µm thick or less. Given the fragility of such very thin wafers, the need arises for highly specialized temporary wafer bonding and de-bonding equipment to ensure the integrity of the wafer structure, particularly at high processing temperatures and stresses during the etching and metallization processes. After bonding, the wafer undergoes a TSV back-side process, followed by a de-bonding step. These typical steps result in higher yield levels for more cost-effective mass production.

Currently, there’s a lack of TSV standards on bonding and process temperatures and related reliability levels. The same is true for standardizing the assignment of TSV wafer locations. If enough IC manufacturers work on these issues, more progress can be made on expanding the role of TSVs as interconnects. Process temperatures greater than 200°C to 300°C aren’t feasible for the economic implementation of TSVs.

Ziptronix Inc., which provides intellectual property (IP) for 3D integration technology, licensed its direct-bond-interconnect (DBI) technology to Raytheon Vision Systems. The company says that its low-temperature oxide bonding DBI technology is a cost-effective solution for 3D ICs (Fig. 4).

Nevertheless, many semiconductor IC experts view the industry at a crossroads of having to choose between 2D (planar) and 3D designs. They see a threefold to fourfold increase in costs when going from 45-nm design nodes to 32- and 28-nm designs, considering the fabrication, design, process, and mask costs. Much-needed improvements in lithography and chemical vapor polishing, as well as dealing with stress effects, make the 3D packaging challenge even more difficult. This is where TSV technology steps in.

France’s Alchimer S.A., a provider of nanometric deposition films used in semi-conductor IC interconnects, has demon-strated that TSVs with aspect ratios (height to width) of 20:1 can save IC chipmakers more than $700 per 300-mm wafer com-pared with aspect ratios of 5:1 (see the table). This was accomplished by reducing the die area need for interconnection.

Alchimer modeled TSV costs and space consumption using an existing 3D stack for mobile applications. The stack included a low-power microprocessor, a NAND memory chip, and a DRAM chip made on a 65-nm process node. The chips are interconnected by about 1000 TSVs, and the processor die area was calculated for aspect ratios of 5:1, 10:1, and 20:1.

IBM, along with Switzerland’s École Polytechnique Fédérale de Lausanne (EPFL) and the Swiss Federal Institute of Technology (ETH), is developing micro-cooling techniques for 3D ICs, using TSVs, by means of microfluidic MEMS technology (Fig. 5). The collaborative effort, known as CMOSAIC, is considering a 3D stack architecture of multiple cores with interconnect densities ranging from 100 to 10,000 connections/mm².

The IBM/Swiss team plans to design microchannels with single-phase liquid and two-phase cooling systems. Nanosurfaces will pipe coolants, including water and environmentally friendly refrigerants, within a few millimeters of the chip to absorb the heat and draw it away. Once the liquid leaves the circuit in the form of steam, a condenser returns it to a liquid state, where it’s pumped back to the chip for cooling.

Wire Bonding And Flip Chip

Wire-bonding and flip-chip interconnect technologies certainly aren’t sitting idle. Progress marches on for a number of flip-chip wafer-bumping technologies, including the use of eutectic flip-chip bumping, copper pillars, and lead-free soldering. Recent packaging developments include the use of package-on-package (PoP) methods, system-in-package (SiP), no-lead (QFN) packages, and variations thereof.

[Fig. 2 diagram: fan-out WLP/chip embedding in substrates, showing a 3D IC with TSV, fan-out WLP, flip-chip, integrated passive devices, 3D WLP, and MEMS on a PCB]

2. Future 3D IC packaging approaches will embody techniques such as wafer-level packaging (WLP) using through-silicon vias (TSVs) together with embedding chips into various substrates. (courtesy of Yole Développement)

3. ICs, MEMS devices, and other components will be joined by passive components using wafer-level chip-scale packaging (WLCSP) and through-silicon vias. (courtesy of Texas Instruments)




At the packaging level, 3D configurations have been well known for many years. Using BGA packages in stacked-die configurations with wire bonds is a nearly decades-old practice. For example, in 2003, STMicroelectronics demonstrated a stack of 10 dice using BGAs, a record at the time.

Certain 3D approaches like the PoP concept warrant special attention when it comes to high-density and high-functionality handheld products. Designers must carefully consider two issues: thermal cycling and drop-test reliability performance. Both are functions of the packaging materials’ quality and reliability. This becomes more critical as we move from interconnect pitches of 0.5 mm to 0.4 mm for the bottom of the PoP structure and 0.4 mm to 0.5 mm for the top.

Samsung Electronics Ltd. has unveiled a 0.6-mm-high, multi-die, eight-chip package for use in high-density memory applications. Designed initially for 32-Gbyte memory sizes, it features half the thickness of conventional eight-chip memory stacks and delivers a 40% thinner and lighter memory solution for high-density multimedia handsets and other mobile devices, according to the company.

Key to the package’s creation is the use of 30-nm NAND flash-memory chips, each measuring just 15 µm thick. Samsung devised an ultra-thinning technology to overcome conventional limits on an IC chip’s resistance to external pressure at thicknesses under 30 µm. In addition, the new packaging technology can be adapted to other multichip packages (MCPs) configured as SiPs and PoPs.

“This packaging development provides the best solution for combining higher density with multifunctionality in current mobile product designs, giving designers much greater freedom in creating attractive designs that satisfy the diverse styles and thin-focused tastes of today’s consumers,” says Tae Gyeong Chung, vice president for Samsung’s package development team.

Market developments are also shaking up the QFN package arena. Germany’s Fraunhofer IZM has developed a chip-in-polymer process that imparts shock and vibration protection to the chip and lends itself to shorter interconnect distances to enhance the chip’s performance. The process starts by thinning the chip, then adhesively bonding it to a thin substrate.

This is all overlaid with resin-coated copper (about 80 µm for the resin layer and 5 µm for the copper surface). The resin is cured, and interconnect vias are laser-drilled down to the contact pads and plated with a metal. Then the redistribution layer on top is etched from the copper.



4. This set of memory die uses Ziptronix’s direct-bond interconnect (DBI) low-temperature oxide-bonding process for 3D ICs. The die are bonded face down to the face-up logic wafer and thinned to about 10 µm. Electrical contact between memory and logic elements is made via etching memory and logic bond pads, followed by an interconnect metallization over the memory die edge. (courtesy of Ziptronix)

Silicon Consumption As A Function Of TSV Aspect Ratio

TSV aspect ratio | 5:1 | 10:1 | 20:1
TSV size (diameter × depth, µm) | 40 × 200 | 20 × 200 | 10 × 200
Keep-out zone (2.5 × diameter, µm) | 100 | 50 | 25
Total TSV footprint (mm²) | 7.9 | 2.0 | 0.5
Footprint relative to IC area | 12.3% | 3.1% | 0.8%

Average TSV density = 16 TSVs/mm²; die size = 8 × 8 mm. (courtesy of Alchimer S.A.)
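The table’s arithmetic can be sanity-checked in a few lines. Assuming the keep-out zone is a circle whose diameter is 2.5 times the via diameter, and taking the article’s figure of about 1000 TSVs, this sketch (variable names are mine, not Alchimer’s) reproduces the footprint row:

```python
import math

DIE_AREA_MM2 = 8 * 8   # 8- x 8-mm die
N_TSVS = 1000          # "about 1000 TSVs," per the article
DEPTH_UM = 200         # via depth is fixed; the aspect ratio sets the diameter

footprint = {}         # aspect ratio -> total TSV footprint (mm^2)
for aspect in (5, 10, 20):
    dia_um = DEPTH_UM / aspect            # e.g., 200/5 = 40 um
    keepout_dia_um = 2.5 * dia_um         # keep-out zone diameter
    area_um2 = math.pi * (keepout_dia_um / 2) ** 2   # circular keep-out
    footprint[aspect] = area_um2 * N_TSVS * 1e-6     # um^2 -> mm^2
    print(f"{aspect}:1  via dia {dia_um:.0f} um  "
          f"footprint {footprint[aspect]:.1f} mm^2  "
          f"({100 * footprint[aspect] / DIE_AREA_MM2:.1f}% of die)")
```

Running it yields 7.9, 2.0, and 0.5 mm² (12.3%, 3.1%, and 0.8% of the die), matching the table.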


5. Future 3D IC stacks may contain processor, memory, logic, and analog and RF circuitry, all of which are interconnected with through-silicon vias (TSVs). Liquids using MEMS microchannels will perform the cooling. This is part of the CMOSAIC project, which involves IBM and two Swiss partners. (courtesy of the École Polytechnique Fédérale de Lausanne)


This process has been optimized in commercial production of standard packages like QFNs, without the need for specialized equipment or other delays. The use of polymer-embedded QFNs, essentially quad flat packs with no leads, the leads replaced by pads on the chip’s bottom surface, is part of the HERMES project.

The goal of HERMES, which includes Fraunhofer and 10 other European industrial and academic organizations, is to advance the embedding of chips and components, both active and passive, to allow for more functional integration and higher density. The technology is based on the use of PCB manufacturing and assembly practice, as well as on standard available silicon dies, highlighting fine-pitch interconnection, high-power capability, and high-frequency compatibility.

The QFN package was selected because it’s more common in small, thin appliances housing microcontroller ICs. Fraunhofer researchers believe that QFNs will take over many application niches held by other types of packages. The embedded QFN contains a 5- by 5-mm chip that’s thinned to about 50 µm. The package itself measures 100 by 100 mm. The 84 I/Os on the chip are at a 100-µm pitch (400 µm on the package).

Malaysia’s Unisem Berhad has unveiled a high-density leadframe technology, the leadframe grid array (LFGA), that offers BGA-comparable densities. The company says that it offers a cost-effective replacement for a two-layer FPGA package. Compared to a QFN package, it has shorter wire-bond lengths. In addition, it can house a 10- by 10-mm, 72-lead QFN package in a body size of 5.5 mm².

“This package offers a better footprint with higher I/O den-sity and better thermal and electrical performance. It is also thinner and, most importantly, offers a much better yield at front-end assembly,” says T.L. Li, the package’s developer.

Dai Nippon Printing has successfully embedded high-performance IC chips that are wire-bonded to a printed wiring board (PWB) inside a multi-layer PWB, citing unique buried bumped interconnections for its success. PWBs interconnect between arbitrary layers (via hole connections) with bumps made of high-electrical-conductivity paste, which are formed by screen printing.

Half-etching the base metal of the leadframe and making its inner leads longer will close the distance between the chip and the leadframe it’s attached to, as well as drastically reduce the amount of gold wire for connections, resulting in lower manufacturing costs (Fig. 6). Mass production of ICs with more than 700 pins inside PWBs is scheduled for this year. Both active and passive components can be handled.

Work is underway to develop epoxy flux materials that improve the thermal-cycling and drop-test reliability shortcomings of conventional tin-silver-copper (SnAgCu) solder. Such materials will help to advance 3D ICs using PoPs. Although PoP manufacturing employs commonly used tin-lead (SnPb) solder alloys, which offer advantages over SnAgCu materials, there’s a need for a lead-free compound to handle large high-density 3D PoP structures for consumer electronics products.

The Henkel Corp. Multicore LF620 lead-free solder paste suits a broad range of packaging applications. The no-clean halide-free and lead-free material is formulated with a new activator chemistry, so it exhibits extremely low voiding in CSP via-in-pad joints, good coalescence, and excellent solderability over a range of surface finishes.




6. Dai Nippon Printing’s embedded high-performance IC chips are wire-bonded to a printed wiring board (PWB) inside a multi-layer PWB using unique buried bumped interconnections. Half-etching the base metal of the leadframe and lengthening its inner leads shrinks the distance between the chip and the leadframe it’s attached to. It also reduces the amount of gold wire needed for connections. (courtesy of Dai Nippon Printing)

Don’t Be Intimidated By Low-Power RF System Design

Louis E. Frenzel | Communications Editor
[email protected]

Adding wireless connectivity to any product has never been easy. However, even when a wireless solution doesn’t seem to make sense, the potential exists. The cost is reasonable, and you add unexpected value and flexibility to the product. But what if you aren’t a wireless engineer? Don’t worry, because in many cases, the wireless chip and module companies have made such connectivity a snap.

Selecting A Technology

The table lists a marvelous collection of wireless options. These technologies are all proven and readily available in chip or module form. No license is required since most operate in


the unlicensed spectrum. They also operate under the rules and regulations in Part 15 of U.S. CFR 47. When considering wireless for your design, you should have a copy of Part 15 handy. You can find it at www.fcc.gov.

The table only provides the main options and enough information to get you started. For a more in-depth look, check out the organizations and trade associations associated with each standard.

Some of the wireless standards have relatively complex protocols to fit special applications. For example, Wi-Fi (802.11) is designed for local-area-network (LAN) connections and is relatively easy to interface to Ethernet. It also is the fastest, except for Ultra-Wideband (UWB) and the 60-GHz standard. It’s widely available in chip or module form, but it’s complex and may consume too much power.

ZigBee is great for industrial and commercial monitoring and control, and its mesh-networking option makes it a good choice if a large network of nodes must be monitored or controlled. It’s a complex protocol that can handle some sophisticated operations. Its underlying base is the IEEE 802.15.4 standard, which doesn’t include the mesh or other features, making it a good option for less complex projects.

If you’re looking for something simple, try industrial, scientific, and medical (ISM) band products using 433- or 915-MHz chips or modules. Many products require you to invent your own protocol. Some vendors supply the software tools for that task. It’s a good way to go, because you can optimize the design to your needs rather than adapt to some existing overly complex protocol.

For very long-haul applications that require reliability, consider a machine-to-machine (M2M) option. These cell-phone modules use available cellular network data services like GPRS or EDGE in GSM networks (AT&T and T-Mobile) or 1xRTT and EV-DO in cdma2000 networks (Sprint and Verizon). You will need to do the interfacing yourself and sign up with a carrier or an intermediary company that lines up and administers cellular connections. Though more expensive, this option offers greater reliability and longer range.

Cypress Semiconductor’s proprietary WirelessUSB option operates in the 2.4-GHz band and targets human interface devices (HIDs) like keyboards and mice. It offers a data rate of 62.5 kbits/s and has a range of 10 to 50 m.

The Z-Wave proprietary standard from Sigma Designs’ Zensys, used in home automation, operates on 908.42 MHz in the U.S. and 868.42 MHz in Europe. It offers a range of up to about 30 m with data-rate options of 9600 bits/s or 40 kbits/s. Mesh capability is in the mix, too (see “Wireless In The Works” at www.electronicdesign.com, ED Online 21847).

Build Vs. Buy

Deciding whether to build or buy is a crucial step when it comes to adding wireless. It’s generally a matter of experience. With less experience, it’s probably better to buy existing modules or boards. With solid high-frequency or RF experience, consider doing the design on your own. Almost always, you’ll start with an available chip. The tricky part is the layout.

When self-designing, grab any reference designs available from your chip supplier to save time, money, and aggravation. Primary design issues will include antenna selection, impedance matching with the antenna, the transmit/receive switch, the battery or other power source, and packaging. Most modules will take care of these elements.

Factoring in the testing time and cost is another essential design step. Any product you design will have to be tested to conform to the FCC Part 15 standards. Arm yourself with the right equipment, especially a spectrum analyzer, RF power meters, a field-strength meter, and electromagnetic interference/electromagnetic compatibility (EMI/EMC) test gear with antennas and probes. An outside firm also could perform the testing, but that’s expensive and takes time. Factor in some rework time if you fail the tests. Most modules are pretested, so it pretty much comes down to the packaging and interfacing with the rest of the product.

Considerations And Recommendations

If longer range and reliability are top priorities, stay with the lower frequencies: 915 MHz is far better than 2.4 GHz, and 433 MHz is even better. This is strictly physics. The only downside is antenna size, which will be considerably greater at lower frequencies. Still, you won’t be sorry when you need to transmit a few kilometers or miles. Though not impossible at 2.4 GHz, it will require higher power and the highest possible directional gain antennas.

As for data rates, think slow. Lower data rates will typically result in a more reliable link. You can gain distance by dropping the data rate. Lower data rates also survive better in high-noise environments.

Your analysis of the radiowave path is essential for a solid and reliable link. So, the first step should be to estimate your path loss. Some basic rules of thumb will give you a good approximate figure to use. Once you know your path loss, you can play around with things like transmitter power output, antenna gains, receiver sensitivity, and cable losses to zero in on hardware needs. To estimate the path loss between the transmitter and receiver, try:

1. The FreeWave MM2-HS-T 900-MHz radio targets embedded military and industrial applications.

2. Analog Devices’ ISM band radio chips suit home automation and control as well as smart-metering applications.


dB loss = 37 dB + 20log(f) + 20log(d)

The frequency of operation (f) is in megahertz, and the range or distance (d) is in miles. Another formula is:

dB loss = 20log(4π/λ) + 20log(d)

Wavelength (λ) and range or distance are both in meters. Both formulas deliver approximately the same figures. Remember, this is free space loss without obstructions. The loss increases about 6 dB for each doubling of the distance.
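Both rules of thumb drop straight into code. This sketch (the function names are mine) implements the two formulas and confirms that they track each other, along with the roughly 6-dB penalty for each doubling of distance:

```python
import math

def path_loss_db(freq_mhz, miles):
    """Rule-of-thumb free-space loss: f in MHz, distance in miles."""
    return 37 + 20 * math.log10(freq_mhz) + 20 * math.log10(miles)

def path_loss_db_metric(freq_mhz, meters):
    """Wavelength form: 20log(4*pi/lambda) + 20log(d), lambda and d in meters."""
    lam = 300 / freq_mhz   # wavelength in meters
    return 20 * math.log10(4 * math.pi / lam) + 20 * math.log10(meters)

# A 915-MHz link over 2 miles (about 3219 m):
print(f"{path_loss_db(915, 2):.1f} dB")            # rule of thumb
print(f"{path_loss_db_metric(915, 3219):.1f} dB")  # wavelength form
print(f"{path_loss_db(915, 4):.1f} dB")            # doubling adds ~6 dB
```

The two figures land within about half a decibel of each other. Remember that both assume unobstructed free space.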

If obstructions are involved, some corrective figures must be added in. Average loss figures are 3 dB for interior walls, 2 dB for windows, and 10 dB for exterior structure walls.

When finalizing a path loss, add the fade margin. This “fudge factor” helps ensure good link reliability under severe weather, solar events, or unusual noise and interference. As a result, transmitter power and receiver sensitivity will be sufficient to overcome these temporary conditions.

A fade margin figure is just a guess. Some conservative designers say it should be 15 dB, while others say 10 dB is acceptable. If unusual weather or other conditions aren’t expected, you may get away with less, perhaps 5 dB. Add that to your path loss and adjust everything else accordingly.

Another handy formula to help estimate your needs is the Friis formula:

PR = PTGRGTλ²/(16π²d²)

PR is the received power in watts, PT is the transmit power in watts, GR is the receive antenna gain, GT is the transmit antenna gain, λ is the wavelength in meters, and d is the distance in meters. The transmit and receive gains are power ratios; a dipole or ground-plane antenna has a gain of 1.64. Any directional antenna like a Yagi or patch will have directional gain, usually given in dB, which must be converted to a power ratio. The formula also indicates why a lower frequency (longer wavelength) provides greater range (λ = 300/fMHz).
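To see the Friis formula in action, the sketch below (the function name and link parameters are mine, purely illustrative) computes the received power for a 100-mW transmitter at 433 MHz with a dipole (gain ratio 1.64) at each end, 1 km apart:

```python
import math

def friis_received_w(pt_w, gt, gr, freq_mhz, d_m):
    """Friis: PR = PT*GT*GR*lambda^2 / (16*pi^2*d^2), gains as power ratios."""
    lam = 300 / freq_mhz   # wavelength in meters
    return pt_w * gt * gr * lam ** 2 / (16 * math.pi ** 2 * d_m ** 2)

pr_w = friis_received_w(0.1, 1.64, 1.64, 433, 1000)  # 100 mW, dipoles, 1 km
pr_dbm = 10 * math.log10(pr_w * 1000)                # watts -> dBm
print(f"received power: {pr_dbm:.1f} dBm")
```

The result, roughly −61 dBm, sits far above a typical −110-dBm receiver sensitivity, which is why a modest 433-MHz link covers a kilometer so easily.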

Transmitter output power, another key figure, is usually given in dBm. Some common figures are 0 dBm (1 mW), 10 dBm (10 mW), 20 dBm (100 mW), and 30 dBm (1 W). Receiver sensitivity also is usually quoted in dBm. This is the smallest signal that the receiver can resolve and demodulate. Typical figures are in the –70- to –120-dBm range.
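These dBm figures come from dBm = 10log10(P/1 mW). A tiny helper pair (the names are mine) reproduces them:

```python
import math

def dbm_to_mw(dbm):
    """Convert a dBm figure to milliwatts."""
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    """Convert milliwatts to dBm."""
    return 10 * math.log10(mw)

for dbm in (0, 10, 20, 30):           # the article's common figures
    print(f"{dbm} dBm = {dbm_to_mw(dbm):g} mW")
print(f"{mw_to_dbm(1e-7):.0f} dBm")   # a weak received signal, 1e-7 mW
```

The last line shows that 1e-7 mW corresponds to −70 dBm, the top of the typical sensitivity range quoted above.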

One last thing to factor in is cable loss. In most installations, you will use coax cable to connect the transmitter and receiver to the antennas. The cable loss at UHF and microwave frequencies is surprisingly high. It can be several dB per foot at 2.4 GHz or more. So, be sure to minimize the cable length.

Also, seek out special lower-loss cable. It costs a bit more, but coax cable with a loss of less than 1 dB per foot is available if you shop around. This is especially critical when using antennas on towers where the cable run could be long. You

Low-Power, Short-Range Wireless Technologies For Data Transmission

Technology | Frequency | Maximum range | Maximum rate | Modulation | Main applications
Bluetooth | 2.4 GHz | 10 m | 3 Mbits/s | FHSS/GFSK | Cell headsets, audio, sensor data
IR | 875 nm | <1 m | 16 Mbits/s | Baseband | Short data transfer
ISM | 315, 418, 433, 902 to 928 MHz; 2.4 GHz | 10 km | 1 to 115 kbits/s; 250 kbits/s | OOK/ASK, FSK, DSSS w/BPSK/QPSK | Industrial monitoring and control, telemetry
M2M | Cellular bands | 10 km | <300 kbits/s | GSM/EDGE, CDMA 1xRTT | Remote facilities monitoring
NFC | 13.56 MHz | <1 ft | 106 to 848 kbits/s | ASK | Credit-card and cell-phone transactions
Proprietary | 900 to 928 MHz; 2.4 GHz | Up to several miles | 1 kbit/s to 2 Mbits/s | DSSS, BPSK/QPSK | Industrial and factory automation
RFID | 125 kHz; 13.56, 915 MHz | <2 m | <100 kbits/s | ASK | Tracking, shipping
60 GHz | 60 GHz | 10 m | 3 Gbits/s | OFDM | Video, backhaul
UWB | 3.1 to 6 GHz | 10 m | 480 Mbits/s | OFDM | Wireless USB, video
Wi-Fi (802.11a/b/g/n) | 2.4 and 5 GHz | 100 m | 11, 54, 100+ Mbits/s | DSSS/CCK, OFDM | WLAN
ZigBee/802.15.4 | 2.4 GHz | 100 m | 250 kbits/s | OQPSK | Monitoring and control, sensor networks


can offset the loss with a gain antenna, but it’s still optimal to minimize the length and use the best cable.

With all of this information, compute the final calculation:

Transmit power (dBm) + transmit antenna gain (dB) + receive antenna gain (dB) – path loss (dB) – cable loss (dB) – fade margin (dB)

This figure should be greater than the receiver sensitivity. Now play with all of the factors to zero in on the final specifications. Two design issues remain: the antenna and its impedance matching.
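The link-budget tally above can be wrapped in a small helper. The function name and the example numbers below are mine, chosen only to illustrate the bookkeeping:

```python
def link_margin_db(tx_dbm, tx_gain_db, rx_gain_db, path_loss_db,
                   cable_loss_db, fade_margin_db, rx_sensitivity_dbm):
    """Received level minus sensitivity; a positive margin means the link closes."""
    rx_level_dbm = (tx_dbm + tx_gain_db + rx_gain_db
                    - path_loss_db - cable_loss_db - fade_margin_db)
    return rx_level_dbm - rx_sensitivity_dbm

# Hypothetical 915-MHz link: 30 dBm (1 W) out, 2.15-dB antennas at both ends,
# 110-dB path loss, 2-dB cable loss, 10-dB fade margin, -110-dBm sensitivity.
margin = link_margin_db(30, 2.15, 2.15, 110, 2, 10, -110)
print(f"link margin: {margin:.1f} dB")
```

Here the received level works out to −87.7 dBm, 22.3 dB above sensitivity, so this hypothetical link closes with room to spare.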

The antenna requires a separate discussion beyond this article. There are many sources for antennas. A wireless module most likely will come with an antenna and/or antenna suggestions. The most common is a quarter-wave or half-wave vertical. When building an antenna into the product, the ceramic type is popular, as is a simple copper loop on the printed circuit board (PCB). Follow the manufacturer’s recommendations for the best results.

If it’s a single-chip design, you may need to design the impedance matching network between the transceiver and the antenna. Most chip companies will offer some recommendations that deliver proven results. Otherwise, design your own standard L, T, or π LC network to do the job.

One final hint about testing: Part 15 uses field strength to indicate radiated power measured in microvolts per meter (µV/m). A field strength meter makes the measurement at

3. The CC2530 SoC from Texas Instruments fits 802.15.4, ZigBee, RF4CE, and smart-energy applications. It has an 8051 microcontroller on board, making it a true single-chip wireless solution.

[Fig. 3 block diagram: CC2530 SoC internals, including an 8051 CPU core, 32/64/128/256-kbyte flash, 8-kbyte SRAM, AES encryption/decryption, DMA, an eight-channel ADC, two USARTs, four timers (including an IEEE 802.15.4 MAC timer), watchdog and sleep timers, crystal and RC oscillators, a debug interface, I/O ports P0 to P2, and the IEEE 802.15.4 radio chain with modulator, demodulator, automatic gain control, and frequency synthesizer (RF_P/RF_N)]


specified distances. The result can be converted to watts to ensure the transmitter is within the rules. The following formula, which is a close approximation, lets you convert between power and field strength:

V²/(120π) ≈ PG/(4πd²)

where P is transmitter power in watts, G is the antenna gain, V is the field strength in volts per meter, and d is the distance in meters from the transmit antenna to the field-strength meter antenna. A simplified approximation at a common FCC testing distance of 3 m with a transmit antenna gain of one is P ≈ 0.3V².
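That P ≈ 0.3V² shortcut falls directly out of the conversion formula at d = 3 m with G = 1, as a short sketch (the function name is mine) shows:

```python
import math

def radiated_power_w(v_per_m, d_m, gain=1.0):
    """Invert V^2/(120*pi) = P*G/(4*pi*d^2) to estimate transmitter power."""
    return v_per_m ** 2 * 4 * math.pi * d_m ** 2 / (120 * math.pi * gain)

# A 1-V/m reading at the 3-m FCC test distance with a unity-gain antenna:
print(f"{radiated_power_w(1.0, 3.0):.2f} W")   # P = 0.3 * V^2 = 0.3 W
```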

Some Example Products

FreeWave Technologies has a line of reliable, high-performance spread-spectrum and licensed radios for critical data transmissions. The high-speed MM2-HS-T (TTL interface) and MM2-HS-P (Ethernet interface) come ready to embed in OEM products like sensors, remote terminal units (RTUs), programmable logic controllers (PLCs), and robots and unmanned vehicles. They operate in the 900-MHz band and use direct-sequence spread spectrum (DSSS).

Thanks to the radios’ over-the-air speed of 1.23 Mbits/s, users can send significantly more data in a shorter period of time. The MM2-HS-T is ideal for embedded applications that require high data rates, such as video, and long distances (up to 60 miles). Both radios fit many industry, government, and military applications where it’s necessary to transmit large amounts of data, including multiple high-resolution images and video.

The MM2-HS-T measures 50.8 by 36 by 9.6 mm and weighs 14 g (Fig. 1). The MM2-HS-P shares a similarly small footprint. Both radios offer RISC-based signal demodulation with a matched filter and a gallium-arsenide (GaAs) FET RF front end incorporating multi-stage surface-acoustic-wave (SAW) filters. The combination delivers unmatched overload immunity and sensitivity.

The MM2-HS-P includes industrial-grade high-speed Ethernet that supports TCP, industrial-grade wireless security, and serial communications. Each unit can be used in a security network as a master, slave, repeater, or master/slave unit, depending on its programming. FreeWave’s proprietary spread-spectrum technology prevents detection and unauthorized access, and 256-bit AES encryption is available.

The ADF7022 and ADF7023 low-power transceivers from Analog Devices fit well in smart-grid and other applications operating on the short-range ISM band for remote data measurement. Smart-grid technology not only measures how much power is consumed, it also determines the best time and price to save energy, reduce costs, and increase the reliability of electricity delivery from utility companies to consumers. RF transceivers are needed for the secure and robust transmission of this information over short distances, for storing measurement data, and for communicating with utility computers over wireless networks.

Applications for the ADF7022 and ADF7023 include industrial monitoring and control, wireless networks and telemetry systems, security systems, medical devices, and remote controls. Analog Devices’ free, downloadable ADIsimSRD Design Studio supports both devices.

One particularly hot area for RF transceivers involves utilities that are building advanced metering infrastructures, including automatic meter reading, to monitor and control energy usage. Analysts expect more than 150 million smart meters to be installed worldwide. The ADF7022 and ADF7023 target these smart-grid and home/building automation applications.

The ADF7022 is a highly integrated frequency-shift-keying/Gaussian frequency-shift-keying (FSK/GFSK) transceiver designed for operation at the three io-homecontrol channels of 868.25, 868.95, and 869.85 MHz in the license-free ISM band. It fully complies with ETSI EN 300 220 and has enhanced digital baseband features specifically designed for the io-homecontrol wireless communications protocol.

As a result, the device can assume complex tasks typically performed by a microprocessor, such as media access, packet management/validation, and packet retrieval to and from data buffer memory. This allows the host microprocessor to remain in power-down mode. Also, it significantly lowers power consumption and eases both the computational and memory requirements of the host microprocessor.

The ADF7023 low-IF transceiver operates in the license-free ISM bands at 433, 868, and 915 MHz. It offers a low transmit-and-receive current, as well as data rates in 2FSK/GFSK up to 250 kbits/s. Its power-supply range is 1.8 to 3.6 V, and it consumes less power in both transmit and receive modes, enabling longer battery life.

Other on-chip features include an extremely low-power, 8-bit RISC communications processor; a patent-pending, fully integrated image-rejection scheme; a voltage-controlled oscillator (VCO); a fractional-N phase-locked loop (PLL); a 10-bit analog-to-digital converter (ADC); digital received-signal-strength indication (RSSI); temperature sensors; an automatic frequency control (AFC) loop; and a battery-voltage monitor.

The CC2530 from Texas Instruments is a true system-on-a-chip (SoC) solution tailored for IEEE 802.15.4, ZigBee, ZigBee RF4CE, and Smart Energy applications. (RF4CE is the forthcoming wireless remote-control standard for consumer electronics equipment.) Its 64-kbyte and larger versions support the new RemoTI stack for ZigBee RF4CE, which is the industry’s first ZigBee RF4CE-compliant protocol stack.

Larger memory sizes will allow for on-chip, over-the-air download to support in-system reprogramming. In addition, the CC2530 combines a fully integrated, high-performance RF transceiver with an 8051 MCU, 8 kbytes of RAM, 32/64/128/256 kbytes of flash memory, and other powerful supporting features and peripherals (Fig. 3).

The TI CC430 wireless platform combines TI radio chips with the company’s MSP430 16-bit embedded controller, which can implement the IETF standard 6LoWPAN, the software that enables 802.15.4 radios to carry IPv6 packets. Thus, low-power wireless devices and networks can access the Internet. Furthermore, the platform can implement Europe’s Wireless M-Bus technology for the remote reading of gas and electric meters.


Electronic communications began as digital technology with Samuel Morse’s invention of the telegraph in 1845. The brief dots and dashes of his famous code were the binary ones and zeroes of the current through the long telegraph wires. Radio communications also started out digitally, with Morse code producing the off-and-on transmission of continuous-wave spark-gap pulses.

Then analog communications emerged with the telephone and amplitude-modulation (AM) radio, which dominated for decades. Today, analog is slowly fading away, found only in the legacy telephone system; AM and FM radio broadcasting; amateur, CB/family and shortwave radios; and some lingering two-way mobile radios. Nearly everything else, including TV, has gone digital. Cell phones and Internet communications are digital. Wireless networks are digital.

Though the principles are generally well known, veteran members of the industry may have missed out on digital communications schooling. Becoming familiar with the basics broadens one’s perspective on the steady stream of new communications technologies, products, trends, and issues.

THE FUNDAMENTALS

All communications systems consist of a transmitter (TX), a receiver (RX), and a transmission medium (Fig. 1). The TX and RX simply make the information signals to be transmitted compatible with the medium, which may involve modulation. Some systems use a form of coding to improve reliability. In this article, consider the information to be non-return-to-zero (NRZ) binary data. The medium could be copper cable like unshielded twisted pair (UTP) or coax, fiber-optic cable, or free space for wireless. In all cases, the signal is greatly attenuated by the medium and noise is superimposed. Noise rather than attenuation usually determines if the communications medium is reliable.

Communications falls into one of two categories—baseband or broadband. Baseband is the transmission of data directly over the medium itself, such as sending serial digital data over an RS-485 or I2C link. The original 10-Mbit/s Ethernet was baseband. Broadband implies the use of modulation (and in some cases, multiplexing) techniques. Cable TV and DSL are probably the best examples, but cellular data is also broadband.

Communications may also be synchronous or asynchronous. Synchronous data is clocked, as in SONET fiber-optic communications, while asynchronous methods use start and stop bits, as in RS-232 and a few others.

Furthermore, communications links are simplex, half duplex, or full duplex. Simplex links involve one-way communications, or, simply, broadcasting. Duplex is two-way communications. Half duplex uses alternating TX and RX on the same channel. Full duplex means simultaneous (or at least concurrent) TX and RX, as in any telephone.


1. Encoding may be optional in this simplified model of a communications system, while some systems require modulation. Noise is the main restriction on range and reliability.

2. The bit time in this NRZ binary data signal determines data rate as 1/t.

3. Here, an 8-bit serial data word in NRZ format is to be transmitted (a). That same bit stream, when transmitted in a four-level PAM format, doubles the data rate (b).


Digital Communications: The ABCs Of Ones And Zeroes

Don’t be left in the analog dust. Avoid noise and other transmission errors using these digital modulation schemes and error-correction techniques.

Louis E. Frenzel | Communications Editor [email protected]


Topology is also fundamental. Point-to-point, point-to-multipoint, and multipoint-to-point are common. Networking features buses, rings, and mesh topologies. Not all of them work with all media.

DATA RATE VERSUS BANDWIDTH

Digital communications sends bits serially—one bit after another. However, you’ll often find multiple serial paths being used, such as four-pair UTP CAT 5e/6 or parallel fiber-optic cables. Multiple-input multiple-output (MIMO) wireless also implements two or more parallel bit streams. In any case, the basic data speed or capacity C is the reciprocal of the bit time t (Fig. 2):

C = 1/t

C is the channel capacity or data rate in bits per second and t is the time for one bit interval. The symbol R for rate is also used to indicate data speed. A signal with a bit time of 100 ns has a data rate of:

C = 1/(100 × 10⁻⁹) = 10 Mbits/s

The big question is how much bandwidth (B) is needed to pass a binary signal of data rate C. As it turns out, it’s the rise time (tR) of the bit pulse that determines the bandwidth:

B = 0.35/tR

B is the 3-dB bandwidth in megahertz and tR is in microseconds (µs). This formula factors in the effect of Fourier theory. For example, a rise time of 10 ns, or 0.01 µs, needs a bandwidth of:

B = 0.35/0.01 = 35 MHz
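Both rules of thumb are one-liners. A quick sketch (helper names are my own) reproduces the two worked examples:

```python
def data_rate_from_bit_time(t_seconds):
    """Channel capacity C = 1/t, in bits per second."""
    return 1.0 / t_seconds

def bandwidth_from_rise_time(tr_us):
    """3-dB bandwidth in MHz from rise time in microseconds: B = 0.35/tR."""
    return 0.35 / tr_us

print(data_rate_from_bit_time(100e-9))   # 100-ns bit time -> 10 Mbits/s
print(bandwidth_from_rise_time(0.01))    # 10-ns rise time -> 35 MHz
```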

A more precise measure is to use the Shannon-Hartley theorem. Hartley said that the least bandwidth needed for a given data rate in a noise-free channel is just half the data rate, or:

B = C/2

Or the maximum possible data rate for a given bandwidth is:

C = 2B

As an example, a 6-MHz bandwidth will allow a data rate up to 12 Mbits/s. Hartley also said that this figure holds for two-level or binary signals. If multiple levels are transmitted, then the data rate can be expressed as:

C = (2B)log2M

M indicates the number of multiple voltage levels or symbols transmitted. Calculating the base 2 logarithm is a real pain, so use the conversion where:

log2N = (3.32)log10N

Here, log10N is just the common log of a number N. Therefore:

C = 2B(3.32)log10M

For binary or two-level transmission, the data rate for a bandwidth of 6 MHz is as given above:

C = 2(6)(3.32)log102 = 12 Mbits/s

With four voltage levels, the theoretical maximum data rate in a 6-MHz channel is:

C = 2(6)(3.32)log104 = 24 Mbits/s

To explain this, let’s consider multilevel transmission schemes. Multiple voltage levels can be transmitted over a baseband path in which each level represents two or more bits. Assume we want to transmit the serial 8-bit byte (Fig. 3a). Also assume a clock of 1 Mbit/s for a bit period of 1 µs. This will require a minimum bandwidth of:

B = C/2 = 1 Mbit/s/2 = 500 kHz

With four levels, two bits per level can be transmitted (Fig. 3b). Each level is called a symbol. In this example, the four levels (0, 1, 2, and 3 V) transmit the same byte 11001001. This technique is called pulse amplitude modulation (PAM). The time for each level or symbol is 1 µs, giving a symbol rate—also called the baud rate—of 1 Msymbol/s. Therefore, the baud rate is 1 Mbaud, but the actual bit rate is twice that, or 2 Mbits/s. Note that it takes just half the time to transmit the same amount of data.
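The level-counting arithmetic above checks out numerically. In this sketch (helper names are mine), Hartley’s C = 2B·log2M reproduces both the binary and four-level PAM results:

```python
import math

def hartley_capacity(b_hz, m_levels):
    """Maximum data rate (bits/s) for bandwidth B with M levels: C = 2B*log2(M)."""
    return 2 * b_hz * math.log2(m_levels)

def pam_bit_rate(symbol_rate_baud, m_levels):
    """Bit rate of an M-level PAM signal: baud rate times bits per symbol."""
    return symbol_rate_baud * math.log2(m_levels)

print(hartley_capacity(500e3, 2))    # 500-kHz channel, binary -> 1 Mbit/s
print(hartley_capacity(500e3, 4))    # same channel, 4-level PAM -> 2 Mbits/s
print(pam_bit_rate(1e6, 4))          # 1 Mbaud at 4 levels -> 2 Mbits/s
```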


4. Shown are phase-shift keying (PSK) constellation diagrams for binary PSK (a), quaternary PSK (b), and 8PSK (c).


What this means is that for a given clock rate, eight bits of data can be transmitted in 8 µs using binary data. With four-level PAM, twice the data, or 16 bits, can be transmitted in the same 8 µs. For a given bandwidth, that translates to double the data rate, or 2 Mbits/s in this example. Shannon later modified this basic relationship to factor in the signal-to-noise ratio (S/N or SNR):

C = (B)log2(1 + S/N)

or:

C = B(3.32)log10(1 + S/N)

The S/N is a power ratio and is not measured in dB. S/N is also referred to as the carrier-to-noise ratio, or C/N. C/N is usually defined as the S/N of a modulated or broadband signal, while S/N is used at baseband or after demodulation. With an S/N of 20 dB, or 100 to 1, the maximum data rate in a 6-MHz channel will be:

C = 6(3.32)log10(1 + 100) = 40 Mbits/s

With an S/N = 1 or 0 dB, the data rate drops to:

C = 6(3.32)log10(1 + 1) = 6 Mbits/s

This last example is why many engineers use the conservative rule of thumb that the data rate in a channel with noise is roughly equal to the bandwidth (C = B).

If the speed through a channel with a good S/N seems to defy physics, that’s because the Shannon-Hartley formulas don’t explicitly state that multiple levels or symbols must be used to achieve the rate. Consider that:

C = B(3.32) log10(1 + S/N) = 2B(3.32) log10M

Here, M is the number of levels or symbols. Solving for M:

M = √(1 + S/N)

Take a 40-Mbit/s data rate in a 6-MHz channel where the S/N is 100. This will require multiple levels or symbols:

M = √(1 + 100) = 10

Theoretically, the 40-Mbit/s rate can be achieved with 10 levels.
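Shannon’s noise-limited capacity and the implied symbol count are easy to verify numerically (function names are mine, for illustration):

```python
import math

def shannon_capacity(b_hz, snr):
    """Capacity C = B*log2(1 + S/N); snr is a linear power ratio, not dB."""
    return b_hz * math.log2(1 + snr)

def levels_required(snr):
    """Number of symbols M = sqrt(1 + S/N) needed to reach that capacity."""
    return math.sqrt(1 + snr)

print(round(shannon_capacity(6e6, 100) / 1e6))  # 6-MHz channel, 20-dB S/N -> ~40 Mbits/s
print(round(shannon_capacity(6e6, 1) / 1e6))    # 0-dB S/N -> ~6 Mbits/s
print(round(levels_required(100)))              # -> about 10 levels
```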

The levels or symbols could be represented by something other than different voltage levels. They can be different phase shifts or frequencies or some combination of levels, phase shifts, and frequencies. Recall that quadrature amplitude modulation (QAM) is a combination of different voltage levels and phase shifts. QAM, the modulation of choice to achieve high data rates in narrow channels, is used in digital TV as well as wireless standards like HSPA, WiMAX, and Long-Term Evolution (LTE).

CHANNEL IMPAIRMENTS

Data experiences many impairments during transmission, especially noise. The calculations of data rate versus bandwidth assume the presence of additive white Gaussian noise (AWGN).

Noise comes from many different sources. For instance, it emanates from thermal agitation, which is most harmful in the front end of a receiver. The sources are resistors and transistors, while other forms of noise come from semiconductors. Intermodulation distortion creates noise. Also, signals produced by mixing in nonlinear circuits create interfering signals that we treat as noise.

Other sources of noise include signals picked up on a cable by capacitive or inductive coupling. Impulse noise from auto ignitions, inductive kicks from motors or relays turning on or off, and power-line spikes are particularly harmful to digital signals. The 60-Hz “hum” induced by power lines is another example. Signals coupled from one pair of conductors to another within the same cable create “crosstalk” noise. In a wireless link, noise can come from the atmosphere (e.g., lightning) or even the stars.

Because noise is usually random, its frequency spectrum is broad. Noise can be reduced by simply filtering to limit the bandwidth, but bandwidth narrowing obviously will affect data rate.

It’s also important to point out that noise in a digital system is treated differently from that in an analog system. The S/N or C/N is used for analog systems, but Eb/N0 usually evaluates digital systems. Eb/N0 is the ratio of the energy per bit to the noise spectral density. It’s typically pronounced as E sub b divided by N sub zero.

Energy Eb is signal power (P) multiplied by bit time t, expressed in joules. Since data capacity or rate C (sometimes designated R) is the reciprocal of t, Eb is P divided by R. N0 is noise power N divided by bandwidth B. Using these definitions, you can see how Eb/N0 is related to S/N:

Eb/N0 = (S/N)(B/R)

Remember, you can also express Eb/N0 and S/N in dB.
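Those definitions translate into a couple of lines of arithmetic. A sketch (my naming) that converts between S/N and Eb/N0, with a dB helper:

```python
import math

def ebn0_from_snr(snr, b_hz, r_bps):
    """Eb/N0 = (S/N) * (B/R), all as linear ratios."""
    return snr * (b_hz / r_bps)

def to_db(ratio):
    """Express a linear power ratio in decibels."""
    return 10 * math.log10(ratio)

# S/N of 100 (20 dB) in a 6-MHz channel carrying 40 Mbits/s:
ebn0 = ebn0_from_snr(100, 6e6, 40e6)
print(ebn0, to_db(ebn0))   # 15.0 linear, about 11.8 dB
```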

The energy per bit is a more appropriate measure of noise in a digital system. That’s because the signal is usually transmitted during a short period, and the energy is averaged over that time. Typically, analog signals are continuous. Eb/N0 is often determined at the receiver input of a system using modulation. It’s a measure of the noise level and will affect the received bit error rate (BER).

BANDWIDTH EFFICIENCY

Modulation type | Bandwidth efficiency
FSK             | ≤1
BPSK            | 1
QPSK            | 2
8PSK            | 3
8QAM            | 3
16PSK           | 4
16QAM           | 4


5. In this constellation diagram for 16QAM, 16 unique amplitude-phase combinations transmit data in 4-bit groups per symbol.


Different modulation methods have varying Eb/N0 values and related BERs.

Another common impairment is attenuation. Cable attenuation is a given thanks to resistive losses, filtering effects, and transmission-line mismatches. In wireless systems, signal strength typically follows an attenuation formula proportional to the square of the distance between transmitter and receiver.

Finally, delay distortion is another source of impairment. Signals of different frequencies are delayed by different amounts over the transmission channel, resulting in a distorted signal.

Channel impairments ultimately cause loss of signal and bit transmission errors. Noise is the most common culprit in bit errors. Dropped or changed bits introduce serious transmission errors that may make communications unreliable. As such, the BER is used to indicate the quality of a transmission channel.

BER, which is a direct function of S/N, is just the ratio of error bits to total transmitted bits over a given time period. It’s usually considered to be the probability of an error occurring in so many bits transmitted. One bit error per 100,000 transmitted is a BER of 10⁻⁵. The definition of a “good” BER depends on the application and technology, but the 10⁻⁵ to 10⁻¹² range is a common target.
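BER itself is simple arithmetic, as a quick sketch shows (the helper name is mine):

```python
def bit_error_rate(error_bits, total_bits):
    """Fraction of received bits that were corrupted."""
    return error_bits / total_bits

print(bit_error_rate(1, 100_000))    # one error per 100,000 bits -> 1e-05
```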

ERROR CODING

Error detection and correction techniques can help mitigate bit errors and improve BER. The simplest forms of error detection are a parity bit, a checksum, or a cyclic redundancy check (CRC). These are added to the transmitted data. The receiver recreates these codes, compares them, and then identifies errors. If errors occur, an automatic repeat request (ARQ) message is sent back to the transmitter and the corrupted data is retransmitted. Not all systems use ARQ, but even ARQ-less systems typically employ some form of error detection.
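As an illustration of the simplest of these schemes, here’s a sketch of single-bit even parity (my own minimal example, not any specific system’s code):

```python
def add_even_parity(bits):
    """Append a parity bit so the total count of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity):
    """Receiver-side check: an odd count of 1s flags a single-bit error."""
    return sum(bits_with_parity) % 2 == 0

word = add_even_parity([1, 1, 0, 0, 1, 0, 0, 1])
print(parity_ok(word))               # True: no corruption
word[2] ^= 1                         # flip one bit "in transit"
print(parity_ok(word))               # False: error detected
```

Note that a single parity bit misses an even number of flipped bits, which is why real links layer on checksums, CRCs, and the FEC discussed next.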

Nonetheless, most modern communications systems go much further by using sophisticated forward error correction (FEC) techniques. Taking advantage of special mathematical encoding, the data to be transmitted is translated into a set of extra bits, which are then added to the transmission. If bit errors occur, the receiver can detect the failed bits and actually correct all or most of them. The result is a significantly improved BER.

Of course, the downsides are the added complexity of the encoding and the extra transmission time needed for the extra bits. This overhead is easily accommodated in more contemporary IC-based communications systems.

The many different types of FEC techniques available today fall into two groups: block codes and convolutional codes. Block codes operate on fixed groups of data bits to be transmitted, with extra coding bits added along the way. The original data may or may not be transmitted, depending on the code type. Common block codes include the Hamming, Golay, BCH, and Reed-Solomon codes. Reed-Solomon is widely used, as is a newer form of block code called the low-density parity check (LDPC).

Convolutional codes rely on sophisticated decoding algorithms, like the Viterbi algorithm, and turbo codes build on them. FEC is widely used in wireless and wired networking, cell phones, and storage media such as CDs and DVDs, hard-disk drives, and flash drives.

FEC effectively enhances the S/N. The BER improves with the use of FEC for a given value of S/N, an effect known as “coding gain.” Coding gain is defined as the difference between the S/N values required by the coded and uncoded data streams to hit a given BER target. For instance, if a system needs 20 dB of S/N to achieve a BER of 10⁻⁶ without coding, but only 8 dB of S/N when FEC is used, the coding gain is 20 – 8 = 12 dB.

MODULATION

Almost any modulation scheme may be used to transmit digital data. But in today’s more complex critical applications, the most widely used methods are some form of phase-shift keying (PSK) and QAM. Special modes like spread spectrum and orthogonal frequency-division multiplexing (OFDM) are especially well adopted in the wireless space.


6. The widely used I/Q method of modulation in a transmitter is derived from the digital signal processor.

7. An I/Q receiver recovers data and demodulates in the digital signal processor.

8. Direct sequence spread spectrum (DSSS) is produced using this basic arrangement.



Amplitude-shift keying (ASK) and on-off keying (OOK) are generated by turning the carrier off and on or by shifting it between two carrier levels. Both are used for simple and less critical applications. Since they’re susceptible to noise, the range must be short and the signal strength high to obtain a decent BER.

Frequency-shift keying (FSK), which is very good in noisy applications, has several widely used variations. For instance, minimum-shift keying (MSK) and Gaussian-filtered FSK are the basis for the GSM cell-phone system. These methods filter the binary pulses to limit their bandwidth and thereby reduce the sideband range. They also use coherent carriers that have no zero-crossing glitches; the carrier is continuous. In addition, a multi-frequency FSK system provides multiple symbols to boost data rate in a given bandwidth. Still, in most applications, PSK is the most widely used.

Plain-old binary phase-shift keying (BPSK) is a favorite scheme in which the 0 and 1 bits shift the carrier phase 180°. BPSK is best illustrated in a constellation diagram (Fig. 4a). It shows an axis where each phasor represents the amplitude of the carrier and the direction represents the phase position of the carrier.

Quaternary, 4-ary, or quadrature PSK (QPSK) uses sine and cosine waves in four combinations to produce four different symbols shifted 90° apart (Fig. 4b). It doubles the data rate in a given bandwidth yet remains very tolerant of noise.

Beyond QPSK is what’s called M-ary PSK, or M-PSK. It uses many phases, like 8PSK and 16PSK, to produce eight or 16 unique phase shifts of the carrier, allowing for very high data rates in a narrow bandwidth (Fig. 4c). For instance, 8PSK allows transmission of three bits per phase symbol, theoretically tripling the data rate in a given bandwidth.

The ultimate multilevel scheme, QAM, uses a mix of different amplitudes and phase shifts to define as many as 64 to 1024 or more different symbols. Thus, it reigns as the champion of getting high data rates in small bandwidths.

When using 16QAM, each 4-bit group is represented by a phasor of a specific amplitude and phase angle (Fig. 5). With 16 possible symbols, four bits can be transmitted per baud or symbol period. That effectively multiplies the data rate by four for a given bandwidth.

Today, most digital modulation and demodulation employs digital signal processing (DSP). The data is first encoded and then sent to the digital signal processor, whose software produces the correct bit streams. The bit streams are encoded in an I/Q, or in-phase and quadrature, format using a mixer arrangement (Fig. 6).

Subsequently, the I/Q data is translated into analog signals by the digital-to-analog converters (DACs) and sent to the mixers, where it’s mixed with the carrier or some IF sine and cosine waves. The resulting signals are summed to create the analog RF output. Further frequency translation may be needed. The bottom line is that virtually any form of modulation may be produced this way, as long as you have the right DSP code. (Forms of PSK and QAM are the most common.)

At the receiver, the antenna signal is amplified, downconverted, and sent to an I/Q demodulator (Fig. 7). The signal is mixed with the sine and cosine waves, then filtered to create the I and Q signals. These signals are digitized in analog-to-digital converters (ADCs) and sent to a digital signal processor for the final demodulation. Most radio architectures use this I/Q scheme and DSP. It’s generally referred to as software-defined radio (SDR). The DSP software manages the modulation, demodulation, and other processing of the signal, including some filtering.

The spread-spectrum and OFDM wide-bandwidth schemes are also forms of multiplexing or multiple access. Spread spectrum, which is employed in many cell phones, allows multiple users to share a common bandwidth. It’s referred to as code-division multiple access (CDMA). OFDM also uses a wide bandwidth to enable multiple users to access the same wide channel.

Figure 8 shows how the digitized serial voice, video, or other data is modified to produce spread spectrum. In this scheme, direct-sequence spread spectrum (DSSS), the serial data is sent to an exclusive-OR gate along with a much faster chipping signal. The chipping signal is coded so it’s recognized at the receiver. The narrowband digital data (several kilohertz) is then converted to a wider-bandwidth signal that occupies a wide channel. In cell-phone cdma2000 systems, the channel bandwidth is 1.25 MHz and the chipping rate is 1.2288 Mchips/s. Therefore, the data signal is spread over the entire band.
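The XOR spreading can be sketched in a few lines. This toy example uses a short chip sequence of my own choosing (not cdma2000’s actual code) and a majority-vote despreader:

```python
def dsss_spread(data_bits, chip_code):
    """XOR each data bit with every chip in the code, expanding the bandwidth."""
    return [bit ^ chip for bit in data_bits for chip in chip_code]

def dsss_despread(chips, chip_code):
    """Reverse the XOR with the same code, then majority-vote each bit."""
    n = len(chip_code)
    out = []
    for i in range(0, len(chips), n):
        votes = [c ^ k for c, k in zip(chips[i:i + n], chip_code)]
        out.append(1 if sum(votes) > n // 2 else 0)
    return out

code = [1, 0, 1, 1, 0, 0, 1]          # toy 7-chip pseudorandom code
tx = dsss_spread([1, 0, 1], code)     # 3 data bits -> 21 chips on the air
print(dsss_despread(tx, code))        # recovers [1, 0, 1]
```

The majority vote hints at why spreading buys noise immunity: a few corrupted chips per bit still decode correctly.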

Spread spectrum can also be achieved with frequency-hopping spread spectrum (FHSS). In this configuration, the data is transmitted in hopping periods over different randomly selected frequencies, spreading the information over a wide spectrum. The receiver, knowing the hop pattern and rate, can

10. Higher-level modulation methods like 16QAM require a better signal-to-noise ratio or higher Eb/N0.


9. The OFDM configuration used in IEEE 802.11a/g permits data rates of 6, 9, 12, 18, 36, 48, or 54 Mbits/s. Each subcarrier is modulated by BPSK, QPSK, 16QAM, or 64QAM, depending on the data rate.



reconstruct the data and demodulate it. The most common example of FHSS is Bluetooth wireless.

Other data signals are processed the same way and transmitted in the same channel. Because each data signal is uniquely encoded by a special chipping-signal code, all of the signals are scrambled and pseudorandom in nature. They overlay one another in the channel. A receiver hears only a low noise level. Special correlators and decoders in the receiver can pick out the desired signal and demodulate it.

In OFDM, the high-speed serial data stream gets divided into multiple slower parallel data streams. Each stream modulates a very narrow sub-channel in the main channel. BPSK, QPSK, or different levels of QAM are used, depending on the desired data rate and the application’s reliability requirements. Adjacent sub-channels are designed to be orthogonal to one another, so the data on one sub-channel doesn’t produce inter-symbol interference with an adjacent channel. The result is a high-speed data signal that’s spread over a wider bandwidth as multiple, parallel slower streams.

The number of sub-channels varies with each OFDM system, from 52 in Wi-Fi radios to 1024 in cell-phone systems like LTE and wireless broadband systems such as WiMAX. With so many channels, it’s possible to divide the sub-channels into groups. Each group would transmit one voice or other data signal, allowing multiple users to share the assigned bandwidth. Typical channel widths are 5, 10, and 20 MHz. To illustrate, the popular 802.11a/g Wi-Fi system uses an OFDM scheme to transmit data rates to 54 Mbits/s in a 20-MHz channel (Fig. 9).

All new cell-phone and wireless broadband systems use OFDM because of its high-speed capabilities and reliable communications qualities. Broadband DSL is OFDM, as are most power-line technologies. OFDM can be difficult to implement, which is where DSP steps in.

Modulation methods vary in the amount of data they can transmit in a given bandwidth and how much noise they can withstand. One measure of this is the BER for a given Eb/N0 ratio (Fig. 10). Simpler modulation schemes like BPSK and QPSK produce a lower BER for a low Eb/N0, making them more reliable in critical applications. However, different levels of QAM produce higher data rates in the same bandwidth, although a higher Eb/N0 is needed for a given BER. Again, the tradeoff is data rate against BER in a given bandwidth.
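For BPSK, the BER-versus-Eb/N0 relationship has a closed form. This sketch uses the textbook AWGN expression; published curves may differ slightly due to implementation losses:

```python
import math

def ber_bpsk(ebn0_db):
    """Theoretical BPSK bit error rate over AWGN: BER = 0.5*erfc(sqrt(Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)          # convert dB to a linear ratio
    return 0.5 * math.erfc(math.sqrt(ebn0))

for db in (4, 7, 9.6):
    print(db, ber_bpsk(db))              # BER falls steeply as Eb/N0 rises
```

Around 9.6 dB, the formula gives a BER near 10⁻⁵, matching the familiar BPSK waterfall curve.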

SPECTRAL EFFICIENCY

Spectral efficiency is a measure of how many bits can be transmitted at a given rate over a fixed bandwidth. It’s one way to compare the effectiveness of modulation methods. Spectral efficiency is stated in terms of bits per second per hertz of bandwidth, or (bits/s)/Hz. Though the measure usually excludes any FEC coding, it’s sometimes useful to include FEC in a comparison.

Remember 56k dial-up modems? They achieved an amazing 56 kbits/s in a 4-kHz telephone channel, for a spectral efficiency of 14 (bits/s)/Hz. Maximum throughput for an 802.11g Wi-Fi radio is 54 Mbits/s in a 20-MHz channel, for a spectral efficiency of 2.7 (bits/s)/Hz. A standard digital GSM cell phone does 104 kbits/s in a 200-kHz channel, making the spectral efficiency 0.52 (bits/s)/Hz. Add EDGE modulation and that jumps to 1.93 (bits/s)/Hz. And taking it to new levels, the forthcoming LTE cell phones will have a spectral efficiency of 16.32 (bits/s)/Hz in a 20-MHz channel.

Spectral efficiency shows just how much data can be crammed into a narrow bandwidth with different modulation methods. The table compares the relative efficiencies of different modulation methods, where bandwidth efficiency is just data rate divided by bandwidth, or C/B.
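The comparisons above are just C/B. A sketch (the helper name is mine) reproduces the figures:

```python
def spectral_efficiency(rate_bps, bandwidth_hz):
    """Bits per second per hertz: data rate divided by channel bandwidth."""
    return rate_bps / bandwidth_hz

print(spectral_efficiency(56e3, 4e3))     # 56k modem in a 4-kHz channel -> 14.0
print(spectral_efficiency(54e6, 20e6))    # 802.11g Wi-Fi -> 2.7
print(spectral_efficiency(104e3, 200e3))  # GSM -> 0.52
```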

There’s a tendency to think of energy on the power lines in terms of its fundamental 60- or 50-Hz frequency—the way the voltage is supposed to be created with the turbines and generators at the power house. Sure, the current lags the voltage if there’s a reactive load. That’s “power factor,” right? But isn’t it still a matter of “real” and reactive components at 50 or 60 Hz? Yes and no. Unfortunately, that conceptualization is a bit oversimplified.

In power distribution, power-factor correction (PFC) has traditionally been understood in terms of adding (in general) capacitive reactance at points in the power distribution system to offset the effect of an inductive load. One could say “reactive” load, but historically, power engineers have been most concerned with motors as loads when dealing with power factor. Correction could take the form of a bank of capacitors or a “synchronous condenser” (an unloaded synchronous motor).

Reconciling Power-Factor Correction Standards Leads To Solutions

Most of the world mandates control of current harmonics. North America specs 0.9 power factor. Does it make a difference?

Don TuiTe | AnAlog/powEr EDitor [email protected]


More broadly, PFC can also be needed in any line-powered apparatus that uses ac-dc power conversion. These applications range in scale from battery chargers for portable devices to big-screen TVs. Cumulatively, their input rectifiers are the largest contributor to mains-current harmonic distortion.

Where does that harmonic distortion come from? One common misconception is that the switching regulators cause the harmonic power-factor components. Actually, they’re produced in the typical full-bridge rectifier and its filter capacitor, aided and abetted by the impedance of the power line itself.

In the steady state, the supply draws current from the line only when the input voltage exceeds the voltage on the filter capacitor. This creates a current waveform that includes all the odd harmonics of the power-line frequency (Fig. 1).

Once the voltage crosses that point, the current is limited only by the source impedance of the utility line, the resistance of the forward-biased diode, and the reactance of the capacitor that smooths out the dc. Because the power lines exhibit non-zero source impedance, the high current peaks cause some clipping distortion on the peaks of the voltage sinusoid.

Harmonics get to be considered elements of power factor because of their relationship to the power-line frequency. As Fourier components, they cumulatively represent an out-of-phase current at the fundamental frequency. In fact, one broad definition of power factor is:

PF = 1/√(1 + THD²)

where THD is total harmonic distortion.
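When distortion is the only contributor (that is, the fundamental current stays in phase with the voltage), that definition can be evaluated directly. A minimal sketch:

```python
import math

def distortion_power_factor(thd: float) -> float:
    """Power factor due to harmonic distortion alone, with the fundamental
    current in phase with the line voltage. thd is a fraction (1.0 = 100%)."""
    return 1.0 / math.sqrt(1.0 + thd**2)

# A THD of 100%, of the order a bare rectifier-capacitor front end can
# produce, drags power factor down even with no phase lag at all:
print(f"PF at 100% THD: {distortion_power_factor(1.0):.3f}")   # ~0.707
```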

The Problem With Power Factor

Whatever the cause, what’s actually so wrong with power factors less than unity? Part of the problem is economic. Another part has to do with safety. Whatever their phase relationships, all those superposed harmonic currents create measurable I²R losses as they’re drawn from the generator, through miles of transmission and distribution lines, to the home or workplace.

Historically, the utility ate the expense of the losses. At least for domestic consumers, the utility delivered volt-amperes, the consumer paid for watts, and the volt-amperes reactive (VARs) were a net loss. In fact, old mechanical power meters didn’t even record those currents, and in any event, tariffs for domestic consumers don’t permit charging for anything but “real” power.

That situation is likely to continue since “fixing” the tariffs is unlikely to appeal to state legislators. In any event, resolving the situation on an engineering level is more practical than socking it to Joe Homeowner.

That’s the economic side of the story. In terms of safety, if Joe’s home is an apartment, he has another reason to care. Harmonics, notably the third, can and do result in three-phase imbalance, with current flowing in the neutral conductor of a “wye” (Y) configuration. The wye neutral conductor typically isn’t sized to carry significant current.

PFC harmonics also cause losses and dielectric stresses in capacitors and cables, in addition to overcurrents in machine and transformer windings. For a more detailed analysis, see “PFC Strategies in Light of EN 61000-3-2,” by Basu et al. (http://boseresearch.com/RIGA_paper_27_JULY_04.pdf).

Regulating PF

Interestingly, mains power has been subject to interference from the beginning. The first regulatory effort to control disturbances to the electrical grid, the British Lighting Clauses Act of 1899, was intended to keep uncontrolled arc-lamps from making incandescent lamps flicker.

1. In ac-dc switching power supplies, the power-line frequency’s current harmonics are produced when the load draws current from the power line during the intervals when the line voltage is higher than the voltage on the filter capacitor. The net effect is a load current that is out of phase with the line voltage and contains frequency components that exhibit skin effect on power lines, causing conduction losses, and excite eddy currents in power-company transformers that result in further losses.

2. Power-factor correction (PFC) in ac-dc supplies uses a control circuit to switch a MOSFET so it draws current through an inductor in a way that fills in the gaps that would otherwise represent harmonics. When the PFC is operated in “critical conduction” or “transition” mode (a), the average inductor current is relatively low, because the peak current is allowed to fall essentially to zero. When it is operated in “continuous conduction” mode (b), the average current is higher. Transition mode is easier to achieve, while continuous mode yields power factors closer to unity.

More recently (1978 and 1982), international standards IEC 555-2 “Harmonic injection into the AC Mains” and IEC 555-3 “Disturbances in supply systems caused by household appliances and similar electrical equipment - Part 3: Voltage fluctuations” were published. (Later they were updated to IEC1000 standards.)

Like those standards, the current standards come out of Europe, but they’re nearly universal. There are related government regulations for power-line harmonics in Japan, Australia, and China.

In the European Union, standard IEC/EN61000-3-2, “Electromagnetic compatibility (EMC) - Part 3-2 - Limits - Limits for harmonic current emissions (equipment input current ≤ 16 A per phase),” sets current limits up to the 39th harmonic for equipment with maximum power-supply ratings from 75 to 600 W. Its “Class D” requirements (the strictest) apply to personal computers, computer monitors, and TV receivers. (Classes A, B, and C cover appliances, power tools, and lighting.)

What does the standard actually say? Under IEC 61000-3-2, the limits for Class D harmonic currents are laid down in terms of milliamps per watt consumed (Table 1).

Global Disharmony

Awkwardly, IEC61000-3-2, being a European-oriented standard, is based on 230-V single-phase and 230/400-V three-phase power at the wall plug. In consequence, the current limits have to be adjusted for 120/240-V mains voltages in North America.

While IEC61000-3-2 sets mandatory standards for supplies sold in the EU, there are voluntary standards for North America. The U.S. Department of Energy’s Energy Star computer specification includes “80 Plus” power-supply requirements for desktop computers (later including servers) and laptops.

80 Plus is a U.S./Canadian electric-utility-funded rebate program that subsidizes the extra cost of computer power supplies that achieve 80% or higher efficiency at low, mid-range, and peak outputs, relative to the power rating on the nameplate, and that exhibit a power factor of at least 0.9. Within territories served by participating utilities, the utilities pay $5 or $10 for every qualifying desktop computer or server sold.

In 2008, the 80 Plus program was expanded to recognize higher-efficiency power supplies, initially using the Olympic medal colors of bronze, silver, and gold, and then adding platinum (Table 2). The new subcategories were meant to help expand program branding and to make it possible to offer larger consumer rebates for participating manufacturers that had moved ahead of the curve.

In Table 2, “redundant” refers to the practice of server-system makers of operating from a 230-V ac source and using multiple supplies to deliver power to the load. Some systems may have up to six power supplies, so if one fails, the others can absorb the failed unit’s share of the load.

Below 20% Load

One complaint about 80 Plus is that it does not set efficiency targets for very low load levels. This may seem like a trivial objection, but it isn’t when there are large numbers of computers in operations such as server farms, many of which may be in a standby or sleep mode at any given time. Ironically, the processor’s power-saving modes tend to conflict with efforts to save power in the ac supply.

Of some further significance may be the conflict between specifying requirements for the individual components of harmonic distortion, as IEC 61000-3-2 does, and specifying a single value, such as 0.9 for power factor, as the higher levels of 80 Plus do.

Texas Instruments provides an interesting analysis of the issues in a white paper, “High Power Factor and High Efficiency – You Can Have Both,” by Isaac Cohen and Bing Lu (http://focus.ti.com/download/trng/docs/seminar/Topic_1_Cohen_Lu.pdf). Early in the paper, the authors calculate the power factor represented by the harmonic levels specified by IEC61000-3-2, Class D. Making a few simplifications, the expression for power factor reduces to approximately 0.726.

Since 0.726 is significantly less than 0.9, a supply that just meets the minimum requirement for the EU standard will fail Energy Star.
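That figure can be reproduced from the Class D relative limits in Table 1. Assuming a unity-displacement fundamental of 1/230 A per watt at a 230-V line, a supply sitting exactly at every Class D limit works out to the same power factor:

```python
import math

# EN61000-3-2 Class D relative limits, in mA per watt of load power
limits_ma_per_w = {3: 3.4, 5: 1.9, 7: 1.0, 9: 0.5, 11: 0.35}
limits_ma_per_w.update({n: 3.85 / n for n in range(13, 40, 2)})

V_LINE = 230.0                 # European line voltage, V rms
i1 = 1000.0 / V_LINE           # in-phase fundamental current, mA per watt

# THD if every odd harmonic sits exactly at its Class D limit
thd = math.sqrt(sum(i**2 for i in limits_ma_per_w.values())) / i1
pf = 1.0 / math.sqrt(1.0 + thd**2)
print(f"PF at the Class D limits: {pf:.3f}")   # ~0.726
```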

Just to make things interesting, the TI authors note that power factor is basically defined as the ratio of the average power in watts absorbed by a load to the product of the RMS voltage appearing across the load and the RMS current flowing in it. On that definition, it is theoretically possible to design a simple full-wave bridge, drive it with a square wave of current, and meet the 0.9 power-factor Energy Star requirement by “emulating an inductive-input filter with a large inductance value.” (See the white paper for details.) Nonetheless, a Fourier analysis of the square wave shows that all harmonics above the 11th exceed the IEC61000-3-2 limits.
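A rough check of both claims, using an idealized full-conduction square wave of current in phase with the line (its power factor is 2√2/π ≈ 0.90, and its odd harmonics roll off only as 1/n):

```python
import math

V_LINE = 230.0
pf_square = 2.0 * math.sqrt(2.0) / math.pi      # ~0.9003: passes a 0.9 spec

# Per watt of load power: rms current, fundamental, and 1/n odd harmonics
i_rms = 1000.0 / (V_LINE * pf_square)           # mA/W
i1 = pf_square * i_rms                          # fundamental, mA/W
harmonics = {n: i1 / n for n in range(3, 40, 2)}

# EN61000-3-2 Class D relative limits, mA/W
class_d = {3: 3.4, 5: 1.9, 7: 1.0, 9: 0.5, 11: 0.35}
class_d.update({n: 3.85 / n for n in range(13, 40, 2)})

failing = [n for n in harmonics if harmonics[n] > class_d[n]]
print(f"PF = {pf_square:.3f}; harmonics over the Class D limit: {failing}")
```

Under this crude idealization the 11th harmonic also lands slightly over its limit, but the essential point matches the paper’s: the 1/n rolloff of a square wave can never clear the 3.85/n Class D limits at the high harmonics, even though the power factor itself passes 0.9.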

Ultimately, as the title of the paper suggests, the problem is a chimera. “Fortuitously, all the commonly used active-PFC circuits draw input-current waveforms that can easily comply [with] both standards,” the authors say.

Like Texas Instruments, ON Semiconductor has addressed the issues of reconciliation. In a communication available online, “Comments on Draft 1 Version 2.0 Energy Star Requirements for External Power Supplies (EPS)” at www.energystar.gov/ia/partners/prod_development/revisions/downloads/ON_Semiconductor.pdf, the company advised the Department of Energy that external power supplies that meet IEC61000-3-2 typically have a power factor of 0.85 or greater when measured at 100% of rated output power.

3. The amount of ripple reduction in multiphase PFC schemes is a function of duty cycle. The solid line represents a two-phase design. In a three-phase design, the phase difference between phases is 360° divided by three, or 120°. In a four-phase design, the phase difference is 90°.

“More specifically, at 100% of rated output power and 230-V ac line, two-stage external power supplies with an active-PFC front end exhibit a power factor greater than 0.9,” the paper explains. “The opposite is however not true, i.e. it is entirely possible that an external power supply can exhibit a power factor of 0.9 and yet will fail a given odd harmonic current and therefore will not meet the IEC61000-3-2.”

Another issue with stating a PFC requirement directly, rather than in terms of individual harmonics, has to do with design efficiency. For a single-stage PFC topology to meet the proposed power-factor specification at 230-V ac line, ON Semiconductor says, the necessary circuit modifications would result in an efficiency loss of a few percent and in a substantially increased cost.

“For single-stage external power supplies the power factor is typically greater than 0.80. The proposed power-factor requirement would eliminate the single-stage topology that is one of the most cost-effective ways of building highly efficient external power supplies such as notebook adapters with a nameplate output power below 150 W,” ON Semiconductor says.

Note the emphasis on single-stage. It opens the door to an interesting design question represented by TI and ON Semi. To understand it, let’s first look at actual PFC design approaches.

Approaches To Unity Power Factor

Since the discontinuous input-filter charging current creates the low power factor in switch-mode power supplies, the cure is to increase the rectifier’s conduction angle. Solutions include passive and active PFC and passive or active filtering.

Passive PFC involves an inductor on the power-supply input. Passive PFC looks simple, but it isn’t really practical, for reasons that include the necessary inductance, conduction losses, and the possibility of resonance with the output filter capacitor.

As noted above, the power-factor problem in ac-input switch-mode power supplies arises because current is drawn from the line only during the parts of the ac-supply voltage waveform that rise above the dc voltage on the bulk storage (filter) capacitor(s). This non-symmetrical current draw introduces harmonics of the ac line voltage on the line.

The basic PFC concept is fairly simple (Fig. 2). A control circuit switches a MOSFET to draw current through an inductor in a way that fills in the gaps that would otherwise represent harmonics.
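For a boost-type PFC stage running in continuous conduction, the controller in effect commands a duty cycle of d(t) = 1 − |vin(t)|/VOUT across the rectified half-sine, which is what shapes the average inductor current to follow the line. A sketch with illustrative voltages (not values from the text):

```python
import math

# Ideal CCM boost PFC: the large-signal duty cycle that makes the input
# look resistive is d(t) = 1 - |vin(t)| / V_OUT.

V_PEAK = 325.0    # peak of a 230-V rms line (illustrative)
V_OUT = 400.0     # typical boost PFC bus voltage (illustrative)

def duty(theta: float) -> float:
    """Boost duty cycle at line angle theta (radians)."""
    return 1.0 - V_PEAK * abs(math.sin(theta)) / V_OUT

print(f"zero crossing: d = {duty(0.0):.2f}")        # 1.00
print(f"line peak:     d = {duty(math.pi/2):.2f}")  # 0.19
```

The duty cycle runs highest near the line zero crossings, exactly where a plain rectifier-capacitor front end would draw no current at all.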

The PFC controller can be designed to operate in several modes, chiefly critical conduction mode (also called transition mode) and continuous conduction mode (CCM). The differences lie in how fast the MOSFET switches, which in turn determines whether the inductor current (and the energy in the inductor) approaches zero or remains relatively high.

The terms “critical” and “transition” reflect the fact that each time the current approaches 0 A, the inductor is at a point where its energy approaches zero. Transition-mode operation can achieve power factors of 0.9. However, it is limited to lower power levels, typically 600 W and below. It is economical because it uses relatively few components. Applications include lighting ballasts and LED lighting, as well as consumer electronics.

The circuit topology for CCM is like critical conduction mode. But unlike the simpler mode, its ripple current has a much lower peak-to-peak amplitude and does not go to 0 A. The inductor always has current flowing through it and does not dump all of its energy at each pulse-width modulation (PWM) cycle, hence the term “continuous.”

In this case, the average current produces a higher-quality composite of the ac line current, making it possible to achieve power factors near unity. This is important at higher power levels, where the higher currents magnify radiated and conducted electromagnetic-interference (EMI) levels that critical conduction mode would have difficulty meeting.

PFC Controller Designs

TI has an interesting solution for this, embodied in its UCC28070 two-phase interleaved continuous-conduction-mode PFC controller (Fig. 4). The UCC28070 targets 300-W to multikilowatt power supplies, such as might be used in telecom rectifiers or server front ends.

The idea behind the design of the TI chip is that for higher power levels, it is possible to parallel two PFC stages to deliver higher power. This also has thermal-management advantages, since heat losses from the two stages are spread over a wider area of the circuit board. The disadvantage of simple parallel operation is higher input and output ripple currents.

4. Texas Instruments’ UCC28070 power-factor-correction chip integrates two pulse-width modulators operating 180° out of phase. The interleaved PWM operation reduces input and output ripple currents, and the conducted-EMI filtering is easier and less expensive.

TI says that a better alternative is to interleave the two stages so their currents are 180° out of phase. That way, the ripple currents cancel. In fact, designs with more than two phases are common (Fig. 3). In those cases, the phase angles are distributed evenly. In multiphase PFC, the lower output ripple currents mean the number or physical size of the passive components can be smaller than in single-phase PFC, providing cost, space, and EMI-filter-complexity tradeoffs.
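The cancellation can be checked numerically by summing ideal unit-ripple triangular phase currents offset by 1/N of a switching period; the combined ripple vanishes whenever the duty cycle is a multiple of 1/N, which is where the curves in Figure 3 dip to zero. A simplified model, not a converter simulation:

```python
# Ripple cancellation in N-phase interleaved PFC. Each phase is modeled as
# a unit peak-to-peak triangular inductor ripple (rising for D*T, falling
# for (1-D)*T); phases are shifted by T/N.

def tri(u: float, d: float) -> float:
    """Triangular ripple, unit peak-to-peak, duty cycle d, period 1."""
    u %= 1.0
    return u / d - 0.5 if u < d else 0.5 - (u - d) / (1.0 - d)

def combined_ripple(n_phases: int, d: float, samples: int = 2000) -> float:
    """Peak-to-peak ripple of the summed phase currents, normalized to
    one phase's peak-to-peak ripple."""
    total = [sum(tri(k / samples + p / n_phases, d) for p in range(n_phases))
             for k in range(samples)]
    return max(total) - min(total)

# Two phases at D = 0.5: rising and falling ramps cancel exactly.
print(combined_ripple(2, 0.5))    # ~0.0
# Four phases cancel at duty-cycle multiples of 1/4.
print(combined_ripple(4, 0.25))   # ~0.0
```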

The application often drives PFC-controller design. For example, ON Semiconductor’s NCL30001 LED lighting controller, which is intended for 12-V and higher LED lighting applications between 40 and 150 W, combines CCM PFC and a flyback step-down converter (Fig. 5).

While a typical LED lighting power supply might consist of a boost PFC stage that powers a 400-V bus followed by an isolated dc-dc converter, the NCL30001 datasheet describes a simpler approach that shrinks the front-end converter (ON Semi calls it the PFC preregulator) and the dc-dc converter into a single power-processing stage with fewer components. It essentially needs only one MOSFET, one magnetic element, one low-voltage output rectifier, and one low-voltage output capacitor.

ON Semiconductor’s datasheet provides an instructive description of the portion of the circuit shown in Figure 5. The output of a reference generator is a rectified version of the input sine wave, proportional to the feedback (FB) value and inversely proportional to the feedforward (VFF) value. An ac error amplifier forces the average current output of the current-sense amplifier to match the reference-generator output. This output (VERROR) drives the PWM comparator through a reference buffer, and the PWM comparator sums VERROR and the instantaneous current and compares the result to a 4.0-V threshold. Suitably compensated, this provides the duty-cycle control.


5. ON Semiconductor’s NCL30001 employs a variable reference generator, a low-frequency voltage-regulation error amplifier, ramp compensation, and a current shaping network. Inputs to the reference generator include a scaled ac input signal (AC_IN) and feedforward input (VFF).

TABLE 1: EN61000-3-2 CLASS D HARMONIC CURRENT LIMITS (POWER = 75 TO 600 W)

Harmonic order (n)    Relative limit (mA/W)    Absolute limit (A)
3                     3.4                      2.30
5                     1.9                      1.14
7                     1.0                      0.77
9                     0.5                      0.40
11                    0.35                     0.33
13 to 39 (odd)        3.85/n                   2.25/n

TABLE 2: 80 PLUS EFFICIENCY LEVELS

                          115-V internal non-redundant    230-V internal redundant
Percent of rated load     20%     50%     100%            20%     50%     100%
80 Plus (basic)           80%     80%     80%             Undefined
Bronze                    82%     85%     82%             81%     85%     81%
Silver                    85%     88%     85%             85%     89%     85%
Gold                      87%     90%     87%             88%     92%     88%
Platinum                  90%     92%     89%             90%     94%     91%


Thermal Modeling Takes The Heat Off Of Automotive Silicon Designs

Andra Horton | Texas Instruments [email protected]

Optimizing die layout and power early in the design cycle, as well as thermal improvements at the package and system level, helps to deliver the best possible automotive designs.

Why is it important to dissipate heat? For most semiconductor applications, quickly moving the heat away from the die and out toward the larger system prevents highly concentrated areas of heat on the silicon.

Typical operating temperatures for silicon die range from 105°C to 150°C, depending on the application. At higher temperatures, metal diffusion is more prevalent, and eventually the device can fail from shorting.

The die’s reliability depends greatly upon the amount of time spent at high temperatures. For very short durations, a silicon die can tolerate temperatures well above the published acceptable values. However, the device’s reliability is compromised over time.

Due to this delicate balance between power needs and thermal limits, thermal modeling has become an essential tool for the automotive industry. In the automotive safety industry, the current drive is for smaller assemblies with lower part counts, which forces semiconductor providers to include more functions with higher power consumption.

The higher temperatures generated ultimately will affect reliability and, in turn, automotive safety. But by optimizing the die layout and power-pulse timing early in the design cycle, designers can provide an optimized design with fewer silicon test builds, leading to a quicker development cycle.

Semiconductor Thermal Packaging

The automotive electronics industry uses various semiconductor package types, from small, single-function transistors to complex power packages with more than 100 leads and specially designed heatsinking capabilities.

Semiconductor packaging serves to protect the die, provide electrical connection between the device and external passive components in the system, and manage thermal dissipation. For this discussion, we’ll focus on the semiconductor package’s ability to conduct heat away from the die.

In leaded packages, the die is mounted to a metal plate called the die pad. This pad supports the die during fabrication, and it provides a good thermally conductive surface. A common semiconductor package type in the auto industry is the exposed-pad, or PowerPAD-style, package (Fig. 1).

The bottom side of the die pad is exposed and soldered directly to the printed-circuit board (PCB), providing a direct thermal connection from the die to the PCB. The primary heat path runs down through the exposed pad, which is soldered to the circuit board. The heat is then dissipated through the PCB into the surroundings.

Exposed-pad-style packages conduct approximately 80% of the heat through the bottom of the package and into the PCB. The other 20% of the heat dissipates from the device leads and sides of the package. Less than 1% of the heat dissipates from the top of the package.

A similar leaded package is the non-exposed-pad package (Fig. 1, again). Here, plastic fully surrounds the die pad, providing no direct thermal connection to the PCB. Approximately 58% of the heat dissipates from the leads and sides of the package, 40% from the bottom of the package, and approximately 2% from the top.

1. As the percentages show, thermal dissipation is distinctly different between a standard leaded package and a PowerPAD-style package.

Heat transfer occurs via conduction, convection, or radiation. For automotive semiconductor packaging, the primary means of heat transfer are conduction to the PCB and convection to the surrounding air. Radiation, when present, represents a minor portion of the heat transfer.

Thermal Challenges

Operation, safety, and comfort automotive systems rely heavily on semiconductors. They’re now common in body electronics, airbags, climate control, radio, steering, passive entry, anti-theft systems, tire monitoring, and more.

Despite many new applications for semiconductors in the automotive industry, three traditional areas still maintain individual environmental requirements: inside the vehicle cab, at the panel firewall, and under the hood. In conjunction, three factors continue to challenge the automotive environment: high ambient temperatures, high power, and limited material thermal-dissipation properties.

Temperatures for automotive environments and other environments are typically worlds apart. Consumer-electronics temperatures commonly reside at 25°C, with upper limits around 70°C. On the other hand, electronics inside the passenger compartment of the car or in panel applications will run at temperatures up to 85°C (see the table).

In firewall applications, where electronics are located between the engine compartment and the vehicle’s cab, devices can be exposed to ambient temperatures up to 105°C. Under-hood applications require operation in an environment with ambient temperatures up to 125°C.

Thermal considerations are especially important in safety-related systems, such as power steering, airbags, and antilock brakes. In braking and airbag applications, power levels of up to 100 W can be expected for short durations (~1 ms).

Increased functionality demands, plus multiple concentrated power sources, drive the die’s high power. Die temperatures for some automotive-application semiconductors can reach 175°C to 185°C for short periods of time. Typically, this is the thermal-shutdown limit for automotive devices.
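A first-order steady-state estimate ties these numbers together: junction temperature is ambient temperature plus dissipated power times the junction-to-ambient thermal resistance. The θJA value below is purely illustrative, not a figure from the article:

```python
# First-order steady-state junction temperature: Tj = Ta + P * theta_ja.
# The theta_ja value is an assumed, illustrative number.

THETA_JA = 30.0   # junction-to-ambient thermal resistance, deg C per W (assumed)

def junction_temp(t_ambient_c: float, power_w: float) -> float:
    """Estimated steady-state die junction temperature, deg C."""
    return t_ambient_c + power_w * THETA_JA

# Under-hood ambient of 125 deg C with 2 W dissipated:
tj = junction_temp(125.0, 2.0)
print(f"Tj = {tj:.0f} deg C")   # 185 deg C, right at a typical shutdown limit
```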

Thermal demands increase with the addition of more safety features. Though airbags have been common in vehicles for longer than a decade, some cars now come with as many as 12 airbags. During deployment, multiple airbags require a sequenced operation and create a much greater thermal design challenge compared to a single traditional airbag.

Regarding thermal challenges in terms of material properties, it’s no secret that there’s a concerted effort to reduce cost in automotive assemblies. Plastic materials are replacing metal modules and PCB enclosures. Plastic enclosures have the benefit of being cheaper to produce. They also weigh less. The tradeoff for lower cost and reduced weight, however, is a reduction in thermal performance.

Plastic materials have very low thermal conductivity, in the range of 0.3 to 1 W/mK, so they function as thermal insulators. No doubt, then, that the changeover to plastic enclosures will limit a system’s heat dissipation, increasing the thermal load on the semiconductor device.

Why Modeling?

Within the automotive semiconductor industry, modeling activity typically focuses on the thermal performance and design of a single device. Careful simplifications can be made to obtain modeling data.

System-level simplifications, such as eliminating extraneous low-power devices from the model, using simplified rather than detailed PCB copper routing, or assuming the chassis is at a fixed temperature for heatsinking, can all streamline the thermal model for fast solver times while still delivering an accurate representation of the thermal-impedance network.

Package-level thermal modeling makes it possible to review potential packaging design changes in advance without costly development and testing activity, eliminating material builds. Semiconductor packaging design can be varied to allow for optimal thermal dissipation, depending on the application’s needs.

With exposed-pad packages like PowerPAD, heat can quickly dissipate from the die to the PCB. Variations such as larger die pads, better connection to the PCB, or improved die-pad design offer ways to improve a device’s thermal performance.

Thermal modeling is also used to review the impact of potential material changes in a device. The thermal conductivity of packaging materials can vary widely, from as low as 0.4 W/mK (a thermal insulator) to more than 300 W/mK (a good thermal conductor). Using thermal-modeling techniques helps balance product cost against performance.

2. For eight-lead small-outline IC (SOIC) packages, the die junction temperature can be lowered 25°C by fusing several leads to the die pad.

3. Thermal-modeling software uses comma-separated-value (.csv) inputs to generate a detailed die layout and show any potential hotspot locations on the die surface.

Verification Of Modeling

For critical systems, careful lab-based analysis can determine thermal performance and operating temperatures. However, lab-based measurement of these systems may be time-consuming and costly. Here, thermal modeling is instrumental in addressing the system’s thermal needs and satisfying operational requirements.

In the semiconductor industry, thermal modeling has become an early part of the concept-testing and silicon-die design process. The ideal thermal-modeling flow begins months before fabricating any die. The IC designer and thermal engineer review the die layout and power losses for the device.

Then, the thermal modeling engineer creates a thermal model based on this review. Once thermal-model results are complete, the designer and modeling engineer review the data and tune the model to accurately reflect possible application scenarios.

Verification of all finite element analysis (FEA) modeling is highly recommended. Texas Instruments’ policy is to run correlation studies comparing thermal-modeling results with a system’s physical measurements.

These correlation studies have highlighted several areas of potential error, including material properties, power definition, and geometry simplification. While no model will be a perfect duplication of a true system, careful attention must be paid to the assumptions made during modeling to ensure the most accurate system representation.

For material properties, published values often show the bulk conductivity of a particular material. Yet in semiconductor applications, thin layers of material are commonly used, and the increased surface area of the material can cause a decrease in thermal conductivity compared to the bulk value.

Carefully note the power represented in the model, because applied power on a device during operation can vary with time. Power losses in the board or other areas in the system may also impact the die surface’s true power.

Types Of Modeling

To aid in semiconductor package design, there are four main types of thermal models: system level, package level, die level, and die-level transient analysis. In the automotive semiconductor arena, system-level thermal modeling is important because it shows how a particular device will perform in a specific system.

At a basic level, automotive semiconductor thermal modeling takes the PCB into account because it’s the primary heatsink for most semiconductor packages. The composition of the PCB, including copper layers and thermal vias, should be added to the thermal model to accurately capture thermal behavior. Furthermore, include any unique geometry, such as embedded heatsinks, or metal connections like screws or rivets.

Forced airflow around the system and PCB can also play a role in the convective heat transfer from the system. Typically, the semiconductor industry targets thermal modeling on a single, high-power device. Other power components on the PCB, though, may play a large role in the system’s overall performance too.

Simplifying the inputs from these packages, yet still maintaining a level of accuracy, often requires compact models. Compact models are less complex networks of thermal resistance that provide a reasonable approximation of the thermal performance of these non-critical devices.
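A minimal compact model can be sketched as two thermal-resistance paths in parallel: junction-to-ambient through the board, and junction-to-ambient through the package top. The resistance values below are assumed placeholders, not data for any specific package.

```python
# Two-path compact thermal model (all theta values in K/W are assumed):
# heat leaves the junction through the board path and the package-top path,
# which act in parallel between junction and ambient.
def parallel(*resistances):
    """Equivalent resistance of parallel thermal paths."""
    return 1.0 / sum(1.0 / r for r in resistances)

theta_board = 8.0 + 20.0   # junction-to-board plus board-to-ambient (assumed)
theta_top = 25.0 + 60.0    # junction-to-case plus case-to-ambient (assumed)

theta_ja = parallel(theta_board, theta_top)   # effective junction-to-ambient
t_junction = 85.0 + 1.5 * theta_ja            # 1.5 W at 85 degrees C ambient
```

Even this crude network lets a system model include a neighboring device's heat contribution without modeling its internal geometry.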

In smaller and lower-pin-count devices, other methods can be used to improve thermal performance (Fig. 2). For instance, fusing several of the package leads to the device’s die pad can significantly improve the overall junction temperature without impacting the device’s operation.

MODELING ASSUMPTIONS

Die-level thermal analysis begins with an accurate representation of the silicon die layout, including any powered regions on the die. In simple cases, assume that the power is evenly distributed across the silicon.

However, most die layouts have power in uneven patterns across the die, depending on functionality. This uneven power distribution can be critical to the device's overall thermal performance. For thermally critical devices, pay close attention to the power structure's location on the die.

In some thermal software programs, the die layout can be entered using comma-separated-variable (.csv) inputs (Fig. 3). This allows for an easy transfer of information between die-layout and thermal-modeling software. Depending on the device's complexity and the power level, these powered regions on the die can vary from two or three locations to several hundred.
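A power-map transfer of this kind might look like the snippet below. The column names and region format are hypothetical; actual thermal tools define their own import schemas.

```python
import csv
import io

# Hypothetical .csv power map: one powered region per row, with position
# and size in microns and dissipated power in watts.
csv_text = """x_um,y_um,w_um,h_um,power_w
100,100,200,50,0.8
400,250,50,50,0.3
"""

regions = list(csv.DictReader(io.StringIO(csv_text)))
total_power = sum(float(r["power_w"]) for r in regions)
# Sanity check: the summed region power should match the device's
# specified dissipation before the map is handed to the thermal tool.
```

Cross-checking the summed power against the datasheet value catches the most common transfer error: regions dropped or duplicated between layout and thermal tools.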

The thermal-modeling engineer should work closely with the IC designer to identify the powered regions for inclusion in the thermal model. Often, small, very low-powered regions can be grouped into larger regions, simplifying the thermal model while still accounting for the device’s overall power. Similarly, in a thermal model, background or quiescent power can be used across the die’s surface to account for a large percentage of the non-critical low-power die structures.

Device functionality frequently requires high power over small areas on the die. These high-powered regions can lead to localized heating regions, which may be significantly hotter than the surrounding silicon.

Thermal modeling helps highlight thermal problems in which clusters of medium-power silicon structures located in close proximity may cause residual heating and possible thermal stress to the die under assessment.

Models can also aid in the placement or calibration of embedded thermal sensors on the die. Ideally, a temperature sensor is placed at the center of the highest-powered region on the die. Yet due to layout constraints, this is often not possible. When located away from the center of the powered region, the temperature sensor is unable to read the device's full maximum temperature.

MAXIMUM AMBIENT TEMPERATURES FOR VEHICLE APPLICATIONS

  Automotive environment               Ambient temperature (°C)
  Inside car or panels                 85
  Firewall (between engine and cab)    105
  Under hood                           125

Thermal models can be used to determine the thermal gradient across the die, including at the sensor location. Then the sensor is calibrated to account for the temperature difference between the hottest region and the sensor location.
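The calibration itself reduces to a simple offset: the model predicts the gradient between the hot spot and the sensor site, and that delta is added to the sensor reading. The temperatures below are assumed model outputs for illustration only.

```python
# Assumed model results at full power (degrees C):
t_hotspot_model = 142.0   # hottest point on the die, per the thermal model
t_sensor_model = 131.0    # temperature at the actual sensor location

# The modeled gradient becomes the sensor's calibration offset.
offset = t_hotspot_model - t_sensor_model

def estimated_peak(sensor_reading_c):
    """Estimate the true hot-spot temperature from a sensor reading."""
    return sensor_reading_c + offset
```

One caveat worth noting: the offset is only valid for the power pattern it was modeled under, so devices with multiple operating modes may need a per-mode calibration.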

The aforementioned model types all assume a constant dc power input. In actual operation, though, device power varies with time and configuration. Designing the thermal system around only the worst-case power can make the thermal load prohibitive.

Transient thermal response can be reviewed using one of several different methods. The simplest method is to assume a dc power source on the die, then track the thermal response of the device as a function of time. A second method inputs a varying power source and then uses thermal software to determine the final steady-state temperature.

The third and most useful transient modeling style is to view the "response with time" of varying power in multiple die locations (Fig. 4). With this method, you can catch interactions between devices that may not be apparent under normal conditions. Transient modeling is also helpful in viewing the full duration of certain die operations that occur separate from normal device function, e.g., a device's power-up or shutdown mode.
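The simplest of these methods can be sketched as a one-node thermal RC network stepped forward in time, with the node temperature tracking a power pulse. The resistance, capacitance, and pulse values are assumed for illustration; real transient models use many nodes extracted from the package and board geometry.

```python
# Explicit-Euler transient of a one-node thermal RC network (assumed values):
#   C * dT/dt = P(t) - (T - T_amb) / R
R, C, t_amb = 20.0, 0.05, 85.0   # K/W, J/K, degrees C (all assumed)
dt = 1e-3                        # 1-ms time step
T = t_amb
history = []

for step in range(500):          # simulate 0.5 s, matching Fig. 4's time axis
    t = step * dt
    p = 2.0 if t < 0.25 else 0.0  # 2-W pulse for the first 250 ms (assumed)
    T += dt * (p - (T - t_amb) / R) / C
    history.append(T)

peak = max(history)  # temperature rises during the pulse, then decays
```

Because the thermal time constant (R times C, 1 s here) is longer than the pulse, the junction never reaches its dc steady-state temperature, which is the core argument for transient rather than worst-case dc analysis.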

In many automotive systems, such as braking actuation or airbag deployment, the device power remains at a low level for the bulk of the device's lifetime. In an airbag system during deployment, though, the power pulse can reach very high levels for short durations.

IMPROVEMENTS

Design optimization and lower overall temperatures are the goals of thermal modeling for the automotive semiconductor industry. Lowering the operating die junction temperature improves a device's reliability.

Small enhancements to the system, board, package, or die can potentially lead to dramatic improvements in final temperatures. Device and system limitations can rule out some of these options, though.

Methods for boosting thermal performance include airflow, conductive heat paths, and external heatsinking. Another is to provide more metal area for heat dissipation, such as external heatsinks, metal connections to the chassis, more or denser copper layers on a PCB, thermally connected copper planes, and thermal vias.

Thermal vias located below the exposed pad of a device help to quickly carry heat away from the device, as well as speed dissipation to the rest of the circuit board. Semiconductor device packages are designed to quickly move the heat away from the die and to the larger system.
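The benefit of a via array can be estimated by treating each plated via as a copper annulus through the board and combining them in parallel. The via dimensions and count below are assumed example values, not a recommendation.

```python
import math

# Thermal resistance of a plated-via array under an exposed pad.
# Each via is modeled as a hollow copper cylinder (annulus) through the board.
k_cu = 385.0           # copper conductivity, W/mK
board_t = 1.6e-3       # 1.6-mm board thickness (assumed)
d_outer = 0.3e-3       # 0.3-mm drill diameter (assumed)
plating = 25e-6        # 25-um copper plating (assumed)

r_o = d_outer / 2
r_i = r_o - plating
annulus_area = math.pi * (r_o**2 - r_i**2)   # copper cross-section per via

r_one = board_t / (k_cu * annulus_area)      # single-via resistance, K/W
n_vias = 9                                   # 3 x 3 array (assumed)
r_array = r_one / n_vias                     # parallel combination
```

The parallel scaling is why adding vias under the pad is such a cheap win: nine vias cut the through-board resistance to one-ninth of a single via's value.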

A semiconductor package can be improved thermally with higher-conductivity materials, direct die-pad attachment to the PCB (as in TI's PowerPAD packages), leads fused to the die pad, or mounting locations for external heatsinks. The semiconductor die itself allows many possible ways to minimize overall temperature. Of course, the best way to lower temperature is to lower power.

For silicon circuit design and layout, good thermal practices include larger heat-dissipation areas, locating powered regions away from the edges of the die, using long, narrow powered regions instead of square regions, and providing adequate space between high-powered regions. Silicon is a good thermal conductor, with a conductivity of approximately 117 W/mK. Allowing the maximum amount of silicon around a powered region improves the device's thermal dissipation.

For transient power on a die, staggering power pulses to decrease instantaneous power will lower overall temperatures. This means either leaving a long lag time between power pulses so the heat can dissipate, or sharing high-power events across several areas on the die. These transient variations allow the thermal system to recover before more heat is applied. By carefully designing the die, package, PCB, and system, a device's thermal performance can be dramatically improved.
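The staggering effect can be demonstrated on the same kind of one-node thermal RC model: two pulses fired together produce a higher peak than the same pulses separated in time. All resistance, capacitance, and pulse values are assumed for illustration.

```python
# Compare peak temperature for two 1-W, 0.1-s pulses fired simultaneously
# vs. staggered by 0.3 s, on a one-node thermal RC model (assumed values).
R, C, t_amb, dt = 20.0, 0.05, 85.0, 1e-3   # K/W, J/K, degrees C, seconds

def peak_temperature(pulses, duration=0.6):
    """pulses: list of (start_s, length_s, watts). Returns peak temp in C."""
    T, peak = t_amb, t_amb
    for step in range(int(duration / dt)):
        t = step * dt
        # Total power is the sum of all pulses active at this instant.
        p = sum(w for (start, length, w) in pulses if start <= t < start + length)
        T += dt * (p - (T - t_amb) / R) / C   # explicit-Euler update
        peak = max(peak, T)
    return peak

peak_together = peak_temperature([(0.0, 0.1, 1.0), (0.0, 0.1, 1.0)])
peak_staggered = peak_temperature([(0.0, 0.1, 1.0), (0.3, 0.1, 1.0)])
# Staggering lets the first pulse's heat partially dissipate before the
# second pulse arrives, so the staggered peak is lower.
```

This is the behavior Fig. 4 illustrates at the die level: staggered powering spreads the same energy over time so the instantaneous peak stays lower.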

REFERENCES
1. To learn more about thermal modeling and other automotive semiconductor devices, visit www.ti.com/automotive-ca.
2. Download an application note for PowerPAD at www.ti.com/powerpad_slma002d-ca.

SANDRA HORTON, analog packaging group at Texas Instruments, holds a BSME from Texas A&M University, College Station, Texas.

4. Thermal response can vary with time across the surface of a semiconductor device. In this case, regions across the die have been powered in a staggered fashion. (Plot: die-junction, board, and ambient temperatures, 85 to 130°C, over 0 to 0.5 seconds.)