CELEBRATING 20 YEARS
VISION AND AUTOMATION SOLUTIONS
FOR ENGINEERS AND INTEGRATORS WORLDWIDE
www.vision-systems.com
DECEMBER 2016
Embedded Vision
Tight integration improves
performance
Industrial camera survey: Technology, interface and use trends revealed
High-speed imaging: Vision inspects hot rolled steel products
CMOS cameras: CoaXPress boosts data throughput
www.vision-systems.com | VISION SYSTEMS DESIGN | December 2016 | 1
features | departments
www.vision-systems.com
• Complete Archives
• Industry News
• Buyers Guide
• Webcasts
• White Papers
• Feedback Forum
• Free e-newsletter
• Video Library
A PENNWELL PUBLICATION
Vision and Automation Solutions for
Engineers and Integrators Worldwide
Cover Story
13 INTEGRATION INSIGHTS
Leveraging embedded vision system
performance for more than just vision
Consolidated visual inspection, motion control, I/O,
and HMI simplifies design, improves performance.
Brandon Treece
17 PRODUCT FOCUS
CMOS cameras leverage the power of CoaXPress
Employing the CoaXPress interface standard allows
vendors to increase the data throughput of their
camera systems.
Andrew Wilson
22 MARKET SURVEY
Industrial camera technologies, interfaces and
applications
In conjunction with Framos, Vision Systems Design
has conducted a survey of camera manufacturers and
systems integrators.
Ute Häussler
25 INDUSTRY SOLUTIONS PROFILE
High-speed inspection system finds defects
in steel
Vision inspects the surfaces of hot rolled steel long
products as if they were cold, even though the
inspection takes place at a temperature of over 1,000°C.
Antonio Cruz-Lopez, Alberto Lago, Roberto Gonzalez,
Aitor Alvarez and José Angel Gutiérrez Olabarria
3 My View
4 Celebrating 20 years
Read the latest news from our website
6 Snapshots
9 Technology Trends
CAMERA DESIGN
Image sensor technology enables very low-light imaging
PACKAGING AND PRODUCTION
Vision system simplifies blister pack inspection
HIGH-SPEED IMAGING
Vision system upgrade improves printing plate alignment
32 Ad Index/Sales Offices
December 2016
VOL. 21 | NO. 11
Combining the vision and motion systems, I/O,
and HMI into a single controller helped Master
Machinery improve performance tenfold. 13.
www.vieworks.com
• 3k / 4k / 6k TDI line scan camera with highest sensitivity & resolution
• Max. 250kHz line rate
• Bidirectional operations with up to 256 stages
• Fast transfer speed with low power consumption by the newest hybrid (CCD in CMOS) TDI sensor
• CCD pixel array with high image quality and noiseless charge transfer & accumulation
• Ease of use and installation provided by GenICam Interface & easy calibration functionality
• User-friendly features to improve the system: trigger analysis, encoder noise reduction & rescaler
• Exposure control using optional strobe controller for applications with variable speed
• CoaXPress or Camera Link Interface
For more information, email us at [email protected]
Hybrid Structure & Highest Throughput
My View
Alan Bergstein: Group Publisher
(603) 891-9447 [email protected]
John Lewis: Editor-in-Chief (603) 891-9130
James Carroll Jr.: Senior Web Editor (603) 891-9320
Andrew Wilson: Contributing Editor +44 7462 476477
Kelli Mylchreest: Art Director
Mari Rodriguez: Production Director
Dan Rodd: Senior Illustrator
Debbie Bouley: Audience Development Manager
Marcella Hanson: Ad Services Manager
Joni Montemagno: Marketing Manager
www.pennwell.com
EDITORIAL OFFICES:
Vision Systems Design
61 Spit Brook Road, Suite 401, Nashua, NH 03060
Tel: (603) 891-0123 Fax: (603) 891-9328
www.vision-systems.com
CORPORATE OFFICERS:
Robert F. Biolchini: Chairman
Frank T. Lauinger: Vice Chairman
Mark C. Wilmoth: President and Chief Executive Officer
Jayne A. Gilsinger: Executive Vice President, Corporate Development and Strategy
Brian Conway: Senior Vice President, Finance and Chief Financial Officer
TECHNOLOGY GROUP:
Christine A. Shaw: Senior Vice President and Group Publishing Director
FOR SUBSCRIPTION INQUIRIES
Tel: (847) 559-7330;
Fax: (847) 763-9607;
e-mail: [email protected]
web: www.vsd-subscribe.com
Vision Systems Design® (ISSN 1089-3709), Volume 21, No. 11. Vision Systems Design is published 11 times a year in January, February, March, April, May, June, July/August, September, October, November, December by PennWell® Corporation, 1421 S. Sheridan, Tulsa, OK 74112. Periodicals postage paid at Tulsa, OK 74112 and at additional mailing offices. SUBSCRIPTION PRICES: USA $120 1 yr., $180 2 yr., $234 3 yr.; Canada $138 1 yr., $207 2 yr., $270 3 yr.; International $150 1 yr., $225 2 yr., $295 3 yr. POSTMASTER: Send address corrections to Vision Systems Design, P.O. Box 3425, Northbrook, IL 60065-3425. Vision Systems Design is a registered trademark. © PennWell Corporation 2016. All rights reserved. Reproduction in whole or in part without permission is prohibited. Permission, however, is granted for employees of corporations licensed under the Annual Authorization Service offered by the Copyright Clearance Center Inc. (CCC), 222 Rosewood Drive, Danvers, Mass. 01923, or by calling CCC’s Customer Relations Department at 978-750-8400 prior to copying. We make portions of our subscriber list available to carefully screened companies that offer products and services that may be important for your work. If you do not want to receive those offers and/or information via direct mail, please let us know by contacting us at List Services Vision Systems Design, 61 Spit Brook Road, Suite 401, Nashua, NH 03060. Printed in the USA. GST No. 126813153. Publications Mail Agreement no. 1421727.
John Lewis, EDITOR-IN-CHIEF
WWW.VISION-SYSTEMS.COM
Vision 2016: On A Roll
Last month, many of our readers helped fuel record attendance at VISION
2016, the world’s largest machine vision tradeshow. Just shy of 10,000 attendees
from 58 countries visited Stuttgart, Germany during the three-day event
organized by Landesmesse Stuttgart GmbH and held from November 8-10.
As the official media partner of Messe Stuttgart in sponsoring this event,
Vision Systems Design had all hands on deck. The whole team spent three exhilarating days
canvassing the show floor to see the latest and greatest imaging products and demonstrations
showcased by 441 exhibitors from 28 countries.
We met with many industry veterans specializing in machine vision lighting, image sensors,
industrial cameras, frame grabbers, and software, and reported on a long list of product innovations
from around the world that were introduced at the show. For those interested or unable to attend,
you can view an online slideshow of these innovations at: http://bit.ly/2gHvNYa.
Senior Web Editor James Carroll also posted the following interesting recaps of his experiences
each day at VISION 2016, which are available for viewing online:
• Embedded vision, hyperspectral and multispectral imaging: http://bit.ly/2fmscc7
• Imaging beyond the ordinary: http://bit.ly/2g0eVaL
• North American and European machine vision outlook: http://bit.ly/2ghr4ZM
A number of organizations presented market updates at VISION 2016. In fact, you can view
the full North American vision market update presentation that AIA Director of Market Analysis
Alex Shikany gave here: http://bit.ly/2fOXZGB. On page 22 of this issue we have a summary of
the annual Framos industrial camera market survey results, which were presented live in Stutt-
gart and focused on camera technologies, interfaces and applications.
Embedded vision was another popular topic at the show. Ten of the 90-odd presentations
given at VISION 2016 covered embedded vision. We continue our embedded vision coverage in
this issue on page 13, where the cover story highlights how designers can leverage embedded design
by incorporating their main machine controller, machine vision, motion system, I/O, and HMI
all into a single controller.
A number of high-speed imaging systems, along with systems based on novel 3D and hyperspectral
imaging techniques, were on display at the show. Many of these products and technologies will be
reported on in upcoming issues. In the meantime, check out contributing editor Andy Wilson’s
roundup of CMOS cameras employing the CoaXPress interface standard on page 17.
Market opportunities are growing, and Vision Systems Design will serve its audience of engi-
neers and system integrators to help create leading-edge applications. As the new Editor in Chief,
I look forward to building our worldwide coverage of traditional and emerging application areas
and continuing to grow the magazine, digital media and brand through 2017 and beyond.
CELEBRATING 20 YEARS
Future architectures for CMOS image sensors
Twenty sixteen was yet another outstand-
ing year to be a participant and spectator in
the CMOS image sensor (CIS) marketplace
– with significant advancements in devices
and architectures. Viability of stacked CIS in
high-volume applications has been
proven, and is expected to dominate
CIS advances for the next 3-5 years.
Stacking delivers a smaller foot-
print for the CIS and processor
combo and overall lower power
consumption due to more efficient partition-
ing of processing functions between the two
die. A smaller footprint has advantages in
many markets, and is absolutely critical in
applications such as endoscopy. In addition,
the CIS die can be fabricated in a simplified
process that’s optimized solely around the
pixel performance and yield, resulting in
significantly lower noise, defect densities,
and non-uniformities. An ultimate goal is to
create more advanced pixels by local high
density interconnects. This allows the pixel
to be split between the upper and lower die
– for example, to allow global shutter pixels
with excellent parasitic light (in)sensitivity.
Stacking still faces challenges starting with
higher non-recurring engineering (NRE)
costs for the following: a) the cost of the stack-
ing process itself, b) costs of designing two die
with the associated complications of co-sim-
ulation and co-verification of designs that
might use different fabrication processes, c)
the need for two mask sets, and d) the added
complexities during characterization, qualifi-
cation, and production.
In certain applications, stacking can also
increase the combined sensor/processor price
over non-stacked approaches. For example, in
using wafer-level stacking, the final yield of
the bonded pair is a product of the yield of
each die. Also the two stacked die typically
need to be the same physical size. This can
be a problem for very large sensors (full frame
35mm, APS-C) and very cost-sensitive
applications (e.g., IoT), where it can be difficult to
find an economic balance in die area. Again,
we expect that these problems will find solu-
tions with advances in device architectures
and smart choices in process selection.
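The yield penalty described above is easy to quantify. A minimal sketch (ours, not Forza Silicon’s; the yield figures are hypothetical) of why wafer-level stacking compounds per-die yields:

```python
# With wafer-level stacking, a bonded pair is good only if BOTH dice are
# good, so the combined yield is the product of the per-die yields.
# The 90% figures below are hypothetical, for illustration only.

def stacked_yield(sensor_yield: float, logic_yield: float) -> float:
    """Yield of a wafer-to-wafer bonded sensor/logic die pair."""
    return sensor_yield * logic_yield

combined = stacked_yield(0.90, 0.90)
print(f"Combined yield: {combined:.0%}")  # 81% -- worse than either die alone
```

The same arithmetic shows why very large, low-yielding dice make stacking particularly costly.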
A big part of our work at Forza Silicon is
focused on the professional/prosumer camera
markets. While there has been a great deal of
activity in high-resolution video (4K and
above), the once-humble cell phone camera
has stolen the limelight from the DSLR/digi-
tal still camera. But expect a strong response
from the “still” camera market, with decidedly
un-still features such as high-speed 4K video and higher
dynamic range – with performance that
might also challenge the incumbents in the
digital cinema and broadcast markets. An
exciting new trend may have started with the
announcement of a medium-format camera
aimed at non-professional photography
enthusiasts. Ultimately the pricing of this
camera will determine if we will see a signifi-
cant technology push for extra-large format
sensors in the near future.
We also continue to see a lot of activity in
unconventional imaging applications. Var-
ious markets, including automotive and aug-
mented reality, are driving the need for 3D
image sensors, which have been addressed by
Time-of-Flight sensors. However, these con-
tinue to struggle with ambient light and the
need for an injection of significant near-infra-
red light power into the scene. These prob-
lems, and their cost impact, must be overcome
before this can become a
significant market.
Barmak Mansoorian, President & Co-Founder, Forza Silicon
VISION.RIGHT.NOW.
Innovative products, intelligent consulting and
extensive service. Solve your imaging projects with
speed and ease by working with STEMMER IMAGING,
your secure partner for success.
Share our passion for vision.
www.stemmer-imaging.com
online @ www.vision-systems.com
MVTec Q&A: Machine vision software, future trends
Dr. Olaf Munkelt,
Managing Director at
MVTec Software GmbH
discusses such topics
as machine vision software (including
HALCON 13 and MERLIC 2), future
industry trends, and application areas in
which growth could be on the horizon.
http://bit.ly/2daqI5u
Identifying trends and looking toward the future of vision
This article takes a look
back at what we learned
at the North American
vision show, The Vision
Show 2016, which was held this year
from May 3-5.
http://bit.ly/2eBSVGv
ON Semiconductor officially acquires Fairchild Semiconductor
After having
initially announced
in late 2015 that it
would acquire Fairchild
Semiconductor, ON Semiconductor has
officially completed its $2.4 billion cash
acquisition of the San Jose, CA, USA-
based company.
http://bit.ly/2cCGglb
Vision and robots team up for wine production
Robots and robotic-based
vision systems are slowly
replacing the tedious,
time-consuming tasks
involved in wine production.
http://bit.ly/2dxhwYh
OSIRIS-REx spacecraft features multi-camera vision system
Launched on September
8, NASA’s OSIRIS-REx
spacecraft, which is on a
mission to study asteroid
101955 Bennu, features three high-
resolution cameras that are based
on CCD detectors.
http://bit.ly/2dl2g3K
FLIR Systems acquires Point Grey
FLIR Systems, Inc. has
announced that it has
reached a definitive asset
purchase agreement
to acquire the business of Point Grey
Research, Inc. for approximately $253
million in cash. http://bit.ly/2fa5PqT
snapshots: Short takes on the leading edge
Researchers use high speed camera to study lightning
A Florida Institute of Technology (Melbourne,
FL, USA; www.fit.edu) team deployed a high-
speed camera to study atmospheric events.
As part of a grant funded by the National Science
Foundation, Dr. Ningyu Liu, an Associate
Professor, wrote the grant proposal to learn
more about lightning, and shorter duration
high-altitude discharges called jets and sprites.
Liu, along with Dr. Hamid Rassoul, and a
group of Ph.D. students in the Department of
Physics and Space Sciences used a Phantom
v1210 digital ultra-high-speed camera from
Vision Research (Wayne, NJ, USA; www.
phantomhighspeed.com). Featuring a 1280 x
800 CMOS image sensor with a 28 µm pixel
size and 12-bit depth, the camera achieves
speeds of 12,000 fps at full resolution, and up
to 820,000 fps at reduced resolution. The ther-
moelectrically and heat pipe-cooled camera
features a GigE interface, 10Gb Ethernet,
direct recording to CineMag, and “quiet fans”
for vibration sensitive
applications.
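The frame rates quoted above imply a substantial pixel throughput. A quick back-of-the-envelope check (our arithmetic, using the full-resolution figures in the article):

```python
# Pixel throughput of a 1280 x 800 sensor at the quoted 12,000 fps
# full-resolution rate (figures from the article; arithmetic is ours).
width, height = 1280, 800
fps_full = 12_000

pix_per_sec = width * height * fps_full
print(f"Full-resolution throughput: {pix_per_sec / 1e9:.1f} GPix/s")
# ~12.3 GPix/s at full resolution; reduced-resolution modes push the
# aggregate rate higher still.
```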
Lightning strikes are
recorded from inside
and on top of buildings
on the Florida univer-
sity’s campus using the
highest frame rate possible
that allows the team
to account for the large
spatial extent of lightning,
all while recording
at up to 22 GPixels/s,
according to Julia Tilles, a Ph.D. student member
of the team.
“We’re limited to roughly 100,000 fps be-
cause moving to a higher
Hack highlights vulnerability in connected devices
A recent hacking incident involving as many
as one million Chinese security cameras and
digital video recorders highlights the fact that
internet-connected cameras—without proper
safeguarding—face the potential of being
compromised.
Attackers used Chinese-made, consumer
security cameras, digital video recorders,
and other devices to generate webpage re-
quests and data that knocked various targets
offline, according to a Wall Street Journal
article. Among the various applications and
markets within the purview of Vision Sys-
tems Design coverage, the
vulnerability of security
and surveillance cameras
comes to mind.
Tim Matthews, vice pres-
ident of marketing for the In-
capsula product line at Imperva
(Redwood Shores, CA, USA; www.imperva.
com)—a company that specializes in web se-
curity and mitigating DDoS attacks—notes
that last year, his company revealed major
vulnerabilities in CCTV cameras as a result
of not taking the proper steps to protect
against threats.
“Last year, the Imperva research revealed
that CCTV cameras in popular destinations,
like shopping malls, were being turned into
botnets by cybercriminals, as a result of
camera operators taking a lax approach to
security and failing to change default pass-
words on the devices,” he said. “CCTV cam-
eras are among the most common Internet-
of-Things (IoT) devices and Imperva first
warned about CCTV botnets in March 2014
when it became aware of
continued on page 8
continued on page 7
snapshots
Ford to launch fully autonomous vehicle by 2021
With an eye toward launching a fully auton-
omous vehicle for ride sharing by 2021, Ford
Motor Company (Dearborn, MI, USA; www.
ford.com) has made a number of key invest-
ments in tech compa-
nies and has doubled its
team in Silicon Valley.
The company’s
intent is to have a high-
volume, fully autono-
mous Society of Au-
tomotive Engineers
level 4-capable vehicle
in commercial opera-
tion in 2021 in a ride-
hailing or ride-sharing
service. To achieve its
goals, Ford has made a
number of key investments in tech companies,
expanding its advanced algorithm, 3D map-
ping, LIDAR, and sensor capabilities. These
investments include:
Velodyne (Morgan Hill, CA, USA; www.ve-
lodynelidar.com): As previously covered on our
site (http://bit.ly/2efKyRr), Ford has invested in
Velodyne, the leader in LIDAR sensors, with
an eye on quickly mass producing a more af-
fordable automotive LIDAR sensor.
SAIPS (Tel Aviv, Israel; www.saips.co.il):
Ford has acquired the Israel-based computer
vision and machine learning company to fur-
ther strengthen its artificial intelligence and
computer vision capabilities. SAIPS develops
algorithms for image and video processing,
deep learning, signal processing, and classifi-
cation, which Ford hopes will help its auton-
omous vehicles to learn and adapt to the sur-
roundings of their environment.
Nirenberg Neuroscience LLC (New York,
NY, USA; www.nirenbergneuroscience.com):
Ford announced an exclusive licensing agree-
ment with Nirenberg Neuroscience, a ma-
chine vision company founded by neurosci-
entist Dr. Sheila Nirenberg, who cracked the
neural code the eye uses to transmit visual in-
formation to the brain. Nirenberg Neurosci-
ence has a machine vision platform for per-
forming navigation, object recognition, facial
recognition and other functions.
Civil Maps (Albany, CA, USA; www.civil-
maps.com): Ford has invested in Civil Maps,
a company that has developed a scalable 3D
mapping technique, which provides Ford with
another way to develop high-resolution 3D
maps of autonomous vehicle environments.
Ford has also added two new buildings and
150,000 square feet of work and lab space ad-
jacent to its current Research and Innovation
Center in Silicon Valley, creating a dedicated,
expanded campus in Palo Alto, with plans to
double the size of the Palo Alto team by the
end of 2017.
“The next decade will be defined by auto-
mation of the automobile, and we see auton-
omous vehicles as having as significant an
impact on society as Ford’s moving assembly
line did 100 years ago,” said Mark Fields, Ford
president and CEO. “We’re dedicated to put-
ting on the road an autonomous vehicle that
can improve safety and solve social and envi-
ronmental challenges for millions of people –
not just those who can afford luxury vehicles.”
In 2016, Ford will triple its autonomous vehi-
cle test fleet to be the largest test fleet of any au-
tomaker, bringing the number to about 30 self-
driving Fusion Hybrid sedans on the roads in
California, Arizona and Michigan, with plans to
triple it again next year.
a steep 240% increase in botnet activity on
its network, much of it traced back to com-
promised CCTV cameras.”
He continued, “As we now know, these
attacks are happening more often, and mil-
lions of CCTV cameras have already been
compromised. Whether it be a router, a
Wi-Fi access point or a CCTV camera,
default factory credentials are only there to
be changed upon installation. Imperva rec-
ommends following this security protocol
of changing default passwords on devices.”
Tim Erlin, senior director of IT secu-
rity and risk strategy at cyber security com-
pany Tripwire (Portland, OR, USA; www.
tripwire.com) echoes this sentiment, and
notes that in order to use network-con-
nected cameras, regardless of the applica-
tion, companies should be taking precau-
tionary measures.
“The use of network connected cameras
in a recent large scale Distributed Denial
of Service (DDoS) attack is a clear example
of how a seemingly innocuous connected
device might be used for malicious pur-
poses,” he said. “Security researchers have
been demonstrating attacks against IP cam-
eras for a long time.”
“Preventing attacks against connected
devices,” he added, “requires effort from
both the industry and users. Vendors need
to adhere to best practices for built-in se-
curity measures, including secure remote
access, basic encryption, and patching
known vulnerabilities. These systems
can’t be deployed without consideration
for future security updates, ideally auto-
mated updates.”
Consumers should also be mindful of
potential threats. Deploy systems with se-
curity in mind. Change default credentials
for access. Put adequate access control in
place because attackers will find open and
accessible systems if they’re available.
Most major companies and organizations
likely go to great lengths to protect
themselves against such an attack. But
for those that do not, these examples serve
as a lesson that being proactive can pay off
in the long run.
continued from page 6
continued on page 8
snapshots
Security cameras embed deep neural network processing
Embedded vision company Movidius (San
Mateo, CA, USA; www.movidius.com) has
announced a partnership with the world’s
largest IP security camera provider, Hikvi-
sion (Zhejiang, China; www.hikvision.
com), to bring deep neural network tech-
nology to the company’s cameras to perform
much higher accuracy video ana-
lytics locally.
As part of the deal, Hikvision’s cameras
will be powered by the Movidius Myriad 2
vision processing unit (VPU). Myriad 2 fea-
tures a configuration of 12 programmable
vector cores, which allows users to imple-
ment custom algorithms. The VPU offers
TeraFLOPS (trillions of floating point oper-
ations per second) of performance within a
1 Watt power envelope. It features a built-in
image signal processor and hardware accel-
erators, and offloads all vision-related tasks
from a device’s CPU and GPU.
Traditionally, notes Movidius, running
deep neural networks requires devices to
depend on additional compute in the cloud,
but the Myriad 2 VPU is a low-power device
that enables the running of advanced algo-
rithms inside the cameras themselves. This
includes such tasks as car model classifica-
tion, intruder detection, suspicious baggage
alert, and seatbelt detection.
“Advances in artificial intelligence are rev-
olutionizing the way we think about personal
and public security,” says Movidius CEO
Remi El-Ouazzane. “The ability to automati-
cally process video in real-time to detect anom-
alies will have a large impact on the way cities
infrastructure are being used. We’re delighted
to partner with Hikvision to deploy smarter
camera networks and contribute to creating
safer communities, better transit hubs and
more efficient business operations.”
By utilizing deep neural net-
works and stereo 3D sensing, Hikvision has
been able to achieve up to 99% accuracy in
their advanced visual analytics applications,
including those mentioned above.
“There are huge amounts of gains to be
made when it comes to neural networks and
intelligent camera systems,” says Hikvision
CEO Hu Yangzhong. “With the Myriad 2
VPU we’re able to make our analytics offer-
ings much more accurate, flagging more
events that require a response, while reducing
false alarms. Embedded, native intelligence is
a major step towards smart, safe and efficiently
run cities. We will build a long term partner-
ship with Movidius and its VPU roadmap.”
In September, Intel announced plans to
acquire Movidius, with the deal expected
to close this year. Movidius has also collabo-
rated with DJI (http://bit.ly/2f9NU7G, Shen-
zhen, China; www.dji.com), FLIR (http://bit.
ly/2eyAPE2, Wilsonville, OR, USA; www.flir.
com) Google (http://bit.ly/2f1tOgx, Moun-
tain View, CA, USA; www.google.com) and
Lenovo (http://bit.ly/2eEOHdZ, Morrisville,
NC, USA; www.lenovo.com), among others.
continued from page 6
frame rate would make our field of view just too
small. At higher frame rates and lower resolution,
a lightning channel comes into and out of the
frame so fast that we just wouldn’t get a lot of in-
formation and would have a much lower chance
of capturing something in the field of view,” she
said. “The camera’s maximum FPS can be as
high as 570,000 fps, but pushing the camera to
perform at that rate doesn’t give us a good time-
resolution to spatial-resolution trade-off. Still,
we are experimenting with shooting at slightly
higher frame rates.”
Data captured by the camera enables the
team to examine electric field measurements
and deduce the corresponding orientation of the
channel and the direction of the current.
“The v1210 is an incredibly sophisticated
camera. When we shoot between 7,000 fps to
12,000 fps, we’re able to see some of the finer
details of a lightning flash, such as branching
and leader propagation. This resolution is high
enough for us to see many elusive processes
taking place below the cloud, and it gives us a
nice, full picture. We can also use other data sets,
such as the National Charge-Moment Change
Network (CMCN), to quantify charge moved
during a lightning strike to ground,” said Tilles.
Along with the Vision Research camera, the team
uses technology such as LMA data, NEXRAD
radar data, X-ray data, electric field data, charge-
moment-change data, and NLDN data to further
evaluate the videos they capture.
continued from page 7
When it comes to the race to get fully autonomous
vehicles on the road, the field is certainly
becoming increasingly interesting with strategic
moves such as this one. But Ford is not
alone in its pursuit, as many other companies,
including Google (Mountain View, CA, USA;
www.google.com), Uber (San Francisco, CA,
USA; www.uber.com), Tesla (Palo Alto, CA, USA;
www.tesla.com), BMW (Munich, Germany;
www.bmw.com), Intel (Santa Clara, CA, USA;
www.intel.com), Nissan (Yokohama, Japan; www.
nissan.com), NASA (Washington, D.C.; www.
nasa.gov), are all working toward the same goal.
With so much focus being put on the technology,
the roads could be filled
with driverless cars sooner than some thought.
technology trends | John Lewis, Editor, [email protected]
• PACKAGING AND PRODUCTION
Vision system simplifies blister pack inspection
Used for small tablets/capsules or other consumer goods, blister packs are a type of pre-
formed plastic packaging that have two primary components: a cavity made from either
some form of plastic or aluminum, and the lidding made from paper, plastic, aluminum or
a lamination of soft foil and other
substances. The cavity contains
the product and the lidding seals
the product in the package.
During filling, products are
first fed properly to the pre-
formed cavities, then lidding
material gets sealed onto the
support material. Even though
every item may be identified
and inspected prior to packag-
ing, the risk of product damage
or a mishap during the blister fill-
ing process remains.
Needless to say, when it comes
HIGH-SPEED IMAGING
Vision system upgrade
improves printing plate
alignment
Designed for fast, low-cost delivery of
high-end printed graphics, flexographic
presses are giant printing machines
that can print on a variety of materi-
als including paper, film and metal foil.
This type of printing process uses flex-
ible relief plates made of thin polymer
sheets that have been coated with a
photo-sensitive surface.
After being laser engraved with relief
images of the digital artwork, these flex-
ible printing plates get wrapped around
a print cylinder and installed into the
press. Each desired color on the printed
material requires its own plate. Depend-
ing on the print job, there may be up
to ten cylinders, one for each shade of
color necessary.
During operation, the plates transfer
the ink and print the artwork onto the
passing web material. For clear, high-
quality printing, the relief image on each
plate must be aligned and properly regis-
tered with other images, print plates and
print cylinders.
Because manual plate alignment is
time consuming and error prone, the
plate mounting process has become
increasingly automated in recent years,
and machine vision systems play a key
role in reducing the time required for
pre-press preparations and improving
print quality.
To guarantee
• CAMERA DESIGN
Image sensor technology enables very low light imaging
What do detecting a fluorescent marker
viewed under a microscope, an image of the
retina captured with an ophthalmic fundus
camera, or a surveillance camera operating on
a cloudless, moonless night have in common?
All require image sensor technologies that
enable very low light imaging, 30 fps image
capture, and illuminations down to 0.1 lux.
Historically, Electron Multiplication
Charge Couple Device (EMCCD) tech-
nology has been successful in enabling the
capture of scenes with very low light levels.
This technology takes the very small charge
detected in a pixel under low light and mul-
tiplies it many times before reaching the sen-
sor’s amplifier. While this technology excels
– even down to the detection of single pho-
tons – the electron multiplication cascade
can overflow and create blooming artifacts
if signal levels entering
continued on page 11
continued on page 10
continued on page 10
Figure 3: SPAN Inspection System Pvt. Ltd devel-
oped a machine vision-based blister inspection system
called Blisbeat.
1612VSD_9 9 11/30/16 11:48 AM
[Figure 1 diagram: an IT-EMCCD pixel array feeds a non-destructive sensing node; camera control drives a pixel-level switch that routes bright pixels to the standard (HCCD) output and dark pixels to the EMCCD output, which are merged into the final output.]
Technology Trends
the EMCCD register are too high, limiting
use of sensors with this technology to scenes
that do not contain any bright components.
Interline Transfer EMCCD technology
addresses these limitations directly by com-
bining the low light sensitivity available from
an electron multiplication output register
with the image uniformity, resolution scal-
ing, and electronic global shutter capabilities
of Interline Transfer CCD. This combina-
tion enables the development of image sen-
sors that can capture continuously from very
low light to bright light in designs that can
range to multiple megapixels in resolution.
“Key to the performance of this technol-
ogy is an Intra-Scene Switchable Gain fea-
ture, which avoids overflow in the EMCCD
output register under bright illumination con-
ditions by selectively multiplying only those
portions of the scene that require it,” explains
Michael DeLuca, Go to Market Manager,
Industrial and Security Division, Image
Sensor Group, ON Semiconductor (Phoe-
nix, AZ, USA; www.onsemi.com).
The output design shown (Figure 1) illus-
trates how the charge from each pixel passes
through a non-destructive sensing node that
can be read by the camera control electron-
ics to provide an initial measurement of the
signal level for each pixel. This information is
used to drive a switch in the sensor that routes
charge packets to one of two outputs based on
a camera-selected threshold.
Pixels with high charge levels (correspond-
ing to bright parts of the image) are routed
to a standard CCD output for conversion to
voltage, while pixels with low charge levels
(corresponding to dark parts of the image) are
routed to the EMCCD output for additional
amplification before conversion to voltage.
These two datasets are then merged to
generate the final image. Because the charge
Figure 1: Intra-scene Switchable Gain Output.
to product safety, package integrity and
compliance, pharmaceutical companies
can ill-afford to take such risks. After all,
incorrectly packaged or damaged prod-
ucts may result in expensive manufac-
turer recalls, potentially fatal accidents
and damage to brand reputation.
That’s why blister packaging needs
to be carefully inspected after primary
package filling and prior to second-
ary packaging. It’s not enough to just
detect the presence of a product and
identify any empty blister cavities.
Rather, today’s systems must be capa-
ble of detecting various defects such as
broken product, color variations, shape
and size variations, color spots on prod-
uct, and foreign products contained in
the blisters.
The challenge is that the increasing
demand for continuous product and
package innovation, especially in the
pharmaceutical industry, drives con-
stant improvements that hinder the
success of traditional automated visual
techniques. Not only must automated
blister inspection systems be versa-
tile enough to adapt to various blister
packaging materials and equipment,
but they must also be easy for packag-
ing equipment operators and the people
tending the machinery to alter recipes
for inspection of various products rang-
ing from tablets and capsules to dra-
gées, ampoules, and applicators.
To resolve these issues, engineers,
vision system designers and imaging
experts at SPAN Inspection System
Pvt. Ltd (Ahmedabad, Gujarat, India;
www.spansystems.in) have developed
a machine vision-based blister pack
inspection system that is dubbed Blis-
beat. The system reportedly eliminates
user dependency with automated teach
technology that the company claims
simplifies the set up and configuration
process required for product change-
Figure 2: A scene with both bright and very
dark components, imaged by a standard
IT-CCD (a), a standard EMCCD (b), and an
Interline Transfer EMCCD device (c).
continued from page 9
continued on page 12
form stability, running accuracy
and productivity without
machine downtime, automated
flexoprint mounting machines
generally provide an integrated
vision system that monitors,
manages and adjusts the
registered alignment of the
plate mounting based on reference
points, such as microdots or
crosses on flexographic printing
cylinders, to ensure quality and
high precision during the printing
process.
When a leading Italian man-
ufacturer of flexographic print-
ing and mounting machines
wanted to modernize the vision
systems used in their equipment port-
folio, they turned to imaging special-
ist FRAMOS (Taufkirchen, Germany and
Ottawa, Canada; www.framos.com) to
design a new high-speed, high-resolution
imaging system. The new imaging systems
had to meet four main requirements to
increase the system performance, accord-
ing to Lorenzo Cassano, head of the Framos
camera business unit.
“First and most important was the need
for high speed and high resolution cam-
eras to meet the megapixel and frame rate
demands of the application, along with fast
digital interfaces for timely transmission of
the large amount of process data gener-
ated,” Cassano explains. “Second was the
need to monitor a large field of interest
from multiple points of view and to have
full control to work with high-quality optical
zooming from various distances. Next, the
solution had to be flexible in terms of adjust-
ing focus and zoom factor and, finally, had
to be available at an affordable price.”
After defining requirements, applica-
tion parameters and conditions, it took the
FRAMOS team three weeks to arrive at the
final design solution (Figure
6). During the process, engineers
not only had to specify
the appropriate vision system
components, but also to build
and test custom prototypes
and individual adaptations.
The FCB-EV7500
block camera from
Sony Corp. (Tokyo,
Japan; http://pro.sony.
com/) was selected. Running
a fast 2.4 MPixel Sony Exmor
CMOS image sensor and a 30x
optical zoom auto-focus lens,
FRAMOS engineers determined
that it delivered the high image
quality needed in full HD and
offered a dynamic range suit-
able for the application.
The corresponding external iPORT SB-
GigE OEM frame grabber kit from Pleora
Technologies (Kanata, ON, Canada; www.
pleora.com), essentially transforms the Sony
camera into a GigE Vision camera, Cassano
notes. The interface is able to control the
camera over a digital channel and to transmit
full-resolution video at the maximum
rate supported, with low, predictable latency,
over a standard GigE cable.
An M110 industrial PC from Tattile
(Brescia, Italy; www.tattile.com) can run
up to six cameras for multiple points of
view. Each camera is powered through
Power over Ethernet (PoE) on a fully
dedicated power channel, and the PC
provides camera image streaming and
full lens control.
The system transmits full-resolution
images at the maximum rate supported
by the block camera, supporting cable dis-
tances of up to 100 metres and featuring a
low, predictable latency, according to Cas-
sano. “This makes it possible to have com-
plete control of the zoom and focus of each
Sony block camera with only a single cable
connection.”
from pixels with high charge levels does not
enter the EMCCD register, this output archi-
tecture allows both very low light levels and
bright light levels to be detected while avoid-
ing the image artifacts associated with over-
flow of the EMCCD output register.
The power of this technology can be seen
in Figure 2, which shows image captures of a
single scene that includes both a bright light
as well as very dark shadows, where the dark-
est portion of the image is illuminated only
by moonlight or starlight.
A traditional image sensor (Figure 2a)
images the bright part of the image well, but
doesn’t have the sensitivity to “see” in the
very darkest part of the image. A traditional
EMCCD (Figure 2b) can be configured to
image in the very darkest part of the scene,
but when the gain is turned up to enable this
low light imaging, artifacts from the bright
part of the scene destroy the image integ-
rity. Interline Transfer EMCCD technology
(Figure 2c) allows the scene to be imaged
continuously from the brightest to the darkest
part of the image, where “dark” can extend all
the way down to illumination only by moon-
light or by starlight.
Having been moved forward from the
research labs to use in production devices,
Interline Transfer EMCCD technology is
being used today in a growing family of prod-
ucts. For example, ON Semiconductor’s
KAE-02150 image sensor uses Interline
Transfer EMCCD technology to enable low light
image capture at 1080p (1920 x 1080) resolution
while operating at 30 fps, making this
device well suited to security, surveillance,
and situational awareness applications that
require high sensitivity image capture with
video frame rates.
For higher resolution needs, the 8 MPixel
(2856 x 2856) KAE-08151 image sensor is
designed in a square aspect ratio with a 22
mm diagonal, aligning with the native opti-
cal format of many scientific microscopes
and other medical equipment. “By leveraging
the advances available with Interline Transfer
EMCCD technology,” DeLuca concludes,
“these devices are the first in a new class of
image sensors that achieve high levels of per-
formance under low lighting conditions.”
Figure 6: A fully
controllable digital
imaging system built
by FRAMOS helps
an Italian manufac-
turer of flexographic
mounting machines
achieve faster, more
precise mounting.
continued from page 9
overs (Figure 3).
“Traditional systems available in the
market today have a very complex teaching
process that is extremely difficult for oper-
ators to use. The result is an increase in
time required to make recipe changes for
new products in day to day use,” explains
Pranay Soni, R&D Lead, SPAN Inspec-
tion System Pvt. Ltd. “Our software makes
it extremely easy for operators to teach
new products. It typically takes less than
a minute, while inspection consistency
is maintained.”
The system uses either a Baumer (Radeberg,
Germany; www.baumer.com) VLG-23C
or a Basler (Ahrensburg, Germany;
www.baslerweb.com) acA1920-50gc color
camera for image acquisition (Figure 4).
An M111FM16 16mm focal length lens
from Tamron (Commack, NY, USA; http://
tamron-usa.com) provides extremely sharp
images and minimal optical distortion,
according to Soni, that enables Blisbeat
to consistently detect the smallest defects.
Images with a 350 x 200mm FOV,
containing up to 18 blisters, are processed in
parallel, achieving rates of up to 800 blisters
per minute. Soni notes that images are first
thresholded in HSV color space, followed
by basic blob analysis tools for calculating
area, length, width, and convexity. Other
advanced algorithms for shape and symmetry
are applied thereafter, followed by color
checking of product and foreign product
detection algorithms.
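A minimal sketch of that first stage, an HSV threshold followed by basic blob measurements, is shown below using only the Python standard library. The hue, saturation, and value limits are hypothetical, and a production system such as Blisbeat would use far richer, calibrated tooling:

```python
import colorsys

def hsv_mask(pixels, h_range, s_min, v_min):
    """Threshold an RGB image in HSV space.
    pixels: list of rows of (r, g, b) tuples, values 0..255.
    h_range: (low, high) hue as a fraction in 0..1 (illustrative)."""
    mask = []
    for row in pixels:
        mask_row = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            mask_row.append(int(h_range[0] <= h <= h_range[1]
                                and s >= s_min and v >= v_min))
        mask.append(mask_row)
    return mask

def blob_metrics(mask):
    """Basic blob measurements: area plus bounding-box length/width."""
    coords = [(x, y) for y, row in enumerate(mask)
              for x, px in enumerate(row) if px]
    if not coords:
        return {"area": 0, "length": 0, "width": 0}
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return {"area": len(coords),
            "length": max(xs) - min(xs) + 1,
            "width": max(ys) - min(ys) + 1}
```

For a red tablet, a hue window near zero isolates the product pixels, after which the blob measurements can be compared against taught size and shape limits.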
Most common pharmaceutical products
can be taught automatically, according to
Soni. Users simply select a few inputs, such
as the product type, the number of blisters,
and the type of base foil, and then
press a single button to activate the software,
which automatically finds and properly
segments all of the product colors and
cavities within the area of inspection (Figure
5). “Even if a user has to manually teach a
new product,” Soni says, “it’s really simple,
because segmentation of the product color
is done by using machine learning based
on intelligent classifiers.”
Blisbeat is equipped with a
multi-touch, fully IP65-rated
Core i7 industrial PC
from Beckhoff Automation
(Verl, Germany; www.beckhoff.
com) and a real-time Beckhoff
EtherCAT-based I/O module
for connectivity and communication
with the machine PLC.
“After the first two Blisbeat
installations, our customers
are very satisfied with the soft-
ware simplicity, consistent per-
formance and robust hardware,”
says Soni. “It’s also easy to inte-
grate because Blisbeat includes
a troubleshooting feature called
Live-Scopeview that enables
users to view input/output sig-
nals online to help identify
communication errors with the
machine controller or PLC.”
[Flowchart: On every machine cycle, the machine PLC transfers a hardware trigger to the camera for image acquisition and a signal to the vision system to start processing. The vision system acquires the image, processes it, displays results on the touch-screen display, and sends the accept/reject signal through the EtherCAT protocol to the I/O. The PLC performs various machine interlocks, maintains the accept/reject result queue, and supports I/O signal troubleshooting.]
Figure 5: A custom-built software user interface enables operators to visualize the position
and the specific nature of the blister pack defects.
Figure 4: Blisbeat utilizes automated teach technology to simplify adaptation for handling a variety of
package types and products.
continued from page 10
[Figure 1 diagram labels: intranet, machine controller, motion controller, smart drives, smart camera, subsystem controller, robot controller, local HMI, industrial sensors, condition monitoring, other machines/factory automation.]
Integration Insights
Leveraging embedded vision system
performance for more than just vision
Consolidated visual inspection, motion control, I/O,
and HMI simplifies design, improves performance.
Brandon Treece
Machine vision has long been used in indus-
trial automation systems to improve produc-
tion quality and throughput by replacing
manual inspection traditionally conducted
by humans. Ranging from pick and place and
object tracking to metrology, defect detec-
tion, and more, visual data is used to increase
the performance of the entire system by pro-
viding simple pass-fail information or closing
control loops.
The use of vision doesn’t stop with indus-
trial automation; we’ve all witnessed the mass
incorporation of cameras in our daily lives,
such as in computers, mobile devices, and
especially in automobiles. Just a few years
ago, backup cameras were introduced in
automobiles; now, vehicles ship with
numerous cameras that
provide drivers with a full
360° view of the vehicle.
But perhaps the biggest
technological advance-
ment in the area of machine
vision has been processing
power. With the perfor-
mance of processors dou-
bling every two years and
the continued focus on
parallel processing tech-
nologies such as multicore CPUs, GPUs, and
FPGAs, vision system designers can now apply
highly-sophisticated algorithms to visual data
and create more intelligent systems.
This increase in technology opens up
new opportunities beyond just more intelli-
gent or powerful algorithms. Let’s consider
the use case of adding vision to a manufac-
turing machine. These systems are tradition-
ally designed as a network
of intelligent subsystems
that form a collaborative
distributed system, which
allows for modular design
(Figure 1).
However, as system per-
formance increases, taking this hardware-
centric approach can be difficult because
these systems are often connected through
a mix of time-critical and non-time-critical
protocols. Connecting these different systems
together over various communication proto-
cols leads to bottlenecks in latency, determin-
ism, and throughput.
For example, if a designer is attempting to
Figure 1: Systems
designed as a network
of intelligent subsys-
tems that form a col-
laborative distributed
control system allow
for modular design,
but taking this hard-
ware-centric approach
can cause bottlenecks
in performance.
Brandon Treece, Senior
Product Marketing Manager,
National Instruments, Austin,
TX, USA (www.ni.com)
[Figure 2 diagram labels: centralized controller on the intranet, vision system, camera, motion system, smart drives, local HMI, subsystem controller, industrial sensors, condition monitoring, other machines.
Figure 3 diagram labels: analog input, analog output, digital input, digital output, FPGA, CPU; annotations: any sensor; any protocol; industrially rated; signal conditioning; cameras, drives, motors, actuators; signal processing; data reduction; co-processing; custom timing, triggering and synchronization; custom protocols; fast, deterministic, closed-loop control (MHz rates); safety, reliability; real-time analytics; math and analysis libraries; algorithms, decision making; data transfer mechanisms; network interface.]
develop an application with
this distributed architecture
where tight integration must
be maintained between the
vision and motion system,
such as is required in visual
servoing, major performance
challenges can be encoun-
tered that were once hidden
by the lack of processing capa-
bilities. Furthermore, because
each subsystem has its own
controller, there is actually a
decrease in processing effi-
ciency because no one system
needs the total processing per-
formance that exists across the
entire system.
Finally, because of this dis-
tributed, hardware-centric
approach, designers are forced to use dispa-
rate design tools for each subsystem—vision-
specific software for the vision system, motion-
specific software for the motion system, and so
on. This is especially challenging for smaller
design teams where a small team, or even a
single engineer, is responsible for many com-
ponents of the design.
Fortunately, there is a better way to design
these systems for advanced machines and
equipment—a way that simplifies complex-
ity, improves integration, reduces risk, and
decreases time to market. What if we shift our
thinking away from a hardware-centric view,
and toward a software-centric design approach
(Figure 2)? If we use programming tools that
provide the ability to use a single design tool
to implement different tasks, designers can
reflect the modularity of the mechanical
system in their software.
This allows designers to simplify the con-
trol system structure by consolidating differ-
ent automation tasks, including visual inspec-
tion, motion control, I/O, and HMIs within a
single powerful embedded system (Figure 3).
This eliminates the challenges of subsystem
communication because now all subsystems
are running in the same software stack on a
single controller. A high-performance embed-
ded vision system is a great candidate to serve
as this centralized controller because of the
performance capabilities already being built
into these devices.
Let’s examine some benefits of this central-
ized processing architecture. Take for exam-
ple a vision-guided motion application such
as flexible feeding where a vision system pro-
vides guidance to the motion system. Here,
parts exist in random positions and orienta-
tions. At the beginning of the task, the vision
system takes an image of the part to determine
its position and orientation, and provides this
information to the motion system.
The motion system then uses the coordi-
nates to move the actuator to the part and pick
it up. It can also use this information to cor-
rect part orientation before placing it. With
this implementation, designers can eliminate
any fixtures previously used to orient and posi-
tion the parts. This reduces costs and allows
the application to more easily adapt to new
Figure 2: A software-centric design approach allows designers to simplify their control system
structure by consolidating different automation tasks, including visual inspection, motion control,
I/O, and HMIs within a single powerful embedded system.
Figure 3: A heterogeneous architecture combining a processor with an FPGA and I/O is an ideal solution for not
only designing a high-performance vision system but also integrating motion control, HMIs, and I/O.
part designs with only software modification.
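The vision-to-motion handoff in such a flexible-feeding task can be sketched as below. The scale, origin, and command format are all hypothetical illustrations; a real cell would apply a full camera-to-robot calibration rather than a simple scale and offset:

```python
def image_to_world(u, v, mm_per_px, origin_mm):
    """Map a pixel coordinate to machine coordinates using a simple
    calibrated scale and offset (a real system would use a full
    homography obtained from camera calibration)."""
    return (origin_mm[0] + u * mm_per_px, origin_mm[1] + v * mm_per_px)

def pick_command(u, v, angle_deg, mm_per_px, origin_mm=(0.0, 0.0)):
    """Build the move the motion loop executes: travel to the part and
    pre-rotate the gripper so the part is placed in a known orientation."""
    x, y = image_to_world(u, v, mm_per_px, origin_mm)
    return {"x_mm": x, "y_mm": y, "rotate_deg": -angle_deg}
```

For instance, a part found at pixel (120, 80) and rotated 30° would, at 0.5 mm per pixel, produce a move to (60 mm, 40 mm) with a -30° gripper correction before placement.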
Nevertheless, a key advantage of the hard-
ware-centric architecture mentioned above is
its scalability, which is mainly due to the Eth-
ernet link between systems. But special atten-
tion must be given to the communication
across that link as well. As pointed out previ-
ously, the challenge with this approach is that
the Ethernet link is nondeterministic, and
bandwidth is limited.
For most vision-guided motion tasks where
guidance is given at the beginning of the task
only, this is acceptable, but there could be
other situations where the variation in latency
could be a challenge. Moving to a centralized
processing architecture for this design has a
number of advantages.
First, development complexity is reduced
because both the vision and the motion system
can be developed using the same software, and
the designer doesn’t need to be familiar with
multiple programming languages or environ-
ments. Second, the potential performance
bottleneck across the Ethernet networks is
removed because now data is being passed
between loops within a single application only,
rather than across a physical layer.
This leads to the entire system running
deterministically because everything shares
the same process. This is especially valuable
when bringing vision directly into the con-
trol loop, such as in visual servoing applica-
tions. Here, the vision system continuously
captures images of the actuator and the tar-
geted part during the move until the move is
complete. These captured images are used to
provide feedback on the success of the move.
With this feedback, designers can improve the
accuracy and precision of their existing auto-
mation without having to upgrade to high-per-
formance motion hardware.
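The shape of such a loop, with vision and motion exchanging data inside one process rather than across a network, can be sketched as follows. Here `read_error_px` and `move_axis` are stand-ins for the real camera and drive interfaces, and the gains and tolerances are illustrative:

```python
def visual_servo(read_error_px, move_axis, gain=0.5, tol_px=0.2,
                 max_iters=50):
    """Minimal visual-servoing loop as run on a centralized controller:
    every cycle the vision task measures the actuator-to-target error
    in the latest image and the motion task corrects a fraction of it."""
    for _ in range(max_iters):
        err = read_error_px()
        if abs(err) <= tol_px:
            return True        # target reached within tolerance
        move_axis(gain * err)  # proportional correction; same process,
                               # so no network hop between vision and motion
    return False
```

Simulating a one-axis move shows the loop converging on the target purely from image feedback, which is the mechanism that lets modest motion hardware achieve higher effective accuracy.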
This raises the question: what does this
system look like? If designers are going to use
a system capable of the necessary computation
and control needs of machine vision systems,
as well as the seamless connectivity to other
systems such as motion control, HMIs and
I/O, they need to use a hardware architecture
that provides the performance, as well as the
intelligence and control capabilities needed by
each of these systems.
A good option for this type of system is to
use a heterogeneous processing architecture
that combines a processor and FPGA with I/O.
There have been many industry investments
in this type of architecture, including the
Xilinx (San Jose, CA, USA; www.xilinx.com)
Zynq All-Programmable SoCs (which com-
bine an ARM processor with Xilinx 7-Series
FPGA fabric) and the multi-billion dollar
acquisition of Altera by Intel.
For vision systems specifically, using an
FPGA is especially beneficial because of its
inherent parallelism. Algorithms can be split
up to run thousands of different ways and
can remain completely independent. But
this architecture has benefits that go beyond
just vision—it also has numerous benefits for
motion control systems and I/O as well.
Processors and FPGAs can be used to per-
form advanced processing, computation, and
decision making. Designers can connect to
almost any sensor, actuator, or relay on any
bus through analog and digital I/O, and
through industrial or custom protocols.
This architecture also addresses other require-
ments such as timing and synchronization as
well as business challenges such as productiv-
ity. Everyone wants to develop faster, and this
architecture eliminates the need for having
large specialized design teams.
Unfortunately, although this architecture
offers a lot of performance and scalability, the
traditional approach of implementing it requires
specialized expertise, especially when it comes
to using the FPGA. This introduces significant
risk to designers and can make using the archi-
tecture impractical or even impossible. How-
ever, using integrated software, such as NI Lab-
VIEW, designers can increase productivity and
reduce risk by abstracting low-level complexity
and integrating all of the technology they need
into a single, unified development environment
unlike any other alternative.
Now it’s one thing to discuss theory, it’s
another to see that theory put into practice.
Master Machinery (Tucheng City, Taiwan;
Figure 4: Using a centralized, software-centric approach, Master Machinery incorporated their
main machine controller, machine vision and motion system, I/O, and HMI all into a single con-
troller yielding 10X the performance over their competition.
www.mmcorp.com.tw) builds semiconductor
processing machines (Figure 4). This particu-
lar machine uses a combination of machine
vision, motion control, and industrial I/O to
take chips off a silicon wafer and package
them. This is a perfect example of a machine
that could use a distributed architecture like
the one in Figure 1—each subsystem would
be developed separately, and then integrated
together through a network.
Average machines like this one in the indus-
try yield approximately 2,000 parts per hour.
Master Machinery, however, took a different
approach. They designed this machine with
a centralized, software-centric architecture
and incorporated their main machine con-
troller, machine vision and motion systems,
I/O, and HMI all into a single controller, all
programmed with LabVIEW. In addition to
achieving a cost savings from not needing indi-
vidual subsystems, they were able to see the
performance benefit of this approach as their
machine yields approximately 20,000 parts per
hour—10X that of the competition.
A key component to Master Machinery’s
success was the ability to combine numerous
subsystems in a single software stack, specifi-
cally the machine vision and motion control
system. Using this unified approach allowed
Master Machinery to simplify not only the
way they design machine vision systems but
also how they designed their entire system.
Machine vision is a complex task that
requires significant processing power. As
Moore’s law continues to add performance to
processing elements, such as CPUs, GPUs,
and FPGAs, designers can use these compo-
nents to develop highly-sophisticated algo-
rithms. Designers can also use this tech-
nology to increase performance of other
components in their design as well, especially
in the areas of motion control and I/O.
As all of these subsystems increase in per-
formance, the traditional distributed archi-
tecture used to develop those machines
gets stressed. Consolidating these tasks into
a single controller with a single software
environment removes bottlenecks from the
design process, so designers can focus on
their innovations and not worry about the
implementation.
Product Focus
CMOS cameras leverage the power of CoaXPress
Employing the CoaXPress interface standard allows vendors to increase the data
throughput of their camera systems.
Andrew Wilson, Contributing Editor
It has been seven years since the CoaXPress
(CXP) standard was introduced at VISION
2009 in Stuttgart. Since then, the standard
has become something of a de facto
camera-to-computer interface for high-performance
camera systems: by overtaking the
well-established Camera Link standard, CXP
has allowed vendors to increase the data
throughput of their CMOS-based camera systems.
To date, numerous camera and frame
grabber vendors have endorsed the interface
with a range of CMOS and CCD cameras,
frame grabbers, cables and connectors (see
"CoaXPress cameras and frame grabbers tackle
high-speed imaging," Vision Systems Design,
January 2014).
CXP Specifications
As an asymmetric point-to-point serial com-
munication standard, the CXP-6 version
features a high speed downlink of up to
6.25Gbps per cable and a 20Mbps uplink for
communications and control. Just as Camera
Link supported a Power over Camera Link
mode, CXP supports Power over Coax, while
also offering the systems developer
camera-to-computer coax cable connection
lengths of up to 100m.
While a single CXP-6 high-speed link deliv-
ers speeds of 6.25Gbps, the standard offers a
number of different bit rates ranging from
CXP-1 (1.25Gbps) to CXP-6 (6.25Gbps) with
maximum camera-to-computer distances
from 212m (CXP-1) to 68m
(CXP-6). These data rates and
cable distances have been tabu-
lated at http://bit.ly/2e7rMe7.
In the design of its VCC-VCXP3M
and VCC-VCXP3R VGA cameras, for example,
CIS Corp (Tokyo, Japan, www.ciscorp.
co.jp) offers a number of selectable frame
rates that include 269 fps (CXP-1), 538 fps
(CXP-2) and 536.7 fps (CXP-3). To increase
this data rate further, multiple links can be
used. Indeed, today, a number of manufactur-
ers have introduced both cameras and frame
grabbers that use four CXP-6 links providing
data rates of 25Gbps.
Aggregating several links can
double or quadruple the data rate, to 12.5Gbps
or 25Gbps, respectively. Here, the number of
links used depends on the maximum output
speed of the sensor and/or the data rate. In
the design of its CXP interface for Sony block
cameras, for example, Active Silicon (Iver,
England; www.activesilicon.com) has used a
single 2.5Gbps CXP-2 link to support all the
HD modes – 1080i and 720p at 50Hz or
60Hz – of the 1/3in 2MPixel CMOS
imager used in the camera (Figure 1).
Multiple links
Where faster megapixel imagers are used,
manufacturers must opt to provide multiple
CXP-6 links. When such sensors are used,
cameras can be configured around differ-
ent CXP configurations to provide the speed
and bit depth required. For example, JAI (San
Jose, CA, USA; www.jai.com) uses the Lin-
ce5M 2560 x 2048 CMOS sensor from Anafo-
cus (Seville, Spain; www.anafocus.com) – now
part of e2v (Chelmsford, England, UK; www.
e2v.com) – in its Lince5M-based cameras.
Studying the datasheet of the Lince5M at
http://bit.ly/2dDkZax reveals that the 2560 x
2048 1 in. CMOS imager can be operated in
12-bit mode at a maximum rate of 250 fps. At
such frame rates and bit depths, four CXP-6
links are required by JAI’s SP-5000M-CXP4
(monochrome) and SP-5000C-CXP4 (color)
cameras. However, the sensor can also be oper-
ated at less than its full data rate when interface
bandwidth is limited.
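The arithmetic behind that link count can be checked with a quick first-order estimate. CXP uses 8b/10b encoding, so roughly 80% of each 6.25Gbps line is available as payload; protocol overhead is ignored in this sketch, so treat the result only as a sanity check:

```python
import math

def cxp_links_needed(width, height, bits_per_px, fps,
                     link_gbps=6.25, encoding_efficiency=0.8):
    """Estimate the number of CoaXPress links required for a sensor
    mode. Assumes 8b/10b encoding (~80% payload per link) and ignores
    packetization overhead, so this is a first-order check only."""
    pixel_rate_gbps = width * height * bits_per_px * fps / 1e9
    payload_per_link = link_gbps * encoding_efficiency  # ~5 Gbps at CXP-6
    return math.ceil(pixel_rate_gbps / payload_per_link)

# Lince5M full resolution, 12-bit, 250 fps, as cited in the article:
# 2560 x 2048 x 12 x 250 ≈ 15.7 Gbps, just over three links' payload,
# hence the four CXP-6 links used by the SP-5000-CXP4 cameras.
```

The same estimate shows why a single CXP-2 link comfortably carries uncompressed HD video, and why the sensor's reduced 8-bit modes ease the interface requirement.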
For example, the Lince5M can output full
resolution in a reduced 211 fps, 8-bit mode
Figure 1: For Sony’s FCB-H11 block camera,
Active Silicon has used a single 2.5Gbps CXP-2
link to support all the HD modes of the FCB-
H11 – 1080i and 720p at 50Hz or 60Hz – of the
1/3in 2MPixel CMOS imager used in the camera.
CMOS Cameras with CXP interfaces
Company | Model Number | Data rate | Sensor | Other | Sensor device

Active Silicon (Iver, England, UK; www.activesilicon.com)
CXP interface for Sony FCB-H11/10 and Sony FCB-H11 block cameras | Single 2.5Gbps CoaXPress link | 1/3in 2M pixel CMOS | 16 x 9 aspect ratio, 120x zoom (H11)

Adimec (Eindhoven, The Netherlands; www.adimec.com)
Quartz Q-2A340 | 340 fps | 2048 x 1088 CMOS | 2/3in sensor, 5.5 x 5.5μm pixel size | CMOSIS CMV2000
Quartz Q-4A180 | 180 fps | 2048 x 2048 CMOS | 1in sensor, 5.5 x 5.5μm pixel size | CMOSIS CMV4000
Quartz Q-12A180 | 187 fps | 12MPixel CMOS | Optical size: APS-C, 5.5 x 5.5μm pixel size | CMOSIS CMV12000
Sapphire S-25A70 | 73 fps | 5120 x 5120 CMOS | 35mm sensor, 4.5 x 4.5μm pixel size | ON Semi VITA25k
Sapphire S-25A80 | 80 fps | 5120 x 5120 CMOS | 35mm sensor, 4.5 x 4.5μm pixel size | ON Semi PYTHON25k
Norite N-5A100 | 100 fps | 2592 x 2048 CMOS | 1in sensor, 4.8 x 4.8μm pixel size | ON Semi PYTHON5000

BAP Image Systems (Erlangen, Germany; www.bapimgsys.com)
LC8K100CXP | 80 kHz scan line | 8194 px x 2 lines (Bayer) / 8194 pixels mono | 7 x 7μm square pixels | Awaiba Dragster

CIS Corp (Tokyo, Japan; www.ciscorp.co.jp)
VCC-25CXP1M/R | 81 fps | 5120 x 5120 CMOS | 35mm sensor, 4.5 x 4.5μm pixel size
VCC-5CXP3M/R | 85.1 fps | 2592 x 2048 CMOS | 1in sensor, 4.8 x 4.8μm pixel size
VCC-SXCXP3M/R | 168.5 fps | 1280 x 1024 CMOS | 1/2in sensor, 4.8 x 4.8μm pixel size
VCC-VCXP3M/R | 536.7 fps | 640 x 480 CMOS | 1/4in sensor, 4.8 x 4.8μm pixel size
VCC-SVCXP3M/R | 386.3 fps | 800 x 600 CMOS | 1/3.6in sensor, 4.8 x 4.8μm pixel size
VCC-5CXP1C | 80 fps | 2560 x 2048 CMOS | 1in sensor, 5 x 5μm pixel size
VCC-2CXP3M | 180.5 fps | 1920 x 1200 CMOS | 2/3in sensor, 4.8 x 4.8μm pixel size
VCC-10CXP1M/R | 175 fps | 3840 x 2896 CMOS | 4/3in sensor, 4.5 x 4.5μm pixel size
VCC-12CXP1M/R | 162 fps | 4096 x 3072 CMOS | 4/3in sensor, 4.5 x 4.5μm pixel size
VCC-16CXP1M | 124 fps | 4096 x 4096 CMOS | 35mm sensor, 4.5 x 4.5μm pixel size

e2v (Chelmsford, England, UK; www.e2v.com)
ELiiXA+ 12k 200KHz | 200 kHz | 11008 at 200KHz | linescan sensor, 5 x 5μm pixel size
ELiiXA+ 16k 140KHz | 140 kHz | 16384 or 8192 | linescan sensor, 5 x 5μm pixel size
ELiiXA+ 16k/8k 100KHz | 100 kHz | 16384 or 8192 | linescan sensor, 5 x 5μm pixel size
ELiiXA+ 16k/8k Colour | 2335 pix/s | 16384 or 8192 | linescan sensor, 5 x 5μm pixel size

Imperx (Boca Raton, FL, USA; www.imperx.com)
C2880 | 135 fps | 2832 x 2128 | 1in sensor, 4.7 x 4.7μm pixel size | ON Semi KAC-06040
C4080 70 fps 3000 x 4000 4/3in sensor, 4.7 x 4.7 μm pixel sizeON Semi KAC-12040
IO IndustriesLondon, ON, Canadawww.ioindustries.com
Flare 50MP 30.9 fps 7920 x 6004 35mm, 4.6 x 4.6μm pixel size
Flare 12MP 187 fps (8-bit) 4096 x 3072APS-C (28.1mm diagonal), 5 x 5μm pixel size
Flare 4MP 140 fps (8-bit) 2048 x 2048 1in sensor, 5 x 5μm pixel size
Flare 2MP 264 fps (8-bit) 2048 x 1088 2/3in sensor, 5 x 5μm pixel size
ISVISeoul, South Koreawww.isvi-corp.com
IC-M12S-CXP/IC-C12S-CXP
181 fps (8-bit) 4096 x 3072APS-C (28.1mm diagonal), 5.5 x 5.5μm pixel size
CMOSIS CMV12000
IC-M25CXP/IC-C25CXP 53 fps 5120 x 5120 35mm sensor, 4.5 x 4.5μm pixel size
JAISan Jose, CA, USAwww.jai.com
SW-2000M-CXP 80 kHz 2048 x 1 40.96mm sensor, 20 x 20μm pixel size Custom CMOS
SP-5000M-CXP2 and SP-5000C-CXP2
211 fps 2560 x 2048 1in sensor, 5 x 5μm pixel sizee2V (Anafocus) Lince5M
SP-20000M-CXP2 and SP-20000C-CXP2
30 fps 5120 x 3840 35mm, 6.4 x 6.4μm pixel sizeCMOSIS CMV20000
SW-2000Q-CXP2 80 kHz 2048 x 4 Color line scan, 20 x 20μm pixel size
SW-2000T-CXP2 80 kHz 2048 x 3 Color line scan, 20 x 20μm pixel size
SP-12000M-CXP4 and SP-12000C-CXP4
189f ps 4096 x 3072APS-C (28.1mm diagonal), 5.5 x 5.5μm pixel size
CMOSIS MV12000
SP-5000M-CXP4 and SP-5000C-CXP4
253 fps 2560 x 2048 1in sensor, 5 x 5μm pixel sizee2V (Anafocus) Lince5M
CMOS Cameras with CXP interfaces (continued)

| Company | Model | Data rate | Sensor | Other | Sensor device |
| --- | --- | --- | --- | --- | --- |
| Lambert Instruments (Groningen, The Netherlands; www.lambertinstruments.com) | HS540M | 540 fps | 1696 x 1710 | 8 x 8μm pixel size | – |
| Laon People (Bundang-gu, South Korea; www.laonpeople.com) | LPMVC-CXP12M | 190 fps | 4096 x 3068 | 28.1mm sensor, 5 x 5μm pixel size | – |
| | LPMVC-CXP25M | 72 fps | 5120 x 5120 | 35mm sensor, 4.5 x 4.5μm pixel size | – |
| Mikrotron (Unterschleissheim, Germany; www.mikrotron.de) | EoSens 3CXPm/c | 566 fps | 1696 x 1710 | 1in sensor, 8 x 8μm pixel size | ON Semi LUPA3000 |
| | EoSens 4CXPm/c | 563 fps | 2336 x 1728 | 4/3in sensor, 7 x 7μm pixel size | Alexima AM41 |
| | EoSens 25CXPm/c | 80 fps | 5120 x 5120 | 35mm sensor, 4.5 x 4.5μm pixel size | ON Semi VITA25K |
| | EoSens 12CXP+ | 165 fps | 4096 x 3072 | 35mm sensor, 4.5 x 4.5μm pixel size | ON Semi PYTHON |
| | EoSens 25CXP+ | 80 fps | 5120 x 5120 | 35mm sensor, 4.5 x 4.5μm pixel size | ON Semi PYTHON |
| NED (Osaka, Japan; www.ned-sensor.co.jp) | XCM20160T2CXP | 68/125 kHz | 2048 x 1 | 28.672mm line scan, 14 x 14μm pixel size | – |
| | XCM40160CXP | 3.58 kHz | 4096 x 1 | 28.672mm line scan, 7 x 7μm pixel size | – |
| | XCM60160CXP | 24.88 kHz | 6144 x 1 | 43.008mm line scan, 7 x 7μm pixel size | – |
| | XCM80160CXP | 18.65 kHz | 8192 x 1 | 57.344mm line scan, 7 x 7μm pixel size | – |
| | XCM80160T2CXP | 33.58 kHz | 8192 x 1 | 57.344mm line scan, 7 x 7μm pixel size | – |
| | XCM16K04GT4CXP | 68.97 kHz | 16384 x 1 | 57.344mm line scan, 3.5 x 3.5μm pixel size | – |
| | SCAN-12MX | 190 fps | 4096 x 3072 | 5.5 x 5.5μm pixel size | – |
| | SCAN-25MX | 73 fps | 5120 x 5120 | 4.5 x 4.5μm pixel size | – |
| Optronis (Kehl, Germany; www.optronis.com) | CP70-12-M-167 / CP70-12-C-167 | 167 fps | 4080 x 3072 | 5.5 x 5.5μm pixel size | CMOSIS CMV12000 |
| | CP80-3-M-540 / CP80-3-C-540 | 540 fps | 1696 x 1710 | 8 x 8μm pixel size | ON Semi LUPA3000 |
| | CP80-4-M-500 / CP80-4-C-500 | 500 fps | 2304 x 1720 | 7 x 7μm pixel size | Alexima AM41 |
| | CP80-25-M-72 / CP80-25-C-72 | 72 fps | 5120 x 5120 | 4.5 x 4.5μm pixel size | ON Semi VITA25K |
| | CP70-12-M-188 / CP70-12-C-188 | 188 fps | 4080 x 3072 | 5.5 x 5.5μm pixel size | – |
| | CP70-HD-M-900 / CP70-HD-C-900 | 908 fps | 1920 x 1080 | 5.5 x 5.5μm pixel size | – |
| | CP70-1-M/C-1000 | 1040 fps | 1280 x 1024 | 6.6 x 6.6μm pixel size | Luxima LUX1310 global-shutter CMOS |
| SVS-Vistek (Seefeld, Germany; www.svs-vistek.com) | hr25000MCX/CCX | 80 fps | 5120 x 5120 | 35mm, 4.5 x 4.5μm pixel size | ON Semi PYTHON25K |
| Toshiba Teli (Tokyo, Japan; www.toshiba-teli.co.jp) | CSX12M25CMP19 | 25 fps | 4096 x 3072 | 6 x 6μm pixel size | Proprietary 1.9-type sensor |
| Vieworks (Gyeonggi-do, Republic of Korea; www.vieworks.com) | VC-4MX-M/C 144 | 144 fps | 2048 x 2048 | 1in, 5.5 x 5.5μm pixel size | CMOSIS CMV4000 |
| | VC-12MX-M/C 180 | 180 fps | 4096 x 3072 | APS-like, 5.5 x 5.5μm pixel size | CMOSIS CMV12000 |
| | VC-25MX-M/C 72 | 72 fps | 5120 x 5120 | 35mm, 4.5 x 4.5μm pixel size | ON Semi VITA25K |
| | VT-18K3.5X-H140/H80 | 142 kHz / 80 kHz | 1784 x 256 | TDI camera, 3.5 x 3.5μm pixel size | Proprietary |
| | VT-12K5X-H200/H100 | 200 kHz / 100 kHz | 12480 x 256 | TDI camera, 5 x 5μm pixel size | Proprietary |
| | VT-9K7X-H250/H120 | 250 kHz / 125 kHz | 8912 x 128 | TDI camera, 7 x 7μm pixel size | Proprietary |
| | VT-6K10X-H170 | 172 kHz | 6240 x 128 | TDI camera, 10 x 10μm pixel size | Proprietary |

Table 1: Today's line-scan and area-array cameras that support the CXP standard use a number of off-the-shelf imagers from just a handful of companies.
using two CXP-6 links, a fact that has been exploited in the design of JAI's SP-5000M-CXP2 and SP-5000C-CXP2 CXP cameras (Figure 2). This brings several benefits: fewer drivers and equalizers are needed to implement the CXP interface, and dual-camera configurations can be deployed on four-link CXP frame grabbers available from Active Silicon, BitFlow (Woburn, MA, USA; www.bitflow.com), Euresys (Angleur, Belgium; www.euresys.com) and others.
Four CXP-6 links are also used to support the data rates from image sensors such as the AM41 from Alexima (Pasadena, CA, USA; www.alexima.com). While Mikrotron (Unterschleissheim, Germany; www.mikrotron.de) uses the device in its EoSens 4CXPm/c, Optronis (Kehl, Germany; www.optronis.com) uses the AM41 in its CP80-4-M/C-500 camera (see "CMOS Cameras with CXP Interfaces" table on page 18 of this issue).
Interestingly, the specifications of these cameras found on the Mikrotron and Optronis websites differ slightly from the original AM41V4 data sheet specification, which can be found at http://bit.ly/2ebbzVz. Regardless, using four CXP links allows both cameras to transfer 4Mpixel images at approximately 500 fps at full resolution.
More than four
While many currently available frame grabbers support such four-link cameras, the Komodo CXP frame grabber from Kaya Instruments (Haifa, Israel; www.kayainstruments.com) can receive image data from up to eight CoaXPress links in single, dual, quad or octal modes (Figure 3). It can thus simultaneously capture image data from two quad-link CXP-6 cameras at a maximum data rate of 25Gbps per camera, allowing dual-camera systems with an aggregate data rate of up to 50Gbps on a single frame grabber.
Interestingly, at the time of writing, the VC-12MX2-M330, a 12Mpixel eight-connection CoaXPress camera from Vieworks (Gyeonggi-do, Republic of Korea; www.vieworks.com) announced at last month's VISION trade show in Stuttgart, was shown coupled to two Euresys Coaxlink Quad G3 frame grabbers and running at 330 fps. Also showcased
were eight new TDI camera models featuring proprietary hybrid TDI (Time Delayed Integration) sensors that combine a CCD-based pixel array, with its light-sensitive, noiseless charge transfer and accumulation, with fast CMOS readout electronics. The resulting high-sensitivity, high-dynamic-range sensors are said to offer CCD-like image quality with the speed and low power consumption commonly found in CMOS sensors.
However, such octal designs have not been widely adopted, for reasons that may include the cost of the additional drivers and equalizers in the cameras and the added cost of the cables required to implement single camera/frame grabber octal CXP-6 systems. There is no reason why this could not be accomplished, however, and cameras that use devices such as the CMV12000 from CMOSIS (Antwerp, Belgium; www.cmosis.com) could benefit from such implementations.
Indeed, running the CMOSIS 12Mpixel 4096 x 3072 device at 300 fps in 10-bit mode would require a data rate of approximately 38Gbps and thus eight CXP-6 links. This is why camera developers such as IO Industries (London, ON, Canada; www.ioindustries.com), which uses the CMV12000 in its four-channel CXP-6 Flare 12M180-CX, must run the device at a slower rate of 187 fps.
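The inverse calculation, the frame rate a given link configuration can sustain, shows why. This is a back-of-the-envelope sketch; the 80% payload efficiency is an assumption based on CXP's 8b/10b encoding, not a vendor figure:

```python
def max_fps(width, height, bits, n_links, link_gbps=6.25, payload_eff=0.8):
    """Upper bound on frame rate over a given number of CXP links."""
    payload_bps = n_links * link_gbps * payload_eff * 1e9
    return payload_bps / (width * height * bits)

# CMV12000 at full speed: 4096 x 3072 x 10-bit x 300 fps is ~38 Gbps raw,
# beyond what four CXP-6 links can carry:
print(f"{4096 * 3072 * 10 * 300 / 1e9:.1f} Gbps raw at 300 fps")
print(f"{max_fps(4096, 3072, 10, 4):.0f} fps cap at 10-bit over 4 x CXP-6")
print(f"{max_fps(4096, 3072, 8, 4):.0f} fps cap at 8-bit over 4 x CXP-6")
```

Under these assumptions a four-link camera tops out below 200 fps at 8-bit, consistent with the 187 fps figure quoted for the Flare 12M180-CX.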
Going faster
It is unlikely that many camera and (perhaps) frame grabber vendors will adopt octal CXP-6-based implementations; rather, they will wait for the next generation of higher-speed drivers and equalizers. These will increase the current maximum data rate beyond the existing 6.25Gbps (CXP-6) by adding 10Gbps (CXP-10) and 12.5Gbps (CXP-12.5) speed grades. Of the camera manufacturers contacted for this article, many were still waiting to receive samples of the higher-speed drivers and equalizers.
“All of our cameras are designed around the current 6.25Gbps CXP-6 standard due to the unavailability of the new CXP driver chip,” says Yusuke Muraoka, President of CIS Corp, “but we will definitely look into the faster standard once the standard and the devices are ready.” Similarly, Mikrotron did not show a CXP-10 or CXP-12.5 camera at the VISION show last month.
“CXP-12 is on the company’s roadmap and
will find its way into Mikrotron CXP cam-
eras but not until mid to late next year,” says
Steve Ferrell, Head of Business Development
at Mikrotron’s North American Office (Poway,
CA, USA). “I understand that Microchip Tech-
nology (Chandler, AZ, USA; www.microchip.
com) is not scheduled to go into production
with parts until Q3 2017,” he added.
While many camera vendors appear to lack the sample ICs needed to increase the speed of their CXP-based cameras, frame grabber vendors have already received engineering samples. Marc Damhaut, Chief Executive Officer of Euresys, notes that the company is currently working on prototypes and proofs of concept for these two new versions
Figure 2: At a reduced 211 fps, JAI's SP-5000M-CXP2 (monochrome) and SP-5000C-CXP2 (color) CXP cameras require only two CXP-6 links.

Figure 3: Kaya Instruments' Komodo CXP frame grabber can receive image data from up to eight CoaXPress links in single, dual, quad or octal modes.
of the standard (Figure 4).
“We have built a prototype CXP-10 frame grabber using engineering samples of the forthcoming Microchip Technology devices,” he says. The company
chose last month’s VISION 2016
show in Stuttgart to display the
frame grabber in cooperation
with Microchip and Adimec
(Eindhoven, The Netherlands;
www.adimec.com). “We’re also working on validating a CXP-12 interface based on Macom (Lowell, MA, USA; www.macom.com) devices.”
Active Silicon is also developing a frame
grabber to support CXP-10 and CXP-12.5,
according to Colin Pearce, CEO. Although
this was also announced at VISION 2016, the
company did not exhibit the board.
Speed benefits
Developers of high-frame-rate cameras could use devices such as CMOSIS' CMV12000 to achieve full-device data rates of 38Gbps using quad CXP-12.5 links and a single quad-link frame grabber. Line-scan camera manufacturers may likewise benefit from the emerging CXP-10 and CXP-12.5 standards, reducing the number of CXP cables required while increasing camera throughput.
At present, for example, the XCM16K04GT4CXP, a 16384 x 1 line-scan camera from NED (Osaka, Japan; www.ned-sensor.co.jp), uses four CXP-5 connectors to output data at 68.97kHz. This number of connectors could be reduced should the company choose to implement the emerging CXP standards. While the speed and resolution of both line-scan and area-scan CMOS imagers will continue to increase to meet the demands of applications such as industrial inspection and medical imaging, so too will the high-speed interfaces needed to support them.
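The cable-count saving for line-scan cameras follows directly from the same link arithmetic. The sketch below assumes 8-bit output and an 80% link payload efficiency (from 8b/10b encoding); actual cameras reserve extra headroom, which is why more connectors are often fitted than the raw minimum suggests:

```python
import math

def linescan_links(pixels, line_rate_hz, bits, link_gbps, payload_eff=0.8):
    """Estimate CXP links needed for a line-scan data stream."""
    gbps = pixels * line_rate_hz * bits / 1e9
    return math.ceil(gbps / (link_gbps * payload_eff)), gbps

# 16384-pixel line scan at 68.97 kHz, assumed 8-bit output
for name, speed in [("CXP-6", 6.25), ("CXP-10", 10.0), ("CXP-12.5", 12.5)]:
    links, gbps = linescan_links(16384, 68_970, 8, speed)
    print(f"{gbps:.1f} Gbps -> {links} x {name}")
```

Under these assumptions the same ~9 Gbps stream that needs two CXP-6 cables fits on a single CXP-12.5 cable.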
Formerly, relatively small companies led such CMOS device innovation. Lately, however, there has been rapid consolidation in the marketplace, with companies such as Anafocus, CMOSIS and Truesense Imaging (Rochester, NY, USA) being acquired by e2v, the AMS Group (Premstaetten, Austria; http://ams.com) and ON Semiconductor (Phoenix, AZ, USA; www.onsemi.com), respectively. Whether this somewhat stifles the innovation of novel CMOS imagers remains to be seen. Camera companies wishing to differentiate their products more effectively, however, will need to seek out newer sensor start-ups or invest in custom CMOS imagers.
Figure 4: Euresys has built a prototype CXP-10 frame grabber using engineering samples of Microchip Technology devices.
Companies mentioned

Active Silicon, Iver, England; www.activesilicon.com
Adimec, Eindhoven, The Netherlands; www.adimec.com
Alexima, Pasadena, CA, USA; www.alexima.com
The AMS Group, Premstaetten, Austria; http://ams.com
Anafocus, Seville, Spain; www.anafocus.com
Awaiba, Funchal, Madeira, Portugal; www.awaiba.com
BAP Image Systems, Erlangen, Germany; www.bapimgsys.com
BitFlow, Woburn, MA, USA; www.bitflow.com
CIS Corp, Tokyo, Japan; www.ciscorp.co.jp
CMOSIS, Antwerp, Belgium; www.cmosis.com
e2v, Chelmsford, England; www.e2v.com
Euresys, Angleur, Belgium; www.euresys.com
Imperx, Boca Raton, FL, USA; www.imperx.com
IO Industries, London, Ontario, Canada; www.ioindustries.com
ISVI, Seoul, South Korea; www.isvi-corp.com
JAI, San Jose, CA, USA; www.jai.com
Kaya Instruments, Haifa, Israel; www.kayainstruments.com
Lambert Instruments, Groningen, The Netherlands; www.lambertinstruments.com
Laon People, Bundang-gu, South Korea; www.laonpeople.com
Luxima Technology, Pasadena, CA, USA; www.luxima.com
MACOM, Lowell, MA, USA; www.macom.com
Microchip Technology, Chandler, AZ, USA; www.microchip.com
Mikrotron, Unterschleissheim, Germany; www.mikrotron.de
NED, Osaka, Japan; www.ned-sensor.co.jp
ON Semiconductor, Phoenix, AZ, USA; www.onsemi.com
Optronis, Kehl, Germany; www.optronis.com
SVS-Vistek, Seefeld, Germany; www.svs-vistek.com
Toshiba Teli, Tokyo, Japan; www.toshiba-teli.co.jp
Vieworks, Gyeonggi-do, Republic of Korea; www.vieworks.com
For more information about CoaXPress products, visit Vision Systems Design’s Buyer’s Guide buyersguide.vision-systems.com
[Figure 1 (chart): “How many industrial cameras in percent did you sell in the following price ranges?” <$150: 26%; $150–350: 23%; $350–650: 21%; $650–1,000: 11%; $1,000–3,000: 6%; >$3,000: 13%.]
Industrial camera applications, technologies and interfaces

Manufacturers and integrators review industrial camera market status and future trends.

Ute Häussler
In collaboration with Vision Systems Design and Inspect magazines, FRAMOS (Taufkirchen, Germany; www.framos.com) has conducted a market survey of trends, interfaces and future developments in camera technology. The study, based on the opinions of 52 users and 8 manufacturers (55% from Europe, 23% from North and South America, and 22% from Asia/the Middle East), was presented at the VISION 2016 show in Stuttgart. Europe ranked first in terms of both purchasing and production, at 62% and 43% respectively. Manufacturers also produce in Asia (13%) and in America (6%). Behind Europe, Asia and America have equally strong purchasing markets, at 28%. Compared with 2015, the American share has declined to 28%, which can be ascribed to weaker survey participation from North and South America.
Camera vendors cited that measurement and logistics comprised 50% and 13% of sales respectively, while production automation and quality assurance applications ranked equally at 63%. Medical diagnostics and scientific applications also ranked equally, at 38%. Similar results were reported by camera users, who said that 48% of the cameras they purchase are deployed in automation applications, 46% in quality assurance, 40% in measurement, and 29% in scientific applications. Medical diagnostic applications account for 10% of user camera deployments. While traffic applications, including vehicle assistance systems, are significant to manufacturers, representing 25% and 13% of sales respectively, they appear to be less relevant to users, with deployments in these applications ranking at only 10% and 2% respectively.
As in past years, both camera manufacturers and systems developers see continued growth in the image processing and machine vision industry, with 90% of users intending to introduce new systems or replace existing ones within the next two years.
Camera pricing
After a high of 70% in 2014, manufacturers' production roadmaps have steadily declined to 44% for cameras in the mid-price range between $150 and $650, which reveals some price stabilization after successive drops in previous years. Low-cost cameras under $150 are least significant to manufacturers and users in percentage terms, at 26% and 11% respectively. Compared with 2015, high-priced cameras above $650, $1,000 and $3,000 have dropped by 12 percentage points (Figure 1). As a result, production of customized cameras for specific applications appears to be a strong selling point, and a market advantage, for smaller camera manufacturers.
CMOS vs. CCD
In light of Sony's discontinuation of CCD imagers, users see the greatest growth for companies such as ON Semiconductor, which currently holds a 29% market share. The declines foreseen for Sony by users and manufacturers in last year's study have not materialized, with 32% of all camera manufacturers currently relying on Sony. In two years, Sony is expected to grow back to the 37% levels it held before the discontinuation. Even more users rely on Sony this year than last, an increase from 35% to 53%.

Figure 1: The fact that cameras costing more than $1,000 account for almost 20% of manufacturer sales indicates strong demand for high-end cameras in many applications.

Ute Häussler, Manager of Marketing Communications, FRAMOS (Taufkirchen, Germany; www.framos.com)

[Figure 2 (chart): “How often do you use cameras with the following interface types – now and expected in 2 years?” Interfaces surveyed: Firewire, USB 2.0, USB 3.0, Camera Link, Camera Link HS, Ethernet, GigE (Gigabit Ethernet), Dual GigE, 10GigE, CoaXPress and others. GigE leads at 33% today, rising to 41% in two years; Ethernet stands at 15%, rising to 18%.]
Camera vendors cite that CMOS technology accounts for 85% of camera sales. Camera users expect to reach this purchasing level in the next two years. With no increase compared to last year, 51% of users rely on CMOS today, but they predict faster growth to 83%, in contrast to the 70% predicted in 2015. Based on the shift from CCD technology, e2v benefits among manufacturers (from 3% to 12%), as do customer-specific sensors (from 4% to 19%), with an equally positive forecast.

While approximately 30% of users relied on sensors under 1Mpixel last year, in 2016 only 10% do. This significant decline shows up as growth in the 1-3Mpixel (+10 points), 3-5Mpixel (+2 points) and 5-10Mpixel (+3 points) categories. Manufacturers as well as users expect a focus on frame rates between 25 fps and 60 fps today and in the coming two years. At the same time, compared to last year, significant increases have taken place above 100 fps (+13 points for users) and 200 fps (+14 points for manufacturers).
Standard interfaces
GigE Vision dominates according to manufacturers, with 33%, followed by Ethernet with 15%. Compared with last year, and based on a low proportion of American participants, the previously high Ethernet percentage has been reduced. Manufacturers and users expect USB 3.0 and GigE Vision to grow fastest, with increases of 8 and 10 percentage points, respectively.

More than 75% of manufacturers and 60% of users expect bandwidths greater than 5 Gbps to become relevant or very relevant. 50% of manufacturers believe that USB 3.1 is the most important interface for high-speed applications, followed by 38% for 10 GigE. In contrast, 44% of users favor 10 GigE for fast transmission, and only 37% of users vote for USB 3.1.
Figure 2: Among systems integrators, the GigE standard proved the most popular, followed by Camera Link and USB 3.0.
Industry Solutions Profile
High-speed inspection system finds defects in steel

Vision inspects the surfaces of hot rolled steel long products as if they were cold, even though the inspection takes place at a temperature of over 1,000°C.
Antonio Cruz-Lopez, Alberto
Lago, Roberto Gonzalez, Aitor
Alvarez and José Angel
Gutiérrez Olabarria
To produce seamless steel tubes, a steel billet is transported into a furnace, where it is first heated. Next, the billet is pierced to form a thick-walled hollow shell, after which a mandrel bar is inserted into the shell. The shell then undergoes elongation rolling in a mandrel mill. Following the elongation process, the billet is conveyed to a push bench, where it is pushed through a series of roller cages, forming a hollow length of steel tubing with consecutively smaller wall thicknesses.
As effective as the hot rolling process is, the roller cages in the push benches can sporadically produce marks and defects on the surface of the steel that are extremely difficult to detect in hot conditions. Hence, in their quality improvement programs, many manufacturers look to identify such defects as early as possible to avoid producing tons of defective material at considerable expense.
Vision system
To resolve those issues, engineers at Tecnalia (Derio, Bizkaia, Spain; www.tecnalia.com) have developed a machine vision system dubbed Surfin' that enables steel manufacturers to detect such defects as the steel emanates from the push bench (Figure 1). Detecting these defects gives manufacturers an indication of any issues in the production process, enabling them to perform preventative maintenance on the push benches at an early stage and preclude any defective steel tubes from being delivered to their customers.
The typical defects found on the surface of such tubes, generated by the roller cages, usually follow a repetitive pattern and continue to appear until the rolling stands are changed. They can include tears or rips in the surface, rolling-stand blocking marks, cracks, and detached steel that is later pasted onto another part of the surface of the steel tube.

Antonio Cruz-Lopez, Alberto Lago, Roberto Gonzalez, Aitor Alvarez and José Angel Gutiérrez Olabarria, Machine Vision Engineering Team, Tecnalia, C/ Geldo 700, 48160 Derio, Bizkaia, Spain (www.tecnalia.com).

Figure 1: Engineers at Tecnalia have developed a vision system called Surfin' that enables steel manufacturers to detect defects in steel tubing as the steel emanates from a push bench at over 1000°C.
The challenges faced by the developers were not trivial, as conditions in such a production environment are extreme. Not only are the steel tubes produced at a relatively high speed of 6-7 m/s (Surfin' can work at up to 10 m/s), the temperature of the steel as it emanates from the roller cages is about 1000°C. Compounding the inspection problem, the environment is dirty, and water and oil vapor are present.
Because the hot surface of the steel radiates light directly related to thermal emission in the IR, red, orange and yellow bands, capturing all the light reflected by the surface would saturate the camera's sensor, since the camera would be sensitive to all the radiation emitted by the steel tube. To solve this, the Surfin' system (patents ES2378602 and EP2341330) uses illumination at a wavelength far from the emitted spectrum of the incandescent steel. Images reaching the system's cameras are optically filtered with a narrow band-pass filter from Edmund Optics (Barrington, NJ, USA; www.edmundoptics.com) centered on 470nm with a width of 10nm, together with an infrared (IR) radiation filter. The two filters let the CCD cameras receive radiation only in the desired wavelength band, while the IR filters protect the electronic systems from heat radiation. This controlled lighting technique allows the system to capture images of the entire surface of the tube as if it were cold.
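Planck's law shows why a 470nm source works: at roughly 1273K (1000°C), a blackbody emits orders of magnitude less in the blue than at longer visible wavelengths, so a narrow blue band-pass filter rejects nearly all of the tube's own glow. This sketch treats the steel as an ideal blackbody, which real steel is not, so the ratio is only indicative:

```python
import math

# Physical constants: Planck, speed of light, Boltzmann
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance (W per sr per m^3) at one wavelength."""
    x = h * c / (wavelength_m * k * temp_k)
    return (2 * h * c**2 / wavelength_m**5) / math.expm1(x)

T = 1273.0  # ~1000 degC hot rolled steel
blue = planck_radiance(470e-9, T)   # inside the 470nm band-pass filter
red = planck_radiance(650e-9, T)    # deep in the visible glow
print(f"thermal emission at 650nm is ~{red / blue:.0f}x that at 470nm")
```

Illuminating with a blue source therefore lets reflected surface detail dominate the emitted thermal background inside the filter's 10nm passband.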
To capture a 360° image of the surface of the steel tube, the system uses three sets of 14,000 lines/s Teledyne DALSA (Waterloo, ON, Canada; www.teledynedalsa.com) Spyder 3 line-scan cameras mounted at 120° intervals, perpendicular to the plane of the rolling steel shaft, in protective enclosures around the output of the push bench. In a former version of the system, two 200mW, 473nm blue laser light sources from Laserglow Technologies (Toronto, ON, Canada; www.laserglow.com) were employed on both sides of each camera to illuminate the surface of the steel with dark-field lighting. As a result of the geometry of the system, it is possible to capture a complete image of the tube continuously and in real time (Figures 2a and 2b).
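A quick consequence of the line-scan geometry: the axial sampling pitch on the tube is simply tube speed divided by line rate (a rough figure that ignores optics and synchronization):

```python
def line_pitch_mm(tube_speed_m_s, line_rate_hz):
    """Distance the tube travels between successive scan lines."""
    return tube_speed_m_s / line_rate_hz * 1000.0

# Production speeds quoted in the article, 14,000 lines/s cameras
for v in (6.0, 7.0, 10.0):
    print(f"{v:>4} m/s -> {line_pitch_mm(v, 14_000):.2f} mm per scan line")
```

Even at the system's 10 m/s maximum, each scan line covers under a millimeter of tube, fine enough to resolve the repetitive roller-cage marks.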
Because of the temperature of the environment, it was important to keep the cameras continuously cool. To do so, cooled compressed air is injected into the protective enclosures to protect the camera and laser equipment from the heat and the harsh environment. Not only does the air cool the systems, but excess air is expelled through the window through which the lasers project their beams and the cameras capture images, preventing the deposition of scale, oxides, dust and liquids.
Figure 2a and 2b: To capture a 360° image of the surface of the steel tube (3), the system uses three sets of lasers (1) and 14,000 lines/s line-scan cameras (2). The laser and camera sets are mounted at 120° intervals in the same plane, perpendicular to the plane of the rolling shaft, in protective enclosures around the output of the push bench.
Figure 3: Typical defects that occur in the steel include (from left): (a) pasted material, (b) removed material and (c) rolling marks.
[Charts for Figures 5 and 6: TPR (sensitivity) and TNR (specificity) plotted against the decision threshold; at the chosen threshold of 0.534, FPR = 1.58% and FNR = 1.49%.]
Image processing
Once the images of the surface of the steel are captured, they are transferred 100m over an optical-fiber Gigabit Ethernet link to a PC-based server in the control room. Here, the images are first pre-processed with custom-built image enhancement algorithms such as histogram equalization. Since the usable data in the original images are represented by close contrast values, the technique increases the global contrast of the images.
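Histogram equalization spreads those close contrast values across the full gray range by remapping each level through the image's cumulative distribution. A minimal pure-Python sketch (the production system's own implementation is custom and not published):

```python
def equalize_hist(pixels, levels=256):
    """Global histogram equalization for a flat list of 8-bit gray values."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution, then map it onto the full output range
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    scale = (levels - 1) / (cdf[-1] - cdf_min)
    lut = [round((c - cdf_min) * scale) for c in cdf]
    return [lut[p] for p in pixels]

# Low-contrast strip: values squeezed into [100, 139]
img = [100 + (i * 7) % 40 for i in range(1000)]
out = equalize_hist(img)
print(min(img), max(img), "->", min(out), max(out))  # 100 139 -> 0 255
```

The narrow 40-level band is stretched to the full 0-255 range, which is exactly the global contrast boost the pre-processing stage relies on.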
The enhanced images are then processed using custom-built software which, in a previous version of the system, employed an assisted learning system based on Support Vector Machines (SVMs). Once the system has been taught specific defects from examples of their texture, contrast and size, the algorithm can automatically detect and classify the most important production defects in the production environment (Figure 3).
The PC-based server that stores the images from the cameras, together with data on the defects found and their locations on the tubes, also stores alarms for pressure, temperature, speed signals, communications and other tube production data in an Oracle database for quality control and traceability. It is also possible to inspect the data on the server remotely by installing a client application on a computer connected to the company LAN.
Since the system was originally developed, it has undergone several enhancements. While the basic concept behind the system has been retained, much has been improved. The structure of the system has been redesigned so that the alignment of the cameras and the lighting can be adjusted more easily.

Newer versions of the system have also adopted liquid rather than air cooling, enabling both the lighting and the sensors to be placed closer to the steel tubing and allowing hotter or larger steel sections to be imaged.
LED light sources from Metaphase Technologies (Bristol, PA, USA; www.metaphase-tech.com) have also replaced the earlier lasers, increasing the lifetime of the light sources from 2,000 to 50,000 hours and eliminating artifacts such as speckle that can corrupt the images captured by the cameras.
The software user interface has also been improved, enabling plant operators to visualize the position and the specific nature of defects on the steel as they occur (Figure 4). It is now also possible to store months of production data in the database, allowing plant managers to review the periodicity of any recurring errors and to schedule regular preventative maintenance operations. The system supports many users, who can access it not only locally but over the Internet as well.
Change in classification
Perhaps the most important recent development to the Surfin' system, however, is the replacement of the older SVM-based classifier by an in-house developed candidate-window detection stage and a Convolutional Neural Network (CNN) for defect classification. A CNN can learn to extract the relevant features that characterize each type of defect from the training images and perform the classification itself, whereas an SVM only maps its input to a high-dimensional space where the differences between the classes of defects can be revealed.
Figure 4: A custom-built software user interface enables plant operators to visualize the position and the specific nature of the defects on the steel in real time.
Figure 5: The most relevant performance metric for 2-class classification (defect versus no-defect) is the AUC, or Area Under the ROC (Receiver Operating Characteristic) curve. The better a model is, the closer its AUC is to 1, so when comparing several models the best one can be selected by choosing the one with the highest AUC. While the AUC of the SVM classifier is 0.88, the AUC of the CNN-Surfin' classifier is 0.997 for the two-class classification case.
Figure 6: The point where the vertical line corresponding to a threshold value cuts both curves yields the False Positive and False Negative Rates. A common choice for the threshold value is that which yields approximately equal rates. For CNN-Surfin', a False Positive Rate of 1.58% and a False Negative Rate of 1.49% were achieved.
By assuming that all objects of interest, such as the defects, share common visual properties that distinguish them from the background, the
1612VSD_26 26 11/30/16 11:48 AM
w w w . v i s i o n - s y s t e m s . c o m V I S I O N S Y S T E M S D E S I G N D e c e m b e r 2 0 1 6 27
Industry Solutions Profile
candidate window detection stage outputs a set of regions that are likely
to contain those defects. A Convolutional Neural Network (CNN) then
extracts the learned features and performs the actual defect classifica-
tion on the image data.
The CNN classifier was validated on a custom image database of defective hot tube images, and the deep learning-based approach significantly outperformed the earlier SVM classifier, decreasing both the number of false positives and the number of false negatives.
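To make the two-stage approach concrete, the following is a minimal Python sketch, not Tecnalia's actual code: a candidate-window stage proposes regions whose local contrast suggests a possible defect, and a trivial stub stands in for the CNN that scores each candidate. The window size, stride, and contrast threshold are illustrative assumptions.

```python
# Hypothetical sketch of a two-stage detector: a candidate-window stage
# proposes regions with enough local contrast, then a classifier (a CNN in
# the real system; a trivial stub here) scores each candidate.

def local_contrast(image, x, y, w, h):
    """Max-minus-min intensity inside the window: a crude 'objectness' cue."""
    vals = [image[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    return max(vals) - min(vals)

def propose_candidates(image, win=2, stride=1, min_contrast=50):
    """Slide a window over the image and keep only high-contrast regions."""
    rows, cols = len(image), len(image[0])
    return [(x, y, win, win)
            for y in range(0, rows - win + 1, stride)
            for x in range(0, cols - win + 1, stride)
            if local_contrast(image, x, y, win, win) >= min_contrast]

def classify_stub(image, region):
    """Stand-in for the CNN: map window contrast to a pseudo-probability."""
    x, y, w, h = region
    return min(1.0, local_contrast(image, x, y, w, h) / 255.0)

# A tiny 4 x 4 grayscale "image": one dark defect pixel on a bright background.
img = [[200, 200, 200, 200],
       [200, 200, 200, 200],
       [200,  20, 200, 200],
       [200, 200, 200, 200]]

cands = propose_candidates(img)
scores = [classify_stub(img, c) for c in cands]
```

Here only the four windows overlapping the dark pixel are proposed, each scoring about 0.71; in the real system both the proposal cue and the classifier are learned rather than hand-coded.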
The most relevant performance metric for 2-class classification (defect versus no-defect) is the AUC, or Area Under the ROC (Receiver Operating Characteristic) curve, which is built by plotting the False Positive Rate on the x-axis and the True Positive Rate on the y-axis and then computing the area under this curve (Figure 5). Ideally, the True Positive Rate is 1.00 for every value on the x-axis, and thus the better a model is, the closer its AUC is to 1. When comparing several models, the best one can therefore be selected by choosing the one with the highest AUC. While the AUC of the SVM classifier is 0.88, the AUC of the CNN-Surfin' classifier is 0.997 for the two-class classification case.
In addition, for a given model, a threshold can be selected to enable the system to decide whether a sample is defective. Since a model's output is normally a probability value between 0 and 1, a sample will be tagged as NOK if the probability is greater than the threshold, and OK otherwise.
By moving the threshold towards 1.0, the number of false positives can be reduced at the cost of increasing the number of false negatives, or vice versa. It is then possible to visually check where the system operates by plotting the threshold on the x-axis and both the Specificity, or True Negative Rate (= 1 – False Positive Rate), and the Sensitivity, or True Positive Rate (= 1 – False Negative Rate), on the y-axis.
The point where the vertical line corresponding to the threshold value cuts both curves yields the False Positive and False Negative Rates. A common choice for the threshold value is that which yields approximately equal False Positive and False Negative Rates. For CNN-Surfin', a False Positive Rate of 1.58% and a False Negative Rate of 1.49% were achieved (Figure 6), compared with a False Positive Rate of 17.98% and a False Negative Rate of 18.00% for the SVM version of Surfin': a 12x decrease in the number of classification mistakes.
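The ROC construction, AUC computation, and equal-error threshold selection described above can be reproduced on toy data. The sketch below uses made-up labels and scores, not the article's dataset:

```python
# Toy reconstruction of the evaluation described above: build the ROC curve,
# compute its AUC by the trapezoidal rule, and pick the threshold where the
# False Positive and False Negative Rates are approximately equal.

def roc_points(labels, scores):
    """Return (FPR, TPR) points for every decision threshold (untied scores)."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    pts = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        pts.append((fp / neg, tp / pos))
    return pts

def auc(points):
    """Trapezoidal area under the ROC curve."""
    return sum((x1 - x0) * (y0 + y1) / 2.0
               for (x0, y0), (x1, y1) in zip(points, points[1:]))

def eer_threshold(labels, scores, steps=1000):
    """Scan thresholds; return the one with the smallest |FPR - FNR| gap."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_gap = 0.0, float("inf")
    for i in range(steps + 1):
        t = i / steps
        fpr = sum(1 for l, s in zip(labels, scores) if not l and s > t) / neg
        fnr = sum(1 for l, s in zip(labels, scores) if l and s <= t) / pos
        if abs(fpr - fnr) < best_gap:
            best_gap, best_t = abs(fpr - fnr), t
    return best_t

labels = [1, 1, 1, 0, 1, 0, 0, 0]            # 1 = defect (NOK), 0 = OK
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.3, 0.2, 0.1]
pts = roc_points(labels, scores)
```

On this toy data `auc(pts)` comes out to 0.9375 and the equal-error threshold lands at 0.55; real data would trace the curves of Figures 5 and 6 the same way.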
The new classifier is now set to be implemented in production envi-
ronments. Even so, engineers at Tecnalia are working to improve the
system with the aim of enabling steel producers to produce steel with
zero defects. The 4-class problem (OK versus 3 types of defects), for
example, has been evaluated for CNN-Surfin’ using a generalization
of the AUC (an averaged extension) and it yielded AUC = 0.9956. More
samples, however, are currently being gathered to make this number
statistically significant.
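The averaged one-vs-rest extension of the AUC used for the 4-class evaluation (OK versus 3 defect types) can be sketched as follows, again on made-up data; the class indices and probability rows are illustrative assumptions, not the article's samples:

```python
# Macro-averaged one-vs-rest AUC for a 4-class problem: compute a binary
# AUC for each class against the rest, then average the four values.

def binary_auc(labels, scores):
    """Rank-based AUC: probability that a positive outranks a negative."""
    pos = [s for l, s in zip(labels, scores) if l]
    neg = [s for l, s in zip(labels, scores) if not l]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auc(true_classes, prob_rows, n_classes):
    """Average the one-vs-rest AUC over all classes (macro average)."""
    total = 0.0
    for c in range(n_classes):
        labels = [int(t == c) for t in true_classes]
        scores = [row[c] for row in prob_rows]
        total += binary_auc(labels, scores)
    return total / n_classes

# Two samples of each class, with probability rows that happen to rank every
# true class highest, so the toy score is a perfect 1.0.
true_classes = [0, 1, 2, 3, 0, 1, 2, 3]
prob_rows = [
    [0.7, 0.1, 0.1, 0.1], [0.1, 0.6, 0.2, 0.1],
    [0.1, 0.2, 0.6, 0.1], [0.1, 0.1, 0.1, 0.7],
    [0.5, 0.3, 0.1, 0.1], [0.2, 0.5, 0.2, 0.1],
    [0.3, 0.1, 0.5, 0.1], [0.1, 0.2, 0.2, 0.5],
]
score = macro_auc(true_classes, prob_rows, 4)
```

On this perfectly separated toy data the macro-averaged AUC is exactly 1.0; the article reports 0.9956 for CNN-Surfin' on its own 4-class evaluation.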
Since its introduction, the Surfin’ system has been delivered to com-
panies such as Tubos Reunidos (Bilbao, Vizcaya, Spain; www.tubos-
reunidos.com) and Aceros Inoxidables Olarra (Loiu, Bizkaia, Spain;
www.olarra.com) where it has enabled production issues to be detect-
ed at early stages in the hot process production. Tecnalia is working
with other steel producers to deploy the system on more complex shaped steel parts, such as beams with U- or H-shaped cross-sections that are used in construction and civil engineering.
Tecnalia has formed a relationship with Sarralle Group (Azpeitia,
Gipuzkoa, Spain, www.sarralle.com) to distribute the Surfin’ system
worldwide.
Companies mentioned:
Aceros Inoxidables Olarra, Loiu, Bizkaia, Spain; www.olarra.com
Edmund Optics, Barrington, NJ, USA; www.edmundoptics.com
Laserglow Technologies, Toronto, ON, Canada; www.laserglow.com
Metaphase Technologies, Bristol, PA, USA; www.metaphase-tech.com
Tubos Reunidos Industrial S.A., Bilbao, Vizcaya, Spain; www.tubosreunidos.com
Teledyne Dalsa, Waterloo, ON, Canada; www.teledynedalsa.com
Tecnalia, Derio, Bizkaia, Spain; www.tecnalia.com
Sarralle, Azpeitia, Gipuzkoa, Spain; www.sarralle.com
Vision + Automation Products
» E-mail your product announcements, with photo if available, to [email protected] | Compiled by James Carroll
Embedded computers
MXC-6400 expandable embedded computers feature
6th generation Intel Core i7/i5/i3 processors and the
QM170 chipset. Targeting intelligent transportation
and industrial automation applications, the embedded
computers feature two DisplayPort and one DVI-I
ports enabling up to 4K UHD resolution, two
software-programmable RS-232/422/485 + two
RS-232 ports, three Intel GbE ports with teaming function, six USB 3.0 ports, and
16CH DI and 16CH DO. Additionally, the computers handle shock up to 50Gs.
ADLINK Technology, Inc., San Jose, CA, USA, www.adlinktech.com
Line scan infrared camera
LineCAM12 features a 1024 x 1 InGaAs line scan
detector that is available with 250 µm tall pixels or
12.5 µm square pixels. The InGaAs array is backside
illuminated and has >75% quantum efficiency from
1.1 to 1.6 µm. The camera also operates in the SWIR
and visible spectrum from 0.4 – 1.7 µm acquiring
14-bit data at 37k lines/s on USB3 or Camera Link. The
camera is compatible with C-, F-, and M42 mount lenses.
Princeton Infrared Technologies Inc.
Monmouth Junction, NJ, USA, www.princetonirtech.com
Backlights target industrial environments
Designed for inspection and measurement of fast moving objects, LTBP strobed
LED backlights feature uniformity down to ±10 %. Available in sizes rang-
ing from 48 x 36 to 288 x 216 mm, the lights come in red, white, green, and
blue models. These lights feature M8/M12 connectors, scratch-resistant protec-
tive covering, and a reduced thickness of 26 mm. Positioned behind the objects
to be inspected, the lights provide a silhouette, edge contrast and high illuminance with exposure times as low
as tens of μs. In addition to strobe mode, the lights work in continuous mode for alignment/setting when used with the
LTDV1CH-17V controller.
Opto Engineering, Mantova, Italy, www.opto-engineering.com
First programmable USB 3.0 hub launched
The USBHub3+ is an 8-port pro-
grammable USB 3.0 hub that is
reportedly the first programma-
ble USB 3.0 hub available. Up to
8 USB 3.0 ports can be controlled
through software to enable or dis-
able individual ports, set current
limits and monitor current and volt-
age on each port. An additional port
is provided to expand to multiple
hubs. Acroname’s BrainStem tech-
nology and API is used to control
the USBHub3+ and a sample open
source GUI is provided, but users can
also use C, C++, or Python to inter-
face with BrainStem APIs. Addition-
ally, the USBHub3+ is designed to
withstand up to +/-30kV ESD strikes.
Acroname
Boulder, CO, USA
www.acroname.com
Frame grabber features on-board 3D profiling
The Radient eV-CXP CoaXPress frame grabber,
the first in the Radient eV-Series line of frame
grabbers featuring on-board 3D profiling,
acquires from multiple independent cameras
at once by way of two (Dual) or four (Quad)
CXP connections, each supporting up to 6.25
Gbps of input bandwidth. The frame grabber
performs laser line extraction needed for 3D
profiling on the card itself, reducing system
demand and freeing resources for inspection
tasks. Capable of extracting 9,000 profiles
per second with no host CPU usage, the PCI
Express frame grabber is suitable for PCB, road
and rail maintenance, food analysis and other
inspection tasks.
Matrox Imaging
Dorval, QC, Canada
www.matrox.com/imaging
Camera features 12 MPixel Pregius sensor
Newly available in the CX series of industrial cameras is the VCXU-123M USB 3.0 camera, which features the monochrome Sony Pregius IMX253 global shutter CMOS image sensor, a 12 MPixel sensor with a 3.45 µm pixel size that can achieve frame rates of 31 fps. Also featuring low dark noise, a 71 dB dynamic range, M3 mount, and a 458 MB internal buffer, the camera measures 29 x 29 x 38 mm and
targets surface inspection, 2D/3D measure-
ment, package inspection and traffic moni-
toring applications.
Baumer
Radeberg, Germany
www.baumer.com
Camera features cloud-based data processing
Hyperspectral imaging camera manufacturer
Cubert and VITO’s Remote Sensing Unit jointly
developed the ButterflEYE-LS camera that fea-
tures a hyperspectral imaging chip from imec
and a 2 MPixel global shutter line scan Si
CMOS detector covering a spectral range of
470 – 900 nm at speeds of up to 30 fps. Addi-
tionally, the camera features a cloud-based
image processing solution developed by VITO
Remote Sensing that is able to generate hyper-
spectral ground maps after completing a flight.
Weighing less than 400g, the compact camera
is suitable for deployment in small UAVs for
precision agriculture, vegetation, and environ-
mental monitoring.
Cubert
Ulm, Germany
www.cubert.org
IR cameras designed for outdoor use
Viento 67-640 infrared cameras, available in
9 Hz and 30 Hz models, are designed to be
deployed in rugged outdoor applications such
as surveillance or robotics. The cameras fea-
ture a 640 x 480 uncooled VOx microbolome-
ter infrared detector with a 17 µm pixel size and
provide standard NTSC/PAL composite analog
video output with simultaneous 8-bit / 14-bit
Camera Link digital output. Equipped with IP67-
rated environmental housing, the cameras fea-
ture a spectral range of 8 – 14 µm, fixed focal
length, fixed mount, automatic image normal-
ization with an integrated mechanical shutter,
and proprietary Image contrast enhancement
that offers heightened contrast and scene detail
in low thermal contrast settings.
Sierra-Olympic Technologies
Hood River, OR, USA
www.sierraolympic.com
USB 3.1 cameras feature Sony CMOS sensors
These USB 3.1 cameras with Type-C connec-
tors are based on Sony CMOS image sensors.
The first models of the cameras—which will
be available in housed or single-board ver-
sions with different lens holders—include the
UI-3860LE camera, which is based on the Sony
IMX290 CMOS sensor. The IMX290 is a rolling
shutter, backside-illuminated 2 MPixel CMOS
sensor from the Sony STARVIS series that can
achieve a frame rate of 120 fps in full HD. Addi-
tionally, the UI-3880LE camera will be based
on the 6 MPixel Sony IMX178 STARVIS sensor,
which is a rolling shutter CMOS sensor capa-
ble of achieving speeds up to 60 fps. Future
models will include sensors from ON Semicon-
ductor, e2v, and more from Sony.
IDS Imaging Development Systems
Obersulm, Germany
www.ids-imaging.com
Cameras feature GenICam compliance
A35 and A65 thermal imaging cameras target
machine vision and automation applications
and provide 14-bit temperature linear output
through GenICam-compliant software. The
A35 camera features a 320 x 256 uncooled
VOX microbolometer with a 25 µm pixel pitch
and 60 fps frame rate, while the A65 camera
features a 640 x 512 uncooled VOX microbo-
lometer with a 17 µm pixel pitch and 30 fps
frame rate. Both cameras feature a spectral
range of 7.5 – 13 µm and are available with 10
field of view options, from 8 to 90°, providing
users the option to pinpoint a single target or
monitor a large area. Additionally, both ther-
mal imaging cameras are GigE Vision compli-
ant and measure only 4.2 x 1.9 x 2 in.
FLIR
Wilsonville, OR, USA
www.flir.com
Microscope cameras feature USB 3.0 interface
The KAPELLA and RIGEL cameras are color
and monochrome versions, respectively, of
the same camera, which features a back-illu-
minated 2.3 MPixel Sony CMOS image sensor
with a 5.86 µm pixel size and can achieve a
frame rate of 60 fps. The PROKYON camera
features an image sensor with the same speci-
fications as the KAPELLA and RIGEL, but is able
to produce images with resolutions from 2.3
to 20.7 MPixels. All three USB 3.0 cameras fea-
ture 12 bit A/D conversion, a dynamic range of
73.3 dB, and are equipped with an Intel Quad
Core processor with 8 GB RAM.
Jenoptik Optical Systems
Jena, Germany
www.jenoptik.com
LIDAR sensor captures hi-res 3D images
Using Time of Flight distance measurement
with calibrated reflectivities, the Puck High-Res
LiDAR sensor targets applications ranging from
autonomous vehicles to surveillance. The Puck High-Res
expands on the VLP-16 Puck, a 16-channel,
real-time 3D LIDAR sensor that weighs 830 g.
Featuring a 360° field of view (FOV) and 100 m
range, the sensor delivers a 20° vertical FOV for
a tighter channel distribution (1.33° between channels instead of 2.00°) to deliver greater detail in the 3D image at longer ranges. Additionally, the sensor features a Class 1 eye-safe laser with a wavelength of 903 nm, a rotation rate of 5 – 20 Hz, and an integrated web server for monitoring and configuration.
Velodyne LiDAR
Morgan Hill, CA, USA
www.velodynelidar.com
Product Showcase (Advertisement)
Multi Angle and Colour Direct Lighting
The FIBS-i75-8 is designed with 8 rows of LEDs in 4 colours (white, green, blue, red) and 4 angles (15°, 35°, 55°, 75°). Each row can be individually controlled to enhance image colour and effect for inspection requirements. It is offered in full and semi-circle versions.
www.falcon-illumination.com / www.falcon-illumination.de
PHORCE® USB 3.0 Fiber Extender
For long-distance machine vision and video applications:
• Extends SuperSpeed USB 3.0 connections up to 300 meters through MM or SM fiber cables
• Plug-and-play: no drivers required
• Supports 5 Gb/s transmission bandwidth
• Provides USB bus power for USB devices
• Available for 1-fiber, 2-fiber, and CWDM connections
www.phrontier-tech.com
Multi-camera solution controls up to four cameras
Enabling the control of up to four cameras at
once is the Norite N-5A100. The system con-
sists of individual N-5A100 cameras, which fea-
ture the 5 MPixel PYTHON5000 CMOS image
sensor in a 29 x 29 x 43 mm format. Multiple
Norite N-5A100 cameras can be controlled from
one user interface, and up to four cameras can
be connected to one frame grabber. For devel-
opers of vision systems with multiple cameras,
including those needed for 3D or area of inter-
est applications, the N-5A100 cameras offer
reduced system complexity at 105 fps through-
put per camera via CoaXPress interface.
Adimec Advanced Imaging Systems
Eindhoven, The Netherlands
www.adimec.com
CoaXPress camera features 50 MPixel sensor
Featuring a 35mm full frame 50 MPixel elec-
tronic global shutter CMOS image sensor that
can produce 7920 x 6004 images at more than
30 fps is the Flare 50MP camera. The camera—
which is available in monochrome and color—
provides four CoaXPress digital video outputs
that enable high-speed data transmission at up
to 25 Gbps. Additionally, the camera features
a dynamic range of 60 dB, 8/10/12-bit pixel bit
depth, and GenICam or Flare Control software
for camera control.
IO Industries
London, ON, Canada
www.ioindustries.com
USB 3.0 camera features 29 MPixel CCD sensor
Featuring the 29 MPixel KAI-29050 CCD image
sensor, which is a 35 mm sensor with a fully
global electronic shutter, is the new Lt29059
USB 3.0 camera. The camera also features a
Canon EF lens mount with fully-integrated con-
troller for auto focus/iris supported by Lumen-
era’s API, a zero-loss 256 MB RAM frame
buffer, binning and region of interest modes,
as well as a locking USB 3.0 connector.
Lumenera
Ottawa, ON, Canada
www.lumenera.com
Sales Offices
Advertisers Index
Advertiser / Page no.
This ad index is published as a service. The publisher does not assume any liability for errors or omissions.
Board-level or industrial USB 3.0 cameras
The 27 series of USB 3.0 cameras features a number of new color
and monochrome, industrial and board-level models. The 5 and 10
MPixel cameras, for example, fea-
ture the Aptina (ON Semiconduc-
tor) MT9P006 and MT9J003 CMOS
image sensors, respectively, and
feature a compact design starting
at 30 x 30 x 10 mm. Additionally,
these cameras feature a free 1D and
2D barcode software development kit as well as software for on-screen
measurement and image acquisition. Drivers for LabView, HALCON,
MERLIC, VisionPro, DirectX, Twain, and NeuroCheck are also included
with the cameras.
The Imaging Source
Bremen, Germany
www.theimagingsource.com
Line scan cameras achieve high speeds
The SW-4000M-PMCL and SW-8000M-PMCL Sweep monochrome line scan cameras feature 4K (4096-pixel) and 8K (8192-pixel) CMOS line scan sensors, respectively, and can reach scan rates of 200 kHz and 100 kHz. The SW-4000M-PMCL camera features a 7.5 µm pixel size, 30.72 mm sensor scanning width, and F-Mount or M42x1 mount, while the SW-8000M-PMCL features a 3.75 x 5.78 µm pixel size, 30.72 mm sensor scanning width, and F-Mount or M42x1 mount. In both cameras, 8-
and 10-bit data output is handled via a Camera Link Deca interface.
JAI
San Jose, CA, USA
www.jai.com
3D sensors designed for small part inspection
Gocator 2400 sensors, which are currently available in the Gocator 2410
and 2420 models, feature a 2 MPixel camera with up to 1940 points/pro-
file resolution. The blue-laser profiling sensors—which are designed for
electronics and small parts inspection—feature an X resolution of 6 µm
and repeatability down to 0.2 µm, as well as a field of view of up to 32 mm
and a measurement range of up to 25 mm. Additionally, the
3D smart sensors feature
a GigE interface, a speed
of 400 – 5000 Hz with
windowing, an embed-
ded processor, and IP67
industrial housing.
LMI Technologies
Delta, BC, Canada
www.lmi3d.com
Main Office 61 Spit Brook Road, Suite 401 Nashua, NH 03060 (603) 891-0123 FAX: (603) 891-9328
Publisher Alan Bergstein (603) 891-9447 FAX: (603) 891-9328 E-mail: [email protected]
Executive Assistant Julia Campbell (603) 891-9174 FAX: (603) 891-9328 E-mail: [email protected]
Digital Media Sales Operations Manager Tom Markley (603) 891-9307 FAX: (603) 891-9328 E-mail: [email protected]
Ad Services Manager Marcella Hanson (918) 832-9352 FAX: (918) 831-9415 E-mail: [email protected]
List Rental Kelli Berry (918) 831-9782 FAX: (918) 831-9758 E-mail: [email protected]
North American Advertising & Sponsorship Sales Judy Leger (603) 891-9113 FAX: (603) 891-9328 E-mail: [email protected]
Product Showcase Advertising & Reprint Sales Judy Leger (603) 891-9113 FAX: (603) 891-9328 E-mail: [email protected]
International Sales Contacts
Germany, Austria, Northern Switzerland, Eastern Europe Holger Gerisch +49 (0) 8801-9153791 FAX: +49 (0) 8801-9153792 E-mail: [email protected]
Hong Kong, China Adonis Mak 852-2-838-6298 FAX: 852-2-838-2766 E-mail: [email protected]
Japan Masaki Mori 81-3-3219-3561 FAX: 81-3-5645-1272 E-mail: [email protected]
Israel Dan Aronovic (Tel Aviv) 972-9-899-5813 E-mail: [email protected]
Should you need assistance with creating your ad please contact:
Marketing Solutions Vice President Paul Andrews (240) 595-2352 Email: [email protected]
Allied Vision ..............................................................................CV4
Alysium-Tech GmbH .....................................................................27
Edmund Optics ............................................................................ 4
Falcon Illumination ......................................................................31
Imperx ....................................................................................... 16
Jargy Co. Ltd. ............................................................................. 29
Matrox Imaging ...................................................................... CV3
Photron ...................................................................................... 23
Phrontier Technologies ................................................................31
Point Grey ................................................................................CV2
Stemmer Imaging ......................................................................... 5
Vieworks ...................................................................................... 2
The Wait Is Over. New! Matrox Iris GTR
The Next-Generation Smart Camera from Matrox
The latest Matrox smart camera is smaller to fit in tighter spaces, faster to inspect more and handle higher production rates, and easier on your project budget. Plus it can either run your own vision applications programmed using the field-proven Matrox Imaging Library (MIL) or be set up using the Matrox Design Assistant flowchart-based vision software.
Watch the Iris GTR video: www.matrox.com/irisgtr/vsd
Your image is everything
Choose the right CMOS camera. With the best CMOS advice.
The Mako offers a wide variety of models with next-generation CMOS sensors. But which one is right for you? Trust our imaging experts to help you compare the differences and select the perfect sensor for your application. Right camera. Right price.
Learn more about choosing the right CMOS sensor at AlliedVision.com/MakoCMOS