Vector 3 Whitepaper
THE COMPUTER REVOLUTION AND THE MASTER CONTROL ROOM
ABSTRACT
It has taken decades for computers to be accepted by broadcast engineers and master control room staff. In this paper we will take a look back and see how this acceptance came about and why it took so long. This document is in two parts. In the first, we look at operations in master control rooms in the 80s, at the advent of automation systems and the problems associated with them, and at the two distinct approaches to developing video servers taken by different manufacturers. In the second, we describe what we believe will be the standard configuration in master control rooms over the coming decades, with the IT revolution now complete and computers fully accepted as reliable video sources.
Conclusion of a 15-year debate
by Román Ceano
The author of this paper has first-hand experience as co-founder of Vector 3, one of the most successful automation companies in the industry. He, his partner and their colleagues gained their expertise by providing computer-based solutions for the industry throughout the entire historical period covered by this document. As a result, his company’s broadcast solutions have been thoroughly tested, tweaked and optimised for smooth and error-free operation, and provide a complete and robust automation system.
The advent of tape was a great technological advance: prior to its invention, master control rooms could only operate live. However, the wide plastic open reels of the 50s were unwieldy, and editing was painstaking, manual work.

Tapes evolved through the decades into extremely sophisticated cassettes, so the old manual cutting and editing equipment used for open-reel tapes could happily be relegated to the museum. To cut a long story short, by the early 1980s the video tape had been king of television facilities for 30 years.
The operations and procedures in master control rooms in those days looked something like submarine warfare. Several operators sat at a long table facing a stack of monitors, their fingers on panels covered with buttons of all shapes, sizes and colours, performing their routine with brisk and concise verbal exchanges.

In the gallery, as it was called by its inhabitants, the day’s work was dictated by the flow of tapes. In the early morning, a junior with a cart would arrive from the archive department. The morning shift supervisor went through the procedure with him. The junior carried two lists: tapes arriving for the MCR (those in the cart) and tapes to be returned to the archive. The cart was emptied, ticking off tapes on the first list. The second part of the procedure
The advent of the tape was a great technological advance. (Image: www.meldrum.co.uk/mhp/knackers/behind.html)
PART 1 The Hand-On-Tape Era
HOW THINGS WERE IN THE GOOD OLD DAYS…
A junior goes into the rack room and loads some tapes into a number of VTRs following handwritten instructions from his shift supervisor. Once the cassettes are loaded and the VTRs are in operational mode, the VTR operator reviews their output in the monitor stack. Each monitor displays a time clock superimposed on the video — a relatively new advance at the time. Once he is satisfied, the operator disengages the heads so the tape won’t get damaged while in standby, and gives the all-clear to the supervisor.

All the staff wait and watch the red LEDs of the station’s clock, which is linked by RF to the atomic clock in Frankfurt, where a handful of atoms mark time for the whole European continent.

Someone calls out the countdown, perhaps the mixer operator, with his hand waiting on the lever, or maybe the supervisor himself, if the situation is critical enough: “Ten minutes, five minutes, two minutes, one minute, 45 seconds, 30 seconds, 15 seconds, 10, 5, 4 …”. The VTR operator engages the heads and pushes play. The image blinks in the second most central monitor in the stack (PVW) and the mixer operator gets ready to abort in case the tape jams. But the image stabilizes and he shouts “top!” or “in!” or “air!” or whatever the standard expression is in that master control room. The audio man in his booth checks and carefully adjusts some mysterious-looking knobs.

One more successful transition. One more on-air event, marked off on the playlist sheet.
consisted of filling the cart with the tapes that were not needed in the MCR and ticking off the second list. When the procedure was complete, everybody signed and they went their separate ways.
The supervisor then checked the playlist printout and made sure that all the tapes required for that day were either in his own mini archive or had just been delivered via the cart procedure. With millions of people watching the very thing his finger was triggering, the supervisor could receive calls from his superiors if things went wrong, or, if they went really wrong, from a very senior one.
The supervisor would check if it was possible to compile the content of multiple tapes onto a single tape (these consecutive events could then be triggered as a single block). Supervisors were always asking for more VTRs, since the more they had, the more comfortably and safely their teams could work. If more VTRs were not provided, supervisors would create compiled commercial blocks. These resulted in a certain loss of quality, since back then copies were not error-free, as they are in the digital age.
The moments before a long non-compiled commercial block in prime time were exhilarating. The supervisor, looking like Captain James T. Kirk, stood behind the operators as they gazed intently at the stack of monitors, while a tense junior waited in the rack room with his hand on a pile of tapes, anxiously reciting Alan Shepard’s prayer. As the millions in the audience listened to the anchor’s cheery “back in a minute”, the gallery supervisor barked battle commands, secretly cursing slower editors who seemed oblivious to the urgency of the control room.
Hours later, in an intimate huddle prior to the graveyard shift, while the audience happily slept after a pleasant evening in front of the box, incoming and outgoing supervisors would discuss the plan for the small hours, when the broadcast would be peaceful enough to free up some VTRs to check tapes for the archives staff. One more day had passed, and another playlist sheet went to the bin with every line crossed off.
ROBOTS AND PROMISES
The 24/7 on-air operation of closed tape playout in the master control room required so many highly skilled people that, once closed cassette technology became stable, it seemed clear that some kind of industrial automation was required. The computer industry made some attempts at answering this need, but their solutions were still reliant on enormous open tape systems for long-term archiving. In the computer industry, closed cassettes were used for very small-scale operations, so there were no closed reel cart machines available that could be adapted for broadcast.
In the 1980s the Sony Corporation launched one of the most inspired devices in the history of TV technology. It was called the Library Management System (LMS), and it became a symbol of status amongst stations. Those who owned an LMS belonged to the elite group of world-class broadcasters.
An LMS was basically a very large cabinet with an internal corridor and shelves for tapes on either side. Instead of human juniors loading tapes, two big robotic arms performed the task. On one side, an enclosure the size of a telephone box housed a stack of VTRs. LMSs came in different sizes, but five VTRs for 1000 tapes was average.
It was amazing to see the arms at work, but even more so when you understood what they were doing. Just like the human shift supervisors, the LMS operating system was working out ways to compile events on tape in advance. Sony’s Digital Betacam VTR could create copies whose quality was, for the first time, identical to the original, enabling blocks to be compiled with no loss of quality. The LMS software performed incredible calculations in advance, like a chess player.

The LMS was the first device in which the tape was no longer the main element. Identical sequences, termed “media” and identifiable with the same “media ID”, were recorded on different tapes.
Tapes evolved through the decades into an extremely sophisticated cassette. (Reprinted from Computer Desktop Encyclopedia. Copyright 2002 The Computer Language Company Inc.)
Multiple copies of each media file existed, and the LMS software vastly increased playout efficiency by copying, compiling and playing out optimised blocks.
But the all-in-one approach was not without issues. A gallery boasting an LMS tended to be a slave to it, because the playlist fed into the machine became the de facto “master playlist.” Live events were considered to be small islands, and it was unclear if staff were controlling the LMS during commercial blocks or the other way round. Operators were equipped with a GPI button to tell the LMS when to resume automated playout once the humans had finished their brief episode of live programming.
The question of who was in charge produced many operational conundrums, since the flexibility to change events at the last minute disappeared, and the need for systematic workflows was imposed on people whose creativity had come at the price of legendary delays in delivering the goods. Putting new tapes inside an LMS was not easy, as it was occupied with its own logical processes, and persuading its software to operate at the limit of its capacity was a challenge in itself.
At the same time that playout was becoming increasingly automated through the use of cart machines, the scheduling process was also computerized. This resulted in many problems initially, as early programmers did not understand in detail the workings of a master control room. To start with, many systems did not calculate in frames but in seconds. This probably did not seem to be a problem for computer geeks, but for gallery supervisors it was a nightmare. Programs were edited by rounding times to a half second, the cumulative result of which was that time schedules could be out by 10 or more seconds after several hours of playout. Many stations had a process called “traffic” which, among other things, adjusted a playlist given in round seconds to frames. Computer-generated playlists represented master control room staff’s first contact with computers, and for gallery supervisors in particular it was not a pleasant experience.
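The arithmetic behind that nightmare is easy to reproduce. The sketch below is a toy model, not a description of any real traffic system: the clip lengths and the truncate-to-a-half-second behaviour are assumptions chosen purely to illustrate how a seconds-only playlist drifts away from what actually goes to air.

```python
FPS = 25  # PAL: the true unit of broadcast time is the frame

def listed_duration(frames: int) -> float:
    """Duration as a seconds-only system might store it: truncated to the
    half second below (an assumed behaviour, for illustration only)."""
    return int((frames / FPS) * 2) / 2

# A block of 12 spots, each really 20 seconds and 11 frames long (20.44 s).
spots = [20 * FPS + 11] * 12
true_total = sum(spots) / FPS                          # what goes to air
listed_total = sum(listed_duration(f) for f in spots)  # what the playlist says

print(f"{true_total - listed_total:.2f}")  # prints 5.28
```

One commercial block and the schedule is already more than five seconds out; repeated over several hours of playout, the 10-second drifts the supervisors complained about follow directly.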
With all its marvellous technology and its super-high-quality performance, the LMS was extremely expensive. The complexity of its set-up and its arcane configuration settings meant the price of ownership was too high even for those TV stations used to paying three shifts of eight well-paid staff.
Odetics launched a much simpler device. Sony answered with the Flexicart, Panasonic with the Smartcart and Thomson with the Procart. These new-generation cart machines were much smaller, and the idea was to feed them daily with the tapes required. Two approaches were proposed to handle on-air management. The traditional model called for the cart machines to operate with their own router controlled remotely by their own software, as the LMS used to do. They would deliver a signal which the supervisor had no control over (just like when the MCR passed the signal to a live studio). With the second approach, the robots were used as telemanipulators. The VTR operator could control each VTR and the robot arm remotely from his table, enabling him to select one and deliver a signal to the master control room mixer. If commercial blocks needed to be compiled, this would be managed manually by control room staff.
Shift supervisors strongly favoured the second option, which gave them back control. But a third, even more alarming configuration was eventually to
Sony’s Flexicart: one of a new generation of cart machines from the mid 90s. (Image: BCS Broadcast Store)
succeed. Engineering departments proposed that computers control not only the VTR router but also the MCR’s main mixer. They argued that a computer could position the VTRs, do the pre-roll, control the mixer and trigger events with greater precision than the human hand.
In practice, however, this was more complex than the visionaries had foreseen. A whole generation of operators saw how clumsy computers created havoc. Certain cart machines became famous for their stupidity, and a “polar bear in the zoo” type syndrome was common, in which they performed the same action repeatedly, such as putting the tape in the VTR and taking it out again, over and over, until the supervisor finally unplugged and rebooted the machine. Viewers at home would see sudden rewinds, not realizing that this was an unfortunate side effect of early attempts at automation.
But as algorithms and software quality rapidly improved, time accuracy went back to where it had been for decades: somewhere between zero and one frame, and automated master control rooms became the norm. A new generation of software programmers familiar with gallery operations did most of the work, and broadcast engineers superseded computer engineers in the planning of master control room automation.
Distrustful gallery teams would generally make and keep a record of the true “SOM” (start of message: the point on a tape where a programme started), which they typically did not share with the rest of the facility. Computing islands were created to separate the MCR from the chaos of the rest of the TV station.
Life could have continued in this way, but the computer industry had made a new promise to station owners: video could be converted into a computer file and played out as easily as a printer prints a playlist. Station managers were eager to try it and excited by the possibility of no longer needing the expensive VTR head cleaning procedures required for video cassettes.
So by the mid 90s the era of the tape seemed to be over. Shift supervisors, who had first been annoyed by the lack of frames in the computer-generated playlists of the 70s, then shocked by the inflexibility of the first generation of automation software in the 80s, were to face new and greater horrors with the next step in the revolution. It would take another 10 years to complete and, perversely, the richest stations would be the ones to suffer most.
STANDARD BEHEMOTHS VS. DEDICATED DWARFS
Now, with the revolution complete, it is difficult to remember the excitement of the broadcast market in the mid 90s. The coalescence of computers and video exceeded all expectations once the original technology (of selecting the colour of each point on the monitor by changing values in an associated memory buffer) was backed up by low-level software which useful applications could be built upon.
Each IBC presented the visitor with more than just a selection of new products; it now offered entirely new markets to broadcasters. As usually happens during periods of rapid technological development, anything seemed possible, and companies had difficulties deciding which of their teams’ inventions to push as their flagship product.
Within a short time the big names in the computer industry, who were already providing services to the big names in the broadcast industry, saw a new niche to be exploited: the master control room. Broadcast corporations’ systems departments were at this time busy computerizing the archiving of tapes and the workflow described in part above, including the creation of playlists, automated commercial blocks and rights management.
The idea was that once you had sampled the video frame by frame, it became “a set of 1s and 0s” that you could store and manage as a computer file. The maths was explained on paper napkins to many new to the technology: 576 lines × 720 columns made a total of nearly half a million points, or “pixels”.
Each “pixel” must contain sufficient information to specify a colour and, as computer techies explained to each other over and over again, the human eye has a very fine perception of colour: we can distinguish several million different colours. The first power of two (1s and 0s) that allows sufficient combinations to do this is 8 (2⁸ = 256). 256 to the power of 3 (the 3 colours emitted by the tubes) gives more than 16 million colours, going beyond the human capacity to differentiate.
The final maths involves multiplying the half million points by this information about the 3 colours to find out how big a single frame is, and then multiplying the result by 25 frames per second to find the size on disk of one second of broadcast-quality video.
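The napkin maths works out as follows, a straightforward restatement of the figures above for uncompressed PAL SD:

```python
LINES, COLS = 576, 720   # PAL SD raster
BYTES_PER_PIXEL = 3      # 8 bits for each of the 3 tube colours
FPS = 25                 # frames per second

pixels = LINES * COLS                    # points per frame
frame_bytes = pixels * BYTES_PER_PIXEL   # size of one frame on disk
second_bytes = frame_bytes * FPS         # one second of video on disk

print(pixels)        # 414720  (nearly half a million)
print(frame_bytes)   # 1244160 (about 1.2 MB per frame)
print(second_bytes)  # 31104000 (about 31 MB, roughly 249 Mbit, per second)
```

Around 31 MB for every second of playout is exactly the kind of figure that made only the largest bit-crunchers of the day look viable.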
This meant huge files, and it ushered in the era of the IT behemoths. Microcomputing had been eroding the market for minicomputers, and the broadcast sector looked very promising — a sector in which only really large-scale bit-crunchers could compete. Replacing an LMS with a large computer was profitable and did not seem to be too difficult.
Over the years, the main computer manufacturers offered bigger and bigger monsters with immense storage capacities and bandwidths. The idea was that all video would flow as files inside the station. Because transfer rates were so slow, the obvious solution was to create a central storage repository where all the video files could be accessed from workstations for editing or playout.
From today’s perspective, these mammoth systems resemble those oddly-shaped planes seen crashing in documentaries about the beginnings of human flight. But at the time they seemed like real, viable solutions. Some of them were truly sophisticated and, once the main problems were understood (that computer bit rates are not constant, but rather rise and fall, while video requires a constant sustained bit rate), the products started to evolve.
This evolution did not last long. Once the magnitude of the redesign work required to guarantee a “sustained bit rate” was understood, development work stopped, and most of the products of this kind were discontinued. The broadcast industry’s technical requirements demanded a research investment that the potential sales could not justify. If standard IT technologies available at the time did not work (and they certainly did not), the market was not profitable. So, one by one, the computer manufacturers pulled out.
While computer manufacturers, working closely with the systems departments of broadcast corporations, were coming to realize how complex this market was, traditional video manufacturers were following another path. They knew from the beginning that sustained bit rates were needed and that disk controllers were the key. They successfully developed disk controllers and managed to create systems capable of recording and playing, with small buffers to achieve frame-accurate output.
Being traditional video companies, they did not have the resources to develop a system capable of playing more than two or three channels or with a large amount of storage. In addition, because the technology was developed for this sole purpose, it was incompatible with any other computer technology. These products, and particularly the most successful of them, the Tektronix Profile, were reliable and were aimed at a more humble target than the products of their computer manufacturer counterparts. Instead of computerizing the whole workflow of the station, they only aimed to replace VTRs, and only those used specifically for playout.
The commercial teams of the big blue-chip computer companies had had their fingers burnt by a number of painful, expensive fiascos, and as a result the “dwarfs” (which became known as “video servers”) came to dominate the market and take over the rack rooms.
The initial key to the success of these systems lay in the simplicity of their design, but in the long run this spelt their demise. As non-linear or computer editing became more popular, it was clear that returning to “base band” before loading into the video server was not a good idea. So over the years, video server manufacturers tried to connect them to NLEs and other video servers in order to create larger systems. However, this was not simple, and the combination of cart machine and video server in cache mode remained the most popular amongst broadcast engineers.
One of the technologies that contributed to the success of these proprietary dwarfs was video compression, which reduced the storage and bandwidth deemed necessary in the early days of video computing to a manageable fraction. But the downside was that compression brought back the problem of copy degradation that digital video had removed from tape technology. Copying video from one disk to another by decompressing to SDI and then recompressing resulted in a reduction in quality known as concatenation. In an IT environment, files could be copied directly, with no recompression and no loss of quality.
Market demand for compatibility began to increase. There were two main obstacles to the advance of standards that would have allowed the interchange of files without concatenation. Firstly, there were the inherent technical barriers: non-standard hardware video servers used specially developed operating systems and invented-on-the-fly controllers, so building components such as SMB-compatible drivers from scratch was not simple. Most of the manufacturers resorted to FTP emulators that made access to files possible but slow and rudimentary.
The second obstacle was commercial. Industry committees held ongoing meetings that went nowhere because the big names did not really want
The legendary Tektronix Profile PDR-100
standards. Each dreamed of a broadcast market entirely dominated by their own brand. Years of discussion led only to the concept of the “wrapper”, and after close to another decade a semi-common wrapper was agreed upon: MXF. Broadcast engineers did not push strongly for integration, since they were familiar with concatenation, which was similar to the copy degradation of the analogue era, and by that time they had developed a certain phobia of multi-brand integration based on pseudo-standards that nobody could enforce.
Nowadays just a few of these systems still exist, and the pressure for them to become computers has become irresistible. Exchanging files with third parties has become as important as playing or recording them. The revolution is over, and an IT-based master control room with broadcast quality and broadcast reliability is not only possible but has become the norm. And just as the jet planes of today are very similar to those of the 60s, the current look of IT-based multichannel master control rooms will probably survive as long as the broadcast industry itself.
PART 2 The Master Control Room Today

Here is a short overview of how things work today.

THE MASTER CONTROL ROOM BASED ON IT HARDWARE
1 COMPONENTS
1.1 Video Server
The video server should be able to play files compressed with any compression scheme and wrapped in any wrapper (at least MXF, AVI and MOV). It must boast full connectivity over Ethernet or FC, with support for the SMB protocol. To avoid technical issues it is advisable to use standard SAS or SATA controllers. As a rule of thumb, the more disks the better, as this means that not only capacity but also bandwidth is increased. All RAID options must be available, and it should be possible to change these options in the future simply by reformatting.

Last but not least, the video server must be able to perform any task that the signal may need before being played out, including branding, transitions and effects, so that routers are the only video equipment required in the master control room. A video server that only plays video is no longer acceptable.
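The “more disks the better” rule of thumb can be made concrete with a back-of-envelope check. Every figure below is an illustrative assumption, not a measurement: per-spindle throughput and per-channel bitrate vary widely in practice.

```python
import math

def disks_needed(channels: int, channel_mbps: float,
                 disk_mbps: float, headroom: float = 2.0) -> int:
    """Spindles required so that aggregate disk bandwidth covers every
    playout channel, with headroom left for ingest and file transfers
    (illustrative sizing sketch, not a vendor formula)."""
    return math.ceil(channels * channel_mbps * headroom / disk_mbps)

# Assumed figures: 8 channels at ~31 MB/s uncompressed SD, 80 MB/s per disk.
print(disks_needed(8, 31.0, 80.0))  # prints 7
```

Adding spindles raises the aggregate figure linearly, which is why capacity and bandwidth grow together.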
1.2 Automation
The MCR automation software needs to have a customizable user interface where metadata associated with the clips can be displayed. Ideally, thumbnails of clips are provided and, even better, low-resolution proxy clips are available, which can be used for trimming and cut-editing (e.g. for cleaning live ingests). In general the software must have the level of user-friendly GUI that every office worker is accustomed to. Security is no excuse for an unfriendly UI, as usability is a key factor in security.
The automation system must provide very good sub-event management and be able to respond to the challenge of branding the channel in all its different areas, with all options and features available. It must be able to trigger events at a fixed time, either inside or outside a program, or in an independent playlist. In this way the ideas of marketing and publicity departments do not degenerate into overly complex workflows, but rather become smooth, hands-free procedures.
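At its core, a fixed-time trigger of this kind reduces to a time-ordered event queue. The sketch below is a minimal illustration, not any real automation API; the class and the event labels are invented for the example.

```python
import heapq

class EventScheduler:
    """Minimal fixed-time event queue: events fire when their time arrives."""

    def __init__(self):
        self._queue = []  # min-heap of (fire_time_s, label)

    def schedule(self, fire_time_s: float, label: str) -> None:
        heapq.heappush(self._queue, (fire_time_s, label))

    def due(self, now_s: float) -> list:
        """Pop and return every event whose fire time has been reached."""
        fired = []
        while self._queue and self._queue[0][0] <= now_s:
            fired.append(heapq.heappop(self._queue)[1])
        return fired

sched = EventScheduler()
sched.schedule(10.0, "logo on")       # event inside the programme
sched.schedule(20.0, "logo off")
sched.schedule(15.0, "lower third")   # independent branding event
print(sched.due(now_s=16.0))  # prints ['logo on', 'lower third']
```

In a real system the playout engine would poll `due()` against the station clock every frame; here the point is only that fixed-time events inside a program, outside it, or in an independent playlist all reduce to the same queue.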
Large amounts of new data are required for branding, so the automation system needs to be able to receive information in real time from many different data sources. SMB support is the best way to connect with the rest of the facility. As it is part of a networked environment and needs to be able to interact with other devices, the automation system must be able to handle the most common playlist formats and provide as-run logs for the station’s MAM, traffic and advertising systems.
The playlist management system must allow for easy changes, especially if the station is airing live programmes whose duration is not known in advance. The playlist should give the operator a full picture, providing no more and no less information than the supervisor considers necessary.
Regarding the remote control of devices, ideally the automation system should support all the equipment that can be found in a master control room and be capable of controlling timings accurately. The device server should run on a separate machine, enabling the sharing of equipment controlled remotely via RS-232 or RS-422.
Finally, it is a standard request that there must be a backup for all automation processes and that this backup must be shared, limiting costly 1+1 mirroring to the most critical channels and times. In this regard a distributed approach is desirable, and gives the added advantage of easy connectivity via the internet for control and monitoring from remote locations.
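The shared-backup idea can be sketched in a few lines. The channel names and the failover logic below are illustrative assumptions, not a real product’s behaviour: critical channels get a dedicated 1+1 mirror, while everything else shares one standby that loads the failed channel’s playlist on demand.

```python
mains = {"A": "playlist-A", "B": "playlist-B", "C": "playlist-C"}
mirrors = {"A": "mirror-A"}   # only channel A justifies costly 1+1 mirroring
standby = {"loaded": None}    # one standby machine shared by channels B and C

def failover(channel: str) -> str:
    """Return the machine that takes over when a main channel fails."""
    if channel in mirrors:
        return mirrors[channel]          # mirror is already running in step
    standby["loaded"] = mains[channel]   # shared standby loads the playlist
    return "shared-standby"

print(failover("A"))      # prints mirror-A
print(failover("C"))      # prints shared-standby
print(standby["loaded"])  # prints playlist-C
```

One standby thus backs up many channels, which is exactly the saving over mirroring everything.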
1.3 On-line Storage
It is clear that, apart from the disks inside the servers, there is a need for a central storage facility, sometimes known as a “repository”. In this storage facility, all the material necessary for the determined timeframe must be available. Servers need to be capable of playing out files from this repository in emergencies (for example, if a new blank server is added and needs to start playing immediately).

The technical requirements of the central storage repository will depend on the number of channels. In the case of multichannel facilities, only half a dozen manufacturers are able to provide suitable products.
Normally a NAS is enough, but in some cases a SAN may be required. A thorough analysis is advisable before purchasing a SAN, since not only is it more expensive but, more importantly, installation and maintenance are significantly more difficult. It must also be said that for some very small applications a simple computer with some hard disks installed can do the job, but for broadcast applications a true NAS is strongly recommended.

Example of a large-scale implementation.
1.4 Near On-line Storage (Cart Machine)
Tapes have not disappeared: Betacam has simply been replaced by LTO. The efficient surface-to-volume ratio of tapes and their relatively low price mean that they will continue to be used for some time to come.

Workflows that include HD would probably require a cart machine for near on-line storage, and in general for any facility it is very reassuring to have such large long-term storage facilities complemented by off-line storage.
2 WORKFLOW
2.1 Ingest
A master control room must be able to accept video material in any format and store the associated metadata in an easy and reliable way. For traditional videotape ingests, it must have VTRs controlled remotely but with manual supervision if required. For file-based ingest, the MCR must be able to connect to remote FTP servers as well as corporate LANs. Tools for moving files all around the MCR must be provided to enable automation of most parts of the workflow. Depending on priority, material will be sent either to the cart machine or to the on-line storage facility.
Having the playlist in advance is one of the most popular methods of managing the ingest. Nevertheless, there will preferably be enough time between the arrival of material and playout to allow an independent ingest workflow. At the other end of the scale we find the on-the-fly crash ingest, for which mark-in and mark-out tools must be provided, and for which editing while recording and playing is a key feature.
2.2 Moving Material
The material must be moved towards the playout video servers at a predetermined pace based on the probability of it going on air. Comparing the material on disk with the playlist for a specific timeframe is again the ideal solution, though not the only one.

Once material is in the on-line storage facility, it can be played out from there, but if there is no need to do so, it is advisable to copy it to the video servers’ local disks (cache). This cache can be created from the playlist that is on air, and it provides advance warning of potential problems.
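Creating the cache from the on-air playlist amounts to a simple comparison. The entry fields, the media ids and the one-hour horizon below are illustrative assumptions, not any particular product’s data model.

```python
def cache_plan(playlist, cached, horizon_s, now_s=0.0):
    """Media ids to copy to the server's local disks next: everything that
    is scheduled within the horizon and not yet cached, earliest air time
    first. Anything this returns is also an early warning that material is
    still missing from the cache."""
    upcoming = [e for e in playlist
                if now_s <= e["air_s"] <= now_s + horizon_s]
    upcoming.sort(key=lambda e: e["air_s"])
    return [e["id"] for e in upcoming if e["id"] not in cached]

playlist = [
    {"id": "news-open", "air_s": 30.0},
    {"id": "promo-017", "air_s": 120.0},
    {"id": "film-0042", "air_s": 300.0},
    {"id": "late-show", "air_s": 9000.0},  # beyond the caching horizon
]
print(cache_plan(playlist, cached={"promo-017"}, horizon_s=3600.0))
# prints ['news-open', 'film-0042']
```

Run against the live playlist every few seconds, a check like this is what turns the cache into an advance-warning mechanism rather than a mere copy.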
2.3 Branding
Most branding graphics are based either on metadata associated with the clip or on metadata arriving from external sources. The different kinds of graphics used for branding will need different treatment, but it is crucial that an operator warning be triggered if metadata fails to arrive. This calls for an integrated system that manages everything: automation, playout and branding.
2.4 Redundancy
It is in redundancy that the IT-based master control room really excels. The flexibility of the LAN, the modularity of the components and the solution’s distributed architecture enable any piece of equipment to provide a backup for another of its kind. Because there are only two main types of equipment (video server and automation system), the situation is ideal for redundancy.

There are many types of redundancy possible, ranging in complexity from simple, double and shared to restart and shared-plus-restart.
Example of shared redundancy: AUTOVIA Main A and Main B, with a shared Backup A-B, connected through a LAN switch and a video router to the PGM outputs.
2.5 Integration, Configuration and Maintenance
An IT-based master control room can be safer than the safest facility from the 90s. However, great care must be taken in its deployment. The PC configuration must be built according to strict specifications that have been tested over many months, and the specific PCs involved in a particular installation must be tested thoroughly. This will resolve the two most typical PC problems: incompatibility issues hidden inside the machine and a lack of component testing by manufacturers.
Staff involved in design and implementation need to be fully trained in all key technologies: LAN, disks and the particular set of applications that are required. While professionals in these disciplines are easy to find, there are many people who purport to be experts but are not. Using expert and experienced staff will prevent the third common PC problem: poor and unprofessional maintenance.
Once all these requirements have been fulfilled, we have a system that can be easily maintained, with components which can be bought in standard stores at competitive prices, and which does not lock the customer into any single supplier.
CONCLUSION
The debate between the IT-based master control room and proprietary systems is over. The winner has been clear for many years, but for nearly a decade the conclusion was masked by computer manufacturers’ lack of in-depth understanding of the broadcast environment and the ability of some traditional video manufacturers to develop specialized proprietary systems relatively cheaply. With the computing power of today’s standard PCs, there is no longer any question: proprietary video servers will join the ranks of so many other devices, serving only to bring back fond memories of the technologies of our youth.