
Exam 101: Introduction I
Computers, Software and Operating Systems

Contents
» What Is A Computer, Anyway?
» Components Of A Computer
» Software
» The Most Important Operating Systems
» Windows And OS X
» Linux
» More Differences And Similarities


What Is A Computer, Anyway?

Early computers
The first computers in a modern sense were built during World War II to assist with decrypting secret messages or doing difficult calculations, and they were big, complicated and error-prone devices – the electronic components such as transistors or integrated circuits which today's computers consist of hadn't been invented yet. What did come to light during this time and the years immediately after the war were a number of basic assumptions that had to hold for a device to be considered a "computer":

» A computer processes data according to a sequence of automatically executed instructions, a program.
» Programs must allow for conditional execution and loops.
» It must be possible to change or replace the program that a computer executes.

For example, many technical devices – from television sets and digital cameras to washing machines or cars – today contain programmed control units, in effect small computers. Even so, we don't consider these devices "computers", because they only execute fixed, unchangeable programs. Conversely, a pocket calculator can be used to "process data", but – at least as long as it isn't a more expensive "programmable calculator" – that doesn't happen automatically; a human being must tap the keys.

In the early 1950s, computers were highly specialized devices that one would – exactly as Aiken stipulated – expect to see mostly within research institutions. Science-fiction films of the time show these machine halls, replete with rows of cupboards containing mysteriously spinning tape reels. Within the space of not quite 70 years, this image has changed dramatically.

"Small" computers in the 1970s
Ken Olsen was the CEO of the computer manufacturer Digital Equipment Corporation (DEC), which spearheaded the development of "small" computers in the 1970s – where "small" at the time was understood as meaning something like "does not need a machine hall with air conditioning and its own power plant, and costs less than a million dollars"; advances in hardware technology allowed this to change, towards the end of the 1970s, to something like "can be bodily lifted by two people".


DEC is important to the Linux community because Unix – the operating system that inspired Linus Torvalds to start Linux some twenty years later – was first developed on DEC PDP-7 and PDP-11 computers.

Home computers
The 1970s also saw the advent of the first "home computers". These cannot be compared with today's PCs – one had to solder them together on one's own (which would be physically impossible today), and they rarely featured a reasonable keyboard and seldom if ever a decent display. They were for the most part a tinkerer's pastime, much like an electric train set, because in all honesty they weren't really useful for much at all. Even so, they were "computers" in the sense of our earlier definition, because they were freely programmable – even though the programs had to be laboriously keyed in or (with luck) loaded from audio cassette tape. Still, they weren't taken completely seriously, and Ken Olsen's oft-quoted remark that nobody would need a computer in their home has accordingly often been misconstrued: he had nothing whatsoever against small computers (he was in the business of selling them, after all). What he didn't conceive of was the idea of having one's complete household (heating, lights, entertainment and so on) controlled by a computer – an idea that was quite hypothetical at the time but today seems fairly feasible and no longer as absurd.

IBM PC
It was only during the late 1970s and 1980s that "home computers" mutated from kits to ready-to-use devices (names like "Apple II" or "Commodore 64" may still be familiar to the older members of our audience) and started appearing in offices, too. The first IBM PC was introduced in 1981, and Apple marketed the first "Macintosh" in 1984. The rest, as they say, is history – but one should not forget that the world of computers does not consist of PCs and Macs only. The giant, hall-filling computers of yore are still around – even though they tend to get rarer and often really consist of large groups of machines that are quite closely related to the PCs on our desks and which cooperate. However, the principle hasn't changed since Howard Aiken's time: computers are still devices that automatically process data according to changeable programs which may contain conditions and loops. And things are likely to stay that way.


Components Of A Computer

Let's take the opportunity of casting a glance at the "innards" of a computer (or, more precisely, an "IBM-compatible" PC) and the components we are likely to find there:

Processor
The processor (or "CPU", for "central processing unit") is the core of the computer: here is where the automatic program-controlled data processing takes place that actually makes it a computer. Today's processors usually contain several "cores", which means that the major components of the processor exist multiple times and can operate independently, which in principle increases the computer's processing speed and thereby its performance – and particularly fast computers often have more than one processor. PCs normally contain processors by Intel or AMD (which may differ in detail but can execute the same programs). Tablets and smartphones generally use ARM processors, which aren't quite as powerful but much more energy-efficient. Intel and AMD processors cannot directly execute programs prepared for ARM processors and vice versa.

RAM
A computer's working memory is called "RAM" (for "random-access memory", where "random" means "arbitrary" rather than "haphazard"). It stores not only the data being processed, but also the program code being executed.

This is an ingenious trick going back to the computing pioneer John von Neumann, a contemporary of Howard Aiken. It implies that there is no longer a difference between code and data – programs can manipulate code just as they would any other data, such as addresses or kitchen recipes. (In the old days, one would "program" by plugging and unplugging leads on the outside of the computer, or programs were punched on paper tape or cards and could not be changed straightforwardly.)

Today's computers normally feature 1 gibibyte of RAM or more. 1 gibibyte is 2^30, or 1,073,741,824 bytes – really an inconceivably large number. By way of comparison: Harry Potter and the Deathly Hallows contains approximately 600 pages of up to 1,700 letters, spaces, and punctuation characters each – perhaps a million characters in total. Hence, one gibibyte corresponds to about 1,000 Harry Potter tomes; at somewhat more than a pound per book, that is already a van full of them, and even if you're not just interested in the exploits of the young wizard, 1,000 books make an impressive library.
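To make the comparison concrete, here is a minimal back-of-the-envelope sketch in Python; the page and character counts are simply the rough figures assumed above:

    # How many "Harry Potter"-sized books fit into one gibibyte?
    GIBIBYTE = 2**30                  # 1,073,741,824 bytes
    PAGES_PER_BOOK = 600              # rough figure from the text
    CHARS_PER_PAGE = 1700             # letters, spaces and punctuation per page

    chars_per_book = PAGES_PER_BOOK * CHARS_PER_PAGE    # about one million characters
    books_per_gibibyte = GIBIBYTE / chars_per_book

    print(f"One book is roughly {chars_per_book:,} characters.")
    print(f"1 GiB holds about {books_per_gibibyte:,.0f} such books.")   # roughly 1,000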


Graphics card
Not so long ago, people were happy if their computer could control an electric typewriter to produce its output. The old home computers were connected to television sets, producing images that could often only be called atrocious. Today, on the other hand, even simple smartphones feature quite impressive graphics, and common PCs contain graphics hardware that would have cost the equivalent of an expensive sports car or a small house in the 1990s. Today's watchword is "3D acceleration", which doesn't mean that the display actually works in 3D (although even that is slowly getting fashionable) but that processing the graphics inside the computer does not just involve left, right, top and bottom – the directions visible on a computer monitor – but also front and back, and that in quite a literal sense: for photorealistic games it is quite essential whether a monster lurks in front of or behind a wall, and hence whether it is visible or not, and one of the goals of modern graphics cards is to relieve the computer's CPU of such decisions in order to free it up for other things. Contemporary graphics cards contain their own processors, which can often perform calculations much faster than the computer's own CPU but are not as generally useful.

Many computers don't even contain a separate graphics card because their graphics hardware is part of the CPU. This makes the computer smaller, cheaper, quieter and more energy-efficient, but its graphics performance will also take somewhat of a hit – which may not be an actual problem unless you are keen on playing the newest games.

Motherboard
The motherboard is the (usually) rectangular, laminated piece of plastic that the computer's CPU, RAM, and graphics card are affixed to – together with many other components that a computer requires, such as connectors for hard disks, printers, a keyboard and mouse, or network cables, and the electronics necessary to control these connectors. Motherboards come in all sorts of sizes and colours – for small, quiet computers that can act as video recorders in the living room, or for big servers that need a lot of space for RAM and several processors.

Power supply
A computer needs electricity to work – how much depends on exactly which components it contains. The power supply converts the 240 V AC mains supply into the various low DC voltages that the electronics inside the computer require. It must be selected such that it can furnish enough power for all the components (fast graphics cards are usually the number-one guzzlers) while not being oversized, so that it can still operate efficiently.

Most of the electricity that the power supply pumps into the computer will sooner or later end up as heat, which is why good cooling is very important. For simplicity, most computers contain one or more fans to blow fresh air onto the expensive electronics, or to remove hot air from the case. With appropriate care it is possible to build computers that do not require fans, which makes them very quiet, but such computers are comparatively expensive and usually not quite as fast (since, with processors and graphics cards, "fast" usually means "hot").

Hard disks
While a computer's RAM is used for the data currently being processed (documents, spreadsheets, web pages, programs being developed, music, videos, … – and of course the programs working on the data), data not currently in use are stored on a hard disk. The main reason for this is that hard disks can store much more data than common computers' RAM – the capacity of modern hard disks is measured in tebibytes (1 TiB = 2^40 bytes), so they exceed typical RAM capacities by a factor of 100 to 1,000.

We pay for this increase in space with much longer retrieval times: RAM access times are measured in nanoseconds, while those for data on (magnetic) hard disks are measured in milliseconds. This is a mere six orders of magnitude – the difference between a meter and 1,000 kilometers.
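The unit comparison can be sketched in a few lines of Python; the figures are simply the units named above (one nanosecond versus one millisecond), not measured access times:

    import math

    ram_access = 1e-9      # one nanosecond, the unit RAM access times are quoted in
    disk_access = 1e-3     # one millisecond, the unit disk access times are quoted in

    ratio = disk_access / ram_access
    print(f"Factor between the two: {ratio:,.0f}")                    # 1,000,000
    print(f"Orders of magnitude: {math.log10(ratio):.0f}")            # 6
    print(f"Same ratio as between 1 m and {ratio / 1000:,.0f} km.")   # 1,000 km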

Traditionally, hard disks consist of rotating platters coated with a magnetisable material. Read/write heads can magnetize this material in different places and re-read the data thus stored later on. The platters rotate at 4,500 to 15,000 RPM, and the distance between the read/write head and the platter is extremely small (around 3 nm). This means that hard disks are quite sensitive to vibration and drops, because if a read/write head comes into contact with a platter while the disk is running – the dreaded "head crash" – the disk is destroyed.

Newer hard disks for mobile computers have acceleration sensors which can detect that the computer is falling and park the read/write heads in order to prevent damage.

The newest fashion is SSDs ("solid-state disks"), which instead of magnetised platters use "flash memory" for storage – a type of memory which retains its content even without power. SSDs are faster than magnetic hard disks, but also considerably more expensive per gigabyte of storage. However, they contain no moving parts, are insensitive to being shoved or dropped, and save energy compared to conventional hard disks, which makes them interesting for portable computers.

SSDs are also reputed to "wear out", since the flash storage locations (called "cells") are only rated for a certain number of write operations. Measurements have shown that this does not lead to problems in practice.

There are various methods of connecting a hard disk (magnetic or SSD) to a computer. Currently the most common is Serial ATA (SATA); older computers use parallel ATA, also called "IDE". Servers also use SCSI or SAS ("Serial Attached SCSI") disks. For external disks, one uses USB or eSATA (a variant of SATA with sturdier connectors).


Incidentally: the difference between gigabytes and gibibytes (or terabytes and tebibytes) is most noticeable with hard disks. Say you buy a "100 GB drive", connect it to your computer and, shock horror, realize that your computer only shows you 93 "GB" of space on the new disk! Your drive is not damaged (lucky you) – the disk drive manufacturer simply uses (quite correctly) "gigabytes", i. e., billions of bytes, while your computer probably (if inaccurately) calculates the space in units of gibibytes, i. e., 2^30 bytes.
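As a quick sketch of this conversion in Python, using the 100 GB example from the text:

    # Convert a drive's advertised "gigabytes" into the "gibibytes" the computer reports.
    drive_size_gb = 100                      # as printed on the box: 100 * 10^9 bytes
    size_in_bytes = drive_size_gb * 10**9
    size_in_gib = size_in_bytes / 2**30      # 1 GiB = 1,073,741,824 bytes

    print(f"{drive_size_gb} GB = {size_in_bytes:,} bytes = about {size_in_gib:.1f} GiB")
    # -> 100 GB = 100,000,000,000 bytes = about 93.1 GiB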

Optical drives
Besides hard disks, PCs usually support optical drives that can read, and often also write, media such as CD-ROMs, DVDs or Blu-ray discs. (Mobile devices sometimes physically have no room for an optical drive, which does not mean such drives can't be connected externally.) Optical media – the name derives from the fact that the information on them is accessed by means of a laser – are mostly used for the distribution of software and "content" (music or films), and their importance is waning as more and more companies rely on the Internet as a distribution medium.

In former times, optical media were also considered for backup copies, but today this is no longer realistic – a CD-ROM can hold up to approximately 900 MiB of data and a DVD up to 9 GiB or so, so for a full backup of a 1 TiB hard disk you would require more than 1,000 CD-size or over 100 DVD-size media, and constantly swapping them in and out would also be a hassle. (Even Blu-ray discs can only fit 50 GiB or so, and drives that can write to Blu-ray discs are still fairly expensive.)
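The disc arithmetic can be sketched like this; the capacities are the approximate figures quoted above:

    import math

    # How many optical discs would a full backup of a 1 TiB hard disk need?
    disk_capacity = 2**40                    # 1 TiB in bytes
    media = {
        "CD-ROM":  900 * 2**20,              # roughly 900 MiB
        "DVD":       9 * 2**30,              # roughly 9 GiB
        "Blu-ray":  50 * 2**30,              # roughly 50 GiB
    }

    for name, capacity in media.items():
        discs = math.ceil(disk_capacity / capacity)
        print(f"{name}: about {discs:,} discs")
    # CD-ROM: about 1,166 discs; DVD: about 114; Blu-ray: about 21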

Display
You can still see it in old movies: the green sheen of the computer screen. In reality, green displays have all but disappeared, colour is in fashion, and new displays are no longer massive hulks like the CRTs (cathode-ray tubes) we used to have, but slim, elegant monitors based on liquid crystals (LCD, "liquid-crystal display"). The advantages of LCDs are not confined to taking up less space on the desk: they also neither flicker nor bother the user with possibly harmful radiation – a win-win situation. There are a few disadvantages, such as colour changes when you look at the screen from too acute an angle, and cheaper devices may deliver a blotchy picture because the backlight is not uniform.

With CRTs, one used to take care not to let them stand around unused showing the same picture for long periods of time, because the picture could "burn in" and appear as a permanent blurry backdrop. Accordingly one used a "screen saver", which after a certain amount of idle time would replace the content of the screen with a more or less cute animation to avoid burn-in (the classic was an aquarium with fish and other aquatic fauna). LCDs no longer suffer from the burn-in problem, but screen savers are still sticking around for their decorative value.


Resolution
LCDs are available in all sizes from "smartphone" to wall-sized large screens; their most important property is the resolution, which for PC displays usually ranges between 1366×768 (horizontal × vertical) and 1920×1080 pixels. (Lower and higher resolutions are possible, but do not necessarily make economic or visual sense.) Many computers support more than one screen in order to enlarge the working space.

Also usual today is an aspect ratio of 16:9, which corresponds to high-definition television – actually a silly development, since most computers aren't even used for watching television, and a taller but narrower display (such as the formerly common 4:3 format) is better suited to most of the more frequently used applications like word processing or spreadsheet calculations.

Other peripherals
Of course you can connect many more devices to a computer besides the ones we mentioned: printers, scanners, cameras, television receivers, modems, robotic arms, small missile launchers to annoy your cubicle neighbours, and so on. The list is virtually endless, and we cannot discuss every class of device separately here. But we can still make a few observations:

» One commendable trend, for example, is the simplification of connections. While almost every class of device used to have its own interface (parallel interfaces for printers, serial interfaces for modems, "PS/2" interfaces for keyboards and mice, SCSI for scanners, …), today most devices use USB (universal serial bus), a relatively foolproof and reasonably fast method which also supports "hot-plugging" – connecting devices while the computer is running.

» Another trend is towards more "intelligence" in the peripherals themselves: formerly, even expensive printers were fairly stupid devices at the IQ level of electric typewriters, and programmers had to very carefully send exactly the right control codes to the printer to produce the desired output. Today, printers (at least good printers) are really computers in their own right, supporting their own programming languages that make printing much less of a hassle for programmers. The same applies in a similar fashion to many other peripherals.

Of course there are still very stupid printers (especially at lower price points) which leave preparing the output to the computer itself. In practice, though, these still make a programmer's life nearly as easy as their more expensive relations.


Software

Just as important as a computer's "hardware", i. e., the technical components it consists of, is its "software" – the programs it is running. This can very roughly be divided into three categories:

Firmware
» The firmware is stored on the computer's motherboard and can only be changed or replaced inconveniently, if at all. It is used to put the computer into a defined state after switching it on. Often there is a way of invoking a setup mode that allows you to set the clock and enable or disable certain properties of the motherboard.

On PCs, the firmware is called "BIOS" (Basic Input/Output System) or, on newer systems, "EFI" (Extensible Firmware Interface).

Some motherboards include a small Linux system that purportedly boots more quickly than the main operating system and which is supposed to be used to surf the Internet or watch a DVD without having to boot into Windows. Whether this is actually worth the trouble is open to debate.

Operating system
» The operating system makes the computer into a usable device: it manages the computer's resources such as the RAM, the hard disks, the processing time on the CPU(s) available to individual programs, and the access to other peripherals. It allows starting and stopping programs and enforces a separation between several users of the computer. Besides, it enables – on an elementary level – the participation of the computer in a local area network or the Internet. The operating system frequently furnishes a graphical user interface and thus determines how the computer "looks and feels" to its users.

» When you buy a new computer it is usually delivered with a pre-installed operating system: PCs with Microsoft Windows, Macs with OS X, smartphones often with Android (a Linux derivative). The operating system, though, is not tied as closely to the computer as the firmware, but can in many cases be replaced by a different one – for example, you can install Linux on most PCs and Macs.

Or you install Linux in addition to an existing operating system – usually not a problem either.


User-level programs, applications, utilities
» User-level programs allow you to do something useful, such as write documents, draw or manipulate pictures, compose music, play games, surf the Internet or develop new software. Such programs are also called applications. Additionally, there are often utilities that the operating system provides in order to allow you – or a designated "system administrator" – to make changes to the computer's configuration and so on. Servers, in turn, often run software that provides services to other computers, such as web, mail or database servers.

The Most Important Operating Systems

Windows And OS X
When talking about computer operating systems, most people will automatically think of Microsoft Windows. This is due to the fact that nowadays most PCs are sold with Windows preinstalled – really not a bad thing in itself, since their owners can get them up and running without having to take the trouble to install an operating system first, but, on the other hand, a problem because it makes life hard for alternative operating systems such as Linux.

In fact it is not at all straightforward to buy a computer without a preinstalled copy of Windows – for example, because you want to use it exclusively with Linux – except when building one from scratch. Theoretically you are supposed to be able to get a refund for an unused preinstalled copy of Windows from the computer's manufacturer, but we know of nobody who actually managed to obtain any money.

Windows NT

Today's Windows is a descendant of "Windows NT", which was Microsoft's attempt in the 1990s to establish an operating system that was up to the standards of the time (earlier versions such as "Windows 95" were graphical extensions to the then-current Microsoft operating system, MS-DOS, and fairly primitive even by the standards of the day). Decency forbids us a critical appreciation of Windows here; let it suffice to say that it does approximately what one would expect from an operating system, provides a graphical user interface, and supports most peripheral devices (support for others is provided by the individual device manufacturers).

Mac OS

Apple's "Macintosh" was launched in 1984 and has since been using an operating system called "Mac OS". Over the years, Apple made various changes to the platform (today's Macs are technically about the same as Windows PCs) and to the operating system, some of them quite radical. Up to and including version 9, Mac OS was a fairly flimsy artefact which, for example, only provided rudimentary support for running several programs at the same time. The current "Mac OS X" – the "X" is a Roman 10, not the letter "X" – is based on an infrastructure related to BSD Unix and is not unlike Linux in many ways.

Since February 2012, the official name of the Macintosh operating system is "OS X" rather than "Mac OS X". If we let slip a "Mac OS" every so often, you know what we really mean.

Differences

The big difference between Windows and OS X is that OS X is sold exclusively with Apple computers and will not run on "normal" PCs. This makes it much more straightforward for Apple to provide an obviously very homogeneous system. Windows, on the other hand, must run on all sorts of PCs and support a much wider array of hardware components that can occur in completely unforeseen combinations. Hence, Windows users have to contend with incompatibilities that are sometimes difficult or even impossible to sort out. On the other hand, there is a much greater selection of hardware for Windows-based computers, and prices are, on the whole, less exorbitant.

Similarities

Windows and OS X are similar in that they are both "proprietary" software: users are forced to accept what Microsoft or Apple put in front of them; they cannot examine the actual implementation of the system, let alone make changes to it. They are bound to the manufacturer's upgrade schedule, and if the manufacturer removes something or replaces it with something else, they need to adapt.

There is one difference here, though: Apple is essentially a hardware manufacturer and only provides OS X to give people an incentive to buy Macs (which is why OS X isn't available for non-Macs). Microsoft, on the other hand, does not build computers, and instead makes its money selling software such as Windows, which runs on arbitrary PCs. Therefore, an operating system like Linux is much more of a threat to Microsoft than to Apple – most of the people who buy an Apple computer do so because they want an Apple computer (the complete package), not because they are especially interested in OS X. The PC as a platform, however, is being encroached upon by tablets and other new-fangled types of computer that don't run Windows, and that puts Microsoft under extreme pressure. Apple could easily survive selling just iPhones and iPads instead of Macs – Microsoft without Windows would probably go bankrupt fairly soon in spite of having loads of money in the bank.


Linux

Linux is an operating system that Linus Torvalds originally started out of curiosity, but which soon took on a life of its own – by now, hundreds of developers (not just students and hobbyists, but also professionals at companies such as IBM, Red Hat, or Oracle) are developing it further.

Linux was inspired by Unix, an operating system developed in the 1970s at AT&T's Bell Laboratories and geared towards "small" computers (see above for the meaning of "small" in this context). Unix soon became the preferred system for research and technology. Linux uses, for the most part, the same concepts and basic ideas as Unix, and it is easy to get Unix software to run on Linux, but Linux itself does not contain Unix code; it is an independent project.

Unlike Windows and OS X, Linux isn't backed by a single company whose economic success hinges on the success of Linux. Linux is "freely available" and can be used by anyone – even commercially – who subscribes to the rules of the game (as outlined in the next chapter). This, together with the fact that by now Linux no longer runs just on PCs, but in substantially identical form on platforms ranging from telephones (the most popular smartphone operating system, Android, is a Linux offshoot) to the largest mainframes (the ten fastest computers in the world all run Linux), makes Linux the most versatile operating system in the history of modern computing.

Distributions

Strictly speaking, "Linux" is just the operating system kernel, i. e., the program that handles the allocation of resources to applications and utilities. Since an operating system without applications isn't all that useful, one usually installs a Linux distribution, which is to say a package consisting of "Linux" proper and a selection of applications, utilities, documentation and other useful stuff. The nice thing is that, like Linux itself, most Linux distributions are "freely available" and hence obtainable free of charge or at very low cost. This makes it possible to equip a computer with software whose equivalents for Windows or OS X would run into thousands of dollars, and you do not run the risk of falling foul of licensing restrictions just because you installed your Linux distribution on all your computers as well as on Aunt Millie's and those of your buddies Susan and Bob.

More Differences And Similarities

Graphical user interface

Actually, the three big operating systems – Linux, Windows, and OS X – differ only in detail in what they present to their users. All three offer a graphical user interface (GUI) which allows even casual users to manage their files through simple gestures like "drag and drop". Many popular applications are available for all three operating systems, so which one you are using at the end of the day becomes almost immaterial as long as you spend most of your time inside the web browser, office package, or e-mail program. This is an advantage because it enables a "gradual" migration from one system to another.

Command line

Besides the graphical interface, all three systems also offer a way to use a "command line" to enter textual commands which the system then executes. With Windows and OS X, this feature is mostly used by system administrators, while "normal" users tend to shun it – a question of culture. With Linux, on the other hand, the command line is much less ostracized, which may have to do with its descent from the scientific and technical Unix tradition. As a matter of fact, many tasks can be performed more conveniently and efficiently from the command line, especially with the powerful tools that Linux (and indeed OS X) provides. As a budding Linux user, you would do well to open up to the command line and learn about its strengths and weaknesses, just as you should learn about the strengths and weaknesses of the GUI. A combination of both will give you the greatest versatility.