Operating Systems

Kevin Cole
Gallaudet Research Institute
kjcole@gri.gallaudet.edu
Copyright © 1996

[The following random thoughts are notes for lectures given in 1996. They are not intended to be coherent. -- KJC]

A modern computer system -- whether it is a PC or a large supercomputer -- is actually a collection of special-purpose computers all working together. In a typical desktop system, for example, you might find a small computer, i.e. a micro-processor, which controls the video, another which controls the disk drive, and a third which controls an internal modem. Then there is a micro-processor that controls all the other micro-processors. This is the "heart and soul" of the hardware -- the Central Processing Unit or CPU. In IBM-compatibles, this will be an Intel or Intel-clone chip (8086, 286, 386, 486, Pentium, etc). In the Apple II it was the MOS Technology 6502, and in the Macintosh it is a Motorola chip (68000, 68020, 68030, etc). The CPU handles math operations (typically integer math), logical decisions, and memory control. In fact you may see the term Arithmetic/Logic Unit or ALU used to describe the portion of the CPU that does the arithmetic and logic.

The CPU can perform the math, or move information to and from disk, modem, screen, and keyboard, but it needs to be told how to do it. In which order should it access the information? Does the keyboard get a higher priority than the screen? If the CPU is busy with the printer, and information starts to come in over the modem, should the CPU stop sending to the printer in order to handle the modem? Or can it divide its attention between the two tasks without losing any data?

An operating system is a collection of programs that together act as traffic cop, translator, and bookkeeper for all of this hardware.

A hard disk looks like a stack of old phonograph records, connected by a rod going through the center. Each platter has a read/write head akin to a needle. (Actually, there are usually two heads per platter -- one for the top surface and one for the bottom.) Like a needle, a head can move in towards the center of the disk, or out towards the edges. Information is located by telling the micro-processor how far to move the heads, which platter surface the information is on, and which arc of the circumference it occupies. In computer science terms, these three coordinates are the cylinder, the head, and the sector; a single ring on a single surface is called a track. You can think of them as X, Y, and Z, or latitude, longitude, and altitude.
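Disk controllers often receive a single flat block number and translate it into these three coordinates with simple division and remainder arithmetic. Here is a minimal sketch; the geometry numbers are invented for illustration, not taken from any real drive:

```python
# Convert a flat "logical block address" into disk geometry coordinates.
# Geometry below is made-up: two platters (four surfaces), 17 sectors/track.
HEADS_PER_CYLINDER = 4
SECTORS_PER_TRACK = 17

def lba_to_chs(lba):
    """Return (cylinder, head, sector) for a flat block number."""
    sector = lba % SECTORS_PER_TRACK + 1       # sectors traditionally count from 1
    head = (lba // SECTORS_PER_TRACK) % HEADS_PER_CYLINDER
    cylinder = lba // (SECTORS_PER_TRACK * HEADS_PER_CYLINDER)
    return cylinder, head, sector

print(lba_to_chs(0))     # the very first sector of the disk
print(lba_to_chs(100))
```

The divisions simply "peel off" one coordinate at a time, the same way you would turn a count of seconds into hours, minutes, and seconds.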

Suppose you want to print a file. The computer system has to locate the information by cylinder, head, and sector, copy the information from the disk to RAM, and then copy it from RAM to a printer output port. It would be rather tedious for you to have to remember all of that yourself. The operating system, in this case, goes to a translation table and finds the filename that you have specified. It then translates that filename into cylinder, head, and sector numbers, and asks the disk controller to retrieve whatever information is at that location and place it in RAM. The operating system must also make sure that the RAM is not currently being used by some other process or application. Then it must wait for the printer to become available. Finally, once the file has been printed, it must cross that request off its "to-do" list.
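The filename-to-location lookup can be modeled in a few lines. Everything here -- the table, the coordinates, the "disk" contents -- is invented purely to illustrate the idea:

```python
# A toy model of the translation table described above: filenames map to
# disk coordinates, and the coordinates map to data. All values invented.
translation_table = {
    "REPORT.TXT": (2, 0, 5),      # filename -> (cylinder, head, sector)
}
disk = {(2, 0, 5): b"Quarterly report..."}

def read_file(name):
    location = translation_table[name]   # step 1: filename -> coordinates
    data = disk[location]                # step 2: ask the "disk controller"
    return data                          # step 3: now in RAM, ready to print

print(read_file("REPORT.TXT"))
```

The real operating system does the same two lookups; it just does them against on-disk data structures instead of Python dictionaries.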

The operating system coordinates the system in much the same way as a good hotel manager keeps track of what's happening in a large hotel. The kitchen staff, the laundry staff and other groups act independently, but someone has to coordinate their efforts. Guests occupy rooms temporarily. Someone must keep track of which guest name goes with which room number, how many rooms are free, and where they are located. An operating system must control the micro-processors acting independently, making sure that they don't interfere with each other's functions, and must keep track of which filenames go with which cylinder, head, and sector numbers, how much free space is available on a disk, and where those free areas are located.

An operating system translates commands that you type, or mouse movements that you make, into instructions to the various components of the system. When you type "WP" or click on an icon to start WordPerfect, the operating system tells the disk controller to locate a file named WP.EXE (or something similar), copy this program into RAM, and then use what is in that RAM as further instructions on what to do next.

You've probably heard the phrase "boot disk" or "bootable floppy". This phrase evolved from the term "bootstrap loader". The term comes from the expression "to pull oneself up by one's own bootstraps". Computers contain a small portion of memory which cannot be changed. This Read-Only Memory or ROM has a small program burned into it. This small program instructs the computer to search for a larger program in a special area on a "bootable" disk. This larger program, in turn, loads several other programs, which in turn, may load still others. In this way, the computer "pulls itself up by its own bootstraps". When it finishes this process, what is left is a running operating system.

On a PC, this booting includes the loading of special software to control various devices. These are called device drivers. On Microsoft DOS and Microsoft Windows operating systems, the names of these drivers can often be found in the CONFIG.SYS file, while additional portions of the operating system can be found in the AUTOEXEC.BAT file. On machines which use the Linux operating system, the names of the drivers can be found in /etc/conf.modules.

In very early computers, users would make requests by physically altering the wiring. This gave way to using electrical switches (very much like common household light switches), which in turn gave rise to the punch card. Today, we use a keyboard and a mouse to give commands to the computer.

DOS stands for Disk Operating System. Today, the term is used most often to describe the Microsoft Command Line Interface (CLI) common to many IBM compatibles. Command Line Interfaces provide the keyboard as the primary means for the user to interact with the operating system. The COMMAND.COM program is DOS's CLI. It provides the user with a prompt (usually something like C:\>) and waits for the user to provide commands on the "command line".
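At its core, a CLI like COMMAND.COM is just a loop: print a prompt, read a line, split off the command name, look it up, run it, repeat. Here is a minimal sketch of one pass through that loop; the command table is invented for illustration:

```python
# One pass through a toy command interpreter. Real shells also search the
# disk for external programs; this sketch only knows built-in commands.
def interpret(line, commands):
    name, *args = line.split()
    if name.upper() in commands:             # DOS commands are case-insensitive
        return commands[name.upper()](*args)
    return "Bad command or file name"        # the classic DOS error message

commands = {"ECHO": lambda *args: " ".join(args)}
print(interpret("echo hello world", commands))   # hello world
print(interpret("FOO", commands))                # Bad command or file name
```

A real shell wraps this in a `while` loop that prints the `C:\>` prompt before each line.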

The command line interface is giving way to the Graphical User Interface or GUI (pronounced "goo-ey"). The Apple Macintosh and the Commodore Amiga were among the first consumer-level computers to use a GUI. However, today, the best-known GUIs are the members of the Microsoft Windows family -- Windows for Workgroups, Windows 95 and Windows NT.

DOS and Windows for Workgroups were both built on the assumption that computers would not expand beyond a certain size. (Computer geeks will recognize this as the infamous 640 KB problem.) In essence, the problem is that DOS and Windows for Workgroups cannot directly access all of the memory that today's PCs can support. One way around this was to create "virtual" memory on portions of the hard disk. A portion of the operating system known as a memory manager would translate memory addresses into disk addresses and vice versa, making applications which needed large amounts of memory believe that they actually had such memory available to them.
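The memory manager's trick can be sketched as a table of "pages": when a program touches a page that is not in RAM, the manager quietly fetches it from the swap area on disk. All the sizes and contents below are invented for illustration:

```python
# A toy memory manager. Memory is divided into fixed-size pages; pages
# that don't fit in RAM are parked in a "swap" area on the hard disk.
PAGE_SIZE = 4096

ram = {0: "page A", 1: "page B"}     # page number -> contents held in RAM
swap = {2: "page C"}                 # pages pushed out to the hard disk

def read_address(addr):
    page, offset = divmod(addr, PAGE_SIZE)
    if page not in ram:              # a "page fault": bring it in from disk
        ram[page] = swap.pop(page)
    return ram[page]

print(read_address(2 * PAGE_SIZE + 10))   # transparently loaded from swap
```

The application never learns whether its page came from RAM or from disk -- which is exactly the illusion the memory manager is paid to maintain.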

In addition, PCs prior to the 386 had a slower method of transferring information between various hardware components. The method consisted of 16 "wires" all bundled together, and collectively known as a bus. This bus could carry 16 bits of information (each bit being a 1 or 0, which when taken as a set, can represent quite a variety of information). DOS and Windows for Workgroups were designed around the 16-bit bus.
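"Quite a variety of information" can be made concrete: each wire carries a 1 or a 0, so n wires can form 2 to the nth power distinct patterns per transfer.

```python
# Each wire on the bus is a 1 or a 0, so n wires give 2**n combinations.
def patterns(wires):
    return 2 ** wires

for width in (8, 16, 32):
    print(width, "bit bus:", patterns(width), "possible values per transfer")
```

Note that going from 16 to 32 bits doesn't merely double the number of representable values -- it squares it -- while also moving twice as many bits in each transfer.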

Windows 95 and Windows NT are capable of handling more memory and the wider 32-bit bus. This means (in theory) that these newer operating systems can transfer information to and from various devices (such as the screen, the speakers, disks, CDs, and printers) twice as fast as their parents. They also manage memory better. In fact, they split up the memory in such a way that different programs can continue running in different portions of the memory simultaneously. DOS really couldn't do this well at all, and Windows for Workgroups made a feeble attempt.

Windows NT owes much of its design to DEC's VMS operating system, by way of Dave Cutler, who quit DEC to go to work for Microsoft. Its main advantage over 95 is that it is designed with security and cross-platform compatibility in mind. In other words, it was designed to interact with a network in which not all machines are IBM compatibles -- an environment such as the Internet.

The future

Unix is one of the most mature operating systems available today. First written around 1970 at Bell Labs, it has continued to survive and evolve, and now can be found in several different flavors. The GUI most commonly used with Unix-based systems is the X Window System (often called "X Windows"). One of the most popular flavors of Unix today is Linux, a free operating system begun by a graduate student at the University of Helsinki named Linus Torvalds. In addition to being available for free, Linux is fast, stable, and open source, which means that anyone with enough brains or courage can modify the way the operating system works at a very fundamental level. (Linux is my operating system of choice.)

A maverick from Apple Computer has founded a company called "Be", which sold a computer called the BeBox, and an operating system named BeOS designed to run on both the BeBox and Power Macs. The company is aiming at the multimedia market, and is designing its computers with audio and video handling as one of the most important design features. Today, the BeBox is no longer available, but the latest versions of the BeOS run on both IBM-compatibles and Macs.

Client/Server technology will continue to expand, until we end up with true "virtual machines". A client is a program that requests a service. A server is a program that provides a service to a client. Clients are typically run by users. Servers are under the control of the operating system directly, and are always running, always "listening" for requests from clients. The client/server pair that you will probably be most familiar with is e-mail. When you send a message, you are using a client. The machine which receives your message is running a server which listens for incoming mail, and acts as the electronic postal service, sorting, filing, forwarding and rejecting mail. Other common client/server pairs include FTP, Telnet, Gopher and the World Wide Web.
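The listen/request/reply pattern can be demonstrated on a single machine. This sketch runs a tiny server in a background thread and has a client send it one request; the "protocol" (echoing the request back) is invented, but real mail and web servers follow the same shape:

```python
# A minimal client/server pair on one machine. The server listens for a
# connection and replies; the client connects and sends one request.
import socket
import threading

def server(sock):
    conn, _ = sock.accept()              # wait ("listen") for a client
    request = conn.recv(1024)
    conn.sendall(b"GOT: " + request)     # provide the "service": echo back
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
listener.listen(1)
threading.Thread(target=server, args=(listener,)).start()

client = socket.socket()
client.connect(("127.0.0.1", listener.getsockname()[1]))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
print(reply)
```

Notice the asymmetry the text describes: the server is started first and simply waits; the client runs briefly, at the user's request, and goes away.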

The client/server model could be extended to operating systems. This would be, in some ways, an extension of the file server concept. Not only would you have a central computer on your network which provided applications upon request, but it would also house portions of the operating system. And that system could be distributed across a very wide area. Programming languages like Java are just beginning to touch on the possibilities here.

Watch out for OpenDoc. It's supposed to be an up-and-coming standard -- similar to ASCII -- which will make formatted documents compatible across non-compatible computers.

Other random thoughts to talk about

The evolution of codes:

Amusing thought: RAM is used to create a virtual disk, but disk is used to create virtual RAM.

FAT, the File Allocation Table, is the hotel's guest book, keeping track of which files are where on the disk, and which areas of the disk are free. Subdirectories are nothing more than special database files which contain records consisting of filenames, creation dates, sizes, and protections. Some programs such as the Norton Disk Doctor (NDD) bypass some of the operating system's functions, and allow you to examine subdirectories as normal files. A very educational experience, if you're careful.
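The guest book analogy can be sketched directly: each FAT entry names the next cluster in a file's chain, and a special marker ends the chain. The cluster numbers below are invented for illustration:

```python
# A toy File Allocation Table: entry n holds the number of the cluster
# that follows cluster n in the file, or an end-of-chain marker.
END = -1
fat = {4: 7, 7: 8, 8: END}    # a file stored in clusters 4 -> 7 -> 8

def cluster_chain(start):
    """Follow the FAT from a file's first cluster to its last."""
    chain = [start]
    while fat[chain[-1]] != END:
        chain.append(fat[chain[-1]])
    return chain

print(cluster_chain(4))    # [4, 7, 8]
```

The directory entry only records the first cluster (4 here); the FAT supplies the rest, which is why a damaged FAT can leave files scattered and unrecoverable.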