Operating Systems are among the most complex pieces of software. Five major achievements in the development of Operating Systems are:

1. Process

A process is a somewhat more general term than a job: a process is a program in execution. Three major lines of computer system development created problems in timing and synchronization that contributed to the development of the concept of the process: multiprogramming batch processing, time sharing, and real-time transaction processing. The design of system software to coordinate these various activities turned out to be difficult. With many jobs in progress at any one time, each of which involved numerous steps to be performed in sequence, it became impossible to analyze all the possible combinations of sequences of events. Many errors were detected that were difficult to diagnose, because they had to be distinguished from application software errors and hardware errors. To tackle these problems, the various programs executing on the processor must be monitored and controlled in a systematic way; the concept of the process provides that foundation. A process consists of the following three components:

  • An executable program
  • The associated data needed by the program
  • The execution context of the program

The execution context includes the information that the Operating System needs to manage the process and that the processor needs to properly execute the process. If two processes A and B exist in a portion of main memory, each process is recorded in a process list maintained by the Operating System. A process index register contains the index into the process list of the process currently controlling the processor, the program counter points to the next instruction in that process to be executed, and base and limit registers define the region of memory occupied by the process. Thus a process is realized as a data structure. A process can either be executing or awaiting execution. The entire state of the process is contained in its context.
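
To make the idea of a process as a data structure concrete, the following C sketch shows one possible layout. The type and field names (struct process, execution_context, process_list, and so on) are illustrative assumptions, not the structures of any particular operating system.

    #include <stdint.h>

    /* Illustrative process descriptor mirroring the three components above:
     * the program, its data, and its execution context. */
    enum proc_state { READY, RUNNING, BLOCKED };

    struct execution_context {
        uint64_t program_counter;  /* next instruction to be executed           */
        uint64_t registers[16];    /* general-purpose register contents         */
        uint64_t base;             /* start of the memory region of the process */
        uint64_t limit;            /* size of that region                       */
    };

    struct process {
        int                      pid;      /* identifies the entry in the process list */
        void                    *code;     /* the executable program                   */
        void                    *data;     /* data associated with the program         */
        struct execution_context context;  /* all state needed to resume the process   */
        enum proc_state          state;    /* executing or awaiting execution          */
    };

    /* The process list maintained by the Operating System, and the process
     * index register (modelled here as a plain variable) identifying the
     * process currently controlling the processor. */
    #define MAX_PROCESSES 64
    struct process process_list[MAX_PROCESSES];
    int current_process_index = -1;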

2. Memory Management

Users need a computing environment that supports flexible use of data and efficient, orderly control of storage allocation. To satisfy these requirements, an Operating System has five principal storage management responsibilities:

Process Isolation: The Operating System must prevent independent processes from interfering with each other's data and memory.

Automatic Allocation and Management: Programs should be dynamically allocated memory across the memory hierarchy as required. The Operating System can achieve efficiency by assigning memory to jobs only as needed.

Support of Modular Programming: Programmers should be able to define program modules and to create, destroy, and alter the size of modules dynamically.

Protection and Access Control: Sharing of memory at any level of the memory hierarchy creates the potential for one program to address the memory space of another. The Operating System must allow portions of memory to be accessible in various ways by various users.

Long-term Storage: Many users and applications require a means for storing information for extended periods.

Operating Systems meet these requirements with the concept of virtual memory and with file system facilities. Virtual memory is a facility that allows programs to address memory from a logical point of view, without regard to the amount of main memory physically available. That is, while a program is executing, only a portion of the program and its data may actually be maintained in main memory; the other portions are kept in blocks on disk and are brought into main memory whenever they are needed for execution.
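
The following C fragment is a minimal sketch of the paging idea behind virtual memory: a logical address is split into a page number and an offset, and a page that is not currently resident in main memory is brought in from disk on demand. The page size, the table layout, and the load_page_from_disk helper are illustrative assumptions, not a real memory manager.

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SIZE 4096u   /* illustrative page size                 */
    #define NUM_PAGES 1024u   /* illustrative size of the logical space */

    /* One page-table entry: where the page lives in main memory, and
     * whether it is resident there at all. */
    struct page_table_entry {
        uint32_t frame_number;  /* location in physical memory, if present */
        bool     present;       /* false: the page exists only on disk     */
    };

    struct page_table_entry page_table[NUM_PAGES];

    /* Hypothetical helper that would bring a page in from disk and return
     * the main-memory frame it was loaded into. */
    extern uint32_t load_page_from_disk(uint32_t page_number);

    /* Translate a logical (virtual) address to a physical one; assumes
     * virtual_address < NUM_PAGES * PAGE_SIZE. The program addresses memory
     * logically, and only the pages it actually touches need to be in
     * main memory. */
    uint64_t translate(uint64_t virtual_address)
    {
        uint32_t page_number = (uint32_t)(virtual_address / PAGE_SIZE);
        uint32_t offset      = (uint32_t)(virtual_address % PAGE_SIZE);

        if (!page_table[page_number].present) {
            /* Page fault: fetch the block from disk into main memory. */
            page_table[page_number].frame_number = load_page_from_disk(page_number);
            page_table[page_number].present      = true;
        }
        return (uint64_t)page_table[page_number].frame_number * PAGE_SIZE + offset;
    }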

3. Information Protection and Security

The Operating System must support a variety of protection and security mechanisms for the computer system and the information stored in it. Some overall protection policies are:

  • No sharing: In this case, processes are completely isolated from each other and each process has exclusive control over resources statically or dynamically assigned to it.
  • Sharing originals of program or data files: With the use of reentrant code, a single physical copy of a program can be shared by multiple processes.
  • Controlled information dissemination: In some systems, security classes are defined to enforce a particular dissemination policy. Users and applications are given security clearances of a certain level, whereas data and other resources (e.g., I/O devices) are given security classifications. The security policy enforces restrictions concerning which users have access to which classifications (a minimal check of this kind is sketched after this list).
  • Access Control: Concerned with regulating user access to the total system, to subsystems, and to data, and with regulating process access to various resources and objects within the system.
  • Information flow control: Regulates the flow of data within the system and its delivery to users.
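
A minimal C sketch of the clearance-versus-classification check referred to in the list above; the security levels and the may_read function are hypothetical names used only to illustrate the idea.

    #include <stdbool.h>

    /* Illustrative security levels for controlled information dissemination;
     * higher values mean more sensitive classifications. */
    enum security_level { UNCLASSIFIED = 0, CONFIDENTIAL = 1, SECRET = 2, TOP_SECRET = 3 };

    struct user     { int id; enum security_level clearance;      };
    struct resource { int id; enum security_level classification; };

    /* A user may read a resource only if their clearance is at least as
     * high as the resource's classification (a simple "no read up" rule). */
    bool may_read(const struct user *u, const struct resource *r)
    {
        return u->clearance >= r->classification;
    }

For example, a user with SECRET clearance could read CONFIDENTIAL data but not TOP_SECRET data; a fuller access control mechanism would also regulate writing and cover subsystems and devices, as described above.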

4. Scheduling and Resource Management


A key task of the Operating System is to manage the various resources available to it (main memory space, I/O devices, processors) and to schedule their use by the various active processes. Any resource allocation and scheduling policy must consider the following three factors (a small illustrative selection routine is sketched after the list):

  • Fairness: Typically, we would like all processes that are competing for the use of a particular resource to be given approximately equal and fair access to that resource. This is especially so for jobs of the same class, that is, jobs of similar demands, which are charged the same rate.
  • Differential responsiveness: On the other hand, the operating system may need to discriminate among different classes of jobs with different service requirements. The operating system should attempt to make allocation and scheduling decisions to meet the total set of requirements, and it should make these decisions dynamically. For example, if a process is waiting for the use of an I/O device, the operating system may wish to schedule that process for execution as soon as possible to free up the device for later demands from other processes.
  • Efficiency: Within the constraints of fairness and differential responsiveness, the operating system should attempt to maximize throughput, minimize response time, and, in the case of time sharing, accommodate as many users as possible.
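
The following C sketch illustrates how the factors above can be combined when choosing the next process: a base priority expresses differential responsiveness between job classes, while an aging bonus keeps long-waiting processes from being starved, serving fairness. The structure, names, and weighting are arbitrary illustrative choices, not a specific scheduling algorithm.

    #include <stddef.h>

    /* Illustrative ready-queue entry. */
    struct ready_entry {
        int pid;
        int base_priority;  /* higher value = more urgent class of job */
        int waiting_time;   /* ticks spent waiting since last run      */
    };

    /* Effective priority: base priority plus an aging bonus (here one
     * point per 10 ticks of waiting, an arbitrary choice). */
    static int effective_priority(const struct ready_entry *e)
    {
        return e->base_priority + e->waiting_time / 10;
    }

    /* Pick the index of the ready process with the highest effective
     * priority. */
    size_t pick_next(const struct ready_entry *ready, size_t n)
    {
        size_t best = 0;
        for (size_t i = 1; i < n; i++) {
            if (effective_priority(&ready[i]) > effective_priority(&ready[best]))
                best = i;
        }
        return best;
    }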

5. System Structure


The operating system maintains a number of queues, each of which is simply a list of processes waiting for some resource. The short-term queue consists of processes that are in main memory (or at least an essential minimum portion is in main memory) and are ready to run. Any one of these processes could use the processor next. It is up to the short-term scheduler, or dispatcher, to pick one. A common strategy is to give each process in the queue some time in turn; this is referred to as a round-robin technique. Priority levels may also be used. The long-term queue is a list of new jobs waiting to use the system.
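
A minimal C sketch of the round-robin technique over the short-term queue, assuming a hypothetical run_for_one_slice hook that runs a process for one time slice and reports whether it is still runnable afterwards. The queue layout and sizes are illustrative.

    #include <stddef.h>

    #define QUEUE_CAP 64

    /* Short-term queue modelled as a circular buffer of process IDs. */
    struct queue {
        int    pids[QUEUE_CAP];
        size_t head;
        size_t count;
    };

    static int dequeue(struct queue *q)
    {
        int pid = q->pids[q->head];
        q->head = (q->head + 1) % QUEUE_CAP;
        q->count--;
        return pid;
    }

    static void enqueue(struct queue *q, int pid)
    {
        q->pids[(q->head + q->count) % QUEUE_CAP] = pid;
        q->count++;
    }

    /* Hypothetical hook: run the process until its slice expires or it
     * blocks; returns nonzero if it is still runnable afterwards. */
    extern int run_for_one_slice(int pid);

    /* Round-robin dispatch: give the process at the front of the queue one
     * time slice and, if it is still runnable, put it at the back so every
     * ready process gets a turn. */
    void dispatch(struct queue *short_term_queue)
    {
        while (short_term_queue->count > 0) {
            int pid = dequeue(short_term_queue);
            if (run_for_one_slice(pid))
                enqueue(short_term_queue, pid);
            /* Otherwise the process has blocked (e.g. joined an I/O queue)
             * or terminated and is handled elsewhere. */
        }
    }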

The operating system adds jobs to the system by transferring a process from the long-term queue to the short-term queue. At that time, a portion of main memory must be allocated to the incoming process. Thus, the operating system must be sure that it does not over-commit memory or processing time by admitting too many processes to the system. There is an I/O queue for each I/O device.

More than one process may request the use of the same I/O device, so all processes waiting to use each device are lined up in that device's queue. Again, the operating system must determine which process to assign to an available I/O device. If an interrupt occurs, the operating system receives control of the processor at the interrupt handler. A process may also explicitly invoke some operating system service, such as an I/O device handler, by means of a service call; in this case, a service-call handler is the entry point into the operating system. In either case, once the interrupt or service call is handled, the short-term scheduler is invoked to pick a process for execution.
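
The control flow just described can be summarized in a short C skeleton; every function here is a placeholder standing in for the corresponding part of a real operating system rather than an actual kernel interface.

    enum entry_reason { INTERRUPT, SERVICE_CALL };

    extern void handle_interrupt(void);      /* e.g. an I/O completion          */
    extern void handle_service_call(void);   /* e.g. a request to use a device  */
    extern void short_term_scheduler(void);  /* picks the next process to run   */

    /* The OS gains control either through an interrupt or through a service
     * call; once the event is handled, the short-term scheduler (dispatcher)
     * chooses which ready process runs next. */
    void os_entry(enum entry_reason reason)
    {
        if (reason == INTERRUPT)
            handle_interrupt();
        else
            handle_service_call();

        short_term_scheduler();
    }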

