Unit - II
Q1a) Explain i) Simple system structure ii) system calls
Ans: Simple system structure: Many commercial systems do not have a well defined structure.
Frequently such operating systems started as small, simple and limited systems and then grew beyond their original scope.
MS-DOS is an example of such a system. It was written to provide the most functionality in the least space.
UNIX is another system that was initially limited by hardware functionality. It consists of two separable parts: the kernel and the system programs.
The kernel is further separated into a series of interfaces and device drivers, which were added and expanded over the years as UNIX evolved.
Everything below the system call interface and above the physical hardware is the kernel.
The kernel provides the file system, CPU scheduling, memory management and other operating-system functions through system calls.
New versions of UNIX are designed to use more advanced hardware. The operating system can then retain much greater control over the computer and over the applications that make use of that computer.
Under the top-down approach, the overall functionality and features are determined and separated into components.
This separation allows programmers to hide information; they are therefore free to implement the low-level routines as they see fit, provided that the external interface of each routine stays unchanged and that the routine itself performs the advertised task.
Q2) Describe general system architecture in detail.
Ans: A computer system can be divided into four components: the hardware, the operating system, the application programs and the users.
The hardware - the central processing unit, the memory and the input/output devices - provides the basic computing resources.
The application programs, such as word processors, spreadsheets, compilers and web browsers, define the ways in which these resources are used to solve the computing problems of the users. The operating system controls and coordinates the use of the hardware among the various application programs.
The resources of a computer system are its hardware, software and data. The OS provides the means for the proper use of these resources in the operation of the computer system.
The operating system can be explored from two viewpoints: the user and the system. User view: The user view of the computer varies by the interface being used.
Most computer users sit in front of a PC, consisting of a monitor, keyboard, mouse and system unit.
Such a system is designed for one user to monopolize its resources, to maximize the work that the user is performing.
Some users sit at a terminal connected to a mainframe or minicomputer. Other users access the same computer through other terminals and may exchange information.
Other users sit at workstations, connected to networks of other workstations and servers.
These users have dedicated resources at their disposal, but they also share resources.
System view: We can view OS as a resource allocator.
A computer system has many resources, hardware and software, that may be required to solve a problem: CPU time, memory space, file-storage space, I/O devices, and so on. An OS is also a control program, which manages the execution of user programs to prevent errors and improper use of the computer.
The primary goal of an operating system is efficient operation of the computer system. This is the case for large, shared, multiuser systems.
ii) System call:
System calls provide an interface between a process and the operating system. System calls allow user-level processes to request services from the operating system which the process itself is not allowed to perform.
System programs provide basic functioning to users so that they do not need to write their own environment for program development (editors, compilers) and program execution (shells). In some sense, they are bundles of useful system calls.
Operating systems contain sets of routines for performing various low-level operations.
For example, all operating systems have a routine for creating a directory. If you want to execute an operating system routine from a program, you must make a system call. In computing, a system call, or software interrupt is the mechanism used by an application program to request service from the operating system.
System calls often use a special machine code instruction which causes the processor to change mode (e.g. to "supervisor mode" or "protected mode").
This allows the OS to perform restricted actions such as accessing hardware devices or the memory management unit.
System calls often use a special CPU instruction which causes the processor to transfer control to more privileged code, as previously specified by the more privileged code. This allows the more privileged code to specify where it will be entered as well as important processor state at the time of entry.
When the system call is invoked, the program which invoked it is interrupted, and information needed to continue its execution later is saved.
The processor then begins executing the higher-privileged code, which, by examining processor state set by the less-privileged code and/or its stack, determines what is being requested. When it is finished, it returns to the program, restoring the saved state, and the program continues executing. As an example of how system calls are used, consider writing a simple program to read data from one file and copy it to another file.
The first input that the program will need is the names of two files, the input file and the output file.
Once the two names are obtained the program must open the input file and create the output file.
Each of these operations requires another system call and may encounter possible error conditions.
When the program tries to open the input file, it may find that no file of that name exists or that the file is protected against access.
In these cases the program should print a message on the console (which uses another sequence of system calls) and then terminate abnormally (another system call). Once both files are set up, we enter a loop that reads from the input file and writes to the output file.
Finally, after the entire file is copied, the program may close both files (another system call), write a message to the console (more system calls), and finally terminate normally (the final system call).
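A minimal sketch of this file-copy sequence, using the POSIX system calls open, read, write and close; the file names are placeholders and error handling is kept to a minimum.

/* Sketch of the file-copy example using POSIX system calls. */
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char buf[4096];
    ssize_t n;

    int in  = open("input.txt", O_RDONLY);                            /* open the input file */
    int out = open("output.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644); /* create the output file */
    if (in < 0 || out < 0) {                                          /* possible error cases */
        perror("open");                                               /* message to the console */
        exit(EXIT_FAILURE);                                           /* abnormal termination */
    }
    while ((n = read(in, buf, sizeof buf)) > 0)                       /* read loop */
        write(out, buf, (size_t)n);                                   /* write to the output file */

    close(in);                                                        /* close both files */
    close(out);
    printf("copy complete\n");                                        /* message to the console */
    return 0;                                                         /* normal termination */
}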
System calls can be grouped into five major categories: process control, file management, device management, information maintenance and communication.
Q3) Give an overview of secondary storage management.
Ans: Secondary-Storage Management: Generally speaking, systems have several levels of storage, including primary storage, secondary storage and cache storage. Instructions and data must be placed in primary storage or cache to be referenced by a running program. Because main memory is too small to accommodate all data and programs, and because its data are lost when power is lost, the computer system must provide secondary storage to back up main memory. Secondary storage consists of tapes, disks, and other media designed to hold information that will eventually be accessed in primary storage. Storage (primary, secondary, cache) is ordinarily divided into bytes or words consisting of a fixed number of bytes. Each location in storage has an address; the set of all addresses available to a program is called an address space. The three major activities of an operating system in regard to secondary-storage management are:
1. Managing the free space available on the secondary-storage device.
2. Allocation of storage space when new files have to be written.
3. Scheduling the requests for memory access.
Q4) Explain:- i) I/O systems ii) Protection system
Ans: I/O systems: One of the purposes of an operating system is to hide the complexities of specific hardware from the user. The I/O subsystem consists of
• A memory management component that includes buffering, caching and spooling
• A general device drivers interface
• Drivers for specific hardware devices.
The OS has a set of device drivers, i.e. software routines that control the respective I/O devices through their controllers. Device drivers are written for the keyboard, mouse, monitor, disk, etc., and accordingly they are named keyboard device driver, mouse device driver, and so on.
A device driver hides the peculiarities of its I/O device. A device communicates with the host CPU through a connection point called a port.
A controller is an electronic device that controls a port, a bus, or a device.
ii) Protection system:
If a computer system has multiple users and allows the concurrent execution of multiple processes, then the various processes must be protected from one another's activities.
For that purpose, mechanisms ensure that files, memory segments, the CPU and other resources can be operated on only by those processes that have gained proper authorization from the operating system.
For example memory addressing hardware ensures that a process can execute only within its own address space.
The timer ensures that no process can gain control of the CPU without eventually relinquishing control. Device-control registers are not accessible to users, so the integrity of the various peripheral devices is protected.
Protection is any mechanism for controlling the access of programs, process or users to the resources defined by a computer system.
This mechanism must provide means for specification of the controls to be imposed and means for enforcement.
Protection can improve reliability by detecting latent errors at the interfaces between component subsystems.
Early detection of interface errors can often prevent contamination of a healthy subsystem by another subsystem that is malfunctioning.
An unprotected resource cannot defend against use by an unauthorized or incompetent user.
A protection oriented system provides a means to distinguish between authorized and unauthorized usage.
Q5) Explain how operating system manages the devices?
Ans: Once a program is running it may need additional resources to proceed. Additional resources may be more memory, tape drives, access to files, and so on.
If the resources are available they can be granted and control can be returned to the user program; otherwise the program will have to wait until sufficient resources are available. Files can be thought of as abstract or virtual devices. Thus many of the system calls for files are also needed for devices.
If the system has multiple users, we must first request the device to ensure exclusive use of it.
After we have finished with the device we must release it. These functions are similar to the open and close system calls for files.
Once the device has been requested, we can read, write and reposition the device just as we can with ordinary files.
In fact, the similarity between I/O devices and files is so great that many operating systems, including UNIX and MS-DOS, merge the two into a combined file-device structure. In this case I/O devices are identified by special file names.
Q6) List the advantages of layered approach to system design.
Ans: The modularization of a system can be done in many ways. One method is the layered approach in which the operating system is broken up into a number of layers, each built on top of lower layers.
The bottom layer is the hardware; the highest is the user interface.
An operating system layer is an implementation of an abstract object that is the encapsulation of data and of the operations that can manipulate those data.
A typical operating-system layer, say layer M, consists of data structures and a set of routines that can be invoked by higher-level layers.
Layer M, in turn, can invoke operations on lower-level layers. The main advantage of the layered approach is modularity. The layers are selected such that each uses the functions and services of only lower-level layers.
This approach simplifies debugging and system verification.
The first layer can be debugged without any concern for the rest of the system because, by definition, it uses only the basic hardware to implement its functions. Once the first layer is debugged, its correct functioning can be assumed while the second layer is debugged, and so on.
If an error is found during the debugging of a particular layer, the error must be on that layer, because the layers below it have already been debugged.
Thus the design and implementation of the system are simplified when the system is broken down into layers.
Each layer is implemented with only those operations provided by lower level layers. A layer does not need to know how these operations are implemented;
it needs to know only what these operations do.
Hence each layer hides the existence of certain data structures, operations and hardware from the higher level layers.
The major difficulty with the layered approach involves the careful definition of the layers, because a layer can use only those layers below it.
A final problem with the layered approach is that such systems tend to be less efficient than other types.
For instance, when a user program executes an I/O operation, it executes a system call that is trapped to the I/O layer, which calls the memory-management layer, which in turn calls the CPU-scheduling layer, which is then passed to the hardware. At each layer the parameters may be modified, data may need to be passed, and so on.
Each layer adds overhead to the system call; the net result is a system call that takes longer than one on a non-layered system.

Q7) What different services are offered by an operating system? Explain.
Ans: Following are five services provided by operating systems for the convenience of users.
Program Execution: The purpose of computer systems is to allow the user to execute programs.
So the operating systems provide an environment where the user can conveniently run programs.
The user does not have to worry about the memory allocation or multitasking or anything.
These things are taken care of by the operating system. Running a program involves allocating and deallocating memory and CPU scheduling in the case of multiprogramming.
These functions cannot be given to user-level programs, so user-level programs cannot help the user to run programs independently without help from the operating system. I/O Operations: Each program requires input and produces output.
This involves the use of I/O. The operating system hides from the user the details of the underlying hardware used for I/O.
All the user sees is that the I/O has been performed without any details.
So the operating systems by providing I/O makes it convenient for the users to run programs.
For efficiency and protection, users cannot control I/O directly, so this service cannot be provided by user-level programs.
File System Manipulation: The output of a program may need to be written into new files or input taken from some files.
The operating systems provide this service. The user does not have to worry about secondary storage management.
The user gives a command for reading from or writing to a file and sees his/her task accomplished.
Thus operating systems make it easier for user programs to accomplish their task.
This service involves secondary-storage management. The speed of I/O that depends on secondary-storage management is critical to the speed of many programs; hence it is best left to the operating system to manage rather than giving individual users control of it.
It is not difficult for user-level programs to provide these services, but for the above-mentioned reasons it is best if this service is left with the operating system.
Communications: There are instances where processes need to communicate with each other to exchange information. It may be between processes running on the same computer or running on the different computers.
By providing this service the operating system relieves the user of the worry of passing messages between processes.
In cases where the messages need to be passed to processes on other computers through a network, this can be done by user programs.
The user program may be customized to the specifics of the hardware through which the message transits and may provide the service interface to the operating system.
Error Detection: An error in one part of the system may cause malfunctioning of the complete system. To avoid such a situation the operating system constantly monitors the system to detect errors.
This relieves the user of the worry of errors propagating to various parts of the system and causing malfunctions.
This service cannot be allowed to be handled by user programs, because it involves monitoring and, in some cases, altering areas of memory or deallocating memory for a faulty process,
or perhaps relinquishing the CPU of a process that goes into an infinite loop. These tasks are too critical to be handed over to user programs.
A user program if given these privileges can interfere with the correct (normal) operation of the operating systems.

Q8) Explain i) Command interpreter system. ii) Information maintenance
Ans: Command Interpreter System: A command interpreter is an interface of the operating system with the user.
The user gives commands which are executed by the operating system (usually by turning them into system calls).
The main function of a command interpreter is to get and execute the next user specified command.
The command interpreter is usually not part of the kernel, since multiple command interpreters (shells, in UNIX terminology) may be supported by an operating system, and they do not really need to run in kernel mode.
There are two main advantages to separating the command interpreter from the kernel.
1. If we want to change the way the command interpreter looks, i.e. if anyone wants to change the interface of the command interpreter, he is able to do so only if the command interpreter is separate from the kernel.
We cannot change the code of the kernel, so we could not modify the interface if it were part of the kernel.
2. If the command interpreter is a part of the kernel, it is possible for a malicious process to gain access to certain parts of the kernel that it should not have. To avoid this ugly scenario it is advantageous to have the command interpreter separate from the kernel.
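The main loop of a command interpreter can be sketched as a small C program. This is only a toy illustration of the "get and execute the next user-specified command" cycle described above, assuming single-word commands and no argument parsing; it is not the implementation of any real shell.

/* Toy command interpreter loop: read a command, run it in a child process. */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    char line[256];
    for (;;) {
        printf("> ");                       /* prompt */
        if (fgets(line, sizeof line, stdin) == NULL)
            break;                          /* end of input: exit the shell */
        line[strcspn(line, "\n")] = '\0';   /* strip the trailing newline */
        if (line[0] == '\0')
            continue;                       /* empty command */

        pid_t pid = fork();                 /* create a new process */
        if (pid == 0) {                     /* child: execute the command */
            execlp(line, line, (char *)NULL);
            perror("exec");                 /* reached only if exec fails */
            _exit(EXIT_FAILURE);
        }
        waitpid(pid, NULL, 0);              /* parent: wait for completion */
    }
    return 0;
}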
Information maintenance: Many system calls exist simply for the purpose of transferring information between the user program and the operating system.
For example, most systems have a system call to return the current time and date.
Other system calls may return information about the system, such as the number of current users, the version number of the operating system, the amount of free memory or disk space, and so on.
In addition, the OS keeps information about all its processes, and there are system calls to access this information. Generally there are also calls to reset the process information.
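As an illustration, the following sketch uses a few standard POSIX information-maintenance calls: the process ID, the current time and date, and the operating-system name and version.

/* Illustrative information-maintenance system calls. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/utsname.h>

int main(void) {
    time_t now = time(NULL);               /* current time and date */
    struct utsname info;
    uname(&info);                          /* system name and version */

    printf("pid  : %d\n", (int)getpid());  /* identifier of this process */
    printf("time : %s", ctime(&now));
    printf("OS   : %s %s\n", info.sysname, info.release);
    return 0;
}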

Q9) Explain the responsibilities of operating system in process management.
Ans: Process Management: The operating system manages many kinds of activities, ranging from user programs to system programs such as the printer spooler, name servers, file servers, etc.
Each of these activities is encapsulated in a process. A process includes the complete execution context (code, data, PC, registers, OS resources in use, etc.).
It is important to note that a process is not a program; a process is only one instance of a program in execution.
Many processes can be running the same program. The five major activities of an operating system in regard to process management are:
• Creation and deletion of user and system processes:
There are four principal events that cause processes to be created:
i) System initialization
ii) Execution of process creation system call by a running process
iii) A user request to create a new process
iv) Initiation of a batch job
When an operating system is booted, several processes are created. Some of these are foreground processes and others are background processes that are not associated with a particular user.
Processes that stay in the background to handle some activity, such as e-mail, web pages and so on, are called daemons.
Often a running process will issue system calls to create one or more new processes to help it do its job, as in the sketch below.
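A minimal sketch of such a process-creation sequence on a UNIX-like system, using fork() and exec(); the command /bin/ls is only an example and error handling is kept to a minimum.

/* Sketch of process creation: fork a child and run a new program in it. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                              /* create a new process */
    if (pid < 0) {
        perror("fork");                              /* creation failed */
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        execl("/bin/ls", "ls", "-l", (char *)NULL);  /* child runs a new program */
        _exit(EXIT_FAILURE);                         /* reached only if exec fails */
    }
    waitpid(pid, NULL, 0);                           /* parent waits for the child */
    printf("child %d finished\n", (int)pid);
    return 0;
}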
Process termination: A process will terminate due to one of the following conditions:
 Normal exit
 Error exit
 Fatal error
 Killed by another process
Most processes terminate because they have done their work. When a compiler has compiled the program given to it, the compiler executes a system call to tell the operating system that it is finished.
The second reason for termination is that the process discovers a fatal error.
The third reason for termination is an error caused by the process, often due to a program bug.
The fourth reason that a process might terminate is that the process executes a system call telling the operating system to kill some other process.
• Suspension and resumption of processes:
During its life cycle a process may pass through different states; each such change is called a state transition.
• A mechanism for process synchronization and process communication:
Communication between processes takes place by calls to send and receive primitives.
There are different design options for implementing each primitive.
• A mechanism for deadlock handling:

Q10) What do you mean by system structure? Explain with an example any one structure.
Ans: System structure: A system as large and complex as a modern operating system must be engineered carefully if it is to function properly and be modified easily.
A common approach is to partition the task into small components rather than have one monolithic system.
Each of these modules should be a well-defined portion of the system, with carefully defined inputs, outputs and function.
There are two types of structures: simple and layered.
The disk controller, for example, does not know or care whether a read request came from the CPU or from a DMA controller. Typically the memory address to write to is on the bus address lines, so when the disk controller fetches the next word from its internal buffer, it knows where to write it. The write to memory is another standard bus cycle, acknowledged to the DMA controller, also over the bus. The DMA controller then increments the memory address to use and decrements the byte count. If the byte count is still greater than 0, these steps are repeated until the count reaches 0. At that time the DMA controller interrupts the CPU to let it know that the transfer is now complete. When the operating system starts up, it does not have to copy the disk block to memory; it is already there.

Q11) Discuss any four activities of an operating system in regard to file management.
Ans: File Management: A file is a collection of related information defined by its creator.
Computers can store files on the disk (secondary storage), which provides long-term storage.
Some examples of storage media are magnetic tape, magnetic disk and optical disk. Each of these media has its own properties, such as speed,
capacity, data-transfer rate and access method.
File systems are normally organized into directories to ease their use. These directories may contain files and other directories.
The five major activities of an operating system in regard to file management are:
1. The creation and deletion of files: Two steps are necessary for creating files.
First, space in the file system must be found for the file, and second, an entry recording the name of the file and its location in the file system (and possibly other information) must be made in the directory.
To delete a file we search the directory for the named file.
Having found the associated directory entry, we release all the space so that it can be reused by other files, and erase the directory entry.
2. The creation and deletion of directories:
Following are some directory operations in the case of UNIX:
Create: a directory is created.
Delete: a directory is deleted only if it is empty.
Opendir: directories can be read. For example, to list all the files in a directory, a listing program opens the directory to read out the names of all the files it contains.
Closedir: when a directory has been read, it should be closed to free up internal table space (see the sketch below).
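A small illustration of these directory operations using the corresponding POSIX calls opendir, readdir and closedir; it simply lists the names of the files in the current directory.

/* List the entries of the current directory. */
#include <stdio.h>
#include <dirent.h>

int main(void) {
    DIR *dir = opendir(".");                 /* open the directory for reading */
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL)   /* read out each name it contains */
        printf("%s\n", entry->d_name);
    closedir(dir);                           /* free up internal table space */
    return 0;
}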
3. The support of primitives for manipulating files and directories:
Implementing file storage means keeping track of which disk blocks go with which file.
Two such methods are contiguous allocation and linked-list allocation.
Contiguous disk allocation has two significant advantages.
First, it is simple to implement, because keeping track of where a file's blocks are is reduced to remembering two numbers:
the disk address of the first block and the number of blocks in the file.
Given the number of the first block, the number of any other block can be found by a simple addition.
Second, the read performance is excellent, because the entire file can be read from the disk in a single operation.
The second method for storing files is to keep each one as a linked list of disk blocks.
The first word of each block is used as a pointer to the next one; the rest of the block is for data (a small structural sketch follows).
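A toy sketch of linked-list allocation under an assumed fixed block size; the structure and the helper that locates the block holding a given byte offset are illustrative only, not the layout of any real file system.

/* Linked-list file allocation sketch: each block points to the next. */
#define BLOCK_SIZE 512

struct disk_block {
    int  next;                               /* number of the next block, -1 at end of file */
    char data[BLOCK_SIZE - sizeof(int)];     /* remaining space holds file data */
};

/* Walk the chain to find the block number holding byte offset 'pos'. */
int block_of_offset(const struct disk_block disk[], int first, long pos) {
    long per_block = BLOCK_SIZE - (long)sizeof(int);
    int  block = first;
    while (pos >= per_block && block != -1) {   /* follow the pointers block by block */
        block = disk[block].next;
        pos  -= per_block;
    }
    return block;
}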
4. The mapping of files onto secondary storage.
5. The backup of files on stable storage media: Destruction of a file system is a greater disaster than destruction of a computer.
If a computer is destroyed by fire or lightning surges, then to prevent files from being lost they should be backed up.
Backups to tape are generally made to handle one of two potential problems: recovery from disaster, and recovery from user error.

Q12) Explain following communication model: i) Message passing ii) Shared memory
Ans: Message passing: In the message-passing model, information is exchanged through an interprocess-communication facility provided by the operating system.
Before communication can take place a connection must be opened.
The name of the other communicator must be known, be it another process on the same CPU or a process on another computer connected by a communication network.
Each computer in a network has a host name such as an IP name, by which it is commonly known.
Similarly, each process has a process name, which is translated into an equivalent identifier by which the operating system can refer to it.
The get hostid and get processid system calls do this translation.
These identifiers are then passed to the general purpose open and close calls provided by the file system or to specific open connection and close connection system calls.
The recipient process usually gives its permission for communication to take place with an accept connection call.
The source of the communication, known as the client, and the receiving daemon, known as the server, then exchange messages by read message and write message system calls.
The close connection call terminates the communication.
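As a concrete, if simplified, illustration of message passing, the following sketch lets a parent and a child process exchange a message over a POSIX pipe; the pipe stands in for the generic open-connection/read-message/write-message calls named above.

/* Message passing between a parent ("server") and child ("client") via a pipe. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                                 /* create the communication channel */

    if (fork() == 0) {                        /* child: the "client" sends a message */
        close(fd[0]);
        const char *msg = "hello from the client";
        write(fd[1], msg, strlen(msg) + 1);   /* write message */
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                             /* parent: the "server" receives it */
    char buf[64];
    read(fd[0], buf, sizeof buf);             /* read message */
    printf("server received: %s\n", buf);
    close(fd[0]);                             /* close the connection */
    wait(NULL);
    return 0;
}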
ii) Shared memory: In the shared-memory model, processes use map-memory system calls to gain access to regions of memory owned by other processes.
OS tries to prevent one process from accessing another process’s memory.
Shared memory requires that several processes agree to remove this restriction.
They may then exchange information by reading and writing data in these shared areas.
The form of data and the location are determined by these processes and are not under the operating system’s control.
The processes are also responsible for ensuring that they are not writing to the same location simultaneously (a minimal sketch follows).
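A minimal sketch of the shared-memory model using the POSIX calls shm_open and mmap; the region name /demo_region and the string written into it are made up, error checking is omitted for brevity, and the synchronization that cooperating processes would need is not shown.

/* Create, map and use a named shared-memory region. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void) {
    const size_t size = 4096;
    int fd = shm_open("/demo_region", O_CREAT | O_RDWR, 0600);  /* named shared region */
    ftruncate(fd, size);                                        /* set its size */

    /* Map the region into this process's address space. */
    char *region = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    strcpy(region, "data visible to every process that maps /demo_region");
    printf("%s\n", region);

    munmap(region, size);                                       /* unmap when done */
    close(fd);
    shm_unlink("/demo_region");                                 /* remove the region */
    return 0;
}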

Q13a) Discuss the memory management in detail.
Ans: Memory management is the act of managing computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and to free them for reuse when no longer needed. This is critical to the computer system.
Several methods have been devised that increase the effectiveness of memory management.
Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the effectively available amount of RAM using paging or swapping to secondary storage.
The quality of the virtual memory manager can have a big impact on overall system performance.
The following methods support memory management. Dynamic memory allocation:
The task of fulfilling an allocation request consists of finding a block of unused memory of sufficient size.
Even though this task seems simple, several issues make the implementation complex.
One such problem is internal and external fragmentation, which arises when there are many small gaps between allocated memory blocks, none of which is sufficient to fulfil a request.
Another is that allocator's metadata can inflate the size of (individually) small allocations; this effect can be reduced by chunking.
Usually, memory is allocated from a large pool of unused memory area called the heap (also called the free store).
Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually via a pointer reference.
The precise algorithm used to organize the memory area and allocate and deallocate chunks is hidden behind an abstract interface and may use any of the methods described below.
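As a simple illustration of allocation from the heap through such an abstract interface, the following sketch uses the C library's malloc and free; the buffer size is arbitrary.

/* Request a block from the heap, use it via a pointer, then release it. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t n = 1000;
    int *values = malloc(n * sizeof *values);   /* request a block of sufficient size */
    if (values == NULL) {
        fprintf(stderr, "allocation request could not be fulfilled\n");
        return 1;
    }
    for (size_t i = 0; i < n; i++)              /* access the block indirectly via the pointer */
        values[i] = (int)i;
    printf("last value: %d\n", values[n - 1]);
    free(values);                               /* return the block for reuse */
    return 0;
}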
Systems with virtual memory : Virtual memory is a method of decoupling the memory organization from the actual physical hardware.
The applications operate on memory via virtual addresses. Each time an attempt to access the actual data is made,
the virtual-memory subsystem translates the virtual address to a physical address, which corresponds to the address of the data as seen by the hardware.
The address translation process itself is managed by the operating system.
In this way the addition of virtual memory enables granular control over main memory and the ways to access it.
This can be used to increase the effectiveness of memory management.
In virtual memory systems the operating system controls the ways a process can access the memory.
This feature can be used to disallow a process from reading or writing memory that is not allocated to it,
essentially preventing malicious or malfunctioning code in one program from interfering with the operation of other running programs.
Paging: In computer operating systems, paging is one of the memory-management schemes by which a computer can store and retrieve data from secondary storage for use in main memory.
In the paging memory-management scheme, the operating system retrieves data from secondary storage in same-size blocks called pages.
The main advantage of paging over memory segmentation is that it allows the physical address space of a process to be noncontiguous.
Before the time paging was used, systems had to fit whole programs into storage contiguously, which caused various storage and fragmentation problems.
Paging is an important part of virtual memory implementation in most contemporary general-purpose operating systems,
allowing them to use disk storage for data that does not fit into physical random-access memory (RAM).
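A small worked sketch of the page-based address split described above, assuming a 4 KB page size and a made-up page-table entry; real page tables are maintained by the OS and walked by the MMU.

/* Split a virtual address into page number and offset, then rebuild a
 * physical address from a hypothetical frame number. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u                       /* assumed page size: 4 KB */

int main(void) {
    uint32_t virtual_address = 0x00012ABC;
    uint32_t page_number = virtual_address / PAGE_SIZE;   /* which page */
    uint32_t offset      = virtual_address % PAGE_SIZE;   /* where within the page */

    /* Hypothetical page-table entry: page 0x12 maps to frame 0x7. */
    uint32_t frame_number = 0x7;
    uint32_t physical_address = frame_number * PAGE_SIZE + offset;

    printf("page 0x%X, offset 0x%X -> physical 0x%X\n",
           (unsigned)page_number, (unsigned)offset, (unsigned)physical_address);
    return 0;
}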
Fragmentation: In computer storage, fragmentation is a phenomenon in which storage space is used inefficiently,
reducing storage capacity and in most cases reducing the performance. The term is also used to denote the wasted space itself.
There are three different but related forms of fragmentation: external fragmentation, internal fragmentation, and data fragmentation.
Various storage allocation schemes exhibit one or more of these weaknesses.
Fragmentation can be accepted in return for an increase in speed or simplicity.

Q14) Given a system with n processes, in how many ways can those processes be scheduled? What is the purpose of system calls?
Ans: The n processes can be scheduled on a single processor in n! different ways. The purpose of system calls is to provide the interface between a process and the operating system: they allow user programs to request services, such as I/O, file manipulation and process control, that they are not permitted to perform directly. As processes enter the system they are put into a job queue, which consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. It is a doubly linked list of processes that are ready to run.
They are ordered by priority, with the highest-priority process at the front of the queue.
Because the queue works on a first-in first-out principle, whenever the CPU takes up a new process
it actually selects the process with the highest priority.
A ready-queue header contains pointers to the first and final PCBs in the list, i.e. the header of the queue contains two pointers.
The first pointer points to the first PCB and the second points to the last PCB in the list, and each PCB has a pointer to the next process in the ready queue (a structural sketch follows).
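A structural sketch of such a ready queue in C; the field names are illustrative and not taken from any particular operating system.

/* Ready queue as a doubly linked list of PCBs with head and tail pointers. */
#include <stddef.h>

struct pcb {
    int         pid;          /* process identifier */
    int         priority;     /* scheduling priority */
    struct pcb *next;         /* next PCB in the ready queue */
    struct pcb *prev;         /* previous PCB in the ready queue */
};

struct ready_queue {
    struct pcb *head;         /* pointer to the first PCB */
    struct pcb *tail;         /* pointer to the last PCB */
};

/* Remove and return the PCB at the front of the queue (the
 * highest-priority process when the queue is kept sorted). */
struct pcb *dequeue(struct ready_queue *q) {
    struct pcb *p = q->head;
    if (p != NULL) {
        q->head = p->next;
        if (q->head != NULL)
            q->head->prev = NULL;
        else
            q->tail = NULL;
    }
    return p;
}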


Q15) Explain various operating system components in brief.
Ans: The operating system comprises a set of software packages that can be used to manage interactions with the hardware.
The following elements are generally included in this set of software:
Kernel : The central module of an operating system.
It is the part of the operating system that loads first, and it remains in main memory.
Because it stays in memory, it is important for the kernel to be as small as possible while still providing all the essential services required by other parts of the operating system and applications.
Typically, the kernel is responsible for memory management, process and task management, and disk management.
Shell: The shell allows communication with the operating system via a control language, letting the user control the peripherals without knowing the characteristics of the hardware used, the management of physical addresses, etc.
It is the outermost layer of a program; shell is another term for user interface. Operating systems and applications sometimes provide an alternative shell to make interaction with the program easier.
For example, if the application is usually command driven, the shell might be a menu-driven system that translates the user's selections into the appropriate commands.
Sometimes called command shell, a shell is the command processor interface. The command processor is the program that executes operating system commands. The shell, therefore, is the part of the command processor that accepts commands.
After verifying that the commands are valid, the shell sends them to another part of the command processor to be executed.
UNIX systems offer a choice between several different shells, the most popular being the C shell, the Bourne shell, and the Korn shell.
Each offers a somewhat different command language.
File system: The file system allows files to be recorded in a tree structure. Also referred to as simply a filesystem, it is the system that an operating system or program uses to organize and keep track of files.
For example, a hierarchical file system is one that uses directories to organize files into a tree structure.
Although the operating system provides its own file management system, you can buy separate file management systems.
These systems interact smoothly with the operating system but provide more features, such as improved backup procedures and stricter file protection.

Q16) When is a process terminated? Explain the process control block.
Ans: A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit system call.
At that point the process may return data to its parent process.
All the resources of the process, including physical and virtual memory, open files and I/O buffers, are deallocated by the operating system.
Termination occurs under additional circumstances. A process can cause the termination of another process via an appropriate system call.
Usually only the parent of the process that is to be terminated can invoke such a system call; otherwise users could arbitrarily kill each other's jobs.
A parent therefore needs to know the identities of its children.
Thus when one process creates a new process, the identity of the newly created process is passed to the parent.
A parent may terminate the execution of one of its children for a variety of reasons, such as these:
• The child has exceeded its usage of some of the resources that it has been allocated.
This requires the parent to have a mechanism to inspect the state of its children.
• The task assigned to the child is no longer required.
• The parent is exiting and the operating system does not allow a child to continue if its parent terminates.
Process control block: A process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor.
The PCB contains important information about the specific process including
• The current state of the process i.e., whether it is ready, running, waiting, or whatever.
• Unique identification of the process in order to track "which is which" information.
• A pointer to parent process.
• Similarly, a pointer to child process (if it exists).
• The priority of process (a part of CPU scheduling information).
• Pointers to locate memory of processes.
• A register save area.
• The processor it is running on.
The PCB is a central store that allows the operating system to locate key information about a process.
Thus, the PCB is the data structure that defines a process to the operating system (a structural sketch follows).
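A structural sketch of a PCB with fields mirroring the list above; the names and types are illustrative, not taken from any particular operating system.

/* Illustrative process control block. */
enum proc_state { READY, RUNNING, WAITING, TERMINATED };

struct process_control_block {
    int              pid;            /* unique identification of the process */
    enum proc_state  state;          /* current state: ready, running, waiting, ... */
    struct process_control_block *parent;   /* pointer to the parent process */
    struct process_control_block *child;    /* pointer to a child process, if any */
    int              priority;       /* CPU-scheduling information */
    void            *page_table;     /* pointer used to locate the process's memory */
    unsigned long    registers[32];  /* register save area */
    int              cpu;            /* the processor it is running on */
};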

Q17) Give example of virtual memory. Explain paging and segmentation.
Ans : Virtual Memory : An imaginary memory area supported by some operating systems (for example, Windows but not DOS) in conjunction with the hardware.
You can think of virtual memory as an alternate set of memory addresses.
Programs use these virtual addresses rather than real addresses to store instructions and data.
When the program is actually executed, the virtual addresses are converted into real memory addresses.
The purpose of virtual memory is to enlarge the address space, the set of addresses a program can utilize.
For example, virtual memory might contain twice as many addresses as main memory.
A program using all of virtual memory, therefore, would not be able to fit in main memory all at once.
Nevertheless, the computer could execute such a program by copying into main memory those portions of the program needed at any given point during execution.
To facilitate copying virtual memory into real memory, the operating system divides virtual memory into pages, each of which contains a fixed number of addresses.
Each page is stored on a disk until it is needed. When the page is needed, the operating system copies it from disk to main memory, translating the virtual addresses into real addresses.
The process of translating virtual addresses into real addresses is called mapping. The copying of virtual pages from disk to main memory is known as paging or swapping.
Paging: Operating systems use various techniques to allocate memory.
Pages are fixed units of computer memory allocated by the computer for storing and processing information.
Pages are easier to place in computer memory since they are always the same size.
Paging is a technique used by virtual-memory operating systems to help ensure that the data you need is available as quickly as possible.
The operating system copies a certain number of pages from your storage device to main memory.
When a program needs a page that is not in main memory, the operating system copies the required page into memory and copies another page back to the disk.
One says that the operating system pages the data. Each time a page is needed that is not currently in memory, a page fault occurs.
An invalid page fault occurs when the address of the page being requested is invalid. In this case, the application is usually aborted.
This type of virtual memory is called paged virtual memory. Another form of virtual memory is segmented virtual memory.
Segmentation: Memory segmentation is the division of computer memory into segments or sections.
Segments or sections are also used in object files of compiled programs when they are linked together into a program image, or when the image is loaded into memory.
In a computer system using segmentation, a reference to a memory location includes a value that identifies a segment and an offset within that segment.
Different segments may be created for different program modules, or for different classes of memory usage such as code and data segments.
Certain segments may even be shared between programs.
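A small sketch of how such a (segment, offset) reference could be translated using a segment table holding base and limit values; the table contents here are made up for illustration.

/* Translate a (segment, offset) reference with a base/limit segment table. */
#include <stdio.h>
#include <stdint.h>

struct segment { uint32_t base; uint32_t limit; };

int main(void) {
    /* Hypothetical segment table: a code segment and a data segment. */
    struct segment table[] = { {0x1000, 0x4000}, {0x8000, 0x2000} };

    uint32_t seg = 1, offset = 0x0150;             /* reference: segment 1, offset 0x150 */
    if (offset >= table[seg].limit) {              /* limit check */
        printf("segmentation fault: offset outside segment\n");
        return 1;
    }
    uint32_t physical = table[seg].base + offset;  /* base + offset */
    printf("segment %u offset 0x%X -> physical 0x%X\n",
           (unsigned)seg, (unsigned)offset, (unsigned)physical);
    return 0;
}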

Q18) What are logical and physical addresses? Explain the protection system.
Ans: Logical and physical address: In computing, a physical address (also real address or binary address)
is the memory address that is electronically presented (in the form of a binary number) on the computer's address-bus circuitry in order to enable the data bus to access a particular storage cell of main memory.
In computing, a logical address is the address at which an item (memory cell, storage element, network host) appears to reside from the perspective of an executing application program.
A logical address may be different from the physical address due to the operation of an address translator or mapping function.
Such mapping functions may be, in the case of a computer memory architecture, a memory management unit (MMU) between the CPU and the memory bus, or an address translation layer, e.g. the Data Link Layer, between the hardware and the internetworking protocols (Internet Protocol) in a computer networking system.
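In the memory case, one of the simplest mapping functions is a single relocation (base) register; the sketch below uses made-up values to show how a logical address issued by a program becomes a physical address.

/* Logical-to-physical mapping with a single relocation register. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t relocation_register = 0x14000;       /* where the process is loaded in memory */
    uint32_t logical_address    = 0x0346;         /* address as seen by the program */
    uint32_t physical_address   = relocation_register + logical_address;

    printf("logical 0x%X -> physical 0x%X\n",
           (unsigned)logical_address, (unsigned)physical_address);
    return 0;
}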
