Unit - I
Q1a) Explain i) Time sharing system ii) Distributed system.
Ans: i) Time sharing system: These systems are interactive and allow users to interact with the system through OS commands.
A user program can be entered and executed in interactive mode. Each interactive user is assigned a time slice in round-robin fashion, during which it controls the CPU. During its time slice the process gets control of the CPU and tries to complete its computation; if the computation is not finished within the assigned slot, the running process is preempted and must wait for a time slot in the next cycle to resume from where it left off.
The time-slot cycle is adjusted so that each user feels as if the CPU is assigned to it all the time. These systems provide quick terminal response and allow online debugging. Each user virtually gets the entire machine, which means that the CPU's time is shared among many user processes. Time sharing, or multitasking, is a logical extension of multiprogramming: the CPU executes multiple jobs by switching between them, but the switching is so frequent that users can interact with each program while it is running.
Time sharing or multitasking thus handles many tasks apparently simultaneously.
It is also possible on a single-user system. For example, in Windows one can have more than one task open and running at the same time.
A time-shared system allows many users to share the computer simultaneously. Because each action or command in such a system tends to be short, only a small amount of CPU time is needed for each user.
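The round-robin time slicing described above can be sketched in C. This is a minimal illustration rather than a real scheduler; the number of processes, the quantum, and the remaining-work values are assumed for the example.

```c
#include <stdio.h>

#define NUM_PROCS 3
#define QUANTUM   4   /* assumed time slice, in ticks */

/* Remaining work (in ticks) for each hypothetical process. */
static int remaining[NUM_PROCS] = {10, 6, 3};

int main(void) {
    int finished = 0;
    while (finished < NUM_PROCS) {
        for (int p = 0; p < NUM_PROCS; p++) {
            if (remaining[p] == 0)
                continue;                  /* this process has already completed */
            int slice = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
            remaining[p] -= slice;         /* the process runs for at most one quantum */
            printf("process %d ran %d ticks, %d left\n", p, slice, remaining[p]);
            if (remaining[p] == 0)
                finished++;                /* otherwise it is preempted and waits for the next cycle */
        }
    }
    return 0;
}
```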
ii) Distributed system: A distributed system consists of a collection of computers (sites) connected to one another through a communication network.
Consider an example in which there are two sites, site 1 and site 2.
These two sites are connected by a communication link, and site 1 has a printer.
Site 2 can then use the printer at site 1 without moving from site 2 to site 1; such resource sharing is possible in a distributed system. Distributed processing is a form of online processing that allows a single transaction to be composed of application programs that access one or more databases on one or more computers across a network. Here the computers may be of different sizes and platforms; they may include microprocessors, minicomputers, and so on. Using this type of processing we can synchronize the related data at regional and branch locations and update it simultaneously with reliable results.
Advantages of distributed system:
• If a number of sites are connected by high-speed communication lines, resources can be shared among them.
• Computation can be speeded up, since a computation may be partitioned into subcomputations that run concurrently at different sites.

Q2) Describe general system architecture in detail.


A computer system can be divided into four components: the hardware, the operating system,
the application programs, and the users. The hardware (the central processing unit, the memory, and the input/output devices) provides the basic computing resources.
The application programs, such as word processors, spreadsheets, compilers, and web browsers, define the ways in which these resources are used to solve the computing problems of the users.
The operating system controls and coordinates the use of the hardware among the various application programs.
The resources of a computer system are its hardware, software, and data.
The OS provides the means for the proper use of these resources in the operation of the computer system.
The operating system can be explored from two viewpoints: the user and the system.
User view: The user view of the computer varies by the interface being used. Most computer users sit in front of a PC, consisting of a monitor, keyboard, mouse, and system unit.
Such a system is designed for one user to monopolize its resources, to maximize the work that the user is performing.
Some users sit at a terminal connected to a mainframe or minicomputer. Other users access the same computer through other terminals and may exchange information.
Other users sit at workstations, connected to networks of other workstations and servers.
These users have dedicated resources at their disposal, but they also share resources.
System view: We can view the OS as a resource allocator. A computer system has many resources, hardware and software, that may be required to solve a problem:
CPU time, memory space, file storage space, I/O devices, and so on.
An OS is also a control program, which manages the execution of user programs to prevent errors and improper use of the computer.
The primary goal of an operating system is efficient operation of the computer system. This is the case for large, shared, multiuser systems.
Q3) Explain i) I/O interrupts ii) Memory hierarchy.

Ans: i) I/O interrupts: Interrupts are very important in an operating system. Consider the following figure.
The figure shows the steps involved in interrupt-driven I/O. In step 1 the driver tells the controller what to do by writing into its device registers.
The controller then starts the device. When the controller has finished reading or writing the number of bytes it has been told to transfer, it signals the interrupt controller chip using certain bus lines (step 2).
If the interrupt controller is prepared to accept the interrupt, it asserts a pin on the CPU chip to inform it (step 3).
In step 4 the interrupt controller puts the number of the device on the bus so the CPU can read it and know which device has just finished.
Once the CPU has decided to take the interrupt, the program counter and PSW are typically pushed onto the current stack and the CPU is switched into kernel mode.
The device number may be used as an index into a part of memory to find the address of the interrupt handler for this device.
This part of memory is called the interrupt vector. Once the interrupt handler has started, it removes the stacked program counter and PSW and saves them, then queries the device to learn its status.
When the handler is finished, it returns control to the previously running user program, at the first instruction that was not yet executed.
These steps are shown in the figure.
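A rough C sketch of the dispatch step described above. The interrupt vector table, handler type, and saved-context structure are assumptions made for illustration; in a real system this work is done by hardware and low-level assembly stubs rather than portable C.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_DEVICES 16

/* Hypothetical interrupt vector: one handler address per device number. */
typedef void (*isr_t)(void);
static isr_t interrupt_vector[NUM_DEVICES];

/* Simplified copy of the context the hardware pushed onto the stack. */
struct saved_context {
    uint32_t program_counter;
    uint32_t psw;
};

static void disk_handler(void) { printf("disk interrupt handled\n"); }

/* Conceptually invoked after the hardware has pushed the PC and PSW and
 * switched to kernel mode; 'device' is the number read from the bus. */
static void dispatch_interrupt(int device, const struct saved_context *stacked) {
    struct saved_context saved = *stacked;   /* remove and save the stacked PC and PSW */
    (void)saved;
    if (device >= 0 && device < NUM_DEVICES && interrupt_vector[device] != 0)
        interrupt_vector[device]();          /* run this device's interrupt handler */
    /* On return, the saved PC and PSW are restored and the interrupted
     * user program resumes at its next unexecuted instruction. */
}

int main(void) {
    struct saved_context ctx = { 0x1000, 0x2 };   /* pretend values for illustration */
    interrupt_vector[3] = disk_handler;           /* device 3 -> its handler */
    dispatch_interrupt(3, &ctx);                  /* simulate an interrupt from device 3 */
    return 0;
}
```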

ii) Memory hierarchy: A major component of any computer is its memory. The memory system is constructed as a hierarchy of layers, as shown in the figure.
The top layers have higher speed, smaller capacity, and greater cost per bit than the lower ones, often by a factor of a billion or more.
The top layer consists of the registers internal to the CPU. They are made of the same material as the CPU and are thus just as fast as the CPU.
The storage capacity available in them is typically 32 x 32 bits on a 32-bit CPU and 64 x 64 bits on a 64-bit CPU.
Programs must manage the registers themselves, in software. Next comes the cache memory, which is mostly controlled by the hardware.
Main memory is divided up into cache lines, typically of 64 bytes, with addresses 0 to 63 in cache line 0, addresses 64 to 127 in cache line 1, and so on.
The most heavily used cache lines are kept in a high-speed cache located inside or very close to the CPU.
When the program needs to read a memory word, the cache hardware checks to see if the line needed is in the cache.
If it is (a cache hit), the request is satisfied from the cache and no memory request is sent over the bus to the main memory.
Cache hits normally take about two clock cycles. Cache memory is limited in size due to its high cost.
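As a small worked illustration of the line mapping just described (assuming 64-byte lines, as in the text), the line number and offset of an address can be computed by integer division and remainder:

```c
#include <stdio.h>

#define LINE_SIZE 64   /* bytes per cache line, as assumed in the text */

int main(void) {
    unsigned address = 200;                    /* arbitrary example address       */
    unsigned line    = address / LINE_SIZE;    /* which cache line it falls in    */
    unsigned offset  = address % LINE_SIZE;    /* byte position within that line  */
    printf("address %u -> line %u, offset %u\n", address, line, offset);
    return 0;                                  /* prints: address 200 -> line 3, offset 8 */
}
```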
Main memory comes next in the hierarchy. It is the workhorse of the memory system. Main memory is usually called RAM.
Main memories currently range from hundreds of megabytes to several gigabytes, and they are growing rapidly.
All CPU requests that cannot be satisfied out of the cache go to main memory.
In addition to the main memory, many computers have a small amount of nonvolatile memory; unlike RAM, nonvolatile memory does not lose its contents when the power is switched off.
One kind is ROM, which is usually programmed at the factory and cannot be changed afterward.
It is fast and inexpensive. EPROM and flash memory are also nonvolatile, but in contrast to ROM they can be erased and rewritten.
The next level in the hierarchy is the disk. Disk storage is two orders of magnitude cheaper than RAM per bit and often two orders of magnitude larger as well.
The only problem is that the time to randomly access data on it is close to three orders of magnitude slower.
This low speed is due to the fact that a disk is a mechanical device.
The final layer in the memory hierarchy is magnetic tape. This medium is often used as a backup for disk storage and for holding very large data sets.
To access a tape, it must first be put into a tape reader. Then the tape may have to be spooled forward to get to the requested block.
Q4) Explain memory and CPU protection in brief
Ans: Memory protection: We must provide memory protection at least for the interrupt vector and the interrupt service routines of the operating system.
In general we want to protect the operating system from access by user programs and, in addition, to protect the user programs from one another.
This protection must be provided by the hardware.
To separate each program’s memory space we need the ability to determine the range of legal addresses that the program may access, and to protect the memory outside that space.
We can provide this protection by using two registers, called the base register and the limit register.
The base register holds the smallest legal physical memory address; the limit register contains the size of the range.
For example, if the base register holds 300040 and the limit register holds 120900, then the program can legally access all addresses from 300040 up to (but not including) 420940.
This protection is accomplished by the CPU hardware comparing every address generated in user mode with the registers.
Any attempt by a program executing in user mode to access monitor memory or another user's memory results in a trap to the monitor, which treats the attempt as a fatal error. This scheme prevents a user program from modifying the code or data structures of either the operating system or other users.
The base and limit registers can be loaded only by the operating system, which uses a special privileged instruction.
Since privileged instructions can be executed only in monitor mode, and since only the operating system executes in monitor mode, only the operating system can load the base and limit registers. This scheme allows the monitor to change the value of the registers but prevents user programs from changing the registers' contents.
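A minimal sketch of the hardware check described above, using the example base and limit values from the text; the trap() function here is only a stand-in for the hardware trap to the monitor.

```c
#include <stdio.h>
#include <stdlib.h>

static unsigned long base_register  = 300040;  /* smallest legal physical address */
static unsigned long limit_register = 120900;  /* size of the legal range         */

/* Stand-in for the hardware trap to the monitor on an illegal access. */
static void trap(unsigned long addr) {
    fprintf(stderr, "trap: illegal access to address %lu\n", addr);
    exit(EXIT_FAILURE);
}

/* Every address generated in user mode is compared with the registers. */
static void check_address(unsigned long addr) {
    if (addr < base_register || addr >= base_register + limit_register)
        trap(addr);
}

int main(void) {
    check_address(300040);   /* legal: first address in the range  */
    check_address(420939);   /* legal: last address in the range   */
    check_address(420940);   /* illegal: traps to the monitor      */
    return 0;
}
```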
CPU protection: We must prevent a user program from getting stuck in an infinite loop, never calling system services and never returning control to the operating system.
To do this we can use a timer. A timer can be set to interrupt the computer after a specified period.
We can use the timer to prevent a user program from running too long.
A simple technique is to initialize a counter with the amount of time that a program is allowed to run.
A more common use of a timer is to implement time sharing. The operating system also saves registers, internal variables and buffers and changes several other parameters to prepare for the next program to run.
Another use of the timer is to compute the current time. A timer interrupt signals the passage of some period,
allowing the operating system to compute the current time with reference to some initial time.
Q5) What do you mean by dual mode operation? Explain.
Ans: To provide protection for shared resources, the operating system relies on hardware support that allows us to differentiate among various modes of execution. There are two modes of operation: user mode and monitor mode (also called supervisor mode, system mode, or privileged mode).
A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: monitor (0) or user (1).
With the mode bit we are able to distinguish between a task that is executed on behalf of the operating system and one that is executed on behalf of the user.
At system boot time the hardware starts in monitor mode. The operating system is then loaded and starts user processes in user mode.
Whenever a trap or interrupt occurs, the hardware switches from user mode to monitor mode.
Thus, whenever the operating system gains control of the computer, it is in monitor mode.
The system always switches to user mode before passing control to user program.
The dual mode operation provides us with the means for protecting the operating system from errant users, and errant users from one another.
We accomplish this protection by designating some of the machine instructions that may cause harm as privileged instructions.
The lack of hardware supported dual mode operation can cause serious shortcomings in an operating system.
For instance, MS-DOS was written for the Intel 8088 architecture, which has no mode bit and
therefore no dual mode operation. A runaway user program can wipe out the operating system by writing over it with data, and multiple programs are able to write to a device at the same time, with possibly disastrous results.
More recent and advanced versions of the Intel CPU, such as the Pentium, do provide dual mode operation, which gives greater protection to the operating system.
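A toy sketch of the mode-bit check described above, with the mode values taken from the text (monitor = 0, user = 1). In real hardware this check is performed by the CPU itself, not by C code; the function names are illustrative.

```c
#include <stdio.h>
#include <stdlib.h>

#define MONITOR_MODE 0   /* monitor (kernel) mode, as in the text */
#define USER_MODE    1   /* user mode */

static int mode_bit = USER_MODE;

/* A privileged operation, such as loading the base and limit registers. */
static void privileged_instruction(void) {
    if (mode_bit != MONITOR_MODE) {
        fprintf(stderr, "trap: privileged instruction attempted in user mode\n");
        exit(EXIT_FAILURE);          /* real hardware traps to the operating system */
    }
    printf("privileged instruction executed in monitor mode\n");
}

int main(void) {
    mode_bit = MONITOR_MODE;     /* e.g. after a trap or interrupt */
    privileged_instruction();    /* allowed */
    mode_bit = USER_MODE;        /* before control passes to a user program */
    privileged_instruction();    /* refused: traps */
    return 0;
}
```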
Q6) Define operating system. Why is it necessary? Explain
Ans: An operating system (OS) is a set of programs that manage computer hardware resources and provide common services for application software.
The operating system is the most important type of system software in a computer system.
A user cannot run an application program on the computer without an operating system, unless the application program is self booting.
Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting for cost allocation of processor time, mass storage, printing, and other resources.
For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between application programs and the computer hardware,[1][2] although the application code is usually executed directly by the hardware and will frequently call the OS or be interrupted by it.
Operating systems are found on almost any device that contains a computer—from cellular phones and video game consoles to supercomputers and web servers.
Q7) Give advantages of multiprogrammed batched system.
Ans: Multiprogramming is a technique that executes a number of programs apparently simultaneously on a single processor.
Buffering and spooling improve system performance by overlapping the input, output, and computation of a single job, but both have their limitations.
A single user cannot keep the CPU busy at all times.
Multiprogramming offers a more efficient approach to increase system performance.
To increase resource utilization, systems supporting the multiprogramming approach allow more than one job to utilize the CPU at any moment.
Let us consider an example: say there are two programs, p1 and p2, residing in main memory.
The OS picks one of the programs and starts executing it. During execution, say p1 needs to perform some I/O operation before it can complete.
In a sequential environment the CPU would then sit idle until the I/O finished.

Execution of p1 and p2 in a multiprogramming environment
If p1 is busy with some I/O, the CPU switches to p2. Similarly, if p2 wants to do some I/O, the CPU switches back to p1 (or to another ready program). This goes on.
Advantages of multiprogramming:
High CPU utilization, as the CPU is rarely idle
Support for multiple simultaneous interactive users
Efficient memory utilization and higher CPU throughput
Disadvantages:
Jobs may have different sizes, so some powerful memory management policy is needed to accommodate them in memory.
CPU scheduling is a must, because many jobs are now ready to run on the CPU.
The user cannot interact with a job while it is being executed.
Q8) Explain i) Real time system
ii) Parallel systems. Ans: i) Real time system: A real-time system is defined as a system in which the correctness of computations depends not only on the logical correctness of the computation but also on the time at which the result is produced.
So we can say that it has strict time constraints.
The inputs, data, or output should be available within this time period; otherwise disasters can happen.
Sensors bring data to the computer. The computer must analyze the data and adjust controls to modify the sensor inputs.
Examples of real-time systems include air-traffic control, medical imaging systems, industrial control systems, etc.
Also included in this category are automobile engine fuel-injection systems, home appliance controls, and weapons systems.
A system is said to be real-time if the total correctness of an operation depends not only upon its logical correctness, but also upon the time in which it is performed.
Real-time systems, as well as their deadlines, are classified by the consequence of missing a deadline.
Hard: Missing a deadline is a total system failure.
Firm: Infrequent deadline misses are tolerable, but may degrade the system's quality of service.
The usefulness of a result is zero after its deadline.
Soft: The usefulness of a result degrades after its deadline, thereby degrading the system's quality of service.
Thus, the goal of a hard real-time system is to ensure that all deadlines are met, but for soft real-time systems the goal becomes meeting a certain subset of deadlines in order to optimize some application specific criteria.
The particular criteria optimized depends on the application, but some typical examples include maximizing the number of deadlines met, minimizing the lateness of tasks and maximizing the number of high priority tasks meeting their deadlines.
ii) Parallel systems: A system consisting of more than one processor that is tightly coupled, i.e. the processors heavily share resources such as the bus, clock, memory, and I/O devices, is a parallel system.
A single CPU may be slow, so a possible solution is to have many CPUs working in parallel on a single given problem.
This is parallel processing, and such systems are also called supercomputers.
These systems increase CPU throughput and reduce the time required for job execution.
Q9) Explain simple batched system and multiprogrammed batched system.
Ans: i) Simple batched system: A batch operating system analyzes the input jobs and groups them into batches.
That is, the data in each batch have similar characteristics. The system then performs operations on each batch in turn.
Some information on the working of a batch operating system:
Batch operating systems could only execute one program at a time.
The operating system maintained a queue of user programs which had been submitted and were waiting for a chance to execute.
Each user program which needed to execute was called a "job".
A human "operator" watched over the queue with the ability to move some jobs to the front or back, or kill a job which got hung or ran too long.
Some users had higher priorities than others.
The batch operating system uses a batch file during the boot sequence; the batch file contains all of the boot information.
One difficulty with simple batch systems is that the computer still needs to read the deck of cards before it can begin to execute the job.
This means that the CPU is idle (or nearly so) during these relatively slow operations.
Since it is faster to read from a magnetic tape than from a deck of cards, it became common for computer centers to have one or more less powerful computers in addition to their main computer.
The smaller computers were used to read decks of cards onto a tape, so that the tape would contain many batch jobs.
This tape was then loaded on the main computer and the jobs on the tape were executed.
The output from the jobs would be written to another tape which would then be removed and loaded on a less powerful computer to produce any hardcopy or other desired output.
It was a logical extension of the timer idea described above to have a timer that would only let jobs execute for a short time before interrupting them, so that the monitor could start an I/O operation.
Since the I/O operation could proceed while the CPU was working on a user program, little degradation in performance was noticed.
Since the computer can now perform I/O in parallel with computation, it became possible to have the computer read a deck of cards onto a tape, drum, or disk and to write output to a tape or printer while it was computing. This process is called SPOOLing: Simultaneous Peripheral Operation OnLine.
Spooling batch systems were the first and are the simplest of the multiprogramming systems.
One advantage of spooling batch systems was that the output from jobs was available as soon as the job completed, rather than only after all jobs in the current cycle were finished.
Examples of batch operating systems are as follows:
1) DOS (Disk Operating System)
2) IBM OS/2
3) Windows 1, 2, 3, 95, 98, and ME
Q10) How is DMA useful? Discuss.
Ans: For reading data the CPU needs to address the device controllers to exchange data with them. The CPU can request data from an I/O controller one byte at a time, but doing so wastes the CPU's time, so a different scheme, called DMA (Direct Memory Access), is used.
When DMA is not used, the disk controller reads the block from the drive serially, bit by bit,
until the entire block is in the controller's internal buffer.
Next it computes a checksum to verify that no read errors have occurred.
Then the controller causes an interrupt. When the operating system starts running, it can read the disk block from the controller's buffer a byte or a word at a time by executing a loop, with each iteration reading one byte or word from a controller device register and storing it in main memory.
When DMA is used, the procedure is different.
First the CPU programs the DMA controller by setting its registers so it knows what to transfer where.
It also issues a command to the disk controller telling it to read data from the disk into its internal buffer and verify the checksum.
When valid data are in the disk controller's buffer, DMA can begin.
The DMA controller initiates the transfer by issuing a read request over the bus to the disk controller.
This read request looks like any other read request, and the disk controller does not know or care whether it came from the CPU or from a DMA controller.
Typically the memory address to write to is on the bus's address lines, so when the disk controller fetches the next word from its internal buffer, it knows where to write it.
The write to memory is another standard bus cycle; when it is complete, the disk controller sends an acknowledgement signal to the DMA controller, also over the bus.
The DMA controller then increments the memory address to use and decrements the byte count.
If the byte count is still greater than 0, steps 2 through 4 are repeated until the count reaches 0.
At that time the DMA controller interrupts the CPU to let it know that the transfer is now complete.
When the operating system starts up, it does not have to copy the disk block to memory; it is already there.
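A rough sketch of how a driver might program a memory-mapped DMA controller, in the spirit of the steps above. The register layout, names, and addresses are invented for illustration and do not correspond to any real device.

```c
#include <stdint.h>

/* Hypothetical memory-mapped DMA controller registers. */
struct dma_regs {
    volatile uint32_t memory_address;  /* where in RAM the data should go    */
    volatile uint32_t byte_count;      /* how many bytes to transfer          */
    volatile uint32_t command;         /* writing START begins the transfer   */
};

#define DMA_CMD_START 0x1u

/* Tell the DMA controller what to transfer and where, then start it.
 * The disk controller is assumed to already hold valid data in its buffer. */
static void dma_start_transfer(struct dma_regs *dma,
                               uint32_t dest_addr, uint32_t count) {
    dma->memory_address = dest_addr;   /* program the registers (step 1) */
    dma->byte_count     = count;
    dma->command        = DMA_CMD_START;
    /* The read request, write to memory, and acknowledgement (steps 2-4)
     * now repeat on the bus without CPU involvement; the controller
     * interrupts the CPU when byte_count reaches 0. */
}

int main(void) {
    struct dma_regs simulated = {0, 0, 0};          /* stand-in for a real device */
    dma_start_transfer(&simulated, 0x100000u, 4096u);
    return 0;
}
```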
Q11) What is an interrupt and an interrupt service routine? Explain.
Ans: Interrupt: Refer Q1c of summer 2006. Interrupt service routine: An interrupt handler, also known as an interrupt service routine (ISR), is a callback subroutine in microcontroller firmware, an operating system, or a device driver whose execution is triggered by the reception of an interrupt.
Interrupt handlers have a multitude of functions, which vary based on the reason the interrupt was generated and the speed at which the interrupt handler completes its task.
An interrupt handler is a low-level counterpart of event handlers.
These handlers are initiated by either hardware interrupts or interrupt instructions in software, and are used for servicing hardware devices and transitions between protected modes of operation such as system calls.
As discussed earlier, the routine executed in response to an interrupt is called the interrupt service routine (ISR).
A subroutine is a part of a larger program that performs some specific task related to the program. A subroutine is called by the program itself.
It helps to avoid complexity in the program.
An ISR is the code executed in response to an interrupt, which may occur during the execution of a program.
The ISR may have nothing in common with the program that is being executed at the time the interrupt is received.
In the case of a subroutine, only the address of the next instruction is stored on the stack automatically when it is called, while in the case of an ISR, along with the address of the next instruction, the contents of the status register and some other registers (if required) are also stored.
General sequence followed when an interrupt occurs from an external device:
1. Interrupt request (IRQ) signal is sent by the device to the processor.
2. If the interrupt line is enabled, the following sequence of events occurs in the system; otherwise the interrupt is ignored. The processor completes its present instruction (if any) and pays attention to the IRQ.
3. It stores the address of the next instruction and the contents of the status register on the stack.
4. It informs the device that its request has been granted and in response the device de-activates its IRQ.
5. Using some suitable technique the processor loads its program counter (PC) with address of the ISR.

Q12) What are the different process states? Describe in detail how a process transits from one state to another.
Ans: In a multitasking computer system, processes may occupy a variety of states.
The following typical process states are possible on computer systems of all kinds.
In most of these states, processes are "stored" on main memory.
Created: (Also called New) When a process is first created, it occupies the "created" or "new" state.
In this state, the process awaits admission to the "ready" state.
Ready or waiting: (Also called waiting or runnable) A "ready" or "waiting" process has been loaded into main memory and is awaiting execution on a CPU.
There may be many "ready" processes at any one point of the system's execution. For example, in a one-processor system only one process can be executing at any one time, and all other "concurrently executing" processes will be waiting for execution.
A ready queue or run queue is used in computer scheduling.
Modern computers are capable of running many different programs or processes at the same time.
However, the CPU is only capable of handling one process at a time.
Processes that are ready for the CPU are kept in a queue for "ready" processes.
Other processes that are waiting for an event to occur, such as loading information from a hard drive or waiting on an internet connection, are not in the ready queue.
Running: A process moves into the running state when it is chosen for execution.
The process's instructions are executed by one of the CPUs of the system.
There is at most one running process per CPU or core.
Blocked: A process is blocked when it is waiting for some event (such as the completion of an I/O operation or a signal).
Terminated: A process may be terminated, either from the "running" state by completing its execution or by explicitly being killed.
In either of these cases, the process moves to the "terminated" state.
If a process is not removed from memory after entering this state, it may become a zombie process.
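A small C sketch of the states and the legal transitions described above; the enum names and the transition rules encoded here are an illustrative summary, not the API of any particular system.

```c
#include <stdio.h>

/* Process states as described in the text. */
enum proc_state { NEW, READY, RUNNING, BLOCKED, TERMINATED };

static const char *state_name[] = {
    "new", "ready", "running", "blocked", "terminated"
};

/* Returns 1 if the transition is one of those described above. */
static int legal_transition(enum proc_state from, enum proc_state to) {
    switch (from) {
    case NEW:     return to == READY;                    /* admitted to the ready queue   */
    case READY:   return to == RUNNING;                  /* chosen by the scheduler       */
    case RUNNING: return to == READY || to == BLOCKED || /* preempted, or waits for event */
                         to == TERMINATED;               /* finishes or is killed         */
    case BLOCKED: return to == READY;                    /* awaited event has occurred    */
    default:      return 0;                              /* terminated: no transitions    */
    }
}

int main(void) {
    printf("%s -> %s legal? %d\n", state_name[RUNNING], state_name[BLOCKED],
           legal_transition(RUNNING, BLOCKED));
    printf("%s -> %s legal? %d\n", state_name[BLOCKED], state_name[RUNNING],
           legal_transition(BLOCKED, RUNNING));
    return 0;
}
```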

Q13) Define operating system. What are the main purposes of an operating system?
Ans: The major purposes of an OS are:
-resource management,
-data management,
-job (task) management, and
-standard means of communication between user and computer.
The resource management function of an OS allocates computer resources such as CPU time, main memory, secondary storage, and input and output devices for use.
The data management functions of an OS govern the input and output of the data and their location, storage, and retrieval.
The job management function of an OS prepares, schedules, controls, and monitors jobs submitted for execution to ensure the most efficient processing.
A job is a collection of one or more related programs and their data.
The OS establishes a standard means of communication between users and their computer systems.
It does this by providing a user interface and a standard set of commands that control the hardware.
Q14) What are the differences between a trap and an interrupt?
Ans: Trap: In computing and operating systems, a trap, also known as an exception or a fault, is typically a type of synchronous interrupt caused by an exceptional condition (e.g., breakpoint, division by zero, invalid memory access). A trap usually results in a switch to kernel mode, wherein the operating system performs some action before returning control to the originating process. A trap in a system process is more serious than a trap in a user process, and in some systems it is fatal. In some usages, the term trap refers specifically to an interrupt intended to initiate a context switch to a monitor program or debugger. An ordinary interrupt, in contrast, is an asynchronous signal raised by hardware (such as an I/O device) independently of the instruction currently being executed.

Q1b) What are I/O interrupts? Explain I/O methods.
I/O methods: Asynchronous I/O, or non-blocking I/O, is a form of input/output processing that permits other processing to continue before the transmission has finished.
Input and output (I/O) operations on a computer can be extremely slow compared to the processing of data.
An I/O device can incorporate mechanical parts that must physically move, such as a hard drive seeking a track to read or write;
this is often orders of magnitude slower than the switching of electric current.
For example, during a disk operation that takes ten milliseconds to perform, a processor that is clocked at one gigahertz could have performed ten million instruction-processing cycles.
A simple approach to I/O would be to start the access and then wait for it to complete.
But such an approach (called synchronous I/O or blocking I/O) would block the progress of a program while the communication is in progress, leaving system resources idle.
When a program makes many I/O operations, this means that the processor can spend almost all of its time idle waiting for I/O operations to complete.
Q15) Differentiate between symmetric and asymmetric multiprocessing
Ans: Asymmetric multiprocessing - In asymmetric multiprocessing (ASMP), the operating system typically sets aside one or more processors for its exclusive use. The remainder of the processors run user applications.
As a result, the single processor running the operating system can fall behind the processors running user applications.
This forces the applications to wait while the operating system catches up, which reduces the overall throughput of the system.
In the ASMP model, if the processor that fails is an operating system processor, the whole computer can go down.
Symmetric Multiprocessing - Symmetric multiprocessing (SMP) technology is used to get higher levels of performance.
In symmetric multiprocessing, any processor can run any type of thread. The processors communicate with each other through shared memory.
SMP systems provide better load-balancing and fault tolerance. Because the operating system threads can run on any processor, the chance of hitting a CPU bottleneck is greatly reduced.
All processors are allowed to run a mixture of application and operating system code.
A processor failure in the SMP model only reduces the computing capacity of the system.
SMP systems are inherently more complex than ASMP systems.
A tremendous amount of coordination must take place within the operating system to keep everything synchronized.
For this reason, SMP systems are usually designed and written from the ground up.
The Microsoft Windows NT operating system is an SMP system.
The Windows NT kernel works with the hardware abstraction layer to manage multiple processors.
The hardware abstraction layer hides the details of the symmetric multiprocessing hardware from the rest of the operating system.
The kernel controls which code runs on each processor at a given time.
The Windows NT kernel enables multiprocessor support by allocating threads based on the thread's priority across all available CPUs in a system.
Any thread of any application can run independently on any available processor.
It ensures that the system's processors are always as busy as possible running the highest priority code.
The Windows NT kernel dispatches all operating system threads across all available processors, including I/O, networking, and graphics.
Even though the best performance increase will come from multi-threaded applications, you will often get a small performance boost running even a single threaded application on a multi-processor system.
Q16) Explain hard and soft real time systems.
Ans: The following points summarize the major differences between hard and soft real-time systems.
The response time requirements of hard real-time systems are in the order of milliseconds or less and can result in a catastrophe if not met.
In contrast, the response time requirements of soft real-time systems are higher and not very stringent.
In a hard real-time system, the peak-load performance must be predictable and should not violate the predefined deadlines.
In a soft real-time system, a degraded operation in a rarely occurring peak load can be tolerated.
A hard real-time system must remain synchronous with the state of the environment in all cases.
On the other hand soft real-time systems will slow down their response time if the load is very high.
Hard real-time systems are often safety critical. Hard real-time systems have small data files and real-time databases.
Temporal accuracy is often the concern here.
Soft real-time systems, for example online reservation systems, have larger databases and require long-term data integrity.
If an error occurs in a soft real-time system, the computation is rolled back to a previously established checkpoint to initiate a recovery action.
In hard real-time systems, roll-back/recovery is of limited use.

Q17) Give an overview of secondary storage management
Ans: Secondary-storage management: Generally speaking, systems have several levels of storage, including primary storage, secondary storage, and cache storage.
Instructions and data must be placed in primary storage or cache to be referenced by a running program.
Because main memory is too small to accommodate all data and programs, and its data are lost when power is lost, the computer system must provide secondary storage to back up main memory.
Secondary storage consists of tapes, disks, and other media designed to hold information that will eventually be accessed in primary storage. Storage at each level (primary, secondary, cache) is ordinarily divided into bytes or words consisting of a fixed number of bytes.
Each location in storage has an address; the set of all addresses available to a program is called an address space.
The three major activities of an operating system in regard to secondary storage management are:
1. Managing the free space available on the secondary-storage device.
2. Allocation of storage space when new files have to be written.
3. Scheduling the requests for memory access.
Q18) Explain how operating system manages the devices?
Ans: Once the program is running it may need additional resources to proceed.
Additional resources may be more memory, tape drives, access to files, and so on.
If the resources are available they can be granted and control can be returned to user program;
otherwise the program will have to wait until sufficient resources are available.
Files can be thought of as abstract or virtual devices. Thus many of the system calls for files are also needed for devices.
If the system has multiple users, we must first request the device to ensure exclusive use of it.
After we have finished with the device, we must release it. These functions are similar to the open and close system calls for files.
Once the device has been requested, we can read, write and reposition the device just as we can with ordinary files.
In fact the similarity between I/O devices and files is so great that many operating systems, including
UNIX and MS-DOS, merge the two into a combined file-device structure. In this case I/O devices are identified by special file names.
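For example, on a UNIX-like system a device exposed as a special file can be read with the same open, read, and close system calls used for ordinary files. The sketch below reads a few bytes from /dev/urandom, a device file present on most Linux systems.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Request the device by opening its special file name. */
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }

    unsigned char buf[4];
    ssize_t n = read(fd, buf, sizeof buf);     /* read from the device like a file */
    if (n == (ssize_t)sizeof buf)
        printf("read bytes: %02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);

    close(fd);                                 /* release the device */
    return 0;
}
```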
Q19) List the advantages of layered approach to system design.
Ans: The modularization of a system can be done in many ways.
One method is the layered approach in which the operating system is broken up into a number of layers, each built on top of lower layers.
The bottom layer is the hardware; the highest is the user interface.
An operating system layer is an implementation of an abstract object that is the encapsulation of data and of the operations that can manipulate those data.
A typical operating-system layer, say layer M, is depicted in the figure.
It consists of data structures and a set of routines that can be invoked by higher-level layers.
Layer M in turn can invoke operations on lower-level layers.
The main advantage of the layered approach is modularity.
The layers are selected such that each uses functions and services of only lower level layers.
This approach simplifies debugging and system verification.
The first layer can be debugged without any concern for the rest of the system because, by definition, it uses only the basic hardware to implement its functions. Once the first layer is debugged, its correct functioning can be assumed while the second layer is debugged, and so on.
If an error is found during the debugging of a particular layer, the error must be in that layer, because the layers below it have already been debugged.
Thus the design and implementation of the system are simplified when the system is broken down into layers.
Each layer is implemented with only those operations provided by lower level layers.
A layer does not need to know how these operations are implemented; it needs to know only what these operations do.
Hence each layer hides the existence of certain data structures, operations and hardware from the higher level layers.
The major difficulty with the layered approach involves the careful definition of the layers, because a layer can use only the layers below it.
A final problem with layered implementations is that they tend to be less efficient than other types.
For instance when a user program executes an I/O operation it executes a system call that is trapped to the I/O layer, which calls the memory management layer, which in turn calls the CPU scheduling layer which is then passed to the hardware.
At each layer the parameters may be modified data may need to be passed, and so on.
Each layer adds overhead to the system call; the net result is a system call that takes longer than one on a non-layered system.
Q20) What different services are offered by an operating system? Explain.
Program Execution: The purpose of computer systems is to allow the user to execute programs.
So the operating systems provide an environment where the user can conveniently run programs.
The user does not have to worry about the memory allocation or multitasking or anything. These things are taken care of by the operating systems.
Running a program involves allocating and deallocating memory, and CPU scheduling in the case of multiple processes.
These functions cannot be given to the user-level programs.
So user-level programs cannot help the user to run programs independently without the help from operating systems.
I/O Operations: Each program requires an input and produces output. This involves the use of I/O.
The operating system hides from the user the details of the underlying hardware used for I/O.
All the user sees is that the I/O has been performed without any details.
So the operating systems by providing I/O makes it convenient for the users to run programs.
For efficiency and protection, users cannot control I/O directly, so this service cannot be provided by user-level programs.
File System Manipulation: The output of a program may need to be written into new files or input taken from some files.
The operating systems provide this service. The user does not have to worry about secondary storage management.
The user gives a command for reading or writing to a file and sees his or her task accomplished.
Thus operating systems make it easier for user programs to accomplish their task.
This service involves secondary storage management.
The speed of I/O that depends on secondary storage management is critical to the speed of many programs, and hence it is best relegated to the operating system rather than giving individual users control of it.
It is not difficult for user-level programs to provide these services, but for the above-mentioned reasons it is best if this service is left with the operating system.
Communications: There are instances where processes need to communicate with each other to exchange information.
It may be between processes running on the same computer or running on the different computers.
By providing this service the operating system relieves the user of the worry of passing messages between processes.
In cases where the messages need to be passed to processes on other computers through a network, this can be done by user programs.
The user program may be customized to the specifics of the hardware through which the message transits and provides the service interface to the operating system.
Error Detection: An error in one part of the system may cause malfunctioning of the complete system.
To avoid such a situation the operating system constantly monitors the system for detecting the errors.
This relieves the user of the worry of errors propagating to various part of the system and causing malfunctioning.
This service cannot be left to user programs because it involves monitoring and, in some cases, altering areas of memory or deallocating the memory of a faulty process, or perhaps relinquishing the CPU from a process that goes into an infinite loop.
These tasks are too critical to be handed over to the user programs.
A user program if given these privileges can interfere with the correct (normal) operation of the operating systems.
Q21) Explain
i) Command interpreter system.
ii) Information maintenance
Ans: Command Interpreter System: A command interpreter is an interface of the operating system with the user.
The user gives commands, which are executed by the operating system (usually by turning them into system calls).
The main function of a command interpreter is to get and execute the next user-specified command.
The command interpreter is usually not part of the kernel, since multiple command interpreters (shells, in UNIX terminology) may be supported by an operating system, and they do not really need to run in kernel mode.
There are two main advantages to separating the command interpreter from the kernel.
1. If we want to change the way the command interpreter looks, i.e., change the interface of the command interpreter, we can do so when the command interpreter is separate from the kernel.
We cannot change the code of the kernel, so we could not modify the interface otherwise.
2. If the command interpreter is a part of the kernel, it is possible for a malicious process to gain access to parts of the kernel that it should not reach; to avoid this ugly scenario it is advantageous to have the command interpreter separate from the kernel. A minimal sketch of such an interpreter's main loop follows.
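The sketch below shows the classic get-and-execute loop of a command interpreter on a UNIX-like system, using fork, execvp, and waitpid. It handles only single-word commands and omits argument parsing, built-in commands, and error recovery; the prompt string is arbitrary.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char line[256];
    for (;;) {
        printf("mysh> ");                        /* prompt for the next command  */
        if (fgets(line, sizeof line, stdin) == NULL)
            break;                               /* end of input: exit the shell */
        line[strcspn(line, "\n")] = '\0';        /* strip the trailing newline   */
        if (line[0] == '\0')
            continue;
        pid_t pid = fork();                      /* create a child process       */
        if (pid == 0) {
            char *argv[] = { line, NULL };       /* single-word command only     */
            execvp(line, argv);                  /* run the requested program    */
            perror("execvp");                    /* reached only if exec failed  */
            _exit(EXIT_FAILURE);
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);               /* wait for the command to end  */
        } else {
            perror("fork");
        }
    }
    return 0;
}
```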
Information maintenance: Many system calls exist simply for the purpose of transferring information between the user program and the operating system.
For example, most systems have a system call to return the current time and date.
Other system calls may return information about the system, such as the number of current users, the version number of the operating system,
the amount of free memory or disk space, and so on. In addition, the OS keeps information about all its processes, and there are system calls to access this information.
Generally there are also calls to reset the process information.
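Two simple examples of such informational calls on a UNIX-like system are time(), which returns the current time, and getpid(), which returns the identifier of the calling process:

```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    time_t now = time(NULL);           /* current time and date from the OS */
    pid_t  pid = getpid();             /* identifier of the calling process */
    printf("current time: %s", ctime(&now));
    printf("my process id: %ld\n", (long)pid);
    return 0;
}
```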
Q22) Explain the responsibilities of operating system in process management.
Ans: Process management: The operating system manages many kinds of activities, ranging from user programs to system programs such as the printer spooler, name servers, file servers, etc.
Each of these activities is encapsulated in a process.
A process includes the complete execution context (code, data, PC, registers, OS resources in use etc.).
It is important to note that a process is not a program. A process is only one instance of a program in execution.
Many processes can be running the same program.
The five major activities of an operating system in regard to process management are: • Creation and deletion of user and system processes:
There are four principal events that cause processes to be created:
i) System initialization
ii) Execution of process creation system call by a running process
iii) A user request to create a new process
iv) Initiation of a batch job
When an operating system is booted, several processes are created. Some of these are foreground processes and others are background processes not associated with a particular user. Processes that stay in the background to handle some activity such as e-mail, web pages, and so on are called daemons. Often a running process will issue system calls to create one or more new processes to help it do its job.
Process termination: a process will terminate due to one of the following conditions:
 Normal exit
 Error exit
 Fatal error
 Killed by another process
Most processes terminate because they have done their work.
When a compiler has compiled the program given to it, the compiler executes a system call to tell the operating system that it is finished.
The second reason for termination is that the process discovers a fatal error.
The third reason for termination is an error caused by the process, often due to a program bug.
The fourth reason a process might terminate is that the process executes a system call telling the operating system to kill some other process.
Q23) What do you mean by system structure? Explain any one structure with an example.
Ans: System structure: A system as large and complex as a modern operating system must be engineered carefully if it is to function properly and be modified easily.
A common approach is to partition the task into small components rather than have one monolithic system.
Each of these modules should be a well-defined portion of the system, with carefully defined inputs, outputs, and functions.
There are two common types of structure: simple and layered.
Q24) Discuss any four activities of an operating system in regard to file management.
Ans: File management: A file is a collection of related information defined by its creator.
Computers can store files on disk (secondary storage), which provides long-term storage.
Some examples of storage media are magnetic tape, magnetic disk, and optical disk.
Each of these media has its own properties, such as speed, capacity, data transfer rate, and access method.
File systems are normally organized into directories to ease their use. These directories may contain files and other directories.
The five major activities of an operating system in regard to file management are:
1. The creation and deletion of files: Two steps are necessary for creating files.
First the space in the file system must be found for the file and the second an entry records the names of the file and the location in the file system and possibly other information.
To delete a file we search the directory for the named file.
Having found the associated directory entry, we release all the space, so that it can be reused by other files, and erase the directory entry
2. The creation and deletion of directories:
Following are some directory operations in the case of UNIX:
Create: a directory is created.
Delete: a directory is deleted if it is empty.
Opendir: directories can be read. For example, to list all the files in a directory, a listing program opens the directory to read out the names of all the files it contains.
Closedir: when a directory has been read, it should be closed to free up internal table space.
3. The support of primitives for manipulating files and directories: Implementing file storage means keeping track of which disk blocks go with which file. Two common methods are contiguous allocation and linked-list allocation.
Contiguous disk allocation has two significant advantages.
First, it is simple to implement, because keeping track of where a file's blocks are reduces to remembering two numbers:
the disk address of the first block and the number of blocks in the file.
Given the number of the first block, the number of any other block can be found by a simple addition.
Second, the read performance is excellent because the entire file can be read from the disk in a single operation.
The second method for storing files is to keep each one as a linked list of disk blocks. The first word of each block is used as a pointer to the next one.
The rest of the block is for data; a small in-memory sketch of this linked-list idea is given after this list.
4. The mapping of files onto secondary storage.
5. The backup of files on stable storage media: Destruction of a file system can be a greater disaster than destruction of a computer.
If a computer is destroyed by fire or lightning surges, its files should have been backed up to prevent their loss.
Backups to tape are generally made to handle one of two potential problems, i.e. recovery from disaster and recovery from stupidity.
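The following is a small in-memory sketch of the linked-list allocation mentioned in activity 3 above, with each simulated block holding a pointer to the next block of the same file; the block size and contents are illustrative only.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_DATA 60   /* bytes of data per simulated block */

/* A simulated disk block: a pointer to the next block, then the data. */
struct block {
    struct block *next;
    char data[BLOCK_DATA];
};

int main(void) {
    /* Build a "file" out of three linked blocks. */
    struct block *first = NULL, **tail = &first;
    const char *pieces[] = { "first block ", "second block ", "third block" };
    for (int i = 0; i < 3; i++) {
        struct block *b = calloc(1, sizeof *b);
        if (b == NULL) return 1;
        strncpy(b->data, pieces[i], BLOCK_DATA - 1);
        *tail = b;                        /* chain the new block onto the file */
        tail = &b->next;
    }
    /* Reading the file means following the chain of pointers. */
    for (struct block *b = first; b != NULL; ) {
        printf("%s", b->data);
        struct block *next = b->next;
        free(b);
        b = next;
    }
    printf("\n");
    return 0;
}
```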
Q25) Explain the following communication models:
i) Message passing ii) Shared memory
Ans: Message passing: In the message-passing model, information is exchanged through an interprocess communication facility provided by the operating system.
Before communication can take place a connection must be opened.
The name of the other communicator must be known, be it another process on the same CPU or a process on another computer connected by a communication network.
Each computer in a network has a host name such as an IP name, by which it is commonly known.
Similarly, each process has a process name, which is translated into an equivalent identifier by which the operating system can refer to it.
The get hostid and get processid system calls do this translation.
These identifiers are then passed to the general purpose open and close calls provided by the file system or to specific open connection and close connection system calls.
The recipient process usually gives its permission for communication to take place with an accept connection call.
The source of the communication, known as the client, and the receiving daemon, known as the server, then exchange messages by read message and write message system calls.
The close connection call terminates the communication.
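A minimal sketch of message passing between two related processes on a UNIX-like system, using a pipe as the communication channel; a real client/server pair would use sockets and the connection calls described above, so this only illustrates the exchange-of-messages idea.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                                 /* child acts as the sender    */
        close(fds[0]);
        const char *msg = "hello from the child";
        write(fds[1], msg, strlen(msg) + 1);        /* "write message"             */
        close(fds[1]);
        _exit(0);
    }
    close(fds[1]);                                  /* parent acts as the receiver */
    char buf[64];
    if (read(fds[0], buf, sizeof buf) > 0)          /* "read message"              */
        printf("parent received: %s\n", buf);
    close(fds[0]);                                  /* "close connection"          */
    waitpid(pid, NULL, 0);
    return 0;
}
```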
ii) Shared memory: In the shared-memory model, processes use map-memory system calls to gain access to regions of memory owned by other processes.
OS tries to prevent one process from accessing another process’s memory.
Shared memory requires that several processes agree to remove this restriction.
They may then exchange information by reading and writing data in these shared areas.
The form of data and the location are determined by these processes and are not under the operating system’s control.
The processes are also responsible for ensuring that they are not writing to the same location simultaneously.
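A minimal sketch of the shared-memory model using an anonymous shared mapping that is inherited across fork on a UNIX-like system; a real program would add proper synchronization to avoid simultaneous writes, as noted above, rather than simply waiting for the child to exit.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Region of memory shared between the parent and the child. */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {
        strcpy(shared, "data written by the child");   /* child writes into the region */
        _exit(0);
    }
    waitpid(pid, NULL, 0);                              /* crude synchronization        */
    printf("parent reads: %s\n", shared);               /* parent reads the same region */
    munmap(shared, 4096);
    return 0;
}
```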
Q26) explain various operating system components in brief.
The operating system comprises a set of software packages that can be used to manage interactions with the hardware.
The following elements are generally included in this set of software:
Kernel : The central module of an operating system. It is the part of the operating system that loads first, and it remains in main memory.
Because it stays in memory, it is important for the kernel to be as small as possible while still providing all the essential services required by other parts of the operating system and applications.
Typically, the kernel is responsible for memory management, process and task management, and disk management.
Shell: The shell allows communication with the operating system via a control language, letting the user control the peripherals without knowing the characteristics of the hardware used, the management of physical addresses, and so on.
It is the outermost layer of a program; shell is another term for user interface.
Operating systems and applications sometimes provide an alternative shell to make interaction with the program easier.
For example, if the application is usually command driven, the shell might be a menu-driven system that translates the user's selections into the appropriate commands. Sometimes called command shell, a shell is the command processor interface.
The command processor is the program that executes operating system commands.
The shell, therefore, is the part of the command processor that accepts commands.
After verifying that the commands are valid, the shell sends them to another part of the command processor to be executed.
UNIX systems offer a choice between several different shells, the most popular being the C shell, the Bourne shell, and the Korn shell.
Each offers a somewhat different command language.
File system: The file system, allowing files to be recorded in a tree structure.
Also referred to as simply a file system or filesystem. The system that an operating system or program uses to organize and keep track of files.
For example, a hierarchical file system is one that uses directories to organize files into a tree structure.
Although the operating system provides its own file management system, you can buy separate file management systems.
These systems interact smoothly with the operating system but provide more features, such as improved backup procedures and stricter file protection.
Q27) When is a process terminated? Explain the process control block.
Ans: A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit system call.
At that point the process may return data to its parent process.
All the resources of the process, including physical and virtual memory, open files,
and I/O buffers, are deallocated by the operating system. Termination can also occur under additional circumstances.
A process can cause the termination of another process via an appropriate system call.
Usually only the parent process that is to be terminated can invoke such a system call.
Otherwise users could arbitrarily kill each other's jobs. A parent therefore needs to know the identities of its children.
Thus when one process creates a new process, the identity of the newly created process is passed to the parent.
A parent may terminate the process execution of one of its children for variety of reasons such as these:
• The child has exceeded its usage of some of the resources that it has been allocated.
This requires the parent to have a mechanism to inspect the state of its children.
• The task assigned to the child is no longer required.
• The parent is exiting and the operating system does not allow a child to continue if its parent terminates.
Process control block: A process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor.
The PCB contains important information about the specific process including
• The current state of the process i.e., whether it is ready, running, waiting, or whatever.
• Unique identification of the process in order to track "which is which" information.
• A pointer to parent process.
• Similarly, a pointer to child process (if it exists).
• The priority of process (a part of CPU scheduling information).
• Pointers to locate memory of processes.
• A register save area.
• The processor it is running on.
The PCB is a central store of information that allows the operating system to locate key information about a process.
Thus, the PCB is the data structure that defines a process to the operating system.
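A simplified C sketch of a PCB holding the fields listed above; the field names and types are illustrative and are not taken from any particular kernel.

```c
#include <stdint.h>
#include <stdio.h>

enum proc_state { READY, RUNNING, WAITING, TERMINATED };

/* Illustrative process control block with the fields listed above. */
struct pcb {
    int              pid;             /* unique identification of the process  */
    enum proc_state  state;           /* current state of the process          */
    struct pcb      *parent;          /* pointer to the parent process         */
    struct pcb      *first_child;     /* pointer to a child process, if any    */
    int              priority;        /* CPU scheduling information            */
    void            *page_table;      /* pointer used to locate process memory */
    uint32_t         saved_regs[16];  /* register save area                    */
    int              cpu;             /* processor it is running on            */
};

int main(void) {
    struct pcb p = { .pid = 1, .state = READY, .priority = 10 };
    printf("pid %d, state %d, priority %d\n", p.pid, p.state, p.priority);
    return 0;
}
```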
Q28) Give an example of virtual memory. Explain paging and segmentation.
Ans : Virtual Memory : An imaginary memory area supported by some operating systems (for example, Windows but not DOS) in conjunction with the hardware.
You can think of virtual memory as an alternate set of memory addresses.
Programs use these virtual addresses rather than real addresses to store instructions and data.
When the program is actually executed, the virtual addresses are converted into real memory addresses.
The purpose of virtual memory is to enlarge the address space, the set of addresses a program can utilize.
For example, virtual memory might contain twice as many addresses as main memory.
A program using all of virtual memory, therefore, would not be able to fit in main memory all at once.
Nevertheless, the computer could execute such a program by copying into main memory those portions of the program needed at any given point during execution.
To facilitate copying virtual memory into real memory, the operating system divides virtual memory into pages, each of which contains a fixed number of addresses.
Each page is stored on a disk until it is needed. When the page is needed, the operating system copies it from disk to main memory, translating the virtual addresses into real addresses.
The process of translating virtual addresses into real addresses is called mapping.
The copying of virtual pages from disk to main memory is known as paging or swapping.
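To make the mapping concrete, suppose pages hold 4 KB: the low 12 bits of a virtual address are then the offset within a page, and the remaining bits are the page number. The small C sketch below performs the translation; the page table here is only a toy in-memory array standing in for the real OS and hardware structures:

#include <stdio.h>

#define PAGE_SIZE   4096u                       /* 4 KB pages  */
#define OFFSET_BITS 12u                         /* 2^12 = 4096 */

int main(void)
{
    /* Toy page table: page number -> frame number (invented values). */
    unsigned frame_of_page[8] = {5, 2, 7, 0, 3, 6, 1, 4};

    unsigned virtual_addr  = 0x3ABC;
    unsigned page          = virtual_addr >> OFFSET_BITS;      /* page number 3 */
    unsigned offset        = virtual_addr & (PAGE_SIZE - 1);   /* offset 0xABC  */
    unsigned frame         = frame_of_page[page];              /* frame 0       */
    unsigned physical_addr = (frame << OFFSET_BITS) | offset;

    printf("virtual 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
           virtual_addr, page, offset, physical_addr);
    return 0;
}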
Paging: Operating systems use various techniques to allocate memory.
Pages are fixed units of computer memory allocated by the computer for storing and processing information.
Pages are easier to place in computer memory since they are always the same size.
A technique used by virtual memory operating systems to help ensure that the data you need is available as quickly as possible.
The operating system copies a certain number of pages from your storage device to main memory.
When a program needs a page that is not in main memory, the operating system copies the required page into memory and copies another page back to the disk.
One says that the operating system pages the data. Each time a page is needed that is not currently in memory, a page fault occurs. An invalid page fault occurs when the address of the page being requested is invalid. In this case, the application is usually aborted. This type of virtual memory is called paged virtual memory. Another form of virtual memory is segmented virtual memory.
Segmentation: Memory segmentation is the division of computer memory into segments or sections.
Segments or sections are also used in object files of compiled programs when they are linked together into a program image, or when the image is loaded into memory.
In a computer system using segmentation, a reference to a memory location includes a value that identifies a segment and an offset within that segment.
Different segments may be created for different program modules, or for different classes of memory usage such as code and data segments.
Certain segments may even be shared between programs.
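As a small illustration, the C sketch below looks up an invented segment table: the reference is a (segment, offset) pair, the offset is checked against the segment's limit, and the physical address is the segment base plus the offset:

#include <stdio.h>

struct segment { unsigned base; unsigned limit; };   /* one segment-table entry */

int main(void)
{
    /* Toy segment table, e.g. segment 0 = code, segment 1 = data. */
    struct segment table[2] = { {0x1000, 0x0400}, {0x8000, 0x0200} };

    unsigned seg = 1, offset = 0x01F0;               /* the reference: (segment, offset) */

    if (offset >= table[seg].limit) {                /* protect against bad references */
        printf("error: offset beyond segment limit\n");
        return 1;
    }
    printf("segment %u, offset 0x%X -> physical 0x%X\n",
           seg, offset, table[seg].base + offset);
    return 0;
}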
Q29) What are logical and physical addresses? Explain the protection system.
Ans: Logical and physical address: In computing, a physical address (also real address or binary address) is the memory address that is electronically presented (in the form of a binary number) on the computer's address bus circuitry in order to enable the data bus to access a particular storage cell of main memory.
In computing, a logical address is the address at which an item (memory cell, storage element, and network host) appears to reside from the perspective of an executing application program.
" In computing, a logical address is the address at which an item (memory cell, storage element, and network host) appears to reside from the perspective of an executing application program.
A logical address may be different from the physical address due to the operation of an address translator or mapping function.
Such mapping functions may be, in the case of a computer memory architecture, a memory management unit (MMU) between the CPU and the memory bus,
or an address translation layer, e.g., the Data Link Layer, between the hardware and the internetworking protocols (Internet Protocol) in a computer networking system.
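The simplest such mapping function is a single relocation (base) register: every logical address generated by the program has the register's value added to it before it reaches memory. A tiny C sketch with invented numbers:

#include <stdio.h>

int main(void)
{
    unsigned relocation = 14000;   /* loaded by the OS for this process (example value) */
    unsigned logical    = 346;     /* address as seen by the running program            */
    unsigned physical   = logical + relocation;   /* address placed on the address bus  */

    printf("logical %u -> physical %u\n", logical, physical);
    return 0;
}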
Q30) Explain wildcard characters in UNIX, with suitable examples.
Ans: A number of characters are interpreted by the UNIX shell before any other action takes place; these special characters can be used to match multiple files at the same time.
These characters are known as wildcard characters. Usually they are used in place of filenames or directory names.
* An asterisk matches any number of characters in a filename, including none.
? The question mark matches any single character.
[ ] Brackets enclose a set of characters, any one of which may match a single character at that position.
- A hyphen used within [ ] denotes a range of characters.
~ A tilde at the beginning of a word expands to the name of your home directory. If you append another user's login name to the character, it refers to that user's home directory.
Here are some examples:
1. cat c* displays any file whose name begins with c including the file c, if it exists.
2. ls *.c lists all files that have a .c extension.
3. cp ../rmt? . copies every file in the parent directory whose name is four characters long and begins with rmt to the working directory. (The names will remain the same.)
4. ls rmt[34567] lists every file that begins with rmt and has a 3, 4, 5, 6, or 7 at the end.
5. ls rmt[3-7] does exactly the same thing as the previous example.
6. ls ~ lists your home directory.
7. ls ~hessen lists the home directory of the user with the login name hessen.
Q31) What is the kernel of an operating system? Explain its functions.
Ans: The kernel is the essential center of a computer operating system, the core that provides basic services for all other parts of the operating system.
A synonym is nucleus. It is the central module of an operating system.
It is the part of the operating system that loads first, and it remains in main memory.
Because it stays in memory, it is important for the kernel to be as small as possible while still providing all the essential services required by other parts of the operating system and applications.
Typically, the kernel is responsible for memory management, process and task management, and disk management.
A kernel can be contrasted with a shell, the outermost part of an operating system that interacts with user commands.
Kernel and shell are terms used more frequently in UNIX operating systems than in IBM mainframe or Microsoft Windows systems.

In computer science, the kernel is the central component of most computer operating systems (OS).
Its responsibilities include managing the system's resources (the communication between hardware and software components).
As a basic component of an operating system, a kernel provides the lowest-level abstraction layer for the resources (especially memory, processors and I/O devices) that application software must control to perform its function.
It typically makes these facilities available to application programs through interprocess communication mechanisms and system calls.
These tasks are done differently by different kernels, depending on their design and implementation.
While monolithic kernels will try to achieve these goals by executing all the code in the same address space to increase the performance of the system, microkernels run most of their services in user space, aiming to improve maintainability and modularity of the codebase.
A range of possibilities exists between these two extremes.
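To make the idea of a system call mentioned above concrete, the short C program below (POSIX assumed) asks the kernel to perform an I/O service on its behalf; the application never touches the device hardware directly:

#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "hello from user space\n";
    /* write() traps into the kernel, which carries out the actual device I/O. */
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}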
Q32) Write a note on the vi editor in Unix.
Ans: The vi editor (short for visual editor) is a screen editor which is available on almost all Unix systems.
Once you have learned vi, you will find that it is a fast and powerful editor.
vi has no menus but instead uses combinations of keystrokes in order to accomplish commands.
Starting vi
To start using vi, at the Unix prompt type vi followed by a file name.
If you wish to edit an existing file, type in its name; if you are creating a new file, type in the name you wish to give to the new file.
%vi filename
Then hit Return. You will see a screen similar to the one below which shows blank lines with tildes and the name and status of the file.
"myfile" [New file]
vi's Modes and Moods
vi has two modes: the command mode and the insert mode.
It is essential that you know which mode you are in at any given point in time.
When you are in command mode, letters of the keyboard will be interpreted as commands.
When you are in insert mode the same letters of the keyboard will type or edit text.
vi always starts out in command mode. When you wish to move between the two modes, keep these things in mind.
You can type i to enter the insert mode. If you wish to leave insert mode and return to the command mode, hit the ESC key.
If you're not sure where you are, hit ESC a couple of times and that should put you back in command mode.
General Command Information
As mentioned previously, vi uses letters as commands. It is important to note that in general vi commands:
• are case sensitive - lowercase and uppercase command letters do different things
• are not displayed on the screen when you type them
• generally do not require a Return after you type the command.
You will see some commands which start with a colon (:).
These commands are ex commands which are used by the ex editor. ex is the true editor which lies underneath vi -- in other words, vi is the interface for the ex editor.
Entering Text
To begin entering text in an empty file, you must first change from the command mode to the insert mode.
To do this, type the letter i. When you start typing, anything you type will be entered into the file.
Type a few short lines and hit Return at the end of each line. Unlike word processors, vi does not use word wrap.
It will break a line at the edge of the screen. If you make a mistake, you can use the Backspace key to remove your errors.
If the Backspace key doesn't work properly on your system, try using the Ctrl h key combination.
Cursor Movement
You must be in command mode if you wish to move the cursor to another position in your file.
If you've just finished typing text, you're still in insert mode and will need to press ESC to return to the command mode.
Moving One Character at a Time
Try using your direction keys to move up, down, left and right in your file.
Sometimes, you may find that the direction keys don't work.
If that is the case, to move the cursor one character at a time, you may use the h, j, k, and l keys. These keys move you in the following directions:
h  left one space
l  right one space
j  down one space
k  up one space
If you move the cursor as far as you can in any direction, you may see a screen flash or hear a beep.
Q33) Describe the various file types supported by the UNIX operating system.
Ans: UNIX has many different types of files, which are described as follows:
1) Directory files: These are the key to the hierarchical nature of the file system, and UNIX was one of the first operating systems to implement such a structure. The hierarchical aspect improves access times.
A directory is a special sort of file.
It can contain ordinary files or additional directories.
With directories the user has complete flexibility in grouping files in meaningful ways.
A UNIX directory must have a name normally of up to fourteen characters.
These characters may be upper or lower case or a mixture of both.
2) Linked files and i-nodes: Unlike in some other operating systems that have a hierarchical file system, a UNIX file can have more than one name.
A single UNIX file may also be identified in more than one directory. This is done by the use of multiple links to the file.
UNIX assigns a unique number to every file that is created in the file system, and this is called the i-node number.
Every file on the system must have exactly one i-node. In the simplest case a file has a single link, i.e. it is known by only one name.
Since UNIX can also have multiple links, a file can appear in more than one directory.
Once the directory is entered, the link is followed and the user finishes up inside that directory.
The file itself is not duplicated; if it were, that would waste storage space.
The key thing is that we are linking to a single file; where the links originate from does not matter.
This ability to link files is very useful, since different people can access the same file from different areas of the file system.
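One way to see a file's i-node number and link count from a program is the stat system call; a minimal C sketch follows (the file name "myfile" is only an example):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat sb;

    if (stat("myfile", &sb) == -1) {   /* ask the kernel for the file's i-node data */
        perror("stat");
        return 1;
    }
    printf("i-node number: %lu\n", (unsigned long)sb.st_ino);
    printf("links to it:   %lu\n", (unsigned long)sb.st_nlink);
    return 0;
}

The same i-node number is reported for every name (link) that refers to the file.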
3) Ordinary files: An ordinary file consists of a sequential series of bytes, which occupy disk storage space.
An ordinary file is different from a directory in that a directory allows layers of files to be built up.
It therefore constitutes a ‘leaf node’ in the context of tree structured hierarchy.
UNIX imposes no rules regarding the internal format of ordinary files. Ordinary files are typically created using a text editor.

Q34) List the various features of the Linux operating system.
Ans: Following are the key features of the Linux operating system:
Multitasking: several programs running at the same time.
Multiuser: several users on the same machine at the same time (and no two-user licenses!).
Multiplatform: runs on many different CPUs, not just Intel.
Multiprocessor/multithreading: it has native kernel support for multiple independent threads of control within a single process memory space.
It has memory protection between processes, so that one program can't bring the whole system down.
Demand loads executables: Linux only reads from disk those parts of a program that are actually used.
Shared copy-on-write pages among executables. This means that multiple processes can use the same memory to run in.
When one tries to write to that memory, that page (4KB piece of memory) is copied somewhere else.
Copy-on-write has two benefits: increasing speed and decreasing memory use.
Virtual memory using paging (not swapping whole processes) to disk: to a separate partition or a file in the file system, or both, with the possibility of adding more swapping areas during runtime.
A total of 16 of these 128 MB (2 GB in recent kernels) swapping areas can be used at the same time, for a theoretical total of 2 GB of usable swap space.
It is simple to increase this if necessary, by changing a few lines of source code.
A unified memory pool for user programs and disk cache, so that all free memory can be used for caching, and the cache can be reduced when running large programs.
All source code is available, including the whole kernel and all drivers, the development tools and all user programs;
also, all of it is freely distributable. Plenty of commercial programs are also available for Linux.

Q35) Explain the organization of the Unix operating system.
Ans: Operating Systems today generally consist of many distinct pieces or components.
We can simplify our description of an OS by viewing it as many layers of related components.
A generic OS layered diagram is pictured below.
Note the component at the top of the diagram is the user, and the component at the bottom of the diagram is the physical hardware.
Management (control) of all tasks in between these two physical components is the responsibility of the OS.

As with most modern operating systems, the Unix OS is also made up of many different components.
In a very general sense, Unix is divided into two main components, the kernel component and the utilities.
The kernel, which is critical to the operation of the OS, is loaded into Random Access Memory (RAM) by the boot loader, where it remains memory resident for as long as the machine remains powered on.
The utilities are programs which (typically) reside on a disk device (e.g. a harddrive).
Individual utilities are loaded into RAM as needed or requested and are discarded from RAM upon completion.
The relationship between the kernel and the utilities is pictured below.
Perhaps the most "important" or well known of the Unix utilities is known as the Unix shell.
The shell is the mechanism which allows users to enter commands to run other utility programs.
There are several popular UNIX shell programs which will be discussed later.
What is important to keep in mind is that the shell is merely another UNIX utility program, which is typically loaded at login.
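Conceptually, when you type a command the shell creates a new process and asks the kernel to load the requested utility into it; the stripped-down C sketch below shows the idea (real shells do much more, and the command "ls -l" is just an example):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                        /* new process to run the utility     */

    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)0);   /* replace the child with the utility */
        perror("execlp");                      /* reached only if loading failed     */
        _exit(1);
    }
    wait(NULL);                                /* shell waits, then prompts again    */
    return 0;
}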
The diagram below provides another visual representation of the organization of the Unix OS.
At the core of the OS is the hardware, which is managed by the surrounding outer layer, the kernel.
In the next outer layer come the utilities.
Many of the utilities are system commands, but they can also be user-written programs, as shown by the a.out program. Finally, in the outermost layer are other application programs, which can be built on top of lower-layer programs.
Q36) Give the purpose of the following directories in the Unix file system.
i) /boot
ii) /dev
iii)/etc
iv) /lost + found
Ans: /boot: In Linux, and other Unix-like operating systems, the /boot/ file directory holds files used in booting Linux.
The usage is standardized in the File system Hierarchy Standard.
The contents are mostly Linux kernel files or boot loader files, depending on the boot loader.
/boot/ is often simply a directory on the main (or only) hard drive partition.
However, it may be a separate partition.
A separate partition is generally used only when the boot loader is incapable of reading the main file system (e.g. SILO does not recognize XFS) or when there are other problems not easily resolvable by users.
ii) /dev: This is the directory where all the devices are kept.
UNIX treats devices such as printers, disk-storage devices, terminals and even areas of the computer's memory as files.
The files in /dev are termed special files, since their file type is neither directory nor ordinary.
/dev contains a file for each device on the system, so typical entries have filenames such as tty1, tty2 and so forth.
/dev is also a part of the root file system.
The files in /dev contain information which is common to all files, such as creation and modification dates and file permissions.
One important thing to know about /dev is that there are no file sizes.
Instead, what are known as major and minor device numbers are included.
The major number encodes the type of device being referred to, while the minor number distinguishes between individual devices of that type.
iii) /etc: The /etc directory, which resides at the root level, contains various administration utilities together with other special system files that allow the Unix host system to start up properly at bootstrap time.
Utilities for handling the system's terminal devices are stored in /etc, as are lists of all the registered users of the system, including you.
iv)/lost + found: The lost+found directory is used by fsck. Files that have no valid links are copied to this directory.
Each file system should contain one lost+found directory.
Orphaned files and directories (those that cannot be reached) are, if you allow it, reconnected by placing them in the lost+found subdirectory in the root directory of the file system. The name assigned is the i-node number.
If you do not allow the fsck command to reattach an orphaned file, it requests permission to destroy the file.