Operating system interview questions

Q.1) What is an operating system?

A computer system has many resources, which may be software or hardware. A few of the common resources are the CPU, files, different processes, memory, etc. All these resources are managed by a low-level software known as the operating system. It acts as an interface between the computer user and the machine.

For more details please refer: introduction to operating system

Q.2) Explain the architecture of a Linux operating system?

The Linux operating system consists of four important components:

  • Hardware: Consists of physical devices like the mouse, CPU, keyboard, and NIC cards.
  • Kernel: Directly interacts with the hardware and provides services to the upper layers.
  • Shell: Acts as an interface between the kernel and user applications.
  • Utilities: Programs that provide information about the operating system to users, like grep, ssh, top, date, etc.

Linux is based on UNIX architecture. For more details please refer: Linux architecture

Q.3) What are the common functions of an operating system?

As discussed earlier, the operating system is the software responsible for managing the different resources of a system. A few of the important resources are: CPU, memory, processes, storage allocation, and the file system.

Operating System serves two important purposes:

  • It acts as an interface between the end user’s applications and computer hardware which provides an environment for building and executing the applications.
  • It is responsible for allocating the different resources among different users and their applications.

For more details, please refer: Function of an operating system

Q.4) What are the different components of an operating system?

Three important components of an operating system are:

  • Kernel: The core component, which acts as an interface between user-level applications and the hardware.
  • Libraries: Contain functions for file manipulation, getting the date and time, etc.
  • Utilities: Programs that provide information about the operating system to users, like grep, ssh, top, date, etc.

For more details please refer: Components of an operating system

Q.5) What is a kernel?

  • The kernel is the core component of the operating system and is responsible for process management, I/O management, the file subsystem, memory management, scheduling, secondary storage management, interrupt handling, etc.
  • It is the first program loaded after the bootloader and remains in memory until the operating system is shut down.
  • The major aim of the kernel is to manage communication between software, i.e. user-level applications, and hardware, i.e. the CPU and disk memory.
  • A process calls the appropriate system call to avail itself of services provided by the kernel.

For details please refer: Introduction to kernel

Q.6) What is the difference between a monolithic and a micro kernel?

A few of the important differences are:

Monolithic Kernel:

  • User services/applications such as device drivers, file servers, and graphical tools share the same memory space as kernel services like process management, device management, and interrupt handling.
  • They are faster, as there is no separation between user space and kernel space.
  • All the components are tightly coupled, hence a failure in any one component affects the entire system.
  • They are bigger than a microkernel and are not easily customizable.
  • Examples: Unix, Linux

Micro Kernel:

  • User services and kernel services are implemented in separate address spaces.
  • They are slower, as user space and kernel space are implemented separately.
  • The components are loosely coupled, and a fault in one component does not affect the others.
  • They are smaller than a monolithic kernel and more easily customizable.
  • Example: Symbian

For other types of kernel, please refer: Types of kernel

Q.7) What is kernel space and user space?

Kernel space and user space are memory regions that differ in the privilege of their components.

Kernel Space:

  • It is the part of memory where the code of the kernel is located and executes.
  • The components used in kernel space are referred to as kernel space components.
  • Processes running in this space have full access to the kernel and do not require any system calls to avail themselves of services provided by the kernel.
  • They can access any machine instruction of the underlying architecture.
  • They can access hardware controllers and registers directly.
  • Examples: getpid, socket, read, write, the process management subsystem, and the memory management subsystem are system space or kernel space components.

User Space:

  • It is the set of memory locations where normal user processes run, that is, the space other than kernel space.
  • This area is often referred to as userland, and its components are referred to as user space components: the code that runs outside the operating system’s kernel.
  • Userland usually refers to the various programs and libraries that the operating system uses to interact with the kernel.
  • Processes running in this space do not have full privileges and do not have direct access to kernel space.
  • They can access a small part of the kernel by using system calls.
  • They cannot access all machine instructions, and they can access hardware controllers and registers only with the help of system calls.
  • Examples: any C program, or top, ps, ls, malloc, fopen are common user space programs which invoke system calls to get the job done.

Diag-1: User space and Kernel Space


Q.8) What is a system call?

  • A system call helps a userland process avail itself of the services provided by the kernel.
  • System calls are used by user space components to get the facilities provided by kernel space components.
  • Kernel space components cannot be accessed directly, that is, without using system calls.
  • Common system calls are: read, write, socket, etc.

Q.9) How does a system call work?


Working Principle:

  • Execution of a system call is an example of a software interrupt, where control of execution is passed to kernel space code, that is, to the appropriate system call service routine.
  • There is a unique number associated with each system call; the user space program puts this number in a register before triggering the interrupt.
  • System call numbers correspond to entries in a table called the system call table.
  • Each entry is like a function pointer that holds the address of the function that the CPU will begin executing when that interrupt is received.
  • Before control is passed to the called routine, the current context of the process, such as the register contents and the program counter (which holds the next instruction to be executed), is saved onto the stack.
  • Once the service routine is done, the saved context is restored into the registers and normal execution resumes in user space.

System call Interface Layer:

  • It serves as an interface between system calls and the underlying operating system.
  • The System call interface layer provides a trap handler that is invoked whenever a system call API is invoked.
  • As per the system call API and its rules, the trap handler, with the help of the system call table (also located in the system call interface layer), invokes the appropriate service routine in the appropriate subsystem of the kernel.

Diag-1: System call working


Q.10) What is the difference between a program and a process?


Program:

  • It is a passive entity; a static object that can exist in a file.
  • It does not require any resources other than the file in which it is stored.
  • It contains the instructions or the algorithm to be executed.
  • A program exists in a single space and continues to exist as time goes forward.

Process:

  • It is a dynamic entity, that is, a program under execution in main memory.
  • A process holds resources such as CPU, memory, I/O, etc.
  • It is a sequence of instructions under execution.
  • A process exists for a limited period of time.

Example:

   int i, prod = 1;
   for (i = 1; i <= 100; i++)
      prod = prod * i;

Here the program contains one multiplication (prod = prod * i), whereas the process executes 100 multiplications, one per loop iteration.

Q.11) What are the different sections of a process?

The different sections of a process are:

Stack Segment:

  • The stack sits at the higher address and grows towards the lower address.
  • It is unique for each process and stores automatic variables (non-static local variables).
  • A “stack pointer” register tracks the top of the stack; it is adjusted each time a value is “pushed” onto the stack.
  • When a function is called, a stack frame (or a procedure activation record) is created and PUSHed onto the top of the stack.
  • This stack frame contains information such as the address from which the function was called and where to jump back to when the function is finished (return address), parameters, local variables, and any other information needed by the invoked function.
  • Data is removed in a last-in-first-out manner from the stack.

Heap Segment:

  • This segment is responsible for holding all the variables which are dynamically allocated via malloc(), calloc(), and realloc() in C, or new in C++.
  • It is shared by all the shared libraries and dynamically loaded modules in a process.
  • The stack and heap are traditionally located at opposite ends of the process’s virtual address space.
  • It is typical for the heap to grow upward. This means that successive items added to the heap are placed at addresses numerically greater than previous items.
  • It is also typical for the heap to start immediately after the BSS area of the data segment.
  • The end of the heap is marked by a pointer known as the break (brk).
  • A program that references memory past the break will be terminated.

Data Segment:

  • This section of memory is responsible for holding global, static, constant, and extern variables.
  • It is further divided into two parts:
    • Initialized data segment: Contains all the global, static, extern, and constant variables which are initialized with non-zero values.
    • char a[] = "hello" goes in the initialized read-write area, whereas for char *a = "hello" the string literal goes in the initialized read-only area.
    • Uninitialized data segment: Also called the BSS (block started by symbol) segment; it contains all the global, static, and extern variables which are uninitialized.
    • It holds all global and static variables that are initialized to 0 or do not have explicit initialization in the source code.

Text Segment:

  • This is also referred to as the code segment; it contains the executable code, usually starts at a low address, and is a static, read-only part of memory.
  • It is the machine language representation of the program steps to be carried out, including all functions making up the program, both user-defined and system-defined.


We also have one section, the command-line arguments section, which holds the values passed as command-line arguments: int main(int argc, char *argv[]).

Diag-1: Process memory layout


Q.12) What are the different states of a process?

As a process executes, that is, from creation to termination, it goes through different states. The state of a process can be defined as the current activity of the process.

The common states of a process are:


New State:

The process is being created.

Ready State:

This state signifies that the process is waiting to be assigned to a processor. It has all the resources it needs and is waiting for the short-term scheduler to hand it to the processor.

If there is more than one program in the ready state (in main memory), it is multiprogramming.

Running State:

The process is running; its instructions are being executed. That is, it has been picked up by the short-term scheduler and given to the CPU.

Wait or Block State:

This state signifies that the process is waiting for some resource or for some event to occur (like an I/O operation). Its process descriptor (PD) is added to the corresponding wait queue.

Suspended State:

The process is temporarily swapped out to secondary memory by the medium-term scheduler to make room for a new process, since a process in the wait or block state cannot progress.

Terminated State:

The process has completed its execution (normal Termination) or abnormal termination because of trying to access some illegal memory or instructions.


A process goes through a minimum of four states: new -> ready -> running -> terminated.

Diag-1: Process States


Q.13) What is a process control block (PCB) or process descriptor (PD)?

  • It is a data structure that contains all relevant information or attributes about a process.
  • Each process has one unique PCB.
  • It is also termed the Process Descriptor (PD).
  • To execute a process, the OS creates a data structure that contains all the information about the process, called the PCB. It is also termed the context of a process.

Some of the important fields present in the process control block are:

  • PID
  • States
  • Scheduling Parameter
  • Program Counter
  • CPU registers
  • Pointer to the other PCB
  • List of open files
  • List of open devices like keyboard, mouse

All these PCBs are kept together in one master process table.

Diag-1: Process Control Block(PCB)


Q.14) What is context switching?

It is the switching of the CPU's execution from one process to another.

  • It involves switching the context of the processor from the current process to the next, passing through kernel space.
  • It involves saving the current context of a process before moving control of execution to another process.
  • The current context of the process, that is, the current register contents and the program counter, is saved so that it can be reloaded once control switches back to this process.
  • When programs run, they build up a great deal of state in CPU caches and TLBs. Switching to another job causes this state to be flushed and a new state relevant to the newly-running job to be brought in, which may exact a noticeable performance cost.
  • Context switching is the basis of multiprogramming and multitasking.

Before the context is switched, the complete process control block (PCB), which is a data structure that holds the information about the process, is saved.

Important information present in PCB which are saved onto the stack is as follows:

  • The process state
  • The program counter
  • The values of different registers.
  • The CPU scheduling information.
  • Memory Management information about the process
  • I/O status information.

When the PCB of the currently executing process has been saved, the operating system loads the PCB of the next process to be run on the CPU. This is a heavy task and takes time.

Thus frequent context switching is an overhead; it also involves the scheduler picking the next ready job to be processed.

Overhead of Context Switching:

Direct Factors:

  • Latency
  • Saving/restoring contexts
  • Finding the next process to execute, calling the scheduler

Indirect Factors:

  • TLB needs to be reloaded.
  • The processor pipeline needs to be flushed.

Diag-1: Context Switching


Q.15) What do you mean by normal and abnormal termination of a process?

Any process can either be terminated normally or abnormally.

A process can be terminated in one of the following ways:

  • by exiting (i.e., the process terminates itself calling exit system call)- Normal Termination
  • by being signaled –Abnormal Termination
  • by having no running threads (i.e. the thread count goes to 0)

Normal Terminations:

  • This reflects that the process has completed its task and has terminated gracefully.
  • The exit() system call is used by most operating systems for process termination.
  • The process leaves the processor and releases all its resources

Abnormal Terminations:

This reflects that the process has not completed its task rather it has terminated in between because of any other reasons.

Some of the common reasons for abnormal terminations are:

  • The process is trying to access memory locations it should not, resulting in a SIGSEGV signal.
  • Termination because of permission violations, for example read/write permissions.
  • Another process explicitly sends a signal to the process, for example with kill(pid, SIGTERM).
  • Trying to bypass the system call API to access the kernel directly.

Q.16) What is the init process?

  • It is the first process to be created when the Linux machine boots up.
  • It is a daemon process that continues to run until Linux is shut down; it cannot be killed, and if a signal is sent to the init process, it is simply discarded.
  • Init is started by the kernel during the booting process; a kernel panic will occur if the kernel is unable to start it or if the init process itself calls exit().
  • It will adopt all orphaned processes.
  • The init process is the ancestor of all other processes; killing it would leave every other process an orphan, with no process left to re-parent them.
  • The PID of the init process is 1 and its PPID is 0.

Q.17) What do you mean by parent and child process?

Parent Process

  • A parent process is a process that has created another process called the child process.
  • In Linux, every process has a parent process except the Init and a few other kernel processes.

Child Process:

  • It is the process created by a process.
  • In Linux, all the processes except the Init process are children of a process.

Diag-1: Process ID and Parent Process ID


Q.18) What is a daemon process or a background process?

  • These are processes that run independently of any terminal session.
  • A process that does not need any interaction with the user and needs to run for a long time is made a background process.
  • Syntax: process_name &>/dev/null &
  • This will redirect the output of the command to nowhere and can be changed for redirecting to the proper location.
  • Typically, daemon names end with the letter d: for example, syslogd is the daemon that implements the system logging facility and sshd is a daemon that services incoming SSH connections.
  • The daemon process is a process orphaned intentionally.

Common advantages associated with the Daemon process are:

  • log out without losing the service (which saves some resources)
  • no risk of losing the service from an accidental Ctrl-C
  • removes the minor security risk of someone accessing the terminal, hitting Ctrl-C, and taking over the session

Q.19) What is an orphan process?

  • A process is considered to be an orphan if its parent process does not exist.
  • All the orphan processes are re-parented by the Init process.
  • Orphan processes consume resources while they are in the system and can potentially leave a server starved for resources. Having too many orphan processes will overload the init process and can hang up a Linux system.

Reason for Orphan Process:

  • A process can be orphaned either intentionally or unintentionally. Sometimes a parent process exits/terminates or crashes leaving the child process still running, and then they become orphans.
  • Also, a process can be intentionally orphaned just to keep it running. For example, when we need to run a job in the background which does not need any manual intervention and going to take a long time, then we detach it from the user session and leave it there.
  • Same way, when we need to run a process in the background for infinite time, we need to do the same thing. Processes running in the background like this are known as a daemon process.

Q.20) What is a zombie process?

  • A zombie or defunct process is a process that has completed its execution but still has an entry in the process table.
  • When the child process ends, it must release all its resources so that they can be used by other processes.
  • The child process sends a SIGCHLD signal to the parent process indicating that it has terminated; the parent should read the exit status and remove the entry from the process table.
  • If the parent process does not clean up this entry, the child is referred to as a zombie process.
  • To remove zombies from a system, the SIGCHLD signal can be sent to the parent manually, using the kill command.
  • If the parent process still refuses to reap the zombie, the next step is to remove the parent process. When a process loses its parent, init becomes its new parent. Init periodically executes the wait() system call to reap any zombies it has inherited.

Q.21) What is the fork() system call?

  • fork() is a system call that creates a new process, a duplicate of the calling (parent) process, called the child process.
  • The child process gets a new process descriptor in which most of the fields are copies of the parent’s.
  • Syntax: pid_t ret = fork();
  • It returns 0 to the child process and the PID of the newly created child to the parent on success, and returns -1 on failure.

Q.22) What are the reasons a fork() call can fail?

A fork() can fail because of either of the two reasons:

  • If the parent process has exceeded the limit for the number of child processes it can create.
  • If there is not enough space in the process table or not enough space in virtual memory.

Q.23) What are the fields that are shared between a parent process and a child process in the case of the fork() API?

In the case of fork(), the child process gets a duplicate of the parent's address space, and most fields of its process descriptor are copies of those of the parent. Some of the fields that are duplicated from the parent are:

  • CPU Registers
  • Current working directory
  • Signal Mask
  • Scheduling Parameters
  • Number of open files
  • Resource Limit
  • Identical Copy of Memory

The child process will have its own process ID, and its data section and state will be separate from those of the parent.

Q.24) What is the relation between the number of child processes created and fork() system calls?

  • If there are n fork() calls, the number of child processes created is 2^n - 1, and
  • the total number of processes is 2^n, including the original parent process.


   int main()
   {
      fork();
      fork();
      fork();
      printf("\n Hello\n");
      return 0;
   }

  • For the above code, the total number of child processes created is 2^3 - 1 = 7.
  • The first fork results in 1 child process; both the child and the parent execute the next fork, creating 2 more child processes; then all 4 processes (two children and two parents) execute the last fork, creating 4 more children. Hence a total of 7 child processes is created, and
  • "Hello" will be printed eight times.

Q.25) What is the exec family of system calls?

  • The exec() family of functions replaces the current process image with a new process image.
  • One of the practical application of using this family of function is during the use of the fork() API.
  • In the case of fork() API system call, the child process has the same address space as that of the parent process.
  • That is, the parent and child processes will run two instances of the same application or program.
  • A child process that executes exactly the same program as its parent often does not hold much significance.
  • When a process calls an exec family API, all code (text), data (initialized and uninitialized), stack, and heap sections of the calling process are replaced with the executable of the new program.
  • Hence the exec family of system calls replaces the child process’s address space with a new process image.
  • The exec subroutine does not create a new process but overlays the current program with a new one, which is called the new process image.


Syntax:

This family has many APIs; the basic syntax takes the path of the program to execute and a variable number of arguments.

int execl(const char *path, const char *arg, ..., (char *)NULL);

  • The first argument is the path of the file to be executed.
  • The following arguments form a variable-length argument list.
  • The list must be terminated with a NULL pointer.

Return Value:

On success, execl() does not return, as the process image has been replaced; on failure it returns -1.


Sample Program-1:
Old Image: execl.c

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>

int main()
{
  printf("\n Executing the old program ..\n");
  int res = execl("image", "Hello", (char *)NULL);
  if (res == -1) {
    printf("\n execl() API failed  %s\n", strerror(errno));
    exit(EXIT_FAILURE);
  }
  return 0;
}

New image program (new_image.c):

#include <stdio.h>

int main(int argc, char *argv[])
{
  printf("\n New image is executing because of execl() API\n");
  printf("\n Argument from old image is %s\n", argv[0]);
  return 0;
}
[aprakash@wtl-lview-6 execl]$ gcc new_image.c -o image
[aprakash@wtl-lview-6 execl]$ gcc execl.c 
[aprakash@wtl-lview-6 execl]$ ./a.out 

 Executing the old program ..
 New image is executing because of execl() API
 Argument from old image is Hello

Q.26) What is the difference between the wait() and waitpid() system calls?

  • Both APIs are used to block the execution of the calling process until one of its child processes terminates, is stopped by a signal, or is resumed by a signal.
  • If the parent process happens to terminate before the child process, the child process becomes an orphan and gets re-parented to the init process.
  • If the child process terminates and the parent process does not remove its entry from the process table, it becomes a zombie process.
  • Using wait() and waitpid(), the parent process can get the exit status of the child process, remove its entry, and so avoid zombie processes.

Common differences between them are:

  • wait() is a blocking call: if none of the child processes has terminated, the calling process blocks until one of them terminates. waitpid() can be made non-blocking by passing WNOHANG in the options field.
  • With waitpid() we can pass the specific PID of the child process to wait for, whereas wait() has no such option.

For more details with example, please refer: wait() and waitpid() system calls


Q.27) What are threads?

  • A thread can be defined as a concurrent or parallel unit of execution within a single process.
  • Each thread represents a separate flow of control.
  • They provide a way to improve application performance through parallelism.
  • A process with multiple units of execution (threads) is known as a multithreaded process.
  • Any process will have a minimum of one thread, called the main thread.
  • Threads are also known as lightweight processes.

Q.28) Give a few common examples of threads.

Example of a Thread:

  • Microsoft Word has multiple threads which perform different tasks in parallel: one thread takes the input while another does the spell check and another maintains the word count.
  • A web server runs as a multithreaded system where the request from each client is handled by a separate thread.
  • If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and the amount of time a client might have to wait for its request to be serviced could be enormous.

Diag-2: Multithreaded Server Architecture

Threads in Uniprocessor and Multiprocessor System

  • In a uniprocessor system only one thread is executed at a time by the CPU, but the CPU switches (context switches) rapidly between the threads to give the illusion that the threads are running in parallel.
  • In a multiprocessor system, different threads can execute at the same time on different processors, truly in parallel.

Diag-3: Threads in Uniprocessor and Multiprocessor System

Q.29) What are two important advantages of having a thread in a program?

There are two main reasons to have threads in a program:

  • Parallelism
  • Responsiveness


Parallelism:

  • Parallelism is executing more than one task at a time, and threads help in achieving it.
  • On a multiprocessor, each thread can be executed independently by a different processor at the same time.
  • A task which is completely independent of others, or can be broken down into independent sub-tasks, can be made to run in parallel on different processors by using threads.
  • Consider the simple example of finding the sum of the numbers from 1 to 100: the task can be broken into two sub-tasks, one finding the sum from 1 to 50 and the other from 51 to 100, which can be added together later.
  • Since each sub-task is independent of the other, they can be executed in parallel by two threads.
  • Thus parallelism optimizes the performance of the system.


Responsiveness:

  • A thread-based application has better responsiveness in the sense that even if one thread is blocked for any reason (e.g. an I/O operation), the other threads can continue to execute.
  • If a single-threaded process blocks, the complete execution comes to a halt and the CPU goes idle.

Q.30) What are the two types of threads?

The two types of thread are:

  • User-level threads and
  • Kernel-level threads

User Level Threads

  • User-level threads are implemented by user level thread libraries rather than by using system calls.
  • The kernel is not aware of these threads; they are independent of the operating system.
  • Thread switching does not require system calls or an interrupt to the kernel.
  • There will be only one Thread Descriptor(TD) for all the threads.
  • Example: Java Thread and POSIX Threads

Kernel Level Threads:

  • These threads are implemented by the operating system by using system calls.
  • The kernel has a table that keeps track of all threads in a system.
  • There will be one Thread Descriptor (TD) per thread, unlike user-level threads.

Examples of Kernel Level threads

  • The idle process is a kernel thread:
    When the ready queue is empty, the scheduler schedules the idle process; there is one idle thread per processor in modern operating systems.
  • Page daemon:
    Responsible for selecting the pages to be replaced by the page replacement algorithm, that is, when system memory is low.

Q.31) Explain advantages and disadvantages of user-level and kernel-level threads?

Advantage of User Level Thread:

  • They do not require modification to the OS, they can be created on OS that does not support threads.
  • Thread switching (context switching) is inexpensive, as it needs no kernel intervention.
  • Thread is represented by program counter(PC), Registers, Stack and, a small Thread Control Block(TCB).

Disadvantage of User Level Thread:

  • There is a lack of coordination between threads and the operating system kernel. Therefore, the process as a whole gets one time slice irrespective of whether a process has one thread or 1000 threads within. It is up to each thread to relinquish control to other threads.
  • User-level threads require non-blocking system calls. Otherwise, the entire process will be blocked in the kernel, even if there are runnable threads left in the process.
  • For example, if one thread causes a page fault, the process blocks, as there is only one thread descriptor(TD) for all the threads.

Advantages of Kernel Level Threads:

  • Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
  • As each thread has its own thread descriptor (TD), even if one thread gets blocked, the other threads can continue executing.

Disadvantages of Kernel Level Threads:

  • Kernel-level threads are slow and inefficient; kernel thread operations can be hundreds of times slower than user-level thread operations.
  • The kernel must manage and schedule threads as well as processes, and it requires a full Thread Control Block (TCB) per thread to maintain information about each one; this adds significant overhead and increases kernel complexity.

Q.32) What is thread control block(TCB)?

Similar to the Process Control Block (PCB), each thread has a data structure called the Thread Control Block (TCB) which contains thread-specific information.

Some of the common attributes kept in TCB are:

  • Thread ID
  • The content of CPU Register
  • Program Counter(PC)
  • Scheduling Information
  • Thread state(status)
  • Stack Pointer
  • Signal Mask
  • Thread parameters such as start function and stack size
  • Pointer to the PCB of the process the thread belongs to
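The fields listed above can be sketched as a simple data structure. This is an illustrative Python model only (real kernels define the TCB as a C struct with a fixed layout); the field names are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class TCB:
    """Illustrative thread control block; not any real kernel's layout."""
    thread_id: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)  # saved CPU register contents
    state: str = "ready"          # e.g. ready / running / blocked
    stack_pointer: int = 0
    signal_mask: int = 0
    pcb_ref: object = None        # pointer back to the owning process's PCB

# Each thread of a process would get its own TCB instance:
tcb = TCB(thread_id=1)
```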

Diag-1: PCB-TCB Relation


Q.33) What are common and different fields among threads of a process?

Each process has four sections, that is: text (code), data, heap, and stack.

Similarly, a thread has these sections too, but the text, data, and heap sections are common to all the threads; only the stack section is separate for each thread.

A multithreaded-aware operating system also needs to keep track of threads. The items that the operating system must store that are unique to each thread are:

  • Thread ID
  • Saved registers, stack pointer, and program counter
  • Stack (local variables, temporary variables, return addresses)
  • Scheduling information (e.g. priority)
  • Signal mask
  • Thread state

The items that are shared among threads within a process are:

  • Text segment (instructions)
  • Data segment (static and global data)
  • BSS segment (uninitialized data)
  • Open file descriptors
  • Signals
  • Current working directory
  • User and group IDs

Diag-1: Memory Layout with two Threads

Q.34) What are common properties of a thread?

A few of the common properties of a thread are:

  • A thread is a single sequence of execution within a process, with its own thread ID, program counter, register set, and stack.
  • Threads of the same process share the code, data, and open files of that process.
  • A thread has a state (new, ready, running, blocked, terminated) and a scheduling priority.

Please read each of the above for details; interview questions are frequently asked from this area.

Q.35) Why threads are lighter than a process?

  • A thread is said to be lighter than a process in terms of resource consumption.
  • Most of the resources of a thread are inherited from its process, such as the complete memory space, signal table, and page table.
  • Each thread has only its own stack section, that is, its own local variables, a program counter (next instruction to execute), and registers.
  • For these reasons, a thread is also called a lightweight process.

Q.36) What are different pros and cons of a thread?

A few of the advantages associated with threads are:

Parallelism:

This is nothing but executing more than one task at a time, and threads help in achieving parallelism. On a multiprocessor, each thread can be executed independently by a different processor at the same time.

Responsiveness:

Threads add responsiveness to the system; if one thread is blocked, another thread can continue to work, whereas if a process is blocked, the CPU goes to an idle state.

Economical:

Threads are more economical than processes in terms of creation, termination, and context switching, as all threads share the same address space; the page table, signal table, and open file descriptors are common among threads.

Inter-Process Communication(IPC):

IPC among threads is cheaper, as all the threads share the same address space, that is, they share common memory.

Some of the disadvantages associated with threads are:

Robustness:

A thread-based application is less robust than a process-based one: if something goes wrong in even a single thread, such as a memory access violation, it affects and corrupts all the threads, since they share common memory. This scenario leads to abnormal termination of the process.

Security:

Thread-based applications are not as secure as processes, since all the threads share common memory and can read and modify it easily.

Synchronization overhead:

A thread-based application has synchronization overhead, since all the threads share common memory and can read and modify it concurrently. Hence an explicit synchronization mechanism such as a mutex or a semaphore is needed.

Debugging:

Debugging a multithreaded program is more difficult than debugging a single-threaded process.

Q.37) Difference between a process and a thread

A few of the common differences between processes and threads are:

Process | Thread
A process is an active entity, that is, any program under execution. | A thread is a concurrent or parallel unit of execution within a process; a process can have multiple threads.
A process is heavier than a thread. | A thread is lighter than a process.
Processes are more robust than threads. | A thread is less robust than a process.
Synchronization overhead is less. | Synchronization overhead is more.
Processes are more secure. | Threads are less secure, as all the threads share the same address space and can read each other's data.

Q.38) What is thread join?

It is a synchronization method where the calling thread blocks until the thread being joined completes. This ensures that the calling (main) thread does not terminate before the child thread.

For details please refer to the thread-related API pthread_join().
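A minimal sketch of thread join, shown here with Python's threading module (the same idea as pthread_join()):

```python
import threading

results = []

def child():
    # Simulate some work done by the child thread
    results.append("child done")

t = threading.Thread(target=child)
t.start()
t.join()                      # calling thread blocks until the child finishes
results.append("main done")   # guaranteed to run only after the child
```

Without the join(), "main done" could be appended before the child runs; join() enforces the ordering.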

Q.39) What is process synchronization?

  • Process or thread synchronization is a mechanism that deals with synchronizing processes or threads, that is, controlling the execution of one process or thread by another.
  • It controls the execution of processes or threads in such a way that no two of them access the same shared data or resources concurrently, ensuring that consistent results are produced.
  • Threads should be synchronized to avoid conflicts over critical resources.
  • It is a solution to the critical section problem, or race condition problem.

Q.40) What are common synchronization methods?

A few of the common synchronization methods are:

  • Mutex
  • Semaphore
  • Spin locks
  • Condition variables

Need for synchronization:

  • When multiple processes or threads execute concurrently, sharing some system resources.
  • To avoid any loss of data or inconsistent results.

Q.41) What is critical section?

  • A critical section is a segment of code that accesses shared variables or resources and therefore needs to be synchronized among multiple processes or threads.
  • If it is accessed simultaneously by more than one process or thread, data loss or inconsistencies may result.
  • In simple terms, a critical section contains an instruction or group of instructions that must be executed atomically.
  • Thus, to avoid such inconsistencies or loss of data, processes or threads must be synchronized with each other, and only one process or thread can enter the critical section at a time.

For more details, please refer Critical section.

Q.42) What is critical section problem?

The critical section problem is to ensure that only one process can be inside its critical section at a time, which requires synchronization among the processes.


Data loss and inconsistencies:

Consider a scenario where more than one process is allowed to enter the critical section, that is, allowed to access the shared variables or resources.

  • One process reads a shared variable and modifies its value (in register R1), but before the value is written back, another process reads the same variable.
  • The later process therefore does not see the update; it modifies the stale value and writes it to memory.
  • Now the first process writes the value from its register (R1) to memory, and the update made by the second process is lost. Thus data is lost or inconsistent if synchronization is not maintained among the processes.
P1                                     P2
{                                      {
    x++;                                   x--;
}                                      }

P1                                     P2
{                                      {
  a) Move x to R1                        a) Move x to R2
  b) Increment R1                        b) Decrement R2
  c) Copy R1 to x                        c) Copy R2 to x
}                                      }

Now, as per the above example, consider the value of x to be 5:

case 1:
Order : p1(a) -> p1(b)->p2(a)->p2(b)->p2(c)->p1(c)
Value : 6

case 2:
Order : p1(a)->p1(b)->p1(c)->p2(a)->p2(b)->p2(c)
Value : 5

Hence we can see that the value of x is inconsistent: it depends upon the order of execution. To overcome this, both processes must be synchronized.
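The two orders above can be replayed deterministically by simulating the "registers" as local variables; this is a sketch of the interleavings, not a real concurrent run:

```python
def interleave_case1(x):
    """Replay order p1(a) -> p1(b) -> p2(a) -> p2(b) -> p2(c) -> p1(c)."""
    r1 = x          # p1(a): move x to R1
    r1 += 1         # p1(b): increment R1
    r2 = x          # p2(a): move x to R2 (still the stale value)
    r2 -= 1         # p2(b): decrement R2
    x = r2          # p2(c): copy R2 to x
    x = r1          # p1(c): copy R1 to x -> p2's update is lost
    return x

def interleave_case2(x):
    """Replay order p1 fully, then p2 fully (serialized)."""
    r1 = x; r1 += 1; x = r1     # p1(a), p1(b), p1(c)
    r2 = x; r2 -= 1; x = r2     # p2(a), p2(b), p2(c)
    return x

assert interleave_case1(5) == 6   # lost update: inconsistent result
assert interleave_case2(5) == 5   # serialized execution: correct result
```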

Q.43) What are different solution for critical section problem?

The solution to Critical section can broadly be classified as:

  • Mutex Lock and
  • Semaphore

Q.44) What is race condition?

A race condition is an undesirable situation that may occur around a critical section, that is, when common variables or resources are shared among processes or threads, and:

  • The outcome of execution depends upon the order of execution, and
  • Processes or threads compete (race) with each other to access the shared variables or resources in the CS.
  • It leads to data loss or inconsistencies.

A race condition is avoided if the critical section is atomic, that is, only one process or thread can execute in its critical section at a time.

The general structure of a process with a critical section is:

  Entry Section
  Critical Section
  Exit Section
  Remainder Section

Data Loss and Inconsistencies:

The x++/x-- example under Q.42 above illustrates this: depending on how the register-level steps of the two processes interleave, the final value of x differs (6 when the updates overlap, 5 when either process runs to completion before the other starts), so the result is inconsistent unless the processes are synchronized.

Q.45) What is mutex?

A mutex is a locking mechanism used to synchronize access to a resource. At a time, only one thread or process can acquire the lock and enter the critical section, and only the process or thread that holds the lock can release it. It ensures mutual exclusion (ME), that is, only one process or thread can be in the CS at a time; the other processes or threads have to wait.
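A minimal sketch of mutex-based mutual exclusion, shown with Python's threading.Lock (conceptually equivalent to a pthread mutex); the iteration count is arbitrary:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:            # only one thread inside the critical section
            counter += 1      # read-modify-write is now atomic w.r.t. peers

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 20000       # no lost updates while the lock is held
```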

Q.45-A) Can a mutex be used only for threads within a single process, or across multiple processes?

A mutex can be used for both, that is, for synchronization among multiple threads within a process and also among processes. The attribute values PTHREAD_PROCESS_SHARED and PTHREAD_PROCESS_PRIVATE govern this. By default it is private, that is, usable among the multiple threads within the same process.

Q.45-B) What happens when a process with a mutex creates a child process?

The child process gets a private copy of the mutex, which will be used among the threads of the child process.

For more details on the API, please refer: Mutex

Q.46) What is semaphore?

  • A semaphore is a signaling mechanism in which one thread or process notifies another thread or process about a certain event.
  • By virtue of this event notification, synchronization is achieved.
  • A semaphore is a variable or object that represents the number of resource instances available at that point, or the number of threads or processes that can access the same instances of a resource at a time.
  • This object is manipulated by two operations, wait and signal, using the routines sem_wait() and sem_post() respectively.
  • wait decrements the value of the semaphore and blocks if the value goes negative.
  • A positive value indicates the number of threads that can still access the resource instances; a negative value indicates the number of threads in the waiting state.
  • signal increments the value of the semaphore and wakes one of the waiting threads (if any).
  • There are two types of semaphore: binary and counting semaphores.

Q.47) What are two operations performed on semaphore?

On a semaphore variable, two operations can be performed: one decrements the value and the other increments it. The decrement is performed by wait() and the increment by signal().

The wait() function decrements the value of the semaphore variable "S" by one if the value is positive. If the value of the semaphore variable is 0, the caller spins until it becomes positive.

wait(S) {
    while (S == 0); // note the ";": the loop body is empty (busy-wait)
    S--;
}

The signal() function increments the value of the semaphore variable by one.

signal(S) {
    S++;
}
Q.48) What are two different types of semaphore?

There are two different types of semaphore:

  • Binary Semaphore and
  • Counting Semaphore

Binary Semaphore:

  • Here the semaphore value can be either 0 or 1.
  • Initially the value of the semaphore variable is set to 1; if some process wants to use a resource, the wait() function is called and the value of the semaphore changes from 1 to 0.
  • The process then uses the resource, and when it releases the resource, the signal() function is called and the value of the semaphore variable is incremented back to 1.
  • If at a particular instant the value of the semaphore variable is 0 and some other process wants to use the same resource, it has to wait for the resource to be released by the previous process. In this way, process synchronization can be achieved.

Counting Semaphore:

  • In a counting semaphore the value of the variable can be any non-negative integer, unlike a binary semaphore where the value is either 0 or 1.
  • The semaphore variable is first initialized with the number of resource instances available.
  • After that, whenever a process needs a resource, the wait() function is called and the value of the semaphore variable is decreased by one.
  • The process then uses the resource, and after using it, the signal() function is called and the value of the semaphore variable is increased by one. When the value of the semaphore variable reaches 0, all the resources are taken and there is no resource left to be used.
  • If some other process then wants a resource, it has to wait for its turn. In this way, we achieve process synchronization.
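The behaviour above can be sketched with a counting semaphore initialized to the number of resource instances; the worker count, hold time, and "printer" framing are illustrative:

```python
import threading
import time

MAX = 3                          # e.g. three interchangeable printers
sem = threading.Semaphore(MAX)
active = 0
peak = 0
meter = threading.Lock()         # protects the active/peak counters

def use_printer():
    global active, peak
    with sem:                    # wait(): claims one of the MAX slots
        with meter:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)         # hold the resource briefly
        with meter:
            active -= 1
                                 # leaving "with sem" performs signal()

threads = [threading.Thread(target=use_printer) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert peak <= MAX               # never more than MAX users at once
assert active == 0               # every acquire was matched by a release
```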

Q.49) What is difference between binary semaphore and counting semaphore?

The basic differences between binary and counting semaphores are:

Binary Semaphore | Counting Semaphore
It can have either 0 or 1 as its value. | It can attain any non-negative value.
It is preferred when only one resource has to be shared among multiple threads, one at a time. | It is preferred when more than one instance of a resource has to be shared among multiple processes/threads.
It allows only one thread or process to access the resource at a time. | For example, if there are only three printers, at most three processes or threads should print at a time, so the semaphore's initial value can be 3.

Q.49-A) Can semaphore be used among threads of a single process or can be used among processes?

A semaphore can be used for both, that is, for synchronization among multiple threads within a process and also among processes. The second argument (pshared) used during initialization governs this: if the value is non-zero, it can be used among processes; if 0, it can be shared only among the threads of a single process.

Q.50) When to use Mutex?

A mutex is preferred for mutual exclusion, that is, when only one process/task should access the critical resource at a time. Simultaneous access may lead to data loss/inconsistencies.

Example: two processes access a file at the same time, one trying to read while the other is trying to write, which may lead to data inconsistency or loss. A mutex helps us avoid this.

It can be used for synchronizing access to configuration files, log files, counters, or a database, where only one process/thread should access the resource at a time.

Q.51) When to use semaphore?

  • A semaphore is a signaling mechanism and should be preferred when one thread sleeps until some other thread signals it to wake up.
  • It is a kind of event-driven mechanism: once the event occurs, the signal is generated.
  • The thread that decremented the semaphore does not take ownership; it can be signaled by another thread to unblock. Hence a semaphore should not be used for mutual exclusion.
  • This is useful in producer-consumer problems, where each side can signal the other once the buffer is full or empty respectively.
  • It is also useful for a DB thread pool or DB connection pool: there is a limit to the number of connections that can be made to a database at a time, so a semaphore can be initialized to that number, ensuring that only that many threads connect to the database at once.

Thus the correct use of a semaphore is for signaling from one task to another. A good example is a message queue, where the reader process blocks until the sender puts a message in the queue, or the producer holds itself until the consumer signals that a buffer slot is empty.

Another good use case is a parent and child process where the parent should wait until the child finishes; a semaphore can be used to notify the parent once the child process finishes.

In both cases one task is governed by the other; this is the soul of synchronization.

Q.52) Difference between semaphore and mutex?

A few of the major differences between mutex and semaphore are:

Mutex | Semaphore
A mutex is a locking mechanism. | A semaphore is a signaling mechanism.
A mutex is used for solving critical section problems. | A semaphore is used for event notification, that is, one thread can notify another about a certain event.
Only the thread that acquired the lock can release it. | A semaphore can be released (signaled) by other threads.
A mutex allows multiple threads to access a single resource, but one at a time. | A semaphore allows multiple threads to access a finite number of instances of a resource.

Q.53) Can semaphore(more specific Binary semaphore) be used for mutual exclusion?

  • In the case of a semaphore, the thread does not take ownership: a semaphore variable decremented by one thread can be signaled (incremented) by another thread.
  • This is not the case with a mutex: the thread that locked the mutex becomes its owner and remains so until that same thread unlocks it.
  • Because of this, a semaphore cannot ensure mutual exclusion.

sem_t sem_var = 1;   // binary semaphore guarding the critical section (CS)

  • Thread T1 calls wait(), decrements the semaphore variable to 0, and enters the CS.
  • Now imagine thread T2 calls signal() on the same variable and increments it back to 1.
  • While T1 is still in the CS, another thread (T3) can call wait(), decrement the value to 0, and also enter the CS.
  • Hence there can be two threads in the critical section (CS) at a time.
  • Thus a semaphore cannot be used to ensure mutual exclusion.
  • This is referred to as premature release and is one of the drawbacks of using a semaphore.
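The premature-release scenario can be reproduced with Python's threading.Semaphore, which, like a classic semaphore, has no notion of ownership. For determinism the T1/T2/T3 steps are replayed sequentially in one thread:

```python
import threading

sem = threading.Semaphore(1)            # binary semaphore guarding the CS

t1_entered = sem.acquire(blocking=False)  # T1: wait() succeeds, enters CS
sem.release()                             # T2: signal() without ever acquiring
                                          #     -- allowed, no ownership check
t3_entered = sem.acquire(blocking=False)  # T3: wait() also succeeds

# Both T1 and T3 are now "inside" the critical section at once:
assert t1_entered and t3_entered
```

A pthread mutex of the error-checking variety would instead reject the unlock from a non-owner, which is why a mutex can guarantee mutual exclusion and a semaphore cannot.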

Q.53-A) What is the disadvantage associated with a mutex?

The problem with a mutex is that putting a thread to sleep when it tries to acquire a resource held by another thread, and waking it up again, are both rather expensive operations: they need quite a lot of CPU instructions and thus take time. If the mutex was locked for only a very short time, the time spent putting a thread to sleep and waking it up again may far exceed the time the thread actually slept, and may even exceed the time the thread would have wasted by constantly polling on a spinlock.

Q.54) what is difference between busy wait (spinning) and blocking wait?

Busy waiting (spinning) is a technique in which a process continuously checks for a particular condition to become true, such as a lock becoming available.

Spin locks are useful when the critical section is small; otherwise they waste CPU cycles. That is the reason spin locks are not very popular.

In one respect this is better than a blocking wait: the process does not go into the blocked state (the wait queue), so there is no context switching.

Q.55) When to use spinlock instead of mutex?

Mutex and Spinlock are two types of kernel lock.

Spinlock is preferred when:

  • We have a small critical section.
  • On a multi-core system where locks are held for only a short time: a mutex sleeps via a context switch (saving registers/state of the locking thread and restoring those of another thread). The time this takes, plus the cache cost of doing it and of waking threads up again, can be significant and hurt performance.
  • There is something called a hybrid mutex, which behaves as a spin lock for a short while and, if it still fails to acquire the lock, falls back to a normal mutex, that is, the thread is put to sleep.

Spinlocks inside the kernel:

  • An interrupt handler within the OS kernel must never sleep; if it does, the system will freeze/crash. If we need to perform some operation inside an interrupt service routine (ISR), such as inserting a node into a globally shared linked list, we acquire a spinlock, insert the node, and release the spinlock.
  • They are also used to protect access to hardware registers.

Q.56) what will happen if a non-recursive mutex is locked more than once?

It will lead to deadlock. If a thread that has already locked a mutex tries to lock it again, it will enter the mutex's waiting list, which results in deadlock, because no other thread can unlock the mutex.
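A sketch of the self-deadlock using Python's non-recursive Lock; a timed second acquire stands in for the blocking call that would otherwise hang forever, and the recursive RLock is shown for contrast:

```python
import threading

lock = threading.Lock()                 # non-recursive lock
lock.acquire()
# A second acquire by the same thread would block forever (self-deadlock);
# a timed attempt shows it can never succeed:
second_try = lock.acquire(timeout=0.1)
lock.release()

rlock = threading.RLock()               # recursive: the owner may re-lock
rlock.acquire()
reacquired = rlock.acquire(blocking=False)
rlock.release()
rlock.release()                         # one release per acquire

assert not second_try                   # non-recursive re-lock never succeeds
assert reacquired                       # recursive re-lock by the owner works
```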

Q.57) What is condition variable?

  • A condition variable is a synchronization mechanism that enables a thread to wait until a particular condition holds.
  • For example, a parent thread blocks itself until the child thread completes execution. Without it, the parent would spin, looping until the child finishes, wasting CPU cycles.
  • A condition variable lets the parent sleep, rather than spin, until the condition (the child finishing) becomes true.
  • When a thread goes to sleep on a condition variable, it releases the associated lock, and it re-acquires the lock once the condition becomes true before continuing.
  • It has two operations: wait() and signal().

For details, please refer Condition variable post
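The parent/child example can be sketched with Python's threading.Condition; note the wait loop guarding against spurious wakeups, and that wait() releases the lock while sleeping:

```python
import threading

cond = threading.Condition()
done = False

def child():
    global done
    with cond:                   # child acquires the lock the parent released
        done = True
        cond.notify()            # signal(): wake the waiting parent

t = threading.Thread(target=child)
with cond:
    t.start()
    while not done:              # loop guards against spurious wakeups
        cond.wait()              # releases the lock, sleeps, re-acquires

assert done                      # parent proceeds only after the child's signal
t.join()
```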

Q.58) What is scheduler?

  • CPU scheduling is the process of selecting the next process to be allocated to the CPU whenever the CPU becomes idle, because the running process is put on hold (waiting state) due to the unavailability of a resource such as I/O, or because it terminates.
  • The component of the kernel responsible for CPU scheduling is known as the scheduler.
  • The scheduler, also known as the process scheduler, can be seen as the program that divides the finite resource of processor time between the different runnable processes on the system.

Q.59) What are different types of schedulers?

There are three types of schedulers:

  • Long term schedulers
  • Mid term schedulers
  • Short term schedulers

Long Term Schedulers:

  • This scheduler is responsible for moving jobs from the New state to the Ready state, that is, from the job queue on secondary memory to the ready queue in main memory.
  • It is also known as the job scheduler.
  • It determines the degree of multiprogramming, that is, how many jobs are in the ready state/main memory at once.

Mid Term Schedulers:

  • Medium-term scheduling is part of swapping.
  • If a process makes an I/O request, it is moved to the wait state; being blocked, it cannot make progress, so it is good to move such a process from main memory to secondary memory to make room for another process (if required).
  • This moving of a process from main memory to secondary memory is called swapping, and the mid-term scheduler is responsible for it.
  • Thus it reduces the degree of multiprogramming.

Short Term Schedulers:

  • This scheduler is responsible for moving jobs from the Ready state to the Running state.
  • It is also known as the process scheduler or CPU scheduler, as it hands the processor to a process for execution.
  • The short-term scheduler executes most frequently and makes the fine-grained decision of which process to execute next.
  • In many operating systems, the short-term scheduler just picks a job from the ready queue, while a separate dispatcher is responsible for moving it to the running state, that is, onto the CPU.

For more details, please refer to the CPU scheduling post.

Q.60) What is preemptive and non-preemptive scheduling?

Preemptive Scheduling:

  • Here a running process can be forced to yield the CPU in the middle of its execution.
  • Timer and other interrupts may be the reason for yielding the CPU halfway.
  • In this scheduling method, tasks are usually assigned priorities, and a higher-priority task is made to execute by taking the CPU away from a lower-priority task.
  • Some algorithms based on preemptive scheduling are Round Robin (RR), Shortest Remaining Time First (SRTF), and the preemptive version of Priority Scheduling.

Non-Preemptive Scheduling:

  • Here a process runs until it voluntarily yields the CPU in one of the following scenarios:
    • The process blocks on an event (I/O or synchronization)
    • The process yields
    • The process terminates
  • Some algorithms based on non-preemptive scheduling are Shortest Job First (SJF, basically non-preemptive) and the non-preemptive version of Priority Scheduling.
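A tiny simulation of preemptive Round Robin scheduling can make the preemption visible; the process names, burst times, and quantum below are made up for illustration:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the completion order of {pid: burst_time} under RR scheduling."""
    ready = deque(bursts)            # FIFO ready queue
    remaining = dict(bursts)
    order = []
    while ready:
        pid = ready.popleft()
        remaining[pid] -= min(quantum, remaining[pid])  # run one time slice
        if remaining[pid] == 0:
            order.append(pid)        # process terminates
        else:
            ready.append(pid)        # preempted: back of the ready queue
    return order

# C finishes first (shortest burst), then B, then A:
assert round_robin({"A": 5, "B": 3, "C": 1}, quantum=2) == ["C", "B", "A"]
```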

For details please refer preemptive and non-preemptive scheduling post

Q.61) What is priority inversion problem?

It is a scenario in which a high-priority task is indirectly preempted by a lower-priority task, in a sense inverting the priorities of the tasks involved and violating the priority-based ordering of execution. This is called priority inversion and usually occurs when resource sharing is involved.

For more details, please refer priority inversion posts

Q.62) What is the solution of priority inversion problem?

There are two solutions:

  • Priority inheritance
  • Priority ceiling

Priority Inheritance:

  • This is a solution to priority inversion in which the lower-priority task initially continues running in the CS with its existing priority.
  • Once the high-priority task requests the CPU, the lower-priority task's priority is raised to the highest among all the tasks waiting for the same critical resource.
  • Since the priority of the lower-priority task is now the highest, the medium-priority task cannot preempt it. This avoids the priority inversion problem.
  • When the task that was given the highest priority finishes its job and releases the critical resource, it gets back its original priority value.

Priority Ceiling:

  • Here the mutex is assigned a predefined priority ceiling, which must be the highest priority of all the tasks that can access the resource.
  • When a task acquires the lock (mutex), it is given that ceiling priority.
  • Hence, as long as the low-priority task owns the mutex, neither the mid-priority task nor any other task that wants the mutex can preempt it.
  • Once the task is done, it releases the lock and regains its old priority.

Every time a shared resource is acquired, the acquiring task must be hoisted to the resource’s priority ceiling. Conversely, every time a shared resource is released, the hoisted task’s priority must be lowered to its original level. All this extra code takes time.

Q.63) what is deadlock?

Deadlock is a special unwanted scenario in which none of the processes in the system is in the running state: all the processes are in the waiting state, each waiting for some resource that is held by another process in the system.

Q.64) Explain the necessary condition for Deadlock?

There are four necessary conditions for a deadlock described below:

  • Mutual Exclusion:

Only one process or thread can use a resource (enter the CS) at a time; at least one resource must be non-sharable.

  • Hold and wait:

A process is holding at least one resource and waiting for additional resources held by other processes.

  • No Preemption:

A process that has acquired resources cannot be forced to release them; it releases them only voluntarily.

  • Circular wait:

A set of waiting processes {P0, P1, P2, P3, ...} must exist such that P0 is waiting for a resource held by P1, P1 needs a resource held by P2, and so on, with the last process waiting for a resource held by P0.

Q.65) Explain the controlling mechanisms for deadlock?

Methods for Handling Deadlock:
Methods that are used in order to handle the problem of deadlocks are as follows:

  • Ignoring the Deadlock
  • Deadlock Prevention
  • Deadlock Avoidance
  • Deadlock detection and recovery

Ignoring the Deadlock:

According to this method, it is assumed that deadlock will never occur, and hence no special provision is made for handling it. This can be acceptable for operating systems used only for browsing and other ordinary tasks.

Deadlock Prevention:

There are four essential conditions for the deadlock to occur:

  • Mutual Exclusion
  • Hold and wait
  • No preemption
  • Circular wait

Thus, to prevent deadlock, if we can violate any one of the above four conditions and prevent them from occurring together, the deadlock can be prevented.

Mutual Exclusion:

This condition cannot generally be violated: allowing shared access to a critical resource leads to data inconsistencies, so mutual exclusion must be kept (enforced via semaphore/mutex).

Hold and wait:

This condition can be violated by requiring a process to acquire all the resources it will need before it begins, so that it never holds some resources while waiting for others.

This is inefficient, however, because not all the resources may be needed at the same time; other processes then wait for resources that sit idle, wasting CPU cycles. Hence this is not a feasible solution either.

No preemption:

The third necessary condition for deadlock is that there is no preemption of resources that have already been allocated. To ensure that this condition does not hold, the following rules can be used:

  • If a process that is holding some resources requests another resource that cannot be allocated to it, it must release all the resources it currently holds.
  • When a process requests resources: if they are available, allocate them. If a requested resource is held by a process that is itself waiting for another resource, the operating system preempts it from that waiting process and allocates it to the requester. If the resource is actively in use, the requesting process must wait.
  • Even this solution is not practically feasible.

Circular Wait:

Circular wait can be prevented by imposing a total ordering on all resource types and requiring every process to request resources in increasing order of that numbering.

Example: a process holding a resource numbered 5 may only request resources with higher numbers; it cannot go back and request a lower-numbered resource while still holding the higher-numbered one.

This ensures that no cycle can form in the resource-allocation graph.
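The ordering idea can be sketched with two locks that every worker acquires in the same global order; ordering by object id here is an arbitrary but consistent choice standing in for resource numbering:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def acquire_in_order(*locks):
    """Acquire locks in a global total order (here: by id) to break circular wait."""
    ordered = sorted(locks, key=id)
    for lk in ordered:
        lk.acquire()
    return ordered

def release_all(locks):
    for lk in reversed(locks):
        lk.release()

# Both workers request the same pair of locks. Because both follow the same
# order, neither can hold one lock while waiting for the other held in reverse,
# so the circular-wait condition can never arise.
def worker():
    held = acquire_in_order(lock_a, lock_b)
    release_all(held)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert not lock_a.locked() and not lock_b.locked()
```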

Deadlock Avoidance:

The deadlock Avoidance method is used by the operating system in order to check whether the system is in a safe state or in an unsafe state. The request for any resource will be granted if the resulting state of the system does not cause any deadlock in the system.

According to the simplest and most useful approach, each process declares in advance the maximum number of resources of each type it will need. Deadlock-avoidance algorithms then examine resource allocations to ensure that a circular-wait condition can never occur.

There are two algorithms for deadlock avoidance:

  • Resource-Allocation Graph Algorithm
  • Banker's Algorithm

Safe State and Unsafe State:

  • A system is said to be in a safe state if the system can allocate resources to each process(up to its maximum requirement) in some order without leading to deadlock.
  • The sequence in which the processes can be executed without leading to deadlock is known as a safe sequence. Thus, formally, a system is in a safe state if there exists a safe sequence.
  • In an unsafe state, the operating system cannot prevent processes from requesting resources in such a way that a deadlock occurs.
  • Not every unsafe state is a deadlock; rather, an unsafe state may lead to deadlock.

Thus a safe state is not a deadlocked state and conversely a deadlocked state is an unsafe state.
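The safety check at the heart of the Banker's Algorithm can be sketched as follows (a minimal sketch; the function name is our own, and the matrices in the usage example are the classic five-process, three-resource textbook case):

```python
def is_safe(available, max_need, allocation):
    """Banker's safety check: return a safe sequence if one exists, else None."""
    n, m = len(max_need), len(available)   # processes, resource types
    work = list(available)                 # resources currently free
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    finished = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Process i can run to completion, then releases its allocation.
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

# Classic example: Available = [3,3,2], with Max and Allocation per process.
seq = is_safe([3, 3, 2],
              [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
              [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]])
```

A request is granted only if the state that would result still passes this check; otherwise the requesting process waits even though the resources are physically available.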

Deadlock detection and recovery:

With this method, a deadlock is first detected using an algorithm based on the resource-allocation graph. After the deadlock is detected, there is some mechanism to recover from it: resource preemption or process termination.

There are three basic approaches to recover from deadlock:

  • Inform the system operator and let them intervene manually
  • Terminate one or more processes involved in the deadlock
  • Resource Preemption
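When every resource has a single instance, detection reduces to finding a cycle in the wait-for graph. A minimal sketch (the graph representation, a dict mapping each blocked process to the process it waits on, is our own):

```python
def has_deadlock(wait_for):
    """Cycle check on a wait-for graph: {process: process it waits on}."""
    # With single-instance resources, a cycle in the wait-for graph
    # is exactly a deadlock.
    def walk_hits_cycle(start):
        seen = set()
        p = start
        while p in wait_for:       # follow the chain of waits
            if p in seen:
                return True        # revisited a node: cycle found
            seen.add(p)
            p = wait_for[p]
        return False               # chain ended at a running process

    return any(walk_hits_cycle(p) for p in wait_for)
```

For example, {"P1": "P2", "P2": "P1"} is a deadlock, while {"P1": "P2"} is merely a wait. With multiple instances per resource type, a cycle is necessary but not sufficient, and a matrix-based detection algorithm is used instead.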

Process termination can either be:

  • Kill one by one:

It is simple to implement, but after each process termination the deadlock-detection algorithm must be run again to check whether the deadlock still exists.

  • Kill all:

The deadlock-detection algorithm needs to run only once, but processes that were near completion are terminated as well.

Resource preemption:

Here some resources are preempted from processes and given to other processes until the deadlock is eliminated.

Three issues must be addressed:

  • Selecting a victim: choosing which process (and which resources) to preempt is a major concern
  • Rollback: the preempted process must be rolled back to a safe state and restarted from there
  • Starvation: it must be guaranteed that resources are not always preempted from the same process

Q.65) What is difference between starvation and deadlock?

Here are some important differences between deadlock and starvation:

  • Deadlock occurs when none of the involved processes gets executed. Starvation occurs when low-priority processes remain blocked while high-priority processes execute.
  • Deadlock is an infinite wait. Starvation is a long wait, but not an infinite one.
  • Every deadlock implies starvation, but starvation does not necessarily imply a deadlock.
  • Deadlock happens when mutual exclusion, hold and wait, no preemption, and circular wait occur simultaneously. Starvation happens due to uncontrolled priority and resource management.
  • Deadlock can be prevented by avoiding its necessary conditions. Starvation can be prevented by aging.

Q.13) Why is thread faster than a process?

Whenever a process waits for a resource, the CPU takes it out of the critical section and schedules another process. Before deallocating the first process it stores that process's context (how far it had executed) in registers. When the deallocated process later gets the resource it was waiting for, it cannot re-enter the critical section directly; it must go through the scheduling algorithm again and wait for its turn.

In a thread-based application, by contrast, only the thread that needs the resource goes out, while its co-threads (of the same process) remain in the critical section. The application as a whole stays with the CPU, so the returning thread does not have to wait outside. Hence a thread-based application is faster than a process-based one.

Note that this is not a competition between a thread and a process; it is between an application that is thread based and one that is process based.
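The key point, that co-threads share one address space while separate processes do not, can be seen in a short sketch:

```python
import threading

counter = 0                      # one variable, visible to every thread
lock = threading.Lock()

def work():
    global counter
    for _ in range(1000):
        with lock:               # mutex serializes the shared update
            counter += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All four threads updated the same memory location directly, with no
# inter-process communication needed.
```

Four separate processes doing the same work would each get a private copy of the variable and would need pipes, shared memory, or another IPC mechanism to combine results.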


Ways to get multiple execution units without threads:

1) Multitasking without threads: fork() and execl()

2) Multiple execution units within one program but no threads: fork() and execl(), where each child performs a different task

3) Sharing data space between execution units but no threads: shared-memory system calls; fork() alone shares the code space but not the data
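A minimal POSIX-only sketch of item 1, fork() followed by an exec, using Python's os module (runs on Unix-like systems only):

```python
import os

pid = os.fork()                  # POSIX-only: duplicate this process
if pid == 0:
    # Child: replace the process image, like execl() in C. `true` is a
    # standard utility that simply exits with status 0.
    os.execlp("true", "true")
else:
    # Parent: wait for the child and collect its exit status.
    _, status = os.waitpid(pid, 0)
```

After fork() the parent and child run independently; after the exec the child shares nothing with the parent except the open file descriptors it inherited, which is what distinguishes this model from threads.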

