Q1. Explain the two models of interprocess communication. What are the strengths and weaknesses of the two approaches?
Interprocess communication is the mechanism provided by the operating system that allows processes to communicate with each other. This communication could involve a process letting another process know that some event has occurred, or transferring data from one process to another.
The models of interprocess communication are as follows:
1. Shared Memory Model
Shared memory is memory that can be simultaneously accessed by multiple processes. This is done so that the processes can communicate with each other. All POSIX systems, as well as Windows operating systems, use shared memory.
Advantage of Shared Memory Model
Communication is faster in the shared memory model than in the message passing model on the same machine.
Disadvantages of Shared Memory Model
Some of the disadvantages of the shared memory model are as follows:
- All the processes that use the shared memory model need to make sure that they are not writing to the same memory location at the same time.
- The shared memory model may create problems such as synchronization and memory protection that need to be addressed.
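The shared memory model can be sketched in Python with the multiprocessing module. This is a minimal illustration, not part of the original text: the worker function, counts, and variable names are assumptions, and it assumes a POSIX system where the `fork` start method is available. Two processes update the same shared integer, with a lock guarding against the concurrent-write problem just described:

```python
from multiprocessing import get_context

def worker(shared, lock):
    # Both processes see the same memory location; the lock prevents
    # two processes from writing to it at the same time.
    for _ in range(1000):
        with lock:
            shared.value += 1

def demo():
    ctx = get_context("fork")          # assumes a POSIX system with fork()
    shared = ctx.Value("i", 0)         # an int placed in shared memory
    lock = ctx.Lock()
    procs = [ctx.Process(target=worker, args=(shared, lock)) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return shared.value                # both processes updated the same int

if __name__ == "__main__":
    print(demo())                      # 2000
```

Without the lock, increments from the two processes could interleave and be lost, which is exactly the concurrent-write hazard listed above.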
2. Message Passing Model
Multiple processes can read and write data to the message queue without being connected to each other. Messages are stored on the queue until their recipient retrieves them. Message queues are quite useful for interprocess communication and are used by most operating systems.
Advantage of Message Passing Model
The message passing model is much easier to implement than the shared memory model.
Disadvantage of Message Passing Model
The message passing model has slower communication than the shared memory model because the connection setup takes time.
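The message passing model can likewise be sketched with a Python multiprocessing queue. The producer function, the message values, and the `None` sentinel are illustrative assumptions; the sketch assumes a POSIX system with the `fork` start method. The two processes never touch shared variables; they only exchange messages through the queue:

```python
from multiprocessing import get_context

def producer(q):
    # The producer writes messages without being connected to the consumer;
    # the queue stores them until the recipient retrieves them.
    for i in range(3):
        q.put(i)
    q.put(None)                       # sentinel: no more messages

def demo():
    ctx = get_context("fork")         # assumes a POSIX system with fork()
    q = ctx.Queue()
    p = ctx.Process(target=producer, args=(q,))
    p.start()
    received = []
    while True:
        item = q.get()                # blocks until a message arrives
        if item is None:
            break
        received.append(item)
    p.join()
    return received

if __name__ == "__main__":
    print(demo())                     # [0, 1, 2]
```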
Q2. How do semaphores help in process synchronization? Differentiate between binary and counting semaphores.
How do semaphores help in process synchronization?
Semaphores are integer variables that are used to solve the critical section problem by means of two atomic operations, wait and signal, which are used for process synchronization.
The definitions of wait and signal are as follows:
- Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the process waits (here, by busy-waiting) until S becomes positive.
wait(S)
{
    while (S <= 0)
        ; // busy-wait
    S--;
}
- Signal
The signal operation increments the value of its argument S.
signal(S)
{
    S++;
}
The two common kinds of semaphores are:
- Counting semaphores
- Binary semaphores
Counting Semaphores
This type of semaphore uses a count that allows a resource to be acquired or released numerous times. If the initial count = 0, the counting semaphore is created in the unavailable state. However, if the count is > 0, the semaphore is created in the available state, and the number of tokens it has equals its count.
Binary Semaphores
Binary semaphores are quite similar to counting semaphores, but their value is restricted to 0 and 1. In this type of semaphore, the wait operation works only if semaphore = 1, and the signal operation succeeds when semaphore = 0. Binary semaphores are easier to implement than counting semaphores.
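The difference between the two kinds can be sketched with Python's threading module. This is an illustrative sketch, not from the text: the thread count, sleep time, and counter names are assumptions. A counting semaphore with two tokens limits how many threads hold the resource at once, while a plain lock acts as a binary semaphore protecting the bookkeeping:

```python
import threading
import time

pool = threading.Semaphore(2)        # counting semaphore: 2 tokens
meter = threading.Lock()             # a binary semaphore used as a mutex
in_use = 0
peak = 0

def use_resource():
    global in_use, peak
    pool.acquire()                   # wait(): blocks when both tokens are taken
    with meter:
        in_use += 1
        peak = max(peak, in_use)     # record how many held it simultaneously
    time.sleep(0.01)                 # hold the resource briefly
    with meter:
        in_use -= 1
    pool.release()                   # signal(): returns a token

def demo():
    global in_use, peak
    in_use = peak = 0
    threads = [threading.Thread(target=use_resource) for _ in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return peak                      # never exceeds the token count of 2
```

Five threads contend, but the counting semaphore ensures no more than two are ever inside at the same time.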
Q3. Write short notes on: a) Dekker's Solution b) Peterson's Solution
a) Dekker's Solution
Dekker's algorithm was the first provably-correct solution to the critical section problem. It allows two threads to share a single-use resource without conflict, using only shared memory for communication. It avoids the strict alternation of a naïve turn-taking algorithm, and was one of the first mutual exclusion algorithms to be invented.
Although there are many versions of Dekker's Solution, the final or 5th version is the one that satisfies all of the above conditions and is the most efficient of them all.
Note: Dekker's Solution, as given here, ensures mutual exclusion between two processes only; it could be extended to more than two processes with the proper use of arrays and variables.
Algorithm: It requires both an array of Boolean values and an integer variable:
var flag: array [0..1] of boolean;
    turn: 0..1;
repeat
    flag[i] := true;
    while flag[j] do
        if turn = j then
        begin
            flag[i] := false;
            while turn = j do no-op;
            flag[i] := true;
        end;

    critical section

    turn := j;
    flag[i] := false;

    remainder section
until false;
b) Peterson's Solution
The producer-consumer problem (or bounded buffer problem) describes two processes, the producer and the consumer, which share a common, fixed-size buffer used as a queue. The producer produces an item and puts it into the buffer. If the buffer is already full, the producer has to wait for an empty slot in the buffer. The consumer consumes an item from the buffer. If the buffer is already empty, the consumer has to wait for an item to appear in the buffer. Peterson's algorithm is implemented for the two processes using shared memory such that there is mutual exclusion between them, and the solution is free from synchronization problems.
Peterson's algorithm:
// code for producer (j)
// producer j is ready to produce an item
flag[j] = true;
// but consumer (i) can consume an item
turn = i;
// if consumer is ready to consume an item
// and if it is the consumer's turn
while (flag[i] == true && turn == i) {
    // then producer will wait
}
// otherwise producer will produce
// an item and put it into the buffer (critical section)
// now, producer is out of the critical section
flag[j] = false;
// end of code for producer
//--------------------------------------------------------
// code for consumer (i)
// consumer i is ready to consume an item
flag[i] = true;
// but producer (j) can produce an item
turn = j;
// if producer is ready to produce an item
// and if it is the producer's turn
while (flag[j] == true && turn == j) {
    // then consumer will wait
}
// otherwise consumer will consume
// an item from the buffer (critical section)
// now, consumer is out of the critical section
flag[i] = false;
// end of code for consumer
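Peterson's entry and exit protocol can be sketched as runnable Python. This is a demonstration under stated assumptions: the iteration count and counter are illustrative, and the sketch relies on CPython's interpreter lock giving a sequentially consistent interleaving of the flag and turn accesses; on truly parallel hardware, Peterson's algorithm additionally needs memory barriers.

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so the busy-wait stays short

flag = [False, False]
turn = 0
counter = 0
N = 2000

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True                # i is ready to enter
        turn = j                      # but gives the other process priority
        while flag[j] and turn == j:
            pass                      # busy-wait (entry section)
        counter += 1                  # critical section: shared update
        flag[i] = False               # exit section

def demo():
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter                    # 2 * N when mutual exclusion holds
```

Because only one thread is ever inside the critical section, no increment is lost and the final count is exactly 2 * N.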
ANSWER
The Dining Philosopher Problem
The Dining Philosopher Problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat if he can pick up the two chopsticks adjacent to him. One chopstick may be picked up by either of its adjacent philosophers, but not both.
Semaphore Solution to Dining Philosopher
Each philosopher is represented by the following pseudocode:
process P[i]
while true do
{
    THINK;
    PICKUP(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
    EAT;
    PUTDOWN(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
}
How can deadlock and starvation be minimised?
The challenge in the dining philosophers problem is to design a protocol so that the philosophers do not deadlock (i.e. the system never reaches a state where every philosopher holds exactly one chopstick), and so that no philosopher starves (i.e. when a philosopher is hungry, he/she eventually gets the chopsticks). Additionally, the protocol should try to be as efficient as possible; in other words, it should minimize the time that philosophers spend waiting to eat.
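One standard way to prevent the deadlock is resource ordering. The sketch below is an illustration, not part of the original text: the number of meals and the lock-per-chopstick modelling are assumptions. Each philosopher always picks up the lower-numbered chopstick first, which breaks the circular wait:

```python
import threading

K = 5
chopsticks = [threading.Lock() for _ in range(K)]
meals = [0] * K

def philosopher(i):
    left, right = i, (i + 1) % K
    # Resource ordering: always pick up the lower-numbered chopstick first.
    # This breaks the circular wait, so the philosophers cannot deadlock.
    first, second = (left, right) if left < right else (right, left)
    for _ in range(10):
        with chopsticks[first]:          # PICKUP first chopstick
            with chopsticks[second]:     # PICKUP second chopstick
                meals[i] += 1            # EAT
        # leaving the with-blocks is PUTDOWN

def demo():
    meals[:] = [0] * K
    threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(K)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return meals                         # every philosopher eventually eats
```

With the naive "left chopstick then right chopstick" order, all five philosophers could each grab one chopstick and wait forever; the ordering rule makes that cycle impossible, so every philosopher finishes all ten meals.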
Q5. Explain the critical section. Mention the minimum requirements that should be satisfied by a solution to the critical section problem.
ANSWER
CRITICAL SECTION
The critical section is a code segment where shared variables can be accessed. An atomic action is required in a critical section, i.e. only one process can execute in its critical section at a time. All the other processes have to wait to execute in their critical sections.
Minimum requirements that should be satisfied by a solution to the critical section problem
The critical section problem needs a solution to synchronize the different processes. The solution to the critical section problem must satisfy the following conditions:
- Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section at any time. If any other processes require the critical section, they must wait until it is free.
- Progress
Progress means that if a process is not using the critical section, then it should not stop any other process from accessing it. In other words, any process can enter a critical section if it is free.
- Bounded Waiting
Bounded waiting means that each process must have a limited waiting time. It should not wait endlessly to access the critical section.
Q6. Show how wait() and signal() semaphore operations could be implemented in multiprocessor environments, using the TestAndSet() instruction. The solution should exhibit minimal busy waiting.
ANSWER
To overcome the need for busy waiting, the wait and signal semaphore operations are used. When a process executes the wait operation and finds that the semaphore value is not positive, it must wait. However, rather than busy waiting, the process can block itself. The block operation places a process into a waiting queue associated with the semaphore, and the state of the process is switched to the waiting state. Then, control is transferred to the CPU scheduler, which selects another process to execute.
A process that is blocked, waiting on a semaphore S, should be restarted when some other process executes a signal operation. The process is restarted by a wakeup operation, which changes the process from the waiting state to the ready state. The process is then placed in the ready queue. (The CPU may or may not be switched from the running process to the newly ready process, depending on the CPU-scheduling algorithm.)
Each semaphore has an integer value and a list of processes. When a process must wait on a semaphore, it is added to the list of processes. A signal operation removes one process from the list of waiting processes and awakens that process.
The wait semaphore operation can be defined as
void wait(semaphore S) {
    S.value--;
    if (S.value < 0) {
        add this process to S.L;
        block();
    }
}
The signal semaphore operation can now be defined as
void signal(semaphore S) {
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }
}
The block() operation suspends the process that invokes it. The wakeup(P) operation resumes the execution of a blocked process P. These two operations are provided by the operating system as basic system calls. Although under the classical definition of semaphores with busy waiting the semaphore value is never negative, this implementation may have negative semaphore values. If the semaphore value is negative, its magnitude is the number of processes waiting on that semaphore.
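The pseudocode above can be sketched as runnable Python. The class and names below are assumptions for illustration: a short internal lock stands in for the TestAndSet()-protected update of the semaphore's value and list (the only place where brief busy waiting would occur on a multiprocessor), and a per-waiter Event stands in for block()/wakeup(P):

```python
import threading
from collections import deque

class BlockingSemaphore:
    """Mirrors the wait()/signal() pseudocode above. The short critical
    section on _guard models the TestAndSet()-protected update; an Event
    per waiter models block()/wakeup(P), so there is no long busy wait."""

    def __init__(self, value=0):
        self.value = value
        self.L = deque()                  # S.L: the list of blocked processes
        self._guard = threading.Lock()    # models the atomic TestAndSet guard

    def wait(self):
        with self._guard:
            self.value -= 1
            blocked = None
            if self.value < 0:
                blocked = threading.Event()
                self.L.append(blocked)    # add this process to S.L
        if blocked is not None:
            blocked.wait()                # block(): sleep, no busy waiting

    def signal(self):
        with self._guard:
            self.value += 1
            if self.value <= 0:
                p = self.L.popleft()      # remove a process P from S.L
                p.set()                   # wakeup(P)

def demo():
    s = BlockingSemaphore(0)
    log = []
    def waiter():
        s.wait()                          # value goes negative: thread blocks
        log.append("woke")
    t = threading.Thread(target=waiter)
    t.start()
    s.signal()                            # restarts the blocked thread
    t.join()
    return s.value, log
```

Note how the value can go negative while a thread is queued, matching the remark above that the magnitude of a negative value counts the waiters.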
Q7. Explain in brief the Sleeping Barber Problem with its solution.
ANSWER
Problem: The analogy is based upon a hypothetical barber shop with one barber. The shop has one barber, one barber chair, and n chairs for waiting customers, if there are any, to sit on.
- If there is no customer, then the barber sleeps in his own chair.
- When a customer arrives, he has to wake up the barber.
- If there are many customers and the barber is cutting a customer's hair, then the remaining customers either wait if there are empty chairs in the waiting room, or they leave if no chairs are empty.
Solution: The solution to this problem includes three semaphores. The first is for the customers; it counts the number of customers present in the waiting room (the customer in the barber chair is not included because he is not waiting). The second, the barber (0 or 1), is used to tell whether the barber is idle or working. The third, a mutex, is used to provide the mutual exclusion required for the processes to execute. The solution keeps a record of the number of customers waiting in the waiting room; if the number of customers is equal to the number of chairs in the waiting room, then the upcoming customer leaves the barbershop.
- When the barber shows up in the morning, he executes the barber procedure, causing him to block on the semaphore customers because it is initially 0. The barber then goes to sleep until the first customer arrives.
- When a customer arrives, he executes the customer procedure. The customer acquires the mutex for entering the critical region; if another customer enters thereafter, the second one will not be able to do anything until the first one has released the mutex. The customer then checks the chairs in the waiting room; if the waiting customers are fewer than the number of chairs, he sits down, otherwise he leaves and releases the mutex.
- If a chair is available, the customer sits in the waiting room, increments the waiting variable, and also increments the customers semaphore; this wakes up the barber if he is sleeping.
- At this point, the customer and barber are both awake, and the barber is ready to give that person a haircut. When the haircut is over, the customer exits the procedure, and if there are no customers in the waiting room, the barber sleeps.
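The solution described above can be sketched with Python threads and semaphores. The chair count, customer count, and the closing-time token are assumptions added so the demonstration terminates; they are not part of the classic statement:

```python
import threading

CHAIRS = 3
customers = threading.Semaphore(0)       # counts waiting customers
barber_ready = threading.Semaphore(0)    # barber's "next, please" signal
mutex = threading.Lock()                 # protects the shared counters
waiting = served = turned_away = 0

def barber():
    global waiting, served
    while True:
        customers.acquire()              # sleep until a customer arrives
        with mutex:
            if waiting == 0:             # closing-time token: go home
                return
            waiting -= 1
            served += 1
        barber_ready.release()           # give this customer a haircut

def customer():
    global waiting, turned_away
    with mutex:
        if waiting < CHAIRS:
            waiting += 1                 # take a chair in the waiting room
            customers.release()          # wake the barber if he is asleep
            seated = True
        else:
            turned_away += 1             # no empty chair: leave
            seated = False
    if seated:
        barber_ready.acquire()           # wait for the haircut to finish

def demo(n=10):
    global waiting, served, turned_away
    waiting = served = turned_away = 0
    b = threading.Thread(target=barber)
    b.start()
    cs = [threading.Thread(target=customer) for _ in range(n)]
    for c in cs:
        c.start()
    for c in cs:
        c.join()
    customers.release()                  # closing time: let the barber exit
    b.join()
    return served, turned_away           # served + turned_away == n
```

Every customer is either served or turned away; how many are turned away depends on timing, but with three chairs the first three arrivals always find a seat.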
Q8. Explain in detail the principle of concurrency.
ANSWER
Concurrency is the tendency for things to happen at the same time in a system. It also refers to techniques that make programs more usable. Concurrency can be implemented, and is used a lot, on single processing units; nonetheless it may benefit from multiple processing units with respect to speed. If an operating system is called a multi-tasking operating system, this is a synonym for supporting concurrency. If you can load multiple documents simultaneously in the tabs of your browser and still open menus and perform more actions, this is concurrency. If you run distributed-net computations in the background, that is concurrency.
Concurrency is the interleaving of processes in time to give the appearance of simultaneous execution. Thus it differs from parallelism, which offers genuine simultaneous execution. However, the issues and difficulties raised by the two overlap to a large extent:
- Sharing global resources safely is difficult;
- Optimal allocation of resources is difficult;
- Locating programming errors can be difficult, because the contexts in which errors occur cannot always be reproduced easily.
Parallelism also introduces the issue that different processors may run at different speeds, but again this problem is mirrored in concurrency because different processes progress at different rates.
As an example, consider two processes, P1 and P2, that share a routine which reads a character into a global variable chin and then displays it. Suppose P1 reads the character x into chin but is interrupted before displaying it. P2 enters this code, and runs it to completion, reading and displaying the character y. P1 is resumed, but chin now contains the character y, so P1 displays the wrong character. The essence of the problem is the shared global variable chin: P1 sets chin, but this write is subsequently lost during the execution of P2.
The general solution is to allow only one process at a time to enter the code that accesses chin: such code is often called a critical section. When one process is inside a critical section of code, other processes must be prevented from entering that section. This requirement is known as mutual exclusion.
Mutual Exclusion
Mutual exclusion is in many ways the fundamental issue in concurrency. It is the requirement that when a process P is accessing a shared resource R, no other process should be able to access R until P has finished with R. Examples of such resources include files, I/O devices such as printers, and shared data structures.
There are essentially three approaches to implementing mutual exclusion:
- Leave the responsibility with the processes themselves: this is the basis of most software approaches. These approaches are usually highly error-prone and carry high overheads.
- Allow access to shared resources only through special-purpose machine instructions: i.e. a hardware approach. These approaches are faster but still do not offer a complete solution to the problem, e.g. they cannot guarantee the absence of deadlock and starvation.
- Provide support through the operating system, or through the programming language.
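The lost-update hazard and its cure can be sketched in a few lines of Python. The counter, thread count, and iteration count are illustrative assumptions; the point is that a lock around the critical section ensures a shared write is never lost, as in the chin example:

```python
import threading

counter = 0
lock = threading.Lock()                 # enforces mutual exclusion

def worker(n):
    global counter
    for _ in range(n):
        with lock:                      # entry protocol: acquire the lock
            counter += 1                # critical section: shared variable
        # exit protocol: the lock is released on leaving the with-block

def demo(n=50000):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter                      # no update is lost: exactly 2 * n
```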
Q9. Illustrate the characteristics of mutual exclusion using the centralized approach.
ANSWER
Characteristics of mutual exclusion using the centralized approach:
- In the centralized algorithm, one process is elected as the coordinator; this may be the machine with the highest network address.
- Whenever a process wants to enter a critical region, it sends a request message to the coordinator stating which critical region it wants to enter and asking for permission. If no other process is currently in that critical region, the coordinator sends back a reply granting permission, as shown in Fig (a). When the reply arrives, the requesting process enters the critical region.
- Suppose another process, 2, shown in Fig (b), asks for permission to enter the same critical region. The coordinator knows that a different process is already in the critical region, so it cannot grant permission. The coordinator simply refrains from replying, thus blocking process 2, which waits for a reply; alternatively, it could send a reply saying 'permission denied.'
- When process 1 exits the critical region, it sends a message to the coordinator releasing its exclusive access, as shown in Fig (c).
- The coordinator takes the first item off the queue of deferred requests and sends that process a grant message. If the process was still blocked, it unblocks and enters the critical region.
- If an explicit message denying permission has already been sent, the process will have to poll for incoming traffic or block later. When it sees the grant, it can enter the critical region.
Advantages:
- The algorithm guarantees mutual exclusion by letting one process at a time into each critical region.
- It is also fair, as requests are granted in the order in which they are received.
- No process ever waits forever, so there is no starvation.
- It is easy to implement: it requires only three messages per use of a critical region (request, grant, release).
- It can be used for more general resource allocation rather than just managing critical regions.
Disadvantages:
- The coordinator is a single point of failure; the entire system may go down if it crashes.
- If processes normally block after making a request, they cannot distinguish a dead coordinator from 'permission denied', since no message comes back.
- In a large system, a single coordinator can become a performance bottleneck.
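The request/grant/release protocol can be simulated with Python threads standing in for machines and queues standing in for the network. All names and the instrumentation (`peak`) are assumptions for illustration; the coordinator grants one permission at a time and queues deferred requests, mirroring Figs (a)-(c):

```python
import queue
import threading
import time

in_cr = 0
peak = 0
meter = threading.Lock()                   # only instruments the measurement

def coordinator(inbox, n_processes):
    busy = False
    deferred = []                          # queue of deferred requests
    releases = 0
    while releases < n_processes:
        kind, reply = inbox.get()
        if kind == "request":
            if not busy:
                busy = True
                reply.put("grant")         # Fig (a): permission granted
            else:
                deferred.append(reply)     # Fig (b): defer, no reply yet
        else:                              # "release", Fig (c)
            releases += 1
            if deferred:
                deferred.pop(0).put("grant")  # grant first deferred request
            else:
                busy = False

def process(inbox):
    global in_cr, peak
    reply = queue.Queue()
    inbox.put(("request", reply))          # ask the coordinator
    reply.get()                            # block until the grant arrives
    with meter:
        in_cr += 1
        peak = max(peak, in_cr)            # how many are in the region at once
    time.sleep(0.001)                      # inside the critical region
    with meter:
        in_cr -= 1
    inbox.put(("release", None))           # give up exclusive access

def demo(n=4):
    global in_cr, peak
    in_cr = peak = 0
    inbox = queue.Queue()
    c = threading.Thread(target=coordinator, args=(inbox, n))
    c.start()
    ps = [threading.Thread(target=process, args=(inbox,)) for _ in range(n)]
    for p in ps:
        p.start()
    for p in ps:
        p.join()
    c.join()
    return peak                            # never more than 1 in the region
```

Because grants are issued only after the previous holder's release, at most one simulated process is ever inside the critical region, and deferred requests are served in arrival order.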
Q10. What are the three types of process schedulers? Explain each of them.
A scheduler is a type of system software that handles process scheduling.
There are mainly three types of process schedulers:
1. Long Term
2. Short Term
3. Medium Term
Long Term Scheduler
The long term scheduler is also known as a job scheduler. This scheduler selects processes from the job queue and loads them into memory for execution. It also regulates the degree of multiprogramming. The main goal of this type of scheduler is to offer a balanced mix of jobs, such as processor-bound and I/O-bound jobs, which helps manage multiprogramming.
Medium Term Scheduler
Medium-term scheduling is an important part of swapping. It enables you to handle swapped-out processes. A running process can become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In order to remove the process from memory and make space for other processes, the suspended process should be moved to secondary storage.
Short Term Scheduler
The short term scheduler is also known as the CPU scheduler. The main goal of this scheduler is to boost system performance according to set criteria. It selects from a group of processes that are ready to execute and allocates the CPU to one of them. The dispatcher gives control of the CPU to the process selected by the short term scheduler.
Difference between Schedulers: Long-Term vs. Short-Term vs. Medium-Term

| Long-Term | Short-Term | Medium-Term |
| --- | --- | --- |
| Also known as a job scheduler. | Also known as the CPU scheduler. | Also called the swapping scheduler. |
| It is either absent or minimal in a time-sharing system. | It is insignificant in a time-sharing system. | This scheduler is an element of time-sharing systems. |
| Speed is less compared to the short term scheduler. | Speed is the fastest compared to the long-term and medium-term schedulers. | It offers medium speed. |
| Allows you to select processes from the job pool and load them into memory. | It only selects processes that are in the ready state of execution. | It helps you send processes back to memory. |
| Offers full control. | Offers less control. | Reduces the level of multiprogramming. |