Operating Systems Exam 3 v2

Consider the sequence of code shown at the right, where blue "producer" code to implement counter++ is interspersed with gold "consumer" code to implement counter--. How is it possible that the code could interleave in this way?
Terms in this set (38)
It could interleave this way if the blue code starts running but the process running it is then taken off the CPU by the CPU scheduler, and the process running the gold code is put onto the CPU next. It could also happen in a multiprocessor system if the blue code runs on one processor while the gold code runs on another. In either case, the final result for counter could be incorrect. Treating the code as a critical section would have avoided this problem.
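The interleaving above is easiest to see when counter++ and counter-- are broken into their machine-level steps (load into a register, modify, store back). This is a hypothetical Python simulation of one bad schedule, not the code from the exam figure:

```python
def interleave_demo():
    counter = 5
    # Producer (counter++) loads counter into its register and increments.
    producer_reg = counter          # producer_reg = 5
    producer_reg += 1               # producer_reg = 6
    # Scheduler switch: consumer (counter--) runs its load and decrement.
    consumer_reg = counter          # consumer_reg = 5 (stale value!)
    consumer_reg -= 1               # consumer_reg = 4
    # Producer's store lands first, then consumer's store overwrites it.
    counter = producer_reg          # counter = 6
    counter = consumer_reg          # counter = 4, not the correct 5
    return counter

print(interleave_demo())  # prints 4: the producer's update was lost
```

The consumer's stale load overwrites the producer's update, which is exactly why the counter++ and counter-- code must be treated as a critical section.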
Any solution to the critical-section problem must satisfy the progress condition, which says: if no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely. What does that mean?

It means that when the critical section is free and one or more processes want in, one of them must actually be allowed in - the decision cannot be put off forever, so the waiting processes cannot all be stuck on something that will never occur.
Consider the synchronization Algorithm 1 shown in class. The code for one process, p1, is shown at the right; the code for the other process, p2, is similar.

a). Does this algorithm enforce mutual exclusion? Explain.

b). Does this algorithm avoid deadlock? Explain.

c). What is the primary deficiency with this algorithm?
a). Yes, only one process can enter the critical section at a time - and only when it is its turn.

b). Yes; since they take turns, one process should always be able to enter the critical section.

c). It forces the processes to take turns in a strict alternation, so if one process wants to enter the critical section more frequently than the other, it cannot. It also wastes time busy waiting.
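The slides with Algorithm 1's code are not included in this set, but the strict-alternation behavior described above is conventionally implemented with a single shared turn variable. A minimal Python sketch of the entry and exit logic, under that assumption:

```python
# Shared state for Algorithm 1 (reconstructed, not the slide code).
turn = 0  # which process may enter the critical section next

def can_enter(pid):
    """Entry test: process pid busy-waits until turn == pid."""
    return turn == pid

def leave(pid):
    """Exit protocol: hand the turn to the other process."""
    global turn
    turn = 1 - pid

# Strict alternation: p0 may enter and p1 may not, no matter how
# often p1 wants the critical section.
assert can_enter(0) and not can_enter(1)
leave(0)  # p0 exits; now it is strictly p1's turn
assert can_enter(1) and not can_enter(0)
```

Because leave() always hands the turn to the other process, neither process can enter twice in a row, which is the deficiency named in part c).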
Consider the following CPU scheduling algorithms: First-Come, First-Served (FCFS), Shortest Job First (SJF), Shortest Remaining Time First (SRTF), and Round Robin (RR). (10 points)

a). Which of these are preemptive algorithms? (5 points)

b). Which of these have high overhead, and why is the overhead so high? (5 points)

a). SRTF and RR

b). SJF and SRTF, since they have to record CPU burst times and estimate the next CPU burst time.

What is the advantage of using Multilevel Queue scheduling instead of just using one of the algorithms, such as Round Robin? (5 points)

It provides flexibility for dividing the processes into different groups, each of which can use a different scheduling algorithm, with some method to give some queues a higher priority or more CPU time than the others.

One of the concerns with Multilevel Feedback Queue scheduling is choosing the method used to determine when to upgrade a process. What is meant by upgrading a process? (5 points)

With Multilevel Feedback Queue scheduling, processes can move between queues. Upgrading refers to moving a process into a higher-priority queue.

In symmetric multiprocessing, each processor is self-scheduling, with either a common ready queue or with individual ready queues. What complication does individual ready queues add, and how is that complication resolved? (5 points)

It adds the need to keep all the CPUs equally busy, which is accomplished by load balancing - push and pull migration.

Any solution to the critical-section problem must not only enforce mutual exclusion, but it must also avoid deadlock (also stated as: it must make progress). What does it mean to "avoid deadlock" in this context?

If no process is in the critical section, and some processes want to get into the critical section, one of them must get in - that selection cannot be postponed indefinitely.
Essentially, they cannot all be stuck waiting on something that will never occur.

Consider the critical section synchronization Algorithm 2a shown in class. The code for one process, p1, is shown at the right; the code for the other process, p2, is similar. This algorithm does not correctly enforce mutual exclusion. Explain how two processes can enter the critical section at the same time.

Both can check the other process's status in the while loop at essentially the same time, and each can see that the other is not in the critical section. Then they set their own status variables to true and go into the critical section, meaning they are both now in the critical section and mutual exclusion was not enforced.

Consider the test_and_set instruction, which can be used to protect a critical section. What does this instruction do if it accesses a lock variable which is unlocked (currently set to false)?

It returns the value false, indicating that it was unlocked. It also sets the lock variable to true, meaning it is now locked. All of this happens atomically, as a single operation.

Mutex Locks, also called "spinlocks", are a high-level solution to the critical section problem. What functionality does the Mutex Lock acquire( ) function provide?

It attempts to acquire the lock. If the lock is not available, it busy waits. When the lock becomes available, it acquires it, and the program can enter the critical section. It is particularly useful, and low in overhead, when the critical section is short.

Consider semaphores with busy waiting, defined in class by the code on the right, a semaphore S that is initialized to 1, and a critical region protected by that semaphore.
What happens when process A calls the P operation on that semaphore (answer in terms of the value of the semaphore S and the effect on process A or any other process)?

S is decremented from 1 to 0. A continues into the critical region. (No other processes are affected.)

While process A is in the critical region, what happens when process B calls the P operation on that semaphore (answer in terms of the value of the semaphore S and the effect on process B or any other process)?

S is decremented from 0 to -1. B busy waits until S is no longer negative. (A is already in the critical region, and there is no effect on A.)

What happens when process A leaves the critical region and calls the V operation on that semaphore (answer in terms of the value of the semaphore S and the effect on process A or any other process)?

S is incremented from -1 to 0. B gets out of the while loop and enters the critical section.

What changes if S is initialized to 2 instead of 1?

Two processes would be allowed into the critical section at the same time. This would not be appropriate for a critical section, but might be appropriate for code using a resource that allows two processes to use that resource simultaneously.

Consider the critical section synchronization Algorithm 1 shown in class. Does this algorithm enforce mutual exclusion? Explain.

Yes, only one process can enter the critical section at a time - and only when it is its turn.

Consider the critical section synchronization Algorithm 3 shown in class, which is also known as Peterson's Solution. Does this algorithm enforce mutual exclusion? Explain.

Yes, a process can only enter the critical section if the other process does not want to enter (i.e., if it has not set its "in_crit" variable to true).

What role does the "turn" variable play? Be specific.

If both processes express a desire to enter the critical section, the turn variable specifies which one gets to enter.

Consider the test_and_set instruction, defined on the right.
What does it mean to say this is an "atomic" instruction?

All of these operations are executed by a single assembly/machine-language instruction, and the operation cannot be interrupted until it finishes.

Mutex Locks, or "spinlocks", are typically implemented via instructions like test_and_set, which requires busy waiting. Is this bad? Explain.

Busy waiting is "bad" in the sense that it wastes CPU time. However, if the critical section is short, using Mutex Locks is OK, since the amount of time spent busy waiting would also be short. This would not be the case if the critical section is long.

Consider the version of semaphores that uses blocking instead of busy waiting, defined on the right. Is it possible that more than one process may be blocked waiting on a semaphore? Explain.

Yes. If a process is in the critical section, the next process to call P/wait will set the semaphore to -1 and will block. Further calls to P/wait will continue to decrement the semaphore, and those processes will also block.

Consider the critical section synchronization Algorithm 1 shown in class. The code for one process, p1, is shown at the right; the code for the other process, p2, is similar. This algorithm does enforce mutual exclusion and does avoid deadlock. What is the primary deficiency with this algorithm?

The main deficiency is that it forces the processes to strictly alternate turns, so if one process wants to enter the critical section more frequently than the other, it cannot. However, it also has two other deficiencies: it breaks if a process crashes while in the critical section, and it wastes time busy waiting (running around the igloo).

Consider the critical section synchronization Algorithm 2b shown in class. The code for one process, p1, is shown at the right; the code for the other process, p2, is similar. This algorithm does enforce mutual exclusion but it does not avoid deadlock.
Explain how two processes can deadlock.

Both can set their "in_crit" variables to true at essentially the same time, indicating they want to go into the critical section. Then when they check on the status of the other process, they will see the other process is in the critical section, so they will both wait in the while loop forever, meaning the system is now deadlocked.

Any solution to the critical-section problem must not only enforce mutual exclusion, but it must also avoid starvation (also known as supporting bounded waiting). What does it mean to "avoid starvation" in this context?

If a thread wants to get into the critical section, it must eventually get a chance. There must be a bound on the number of times that other processes can enter the critical section before its request is granted. The solution should not allow it to be denied access indefinitely.

The test_and_set instruction reads a memory value, sets it to 1, and returns its original value, all as one atomic operation. What do we mean by "atomic operation"?

It does all of this as a single unit, without interruption. That is, it cannot read the memory value, set it to 1, get interrupted by another process, and only later finish its execution and return the original value.

Write the pseudocode that defines the "P" or "wait" semaphore operation without busy waiting.

    s = s - 1
    if (s < 0)
        block the thread that called wait(s) on a queue associated with that semaphore
    otherwise
        let the thread that called wait(s) continue into the critical section

First-Come, First-Served Scheduling is a very simple CPU scheduling algorithm. Summarize how it works.

Choose the process at the head of the ready queue and run it non-preemptively until it terminates or blocks. No time slice is involved.

The Shortest Job First (SJF) and Shortest Remaining Time First (SRTF) algorithms share a lot of similarities. How do these two differ?

SJF is non-preemptive.
SRTF is preemptive when a process (new or previously blocked) enters the ready queue. Or: SJF chooses the shortest job; SRTF chooses the job with the shortest amount of remaining time.

These two algorithms have a big overhead. Explain.

They have to record CPU burst times and use that history to estimate the length of the next CPU burst using exponential averaging.

What types of processes are penalized by these two algorithms? Explain.

They emphasize short processes, so they penalize long processes. Long processes may eventually get some CPU time, but it is also possible they could starve.

Round-Robin CPU scheduling is particularly sensitive to the length of the time slice. What's the problem with making that time slice too short?

If it's overly short, many more context switches will be required, which is unnecessary extra overhead.